To run the app, we execute the main routine (in chapter7.py), which loads the pre-trained cascade classifier and the pre-trained multi-layer perceptron (MLP) and applies them to each frame of the live webcam stream.
This time, however, instead of collecting more training samples, we select the radio button labeled Test. This triggers an
EVT_RADIOBUTTON event, which is bound to
FaceLayout._on_testing; that handler disables all training-related buttons in the GUI and switches the app into testing mode. In this mode, the pre-trained MLP classifier is applied to every frame of the live stream to predict the current facial expression.
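The mode switch described above can be sketched as follows. This is a minimal, illustrative stand-in for the real handler: the actual app binds a wx.EVT_RADIOBUTTON event to FaceLayout._on_testing and runs the MLP on webcam frames, whereas here we model only the state change and per-frame dispatch with hypothetical names, so the flow is clear without requiring wxPython or a camera.

```python
class FaceLayoutSketch:
    """Minimal stand-in for the FaceLayout panel (illustrative only)."""

    def __init__(self):
        self.testing = False
        # hypothetical flag standing in for the training-related widgets
        # that must be disabled once testing mode is entered
        self.training_buttons_enabled = True

    def _on_testing(self, _event=None):
        # called when the "Test" radio button fires (EVT_RADIOBUTTON in wx)
        self.testing = True
        self.training_buttons_enabled = False

    def _process_frame(self, frame_features, mlp_predict):
        # in testing mode, apply the pre-trained MLP to the current frame;
        # mlp_predict is a placeholder for the real classifier's predict call
        if self.testing:
            return mlp_predict(frame_features)
        return None


layout = FaceLayoutSketch()
layout._on_testing()
# stand-in predictor that always reports "happy" (purely illustrative)
label = layout._process_frame([0.1, 0.2], lambda feats: "happy")
print(label)
print(layout.training_buttons_enabled)
```

The point of the sketch is the single flag flip: once `_on_testing` runs, every subsequent frame is routed to the classifier instead of the sample-collection path.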
As promised earlier, we now return to