Now that our algorithm works for single frames, how can we make sure that the image found in one frame will also be found in the very next frame?
In FeatureMatching.__init__, we created some bookkeeping variables that we said we would use for feature tracking. The main idea is to enforce some coherence while going from one frame to the next. Since we are capturing roughly 10 frames per second, it is reasonable to assume that the changes from one frame to the next will not be too radical. Therefore, we can expect the result we get in any given frame to be similar to the result we got in the previous frame. Otherwise, we discard the result and move on to the next frame.
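This coherence check can be sketched as a small helper that remembers the last accepted detection and rejects any new one that has moved implausibly far. The class and parameter names below (`TrackingState`, `max_jump_px`) are hypothetical, not taken from the book's code; this is a minimal illustration of the idea, assuming the detection is summarized by its center point:

```python
import numpy as np


class TrackingState:
    """Remembers the last accepted detection between frames.

    Hypothetical sketch: enforces frame-to-frame coherence by
    discarding any result that jumps farther than max_jump_px.
    """

    def __init__(self, max_jump_px=100.0):
        self.last_center = None          # center of the last accepted match
        self.max_jump_px = max_jump_px   # largest plausible per-frame movement

    def accept(self, center):
        """Return True if the new detection is coherent with the last one."""
        center = np.asarray(center, dtype=float)
        if self.last_center is None:
            # First frame: nothing to compare against, accept it
            self.last_center = center
            return True
        jump = np.linalg.norm(center - self.last_center)
        if jump <= self.max_jump_px:
            self.last_center = center
            return True
        # Too radical a change: discard this frame's result and move on
        return False
```

For example, a detection drifting by a few pixels between frames would be accepted, while one that suddenly teleports across the image would be discarded:

```python
tracker = TrackingState()
tracker.accept((320, 240))   # first frame, accepted
tracker.accept((330, 245))   # small drift, accepted
tracker.accept((900, 10))    # implausible jump, discarded
```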
However, we have to be careful not to get stuck with ...