Once we have extracted features and their descriptors from two (or more) images, we can start asking whether some of these features show up in both (or all) images. For example, if we have descriptors for both our object of interest (self.desc_train) and the current video frame (desc_query), we can try to find regions of the current frame that look like our object of interest. This is done by the following method, which makes use of the Fast Library for Approximate Nearest Neighbors (FLANN):
good_matches = self._match_features(desc_query)
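
The body of _match_features is not shown here, but a minimal sketch of how such a method could be built on OpenCV's FlannBasedMatcher follows. The index and search parameters, the 0.7 ratio threshold, and the standalone match_features function are illustrative assumptions, not the book's exact implementation:

import cv2
import numpy as np

# FLANN's kd-tree index works on float descriptors (for example, SIFT or SURF).
FLANN_INDEX_KDTREE = 1  # assumed index type, following common OpenCV usage
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

def match_features(desc_train, desc_query, ratio=0.7):
    # For every query descriptor, retrieve its two nearest neighbors
    # among the training descriptors.
    matches = flann.knnMatch(np.float32(desc_query), np.float32(desc_train), k=2)
    # Ratio test: keep a match only if the best neighbor is clearly
    # closer than the second-best one.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return good

The returned list of cv2.DMatch objects plays the same role as good_matches above: each entry links a descriptor of the current frame to its best counterpart in the training image.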
The process of finding frame-to-frame correspondences can be formulated as follows: for every element of one set of descriptors, search for the nearest neighbor in the other set.
The first set of descriptors ...