Boosting
Decision trees are extremely useful, but they are often not the best-performing classifiers. In this and the next section we present two techniques, boosting and random trees, that use trees in their inner loop and so inherit many of the useful properties of trees (e.g., being able to deal with mixed and unnormalized data types and missing features). These techniques typically perform at or near the state of the art; thus they are often the best "out of the box" supervised classification techniques[252] available in the library.
Within the field of supervised learning there is a meta-learning algorithm (first described by Michael Kearns in 1988) called statistical boosting. Kearns asked whether it is possible to learn a strong classifier out of many weak classifiers.[253] The first boosting algorithm, known as AdaBoost, was formulated shortly thereafter by Freund and Schapire.[254] OpenCV ships with four types of boosting:
CvBoost::DISCRETE (discrete AdaBoost)
CvBoost::REAL (real AdaBoost)
CvBoost::LOGIT (LogitBoost)
CvBoost::GENTLE (gentle AdaBoost)
Each of these is a variant of the original AdaBoost, and we often find that the "real" and "gentle" forms of AdaBoost work best. Real AdaBoost is a technique that utilizes confidence-rated ...
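As a brief illustration, here is a minimal sketch of training a real AdaBoost classifier through the CvBoost interface (this assumes the OpenCV 2.x C++ ML API; the toy data and parameter values are purely illustrative, not a recipe):

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>

    int main() {
        // Hypothetical toy two-class dataset: 4 samples, 2 numerical features each.
        float trainArr[] = { 1.f, 1.f,
                             1.f, 2.f,
                             5.f, 5.f,
                             6.f, 5.f };
        int respArr[] = { 0, 0, 1, 1 };
        cv::Mat trainData(4, 2, CV_32F, trainArr);
        cv::Mat responses(4, 1, CV_32S, respArr);

        // Mark both features as ordered and the response as categorical;
        // CvBoost expects a two-class categorical response.
        cv::Mat varType(3, 1, CV_8U, cv::Scalar(CV_VAR_ORDERED));
        varType.at<uchar>(2) = CV_VAR_CATEGORICAL;

        // Parameters: boost type, number of weak classifiers,
        // weight-trim rate, max tree depth, use surrogates, priors.
        CvBoostParams params(CvBoost::REAL, 100, 0.95, 1, false, 0);

        CvBoost boost;
        boost.train(trainData, CV_ROW_SAMPLE, responses,
                    cv::Mat(), cv::Mat(), varType, cv::Mat(), params);

        // Classify a new sample; predict() returns the class label.
        float sampleArr[] = { 5.5f, 4.5f };
        cv::Mat sample(1, 2, CV_32F, sampleArr);
        float label = boost.predict(sample);
        return (int)label;
    }

Note that the maximum tree depth is set to 1 here, so each weak classifier is a decision stump; shallow trees are a common choice of weak learner for boosting.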