Chapter 44. In Depth: Decision Trees and Random Forests
Previously we have looked in depth at a simple generative classifier (naive Bayes; see Chapter 41) and a powerful discriminative classifier (support vector machines; see Chapter 43). Here we’ll take a look at another powerful algorithm: a nonparametric algorithm called random forests. Random forests are an example of an ensemble method, meaning one that relies on aggregating the results of a set of simpler estimators. The somewhat surprising result with such ensemble methods is that the sum can be greater than the parts: that is, the predictive accuracy of a majority vote among a number of estimators can end up being better than that of any of the individual estimators doing the voting! We will see examples of this in the following sections.
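To make that idea concrete before we get there, here is a minimal sketch (not the book's own code) that aggregates the predictions of many shallow decision trees fit on bootstrap samples, using scikit-learn's BaggingClassifier; the synthetic dataset, tree depth, and random seeds are illustrative assumptions, and on most runs the aggregated model edges out any single tree:

# Illustrative sketch only: the dataset, depth, and seeds are assumptions,
# not taken from the book's examples.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score

X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=42)

# A single shallow tree versus an ensemble of such trees on bootstrap samples
single_tree = DecisionTreeClassifier(max_depth=3, random_state=42)
ensemble = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=100, random_state=42)

single_tree.fit(Xtrain, ytrain)
ensemble.fit(Xtrain, ytrain)

print("single tree accuracy:", accuracy_score(ytest, single_tree.predict(Xtest)))
print("ensemble accuracy:   ", accuracy_score(ytest, ensemble.predict(Xtest)))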
We begin with the standard imports:
In [1]: %matplotlib inline
        import numpy as np
        import matplotlib.pyplot as plt
        plt.style.use('seaborn-whitegrid')
Motivating Random Forests: Decision Trees
Random forests are an example of an ensemble learner built on decision trees. For this reason, we’ll start by discussing decision trees themselves.
Decision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero in on the classification. For example, if you wanted to build a decision tree to classify animals you come across while on a hike, you might construct the one shown in Figure 44-1.
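The logic of such a tree can be sketched as a chain of binary questions; the questions and animal labels below are hypothetical stand-ins to illustrate the structure, not a reproduction of Figure 44-1:

# Hypothetical hand-built "decision tree" for animals seen on a hike.
# The questions and labels here are illustrative only.
def classify_animal(bigger_than_a_meter, has_horns, number_of_legs):
    if bigger_than_a_meter:        # first question
        if has_horns:              # follow-up question on this branch
            return "elk"
        return "bear"
    if number_of_legs >= 6:        # follow-up question on the other branch
        return "insect"
    return "squirrel"

print(classify_animal(True, True, 4))    # -> 'elk'
print(classify_animal(False, False, 6))  # -> 'insect'

A fitted decision tree automates the choice of such questions, learning them from labeled data rather than having them written by hand.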