Chapter 5. Decision Trees
Decision trees are versatile machine learning algorithms that can perform both classification and regression tasks, and even multioutput tasks. They are powerful algorithms, capable of fitting complex datasets. For example, in Chapter 2 you trained a DecisionTreeRegressor model on the California housing dataset, fitting it perfectly (actually, overfitting it).
Decision trees are also the fundamental components of random forests (see Chapter 6), which are among the most powerful machine learning algorithms available today.
In this chapter we will start by discussing how to train, visualize, and make predictions with decision trees. Then we will go through the CART training algorithm used by Scikit-Learn, and we will explore how to regularize trees and use them for regression tasks. Finally, we will discuss some of the limitations of decision trees.
Training and Visualizing a Decision Tree
To understand decision trees, let’s build one and take a look at how it makes predictions. The following code trains a DecisionTreeClassifier on the iris dataset (see Chapter 4):
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris(as_frame=True)
X_iris = iris.data[["petal length (cm)", "petal width (cm)"]].values
y_iris = iris.target

tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)
tree_clf.fit(X_iris, y_iris)
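Once the tree is trained, making predictions is straightforward. As a quick check (the example flower below, with 5 cm long and 1.5 cm wide petals, is a hypothetical input, not taken from the dataset), you can call the classifier's standard predict() and predict_proba() methods:

# Hypothetical iris with petal length 5.0 cm and petal width 1.5 cm
tree_clf.predict([[5.0, 1.5]])        # predicted class index
tree_clf.predict_proba([[5.0, 1.5]])  # estimated probability for each class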
You can visualize the trained decision tree by first using the export_graphviz() function to output a graph definition file, then converting that file to an image (e.g., a PNG) with the Graphviz dot command-line tool or the graphviz Python package.
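Here is a minimal sketch of that workflow; the output filename iris_tree.dot is just an illustrative choice:

from sklearn.tree import export_graphviz

export_graphviz(
    tree_clf,
    out_file="iris_tree.dot",       # graph definition file (name is arbitrary)
    feature_names=["petal length (cm)", "petal width (cm)"],
    class_names=iris.target_names,  # "setosa", "versicolor", "virginica"
    rounded=True,                   # draw nodes with rounded corners
    filled=True                     # color each node by its majority class
)

Assuming Graphviz is installed, you can then convert the .dot file to an image from the command line:

$ dot -Tpng iris_tree.dot -o iris_tree.png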