Understanding the shortcomings of decision trees

Decision trees often fall victim to overfitting the dataset; this effect is best demonstrated through a simple example.

For this, we will return to the make_moons function from scikit-learn's datasets module, which we previously used in Chapter 8, Discovering Hidden Structures with Unsupervised Learning, to organize data into two interleaving half circles. Here, we choose to generate 100 data samples belonging to two half circles, in combination with some Gaussian noise with a standard deviation of 0.25:

In [1]: from sklearn.datasets import make_moons
...     X, y = make_moons(n_samples=100, noise=0.25,
...                       random_state=100)

We can visualize this data using matplotlib's scatter function.
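A minimal sketch of that visualization step, assuming the `X` and `y` arrays generated by `make_moons` above (the styling choices here are illustrative, not taken from the text):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons

# Regenerate the two noisy half circles from the previous step
X, y = make_moons(n_samples=100, noise=0.25, random_state=100)

# Color each point by its class label to reveal the two interleaving moons
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.savefig('moons.png')
```

Each row of `X` is a 2D point, and `y` holds the corresponding class label (0 or 1), which is why the color argument `c=y` separates the two half circles.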

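To make the overfitting effect concrete, here is a minimal sketch comparing a depth-limited tree with an unconstrained one on the same data. This is an illustrative example, not the book's code; the `max_depth` value and `random_state` seeds are assumptions:

```python
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=100, noise=0.25, random_state=100)

# A depth-limited tree draws only a few coarse decision boundaries
shallow = DecisionTreeClassifier(max_depth=2, random_state=42).fit(X, y)

# An unconstrained tree keeps splitting until every training point is
# classified correctly, memorizing the Gaussian noise in the process
deep = DecisionTreeClassifier(random_state=42).fit(X, y)

shallow_score = shallow.score(X, y)
deep_score = deep.score(X, y)
print(shallow_score, deep_score)
```

The unconstrained tree reaches perfect training accuracy, but that perfection is a warning sign: its decision boundary wraps around individual noisy points and will not transfer to new data.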