The effect of overfitting the training data, to which decision trees often fall victim, is best demonstrated with a simple example.
For this, we will return to the make_moons function from scikit-learn's datasets module, which we previously used in Chapter 8, Discovering Hidden Structures with Unsupervised Learning, to organize data into two interleaving half circles. Here, we generate 100 data samples belonging to the two half circles and add some Gaussian noise with a standard deviation of 0.25:
In [1]: from sklearn.datasets import make_moons
...     X, y = make_moons(n_samples=100, noise=0.25,
...                       random_state=100)
We can visualize this data using matplotlib and the scatter function:
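A minimal sketch of such a plot, assuming an IPython session with inline plotting enabled, might look like the following; the marker size and axis labels are choices of our own:

In [2]: import matplotlib.pyplot as plt
...     %matplotlib inline
...     # color each data point by its class label (0 or 1)
...     plt.scatter(X[:, 0], X[:, 1], c=y, s=100)
...     plt.xlabel('feature 1')
...     plt.ylabel('feature 2')

Because of the added noise, a few points from each half circle stray into the territory of the other class, and it is exactly these stray points that tempt a deep decision tree into carving out overly specific decision regions.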