Learning curves
Another way to identify bias and variance is to generate learning curves. As with validation curves, we generate a number of in-sample and out-of-sample performance statistics with cross-validation. Instead of experimenting with different hyperparameter values, however, we utilize different amounts of training data. Again, by examining the means and standard deviations of the in-sample and out-of-sample performance, we can get an idea of the amount of bias and variance inherent in our models.
Scikit-learn implements learning curves in the sklearn.model_selection module as learning_curve. Once again, we will use the KNeighborsClassifier example from Chapter 1, A Machine Learning Refresher. First, we import the required libraries and load ...
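As a minimal sketch of that workflow, the following snippet calls learning_curve with a KNeighborsClassifier and computes the mean and standard deviation of the in-sample and out-of-sample scores at each training-set size. The breast cancer dataset is an assumed stand-in here, since the original loading code is not shown:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer  # assumed dataset for illustration
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import learning_curve

X, y = load_breast_cancer(return_X_y=True)

# Evaluate the model at 5 increasing training-set sizes,
# using 5-fold cross-validation at each size
train_sizes, train_scores, test_scores = learning_curve(
    KNeighborsClassifier(), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

# Mean and standard deviation across folds, per training-set size
train_mean, train_std = train_scores.mean(axis=1), train_scores.std(axis=1)
test_mean, test_std = test_scores.mean(axis=1), test_scores.std(axis=1)
```

Plotting train_mean and test_mean against train_sizes (with the standard deviations as shaded bands) then reveals whether the gap between curves shrinks as more data is added, hinting at variance, or both curves plateau at a low score, hinting at bias.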