So, how can we tell whether a model suffers from overfitting or, at the other extreme, underfitting? A learning curve is commonly used to evaluate the bias and variance of a model. A learning curve is a graph that compares the cross-validated training and testing scores over varying numbers of training samples.
For a model that fits well on the training samples, the performance on the training samples should be above the desired level. Ideally, as the number of training samples increases, the model's performance on the testing samples improves; eventually, the performance on the testing samples becomes close to that on the training samples.
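The idea above can be sketched in code. The following is a minimal example, assuming scikit-learn is available; the choice of the digits dataset, logistic regression as the model, and five training-set sizes are illustrative assumptions, not requirements of the technique:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Illustrative dataset and model choices; any estimator and dataset work here
X, y = load_digits(return_X_y=True)

# Compute cross-validated training and testing scores at several
# training-set sizes (10% up to 100% of the available training data)
train_sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5,
    scoring='accuracy',
)

# Average the scores over the cross-validation folds
train_mean = train_scores.mean(axis=1)
test_mean = test_scores.mean(axis=1)

for n, tr, te in zip(train_sizes, train_mean, test_mean):
    print(f"{n:5d} samples: train accuracy={tr:.3f}, test accuracy={te:.3f}")
```

Plotting `train_mean` and `test_mean` against `train_sizes` (for example with matplotlib) gives the learning curve itself; the gap between the two curves, and where the testing curve converges, is what diagnoses overfitting versus underfitting.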
When the performance on the testing samples converges at a value far from the ...