July 2017
Beginner to intermediate
378 pages
10h 26m
English
A core concept in ML is the bias–variance tradeoff. It falls into the no-free-lunch category: reducing one often increases the other. The concept is tightly linked to one of the big dangers of ML: overfitting.
Overfitting occurs when a model contorts itself to fit the training data just right but, in doing so, does a terrible job of generalizing to new examples. The resulting error on the training set will be low, while the error on the test set will be high.
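To make the failure mode concrete, here is a minimal sketch (not from the book) that fits polynomials of increasing degree to a small noisy sample and compares training error with error on held-out data. The dataset, noise level, and degrees are all illustrative assumptions.

```python
# Minimal overfitting sketch: fit polynomials of increasing degree to a small
# noisy sample of a sine curve, then compare training error with error on
# held-out data. All values here are illustrative, not from the book.
import numpy as np

rng = np.random.default_rng(0)

def noisy_sine(n):
    # Sample n points from sin(2*pi*x) on [0, 1] with Gaussian noise.
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)
    return x, y

x_train, y_train = noisy_sine(12)    # small training set, easy to overfit
x_test,  y_test  = noisy_sine(200)   # held-out data standing in for new examples

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse  = np.mean((np.polyval(coeffs, x_test)  - y_test)  ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# Typically the training error keeps shrinking as the degree grows, while the
# held-out error drops at first and then climbs once the model starts fitting noise.
```

The degree-1 fit is too rigid (high bias), the degree-9 fit chases the noise (high variance), and a middle degree usually does best on the held-out data, which is the tradeoff in miniature.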
If you are not aware of this danger, you can easily fool yourself into thinking you have built a highly accurate model, only to be embarrassed when it fails miserably out in the real world.