Summary
In this chapter, we presented the concepts of bias and variance, as well as the trade-off between them. These concepts are essential for understanding how and why a model may under-perform, either in-sample or out-of-sample. We showed how to identify bias and variance in models, and how to measure and plot them using scikit-learn and matplotlib. We then introduced the concept of and motivation for ensemble learning, along with the basic categories of ensemble learning methods. Finally, we discussed the difficulties and drawbacks of implementing ensemble learning methods. Some key points to remember are the following.
High-bias models usually have difficulty performing well in-sample. This is also called underfitting. It is usually due to the model being too simple, or too inflexible, to capture the underlying structure of the data.
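The diagnosis described above can be sketched in code. The following is a minimal illustration (not the chapter's own listing) that uses scikit-learn to compare training and test error for decision trees of increasing depth on synthetic data: a shallow tree shows high error on both sets (high bias, underfitting), while a very deep tree shows a large gap between low training error and higher test error (high variance, overfitting).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic regression data: a sine wave with Gaussian noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit trees of increasing complexity and record in-/out-of-sample error.
results = {}
for depth in (1, 4, 20):
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    model.fit(X_tr, y_tr)
    train_mse = mean_squared_error(y_tr, model.predict(X_tr))
    test_mse = mean_squared_error(y_te, model.predict(X_te))
    results[depth] = (train_mse, test_mse)
    print(f"depth={depth:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")

# Typical diagnosis:
#   depth=1  -> high train AND test error: high bias (underfitting)
#   depth=20 -> near-zero train error, notably higher test error:
#               high variance (overfitting)
```

Plotting `train_mse` and `test_mse` against `max_depth` with matplotlib yields the kind of validation curve discussed in the chapter, with the sweet spot of the trade-off lying between the two extremes.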