16
Advanced Boosting Algorithms
In this chapter, we discuss some important algorithms that combine multiple estimators to improve the overall performance of an ensemble, or committee. These techniques work either by introducing a moderate level of randomness into every estimator belonging to a predefined set, or by building a sequence of estimators in which each new model is trained to correct the errors of its predecessors. Such techniques allow us to reduce both bias and variance (thereby increasing validation accuracy) when employing models that have limited capacity or are prone to overfitting the training set.
In particular, the topics covered in the chapter are as follows:
- Gradient boosting
- Ensembles of voting classifiers, ...
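To give a flavor of the sequential approach described above, the following is a minimal sketch of gradient boosting using scikit-learn; the synthetic dataset and all parameter values are illustrative assumptions, not an example from the chapter:

```python
# Minimal gradient boosting sketch (illustrative; synthetic data and
# hyperparameters are assumptions, not the book's own example).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each new tree is fit to the negative gradient (residual errors) of the
# current ensemble, so the sequence progressively reduces the bias of the
# weak base learners.
clf = GradientBoostingClassifier(
    n_estimators=100, learning_rate=0.1, max_depth=3, random_state=0
)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

The `learning_rate` parameter scales each new estimator's contribution, trading convergence speed for robustness to overfitting.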