6 Sequential ensembles: Newton boosting
This chapter covers
- Using Newton’s descent to optimize loss functions for training models
- Implementing and understanding how Newton boosting works
- Learning with regularized loss functions
- Introducing XGBoost as a powerful framework for Newton boosting
- Avoiding overfitting with XGBoost
In the previous two chapters, we saw two approaches to constructing sequential ensembles. In chapter 4, we introduced adaptive boosting (AdaBoost), which uses example weights to identify the most misclassified examples. In chapter 5, we introduced gradient boosting, which uses gradients of the loss function (residuals) to identify the most misclassified examples. The fundamental intuition behind ...
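To get a feel for what Newton's descent adds beyond the gradient information used in chapter 5, the following minimal sketch compares a plain gradient-descent update with a Newton update on a simple one-dimensional convex loss. The toy loss function and learning rate are hypothetical choices made only for illustration; the key point, which the chapter develops, is that a Newton step rescales the gradient by the inverse of the second derivative (the Hessian).

```python
import numpy as np

def loss(w):
    # Hypothetical toy convex loss with a unique minimizer at w = ln(2)
    return np.exp(w) - 2 * w

def gradient(w):
    return np.exp(w) - 2      # first derivative of the loss

def hessian(w):
    return np.exp(w)          # second derivative of the loss

w_gd, w_newton = 0.0, 0.0
lr = 0.1                      # step length for plain gradient descent (illustrative choice)

for t in range(10):
    # First-order update: step along the negative gradient
    w_gd -= lr * gradient(w_gd)
    # Newton update: rescale the gradient by the inverse Hessian
    w_newton -= gradient(w_newton) / hessian(w_newton)

print(f"gradient descent after 10 steps: w = {w_gd:.4f}, loss = {loss(w_gd):.4f}")
print(f"Newton's descent after 10 steps: w = {w_newton:.4f}, loss = {loss(w_newton):.4f}")
print(f"true minimizer:                  w = {np.log(2):.4f}")
```

On this toy problem, the Newton iterate reaches the minimizer in a handful of steps, while gradient descent with a fixed learning rate closes the gap much more slowly; this second-order behavior is what Newton boosting exploits when fitting each weak learner.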