Gradient boosting
Like AdaBoost, gradient boosting iteratively corrects its estimators on the basis of the values they return. In gradient boosting, however, each adjustment is based on the residual error left by the previous estimators, rather than on the weights assigned to the training samples (as in AdaBoost).
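To make the residual-fitting idea concrete, here is a minimal sketch of the boosting loop on a toy regression problem. The data, the number of stages, and the tree depth are illustrative assumptions, not part of the original text; the point is only that each new tree is fitted to the residuals left by the ensemble so far.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy regression data (illustrative assumption, not from the text)
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

learning_rate = 0.1
prediction = np.full_like(y, y.mean())  # start from a constant estimator
trees = []
for _ in range(50):
    residuals = y - prediction          # error left by previous estimators
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, residuals)              # new tree corrects that error
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)
```

After the loop, the ensemble's training error is much smaller than that of the initial constant predictor, because every stage removed part of the remaining residual.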
Next, we will show an example that uses the GradientBoostingClassifier class of the scikit-learn library.
The default estimators are decision trees, whose characteristics are specified via the class parameters (such as max_depth, which limits the growth of each decision tree).
Also note the learning_rate parameter, which must be considered together with the warm_start parameter.
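A usage sketch along these lines might look as follows; the dataset, the split, and the specific parameter values are assumptions chosen for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic dataset (illustrative assumption)
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(
    n_estimators=100,   # number of boosting stages
    learning_rate=0.1,  # shrinks the contribution of each tree
    max_depth=3,        # limits the growth of each decision tree
    warm_start=False,   # True reuses the previous fit when adding estimators
    random_state=0,
)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

Lowering learning_rate generally requires more estimators to reach the same accuracy; setting warm_start=True lets you grow n_estimators incrementally without refitting the earlier trees.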