Chapter 6: XGBoost Hyperparameters
XGBoost has many hyperparameters, and they fall into three groups. Because XGBoost builds trees as base learners, it inherits all decision tree hyperparameters as a starting point. Because XGBoost is an enhanced version of gradient boosting, it also includes the gradient boosting hyperparameters. Finally, hyperparameters unique to XGBoost are designed to improve accuracy and speed. Trying to tackle all of these hyperparameters at once, however, can be dizzying.
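To make the three groups concrete, here is a minimal sketch using XGBoost's scikit-learn wrapper, XGBClassifier. The values shown are the library's documented defaults; the grouping comments are our own way of organizing the parameters, not labels from the XGBoost API.

```python
from xgboost import XGBClassifier

# A model spelled out with defaults, grouped by where each
# hyperparameter comes from.
model = XGBClassifier(
    # Inherited from the decision tree base learner
    max_depth=6,            # maximum depth of each tree
    min_child_weight=1,     # minimum sum of instance weight needed in a child
    # Inherited from gradient boosting
    n_estimators=100,       # number of boosted trees (boosting rounds)
    learning_rate=0.3,      # shrinkage applied to each tree's contribution
    subsample=1.0,          # fraction of rows sampled per tree
    # Unique to XGBoost
    gamma=0,                # minimum loss reduction required to make a split
    reg_lambda=1,           # L2 regularization on leaf weights
    colsample_bytree=1.0,   # fraction of columns sampled per tree
)
```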
In Chapter 2, Decision Trees in Depth, we reviewed and applied base learner hyperparameters such as max_depth, while in Chapter 4, From Gradient Boosting to XGBoost, we applied important XGBoost hyperparameters, including n_estimators and learning_rate. We will revisit these hyperparameters in this chapter in the ...
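As a preview of revisiting these hyperparameters together, the following sketch tunes max_depth, n_estimators, and learning_rate with scikit-learn's GridSearchCV. The synthetic dataset from make_classification and the candidate values are stand-in assumptions for illustration, not the book's datasets or recommended grids.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Stand-in dataset (assumption): replace with your own data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=2)

# Candidate values for the hyperparameters named above (illustrative only).
params = {
    'max_depth': [2, 3, 6],            # base learner (decision tree) depth
    'n_estimators': [50, 100, 200],    # number of boosting rounds
    'learning_rate': [0.05, 0.1, 0.3]  # shrinkage per boosting round
}

# Exhaustive search over the grid with 5-fold cross-validation.
grid = GridSearchCV(XGBClassifier(), params, cv=5, scoring='accuracy')
grid.fit(X, y)
print('Best params:', grid.best_params_)
print('Best score:', grid.best_score_)
```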