Predicting complex skill learning with boosting

We will revisit our Skillcraft data set in this section—this time in the context of another boosting technique known as stochastic gradient boosting. The defining idea of gradient boosting is that in every iteration, we compute the gradient of the loss function with respect to the predictions of the model built so far; for squared-error loss, this gradient is simply the vector of residuals. The "stochastic" variant additionally fits each new model on a random subsample of the training data, which reduces overfitting and speeds up training.
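Since the negative gradient of squared-error loss is just the residual vector, one boosting iteration amounts to fitting the next tree to the current residuals and taking a small step in that direction. The following is a minimal sketch of this idea using rpart on a toy data set (the data and variable names here are illustrative, not from the Skillcraft data):

```r
library(rpart)

set.seed(1)
# Toy regression data (illustrative only)
df <- data.frame(x = runif(200, 0, 10))
df$y <- sin(df$x) + rnorm(200, sd = 0.3)

n_trees   <- 50
shrinkage <- 0.1
pred  <- rep(mean(df$y), nrow(df))   # start from the mean prediction
trees <- vector("list", n_trees)

for (m in seq_len(n_trees)) {
  # Negative gradient of squared-error loss = current residuals
  df$resid <- df$y - pred
  trees[[m]] <- rpart(resid ~ x, data = df)
  # Shrunken update in the direction of the fitted gradient
  pred <- pred + shrinkage * predict(trees[[m]], df)
}

mean((df$y - pred)^2)  # training MSE shrinks as trees are added
```

Each tree on its own is a weak model; it is the accumulation of many small, shrunken steps along the gradient that produces an accurate ensemble.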

This gradient then guides the construction of the model that is added in the next iteration. Stochastic gradient boosting is most commonly used with decision trees as the base models, and a good implementation in R can be found in the gbm package, which provides the gbm() function. For regression problems, we need to specify the distribution parameter to ...
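As a rough illustration of how a gbm() call for a regression problem might look, the sketch below assumes a data frame named skillcraft with a numeric response LeagueIndex, as in our earlier work with this data set; the variable names and tuning values are assumptions to adapt to your own setup:

```r
library(gbm)

set.seed(5)
# Hypothetical setup: 'skillcraft' and 'LeagueIndex' are assumed names
boostedtree <- gbm(LeagueIndex ~ ., data = skillcraft,
                   distribution = "gaussian",  # squared-error loss for regression
                   n.trees = 10000,            # number of boosting iterations
                   interaction.depth = 3,      # maximum depth of each tree
                   shrinkage = 0.01,           # learning rate
                   bag.fraction = 0.5)         # random subsample per iteration
                                               # (the "stochastic" part)

# Choose the number of trees that minimizes the out-of-bag error estimate
best_iter <- gbm.perf(boostedtree, method = "OOB", plot.it = FALSE)
preds <- predict(boostedtree, skillcraft, n.trees = best_iter)
```

Note the trade-off between shrinkage and n.trees: a smaller learning rate generally needs more boosting iterations, but tends to generalize better.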
