Bagging and Random Forests

Chapter 3, Bagging, and Chapter 4, Random Forests, demonstrated how to improve the stability and accuracy of the basic decision tree. In this section, we will again use decision trees as the base learners and create an ensemble of trees in the same way that we did in Chapter 3, Bagging, and Chapter 4, Random Forests.

The split criterion is the primary difference between the classification and regression variants of the bagging and random forest algorithms for trees. Thus, unsurprisingly, we can continue to use the same functions and packages for the regression problem that we used for the classification problem. We will first use the bagging function from the ipred package to set up the bagging algorithm for the housing ...
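
To make the workflow concrete, here is a minimal sketch of bagged regression trees with ipred::bagging, together with a randomForest call in the same spirit to show that the regression interface mirrors the classification one. The Boston housing data from the MASS package is assumed here as a stand-in for the housing dataset discussed in the text, and the nbagg, ntree, and seed settings are illustrative choices, not the book's:

```r
# A minimal sketch, not the book's exact code: bagged regression trees
# via ipred::bagging, using the Boston housing data from MASS as a
# stand-in for the housing dataset discussed in the text.
library(ipred)         # provides bagging()
library(randomForest)  # provides randomForest()
library(MASS)          # provides the Boston housing data

data(Boston)
set.seed(12345)        # illustrative seed for reproducibility

# Bagging: 500 bootstrap trees; coob = TRUE requests the
# out-of-bag estimate of the root mean squared error.
housing_bag <- bagging(medv ~ ., data = Boston, nbagg = 500, coob = TRUE)
print(housing_bag)

# Random forest regression with the same formula interface; mtry
# defaults to p/3 for regression, so only ntree is set here.
housing_rf <- randomForest(medv ~ ., data = Boston, ntree = 500)
print(housing_rf)

# Predictions from either ensemble use the usual predict() method.
head(predict(housing_bag, newdata = Boston))
head(predict(housing_rf, newdata = Boston))
```

Note how only the formula and response change relative to the classification setting; the fitting and prediction calls are otherwise identical.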
