Implementing decision trees

When we looked at random forests in the Testing a random forest model section of Chapter 5, Predicting the Failures of Banks - Multivariate Analysis, decision trees were briefly introduced. A decision tree recursively splits the training sample into two or more homogeneous subsets based on the most significant independent variables. At each node, the algorithm searches for the variable that best separates the data into the different categories; information gain and the Gini index are the most common criteria for choosing this variable. The data is then split recursively, growing the leaf nodes of the tree until a stopping criterion is reached.
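To make the two split criteria concrete, here is a minimal sketch in R of how the Gini index and entropy (the basis of information gain) can be computed for a vector of class labels; the function names `gini_impurity` and `entropy` are illustrative, not part of any package:

```r
# Gini impurity: 1 minus the sum of squared class proportions.
# 0 means a pure node; higher values mean a more mixed node.
gini_impurity <- function(labels) {
  p <- table(labels) / length(labels)
  1 - sum(p^2)
}

# Entropy, used when splitting by information gain:
# the gain of a split is the parent's entropy minus the
# weighted entropy of the child nodes.
entropy <- function(labels) {
  p <- table(labels) / length(labels)
  -sum(p * log2(p))
}

gini_impurity(c("fail", "fail", "ok", "ok"))  # 0.5, an evenly mixed node
```

A split is chosen to make the resulting child nodes as pure as possible under whichever criterion is used.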

Let's see how a decision tree can be implemented in R and how this algorithm is able to predict ...
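As a brief sketch of what such an implementation can look like, the widely used `rpart` package fits classification trees in R; the example below uses the built-in `iris` dataset rather than the banking data discussed in the book:

```r
library(rpart)

# Fit a classification tree; parms = list(split = "gini")
# selects the Gini index as the split criterion
# (use split = "information" for information gain).
tree <- rpart(Species ~ ., data = iris, method = "class",
              parms = list(split = "gini"))

# Inspect the chosen splits and compute training accuracy
print(tree)
pred <- predict(tree, iris, type = "class")
mean(pred == iris$Species)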
