Using a random forest model
Decision trees, introduced in Chapter 13, Training a Machine Learning Model, and which we have been using so far, are fast and easy to interpret. Their weak point, however, is overfitting: many features might seem to be great predictors on the training dataset but turn out to mislead the model on external data. In other words, they don't represent the general population. The problem is that decision trees, as an algorithm, don't have any internal mechanism to detect and ignore such features.
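A minimal sketch of this failure mode, assuming scikit-learn is available (the synthetic dataset and its parameters are illustrative, not from the chapter): an unpruned tree fits the training set perfectly, including its label noise, and its accuracy drops on held-out data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y adds label noise, so a model that fits the training set
# perfectly cannot generalize perfectly to unseen data
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
acc_train = tree.score(X_train, y_train)  # an unpruned tree memorizes the data
acc_test = tree.score(X_test, y_test)     # accuracy drops on held-out data
print(f"train={acc_train:.2f} test={acc_test:.2f}")
```

The gap between the two scores is the overfitting the text describes: the tree keeps splitting until it has explained every training example, noise included.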
A suite of more sophisticated models was developed on top of decision trees to fight overfitting. These models are usually called tree ensembles, as all of them train multiple decision trees and aggregate their predictions.