7

Combining Different Models for Ensemble Learning

In the previous chapter, we focused on the best practices for tuning and evaluating different models for classification. In this chapter, we will build upon those techniques and explore different methods for constructing a set of classifiers that can often achieve better predictive performance than any of its individual members. We will learn how to do the following (a brief code preview follows the list):

  • Make predictions based on majority voting
  • Use bagging to reduce overfitting by drawing random bootstrap samples of the training dataset with replacement
  • Apply boosting to build powerful models from weak learners that learn from their mistakes
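As a quick preview, here is a minimal sketch of all three techniques using scikit-learn's built-in ensemble estimators. The choice of the Iris dataset and the hyperparameter values are illustrative assumptions, not settings prescribed by this chapter:

# A quick preview of the three ensemble techniques with scikit-learn.
# Dataset and hyperparameters here are illustrative assumptions only.
from sklearn.datasets import load_iris
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

# Majority voting: combine several different classifiers
voting = VotingClassifier(estimators=[
    ('lr', LogisticRegression(max_iter=1000)),
    ('dt', DecisionTreeClassifier(max_depth=2, random_state=1)),
    ('knn', KNeighborsClassifier())])

# Bagging: fit copies of one estimator on bootstrap samples
bagging = BaggingClassifier(DecisionTreeClassifier(random_state=1),
                            n_estimators=50, random_state=1)

# Boosting: fit weak learners sequentially, emphasizing past mistakes
boosting = AdaBoostClassifier(n_estimators=50, random_state=1)

for name, clf in [('Voting', voting), ('Bagging', bagging),
                  ('AdaBoost', boosting)]:
    clf.fit(X_train, y_train)
    print(f'{name} test accuracy: {clf.score(X_test, y_test):.3f}')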

Learning with ensembles

The goal of ensemble methods is to combine different classifiers into a meta-classifier that has better generalization performance than each individual classifier alone.
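To make this idea concrete before the formal treatment, the following small sketch (my own illustration, using NumPy and made-up predictions) combines the class-label predictions of three hypothetical classifiers by plurality vote:

import numpy as np

# Hypothetical class-label predictions from three classifiers for the
# same five examples (binary labels 0/1); these values are made up.
predictions = np.array([[0, 1, 1, 0, 1],   # classifier 1
                        [0, 1, 0, 0, 1],   # classifier 2
                        [1, 1, 1, 0, 0]])  # classifier 3

# Majority vote: for each example (column), pick the most frequent label
ensemble_pred = np.apply_along_axis(
    lambda votes: np.bincount(votes).argmax(), axis=0, arr=predictions)

print(ensemble_pred)  # [0 1 1 0 1]

Note that even though each individual classifier misclassifies some examples here, the vote can correct those mistakes as long as the classifiers do not all err on the same examples.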
