3 Heterogeneous parallel ensembles: Combining strong learners
This chapter covers
- Combining base-learning models by performance-based weighting
- Combining base-learning models with meta-learning by stacking and blending
- Avoiding overfitting by ensembling with cross-validation
- Exploring a large-scale, real-world, text-mining case study with heterogeneous ensembles
In the previous chapter, we introduced two parallel ensemble methods: bagging and random forests. These methods (and their variants) train homogeneous ensembles, where every base estimator is trained using the same base-learning algorithm. For example, in bagging classification, all the base estimators are decision-tree classifiers.
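For instance, such a homogeneous bagging ensemble can be built in a few lines with scikit-learn. The following is a minimal sketch, not an example from the book: the synthetic dataset and hyperparameter values are illustrative, and it assumes scikit-learn 1.2 or later, where `BaggingClassifier` takes an `estimator` parameter.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative synthetic data
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Homogeneous ensemble: every base estimator uses the same
# base-learning algorithm, a decision-tree classifier
bagging = BaggingClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),
    n_estimators=50,
    random_state=42,
)
bagging.fit(X_train, y_train)
print(f"Bagging (homogeneous) accuracy: {bagging.score(X_test, y_test):.3f}")
```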
In this chapter, we continue exploring parallel ensemble methods, turning to heterogeneous ensembles, whose base estimators are trained with different base-learning algorithms. We combine their predictions in two ways: with performance-based weighting, and with meta-learning approaches such as stacking and blending.
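By contrast, here is a minimal sketch of a heterogeneous parallel ensemble: three different base-learning algorithms combined by weighted voting with scikit-learn's `VotingClassifier`. The weights here are hypothetical stand-ins for the performance-based weights this chapter develops; in practice, they would be derived from each model's validation performance.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Illustrative synthetic data
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Heterogeneous ensemble: each base estimator uses a
# different base-learning algorithm
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("nb", GaussianNB()),
    ],
    voting="soft",            # average the predicted class probabilities
    weights=[2.0, 1.0, 1.0],  # hypothetical performance-based weights
)
ensemble.fit(X_train, y_train)
print(f"Voting (heterogeneous) accuracy: {ensemble.score(X_test, y_test):.3f}")
```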