Chapter 9. Exploring advanced methods
This chapter covers
- Reducing training variance with bagging and random forests
- Learning non-monotone relationships with generalized additive models
- Increasing data separation with kernel methods
- Modeling complex decision boundaries with support vector machines
In the previous chapters, we covered the basic predictive modeling algorithms that you should have in your toolkit. These machine learning methods are usually a good place to start. In this chapter, we'll look at more advanced methods that resolve specific weaknesses of the basic approaches. The main weaknesses we'll address are training variance, non-monotone effects, and linearly inseparable data.
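To make the first of these weaknesses concrete, here is a minimal base-R sketch of bagging: fit many copies of a base learner on bootstrap resamples of the training data and average their predictions. The data, the polynomial base learner, and the function name `bag_predict` are illustrative assumptions, not the book's example.

```r
set.seed(2019)

# Hypothetical noisy training data (illustrative, not from the book)
n <- 200
d <- data.frame(x = runif(n))
d$y <- sin(4 * d$x) + rnorm(n, sd = 0.3)

# Bagged prediction: average predictions from models fit on bootstrap samples.
# Averaging smooths out the run-to-run variability of any single fit.
bag_predict <- function(data, newdata, n_models = 50) {
  preds <- replicate(n_models, {
    boot <- data[sample(nrow(data), replace = TRUE), ]  # bootstrap resample
    model <- lm(y ~ poly(x, 5), data = boot)            # any base learner works here
    predict(model, newdata = newdata)
  })
  rowMeans(preds)  # one column of preds per model; average across models
}

test_grid <- data.frame(x = seq(0, 1, length.out = 11))
head(bag_predict(d, test_grid))
```

Random forests extend this idea by bagging decision trees while also randomizing the variables each tree may split on, which further decorrelates the individual models.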
To illustrate the issues, let’s consider a ...