CHAPTER 5
Building Predictive Models Using Penalized Linear Methods
Chapter 2 looked at a number of different data sets with an eye toward understanding the data sets, the relations between the various attributes and labels, and the nature of the problems being posed. This chapter picks those data sets up once again and runs through some case studies demonstrating the process of building predictive models by using the penalized linear methods that you saw in Chapter 4, “Penalized Linear Regression.” Generally, the model-building will be segmented into two or more parts.
You’ll recall from Chapter 4 that model building with penalized linear regression has two steps. One is to train on the whole data set to trace out the coefficient curves. The other is to run cross-validation to determine the best achievable out-of-sample performance and to identify the model that achieves it. The step of determining the achievable performance encompasses the hard design work, and in many of the examples in this chapter it is the only step presented. The purpose of training on the whole data set is to get the best estimates of the model coefficients, but it does not change your estimate of the errors, which are the gauge of performance.
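To make the two steps concrete, here is a minimal sketch using scikit-learn's lasso_path and LassoCV on a synthetic data set. The library calls and the data are illustrative assumptions, not the listings developed later in the chapter: the coefficient curves come from fitting the whole data set across a range of penalty values, and cross-validation supplies the out-of-sample error estimate and the penalty that minimizes it.

from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LassoCV, lasso_path

# Synthetic data standing in for one of the chapter's data sets (an assumption)
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

# Step 1: trace out the coefficient curves by fitting the whole data set
# across a descending sequence of penalty values
alphas, coefs, _ = lasso_path(X, y)

# Step 2: cross-validate to estimate the best achievable out-of-sample error
# and to identify the penalty value that achieves it
cv_model = LassoCV(cv=10).fit(X, y)
print("best alpha:", cv_model.alpha_)
print("CV mean squared error at best alpha:", cv_model.mse_path_.mean(axis=1).min())

# Refit on all the data at the chosen penalty for the final coefficient estimates
final_model = Lasso(alpha=cv_model.alpha_).fit(X, y)
print("coefficients:", final_model.coef_)

Note that LassoCV already refits on the full data set at the chosen penalty, so the explicit refit at the end simply makes the second step visible.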
This chapter runs through a variety of problem types: regression problems, classification problems, problems with categorical attributes, and problems with nonlinear dependence of the labels on the attributes. It looks at basis expansion ...
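As a preview of the basis-expansion idea, the following is a minimal sketch assuming scikit-learn's PolynomialFeatures and a synthetic data set (both assumptions for illustration): appending squared and interaction terms to the attribute matrix lets a penalized linear model capture nonlinear dependence of the labels on the original attributes.

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import PolynomialFeatures

# Synthetic attributes with a genuinely nonlinear label (assumed for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=200)

# Basis expansion: append squares and pairwise products of the original attributes
X_expanded = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

# The penalized model is still linear -- but in the expanded attributes
model = LassoCV(cv=10).fit(X_expanded, y)
print("expanded attribute count:", X_expanded.shape[1])
print("nonzero coefficients:", np.sum(model.coef_ != 0))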