Chapter 11. Regression

Regression is similar to classification: you have a number of input features, and you want to predict an output feature. In classification, the output feature is binary or categorical; in regression, it is a real-valued number.

Typically, regression algorithms fall into two categories:

  • Modeling the output as a linear combination of the inputs. There is a ton of elegant math here and principled ways to handle data pathologies.
  • Ugly hacks to deal with anything nonlinear.

This chapter will review several of the more popular regression techniques in machine learning, along with some techniques for assessing how well they perform.

I have made the unconventional decision to include fitting a line (or other curves) to two-dimensional data within the chapter on regression. You usually don't see curve fitting in the context of machine learning regression, but the two are really the same thing mathematically: you assume some functional form for the output as a function of the inputs (such as y = m₁x₁ + m₂x₂, where the xᵢ are inputs and the mᵢ are parameters that you set to whatever you want), and then you choose the parameters so that the model lines up as well as possible (however you define “as well as possible”) with your training data. The distinction between the two is a historical accident; fitting a curve to data was developed long before machine learning, and even before computers.
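
To make this concrete, here is a minimal sketch of fitting a line to two-dimensional data with NumPy's polyfit, which picks the slope and intercept that minimize the sum of squared errors (“as well as possible” in the least-squares sense). The data here is synthetic and purely illustrative:

    import numpy as np

    # Synthetic two-dimensional data: a noisy line
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2.5 * x + 1.0 + rng.normal(scale=2.0, size=x.shape)

    # Fit y = m*x + b by least squares; deg=1 means "fit a line"
    m, b = np.polyfit(x, y, deg=1)
    print(f"slope={m:.2f}, intercept={b:.2f}")

Swapping deg=1 for a higher degree fits a polynomial instead of a line; the underlying idea, choosing parameters to minimize squared error, is unchanged.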

11.1 Example: Predicting Diabetes Progression

The following script uses a dataset describing diabetes patients, with the goal of predicting a quantitative measure of disease progression.
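
The excerpt cuts off before the script itself, so what follows is a minimal sketch of such an example, assuming scikit-learn's built-in load_diabetes dataset and an ordinary linear regression; the book's actual script may differ:

    # A minimal sketch, assuming scikit-learn's diabetes dataset;
    # the original script may load and model the data differently.
    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # Ten baseline features per patient; the target is a measure
    # of disease progression one year after baseline.
    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = LinearRegression()
    model.fit(X_train, y_train)
    print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))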
