Regression is similar to classification: you have a number of input features, and you want to predict an output feature. In classification, this output feature is binary or categorical. With regression, it is a real number.

Typically, regression algorithms fall into two categories:

- Modeling the output as a linear combination of the inputs. There is a ton of elegant math here and principled ways to handle data pathologies.
- Ugly hacks to deal with anything nonlinear.
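The first category can be sketched in a few lines. This is a minimal illustration, not code from this chapter: it generates toy data (with made-up coefficients 2 and -3) and recovers a linear combination of the inputs by ordinary least squares.

```python
import numpy as np

# Toy data: two input features, one real-valued output.
# Assumed "true" relationship for illustration: y = 2*x1 - 3*x2 + noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Fit the output as a linear combination of the inputs
# by ordinary least squares.
coeffs, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)  # close to [2, -3]
```

The "elegant math" referred to above lives in `lstsq` and its relatives: closed-form solutions, well-understood statistics, and principled extensions (such as regularization) when the data misbehave.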

This chapter will review several of the more popular regression techniques in machine learning, along with some techniques for assessing how well they perform.

I have made the unconventional decision to include fitting a line (or other curve) to two-dimensional data within the chapter on regression. You usually don't see curve fitting in the context of machine learning regression, but they're really the same thing mathematically: you assume some functional form for the output as a function of the inputs (such as *y* = *m*_{1}*x*_{1} + *m*_{2}*x*_{2}, where the *x*_{i} are inputs and the *m*_{i} are free parameters), and then you choose the parameters so that the function lines up as well as possible (however you define "as well as possible") with your training data. The distinction between the two is a historical accident; fitting a curve to data was developed long before machine learning, and even before computers.
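To make the equivalence concrete, here is a hedged sketch (the line y = 1.5x + 4 and the noise level are invented for illustration): classic curve fitting with `numpy.polyfit` picks the slope and intercept that minimize squared error, exactly what a machine-learning linear regression on one input feature would do.

```python
import numpy as np

# Hypothetical two-dimensional data scattered around y = 1.5*x + 4.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 1.5 * x + 4 + rng.normal(scale=0.2, size=x.size)

# "Curve fitting": choose slope m and intercept b to minimize
# the squared vertical distance to the data points.
m, b = np.polyfit(x, y, deg=1)
print(m, b)  # approximately 1.5 and 4
```

A machine-learning library's linear regressor fit on the same (x, y) pairs would return the same parameters; only the vocabulary and the tooling differ.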

## 11.1 **Example: Predicting Diabetes Progression**

The following script uses a dataset describing ...