Chapter 11. Regression
Regression is similar to classification: you have a number of input features, and you want to predict an output feature. In classification, this output feature is either binary or categorical. With regression, it is a real‐valued number.
As typically presented, machine‐learning regression algorithms fall into two main categories:
- Modeling the output as a linear combination of the inputs (possibly with some nonlinear pre‐processing of the inputs). There is a ton of elegant math here and principled ways to handle data pathologies.
- Ugly hacks to deal with anything nonlinear.
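To make the first category concrete, here is a minimal sketch of fitting a linear combination of inputs by ordinary least squares, using NumPy directly. The data here is synthetic and the coefficients (3.0, -2.0) are made up for illustration.

```python
import numpy as np

# Hypothetical data: two input features, one real-valued output.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Model the output as a linear combination of the inputs.
# A column of ones is appended so an intercept is learned too.
A = np.column_stack([X, np.ones(len(X))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # approximately [3.0, -2.0, 0.0]
```

The same fit could be done with scikit-learn's `LinearRegression`; the point is that the model is nothing more than a weighted sum of the inputs, chosen to minimize squared error on the training data.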
This chapter will review several of the more popular regression techniques in machine learning, along with some techniques for assessing how well they perform.
I have made the unconventional decision to include fitting a curve (like a line, parabola, or exponential decay) to two‐dimensional data within the chapter on regression. You usually don’t see curve fitting in the context of machine‐learning regression, but they’re really the same thing mathematically: you assume some functional form for the output as a function of the inputs (such as y = m₁x₁ + m₂x₂, where the xᵢ are inputs and the mᵢ are model parameters), and then you choose the parameters to line up with your training data.
The difference in terminology is partly historical accident: “fitting a curve” was developed in the early 1800s long before the invention of computers. But, there is also a difference in spirit. When ...