February 2018
As you remember from Chapter 1, Getting Started with Machine Learning, for supervised learning we need two functions: the model and the loss function. We will use the least squares loss function to assess the quality of the model. The method was proposed by Carl Friedrich Gauss at the end of the 18th century. Its essence is to minimize the sum of the squared vertical distances between the data points and the regression line. The difference (deviation) between the true value yi and the predicted value h(xi) is called the residual and is denoted as εi. Our loss function J will be a residual sum of squares (RSS), modified just a bit. If there are n samples of feature xi and label yi, then the RSS can be calculated as:

RSS = Σᵢ₌₁ⁿ εᵢ² = Σᵢ₌₁ⁿ (yᵢ − h(xᵢ))²
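To make the definition concrete, here is a minimal sketch of the RSS computation in Python. The linear model h, its coefficients, and the sample data are illustrative assumptions, not the book's own example: for each sample, the residual yi − h(xi) is squared, and the squares are summed.

```python
def h(x, w0=1.0, w1=2.0):
    # Hypothetical linear model h(x) = w0 + w1 * x (coefficients are assumed)
    return w0 + w1 * x

def rss(xs, ys):
    # Residual sum of squares: sum of (y_i - h(x_i))^2 over all samples
    return sum((y - h(x)) ** 2 for x, y in zip(xs, ys))

# Toy data: noisy observations of y = 1 + 2x
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]
print(rss(xs, ys))  # residuals are 0.1, -0.1, 0.2, -0.2, so RSS is about 0.10
```

A model that fits the data well yields a small RSS; training a regression model amounts to choosing the coefficients of h that minimize this quantity.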