Effective Amazon Machine Learning by Alexis Perrier


Optimizing the learning rate

Recall from Chapter 2, Machine Learning Definitions and Concepts, in the section Regularization on linear models, that the Stochastic Gradient Descent (SGD) algorithm has a parameter called the learning rate.

SGD is based on the idea of using each new data sample (or block of samples) to make small corrections to the linear regression coefficients. At each iteration, the input samples are consumed either one at a time or block by block to estimate the best correction (the so-called gradient) to apply to the coefficients so as to further reduce the estimation error. It has been shown that SGD converges to an optimal solution for the linear regression weights. These ...
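To make the role of the learning rate concrete, here is a minimal sketch of sample-by-sample SGD for a one-dimensional linear regression. This is purely illustrative (synthetic data, hand-written update rule), not Amazon ML's internal implementation; the learning rate scales how large a correction each sample makes to the coefficients.

```python
import numpy as np

# Synthetic data for y ≈ w*x + b with true w = 3.0 and b = 0.5
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = 3.0 * X + 0.5 + rng.normal(0, 0.1, 200)

def sgd_linear(X, y, learning_rate=0.1, epochs=50):
    """Fit w and b by stochastic gradient descent, one sample at a time."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            error = (w * xi + b) - yi        # prediction error on this sample
            w -= learning_rate * error * xi  # gradient step on the weight
            b -= learning_rate * error       # gradient step on the intercept
    return w, b

w, b = sgd_linear(X, y)
print(w, b)  # estimates close to the true coefficients 3.0 and 0.5
```

A learning rate that is too small makes convergence slow; one that is too large makes the updates overshoot and the coefficients oscillate or diverge, which is why this parameter is worth tuning.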
