SGD improvements

We'll start with momentum, which extends vanilla SGD by blending the current weight update with the weight updates from previous steps. That is, if the weight update at step t-1 was large, it will increase the weight update at step t as well. We can explain momentum with an analogy. Think of the loss function surface as the surface of a hill. Now, imagine that we are holding a ball at the top of the hill (a maximum). If we drop the ball, thanks to Earth's gravity, it will start rolling toward the bottom of the hill (a minimum). The farther it travels, the faster it goes. In other words, it gains momentum (hence the name of the optimization).
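To make this concrete, here is the momentum update in equation form (a sketch; the notation is an assumption, since this excerpt doesn't show the book's own symbols: μ is the momentum coefficient, often set around 0.9, η is the learning rate, and ∇J(w) is the gradient of the loss with respect to the weights):

    Δw_t = μ·Δw_{t-1} − η·∇J(w_{t-1})
    w_t  = w_{t-1} + Δw_t

Setting μ = 0 recovers vanilla SGD, while values closer to 1 let past updates carry more "speed" into the current step, just like the rolling ball.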

Now, let's look at how to implement momentum in the weight update rule.
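The update rule above translates directly into code. The following is a minimal NumPy sketch, not the book's own implementation; the function name, the quadratic toy loss, and the hyperparameter values are illustrative assumptions:

    import numpy as np

    def momentum_sgd_step(w, velocity, grad, lr=0.1, mu=0.9):
        """One momentum SGD step: blend the previous update into the current one."""
        velocity = mu * velocity - lr * grad  # accumulate "speed" from past updates
        w = w + velocity                      # move the weights along the velocity
        return w, velocity

    # Toy example: minimize J(w) = w^2, whose gradient is 2w.
    w = np.array([5.0])
    velocity = np.zeros_like(w)
    for step in range(100):
        grad = 2 * w                          # gradient of the toy loss
        w, velocity = momentum_sgd_step(w, velocity, grad)

    print(w)  # approaches the minimum at w = 0

Note that the velocity persists across steps: each call adds the (decayed) previous update to the new gradient step, which is exactly how a large update at step t-1 enlarges the update at step t.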
