Adamax – Adam based on infinity-norm

Now, we will look at a small variant of the Adam algorithm called Adamax. Let's recall the equation of the second-order moment in Adam:

$$v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$$
As you may have noticed from the preceding equation, we scale the gradients inversely proportional to the $L_2$ norm of the current and past gradients (the $L_2$ norm here basically means the square of the gradient values):

$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\, \hat{m}_t$$
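To make the inverse scaling concrete, here is a minimal NumPy sketch of a single Adam step, showing how $v_t$ accumulates squared gradients and how the step is divided by their running $L_2$-style norm. The function name, hyperparameter values, and variable names are illustrative assumptions, not the book's own code.

```python
import numpy as np

# Common Adam defaults; the values and names here are purely illustrative.
beta1, beta2, eta, epsilon = 0.9, 0.999, 0.001, 1e-8

def adam_step(theta, m, v, grad, t):
    """One Adam update: v accumulates squared gradients, so the step is scaled
    inversely by sqrt(v), i.e. by a running L2-style norm of past gradients."""
    m = beta1 * m + (1 - beta1) * grad          # first-order moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-order moment (squared gradients)
    m_hat = m / (1 - beta1 ** t)                # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + epsilon)
    return theta, m, v
```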
Instead of having just the $L_2$ norm, can we generalize it to the $L_p$ norm? In general, when we have ...
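The excerpt is cut off here, but the generalization it builds toward is the standard Adamax rule from the Adam paper: as $p \to \infty$, the $L_p$ norm is replaced by an exponentially weighted infinity norm, $u_t = \max(\beta_2 u_{t-1}, |g_t|)$, which takes the place of $\sqrt{\hat{v}_t}$ in the update. The sketch below is a minimal NumPy illustration of that update on a toy quadratic loss; it is not the book's own code, and the names and default values are assumptions.

```python
import numpy as np

beta1, beta2, eta, epsilon = 0.9, 0.999, 0.002, 1e-8  # illustrative defaults

def adamax_step(theta, m, u, grad, t):
    """One Adamax update: Adam's L2-based second moment is replaced by u,
    an exponentially weighted infinity norm of current and past gradients."""
    m = beta1 * m + (1 - beta1) * grad          # first-order moment, as in Adam
    u = np.maximum(beta2 * u, np.abs(grad))     # infinity-norm accumulator
    m_hat = m / (1 - beta1 ** t)                # bias-correct the first moment
    theta = theta - eta * m_hat / (u + epsilon) # no sqrt needed; u is already a norm
    return theta, m, u

# Usage on a toy quadratic loss f(theta) = theta^2 (gradient = 2 * theta)
theta, m, u = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 101):
    grad = 2 * theta
    theta, m, u = adamax_step(theta, m, u, grad, t)
print(theta)  # moves toward the minimum at 0
```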
