Gradient-based optimization

Optimization involves either minimizing or maximizing some function, f(x), where x is a scalar or a vector of numerical values. Here, f(x) is called the objective function or criterion. In the context of neural networks, it is called the cost function, loss function, or error function. In the previous example, the loss function we want to minimize is E.

Suppose we have a function, y = f(x), where x and y are real numbers. The derivative of this function, f'(x), tells us how f(x) changes with small changes in x: for a sufficiently small ε, f(x + ε) ≈ f(x) + εf'(x). So, the derivative can be used to reduce the value of the function by changing x in small steps. Suppose f'(x) > 0 at some point x. This means f(x) will increase if we increase x by a small amount, and hence f(x - ε) will be smaller than f(x) for a small enough ε > 0. Moving x in small steps opposite to the sign of the derivative therefore decreases f(x); this is the idea behind gradient descent.
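As a minimal sketch of this update rule, x ← x - εf'(x), the following snippet minimizes a simple quadratic function (the function, starting point, and learning rate here are illustrative choices, not an example from this book):

```python
# Minimal gradient-descent sketch: repeatedly step x opposite to the
# sign of the derivative to reduce f(x).

def f(x):
    return (x - 3.0) ** 2          # objective function, minimized at x = 3

def f_prime(x):
    return 2.0 * (x - 3.0)         # analytical derivative of f

x = 0.0                            # initial guess (arbitrary)
epsilon = 0.1                      # learning rate (step size)

for step in range(100):
    x = x - epsilon * f_prime(x)   # move x against the derivative's sign

print(x)                           # converges toward 3.0, the minimizer of f
```

With a small enough learning rate, each step reduces f(x); too large a step can overshoot the minimum, which is why ε is kept small.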
