Learn ARCore - Fundamentals of Google ARCore by Micheal Lanham


Gradient descent explained

Gradient descent uses the partial derivatives of the loss (error) function with respect to each weight in order to propagate updates back to the neuron weights. In this example the activation function is the sigmoid, so computing the gradient for the output neuron requires the derivative of the sigmoid function, applied through the chain rule. The following graph shows how the gradient descent method walks down the derivative in order to find the minimum:
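The idea above can be sketched for a single neuron with one weight. This is a minimal illustration, not the book's code: the learning rate, target value, and step count are hypothetical, and the loss is a simple squared error.

```python
import math

# Sigmoid activation and its derivative, needed when backpropagating
# the error to the weight via the chain rule.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# Gradient descent on one neuron: prediction = sigmoid(w * x),
# loss = (prediction - target)^2. All values here are illustrative.
def train(x, target, w=0.0, lr=0.5, steps=200):
    for _ in range(steps):
        z = w * x
        pred = sigmoid(z)
        # Chain rule: dLoss/dw = 2*(pred - target) * sigmoid'(z) * x
        grad = 2.0 * (pred - target) * sigmoid_prime(z) * x
        w -= lr * grad  # step downhill along the gradient
    return w

w = train(x=1.0, target=0.8)
print(sigmoid(w * 1.0))  # converges toward the target of 0.8
```

Each iteration moves the weight a small step against the gradient, which is exactly the "walking down the derivative" behavior the graph below depicts.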

Gradient descent algorithm visualized
If you plan to spend any more time studying neural networks, deep learning, or machine learning, you will ...
