Hands-On Reinforcement Learning with Python by Sudharsan Ravichandiran

Gradient descent

As a result of forward propagation, we reach the output layer. Now we backpropagate through the network from the output layer to the input layer, updating the weights by computing the gradient of the cost function with respect to each weight, so as to minimize the error. Sounds confusing, right? Let's begin with an analogy. Imagine you are on top of a hill, as shown in the following diagram, and you want to reach the lowest point of the hill. You take a step downwards, which brings you closer to the lowest point (that is, you descend the hill towards the lowest point). There could be many regions that look like the lowest point on the hill, but we have to reach the point which is actually the ...
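The descent described above can be sketched numerically. Here is a minimal sketch on a toy one-dimensional cost, J(w) = (w - 3)^2, whose gradient is 2(w - 3) and whose minimum sits at w = 3; the function, learning rate, and step count below are illustrative choices, not taken from the book:

```python
def gradient_descent(w_init, learning_rate=0.1, num_steps=100):
    """Repeatedly step against the gradient of J(w) = (w - 3)**2."""
    w = w_init
    for _ in range(num_steps):
        grad = 2 * (w - 3)            # dJ/dw for this toy cost
        w = w - learning_rate * grad  # step downhill, opposite the gradient
    return w

w_final = gradient_descent(w_init=0.0)
print(round(w_final, 4))  # → 3.0, the minimum of the toy cost
```

Each update moves the weight a fraction (set by the learning rate) of the way down the slope; in a real network the same rule is applied to every weight, with the gradients supplied by backpropagation.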
