Gradient descent

Forward propagation ends at the output layer. Now we backpropagate through the network from the output layer to the input layer, updating the weights by calculating the gradient of the cost function with respect to each weight, so as to minimize the error. Sounds confusing, right? Let's begin with an analogy. Imagine you are on top of a hill, as shown in the following diagram, and you want to reach the lowest point of the hill. You take a step downhill, and each step brings you closer to the lowest point (that is, you descend the hill towards the lowest point). There could be many regions that look like the lowest point on the hill (these are local minima), but we have to reach the point that is actually the lowest of all: the global minimum.
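To make the analogy concrete, here is a minimal sketch of the gradient descent update rule, w ← w − α ∂J/∂w, applied to a toy one-dimensional cost. The quadratic cost, learning rate, and step count below are illustrative assumptions for this sketch, not values from the book:

import numpy as np

def cost(w):
    # Toy cost function J(w) = (w - 3)^2 with its global minimum at w = 3
    return (w - 3) ** 2

def gradient(w):
    # Derivative of the cost with respect to the weight: dJ/dw = 2(w - 3)
    return 2 * (w - 3)

alpha = 0.1              # learning rate: the size of each downhill step
w = np.random.randn()    # start at a random point on the "hill"

for step in range(50):
    # Move against the gradient, i.e. in the direction of steepest descent
    w = w - alpha * gradient(w)

print(f"w = {w:.4f}, cost = {cost(w):.6f}")  # w approaches 3, cost approaches 0

Each iteration takes one downhill step; after enough steps, w settles near 3 and the cost near 0. On this convex toy cost there is only one minimum, but on the bumpy cost surfaces of real networks the same update can get stuck in a local minimum instead of the global one, which is exactly the risk the hill analogy describes.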