Gradient descent and backpropagation

Before we look at the roles gradient descent and backpropagation play in the context of neural networks, let's first establish what is meant by an optimization problem.

An optimization problem, briefly, amounts to one of the following (written compactly just after this list):

  • Minimizing a certain cost
  • Maximizing a certain profit
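
Written out (the notation here is our own shorthand, since this section states no formulas), both cases ask for the parameter values w that make some objective as small, or as large, as possible:

    w^{*} = \arg\min_{w} C(w) \quad \text{(minimizing a cost } C)
    \qquad \text{or} \qquad
    w^{*} = \arg\max_{w} P(w) \quad \text{(maximizing a profit } P)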

Let's now map this to a neural network. What happens if, after getting the output from a feed-forward neural network, we find that its performance is not up to the mark (which is the case almost all the time)? How do we improve the performance of the NN? The answer is gradient descent and backpropagation.
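
To preview the first of these, gradient descent repeatedly nudges each parameter in whichever direction lowers the cost, following the update rule w ← w − η · dC/dw, where η is the learning rate. Here is a minimal sketch on a one-parameter cost; the cost function, starting point, and learning rate are illustrative assumptions, not taken from this book:

    # A minimal sketch of gradient descent on the one-parameter cost
    # C(w) = (w - 3) ** 2, whose minimum sits at w = 3.

    def cost_gradient(w):
        return 2 * (w - 3)  # derivative of (w - 3) ** 2

    w = 0.0                 # arbitrary starting guess
    learning_rate = 0.1     # eta: how far each update moves
    for step in range(100):
        w -= learning_rate * cost_gradient(w)

    print(round(w, 4))      # prints 3.0, the minimizer of the cost

Backpropagation is the companion technique that supplies those gradients for a whole network: it applies the chain rule backward, layer by layer, to measure how much each weight contributed to the cost.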

We are going to optimize the learning process of the neural network with these two techniques. ...
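
Putting the two together, a sketch of one training loop is shown below: backpropagation computes the gradient of the cost with respect to every weight, and gradient descent then steps each weight against its gradient. The architecture, learning rate, toy XOR data, and mean-squared-error cost are all illustrative assumptions, not this book's own example:

    # A minimal NumPy sketch of backpropagation + gradient descent
    # on a tiny network with one hidden layer.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: XOR, a classic task a single-layer network cannot solve.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Parameters: 2 inputs -> 8 hidden units -> 1 output.
    W1 = rng.normal(size=(2, 8))
    b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1))
    b2 = np.zeros(1)
    learning_rate = 1.0

    for step in range(10000):
        # Feed-forward pass.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Backpropagation: chain rule from the mean-squared-error cost
        # C = mean((output - y) ** 2) back through each layer.
        d_output = 2 * (output - y) / len(X) * output * (1 - output)
        d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

        # Gradient descent: move every parameter against its gradient.
        W2 -= learning_rate * hidden.T @ d_output
        b2 -= learning_rate * d_output.sum(axis=0)
        W1 -= learning_rate * X.T @ d_hidden
        b1 -= learning_rate * d_hidden.sum(axis=0)

    print(np.round(output.ravel(), 2))  # typically approaches [0, 1, 1, 0]

Each pass through the loop is one feed-forward step followed by one backward step; frameworks such as TensorFlow and PyTorch automate exactly this bookkeeping.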
