Backpropagation

Now that we know how forward passes are computed in MLPs, how to initialize their weights, and how to calculate the network's loss, it is time for us to learn about backpropagation: a method for computing the gradient of the loss with respect to every weight in the network. This is where our knowledge of multivariable calculus and partial derivatives comes in handy.
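
To make this concrete: for a single weight, the chain rule decomposes this gradient into factors we can compute layer by layer. In generic notation, which may differ from the book's (L is the loss, w^(l)_ij a weight in layer l, z a pre-activation, a an activation, and σ the activation function):

\[
\frac{\partial L}{\partial w^{(l)}_{ij}}
= \frac{\partial L}{\partial a^{(l)}_{j}}
\cdot \frac{\partial a^{(l)}_{j}}{\partial z^{(l)}_{j}}
\cdot \frac{\partial z^{(l)}_{j}}{\partial w^{(l)}_{ij}},
\qquad
z^{(l)}_{j} = \sum_{i} w^{(l)}_{ij}\, a^{(l-1)}_{i} + b^{(l)}_{j},
\quad
a^{(l)}_{j} = \sigma\bigl(z^{(l)}_{j}\bigr).
\]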

If you recall, this network is fully connected, which means that every node in one layer is connected to, and therefore influences, every node in the next layer. It is for this reason that in backpropagation we take the derivative of the loss with respect to the weights of the layer closest to the output, then the one before that, and so on, until we reach the weights of the first layer, as sketched in the code below.
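
To see the whole procedure end to end, here is a minimal NumPy sketch of one forward and one backward pass through a small two-layer MLP. The architecture and all names here (a sigmoid hidden layer, a linear output, mean-squared-error loss, W1, W2, and so on) are illustrative assumptions, not the book's own example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 input features, 1 target value each.
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Fan-in-scaled random initialization (illustrative).
W1 = rng.normal(size=(3, 5)) / np.sqrt(3)
b1 = np.zeros((1, 5))
W2 = rng.normal(size=(5, 1)) / np.sqrt(5)
b2 = np.zeros((1, 1))

# Forward pass: compute and cache each layer's activations.
z1 = X @ W1 + b1          # hidden pre-activations
a1 = sigmoid(z1)          # hidden activations
y_hat = a1 @ W2 + b2      # linear output layer
loss = np.mean((y_hat - y) ** 2)

# Backward pass: apply the chain rule layer by layer, starting
# at the layer closest to the output and working backwards.
n = X.shape[0]
dL_dyhat = 2.0 * (y_hat - y) / n               # dL/dy_hat for the MSE loss
dL_dW2 = a1.T @ dL_dyhat                       # output-layer weight gradient
dL_db2 = dL_dyhat.sum(axis=0, keepdims=True)

dL_da1 = dL_dyhat @ W2.T                       # error pushed back through W2
dL_dz1 = dL_da1 * a1 * (1.0 - a1)              # sigmoid'(z1) = a1 * (1 - a1)
dL_dW1 = X.T @ dL_dz1                          # hidden-layer weight gradient
dL_db1 = dL_dz1.sum(axis=0, keepdims=True)

# One gradient-descent step with the gradients we just computed.
lr = 0.1
W1 -= lr * dL_dW1; b1 -= lr * dL_db1
W2 -= lr * dL_dW2; b2 -= lr * dL_db2
```

Note the ordering: the gradients for W2 are computed first, and the gradients for W1 reuse the error signal (dL_da1) already propagated back through the output layer, which is exactly the back-to-front procedure described above.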
