Backpropagation

We learned that gradient descent iteratively minimizes a function by computing its gradient and using that gradient to update the function's parameters. To minimize the cost function of our multi-layer perceptron, we need to be able to calculate its gradient. Recall that multi-layer perceptrons contain layers of hidden units that represent latent variables. We cannot calculate their errors directly from the cost function; the training data specifies the desired output of the entire network, but it does not describe how the hidden units should behave. Since we cannot calculate the hidden units' errors, we cannot calculate their gradients or update their weights. A naive solution to this problem is to randomly change the weights for the hidden ...

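To make the problem concrete, here is a minimal sketch, not the book's code, of a single gradient-descent step for a one-hidden-layer network with sigmoid units and a squared-error cost; the variable names (W1, W2, learning_rate) are our own. It illustrates how the chain rule carries the output error back to the hidden-layer weights, which is the idea behind backpropagation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))              # 4 training instances, 3 features
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(size=(3, 5))             # input-to-hidden weights
W2 = rng.normal(size=(5, 1))             # hidden-to-output weights
learning_rate = 0.5

# Forward pass: compute hidden activations and the network's output.
H = sigmoid(X @ W1)
y_hat = sigmoid(H @ W2)

# Backward pass: the output error is known from the training data;
# the chain rule propagates it back to produce hidden-unit errors.
delta_out = (y_hat - y) * y_hat * (1 - y_hat)
delta_hidden = (delta_out @ W2.T) * H * (1 - H)

# Gradient-descent updates for both weight matrices.
W2 -= learning_rate * H.T @ delta_out
W1 -= learning_rate * X.T @ delta_hidden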