Backpropagation, or backprop, is a core learning algorithm used in AI applications, and understanding it is essential to building and debugging your neural networks going forward. Short for backpropagation of error, it is the means by which ANNs calculate how far off their predictions are.

You can think of it as the complement to the gradient descent optimization algorithm that we previously discussed. Recall that, at their core, ANNs seek to learn a set of weight parameters that help them approximate a function replicating our training data. Backpropagation measures how the error changes with respect to each weight after every training cycle, and gradient descent uses those measurements to reduce the error. Backprop computes the gradient ...
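To make the division of labor concrete, here is a minimal sketch of the two algorithms working together on a single sigmoid neuron. The toy dataset, learning rate, and variable names are illustrative assumptions, not from the text: the backward pass computes the gradient of the error with respect to each weight (backprop), and the update step moves the weights against that gradient (gradient descent).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training data (illustrative): inputs X and OR-like targets y.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weight parameters to learn
b = 0.0                  # bias
lr = 1.0                 # learning rate (assumed)

for epoch in range(5000):
    # Forward pass: make predictions and measure the error.
    pred = sigmoid(X @ w + b)
    error = pred - y

    # Backward pass (backprop): the chain rule yields the gradient
    # of the mean squared error with respect to w and b.
    grad_z = error * pred * (1 - pred)   # dE/dz through the sigmoid
    grad_w = X.T @ grad_z / len(y)       # dE/dw
    grad_b = grad_z.mean()               # dE/db

    # Gradient descent: step the weights opposite the gradient.
    w -= lr * grad_w
    b -= lr * grad_b

print(np.round(sigmoid(X @ w + b)))
```

After training, the rounded predictions match the targets: backprop supplies the direction of steepest error increase, and gradient descent repeatedly steps the other way.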
