February 2018
Intermediate to advanced
378 pages
10h 14m
English
The most common way to train NNs these days is the backward propagation of errors algorithm, or backpropagation (often shortened to backprop). As we have already seen, individual neurons closely resemble linear or logistic regression models, so it should come as no surprise that backpropagation is usually paired with our old friend, the gradient descent algorithm. NN training works in the following way:
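To make the pairing of backpropagation and gradient descent concrete, here is a minimal sketch for the simplest possible case: a single sigmoid neuron (equivalent to logistic regression) trained on a toy OR-gate dataset. The dataset, learning rate, and epoch count are illustrative choices, not taken from the book; a real network would stack many such neurons and propagate the error gradients backward through every layer.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: the logical OR of two binary inputs (illustrative choice).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # two input weights
b = 0.0                                            # bias term
lr = 0.5                                           # learning rate

for epoch in range(2000):
    for x, t in data:
        # Forward pass: weighted sum of inputs, then sigmoid activation.
        y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        # Backward pass: gradient of the squared error 0.5*(y - t)^2
        # with respect to the pre-activation, via the sigmoid derivative.
        delta = (y - t) * y * (1 - y)
        # Gradient descent step: move each parameter against its gradient.
        w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
        b -= lr * delta

# After training, threshold the neuron's output at 0.5.
predictions = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b))
               for x, _ in data]
print(predictions)
```

The forward pass computes the neuron's output, the backward pass computes how the error changes with each weight, and gradient descent nudges the weights in the direction that reduces the error; after enough epochs the neuron reproduces the OR function.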