put and actual is the calculated number from the neurons. We want to find where the
derivative of the error equals 0, which is its minimum.
Equation 8-2. Back propagation
\Delta w_t = -\alpha (t - y) \varphi + \epsilon \Delta w_{t-1}

Here, ε is the momentum factor and propels previous weight changes into our current
weight change, whereas α is the learning rate.
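To make the update rule concrete, here is a minimal sketch in Python of a single momentum-based weight change following Equation 8-2. The function name, parameter names (target, actual, phi, prev_delta, alpha, epsilon), and the default values are our own illustration, not code from this book.

def weight_update(target, actual, phi, prev_delta, alpha=0.1, epsilon=0.9):
    """One momentum-based weight change (Equation 8-2, our sketch).

    target     -- expected output (t)
    actual     -- calculated output from the neuron (y)
    phi        -- derivative term from the activation (phi)
    prev_delta -- previous weight change, Delta w_{t-1}
    alpha      -- learning rate
    epsilon    -- momentum factor
    """
    # Error-driven step plus a fraction of the last change (momentum).
    return -alpha * (target - actual) * phi + epsilon * prev_delta

# Example: repeated updates in a consistent direction pick up speed,
# because each step carries part of the previous step with it.
delta, weight = 0.0, 0.5
for _ in range(3):
    delta = weight_update(target=1.0, actual=0.2, phi=-0.8, prev_delta=delta)
    weight += delta

The momentum term is what distinguishes this update from plain gradient descent: when successive changes agree in sign, the accumulated prev_delta grows and training moves faster through flat regions of the error surface.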
Back propagation has the disadvantage of requiring many epochs to converge. Up until
1988, researchers struggled to train even simple neural networks. Their research on
how to improve this led to a whole new algorithm called QuickProp.
QuickProp
Scott Fahlman in ...