
B
Efficient Neural Network Training Algorithms
The standard backpropagation algorithm introduced in Chapter 6 is notoriously
slow to converge. In this appendix we will develop two additional training
algorithms for the two-layer, feed-forward neural network of Figure 6.10.
The first of these, scaled conjugate gradient, makes use of the second
derivatives of the cost function with respect to the synaptic weights, i.e., of
the Hessian matrix. The second, the extended Kalman filter method, takes
advantage of the statistical properties of the weight parameters themselves.
Both techniques are considerably more efficient than backpropagation.
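As a preview of the role the Hessian plays in scaled conjugate gradient, note that the method never needs the full Hessian matrix explicitly, only its product with a search direction, and that product is commonly approximated by a finite difference of gradients. The sketch below illustrates this idea in Python; the `grad` function is a hypothetical stand-in for the gradient that backpropagation would supply, and the step size `sigma` is an assumed small constant, not a value taken from the text.

```python
import numpy as np

def hessian_vector_product(grad, w, p, sigma=1e-4):
    """Approximate H(w) @ p by a forward finite difference of the gradient.

    grad  : callable returning the cost gradient at a weight vector
    w     : current weight vector (1-D array)
    p     : direction vector (1-D array)
    sigma : scale of the finite-difference perturbation (assumed value)
    """
    norm_p = np.linalg.norm(p)
    if norm_p == 0.0:
        return np.zeros_like(w)
    eps = sigma / norm_p
    # (grad(w + eps*p) - grad(w)) / eps  ->  H(w) @ p as eps -> 0
    return (grad(w + eps * p) - grad(w)) / eps

# Check on a quadratic cost E(w) = 0.5 w^T A w, whose gradient is A w
# and whose Hessian is A, so the approximation should recover A @ p.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
grad = lambda w: A @ w
w = np.array([1.0, -1.0])
p = np.array([0.3, 0.7])
print(hessian_vector_product(grad, w, p))  # close to A @ p = [0.95, 0.85]
```

For a quadratic cost the finite difference is exact up to rounding error; for the neural network cost function it is accurate to first order in the perturbation, which is what the scaled conjugate gradient iteration requires.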
B.1 The Hessian ...