
Multilayer Feedforward Networks
for (i=0 ; i<nout ; i++) {
   delta = (targets[i] - outputs[i]) * actderiv ( outputs[i] ) ;
   deltas[i] = delta ;
   gradptr = grad[i] ;
   for (j=0 ; j<nprev ; j++)
      gradptr[j] += delta * prevact[j] ;
   gradptr[nprev] += delta ;  // Bias activation is always 1
   }
}
The delta for each output neuron is computed and saved (the gradient computation for the hidden layers will need it). Then the partial derivatives for the nprev weights connecting the previous layer to this neuron are computed and summed into the epoch total. Finally, the partial derivative for the bias weight is found. Recall that the bias comes from a hypothetical ...