9 Evolutionary Neural Networks

9.1 Introduction

Layered feedforward neural networks have become very popular for several reasons: in practice they have been found to generalize well, and well-known training algorithms such as Widrow–Hoff, backpropagation, Hebbian learning, winner-takes-all and the Kohonen self-organizing map can often find a good set of weights. However, even with minimal training sets the learning time very often increases exponentially, and such training sets often cannot be constructed (Muehlenbein, 1990). When global minima are hidden among many local minima, the backpropagation (BP) algorithm can end up bouncing between local minima without much overall improvement, which makes training very slow. BP requires computing the gradient of the error with respect to the weights, which in turn requires differentiability; as a result, BP cannot handle discontinuous optimality criteria or discontinuous node transfer functions. BP's speed and robustness are also sensitive to parameters such as the learning rate, momentum and acceleration constant, and the best parameter settings seem to vary from problem to problem (Baldi and Hornik, 1995). The momentum method decreases BP's sensitivity to small details in the error surface, helping the network avoid shallow local minima that would otherwise prevent it from finding a lower-error solution (Vogl et al., 1988).
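As a rough illustration of the momentum idea (this sketch is not taken from the text; the function name, learning rate and momentum value are assumptions chosen for the example), a single weight update can be written as a velocity term that accumulates past gradients and smooths the step over small irregularities of the error surface:

    # Minimal sketch: one backpropagation-style weight update with momentum.
    # Names and constants are illustrative assumptions, not the chapter's code.
    import numpy as np

    def momentum_update(w, grad, velocity, learning_rate=0.1, momentum=0.9):
        """Return updated weights and velocity for one gradient step with momentum."""
        # The velocity accumulates past gradients, so small, noisy features of
        # the error surface are averaged out and shallow minima are easier to escape.
        velocity = momentum * velocity - learning_rate * grad
        return w + velocity, velocity

    # Example: a two-weight vector and its current error gradient dE/dw
    w = np.array([0.5, -0.3])
    velocity = np.zeros_like(w)
    grad = np.array([0.2, -0.1])
    w, velocity = momentum_update(w, grad, velocity)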

The automatic design of artificial neural networks has two basic sides: parametric learning and structural learning. ...
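To make the distinction concrete, the sketch below illustrates only the parametric side: an evolutionary loop searches for the weights of a network whose topology is fixed, whereas structural learning would let the topology itself vary. The network size, population size, mutation scale and toy task are all assumptions made for the example; this is not the procedure developed in this chapter.

    # Minimal sketch of parametric learning by evolution: a population of weight
    # vectors for a fixed 1-4-1 feedforward network is mutated and selected on a
    # toy regression task. All settings are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(weights, x, n_hidden=4):
        """Fixed 1-4-1 feedforward net; weights is a flat parameter vector."""
        w1 = weights[:n_hidden].reshape(1, n_hidden)
        b1 = weights[n_hidden:2 * n_hidden]
        w2 = weights[2 * n_hidden:3 * n_hidden].reshape(n_hidden, 1)
        b2 = weights[-1]
        h = np.tanh(x @ w1 + b1)
        return h @ w2 + b2

    def fitness(weights, x, y):
        """Negative mean squared error: larger is better."""
        return -np.mean((forward(weights, x) - y) ** 2)

    # Toy data: approximate y = sin(x)
    x = np.linspace(-np.pi, np.pi, 50).reshape(-1, 1)
    y = np.sin(x)

    n_params = 3 * 4 + 1                                  # weights + biases of a 1-4-1 net
    pop = rng.normal(0.0, 1.0, (30, n_params))
    for generation in range(200):
        scores = np.array([fitness(ind, x, y) for ind in pop])
        parents = pop[np.argsort(scores)[-10:]]           # keep the 10 fittest
        children = parents[rng.integers(0, 10, 20)] + rng.normal(0.0, 0.1, (20, n_params))
        pop = np.vstack([parents, children])              # elitism plus mutated offspring

    best = pop[np.argmax([fitness(ind, x, y) for ind in pop])]

Because the loop uses only fitness values, it needs neither gradients nor differentiable transfer functions, which is precisely the property that motivates evolutionary training in this chapter.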
