In this chapter we apply the techniques of previous chapters to the training of feedforward neural networks. Neural networks have found numerous practical applications, ranging from telephone echo cancellation to aiding in the interpretation of EEG data (see, e.g., [108] and [72]). The essence of a neural network lies in the connection weights between neurons. The selection of these weights is referred to as *training* or *learning*, and for this reason we often refer to the weights as the *learning parameters*. A popular method for training a neural network is the *backpropagation algorithm*, which formulates training as an unconstrained optimization problem and applies an associated gradient algorithm to it. This chapter is devoted to a description of neural networks and the use of techniques developed in preceding chapters for their training.
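To make the connection between training and unconstrained optimization concrete, the following is a minimal sketch of gradient descent applied to a squared-error objective for a single weight. The data, learning rate, and loss are illustrative assumptions, not taken from the text:

```python
def train_weight(xs, ys, w=0.0, lr=0.05, steps=200):
    """Minimize the squared error E(w) = sum_i (w*x_i - y_i)^2 by gradient descent.

    This is the basic idea behind backpropagation: compute the gradient of the
    error with respect to each learning parameter and step against it.
    """
    for _ in range(steps):
        # dE/dw = sum_i 2*(w*x_i - y_i)*x_i
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad
    return w

# Illustrative data generated by y = 3x; gradient descent recovers w close to 3.
w = train_weight([1.0, 2.0], [3.0, 6.0])
```

In a multilayer network the same idea applies, except that the gradient with respect to weights in earlier layers is computed by the chain rule, which is what the backpropagation algorithm organizes efficiently.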

An *artificial neural network* is a circuit of interconnected simple elements called *neurons*. Each neuron represents a map, typically with multiple inputs and a single output. Specifically, the output of the neuron is a function of the sum of its inputs, as illustrated in Figure 13.1. The function applied at the output of the neuron is called the *activation function*. We use the symbol shown in Figure 13.2 to represent a single neuron. Note that the single output of the neuron may be used as an input to several other neurons, and therefore the symbol for a single ...
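The neuron model just described can be sketched in a few lines of code. This is a minimal illustration assuming a hyperbolic-tangent activation; the text does not fix a particular activation function, and the input and weight values are made up for the example:

```python
import math

def neuron(inputs, weights, activation=math.tanh):
    """A single neuron: the output is the activation function applied to
    the weighted sum of the inputs (cf. Figure 13.1)."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return activation(s)

# Weighted sum: 0.4*1.0 + 0.1*(-2.0) + 0.6*0.5 = 0.5, then tanh is applied.
y = neuron([1.0, -2.0, 0.5], [0.4, 0.1, 0.6])
```

A network is then formed by feeding such outputs forward as inputs to neurons in subsequent layers.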
