Three-layer feedforward networks trained for autoassociative recall are sometimes used for data compression. These networks are discussed in detail starting on page 77, so their inner workings will be glossed over here. All that is important to understand at this point is that the model consists of the usual hypothetical input layer and the actual output layer, with a single layer of neurons hidden between them.
In general, there are far fewer neurons in the
hidden layer than in the input and output layers. A three-layer
feedforward network having six input and output neurons and two
hidden-layer neurons ...
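To make the idea concrete, the following is a minimal sketch, not code from this book, of such a 6-2-6 autoassociative network. It is trained with plain backpropagation to reproduce its own input at the output layer; because only two hidden neurons sit between six inputs and six outputs, their activations serve as a compressed code for each pattern. The training data and procedure here are illustrative assumptions: six-dimensional patterns generated from two underlying factors, so that a two-neuron bottleneck can plausibly capture them.

```python
# Sketch of a 6-2-6 autoassociative (autoencoder-style) network trained by
# backpropagation. The target for each pattern is the pattern itself, so the
# two hidden activations become a compressed representation of the input.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative data (an assumption, not from the text): 6-dimensional patterns
# driven by 2 underlying factors, so 2 hidden neurons can summarize them.
latent = rng.uniform(-1.0, 1.0, (200, 2))
mix = rng.normal(0.0, 1.0, (2, 6))
X = sigmoid(latent @ mix)

n_in, n_hid, n_out = 6, 2, 6
W1 = rng.normal(0.0, 0.5, (n_in, n_hid));  b1 = np.zeros(n_hid)   # input -> hidden
W2 = rng.normal(0.0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)   # hidden -> output

lr = 1.0
for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)            # hidden (compressed) representation
    y = sigmoid(h @ W2 + b2)            # attempted reconstruction of the input

    err = y - X                         # autoassociative target is the input itself
    delta_out = err * y * (1.0 - y)     # gradient at the output pre-activations
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)

    n = X.shape[0]
    W2 -= lr * h.T @ delta_out / n;  b2 -= lr * delta_out.mean(axis=0)
    W1 -= lr * X.T @ delta_hid / n;  b1 -= lr * delta_hid.mean(axis=0)

    if epoch % 500 == 0:
        print(f"epoch {epoch:4d}  mean squared reconstruction error {np.mean(err**2):.4f}")

# Each six-value pattern is now summarized by two hidden activations.
codes = sigmoid(X @ W1 + b1)
print("first pattern:", np.round(X[0], 2), "-> code:", np.round(codes[0], 2))
```

Compression falls out of the architecture: to reconstruct a six-value pattern at the output, the network must squeeze whatever information it can through the two hidden neurons, so storing or transmitting those two activations (plus the decoding weights) stands in for the full pattern, at the cost of some reconstruction error.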