*Kohonen networks* were introduced in 1982 by the Finnish researcher Teuvo Kohonen.^{1} Although initially applied to image and sound analysis, Kohonen networks are also an effective mechanism for cluster analysis. Kohonen networks represent a type of *self-organizing map* (SOM), which itself represents a special class of neural networks, which we studied in Chapter 12.

The goal of SOMs is to convert a complex high-dimensional input signal into a simpler low-dimensional discrete map.^{2} Thus, SOMs are well suited for cluster analysis, where underlying hidden patterns among records and fields are sought. SOMs structure the output nodes into clusters of nodes, where nodes in closer proximity are more similar to each other than to nodes that are farther apart. Ritter^{3} has shown that SOMs represent a nonlinear generalization of principal components analysis, another dimension-reduction technique.

SOMs are based on *competitive learning,* where the output nodes compete among themselves to be the winning node (or neuron), the only node to be activated by a particular input observation. As Haykin describes it: “The neurons become *selectively tuned* to various input patterns (stimuli) or classes of input patterns in the course of a competitive learning process.” A typical SOM architecture is shown in Figure 20.1. The input layer is shown at the bottom of the figure, with one input node for each field. Just as with neural ...
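The competitive learning process described above can be sketched in a few lines of code. The following is a minimal illustration, not code from this text: output nodes are arranged on a small two-dimensional grid, each holding a weight vector the same length as an input record; for each input, the closest node wins and is pulled (along with its grid neighbors) toward the input. The grid size, learning rate, and Gaussian neighborhood radius are arbitrary choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_rows, grid_cols, n_fields = 3, 3, 4

# One weight vector per output node, initialized randomly
weights = rng.random((grid_rows, grid_cols, n_fields))

def winning_node(x, weights):
    """Competitive step: the node whose weight vector is closest
    (in Euclidean distance) to the input record x wins."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

def update(x, weights, lr=0.5, radius=1.0):
    """Cooperative/adaptive step: move the winner and its grid
    neighbors toward x, with influence decaying by grid distance."""
    wi, wj = winning_node(x, weights)
    for i in range(grid_rows):
        for j in range(grid_cols):
            grid_dist_sq = (i - wi) ** 2 + (j - wj) ** 2
            influence = np.exp(-grid_dist_sq / (2 * radius ** 2))
            weights[i, j] += lr * influence * (x - weights[i, j])

# One training pass over a handful of synthetic records
data = rng.random((20, n_fields))
for x in data:
    update(x, weights)
```

In a full implementation, the learning rate and neighborhood radius would decay over many passes, so that early iterations organize the map globally and later iterations fine-tune individual nodes.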
