6 Feature Extraction and Selection

In some cases, the dimension N of a measurement vector z, i.e. the number of sensors, can be very high. In image processing, when raw image data is used directly as the input for classification, the dimension can easily attain values of 10^4 (a 100 × 100 image) or more. Many elements of z can be redundant or even irrelevant with respect to the classification process.
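
As a rough illustration (not taken from the book), the following MATLAB fragment stacks the pixels of a hypothetical 100 × 100 grey-level image into a single measurement vector; the random matrix merely stands in for real image data, and the resulting dimension is N = 10^4.

    % Minimal sketch (plain MATLAB): the raw pixels of a hypothetical 100 x 100
    % image used directly as one measurement vector z.
    im = rand(100, 100);              % placeholder for a 100 x 100 grey-level image
    z  = im(:);                       % column-stack the pixels into one vector
    N  = numel(z);                    % dimension of the measurement vector
    fprintf('dimension N = %d\n', N); % prints: dimension N = 10000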

For two reasons, the dimension of the measurement vector cannot be taken arbitrarily large. The first reason is that the computational complexity becomes too large. A linear classification machine requires on the order of KN operations (K is the number of classes; see Chapter 2). A quadratic machine needs about KN^2 operations. For a machine acting on binary measurements, the memory requirement is on the order of K·2^N. This, together with the required throughput (the number of classifications per second), the state of the art in computer technology and the available budget, defines an upper bound on N.
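
To make the KN figure concrete, the fragment below sketches a linear classification machine in MATLAB: each of the K discriminant values is an inner product between an N-dimensional weight vector and z, so evaluating all of them costs on the order of KN multiply-adds. The weight matrix W, the offsets b and the values of K and N are placeholders chosen for illustration, not parameters from the book.

    % Minimal sketch of a linear classification machine; W and b stand in for
    % trained parameters and are just random placeholders here.
    K = 3;  N = 50;                   % illustrative numbers of classes and sensors
    W = randn(K, N);                  % one N-dimensional weight vector per class
    b = randn(K, 1);                  % one offset per class
    z = randn(N, 1);                  % one measurement vector to be classified
    g = W*z + b;                      % K discriminant values: about K*N multiply-adds
    [~, omega_hat] = max(g);          % assign z to the class with the largest discriminant
    fprintf('assigned class: %d\n', omega_hat);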

The second reason is that increasing the dimension ultimately causes a decrease in performance. Figure 6.1 illustrates this. Here, we have a measurement space whose dimension N varies between 1 and 13. There are two classes (K = 2) with equal prior probabilities. The (true) minimum error rate E_min is the error rate that would be obtained if all class densities of the problem were fully known. Clearly, the minimum error rate is a non-increasing function of the number of sensors. Once an element ...
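
The behaviour sketched in Figure 6.1 can be reproduced qualitatively with a small simulation. In the fragment below (an illustrative set-up with assumed sample sizes and class separations, not the experiment behind the figure), a nearest-mean classifier is trained on a small training set drawn from two Gaussian classes whose successive measurement elements carry less and less class information. The measured error rate of the trained classifier first decreases and then rises again as elements are added, even though the true minimum error rate cannot increase.

    % Hedged sketch of the peaking effect: two Gaussian classes, nearest-mean
    % classifier, small training set. All numbers are illustrative assumptions.
    % Uses implicit expansion (MATLAB R2016b or later).
    rng(1);
    Nmax = 13;  Ntrain = 10;  Ntest = 1000;  reps = 50;   % samples per class, repetitions
    sep  = 1 ./ (1:Nmax);              % element i separates the class means by 1/i
    err  = zeros(1, Nmax);
    for N = 1:Nmax
        mu = sep(1:N);                 % mean of class 1 (class 2 has zero mean)
        for r = 1:reps
            Xtr1 = randn(Ntrain, N) + mu;   Xtr2 = randn(Ntrain, N);   % training sets
            Xte1 = randn(Ntest,  N) + mu;   Xte2 = randn(Ntest,  N);   % test sets
            m1 = mean(Xtr1, 1);  m2 = mean(Xtr2, 1);    % estimated class means
            w  = m1 - m2;        c  = (m1 + m2) / 2;    % nearest-mean decision boundary
            e1 = mean((Xte1 - c) * w' < 0);             % class 1 samples misclassified
            e2 = mean((Xte2 - c) * w' > 0);             % class 2 samples misclassified
            err(N) = err(N) + (e1 + e2) / (2 * reps);
        end
    end
    plot(1:Nmax, err, 'o-');  xlabel('N');  ylabel('measured error rate');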
