In some cases, the dimension *N* of a measurement vector **z**, that is, the number of sensors, can be very high. In image processing, when raw image data are used directly as the input for classification, the dimension can easily reach values of 10^{4} (a 100 × 100 image) or more. Many elements of **z** can be redundant, or even irrelevant, with respect to the classification process.

For two reasons, the dimension of the measurement vector cannot be made arbitrarily large. The first reason is that the computational complexity becomes too large. A linear classification machine requires on the order of *KN* operations (*K* is the number of classes; see Chapter 4). A quadratic machine needs about *KN*^{2} operations. For a machine acting on binary measurements, the memory requirement is on the order of *K*2^{*N*}. This, together with the required throughput (number of classifications per second), the state of the art in computer technology and the available budget, defines an upper bound on the dimension *N*.
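As a rough illustration of these growth rates (the numbers below are hypothetical, not taken from the text), the operation and memory counts can be tabulated for a few dimensions:

```python
# Order-of-magnitude cost estimates for the classifier types discussed
# above; these are rough counts, not exact instruction counts.

def linear_ops(K, N):
    """Linear machine: ~K*N operations per classification."""
    return K * N

def quadratic_ops(K, N):
    """Quadratic machine: ~K*N^2 operations per classification."""
    return K * N ** 2

def binary_table_memory(K, N):
    """Table-lookup machine on N binary measurements: ~K*2^N stored values."""
    return K * 2 ** N

K = 2  # two classes
for N in (10, 100, 10_000):  # 10_000 ~ a 100 x 100 image
    print(f"N={N:>6}: linear ~{linear_ops(K, N):.1e} ops, "
          f"quadratic ~{quadratic_ops(K, N):.1e} ops")

# For binary measurements, memory explodes even at modest N:
print(f"N=30 binary: ~{binary_table_memory(K, 30):.1e} table entries")
```

The exponential term dominates quickly: at *N* = 30 a binary table-lookup machine already needs on the order of 10^{9} entries, while the linear machine at *N* = 10^{4} needs only about 2 × 10^{4} operations.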

A second reason is that an increase of the dimension ultimately causes a decrease in performance. Figure 7.1 illustrates this. Here we have a measurement space with the dimension *N* varying between 1 and 13. There are two classes (*K* = 2) with equal prior probabilities. The (true) minimum error rate *E*_{min} is the one that would be obtained if all class densities of the problem were fully known. Clearly, the minimum error rate is a non-increasing function of the number of sensors. Once an ...
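The performance decrease with growing dimension arises when class densities must be estimated from a finite training set rather than being fully known. A minimal sketch of this effect, assuming two Gaussian classes whose means differ only in the first dimension and a nearest-mean classifier trained on a small sample (all parameters here are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_error(n_dim, n_train=10, n_test=2000, sep=1.5):
    """Error of a nearest-mean classifier whose class means are estimated
    from n_train samples. Only dimension 0 is informative; the remaining
    n_dim - 1 dimensions are pure noise."""
    mu0 = np.zeros(n_dim)
    mu1 = np.zeros(n_dim)
    mu1[0] = sep
    # Mean estimates from a small training set are noisy in every dimension.
    est0 = rng.normal(mu0, 1.0, (n_train, n_dim)).mean(axis=0)
    est1 = rng.normal(mu1, 1.0, (n_train, n_dim)).mean(axis=0)
    # Independent test sets for both classes.
    x0 = rng.normal(mu0, 1.0, (n_test, n_dim))
    x1 = rng.normal(mu1, 1.0, (n_test, n_dim))
    # Assign each test point to the nearer estimated mean.
    pred0 = np.linalg.norm(x0 - est1, axis=1) < np.linalg.norm(x0 - est0, axis=1)
    pred1 = np.linalg.norm(x1 - est1, axis=1) < np.linalg.norm(x1 - est0, axis=1)
    # Average of the two misclassification rates (equal priors).
    return (pred0.mean() + (1 - pred1.mean())) / 2

for n in (1, 5, 20, 50):
    print(f"N={n:>2}: empirical error ~{empirical_error(n):.3f}")
```

Because only the first dimension carries class information, the true minimum error rate is the same for every *N* here; any rise of the empirical error with growing *N* is purely an estimation effect, in line with the behaviour the text describes.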
