6.7. Nonlinear Dimensionality Reduction

All the techniques that have been discussed so far in this chapter, as well as LDA in the previous chapter, share a common goal: dimensionality reduction. In other words, given a high-dimensional data set $X = \{x_1, x_2, \ldots, x_n\} \subset \mathbb{R}^N$ of input patterns,[4] the goal is to compute $n$ corresponding patterns, $Y = \{y_1, y_2, \ldots, y_n\} \subset \mathbb{R}^m$, $m < N$, that provide an “informative” representation of the input patterns. The word “informative” is interpreted differently by different methods; for example, PCA and ICA adopt different views on the issue. Another common characteristic of all the previous methods is that they respect linearity: once a transformation matrix has been computed, the points in $Y$ are, for each method, obtained from the corresponding points in $X$ through a linear transformation, $y_i = A^T x_i$.
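As a minimal sketch (not from the book), the following NumPy snippet makes this shared linear structure concrete using PCA: each reduced pattern is computed as $y_i = A^T(x_i - \mu)$, where the columns of $A$ are the top-$m$ eigenvectors of the sample covariance matrix. The dimensions and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Linear dimensionality reduction via PCA: every reduced pattern y_i
# is a *linear* function of its input x_i, y_i = A^T (x_i - mu),
# which is the trait shared by all the methods discussed so far.

rng = np.random.default_rng(0)
n, N, m = 200, 5, 2                        # n patterns in R^N, reduced to R^m

# Synthetic, correlated input patterns (rows of X are the x_i)
X = rng.normal(size=(n, N)) @ rng.normal(size=(N, N))

mu = X.mean(axis=0)
C = np.cov(X - mu, rowvar=False)           # N x N sample covariance
eigvals, eigvecs = np.linalg.eigh(C)       # eigenvalues in ascending order
A = eigvecs[:, ::-1][:, :m]                # top-m principal directions

Y = (X - mu) @ A                           # n x m reduced representation
print(Y.shape)                             # (200, 2)
```

The nonlinear techniques that are the subject of this section relax exactly this constraint: the map from $x_i$ to $y_i$ is no longer restricted to a single global linear transform.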
