6.7.2. Graph-Based Methods
Laplacian eigenmaps
The starting point of this method is the assumption that the data points lie on a smooth manifold (hypersurface) M ⊃ X, whose intrinsic dimension is m < N and which is embedded in ℝ^N; that is, M ⊂ ℝ^N. The dimension m is given as a parameter by the user. In contrast, this is not required in kernel PCA, where m is the number of dominant components and, in practice, is determined so that the gap between λ_m and λ_{m+1} is “large.”
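The following is a minimal sketch, not taken from the book, of the eigenvalue-gap criterion mentioned above for kernel PCA: the kernel matrix is centered, its eigenvalues are sorted in decreasing order, and m is taken where the gap λ_m − λ_{m+1} is largest. The Gaussian kernel and the parameter gamma are illustrative assumptions.

```python
# Sketch: choosing m in kernel PCA via the largest eigenvalue gap.
import numpy as np

def kernel_pca_gap(X, gamma=1.0):
    """Return the sorted kernel eigenvalues and the m with the largest gap."""
    # RBF (Gaussian) kernel matrix
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * sq)
    # Center the kernel matrix in feature space
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    # Eigenvalues in decreasing order
    lam = np.sort(np.linalg.eigvalsh(Kc))[::-1]
    # m chosen where lambda_m - lambda_{m+1} is largest
    m = int(np.argmax(lam[:-1] - lam[1:])) + 1
    return lam, m
```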
The main philosophy behind the method is to compute a low-dimensional representation of the data so that local neighborhood information in X ⊂ M is optimally preserved. In this way, one attempts to obtain a solution that reflects the geometric structure of the manifold.
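To make the idea of preserving local neighborhoods concrete, the sketch below (again, an illustration rather than the book's implementation) follows the standard Laplacian eigenmaps recipe: build a k-nearest-neighbor graph on the data, weight its edges with a heat kernel, form the graph Laplacian L = D − W, and solve the generalized eigenvalue problem L y = λ D y. The m eigenvectors after the trivial constant one give the low-dimensional coordinates. The neighborhood size k and the kernel width σ are user-chosen parameters assumed here for illustration.

```python
# Sketch: Laplacian eigenmaps via a k-NN graph and the generalized
# eigenproblem L y = lambda D y.
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, m=2, k=10, sigma=1.0):
    n = X.shape[0]
    # Pairwise squared Euclidean distances
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    # k-nearest-neighbor adjacency with heat-kernel weights
    idx = np.argsort(sq, axis=1)[:, 1:k + 1]   # skip each point itself
    rows = np.repeat(np.arange(n), k)
    W = np.zeros((n, n))
    W[rows, idx.ravel()] = np.exp(-sq[rows, idx.ravel()] / (2 * sigma**2))
    W = np.maximum(W, W.T)                     # keep the graph undirected
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # unnormalized graph Laplacian
    # Generalized symmetric eigenproblem; eigenvalues returned in ascending order
    vals, vecs = eigh(L, D)
    # Discard the trivial constant eigenvector (eigenvalue 0)
    return vecs[:, 1:m + 1]                    # n x m embedding
```

In this formulation, points that are close in X are forced to stay close in the embedding, because the objective penalizes large weighted distances between graph neighbors.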