Variance maximization

Let's assume that the data consists of N points represented as a set of D-dimensional vectors, {x_n}, n = 1, ..., N. We are going to project the data onto an M-dimensional space, where M < D and M is given in advance as a hyperparameter. We will start by finding the first principal component.
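
To make this precise, it helps to write down the quantity we will maximize (a standard restatement of the objective; the sample mean \bar{x} = (1/N) \sum_n x_n is introduced here for the sketch and is not defined in the excerpt above). Projecting a point x_n onto a unit-length direction u_1 gives the scalar u_1^T x_n, and the variance of the projected data is

    (1/N) \sum_{n=1}^{N} (u_1^T x_n - u_1^T \bar{x})^2 = u_1^T S u_1,
    where S = (1/N) \sum_{n=1}^{N} (x_n - \bar{x})(x_n - \bar{x})^T

is the D x D sample covariance matrix of the data.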

The first principal component is the direction along which the data varies the most, that is, the direction that maximizes the variance of the projected data. After we find it, we can pick the second principal component: the direction, orthogonal to the first, that captures the next-largest amount of variance. Together they define a linear projection onto the two-dimensional space spanned by the first and second principal components. Finding an arbitrary number M of principal components proceeds in the same incremental way: each new direction maximizes the remaining variance while staying orthogonal to the directions already chosen.
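
The following sketch shows one way to find the first principal component with TensorFlow.js, by building the sample covariance matrix and running power iteration on it; power iteration converges to the eigenvector with the largest eigenvalue, which is exactly the direction of maximum projected variance. The function name firstPrincipalComponent, the iteration count, and the sample data are illustrative assumptions, not code from this book.

const tf = require('@tensorflow/tfjs');

// Find the first principal component of an [N, D] data tensor X.
// Illustrative sketch: the name and iteration count are assumptions.
function firstPrincipalComponent(X, numIters = 100) {
  const n = X.shape[0];

  // Center the data by subtracting the per-dimension mean.
  const centered = X.sub(X.mean(0));

  // Sample covariance matrix S = Xc^T Xc / N, shape [D, D].
  const cov = tf.matMul(centered, centered, true, false).div(n);

  // Power iteration: repeatedly applying S to a random unit vector
  // converges to the eigenvector with the largest eigenvalue,
  // i.e. the direction of maximum projected variance.
  let v = tf.randomNormal([X.shape[1], 1]);
  for (let i = 0; i < numIters; i++) {
    const w = tf.matMul(cov, v);
    v = w.div(w.norm());
  }

  // Variance captured along this direction: v^T S v.
  const variance = tf.matMul(tf.matMul(v, cov, true, false), v);
  return { direction: v, variance };
}

// Usage: project a small, made-up dataset onto its first principal component.
const X = tf.tensor2d([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]]);
const { direction, variance } = firstPrincipalComponent(X);
const projected = tf.matMul(X.sub(X.mean(0)), direction); // shape [N, 1]
direction.print();
variance.print();
projected.print();

The same idea extends to later components by repeating the search in the subspace orthogonal to the directions already found, although a practical implementation would typically compute an eigendecomposition or SVD of the covariance matrix instead of iterating one direction at a time.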
