July 2017
Intermediate to advanced
360 pages
8h 26m
English
We're going to discuss kernel methods in Chapter 7, Support Vector Machines; however, it's useful to mention the class KernelPCA here, which performs PCA on non-linearly separable datasets. To understand the logic of this approach (the mathematical formulation isn't trivial), it helps to imagine projecting each sample into a particular space where the dataset becomes linearly separable. The components of that space correspond to the first, second, ... principal components, and a kernel PCA algorithm therefore computes the projection of our samples onto each of them.
Let's consider a dataset made up of a circle with a blob inside:
from sklearn.datasets import make_circles

>>> Xb, Yb = make_circles(n_samples=500, ...
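Since the snippet above is truncated, here is a minimal self-contained sketch of the idea: the `factor`, `noise`, and `gamma` values are assumptions chosen to produce a clearly non-linearly separable dataset, not necessarily the book's exact parameters.

```python
# Sketch: kernel PCA on concentric circles (parameter values are assumptions).
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# A circle with a blob inside: 'factor' sets the inner/outer radius ratio.
Xb, Yb = make_circles(n_samples=500, factor=0.1, noise=0.05, random_state=1000)

# A linear PCA cannot separate concentric rings; an RBF kernel PCA projects
# the samples into a space where they become linearly separable.
kpca = KernelPCA(n_components=2, kernel='rbf', gamma=1.0,
                 fit_inverse_transform=True)
X_kpca = kpca.fit_transform(Xb)

print(X_kpca.shape)
```

With an RBF kernel, the first component essentially encodes the distance from the center, so the blob and the ring end up in separable regions of the projected space.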