Summary
In this chapter, we covered techniques and algorithms for dimensionality reduction. We learned how data points in a high-dimensional space can be projected into a low-dimensional space to make the machine learning process more efficient and accurate. One widely used approach is PCA, an algorithm designed to maximize the variance of the data in the projected space. Due to its simplicity and efficiency, it is the most popular dimensionality reduction algorithm.
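To make the idea concrete, here is a minimal PCA sketch using NumPy. It is not the chapter's implementation, just an illustration of the principle: center the data, find the directions of maximum variance via singular value decomposition, and project onto the top few directions.

```python
import numpy as np

def pca(X, n_components):
    """Minimal PCA sketch: project X onto its directions of maximum variance."""
    # Center the data so the principal axes pass through the mean
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions,
    # ordered by decreasing singular value (i.e. decreasing variance)
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    # Keep the top n_components directions and project the data onto them
    components = Vt[:n_components]
    return X_centered @ components.T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 points in a 5-dimensional space
X_reduced = pca(X, 2)           # the same points in a 2-dimensional space
print(X_reduced.shape)          # (100, 2)
```

Because the singular values are sorted in decreasing order, the first projected coordinate always carries at least as much variance as the second, which is exactly the property PCA is designed to maximize.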
Another technique that we looked at in this chapter was word embedding, which allows us to map discrete values into vectors of real numbers. The pattern that's projected by embedding is similar to the context machine ...
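The core mapping behind word embeddings can be sketched in a few lines. The vocabulary and vectors below are illustrative placeholders; in practice the embedding matrix is learned from data (for example by word2vec or GloVe), not randomly initialized and left fixed.

```python
import numpy as np

# Illustrative vocabulary: each discrete token gets an integer index
vocab = {"king": 0, "queen": 1, "apple": 2}

# Embedding matrix: one dense real-valued vector per token.
# Random here for illustration; a real model learns these values.
embedding_dim = 4
rng = np.random.default_rng(1)
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def embed(word):
    """Map a discrete token to its real-valued embedding vector."""
    return embeddings[vocab[word]]

print(embed("king").shape)  # (4,)
```

The lookup is just a row selection, which is why embeddings can replace sparse one-hot encodings with compact, dense vectors that a model can compare and combine.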