March 2020 · Beginner to intermediate · 342 pages · English
What’s not to love about perceptrons? They’re simple, and, like construction bricks, they can be assembled into larger machine learning structures. However, that simplicity comes with a distressing limitation: perceptrons work well on some datasets and fail badly on others. More specifically, perceptrons are a good fit for linearly separable data. Let’s see what “linearly separable” means, and why it matters.
Take a look at this two-dimensional dataset:

The two classes in the data—green triangles and blue squares—are neatly arranged into distinct clusters. You could even separate them with a line, ...
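To make the idea concrete, here is a minimal sketch (not the book’s code) of a perceptron trained on a tiny, hand-made linearly separable dataset. The points, labels, and learning rate are all illustrative assumptions; the point is that on data like this, the classic perceptron learning rule is guaranteed to find a separating line.

```python
# Hand-made, linearly separable 2D dataset (illustrative, not from the book):
# roughly, points below the line x + y = 2 are class 0 ("green triangles"),
# points above it are class 1 ("blue squares").
data = [
    ((0.5, 0.5), 0), ((1.0, 0.3), 0), ((0.2, 1.2), 0), ((1.2, 0.8), 0),
    ((2.5, 2.0), 1), ((3.0, 1.5), 1), ((1.8, 2.6), 1), ((2.2, 2.4), 1),
]

def predict(weights, bias, point):
    """Step activation: fire (1) if the weighted sum crosses the threshold."""
    total = sum(w * x for w, x in zip(weights, point)) + bias
    return 1 if total > 0 else 0

def train(data, epochs=100, lr=0.1):
    """Perceptron learning rule: nudge the weights toward each misclassified point."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for point, label in data:
            error = label - predict(weights, bias, point)
            if error != 0:
                weights = [w + lr * error * x for w, x in zip(weights, point)]
                bias += lr * error
    return weights, bias

weights, bias = train(data)
# Because the two classes can be split by a straight line, the perceptron
# convergence theorem guarantees training eventually classifies every point.
print(all(predict(weights, bias, p) == lbl for p, lbl in data))  # True
```

On a dataset that is *not* linearly separable, the same loop would keep adjusting the weights forever without ever getting every point right, which is exactly the limitation this chapter is about.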