k-nearest neighbors
KNN is both interpretable and fast for small and medium datasets (for large ones, there is a scalable modification: approximate KNN). It also has an important property: like k-means, it works on distances, and therefore captures interactions between features, which many other algorithms cannot do.
The logic behind KNN is very simple. For each record it predicts, it finds the k records in the training set (the neighbors, hence the name) that are most similar to it, that is, closest in the feature space, and derives a prediction from them. The algorithm can be used either for classification (in which case the most frequent class among the neighbors is taken) or for regression (where the prediction is calculated as a weighted average of the neighbors' values).
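The classification variant described above can be sketched in a few lines of plain Python. This is a minimal illustration, not a production implementation: it assumes numeric features, uses Euclidean distance, and breaks ties by whichever class `Counter` happens to return first. The function name `knn_predict` and the toy dataset are invented for the example.

```python
from collections import Counter
import math

def knn_predict(X_train, y_train, x, k=3):
    # Distance from the query point x to every training record
    # (plain Euclidean distance here).
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(X_train, y_train)
    )
    # Keep the k nearest neighbors and vote for the most frequent class.
    neighbors = [yi for _, yi in dists[:k]]
    return Counter(neighbors).most_common(1)[0][0]

# Toy dataset: two well-separated clusters of 2-D points.
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["a", "a", "a", "b", "b", "b"]

print(knn_predict(X, y, (0.5, 0.5)))  # → a
print(knn_predict(X, y, (5.5, 5.5)))  # → b
```

For regression, the last two lines of the function would instead return an average of the neighbors' target values, optionally weighted by inverse distance. In practice you would reach for a library implementation (for example, scikit-learn's `KNeighborsClassifier`), which handles tie-breaking, distance weighting, and fast neighbor search.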