KNN is both interpretable and fast for small and medium datasets (for large ones, there is a scalable modification: approximate KNN). It also has an important property: like k-means, it works on distances, and therefore captures interactions between features, which many other algorithms cannot do.
The logic behind KNN is very simple: for each record it predicts, it finds the k records in the training set that are nearest to it (the neighbors, hence the name), i.e., the most similar ones in the feature space, and infers the prediction from them. The algorithm can be used both for classification (where the most frequent class among the neighbors is taken) and for regression (where the prediction is calculated as a weighted average of the neighbors' values). ...
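As a minimal sketch of both modes, the snippet below uses scikit-learn's `KNeighborsClassifier` and `KNeighborsRegressor` (an assumed library choice for illustration; the synthetic datasets and all parameter values here are likewise illustrative, not prescribed by the text):

```python
# Sketch: KNN for classification (majority vote among neighbors)
# and regression (distance-weighted average of neighbors' values).
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

# Classification: predict the most frequent class among the k nearest neighbors.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5)  # k = 5 neighbors
clf.fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Regression: average the neighbors' target values, weighting closer
# neighbors more heavily (weights="distance").
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
reg = KNeighborsRegressor(n_neighbors=5, weights="distance")
reg.fit(X_train, y_train)
print("regression R^2:", reg.score(X_test, y_test))
```

Since the prediction depends entirely on distances, feature scaling (e.g. standardization) usually matters in practice: a feature with a large numeric range would otherwise dominate the neighbor search.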