There are two key hyperparameters for kNN classifiers. One is obvious—k, or how many neighbors to use when making predictions. The other is less obvious—how do we define the distance between two points?
Euclidean distance is not our only option, and in some situations, such as text classification, we may want to use a different metric. We can use domain knowledge, that is, our understanding of the underlying problem we're trying to solve, to answer this question. We can also use cross-validation.
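To make both hyperparameters concrete, here is a minimal, stdlib-only sketch of a kNN classifier where k and the distance function are both pluggable. The function names (`knn_predict`, `euclidean`, `manhattan`) and the toy dataset are illustrative, not from any particular library.

```python
import math
from collections import Counter

def euclidean(a, b):
    # Straight-line distance between two points.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Sum of absolute coordinate differences; an alternative metric.
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(X_train, y_train, x, k=3, dist=euclidean):
    # Rank training points by distance to x, then vote among the k nearest.
    nearest = sorted(range(len(X_train)), key=lambda i: dist(X_train[i], x))[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy dataset: two well-separated clusters.
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["a", "a", "a", "b", "b", "b"]

print(knn_predict(X, y, (0.5, 0.5), k=3))                   # → a
print(knn_predict(X, y, (5.5, 5.5), k=3, dist=manhattan))   # → b
```

Swapping `dist=manhattan` for `dist=euclidean` changes the metric without touching the rest of the code; in practice you would compare such choices (and values of k) with cross-validation rather than picking them by hand.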
With kNN, there are three problems to watch out for, as follows:
- The first is that you need to store the entire training dataset in order to make predictions, which makes the algorithm memory-expensive.
- This algorithm also uses a ...