2.6. The Nearest Neighbor Rule
A variation of the kNN density estimation technique yields a nonlinear classifier that is suboptimal yet popular in practice. Although it does not fall within the Bayesian framework, it fits nicely at this point; in a way, this section can be considered a bridge to Chapter 4. The algorithm for the so-called nearest neighbor rule is summarized as follows. Given an unknown feature vector x and a distance measure:
- Out of the N training vectors, identify the k nearest neighbors, regardless of class label. k is chosen to be odd for a two-class problem and, in general, not to be a multiple of the number of classes M.
- Out of these k samples, identify the number of vectors, ki, that belong to class ωi, i = 1, 2, ..., M.
- Assign x to the class ωi with the maximum number ki of samples among the k neighbors.
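The steps above can be sketched in code. The following is a minimal illustration, not the book's own implementation; it assumes the training set is given as (vector, label) pairs and uses the Euclidean distance as the distance measure:

```python
from collections import Counter
import math

def knn_classify(x, training_data, k):
    """Classify feature vector x by the k-nearest-neighbor rule.

    training_data: list of (vector, label) pairs (assumed format).
    Uses the Euclidean distance as the distance measure.
    """
    # Rank all N training vectors by distance to x, regardless of class label,
    # and keep the k nearest neighbors.
    neighbors = sorted(
        training_data,
        key=lambda pair: math.dist(x, pair[0]),
    )[:k]
    # Count how many of the k neighbors fall in each class (the k_i's).
    counts = Counter(label for _, label in neighbors)
    # Assign x to the class with the maximum k_i.
    return counts.most_common(1)[0][0]
```

For k = 1 this reduces to the plain nearest neighbor rule: x is simply assigned the label of its single closest training vector.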