Chapter 12: K-Nearest Neighbors for Classification
K-nearest neighbors (KNN) is a good choice for a classification model when there are not many observations or features and when predicting class membership does not need to be especially fast. It is a lazy learner: fitting is quick because the algorithm does little more than store the training data, but classifying new observations is considerably slower, since each prediction requires computing distances to the stored observations. It can also yield less reliable predictions for observations near the boundaries of the feature space or far from the bulk of the training data, though this can often be improved by adjusting k appropriately. We will consider these choices carefully in the model we develop in this chapter.
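As a minimal sketch of the trade-off around k, the snippet below fits scikit-learn's KNeighborsClassifier for a few candidate values of k and compares cross-validated accuracy. The dataset (iris), the candidate k values, and the pipeline setup are illustrative assumptions, not choices made in this chapter:

```python
# Illustrative sketch: comparing a few values of k with cross-validation.
# The dataset and parameter values here are assumptions for demonstration.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Distances are scale-sensitive, so standardize features before KNN.
# Odd values of k avoid tied votes in two-class problems.
for k in (1, 5, 11):
    knn = make_pipeline(StandardScaler(),
                        KNeighborsClassifier(n_neighbors=k))
    scores = cross_val_score(knn, X_train, y_train, cv=5)
    print(f"k={k}: mean CV accuracy = {scores.mean():.3f}")
```

Note that the `fit` step itself is nearly instantaneous; the cost of KNN is paid at prediction time, which is why cross-validation over many candidate k values can become slow on large datasets.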
KNN is perhaps the most straightforward non-parametric algorithm we could select, making it a good diagnostic tool. No assumptions need to be made about the distribution ...