Chapter 7. Other Advanced Methods
In this chapter, we show off a miscellany of machine learning models available in R. Even though the main algorithms we've covered thus far account for the majority of models used in practice, I wanted to include this chapter to provide a more comprehensive view of the machine learning ecosystem in R.
We cover classification again, but through the lens of Bayesian statistics. This is a popular field of statistics, and it provides a natural transition to other algorithms built on similar logic. We also cover principal component analysis, support vector machines, and the k-nearest neighbors algorithm.
Naive Bayes Classification
One way to do classification with probabilities is through the use of Bayesian statistics. Although this field can have a rather steep learning curve, essentially we are trying to answer the question, "Based on the features we have, what is the probability that the outcome is class X?" A naive Bayes classifier answers this question with a rather bold assumption: all of the predictors we have are independent of one another. The advantage of this assumption is that it drastically reduces the complexity of the calculations we need to perform.
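To make the independence assumption concrete, here is a minimal, hand-rolled sketch of the naive Bayes idea in base R. The toy weather data and the variable names are my own illustration, not from the chapter; in practice you would use a full implementation such as naiveBayes() from the e1071 package. Under the independence assumption, P(class | features) is proportional to P(class) times the product of P(feature | class) over all features.

```r
# Toy training data (hypothetical): weather features and a play/no-play outcome
train <- data.frame(
  outlook = c("sunny", "sunny", "rain", "rain", "overcast", "overcast"),
  windy   = c("yes", "no", "no", "yes", "no", "yes"),
  play    = c("no", "yes", "no", "yes", "no", "yes")
)

# Class priors: P(class)
prior <- prop.table(table(train$play))

# Conditional probability tables: P(feature value | class), one per feature
p_outlook <- prop.table(table(train$play, train$outlook), margin = 1)
p_windy   <- prop.table(table(train$play, train$windy), margin = 1)

# Score a new observation (outlook = "sunny", windy = "no") by multiplying
# the prior with each feature's conditional probability -- this product form
# is exactly what the independence assumption buys us
scores <- prior * p_outlook[, "sunny"] * p_windy[, "no"]

# Normalize so the class scores sum to 1, giving posterior probabilities
posterior <- scores / sum(scores)
posterior  # → no ≈ 0.667, yes ≈ 0.333
```

Note that each feature contributes one simple lookup table; without the independence assumption we would need a joint table over every combination of feature values, which grows exponentially with the number of features.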
Bayesian Statistics in a Nutshell
Bayesian statistics relies heavily on multiplying probabilities. Let's do a quick primer on this so you're up to speed. Suppose that I ride my bike in 100 races and I win 54 of them (if only!). The probability of me winning a race, therefore, is just the number of times I've won divided ...
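The primer's opening computation is easy to express in R: the probability of an event is just the count of occurrences divided by the number of trials, and the probabilities of independent events multiply. The two-race extension below is my own illustration of the multiplication rule, not taken from the text.

```r
# Probability of winning a single race: wins divided by total races
races <- 100
wins  <- 54
p_win <- wins / races
p_win        # 0.54

# Multiplication of probabilities: if race outcomes were independent,
# the probability of winning two races in a row would be the product
p_win * p_win  # 0.2916
```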