Chapter 18. Naive Bayes
18.0 Introduction
Bayes’ theorem is the premier method for understanding the probability of some event, $P(A \mid B)$, given some new information, $P(B \mid A)$, and a prior belief in the probability of the event, $P(A)$:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$
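As a quick sanity check of the formula, here is a minimal numeric sketch; the probabilities are made up purely for illustration:

```python
# Made-up inputs to Bayes' theorem
p_a = 0.01          # P(A): prior belief in event A
p_b_given_a = 0.9   # P(B | A): probability of the new information given A
p_b = 0.05          # P(B): marginal probability of the new information

# Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)  # 0.18
```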
The Bayesian method’s popularity has skyrocketed in the last decade, increasingly rivaling traditional frequentist applications in academia, government, and business. In machine learning, one application of Bayes’ theorem to classification comes in the form of the naive Bayes classifier. Naive Bayes classifiers combine a number of desirable qualities in practical machine learning into a single classifier. These include:
- An intuitive approach
- The ability to work with small data
- Low computation costs for training and prediction
- Often solid results in a variety of settings
Specifically, a naive Bayes classifier is based on:

$$P(y \mid x_1, \ldots, x_j) = \frac{P(x_1, \ldots, x_j \mid y)\,P(y)}{P(x_1, \ldots, x_j)}$$

where:

- $P(y \mid x_1, \ldots, x_j)$ is called the posterior and is the probability that an observation is class $y$ given the observation’s values for the $j$ features, $x_1, \ldots, x_j$.
- $P(x_1, \ldots, x_j \mid y)$ is called the likelihood and is the likelihood of an observation’s values for the features, $x_1, \ldots, x_j$, given their class, $y$.
- $P(y)$ is called the prior and is our belief in the probability of class $y$ before looking at the data.
- $P(x_1, \ldots, x_j)$ is called the marginal probability.
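To make these terms concrete, here is a minimal sketch that estimates each one by hand for a single-feature, two-class problem; the data are invented and the Gaussian form of the likelihood is an assumption chosen for illustration:

```python
import numpy as np
from scipy.stats import norm

# Invented training data: one feature, two classes (0 and 1)
X = np.array([1.0, 1.2, 0.9, 3.1, 2.9, 3.3])
y = np.array([0, 0, 0, 1, 1, 1])

new_x = 2.8  # new observation to classify

posterior_numerators = []
for c in [0, 1]:
    prior = np.mean(y == c)                        # P(y): class frequency
    mu, sigma = X[y == c].mean(), X[y == c].std()  # Gaussian fit per class
    likelihood = norm.pdf(new_x, mu, sigma)        # P(x | y)
    posterior_numerators.append(likelihood * prior)

# The marginal probability P(x) is identical for every class, so comparing
# the posterior numerators is enough to pick the most probable class
print(np.argmax(posterior_numerators))  # 1
```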
In naive Bayes, we make the “naive” assumption that the features are independent of one another conditional on the class. This lets the likelihood factor into a product of simple one-dimensional likelihoods, $P(x_1, \ldots, x_j \mid y) = \prod_{i=1}^{j} P(x_i \mid y)$, each of which is easy to estimate from training data. And because the marginal probability is the same for every class, we predict the class whose posterior numerator, $P(y) \prod_{i} P(x_i \mid y)$, is greatest.
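Putting the pieces together, here is a minimal end-to-end sketch using scikit-learn; GaussianNB (which assumes a Gaussian likelihood for each feature) and the iris dataset are chosen purely for illustration:

```python
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB

# Load the iris data as an example dataset
features, target = datasets.load_iris(return_X_y=True)

# Train a Gaussian naive Bayes classifier
classifier = GaussianNB()
model = classifier.fit(features, target)

# Predict the class with the greatest posterior for a new observation
new_observation = [[4.0, 4.0, 4.0, 0.4]]
print(model.predict(new_observation))
```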