The mechanics of Naïve Bayes

Let's start by understanding the magic behind the algorithm: how Naïve Bayes works. Given a data sample x with n features, x1, x2, …, xn (x represents a feature vector and x = (x1, x2, …, xn)), the goal of Naïve Bayes is to determine the probabilities that this sample belongs to each of K possible classes y1, y2, …, yK, that is, P(yk | x) or P(yk | x1, x2, …, xn), where k = 1, 2, …, K. It looks no different from what we have just dealt with: x, or x1, x2, …, xn, is a joint event in which the sample has feature values x1, x2, …, xn, respectively, and yk is the event that the sample belongs to class k. We can apply Bayes' theorem right away:

P(yk | x) = P(x | yk) P(yk) / P(x)

Let's look at each component in detail:

  • P(yk) portrays how the classes are distributed, with no further knowledge of the observation's features. It is therefore called the prior in Bayesian terms; it can be predetermined (often uniform, giving each class an equal chance) or estimated from the training samples.
  • P(yk | x), in contrast, is the posterior: the probability of class yk given the extra knowledge of the observation x.
  • P(x | yk), or P(x1, x2, …, xn | yk), is the joint distribution of the n features given that the sample belongs to class yk, that is, the likelihood of the features taking such values. This is where the "naïve" part of the name comes in: the features are assumed to be conditionally independent, so P(x | yk) = P(x1 | yk) P(x2 | yk) … P(xn | yk).
  • P(x) is the evidence, which depends only on the overall distribution of the features. Since it does not vary with the class, it acts as a normalization constant.
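To make these pieces concrete, here is a minimal from-scratch sketch in Python. It is an illustration under stated assumptions rather than the book's implementation: the function names train_nb and predict_proba and the toy weather-style dataset are made up for this example. It estimates the priors P(yk) and Laplace-smoothed likelihoods P(xi | yk) from categorical training data, multiplies them under the naïve independence assumption, and normalizes so the posteriors sum to one.

```python
import numpy as np


def train_nb(X, y, smoothing=1.0):
    """Estimate the prior P(yk) and per-feature likelihoods P(xi | yk)
    from categorical data, with Laplace smoothing to avoid zero counts."""
    X, y = np.asarray(X), np.asarray(y)
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}   # P(yk)
    likelihoods = {}    # likelihoods[c][i][v] = P(xi = v | yk = c)
    for c in classes:
        X_c = X[y == c]
        per_feature = []
        for i in range(X.shape[1]):
            values = np.unique(X[:, i])  # every value feature i takes in training
            denom = len(X_c) + smoothing * len(values)
            per_feature.append(
                {v: (np.sum(X_c[:, i] == v) + smoothing) / denom for v in values}
            )
        likelihoods[c] = per_feature
    return priors, likelihoods


def predict_proba(x, priors, likelihoods):
    """P(yk | x) is proportional to P(yk) * prod_i P(xi | yk) under the
    naive independence assumption; dividing by the total normalizes out P(x)."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for i, v in enumerate(x):
            score *= likelihoods[c][i][v]
        scores[c] = score
    evidence = sum(scores.values())   # P(x), the normalization constant
    return {c: s / evidence for c, s in scores.items()}


# Hypothetical toy data: two categorical features per sample
X_train = [['sunny', 'hot'], ['rainy', 'cool'], ['sunny', 'cool'], ['rainy', 'hot']]
y_train = ['yes', 'no', 'yes', 'no']

priors, likelihoods = train_nb(X_train, y_train)
print(predict_proba(['sunny', 'cool'], priors, likelihoods))
```

On this toy set the sample ['sunny', 'cool'] gets a posterior of 0.75 for 'yes' and 0.25 for 'no'. Notice that P(x) never has to be modeled directly: because it is the same for every class, it falls out of the final normalization step.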
