May 2017 · Beginner to intermediate · 254 pages · 6h 24m · English
We start by understanding the magic behind the algorithm: how naive Bayes works. Given a data sample x with n features x1, x2, ..., xn (x represents a feature vector, x = (x1, x2, ..., xn)), the goal of naive Bayes is to determine the probabilities that this sample belongs to each of K possible classes y1, y2, ..., yK, that is P(yk | x), or P(yk | x1, x2, ..., xn), where k = 1, 2, ..., K. This looks no different from what we have just dealt with: x, or x1, x2, ..., xn, is a joint event in which the sample has features with values x1, x2, ..., xn respectively, ...
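The goal described above can be sketched in a few lines of code. This is a minimal illustration, not the book's implementation: the class labels, priors P(yk), and per-feature likelihoods P(xi | yk) below are hypothetical numbers chosen only to show how the posterior P(yk | x) is assembled under the naive (conditional independence) assumption.

```python
# Hypothetical priors P(yk) for two classes (illustrative values only).
priors = {"spam": 0.4, "ham": 0.6}

# Hypothetical likelihoods P(xi | yk) for two observed feature values.
likelihoods = {
    "spam": [0.8, 0.3],
    "ham":  [0.1, 0.5],
}

def posterior(priors, likelihoods):
    """Return P(yk | x1, ..., xn) for each class yk.

    Assumes conditional independence of features given the class,
    so P(x | yk) is the product of the per-feature P(xi | yk).
    """
    unnormalized = {}
    for label, prior in priors.items():
        joint = prior
        for p in likelihoods[label]:
            joint *= p  # P(yk) * P(x1 | yk) * ... * P(xn | yk)
        unnormalized[label] = joint
    # Normalize so the posteriors over all K classes sum to 1.
    total = sum(unnormalized.values())
    return {label: v / total for label, v in unnormalized.items()}

probs = posterior(priors, likelihoods)
```

With these illustrative numbers the sample is assigned to the class with the largest posterior, which is exactly the quantity P(yk | x) the text defines.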