Chapter 4

Naïve Bayesian Classification

4.1 Introduction

Naïve Bayesian classifiers [1] are simple probabilistic classifiers founded on the application of Bayes’ theorem with strong (naïve) independence assumptions among the features. Bayes’ theorem is stated mathematically by the following equation [2]:

P(A|B) = P(A) P(B|A) / P(B)

where:

A and B are events

P(A) and P(B) are the prior probabilities of A and B without regard to each other

P(A|B), also called the posterior probability, is the probability of observing event A given that B is true

P(B|A), also called the likelihood, is the probability of observing event B given that A is true
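As a quick illustration of the arithmetic, the short sketch below applies Bayes’ theorem to a spam-filtering scenario. The specific probabilities (prior, likelihood, and evidence) are made-up values chosen for the example, not figures from the text.

```python
# Hypothetical numbers: suppose 1% of emails are spam (P(A)), 80% of spam
# emails contain the word "offer" (P(B|A)), and 5% of all emails contain
# the word "offer" (P(B)).

p_a = 0.01          # P(A): prior probability that an email is spam
p_b_given_a = 0.80  # P(B|A): likelihood of "offer" appearing in a spam email
p_b = 0.05          # P(B): overall probability of "offer" appearing

# Bayes' theorem: P(A|B) = P(A) * P(B|A) / P(B)
p_a_given_b = p_a * p_b_given_a / p_b

print(f"P(spam | 'offer') = {p_a_given_b:.2f}")  # prints 0.16
```

With these numbers, observing the word “offer” raises the probability that the email is spam from the 1% prior to a 16% posterior.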

Suppose that vector X = (x1, x2, … xn) is an instance (with n independent features) to be classified ...
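The naïve independence assumption means the class-conditional probability of the instance factorizes as P(X|C) = P(x1|C) · P(x2|C) · … · P(xn|C), so the classifier picks the class C that maximizes P(C) · ∏ P(xi|C). The following minimal sketch illustrates this factorization for binary features; the class priors and per-feature likelihoods are hypothetical values used only for demonstration.

```python
from math import prod

# Hypothetical class priors P(C) and per-feature likelihoods P(x_i = 1 | C)
# for three binary features.
priors = {"spam": 0.3, "ham": 0.7}
likelihoods = {
    "spam": [0.8, 0.6, 0.1],
    "ham":  [0.2, 0.3, 0.7],
}

def classify(x):
    """Return the class with the largest unnormalized posterior P(C) * prod P(x_i|C)."""
    scores = {}
    for c, prior in priors.items():
        # Multiply P(x_i|C) for present features and 1 - P(x_i|C) for absent ones,
        # relying on the naive assumption that features are independent given C.
        scores[c] = prior * prod(p if xi else 1 - p
                                 for xi, p in zip(x, likelihoods[c]))
    return max(scores, key=scores.get)

print(classify([1, 1, 0]))  # -> 'spam' with these hypothetical numbers
```

Because only the relative sizes of the scores matter for choosing a class, the denominator P(X) from Bayes’ theorem can be ignored when classifying.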
