At the beginning of the chapter, we devoted a section to kernel density estimation and how it can be used to approximate the probability density function (PDF) of a random variable from its samples. We will put that technique to work in this section.
We have one set of tweets labeled positive and another set labeled negative. The idea is to learn the PDF of each of these two data sets independently using kernel density estimation, as sketched in the code below.
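The following is a minimal sketch of this step, assuming each tweet has already been converted into a numeric feature vector (the randomly generated arrays X_pos and X_neg stand in for those features; variable names and bandwidth values are illustrative, not from the original text). It fits one Gaussian KDE per class using scikit-learn's KernelDensity.

```python
# Sketch: fit one kernel density estimate per sentiment class.
# Assumes tweets have already been mapped to numeric feature vectors.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=0.0, size=(500, 2))   # features of positively labeled tweets (placeholder data)
X_neg = rng.normal(loc=1.0, size=(400, 2))   # features of negatively labeled tweets (placeholder data)

# One KDE per class approximates P(x | label).
kde_pos = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X_pos)
kde_neg = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X_neg)
```

The bandwidth controls how smooth each estimated density is; in practice it would be tuned (for example by cross-validation) rather than fixed at 0.5.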
From Bayes' rule, we know that

P(label | x) = P(x | label) P(label) / P(x)
Here, P(x | label) is the likelihood, P(label) is the prior, and P(x) is the evidence. The label can be either positive sentiment or negative sentiment.
Using the PDFs learned through kernel density estimation as the likelihoods P(x | label), and the class proportions as the priors P(label), we can score a new tweet under each label and assign it the label with the higher posterior. The evidence P(x) is the same for both labels, so it can be ignored when comparing them.
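Below is a small sketch of this classification step, continuing the hypothetical kde_pos and kde_neg fitted above. It combines the KDE log-likelihoods (score_samples returns log-densities) with log-priors estimated from the label counts; the function and variable names are illustrative assumptions, not part of the original text.

```python
# Sketch: classify a new tweet's feature vector x with Bayes' rule,
# using KDE log-likelihoods and class-frequency priors.
import numpy as np

def classify(x, kde_pos, kde_neg, n_pos, n_neg):
    log_prior_pos = np.log(n_pos / (n_pos + n_neg))
    log_prior_neg = np.log(n_neg / (n_pos + n_neg))
    # score_samples returns log P(x | label) under each fitted KDE
    log_lik_pos = kde_pos.score_samples(x.reshape(1, -1))[0]
    log_lik_neg = kde_neg.score_samples(x.reshape(1, -1))[0]
    # P(x) cancels out, so comparing log-likelihood + log-prior is enough
    if log_lik_pos + log_prior_pos > log_lik_neg + log_prior_neg:
        return "positive"
    return "negative"

# Example usage with the placeholder data from the earlier sketch:
# classify(np.array([0.2, -0.1]), kde_pos, kde_neg, len(X_pos), len(X_neg))
```

Working in log space avoids numerical underflow when the densities are very small, which is why the log-likelihoods and log-priors are added rather than multiplying the raw probabilities.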