Naive Bayes versus random forest

It is finally time to train our ML models to predict the sentiment of tweets. In this section, we are going to experiment with Naive Bayes and random forest classifiers. There are two things that we are going to do differently from the previous chapter. First, instead of running k-fold cross-validation, we are going to split our sample set into a train set and a validation set. This is also a frequently used technique, in which the models learn from only a subset of the sample set and are then tested and validated on the remainder, which they did not observe at training time. This way, we can test how the models will perform on an unseen dataset and simulate how they are going to behave in a real-world ...
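The hold-out idea is language-agnostic, so here is a minimal sketch of it outside the book's C# code, using Python's scikit-learn. The tweets, labels, and split ratio below are hypothetical, chosen only to show the mechanics: vectorize the text, carve off a validation set the models never see during training, fit both a Naive Bayes and a random forest classifier, and score each on the held-out rows.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Tiny hypothetical corpus of tweets with sentiment labels (1 = positive, 0 = negative)
tweets = ["love this phone", "worst service ever", "great day today",
          "really bad experience", "so happy with it", "terrible and slow",
          "awesome product", "hate the update"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Turn the raw text into bag-of-words count features
X = CountVectorizer().fit_transform(tweets)

# Hold-out split: 75% train, 25% validation. The validation rows are
# never shown to the models during training, which is what lets us
# estimate performance on unseen data.
X_train, X_val, y_train, y_val = train_test_split(
    X, labels, test_size=0.25, random_state=42)

# Fit each classifier on the train set, then score it on the validation set
for model in (MultinomialNB(), RandomForestClassifier(random_state=42)):
    model.fit(X_train, y_train)
    acc = accuracy_score(y_val, model.predict(X_val))
    print(type(model).__name__, round(acc, 2))
```

A single hold-out split is cheaper than k-fold cross-validation because each model is trained only once, at the cost of a noisier performance estimate; fixing `random_state` keeps the split reproducible between runs.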
