Reading code that someone else has written on GitHub is easy. What matters more is applying the models we know to new applications and creating our own examples. Here, we will walk through the basic steps of building a vocabulary from a large collection of text and using it to train our NLP models.
In NLP models, a vocabulary is typically a table that maps each word or symbol to a unique token (usually an integer) so that any sentence can be represented as a vector of integers.
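As a minimal sketch of this idea, the following snippet builds such a word-to-integer table from a tiny toy corpus (the corpus, the `<unk>` placeholder for unknown words, and the `encode` helper are all illustrative choices, not part of any particular library):

```python
from collections import Counter

# A tiny toy corpus; in practice this would be a large text collection.
corpus = [
    "why did the chicken cross the road",
    "to get to the other side",
]

# Count word frequencies across the whole corpus.
counts = Counter(word for sentence in corpus for word in sentence.split())

# Assign integer ids in descending-frequency order.
# Reserve id 0 for words not seen during vocabulary construction.
vocab = {"<unk>": 0}
for word, _ in counts.most_common():
    vocab[word] = len(vocab)

def encode(sentence):
    """Represent a sentence as a vector (list) of integer tokens."""
    return [vocab.get(word, vocab["<unk>"]) for word in sentence.split()]

print(encode("why did the dog cross the road"))
```

The word "dog" never appears in the corpus, so it is mapped to the reserved `<unk>` id 0, while every in-vocabulary word gets its own unique integer.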
First, let's find some data to play with. To get started, here's a list of NLP datasets available on GitHub: https://github.com/niderhoff/nlp-datasets. From this list, you will find an English joke dataset ...