The data parser script handles the cleaning and preprocessing of our datasets. It depends on several libraries: pickle, codecs, re, os, time, and numpy. The script contains three functions. The first filters words by preprocessing word counts and building a vocabulary from words that meet a count threshold. The second parses all words into the script, and the third extracts only the defined vocabulary from the data:
import pickle
import codecs
import re
import os
import time
import numpy as np
The following function cleans and preprocesses the text in the training dataset:
def preProBuildWordVocab(word_count_threshold=5, all_words_path='data/all_words.txt' ...
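To make the thresholding idea concrete, here is a minimal sketch of building a vocabulary from word counts. It assumes the corpus arrives as a list of already-tokenized sentences; the function and variable names below are illustrative, not the script's actual implementation:

```python
def build_vocab(sentences, word_count_threshold=5):
    # Count how often each word appears across all sentences.
    word_counts = {}
    for sentence in sentences:
        for word in sentence.lower().split():
            word_counts[word] = word_counts.get(word, 0) + 1

    # Keep only words that meet the frequency threshold.
    vocab = [w for w in word_counts if word_counts[w] >= word_count_threshold]

    # Map words to integer ids and back; id 0 is reserved for unknown words.
    word_to_id = {'<unk>': 0}
    for i, w in enumerate(sorted(vocab), start=1):
        word_to_id[w] = i
    id_to_word = {i: w for w, i in word_to_id.items()}
    return word_to_id, id_to_word

sentences = ['the cat sat', 'the cat ran', 'the dog ran',
             'the cat sat', 'the cat sat', 'a rare word']
w2i, i2w = build_vocab(sentences, word_count_threshold=3)
# Only words seen at least 3 times ('the', 'cat', 'sat') enter the vocabulary;
# everything else maps to the reserved '<unk>' id.
```

Filtering rare words this way keeps the vocabulary (and hence the model's embedding and output layers) small, at the cost of collapsing infrequent words into a single unknown token.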