Model persistence

The first challenge is to write the model data and code to disk. Let's start by training the pipeline.

Let's get the imports out of the way:

import gzip
import logging
import os
from pathlib import Path
from urllib.request import urlretrieve

import joblib
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression as LR
from sklearn.pipeline import Pipeline
from tqdm import tqdm
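Before getting into the utilities, it helps to see where we are headed: a fitted scikit-learn pipeline can be written to disk and restored with joblib. The following is a minimal sketch on a toy corpus (the texts, labels, and filename here are placeholders, not the book's dataset):

```python
import joblib
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression as LR
from sklearn.pipeline import Pipeline

# Tiny stand-in corpus; the real data is loaded from text files later
texts = ["great movie", "awful film", "loved it", "hated it"]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ("vect", CountVectorizer()),     # tokenize and count words
    ("tfidf", TfidfTransformer()),   # reweight counts by TF-IDF
    ("clf", LR(solver="liblinear")), # logistic regression classifier
])
pipeline.fit(texts, labels)

# Persist the whole fitted pipeline, then load it back
joblib.dump(pipeline, "model.joblib")
restored = joblib.load("model.joblib")
prediction = restored.predict(["great movie"])[0]
```

Dumping the entire `Pipeline` object, rather than the classifier alone, keeps the vectorizer vocabulary and the model weights together in one artifact.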

Let's write some utils for reading the data from text files and downloading them if absent:

Let's start by setting up a download progress bar for our use. We will do this by building a small abstraction over the tqdm library.
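One common way to build such an abstraction (a sketch; the class and helper names here are hypothetical, not the book's) is to subclass `tqdm` and expose a method with the signature `urlretrieve` expects for its `reporthook`:

```python
from pathlib import Path
from urllib.request import urlretrieve

from tqdm import tqdm


class DownloadProgress(tqdm):
    """A tqdm subclass whose update_to method works as a urlretrieve reporthook."""

    def update_to(self, blocks=1, block_size=1, total_size=None):
        if total_size is not None:
            self.total = total_size
        # tqdm.update() takes an increment, so convert the cumulative
        # byte count (blocks * block_size) into a delta from self.n
        self.update(blocks * block_size - self.n)


def download_if_absent(url, path):
    """Download url to path, showing a progress bar, unless the file exists."""
    path = Path(path)
    if not path.exists():
        with DownloadProgress(unit="B", unit_scale=True, desc=path.name) as bar:
            urlretrieve(url, str(path), reporthook=bar.update_to)
    return path
```

`urlretrieve` calls the hook with the block count, block size, and total size after each chunk, so the bar advances as bytes arrive and skips entirely when the file is already on disk.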
