December 2018
Beginner to intermediate
684 pages
21h 9m
English
The dataset contains various numerical features (see the relevant notebook for implementation details).
Vectorizers produce scipy.sparse matrices. To combine the vectorized text data with other features, we first need to convert those features to sparse matrices as well; many sklearn objects, and other libraries such as LightGBM, can handle these very memory-efficient data structures. Converting a sparse matrix to a dense NumPy array, by contrast, risks memory overflow.
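As a rough illustration of the memory difference (a standalone sketch, not taken from the accompanying notebook; the matrix shape and density are arbitrary assumptions), compare the footprint of a sparse document-term-style matrix with its dense equivalent:

```python
import numpy as np
from scipy import sparse

# A 10,000 x 50,000 matrix with ~0.1% non-zero entries,
# similar in shape and density to a document-term matrix.
rng = np.random.default_rng(0)
rows = rng.integers(0, 10_000, size=500_000)
cols = rng.integers(0, 50_000, size=500_000)
data = np.ones(500_000, dtype=np.float32)
X = sparse.csr_matrix((data, (rows, cols)), shape=(10_000, 50_000))

# CSR stores only the non-zero values plus two index arrays.
sparse_bytes = X.data.nbytes + X.indices.nbytes + X.indptr.nbytes
dense_bytes = 10_000 * 50_000 * 4  # float32 dense equivalent: ~2 GB
print(f'sparse: {sparse_bytes / 1e6:.0f} MB, dense: {dense_bytes / 1e9:.1f} GB')
```

The dense version needs roughly 2 GB for this toy shape alone; a real document-term matrix with a larger vocabulary would be correspondingly worse.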
Most variables are categorical, so we use one-hot encoding; the dataset is large enough to accommodate the resulting increase in the number of features.
We convert the encoded features to a sparse format and combine them with the document-term matrix:
train_numeric = sparse.csr_matrix(train_dummies.astype(np.int8)) ...
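Putting the steps together, here is a minimal end-to-end sketch; the toy data, the column names, the dummy encoding via `pd.get_dummies`, and the use of `scipy.sparse.hstack` are illustrative assumptions rather than the book's exact code:

```python
import numpy as np
import pandas as pd
from scipy import sparse
from sklearn.feature_extraction.text import CountVectorizer

# Toy data standing in for the dataset: one text column, one categorical.
train = pd.DataFrame({
    'text': ['cheap phone case', 'leather wallet', 'phone charger cable'],
    'category': ['electronics', 'accessories', 'electronics'],
})

# Document-term matrix from the text column (scipy.sparse output).
dtm = CountVectorizer().fit_transform(train['text'])

# One-hot encode the categorical column, then cast to a compact dtype
# and convert to CSR so it can be stacked with the document-term matrix.
train_dummies = pd.get_dummies(train['category'])
train_numeric = sparse.csr_matrix(train_dummies.astype(np.int8))

# Combine: a horizontal stack keeps everything sparse.
X = sparse.hstack([dtm, train_numeric], format='csr')
print(X.shape)  # (n_rows, n_vocabulary_terms + n_categories)
```

The combined matrix can be passed directly to estimators that accept sparse input, such as sklearn's linear models or LightGBM, without ever materializing a dense array.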