April 2018
Beginner to intermediate
282 pages
6h 52m
English
When you work on ML problems, you usually have a relational dataset containing various types of data, and each type must be treated appropriately before training ML algorithms.
For example, if you are dealing with numerical data, you may scale it by applying methods such as min-max scaling or variance scaling.
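As a minimal sketch of both scaling techniques, assuming scikit-learn (not named in this excerpt) and a toy NumPy array in place of your own data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Toy numerical feature matrix (a stand-in for your own data)
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

# Min-max scaling: rescales each feature to the [0, 1] range
X_minmax = MinMaxScaler().fit_transform(X)

# Variance (standard) scaling: zero mean, unit variance per feature
X_standard = StandardScaler().fit_transform(X)

print(X_minmax)
print(X_standard)
```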
For textual data, you may want to remove stop words such as "a", "an", and "the", and perform operations such as stemming, parsing, and tokenization.
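A short sketch of those text-cleaning steps, assuming NLTK as the toolkit (the excerpt does not name a library) and a throwaway example sentence:

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

# One-time resource downloads (tokenizer model and stop-word list)
nltk.download("punkt")
nltk.download("stopwords")

text = "The quick brown foxes are jumping over a lazy dog"

# Tokenization: split the raw string into individual word tokens
tokens = word_tokenize(text.lower())

# Stop-word removal: drop common words such as "a", "an", and "the"
stop_words = set(stopwords.words("english"))
filtered = [t for t in tokens if t not in stop_words]

# Stemming: reduce each word to its root form (e.g. "jumping" -> "jump")
stemmer = PorterStemmer()
stems = [stemmer.stem(t) for t in filtered]

print(stems)
```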
For categorical data, you may need to encode it using methods such as one-hot encoding, dummy coding, and feature hashing.
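The three encodings could look like this in a minimal sketch, assuming pandas and scikit-learn (neither is named in the excerpt) and a hypothetical "color" column:

```python
import pandas as pd
from sklearn.feature_extraction import FeatureHasher

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# One-hot encoding: one binary indicator column per category
onehot = pd.get_dummies(df["color"], prefix="color")

# Dummy coding: drop one column, so k categories become k-1 indicators
dummy = pd.get_dummies(df["color"], prefix="color", drop_first=True)

# Feature hashing: map categories into a fixed number of hashed columns
hasher = FeatureHasher(n_features=4, input_type="string")
hashed = hasher.transform([[value] for value in df["color"]])

print(onehot)
print(dummy)
print(hashed.toarray())
```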
What about datasets with a very large number of features? For example, when you have thousands of features, how many of them would actually ...