TF-IDF vectorizing
The most common limitation of count vectorizing is that it ignores the corpus as a whole when weighting the frequency of each token. The goal of vectorizing is normally to prepare the data for a classifier; therefore, it's important to discount features that occur very often, because their informative value decreases as the number of global occurrences increases. For example, in a corpus about a sport, the word match could be present in a huge number of documents; therefore, it's almost useless as a classification feature. To address this issue, we need a different approach. If we have a corpus C with n documents, we define the Term Frequency (TF) of a token t in a document d as the number of times t occurs in d:

tf(t, d) = count of occurrences of t in d
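As a minimal sketch of the idea, the snippet below computes TF-IDF weights from scratch for a tiny tokenized corpus. It uses the smoothed IDF variant idf(t) = ln((1 + n) / (1 + df(t))) + 1 (the default in scikit-learn's TfidfVectorizer); the exact formula in your setting may differ, and the example corpus and function name are illustrative only.

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Compute a dense TF-IDF matrix for a corpus given as lists of tokens."""
    n = len(corpus)
    vocab = sorted({tok for doc in corpus for tok in doc})
    # Document frequency: in how many documents each token appears
    df = {t: sum(1 for doc in corpus if t in doc) for t in vocab}
    # Smoothed IDF: frequent tokens get weights close to 1, rare ones larger
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in vocab}
    matrix = []
    for doc in corpus:
        counts = Counter(doc)  # raw term frequencies for this document
        matrix.append([counts[t] * idf[t] for t in vocab])
    return vocab, matrix

# Toy sports corpus: "the" and "match" are common, "stadium" is rare
docs = [
    "the match was a great match".split(),
    "the team won the match".split(),
    "the stadium was full".split(),
]
vocab, m = tf_idf(docs)
```

Note how a ubiquitous token such as "the" (present in every document) receives the minimum IDF weight, while a token confined to a single document, such as "stadium", is boosted; this is exactly the corpus-level correction that plain count vectorizing lacks.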