BoW-based NLP
The representation of input text as a bag of tokens is called BoW-based processing. The drawback of using BoW is that we discard grammar and word order, which sometimes means losing the context of the words. In the BoW approach, we first quantify the importance of each word in the context of each document that we want to analyze.
Fundamentally, there are three different ways of quantifying the importance of the words in the context of each document:
Binary: A feature will have a value of 1 if the word appears in the text or 0 otherwise.
Count: A feature will have as its value the number of times the word appears in the text (0 if it does not appear).
Term frequency/Inverse document frequency (TF-IDF): The value of the feature is the term frequency of the word in the document weighted by its inverse document frequency, so words that occur often in a particular document but rarely across the corpus receive the highest values.
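The three weighting schemes above can be sketched in plain Python. This is a minimal illustration on a toy corpus, not a production vectorizer; the corpus, the tokenization, and the particular TF-IDF variant (raw count over document length for TF, log(N/df) for IDF) are illustrative assumptions, and libraries such as scikit-learn use slightly different smoothing and normalization.

```python
import math

# Toy corpus: each document is assumed to be already tokenized.
docs = [
    ["the", "cat", "sat"],
    ["the", "dog", "sat"],
    ["the", "cat", "ran"],
]
vocab = sorted({w for d in docs for w in d})

# Binary: 1 if the word appears in the document, else 0.
binary = [[1 if w in d else 0 for w in vocab] for d in docs]

# Count: number of times the word appears in the document.
count = [[d.count(w) for w in vocab] for d in docs]

# TF-IDF: term frequency weighted by inverse document frequency.
# TF here is the raw count divided by document length; IDF is
# log(N / df), where df is how many documents contain the word.
N = len(docs)
df = {w: sum(1 for d in docs if w in d) for w in vocab}
tfidf = [
    [(d.count(w) / len(d)) * math.log(N / df[w]) for w in vocab]
    for d in docs
]
```

Note that a word such as "the", which appears in every document, gets an IDF of log(3/3) = 0, so its TF-IDF value is 0 everywhere: exactly the behavior that makes TF-IDF useful for downweighting uninformative words.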