Machine Learning for Finance
by James Le, Jannes Klaas
May 2019 · Packt Publishing

Word embeddings

The order of words in a text matters. Therefore, we can expect higher performance if we do not just look at texts in aggregate but treat them as sequences. This section makes use of many of the techniques discussed in the previous chapter; however, here we're going to add a critical ingredient: word vectors.

Words and word tokens are categorical features. As such, we cannot feed them directly into a neural network. Previously, we dealt with categorical data by turning it into one-hot encoded vectors, but for words this is impractical: with a vocabulary of 10,000 words, each vector would contain 10,000 entries, all of them zero except for a single one. This is highly inefficient, so instead we will use an embedding.
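To make this concrete, here is a minimal sketch, not the book's exact code, of how an embedding layer can replace one-hot encoding in Keras. Only the 10,000-word vocabulary comes from the text; the embedding dimension of 50 and the sequence length of 140 are illustrative assumptions.

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding

    vocab_size = 10000    # vocabulary size from the text
    embedding_dim = 50    # illustrative choice, not from the text
    max_len = 140         # illustrative sequence length, not from the text

    model = Sequential()
    # Each integer token ID in [0, 9999] is mapped to a dense 50-dimensional
    # vector that is learned during training, instead of a sparse
    # 10,000-dimensional one-hot vector.
    model.add(Embedding(input_dim=vocab_size, output_dim=embedding_dim))

    # A dummy batch of two token-ID sequences of length max_len
    batch = np.random.randint(0, vocab_size, size=(2, max_len))
    print(model.predict(batch).shape)  # (2, 140, 50)

The embedding layer's weight matrix has shape (10,000, 50), so each word is represented by 50 learned numbers rather than the 10,000 mostly-zero entries a one-hot vector would require.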

In practice, ...



ISBN: 9781789136364