December 2019
Intermediate to advanced
468 pages
14h 28m
English
A word-based language model defines a probability distribution over sequences of words. Given a sequence of words of length m (for example, a sentence), it assigns a probability P(w1, ..., wm) to the full sequence. We can use these probabilities as follows:
Directly estimating the probability of a long sequence, say w1, ..., wm, is typically infeasible. Instead, we can factorize the joint probability P(w1, ..., wm) with the chain rule of joint probability (Chapter 1, The Nuts and Bolts of Neural Networks ...
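The chain-rule factorization above can be sketched in code. The snippet below is a minimal illustration, not the book's implementation: it assumes a tiny toy corpus and approximates each conditional P(wi | w1, ..., wi-1) with a bigram estimate P(wi | wi-1), the usual simplification that makes the factorization tractable.

```python
from collections import defaultdict

# Toy corpus for illustration only; not from the book.
corpus = [
    ["the", "cat", "sat"],
    ["the", "cat", "ran"],
    ["the", "dog", "sat"],
]

# Count unigrams and bigrams to estimate P(w_i | w_{i-1}) by
# relative frequency. "<s>" marks the start of a sentence.
unigram = defaultdict(int)
bigram = defaultdict(int)
for sentence in corpus:
    padded = ["<s>"] + sentence
    for prev, cur in zip(padded, padded[1:]):
        unigram[prev] += 1
        bigram[(prev, cur)] += 1

def sequence_probability(words):
    """P(w1, ..., wm) via the chain rule, with each conditional
    approximated by a bigram estimate count(prev, cur) / count(prev)."""
    p = 1.0
    padded = ["<s>"] + words
    for prev, cur in zip(padded, padded[1:]):
        if unigram[prev] == 0:
            return 0.0  # unseen context: probability collapses to zero
        p *= bigram[(prev, cur)] / unigram[prev]
    return p

# P("the cat sat") = P(the|<s>) * P(cat|the) * P(sat|cat)
#                  = 3/3 * 2/3 * 1/2 = 1/3
print(sequence_probability(["the", "cat", "sat"]))
```

In practice such products of many small probabilities underflow, which is why real implementations sum log-probabilities instead of multiplying raw ones.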