
Python Deep Learning - Second Edition

by Ivan Vasilev, Daniel Slater, Gianmario Spacagna, Peter Roelants, Valentino Zocca
January 2019
Intermediate to advanced
386 pages
English
Packt Publishing
Content preview from Python Deep Learning - Second Edition

N-grams

Inferring the probability of a long sequence, say w1, ..., wm, is typically infeasible. We can compute the joint probability P(w1, ..., wm) by applying the following chain rule:
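P(w1, ..., wm) = P(w1) P(w2 | w1) P(w3 | w1, w2) ... P(wm | w1, ..., wm-1)

For instance, for the four-word phrase the quick brown fox, this factorization reads P(the, quick, brown, fox) = P(the) P(quick | the) P(brown | the, quick) P(fox | the, quick, brown).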

The probability of a later word conditioned on all of the earlier words is especially difficult to estimate from data. That's why this joint probability is typically approximated with an independence assumption: the ith word depends only on the n-1 preceding words, that is, P(wi | w1, ..., wi-1) ≈ P(wi | wi-n+1, ..., wi-1). We'll only model the joint probabilities of combinations of n sequential words, called n-grams. For example, in the phrase the quick brown fox, we have the following n-grams: ...
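The following is a minimal Python sketch, not taken from the book, of how such n-grams can be listed for a phrase; the function name extract_ngrams and the simple whitespace tokenization are illustrative assumptions.

# Illustrative sketch (not from the book): extract n-grams from a phrase.
# extract_ngrams and the whitespace tokenization are assumptions for this example.

def extract_ngrams(text, n):
    """Return all n-grams in the text as tuples of n consecutive words."""
    words = text.split()
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

phrase = "the quick brown fox"

for n in range(1, 5):
    print(f"{n}-grams:", extract_ngrams(phrase, n))

# Output:
# 1-grams: [('the',), ('quick',), ('brown',), ('fox',)]
# 2-grams: [('the', 'quick'), ('quick', 'brown'), ('brown', 'fox')]
# 3-grams: [('the', 'quick', 'brown'), ('quick', 'brown', 'fox')]
# 4-grams: [('the', 'quick', 'brown', 'fox')]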


Publisher Resources

ISBN: 9781789348460