Summary

In this chapter, we took a closer look at modeling sequences of observations with hidden states, using the two most common approaches:

  • The generative hidden Markov model (HMM), which maximizes the joint probability p(X, Y)
  • The discriminative conditional random field (CRF), which maximizes the conditional log-likelihood log p(Y|X)

The HMM is a special form of Bayesian network and requires the observations to be conditionally independent given the hidden states. Under that assumption, the HMM is fairly easy to estimate, which is not the case for the CRF.
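
To make the contrast concrete, here is a standard formulation of the two objectives, a sketch under the usual notation rather than the chapter's own: x_t are observations, y_t hidden states (labels), Z(X) is a normalizing constant, and, for the CRF, f_k are feature functions with weights \lambda_k (with p(y_1 | y_0) read as the initial state distribution):

    p(X, Y) = \prod_{t=1}^{T} p(y_t \mid y_{t-1}) \, p(x_t \mid y_t)                              (HMM, joint)

    p(Y \mid X) = \frac{1}{Z(X)} \exp\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y_{t-1}, y_t, X, t) \Big)   (linear-chain CRF, conditional)

The HMM factorization shows the conditional-independence assumption directly: each observation x_t depends only on its own hidden state y_t, which is what makes maximum-likelihood estimation straightforward.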

You learned how to implement three dynamic programming techniques in Scala: the Viterbi, Baum-Welch, and alpha/beta (forward-backward) algorithms. These techniques are routinely used to solve optimization problems and should be an essential part of your algorithmic toolbox.
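
As a reminder of the dynamic programming pattern these algorithms share, the following is a minimal Scala sketch of the Viterbi recursion. It is not the chapter's implementation; the names (viterbi, logPi, logA, logB) and the choice to work with log-probabilities are illustrative assumptions.

    object ViterbiSketch {
      // Most likely hidden state sequence for a sequence of observation indices.
      // logPi(i): log initial probability of state i
      // logA(i)(j): log transition probability from state i to state j
      // logB(i)(o): log emission probability of observation o from state i
      def viterbi(obs: Array[Int],
                  logPi: Array[Double],
                  logA: Array[Array[Double]],
                  logB: Array[Array[Double]]): Array[Int] = {
        val nStates = logPi.length
        val T = obs.length
        val delta = Array.ofDim[Double](T, nStates) // best log-score ending in state j at time t
        val psi = Array.ofDim[Int](T, nStates)      // back-pointers for the best predecessor

        for (j <- 0 until nStates)
          delta(0)(j) = logPi(j) + logB(j)(obs(0))

        for (t <- 1 until T; j <- 0 until nStates) {
          val (best, argBest) = (0 until nStates)
            .map(i => (delta(t - 1)(i) + logA(i)(j), i))
            .maxBy(_._1)
          delta(t)(j) = best + logB(j)(obs(t))
          psi(t)(j) = argBest
        }

        // Backtrack from the best final state to recover the path
        val path = Array.ofDim[Int](T)
        path(T - 1) = (0 until nStates).maxBy(delta(T - 1))
        for (t <- T - 2 to 0 by -1)
          path(t) = psi(t + 1)(path(t + 1))
        path
      }
    }

The alpha/beta (forward-backward) passes follow the same recurrence structure, replacing the max over predecessor states with a sum, which is why all three algorithms fit naturally into a single dynamic programming toolbox.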
