16.5.2 Learning the Parameters in an HMM

This is the second time we refer to the learning of graphical models; the first was at the end of Section 16.3.2. The most natural way to obtain the unknown parameters is to maximize the likelihood/evidence of the joint probability distribution. Because our task involves both observed and latent variables, the EM algorithm is the first one that comes to mind. However, the conditional independencies underlying an HMM can be exploited to obtain an efficient learning scheme. The set of unknown parameters, Θ, comprises (a) the initial state probabilities, P_k, k = 1,…,K, (b) the transition probabilities, P_ij, i,j = 1,2,…,K, and (c) the parameters of the probability distributions associated with each one of the states, which govern the emission of the observations.
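To make the EM scheme concrete, the sketch below illustrates one iteration of the resulting algorithm, known as Baum–Welch, for an HMM with discrete emissions. It is a minimal sketch under stated assumptions, not the book's derivation: the names pi, A, B, forward_backward, and baum_welch_step are illustrative, with pi holding the initial probabilities P_k, A the transition matrix of P_ij, and B the per-state emission probabilities.

```python
# Minimal sketch of one EM (Baum-Welch) iteration for a discrete-emission HMM.
# All names here are illustrative assumptions, not notation from the text.
import numpy as np

def forward_backward(pi, A, B, obs):
    """Scaled forward-backward pass.

    pi  : (K,)   initial state probabilities P_k
    A   : (K, K) transition probabilities P_ij (row i -> column j)
    B   : (K, M) emission probabilities, one row per state
    obs : (N,)   observed symbol indices
    """
    obs = np.asarray(obs)
    N, K = len(obs), len(pi)
    alpha, beta, scale = np.zeros((N, K)), np.zeros((N, K)), np.zeros(N)

    # Forward recursion with per-step scaling to avoid numerical underflow.
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for n in range(1, N):
        alpha[n] = (alpha[n - 1] @ A) * B[:, obs[n]]
        scale[n] = alpha[n].sum()
        alpha[n] /= scale[n]

    # Backward recursion, reusing the forward scaling factors.
    beta[-1] = 1.0
    for n in range(N - 2, -1, -1):
        beta[n] = (A @ (B[:, obs[n + 1]] * beta[n + 1])) / scale[n + 1]

    return alpha, beta, scale

def baum_welch_step(pi, A, B, obs):
    """One E-step plus M-step; returns the re-estimated (pi, A, B)."""
    obs = np.asarray(obs)
    N, K = len(obs), len(pi)
    alpha, beta, scale = forward_backward(pi, A, B, obs)

    # E-step: posterior state marginals (gamma) and pairwise marginals (xi),
    # both computed locally thanks to the HMM's independency structure.
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((N - 1, K, K))
    for n in range(N - 1):
        xi[n] = (alpha[n][:, None] * A
                 * (B[:, obs[n + 1]] * beta[n + 1])[None, :]) / scale[n + 1]

    # M-step: closed-form updates of the parameter set Theta.
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for m in range(B.shape[1]):
        new_B[:, m] = gamma[obs == m].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B
```

Iterating baum_welch_step until convergence yields a local maximum of the likelihood; the scale factors give the log-likelihood for free, since log P(obs; Θ) = np.log(scale).sum(), which is a convenient stopping criterion.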
