
Machine Learning

by Sergios Theodoridis
April 2015
Intermediate to advanced content level
1062 pages
40h 35m
English
Academic Press
Content preview from Machine Learning

16.5.2 Learning the Parameters in an HMM

This is the second time we refer to the learning of graphical models; the first was at the end of Section 16.3.2. The most natural way to obtain the unknown parameters is to maximize the likelihood/evidence of the joint probability distribution. Because our task involves both observed and latent variables, the EM algorithm is the first one that comes to mind. However, the underlying independencies in an HMM will be exploited to arrive at an efficient learning scheme. The set of unknown parameters, Θ, comprises (a) the initial state probabilities, Pk, k = 1,…,K, (b) the transition probabilities, Pij, i,j = 1,…,K, and (c) the parameters in the probability distributions associated ...
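The EM scheme the text alludes to is, for HMMs, the classical Baum-Welch algorithm: an E-step computes posterior state occupancies via the forward-backward recursions (which is where the HMM's independencies yield efficiency), and an M-step re-estimates Θ from the resulting expected counts. The following is a minimal sketch for a discrete-emission HMM; all function and variable names are illustrative, not from the book:

```python
import numpy as np

def baum_welch(obs, K, M, n_iter=50, seed=0):
    """Sketch of EM (Baum-Welch) for a discrete-emission HMM.

    obs : 1-D array of observation symbols in {0, ..., M-1}
    K   : number of hidden states, M : number of emission symbols
    Returns (pi, A, B): initial probabilities P_k, transition
    probabilities P_ij, and the K x M emission matrix.
    """
    rng = np.random.default_rng(seed)
    T = len(obs)
    # Random row-stochastic initialization of the unknown parameters Theta.
    pi = rng.dirichlet(np.ones(K))
    A = rng.dirichlet(np.ones(K), size=K)
    B = rng.dirichlet(np.ones(M), size=K)

    for _ in range(n_iter):
        # E-step: scaled forward recursion (scaling avoids underflow).
        alpha = np.zeros((T, K))
        c = np.zeros(T)                       # per-step scaling factors
        alpha[0] = pi * B[:, obs[0]]
        c[0] = alpha[0].sum()
        alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum()
            alpha[t] /= c[t]

        # Backward recursion with the same scaling factors.
        beta = np.zeros((T, K))
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]

        gamma = alpha * beta                  # P(x_t = k | obs)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # xi[t, i, j] = P(x_t = i, x_{t+1} = j | obs)
        xi = (alpha[:-1, :, None] * A[None] *
              (B[:, obs[1:]].T * beta[1:])[:, None, :] / c[1:, None, None])

        # M-step: re-estimate pi, A, B from expected counts.
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for m in range(M):
            B[:, m] = gamma[obs == m].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return pi, A, B
```

For continuous emissions (e.g., Gaussian mixtures, as covered later in the chapter) only the M-step for the emission parameters changes; the forward-backward E-step is identical.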



Publisher Resources

ISBN: 9780128015223