Recap of 1D HMM

Let's recap how 1D HMMs work, which we discussed in the previous chapters of this book. We have seen that an HMM is just a process over a Markov chain. At any point in time, an HMM is in one of a set of possible states, and the next state that the model transitions to depends on the current state and the transition probabilities of the model.

Suppose that the HMM has M possible states, {1, 2, ..., M}, and that the probability of going from some state i to state j is given by a_{i,j}. For such a model, if at time t-1 the model is in state i, then at time t it will be in state j with probability a_{i,j}. This probability is known as the transition probability. Also, we have defined the observed variable in the model, ...
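To make this concrete, here is a minimal sketch (not taken from the book's code) of how a transition matrix A, with A[i, j] = a_{i,j}, can be used to sample the hidden state at time t from the state at time t-1 using NumPy. The 3-state matrix, the seed, and the step helper are illustrative assumptions, not part of the original text.

```python
import numpy as np

# Hypothetical 3-state transition matrix A, where A[i, j] = a_{i,j} is the
# probability of moving from state i to state j; each row sums to 1.
A = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.2, 0.3, 0.5],
])

rng = np.random.default_rng(seed=0)

def step(current_state, A, rng):
    """Sample the state at time t given the state at time t-1."""
    return rng.choice(len(A), p=A[current_state])

# Simulate a short hidden-state sequence starting from state 0.
state = 0
states = [state]
for _ in range(5):
    state = step(state, A, rng)
    states.append(state)

print(states)  # e.g. [0, 0, 1, 1, 2, 2] depending on the seed
```

Each call to step draws the next state from the row of A indexed by the current state, which is exactly the Markov property described above: the future depends only on the present state, not on the path taken to reach it.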
