Digital Signal Processing (DSP) with Python Programming by Maurice Charbit


3. Inferences on HMM

In this chapter, we give a brief overview of hidden Markov models (HMM) [BAS 93]. These models are widely used in many areas. Their fundamental property is that they admit recursive inference algorithms: the number of operations and the amount of memory needed to compute the required values do not grow with the number of samples. The best-known example of this is the Kalman filter [EMI 60].

Throughout this chapter, for simplicity, the notation (n1 : n2) will be used to denote the sequence of integer values from n1 to n2 inclusive.

3.1. Hidden Markov models (HMM)

A hidden Markov model (HMM) is a bivariate discrete-time process (Xn, Yn), where Xn and Yn are two real random vectors of finite dimension, such that:

  • Xn, n ≥ 0, is a Markov process, i.e. for any function f, the conditional expectation of f(Xn+1) given the σ-algebra generated by {Xs; s ≤ n} (the past up to n) coincides with the conditional expectation of f(Xn+1) given the σ-algebra generated by {Xn}. If the conditional distributions have a density, this can be written as:

    p(xn+1 | x0:n) = p(xn+1 | xn)

  • Yn, n ≥ 0, is a process such that the conditional distribution of Y0, …, Yn−1 given X0, …, Xn−1 is the product of the distributions of Yk conditionally on Xk. If the conditional distributions have a density, this can be written as:

    p(y0:n−1 | x0:n−1) = ∏k=0,…,n−1 p(yk | xk)

  • the initial r.v. X0 has a known probability law. If ...

A short Python sketch simulating such a model is given after this definition.
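To make the definition concrete, the following is a minimal Python sketch that draws one trajectory from a scalar linear-Gaussian HMM, the model underlying the Kalman filter mentioned above. The state recursion Xn+1 = a Xn + Un and the observation model Yn = Xn + Vn, as well as the parameters a, sigma_x and sigma_y, are illustrative assumptions, not values taken from the text:

import numpy as np

rng = np.random.default_rng(0)

def simulate_hmm(N, a=0.9, sigma_x=1.0, sigma_y=0.5):
    """Draw one trajectory (x, y) of length N from a scalar
    linear-Gaussian HMM (illustrative parameters, not from the text)."""
    x = np.zeros(N)
    y = np.zeros(N)
    x[0] = rng.normal(0.0, sigma_x)            # known initial law of X0
    y[0] = x[0] + rng.normal(0.0, sigma_y)     # Y0 depends on X0 only
    for n in range(N - 1):
        # Xn+1 depends on the past only through Xn (Markov property)
        x[n + 1] = a * x[n] + rng.normal(0.0, sigma_x)
        # Yn+1 depends on Xn+1 only (conditional independence of the Yk)
        y[n + 1] = x[n + 1] + rng.normal(0.0, sigma_y)
    return x, y

x, y = simulate_hmm(200)   # x is the hidden state, y the observation

In practice only y is observed; inferring the hidden sequence x from y is precisely the subject of this chapter.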
