# 3. Inferences on HMM

In this chapter, we give a brief overview of hidden Markov models (HMM) [BAS 93]. These models are widely used in many areas. Their fundamental property is the existence of recursive inference algorithms, meaning that the number of operations and the amount of memory needed to compute the required quantities do not grow with the number of samples. The best-known example of this is the Kalman filter [EMI 60].

Throughout this chapter, for simplicity, the notation (*n*_{1} : *n*_{2}) will be used to denote the sequence of integer values from *n*_{1} to *n*_{2} inclusive.

**3.1. Hidden Markov models (HMM)**

A hidden Markov model (HMM) is a bivariate discrete-time process (*X*_{n}, *Y*_{n}), where *X*_{n} and *Y*_{n} are two real random vectors of finite dimension, such that:

- *X*_{n}, *n* ≥ 0, is a Markov process, i.e. for any function *f*, the conditional expectation of *f*(*X*_{n+1}) given the *σ*-algebra generated by {*X*_{s}; *s* ≤ *n*} (the past up to time *n*) coincides with the conditional expectation of *f*(*X*_{n+1}) given the *σ*-algebra generated by {*X*_{n}}. If the conditional distributions have a density, this can be written as:

  *p*(*x*_{n+1}|*x*_{0:n}) = *p*(*x*_{n+1}|*x*_{n})

- *Y*_{n}, *n* ≥ 0, is a process such that the conditional distribution of *Y*_{0}, …, *Y*_{n−1} given *X*_{0}, …, *X*_{n−1} is the product of the distributions of *Y*_{k} conditionally on *X*_{k}. If the conditional distributions have a density, this can be written as:

  *p*(*y*_{0:n−1}|*x*_{0:n−1}) = ∏_{k=0}^{n−1} *p*(*y*_{k}|*x*_{k})

- the initial r.v. *X*_{0} has a known probability law. If ...
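The two defining properties above can be illustrated by simulating such a model. The following is a minimal sketch, not taken from the book: a two-state HMM with Gaussian observations, where the transition matrix `A`, initial law `PI`, and the observation parameters `MU` and `SIGMA` are illustrative values chosen here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the text)
A = np.array([[0.9, 0.1],      # A[i, j] = P(X_{n+1} = j | X_n = i)
              [0.2, 0.8]])
PI = np.array([0.5, 0.5])      # probability law of the initial r.v. X_0
MU = np.array([0.0, 3.0])      # mean of Y_n given X_n = i
SIGMA = np.array([1.0, 0.5])   # std of Y_n given X_n = i

def sample_hmm(n_samples):
    """Draw a trajectory (X_{0:n-1}, Y_{0:n-1}) from the HMM."""
    x = np.empty(n_samples, dtype=int)
    y = np.empty(n_samples)
    x[0] = rng.choice(2, p=PI)                  # X_0 ~ PI
    for n in range(n_samples):
        if n > 0:
            # Markov property: X_n depends on the past only through X_{n-1}
            x[n] = rng.choice(2, p=A[x[n - 1]])
        # Conditional independence: Y_n depends only on the current state X_n
        y[n] = rng.normal(MU[x[n]], SIGMA[x[n]])
    return x, y

states, obs = sample_hmm(200)
```

Each observation is drawn using only the current state, which is exactly the product form of the conditional density given above.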
