3. Inferences on HMM
In this chapter, we give a brief overview of hidden Markov models (HMM) [BAS 93]. These models are widely used in many areas. They possess a fundamental property that gives rise to recursive algorithms, meaning that the number of operations and the amount of memory needed to compute the quantities of interest do not grow with the number of samples. The best-known example is the Kalman filter [EMI 60].
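To see what "recursive" means here, consider a deliberately simple case: the empirical mean of a sequence can be updated from one sample to the next with a constant amount of work and a constant memory footprint, no matter how many samples have already been processed. The following Python sketch is ours, chosen only to illustrate this property; it is not an algorithm from this chapter.

```python
import numpy as np

def recursive_mean(samples):
    """Update the empirical mean one sample at a time:
    m_n = m_{n-1} + (x_n - m_{n-1}) / n.
    The cost per sample and the memory used are constant,
    regardless of how many samples have been processed."""
    m = 0.0
    for n, x in enumerate(samples, start=1):
        m += (x - m) / n
    return m

print(recursive_mean(np.random.default_rng(0).normal(size=1000)))
```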
Throughout this chapter, for simplicity, the notation $(n_1{:}n_2)$ will be used to denote the sequence of integer values from $n_1$ to $n_2$ inclusive.
3.1. Hidden Markov models (HMM)
A hidden Markov model (HMM) is a bivariate discrete-time process $(X_n, Y_n)$, where $X_n$ and $Y_n$ are two real random vectors of finite dimension, such that (a short simulation sketch is given after the following list):
– $X_n$, $n \ge 0$, is a Markov process, i.e. for any function $f$, the conditional expectation of $f(X_{n+1})$ given the σ-algebra generated by $\{X_s;\ s \le n\}$ (the past up to time $n$) coincides with the conditional expectation of $f(X_{n+1})$ given the σ-algebra generated by $X_n$ alone. If the conditional distributions have a density, this can be written as:
$$p_{X_{n+1}|X_{0:n}}(x_{n+1} \mid x_{0:n}) = p_{X_{n+1}|X_n}(x_{n+1} \mid x_n);$$
– $Y_n$, $n \ge 0$, is a process such that the conditional distribution of $Y_0, \dots, Y_{n-1}$ given $X_0, \dots, X_{n-1}$ is the product of the distributions of the $Y_k$ conditionally on $X_k$. If the conditional distributions have a density, this can be written as:
$$p_{Y_{0:n-1}|X_{0:n-1}}(y_{0:n-1} \mid x_{0:n-1}) = \prod_{k=0}^{n-1} p_{Y_k|X_k}(y_k \mid x_k);$$
– the initial random variable $X_0$ has a known probability law. If …
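To make the definition concrete, the following Python sketch simulates a trajectory of a scalar HMM. The specific model (a linear-Gaussian state recursion and additive Gaussian observation noise, with coefficient a and noise levels sigma_x, sigma_y) is an assumption chosen for illustration, not part of the definition above; any transition and emission densities satisfying the three conditions would do.

```python
import numpy as np

def simulate_hmm(n, a=0.95, sigma_x=1.0, sigma_y=0.5, rng=None):
    """Simulate n samples of an assumed scalar linear-Gaussian HMM:
       X_{k+1} = a X_k + U_k,  U_k ~ N(0, sigma_x^2)   (Markov state)
       Y_k     = X_k + V_k,    V_k ~ N(0, sigma_y^2)   (cond. indep. obs.)
    """
    rng = rng or np.random.default_rng(0)
    x = np.empty(n)
    y = np.empty(n)
    x[0] = rng.normal(0.0, sigma_x)           # known initial law of X_0
    y[0] = x[0] + rng.normal(0.0, sigma_y)
    for k in range(n - 1):
        x[k + 1] = a * x[k] + rng.normal(0.0, sigma_x)  # draw from p(x_{k+1} | x_k)
        y[k + 1] = x[k + 1] + rng.normal(0.0, sigma_y)  # draw from p(y_{k+1} | x_{k+1})
    return x, y

x, y = simulate_hmm(200)
```

Note how the loop body only ever reads the current state x[k]: this is the Markov structure that the recursive inference algorithms of this chapter exploit.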