## 12.7 Discrete-Time Markov Chains

The discrete-time process {X_k, k = 0, 1, 2, …} is called a Markov chain if, for all states i, j, α, …, θ and all times k, the following holds:

$$P\left[X_k = j \mid X_{k-1} = i, X_{k-2} = \alpha, \ldots, X_0 = \theta\right] = P\left[X_k = j \mid X_{k-1} = i\right] = p_{ijk}$$

The quantity p_{ijk} is called the state transition probability: the conditional probability that the process will be in state j at time k, immediately after the next transition, given that it was in state i at time k − 1. A Markov chain whose transition probabilities depend on k in this way is called a non-homogeneous Markov chain. In this book we consider only homogeneous Markov chains, for which p_{ijk} = p_{ij}; that is, in a homogeneous Markov chain the transition probabilities do not depend on the time index k.
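To make the homogeneity property concrete, here is a minimal sketch of simulating a homogeneous Markov chain. The three-state transition matrix `P` is a hypothetical example, not taken from the text; the key point is that the same rows of `P` are used at every step, since p_{ij} does not depend on k.

```python
import random

# Hypothetical 3-state homogeneous Markov chain: the transition
# probabilities p_ij are fixed and do not change with the time step k.
# Row i lists p_i0, p_i1, p_i2 and must sum to 1.
P = [
    [0.5, 0.3, 0.2],  # transitions out of state 0
    [0.1, 0.6, 0.3],  # transitions out of state 1
    [0.2, 0.2, 0.6],  # transitions out of state 2
]

def step(state, rng):
    """Make one transition from `state` by sampling from row P[state]."""
    u = rng.random()
    cumulative = 0.0
    for j, p in enumerate(P[state]):
        cumulative += p
        if u < cumulative:
            return j
    return len(P) - 1  # guard against floating-point round-off

def simulate(x0, n_steps, seed=0):
    """Return the trajectory X_0, X_1, ..., X_{n_steps} of the chain."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        # Homogeneity: the same matrix P is used at every time step.
        path.append(step(path[-1], rng))
    return path

path = simulate(0, 10)
```

Because the next state is drawn using only the current state, the sampler automatically satisfies the Markov property: the earlier history X_{k−2}, …, X_0 never enters the computation.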
