4 Markov Chains
CONCEPTS DISCUSSED IN THIS CHAPTER.– This chapter discusses stochastic processes with discrete time and discrete states, under the additional condition of memorylessness.
We begin by giving useful definitions:
– Markov chains;
– probability vector;
– transition matrix.
As we have already seen, graphs are frequently used when studying stochastic processes. We will take this opportunity to provide some knowledge about graph theory.
We will then look at the case of an indefinite succession of states, and thus at the problem of ergodicity conditions, learning about fixed points along the way.
Recommended reading: [BRE 09, FAU 79, FAU 14, FOA 04, LES 14, PHE 77, RUE 89].
4.1. Markov chain concepts
Consider an evolving system. This system is assumed to go from one defined state to another, the number of states being finite: E1, E2, E3,..., Em (a finite sequence of discrete states). The change of state is random, and we say that the process of changing state is a stochastic process (a scholarly word that simply means random). The system will therefore “navigate” step by step among the states E1 to Em. It can clearly return to the same state several times, as shown in Figure 4.1, which represents the evolution of a system over time.
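The step-by-step random navigation between a finite set of states can be sketched as follows. This is a minimal illustration, not taken from the book: the states E1, E2, E3 and the transition probabilities are assumptions chosen for the example, and the row-stochastic matrix used here anticipates the transition matrix defined later in the chapter.

```python
import random

# Illustrative assumption: three discrete states.
states = ["E1", "E2", "E3"]

# Illustrative assumption: transition[i][j] is the probability of moving
# from state i to state j; each row sums to 1 (a transition matrix).
transition = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def simulate(start: int, steps: int, seed: int = 0) -> list:
    """Walk the chain for `steps` random state changes, returning the visited states."""
    rng = random.Random(seed)
    i = start
    path = [states[i]]
    for _ in range(steps):
        # Memorylessness: the next state depends only on the current state.
        i = rng.choices(range(len(states)), weights=transition[i])[0]
        path.append(states[i])
    return path

print(simulate(start=0, steps=10))
```

As in Figure 4.1, a single run may visit the same state several times; each run of the simulation produces one such random trajectory.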
The states are characterized by the values of random variables X(t), a random variable for each discrete ...