Appendix C Markov Chains on Countable State Spaces

We review the basic elements of Markov chain theory in the case where the state space is countable or finite.

We start by recalling the definition of a Markov chain, that is, a process (Δ_t)_{t ≥ 0} with discrete state space ℰ. The state space may be infinite, such as the set of the non-negative integers ℕ = {0, 1, …}, or finite, such as {1, …, d}. Unless otherwise stated, we assume that ℰ = ℕ. The following definitions are easily adapted when the state space is finite.

C.1. Definition of a Markov Chain

To define a Markov chain we need an initial distribution π_0 = {π_{0i}}_{i ∈ ℰ}, so that at time 0 the chain is in state i with probability π_{0i}:

$$ P(\Delta_0 = i) = \pi_{0i}, \qquad i \in \mathcal{E}. \tag{C.1} $$

It is not restrictive to assume π_{0i} > 0 (otherwise, state i can be withdrawn from the state space). We also need a set of transition probabilities {p(i, j)} such that, for any t ≥ 0,

$$ P(\Delta_{t+1} = j \mid \Delta_t = i) = p(i, j), \qquad i, j \in \mathcal{E}. \tag{C.2} $$
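For concreteness, here is a minimal Python sketch, not from the book, of how (C.1) and (C.2) can be encoded and sampled when the state space is finite; the three-state space ℰ = {0, 1, 2} and the numerical values of π_0 and of the matrix P below are made-up illustrations.

```python
# A minimal sketch (hypothetical values): pi_0 as a vector and the
# transition probabilities p(i, j) as a matrix, for E = {0, 1, 2}.
import numpy as np

rng = np.random.default_rng(0)

pi0 = np.array([0.5, 0.3, 0.2])            # pi_{0i} = P(Delta_0 = i), eq. (C.1)
P = np.array([[0.9, 0.1, 0.0],             # P[i, j] = p(i, j), eq. (C.2)
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# Each row of P is a probability distribution over the next state.
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)

delta0 = rng.choice(3, p=pi0)        # Delta_0 drawn from pi_0
delta1 = rng.choice(3, p=P[delta0])  # Delta_1 | Delta_0 = i drawn from p(i, .)
print(delta0, delta1)
```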

The transition probabilities obviously satisfy p(i, j) ≥ 0 for all i, j ∈ ℰ, and ∑_{j ∈ ℰ} p(i, j) = 1 for any i ∈ ℰ. The Markov property states that, instead of conditioning on the event {Δ_t = i} in the latter probability, we may condition on the entire history of the chain without changing the conditional probability:

$$ P(\Delta_{t+1} = j \mid \Delta_t = i, \Delta_{t-1} = i_{t-1}, \ldots, \Delta_0 = i_0) = p(i, j) \tag{C.3} $$

for any integers i_0, …, i_{t−1}, i, j in ℰ (provided P(Δ_t = i, Δ_{t−1} = i_{t−1}, …, Δ_0 = i_0) > 0).
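The Markov property can also be illustrated numerically. The following sketch, again not from the book and reusing the made-up π_0 and P above, simulates a long path and checks that the empirical frequency of moving to state j from state i is close to p(i, j), whether or not we also condition on the earlier history, as (C.3) asserts.

```python
# Hedged illustration of (C.3) with the hypothetical pi_0 and P above.
import numpy as np

rng = np.random.default_rng(0)
pi0 = np.array([0.5, 0.3, 0.2])
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

T = 200_000
path = np.empty(T, dtype=int)
path[0] = rng.choice(3, p=pi0)
for t in range(T - 1):
    path[t + 1] = rng.choice(3, p=P[path[t]])

# Empirical one-step transition frequencies: counts[i, j] counts the
# moves from state i to state j along the path.
counts = np.zeros((3, 3))
np.add.at(counts, (path[:-1], path[1:]), 1)
print(np.round(counts / counts.sum(axis=1, keepdims=True), 3))  # close to P

# Conditioning on more history should not change the answer (C.3):
# among times with Delta_t = 0 and Delta_{t-1} = 1, the frequencies of
# Delta_{t+1} = j should still be close to p(0, j).
mask = (path[1:-1] == 0) & (path[:-2] == 1)
print(np.round(np.bincount(path[2:][mask], minlength=3) / mask.sum(), 3))
```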
