Chapter 8

Stochastic Processes

8.1 Introduction

Many real-world applications of probability theory share one particular feature: data are collected sequentially in time. A few examples are weather data, stock market indices, air-pollution data, demographic data, and political tracking polls. These applications also have another feature in common: successive observations are typically not independent. We refer to any such collection of observations as a stochastic process.

Formally, a stochastic process is a collection of random variables that take values in a set S, the state space. The collection is indexed by another set T, the index set. The two most common index sets are the natural numbers T = {0, 1, 2, . . . } and the nonnegative real numbers T = [0, ∞), which usually represent discrete time and continuous time, respectively. The first index set thus gives a sequence of random variables {X0, X1, X2, . . . } and the second, a collection of random variables {X(t), t ≥ 0}, one random variable for each time t. In general, the index set does not have to describe time; it is also commonly used to describe spatial location. The state space can be finite, countably infinite, or uncountable, depending on the application.
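As a concrete illustration (not from the text), the simple random walk is perhaps the most familiar discrete-time stochastic process: the state space S is the set of integers, the index set is T = {0, 1, 2, . . . }, and each step moves up or down by one. The sketch below, with hypothetical function and parameter names, simulates one sample path.

```python
import random

def simple_random_walk(n_steps, p=0.5, seed=None):
    """Simulate a simple random walk X0, X1, ..., Xn on the integers.

    State space S: the integers. Index set T: {0, 1, 2, ...} (discrete
    time). Each step is +1 with probability p and -1 otherwise.
    """
    rng = random.Random(seed)
    x = 0
    path = [x]
    for _ in range(n_steps):
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path

path = simple_random_walk(10, seed=42)
```

Note that successive values of the walk are clearly not independent: each X(n+1) differs from X(n) by exactly one, so knowing the current value strongly constrains the next.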

In order to analyze a stochastic process, we need to make assumptions about the dependence between the random variables. In this chapter, we will focus on the most common dependence structure, the so-called Markov property, and in the next section we give a definition ...
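To preview the idea informally (the formal definition follows in the next section), the Markov property says that the next state depends only on the current state, not on the earlier history. A minimal simulation sketch of a two-state chain, with hypothetical names and an assumed transition matrix, makes this concrete: the sampler consults only the current state when drawing the next one.

```python
import random

def simulate_markov_chain(P, x0, n_steps, seed=None):
    """Simulate a discrete-time Markov chain with transition matrix P.

    P[i][j] is the probability of moving from state i to state j.
    Only the current state x is used to draw the next state, which is
    exactly the Markov property in action.
    """
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        u = rng.random()
        cum = 0.0
        for j, pij in enumerate(P[x]):
            cum += pij
            if u < cum:
                x = j
                break
        path.append(x)
    return path

# Hypothetical two-state weather chain: 0 = sunny, 1 = rainy.
P = [[0.8, 0.2],
     [0.4, 0.6]]
path = simulate_markov_chain(P, 0, 20, seed=1)
```

The transition matrix here is invented for illustration; the point is only that the update rule never looks at past states.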
