1. Variable Length Markov Chains, Persistent Random Walks: A Close Encounter

We consider a walker on the line that at each step keeps the same direction with a probability that depends on the time already spent moving in the current direction. These walks with memories of variable length can be seen as generalizations of the directionally reinforced random walks (DRRWs) introduced in Mauldin et al. (1996). We give a complete and usable characterization of recurrence or transience in terms of the probabilities of switching direction. These conditions are related to characterizations of the existence and uniqueness of a stationary probability measure for a particular Markov chain: in this chapter, we define the general model for words produced by a variable length Markov chain (VLMC) and we introduce a key combinatorial structure on words. For a subclass of these VLMCs, this provides necessary and sufficient conditions for the existence of a stationary probability measure.
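The walker described above can be sketched in a few lines: it moves on the integers, and at each step it keeps its current direction with a probability that depends on how long it has already been moving that way. The switching rule `p_keep` below is purely illustrative (not taken from the text), chosen so that persistence grows with the run length:

```python
import random

def persistent_walk(steps, p_keep, seed=0):
    """Simulate a walker on Z that keeps its current direction with
    probability p_keep(k), where k is the number of steps already
    taken in that direction (the variable-length memory)."""
    rng = random.Random(seed)
    position = 0
    direction = rng.choice([-1, 1])  # initial direction
    run_length = 0                   # time already spent in this direction
    path = [position]
    for _ in range(steps):
        if rng.random() < p_keep(run_length):
            run_length += 1          # keep going the same way
        else:
            direction = -direction   # switch direction, reset the memory
            run_length = 1
        position += direction
        path.append(position)
    return path

# Illustrative switching rule (an assumption, not the chapter's):
# the longer the current run, the more likely the walker persists.
path = persistent_walk(1000, p_keep=lambda k: 1 - 1 / (k + 2))
```

Whether such a walk is recurrent or transient depends on how fast the switching probabilities decay with the run length, which is exactly the kind of condition the characterization announced above makes precise.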

1.1. Introduction

This is the story of the encounter between two worlds: the world of random walks and the world of VLMCs. The meeting point revolves around the semi-Markov property of the underlying processes.

In a VLMC, unlike fixed-order Markov chains, the probability of the next symbol depends on a possibly unbounded part of the past, the length of which depends on the past itself. These relevant parts of the past are called contexts. They are stored in a context tree. With each context, a probability ...
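The context-lookup step can be sketched as follows: given the past, find its longest suffix that is a context, and use the probability attached to that context to draw the next symbol. The binary alphabet and the comb-shaped context tree below are hypothetical, chosen only to illustrate the mechanism:

```python
def longest_context(past, contexts):
    """Return the longest suffix of `past` that is a context,
    or the empty word if no proper suffix matches (root of the tree)."""
    for length in range(len(past), 0, -1):
        suffix = past[-length:]
        if suffix in contexts:
            return suffix
    return ""

# Hypothetical context tree over {0, 1}: each context is mapped to
# P(next symbol = "1" | context). Note the contexts are suffix-free,
# so the lookup is unambiguous.
contexts = {"0": 0.3, "10": 0.6, "110": 0.8, "1110": 0.9}

ctx = longest_context("0110", contexts)   # relevant part of the past: "110"
p_one = contexts[ctx]                     # probability of emitting "1" next
```

Here the length of the relevant past varies with the past itself: after "00" only one symbol matters, while after "0110" three symbols do.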
