9 Models for Linear Stationary Processes
This chapter presents the theory of time-domain models for linear stationary processes. Two contrasting and powerful classes of models, namely, the auto-regressive and moving average models, are reviewed, following which the general class of ARIMA models is discussed. The main objective is to provide theoretical foundations for modeling stochastic variations in signals. Parallels to models for deterministic LTI systems are highlighted so that the reader can conveniently recognize the present chapter as a stochastic analogue of Chapter 4.
9.1 MOTIVATION
The previous chapter introduced two important measures, namely, the ACF and the CCF (and their partial versions), to detect predictability within and across series. A formal use of these measures is carried out through appropriate statistical tests. For instance, to test the predictability within a series, a whiteness test for the signal is conducted, wherein the observed (estimated) ACF is subjected to a hypothesis test that the series arises from a white-noise process, i.e., that the ACF is zero at all non-zero lags. These procedures are detailed in Chapter 16.
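As a preview of that machinery, the following is a minimal numerical sketch of the idea (the function name, the lag range, and the use of the approximate ±1.96/√N bounds for a 95% white-noise test are illustrative assumptions; the formal procedures are those of Chapter 16):

```python
import numpy as np

def whiteness_check(x, max_lag=20):
    """Crude whiteness check: compare the sample ACF against the
    approximate +/- 1.96/sqrt(N) bounds that hold under a white-noise
    null hypothesis (illustrative sketch only)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    xc = x - x.mean()
    denom = np.sum(xc**2)
    # Sample ACF at lags 1, ..., max_lag
    acf = np.array([np.sum(xc[l:] * xc[:N - l]) / denom
                    for l in range(1, max_lag + 1)])
    bound = 1.96 / np.sqrt(N)          # 95% bounds under whiteness
    return acf, np.abs(acf) <= bound   # True where the ACF is insignificant

# A white-noise realization should pass at (almost) all lags
rng = np.random.default_rng(0)
acf, ok = whiteness_check(rng.standard_normal(1000))
print(f"{ok.sum()} of {len(ok)} lags within the white-noise bounds")
```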
The present chapter is concerned with the important step of modeling that follows the tests of
predictability. We shall specifically study two important topics: (1) the possible class of models for
linear stationary processes and (2) properties of these theoretical models. Estimation of models is
presented in Chapter 14.
At the outset we reiterate a point made earlier in Chapter 7. By its very definition, a random process cannot be predicted exactly. The most basic prediction of any random signal is its mean, i.e., the unconditional expectation E(v[k]). Time-series models offer the maximum possible improvement over this basic prediction by exploiting the temporal correlations within and across variables. A degree of uncertainty (inaccuracy) is always associated with this prediction.
A vast body of literature on time-series modeling is centered on linear random processes because they offer mathematical convenience. Moreover, a large class of random processes can be adequately described by linear models. Development of these models is evidently based on the linear measures presented in previous chapters. When non-linear models are required, it is necessary to use measures that test for non-linear inter-sample dependencies; such measures are based on higher-order statistical moments.
The historical approach to time-series modeling involved breaking up the series as
$$
y[k] = \underbrace{m[k]}_{\text{Trend}} + \underbrace{s[k]}_{\text{Seasonal}} + \underbrace{v[k]}_{\text{Stationary}} \qquad (9.1)
$$
followed by a separate modeling of each component. These models are known as additive models. The trend component m[k] is usually a polynomial in time, while the seasonal component s[k] captures periodic behavior (if any) and seasonal effects. Both of these components could be combined into a single deterministic component. Several efficient non-parametric and semi-parametric methods that make use of suitable smoothing and filtering operations are available to facilitate such a decomposition (see Brockwell (2002), Priestley (1981), and Shumway and Stoffer (2006)).
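As an illustration, the sketch below performs one such smoothing-based additive decomposition in the spirit of (9.1), using seasonal_decompose from the statsmodels library (the synthetic series and its period are assumptions for illustration):

```python
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic series with the structure of Eq. (9.1):
# linear trend + period-12 seasonality + stationary noise
rng = np.random.default_rng(2)
k = np.arange(240)
y = 0.05 * k + 2.0 * np.sin(2 * np.pi * k / 12) + rng.standard_normal(240)

# Moving-average-based additive decomposition: y = trend + seasonal + resid
res = seasonal_decompose(y, model="additive", period=12)
# res.trend, res.seasonal and res.resid recover the three components of (9.1)
# (the trend is NaN at the edges because it is a centered moving average)
print("seasonal pattern:", np.round(res.seasonal[:12], 2))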
The main challenge is to model the stationary component, which was studied extensively by several researchers in the late 1940s. Among these were the pioneering works of Wiener, Kolmogorov, Cramér, Wold, and other contemporaries (Priestley, 1981), whose efforts were aimed at developing