8 Time-Domain Analysis: Correlation Functions
Three important statistical properties of a random process, namely, the auto-correlation, cross-correlation and partial correlation functions, and a fundamental random process, known as the white-noise process, are introduced in this chapter. These are central to the analysis and model development of linear random processes. The emphasis is largely on the theoretical aspects, but with interpretations for practice. The objective is to provide the theoretical foundations necessary for building stochastic models.
8.1 MOTIVATION
One of the most intriguing problems in time-series analysis is that of prediction. The theoretical problem setting is as follows:
Given observations of the random process up to the $k^{\text{th}}$ instant¹,
$$\{\dots, v[k-2], v[k-1], v[k]\}$$
predict $v[k+1], v[k+2], \dots$
Notation: The random signal and its generating process are denoted by v[k] (or sometimes by {v[k]}). This change in notation (with respect to the previous chapter) applies throughout this chapter and in the remainder of this text.
The effort in modeling random processes is to build causal mathematical descriptions for the process from observations. Predictions are then obtained by implementing the model equations on-line.
The basic premise for model development, or even prediction, is that the history (past) of the process contains some useful information about the future. If a process does not possess this characteristic, then there is no scope for prediction, and the series is said to be unpredictable or an ideal random process.
Remarks: Predictability depends on the model that is being used for prediction. Shortly, we shall distinguish
between unpredictable linear and non-linear random processes.
A first step in prediction is, therefore, to develop a measure of predictability in a series. For
linear random processes the natural measure is correlation because of its excellent ability to detect
linear relationships. The idea is to compute correlation between an observation, say v[k], and each
of the past observations v[k − 1], v[k − 2], ···. If there is at least one observation in the past that
is correlated with v[k], then we can exploit this correlation to predict v[k + l], l > 0. These ideas
motivate the definition of an auto-covariance (or auto-correlation) function.
¹In practice, only finite-length data is available. The focus here is on the theoretical problem. The finite-length problem is taken up in subsequent chapters.
8.2 AUTO-COVARIANCE FUNCTION
The auto-covariance function (ACVF) is defined as the covariance between two samples of a series, $v[k_1]$ and $v[k_2]$:

$$\sigma_{vv}[k_1, k_2] = E\big((v[k_1] - \mu_{k_1})(v[k_2] - \mu_{k_2})\big) \tag{8.1}$$
where $\mu_{k_i}$ is the mean of the process at the instant $k_i$.
For stationary processes, the mean remains invariant and the ACVF is only a function of the distance between the sampling instants, $l = k_1 - k_2$.
The auto-covariance function of a stationary process is only a function of the lag $l$ between two samples,

$$\sigma_{vv}[l] = E\big((v[k] - \mu_v)(v[k - l] - \mu_v)\big) \tag{8.2}$$

where $\mu_v = E(v[k])$ is the mean of the stationary process.
Stationarity implies that v[k] influences a future observation v[k + l] in the same way as v[k − l] influences v[k].
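To make the definition concrete, the following is a minimal NumPy sketch of a sample counterpart of (8.2). The function name `acvf`, the simulated series and the choice of the biased (divide-by-N) estimator are illustrative assumptions made here; the proper treatment of estimation from finite-length records is deferred to later chapters.

```python
import numpy as np

def acvf(v, max_lag):
    """Sample auto-covariance of v at lags 0..max_lag (biased, 1/N form)."""
    v = np.asarray(v, dtype=float)
    N = len(v)
    v0 = v - v.mean()                    # remove the (estimated) mean mu_v
    # Sample analogue of (8.2): average of lagged products of mean-removed samples
    return np.array([np.dot(v0[l:], v0[:N - l]) / N for l in range(max_lag + 1)])

# Illustration on a simulated stationary series
rng = np.random.default_rng(0)
v = rng.standard_normal(1000)
print(acvf(v, 5))                        # estimates of sigma_vv[0], ..., sigma_vv[5]
```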
Properties of ACVF
Given that the ACVF is a covariance-based measure, it inherits all the properties of covariance. Some of the useful properties are listed below.
i. ACVF measures (only) the linear dependence between v[k] and v[k − l]. This means $\sigma_{vv}[l] = 0$ merely rules out linear relationships.
ii. It is a symmetric measure, i.e.,

$$\sigma_{vv}[l] = \sigma_{vv}[-l] \tag{8.3}$$
iii. Like covariance, ACVF is also affected by confounding, i.e., $\sigma_{vv}[l]$ includes the effects of other variables (and observations) that commonly influence both v[k] and v[k − l]. In other words, ACVF cannot tell whether two observations are directly or indirectly correlated; it measures the total correlation.
iv. The value of ACVF is unbounded and depends on the units of v[k].
The last property above calls for a normalized measure, very much akin to the need for normalizing
covariance, which resulted in correlation.
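Property (iv) is easily checked numerically: rescaling the series rescales the ACVF by the square of the scale factor, whereas the normalized version is scale-free. A small demonstration, reusing the `acvf` sketch above (the factor of 100 is arbitrary):

```python
v_scaled = 100.0 * v                     # the same series in different units
print(acvf(v, 3))                        # ACVF in original units
print(acvf(v_scaled, 3))                 # 100^2 = 10^4 times larger
print(acvf(v_scaled, 3) / acvf(v_scaled, 3)[0])   # normalized: unaffected by units
```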
8.2.1 AUTO-CORRELATION FUNCTION (ACF)
Drawing parallels with the definition of correlation, the auto-correlation function (ACF) is introduced:

$$\rho_{vv}[l] = \frac{\sigma_{vv}[l]}{\sigma_{vv}[0]} \tag{8.4}$$
The ACF inherits all the characteristics of correlation.
i. The maximum value of ACF is unity, at lag l = 0. Essentially, any sample is best correlated with
itself.
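Continuing the earlier sketch, (8.4) amounts to a single normalization of the sample ACVF by its lag-zero value; `acf` is an illustrative name here, not notation from the text.

```python
def acf(v, max_lag):
    """Sample ACF per (8.4): ACVF normalized by its value at lag zero."""
    g = acvf(v, max_lag)                 # acvf() from the earlier sketch
    return g / g[0]

r = acf(v, 5)
print(r[0])                              # exactly 1.0: a sample is best correlated with itself
print(np.all(np.abs(r) <= 1.0))          # True: the sample ACF is bounded by unity
```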