7
Random Processes
This is an introductory and foundational chapter for the theory of random processes. The
objective is to review concepts and definitions pertaining to random variables and random
processes. It is recommended to pay special attention to the concepts of the expectation operator,
correlation, stationarity and ergodicity. Interpretations of these concepts in the context of
prediction are provided.
7.1 INTRODUCTORY REMARKS
Chapters 1 and 2 taught us that models developed in identification should ideally explain both the
deterministic and stochastic parts of a process. The theory for modeling deterministic processes was
expounded in Part I. Our journey into the modeling of random processes gets underway with this
chapter. The remaining chapters of this part (Part II) describe the measures, models and representa-
tions of random processes.
Throughout this part, we shall assume that the exogenous inputs are switched off with the ob-
jective of solely focusing on the stochastic effects. In Part IV, which is exclusively concerned with
system identification, we shall superimpose the deterministic and stochastic models under the addi-
tivity assumption.
Recall the early discussions in Chapter 1. Uncertainty is one of the inescapable truths in the
analysis of processes and measurements. The sources of uncertainty in data-driven modeling are
primarily (i) insufficient understanding of the process (modeling errors), (ii) measurement errors
and (iii) effects of unmeasured causes. The second source is, in fact, a manifestation of our inability
to perfectly design and understand the instrumentation and its characteristics. All sources of uncertainty
in a process lead to the same repercussion: it is not possible to accurately predict that
process. We then have what is known as a random process. To complement this definition, it is
useful to define a perfectly predictable process (within the walls of theoretical analysis), known as
a deterministic process, which generates a deterministic signal. Strictly speaking, such processes are never
encountered in practice. However, the conception is useful in handling situations where the measured
response is due to a mix of known and unknown causes, such as in system identification.
The goal of probability theory and random process modeling is essentially to exploit the
predictability of a random process to the maximum extent possible. With uncertainty reining in the
workhorses of prediction, the simplest recourse is to list all possible predictions (outcomes) and
postulate the chance of each of those outcomes.
Example 7.1: Rainfall Prediction
It is a well-known fact that the occurrence of rainfall in any geographical location can never
be accurately predicted. The natural solution is to list both possibilities (“yes” or “no”) and
associate each outcome with its respective chance (probability).
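The idea in the example, listing the outcomes and postulating a chance for each, can be sketched in a few lines of Python. The probabilities below (0.3 for rain) are purely hypothetical, chosen only for illustration:

```python
import random

# List the possible outcomes and postulate a chance for each
# (hypothetical values; the chances must sum to 1).
outcomes = ["rain", "no rain"]
probabilities = [0.3, 0.7]

random.seed(0)  # fixed seed so the sketch is reproducible

# Draw one "prediction" according to the postulated chances
prediction = random.choices(outcomes, weights=probabilities, k=1)[0]

# Over many draws, the relative frequency of "rain" approaches
# the postulated probability of 0.3
n = 100_000
draws = random.choices(outcomes, weights=probabilities, k=n)
freq_rain = draws.count("rain") / n
```

The last few lines hint at the frequentist reading of probability that underlies much of what follows: the postulated chance of an outcome is the long-run fraction of realizations in which it occurs.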
Naturally, for events described by continuous-valued variables (e.g., reactor temperature, wind
speed), it is not possible to specify the chance associated with each outcome. Instead we work
with infinitesimal intervals of outcomes and what are known as probability density functions. The
cornerstones of the theory of random processes are the concepts of probability and random variables. It
is fitting therefore that we begin this chapter with a review of the same.
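The statement about infinitesimal intervals can be made concrete with a short numerical sketch. Assuming, for illustration, a standard Gaussian density f(x), the probability of an outcome falling in a small interval [x, x + dx] is approximately f(x) dx, which we can check against a fine numerical integration of the density:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Gaussian probability density function (assumed here for illustration)."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Probability over a small interval [x, x + dx] is approximately f(x) * dx
x, dx = 1.0, 0.01
approx = normal_pdf(x) * dx

# Compare with a midpoint-rule integration of the density over the interval
steps = 1000
h = dx / steps
integral = sum(normal_pdf(x + (i + 0.5) * h) * h for i in range(steps))
```

As dx shrinks, the approximation f(x) dx and the exact integral agree ever more closely, which is precisely why a density, rather than a per-outcome chance, is the natural object for continuous-valued variables.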