17
Non-Parametric and Parametric Models for Identification
The objective of this chapter is to provide mathematical descriptions of the models used in linear
identification. Foundations for the material were provided in Chapters 4 and 9. We shall study
both non-parametric (response-based) and parametric (prediction-error) models. Estimation
of each of these model types is discussed in Chapters 20 and 21.
17.1 INTRODUCTION
The chapters in Parts I and II laid down the foundations on models of discrete-time deterministic
and stochastic processes, respectively, while Part III provided the paraphernalia for estimating these
models. In Part IV (this part), beginning with this chapter, our goal is to collectively apply the
concepts from the previous chapters to frame and solve identification problems. We shall begin by
studying the different model types amenable to linear identification. Essentially, we shall weave the
descriptions of Chapters 4 and 5 with those of Chapters 9 to 11. The end result is a composite model
that aims to describe both the deterministic and stochastic effects of the sampled-data system.
As usual, we shall observe two broad distinctions: (i) non-parametric and (ii) parametric descrip-
tions, each with their standard merits and demerits. In addition, these models may be cast either in
time-domain or in frequency-domain. Estimation of these models is deferred to a later chapter because it requires familiarity with certain additional concepts, as well as some upfront insight into the practical aspects of identification.
17.2 THE OVERALL MODEL
The idea here is to fuse the deterministic and stochastic effects to arrive at a composite model for the
measurement y[k]. An important assumption that we shall make is that the output measurement y[k]
is a superposition of the “true” response (of the deterministic process) y_true[k] and the disturbances/noise v[k]:

y[k] = y_true[k] + v[k]    (17.1)
Giving an LTI treatment to the deterministic and stochastic processes, and denoting these systems
by G and H, respectively, we symbolically have that
y[k] = Gu[k] + He[k],    G, H : LTI,  e[k] : WN    (17.2)
where the notation Gu should be read as G operating on u (likewise for He) and not as multi-
plication. It is a common practice to refer to G as the plant model and H as the noise model in
identification literature.
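As an illustration, the composite description (17.2) can be simulated directly. The sketch below assumes a first-order plant G and a first-order noise model H; all numerical coefficients are arbitrary illustrative choices, not values from the text:

```python
# Sketch: simulating y[k] = G u[k] + H e[k] for an assumed first-order plant
# G(q^-1) = b1 q^-1 / (1 + a1 q^-1) and noise model H(q^-1) = 1 / (1 + c1 q^-1).
# The coefficients below are illustrative only.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 500
u = np.sign(rng.standard_normal(N))   # a pseudo-random binary input
e = rng.standard_normal(N)            # fictitious white noise, variance 1

# G operating on u: y_true[k] = -a1*y_true[k-1] + b1*u[k-1]
b1, a1 = 2.0, -0.8
y_true = lfilter([0.0, b1], [1.0, a1], u)

# H operating on e: v[k] = -c1*v[k-1] + e[k]
c1 = -0.5
v = lfilter([1.0], [1.0, c1], e)

y = y_true + v                        # measured output, as in (17.2)
```

Note that G and H act as operators (filters) on their respective inputs, exactly as the notation Gu and He is meant to be read.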
Figure 17.1 portrays the foregoing ideas. It is useful to explicitly list the assumptions in writing
(17.2):
i. Additive noise: The stochastic term superposes on the true response.
Principles of System Identification: Theory and Practice
FIGURE 17.1 Input-output representation of a deterministic-plus-stochastic LTI system. (Block diagram: the input u[k] drives the process model G; the fictitious white-noise e[k] drives the noise model H, producing the stochastic effect v[k]; the two outputs are summed to give the observed output y[k].)
ii. Linearity and time-invariance of G: The deterministic process is LTI. No restrictions on stability
of G are necessary at this point.
iii. Stationarity of v[k]: The stochastic signal v[k] is stationary (Definition 7.8). Further it satisfies
the spectral factorization result (recall Theorem 11.7), i.e., it can be expressed as the output of
an LTI system driven by white-noise (Definition 8.2).
Thus, we have in effect a deterministic-plus-probabilistic description for the input-output pair
{u[k], y[k]}.
Identification problem
The problem of system identification is as follows:
Given N observations of input-output data Z^N = {y[k], u[k]}, k = 0, 1, ..., N − 1, obtain
optimal estimates of G, H and the statistical properties of e[k] (usually µ_e and σ²_e).
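To make the problem statement concrete, the following is a minimal least-squares sketch for a first-order ARX structure. The simulated "true" system, the model orders, and the noise level are all assumptions for illustration; systematic estimation methods are the subject of later chapters.

```python
# Minimal sketch of the identification problem: given Z^N = {u[k], y[k]},
# fit y[k] = -a1*y[k-1] + b1*u[k-1] + e[k] by least squares.
# The simulated system (a1 = -0.7, b1 = 1.5, sigma_e = 0.1) is illustrative.
import numpy as np

rng = np.random.default_rng(1)
N = 2000
u = np.sign(rng.standard_normal(N))
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.7 * y[k-1] + 1.5 * u[k-1] + 0.1 * rng.standard_normal()

# Least-squares estimate of theta = (a1, b1) from the N observations
Phi = np.column_stack([-y[:-1], u[:-1]])   # regressors [-y[k-1], u[k-1]]
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a1_hat, b1_hat = theta
resid = y[1:] - Phi @ theta
sigma2_e_hat = resid.var()                 # estimate of sigma_e^2
```

The estimates (a1_hat, b1_hat, sigma2_e_hat) recover G, H and the noise variance for this simple structure, mirroring the triple (G, H, σ²_e) sought in the problem statement.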
In order to be able to apply the estimation techniques of Chapters 14 and 16 for identification, the
stationarity assumption for purely stochastic processes has to be extended to the class of signals with
composite, i.e., deterministic plus stochastic effects. Essentially a unified definition of stationarity
for deterministic and stochastic signals is required. The following section is devoted to this aspect.
17.3 QUASI-STATIONARITY
To appreciate the need for a unified definition of stationarity, consider the mean of the output in
(17.2),
E(y[k]) = G(q^-1)u[k] + E(v[k])    (17.3)
Thus, even with a stationary v[k] the ensemble average of the output changes with time. However,
given that u[k] is generally deterministic and that we work with a single realization of the output, the
time-average property is also a crucial one. Therefore, the question arises: can one still use the theory of stationary stochastic processes to identify models for the stochastic and deterministic sub-systems, i.e., to identify G and a time-series model for v[k], given that y[k] is not even weakly stationary?
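The tension between ensemble averages and time averages can be seen numerically. The sketch below (all system and input choices are illustrative assumptions) simulates many realizations of (17.2) with a deterministic sinusoidal input: the ensemble mean E(y[k]) clearly varies with k, while the time average over a single realization stays bounded near zero.

```python
# Illustration: with a deterministic input, E(y[k]) is time-varying (so y[k]
# is not weakly stationary), yet time averages of a single realization remain
# well-behaved. System, input, and sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N, R = 400, 500
k = np.arange(N)
u = np.sin(0.2 * k)                           # deterministic input

def realization():
    y = np.zeros(N)
    v = rng.standard_normal(N)                # stationary white disturbance
    for n in range(1, N):
        y[n] = 0.5 * y[n-1] + u[n-1] + v[n]   # y[k] = G u[k] + (noise)
    return y

Y = np.array([realization() for _ in range(R)])
ens_mean = Y.mean(axis=0)   # estimate of E(y[k]): oscillates with k
time_avg = Y[0].mean()      # time average over one realization: near zero
```

Here ens_mean tracks the filtered sinusoid G(q^-1)u[k], confirming (17.3), while time_avg is small and bounded, which motivates the time-average-based notion of quasi-stationarity developed next.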
It turns out that if the applied input is “well-behaved,” then the output can be expected to have
bounded time averages. Further, even though the ACVF of y[k] as well as the CCVF between the
output and input may change with time for short samples, they become independent of time for