13 Goodness of Estimators
This chapter presents the performance measures of estimators discussed in Chapter 12, namely their
definitions and interpretations. In addition, it presents the Fisher information matrix, an im-
portant quantity in estimation and identification. Although somewhat theoretical and mathe-
matical in nature, the material presented in this chapter forms the backbone of identifiability
analysis, of computing errors in (model and statistical) parameter estimates, of constructing
confidence intervals for parameters, and of arriving at distributions of estimates.
13.1 INTRODUCTION
From Section 12.3.1 recall an important fact - obtaining an estimate is not necessarily the only
end goal of an estimation exercise. An equally important goal is to quantify the errors incurred
in estimation and provide a confidence region for the “truth.” We have previously discussed the
sources of estimation errors in Chapter 12. A realistic experimental design can at best minimize
some sources of error, but it cannot eliminate them. Zero error occurs only in an idealistic setting, since
even the most careful experiment is not impervious to factors beyond our control.
A natural recourse is to design the estimator in such a way that, despite the inevitable uncertain-
ties, it delivers an estimate that is as accurate and precise as possible. Such an estimator can be
developed only by understanding how two contrasting aspects influence the quality of the estimate:
(i) experimental factors within our control (e.g., sample size, sampling rate) and (ii) random
variations in the process and data. Furthermore, we require quantitative definitions of
qualifiers such as accuracy, precision and so on. The theory presented in this chapter serves exactly
this purpose. Another useful application of this theory is that it equips us to select the “right”
estimator from a set of competing estimators in a principled manner.
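To illustrate the idea of comparing competing estimators in a principled manner, consider the following minimal sketch (an assumed setup, not taken from the text): two common estimators of the location parameter of i.i.d. Gaussian data, the sample mean and the sample median, are ranked by their Monte Carlo bias, variance and mean-squared error.

```python
# Minimal sketch (assumed setup): compare two competing estimators of the
# location parameter theta_0 of i.i.d. Gaussian data by Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(0)
theta_0 = 2.0      # "true" parameter value (assumed for illustration)
sigma = 1.0        # noise standard deviation (assumed)
N = 100            # sample size
n_mc = 5000        # number of Monte Carlo realizations

est_mean = np.empty(n_mc)
est_median = np.empty(n_mc)
for r in range(n_mc):
    y = theta_0 + sigma * rng.standard_normal(N)   # one realization of the data
    est_mean[r] = np.mean(y)                       # estimator 1: sample mean
    est_median[r] = np.median(y)                   # estimator 2: sample median

for name, est in (("sample mean", est_mean), ("sample median", est_median)):
    bias = est.mean() - theta_0
    mse = np.mean((est - theta_0) ** 2)
    print(f"{name:13s}: bias = {bias:+.4f}, variance = {est.var():.5f}, MSE = {mse:.5f}")
```

For Gaussian data the sample mean is expected to yield the smaller mean-squared error; under heavy-tailed noise the ranking can reverse, which is precisely why quantitative performance measures are needed to make the choice principled rather than ad hoc.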
As mentioned in Section 12.3.1, there are six major qualifying characteristics of any estimator.
The developments to follow are devoted to a study of these properties. First, however, we
review an important concept known as the Fisher information (matrix), which is central to the
evaluation of variance bounds of estimators and to the concept of identifiability.
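As a preview, and using the standard scalar-parameter definitions (the chapter itself develops the matrix case), the Fisher information and the associated lower bound on the variance of an unbiased estimator may be written as follows; under the usual regularity conditions the two expectations coincide.

```latex
\[
I(\theta) \;=\; E\!\left[\left(\frac{\partial \ln f(\mathbf{y};\theta)}{\partial \theta}\right)^{\!2}\right]
\;=\; -\,E\!\left[\frac{\partial^{2} \ln f(\mathbf{y};\theta)}{\partial \theta^{2}}\right],
\qquad
\operatorname{var}\big(\hat{\theta}\big) \;\ge\; \frac{1}{I(\theta_{0})}
\quad \text{for any unbiased estimator } \hat{\theta}.
\]
```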
Notation
The notation θ̂ is used to denote both the estimator and the estimate. Sometimes, to emphasize the depen-
dence of the estimate (estimator) on the observed data, the functional form θ̂(·) may be used. On
other occasions the notation θ̂_N may be used to explicitly denote its dependence on the sample size N.
The true value is denoted by θ_0. Also recall that any estimate is, in general, a random variable, since
it is a function of observed data, which themselves contain random effects.
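The random nature of the estimate and its dependence on the sample size, emphasized by the notation θ̂_N, can be seen from a short simulation (a minimal sketch under assumed settings, not from the text): repeating the experiment many times produces a spread of estimates, and that spread shrinks as N grows.

```python
# Minimal sketch (assumed setup): the estimate theta_hat_N is a random variable
# whose spread over repeated experiments depends on the sample size N.
import numpy as np

rng = np.random.default_rng(1)
theta_0, sigma, n_mc = 2.0, 1.0, 2000   # assumed true value, noise level, replications

for N in (10, 100, 1000):
    # n_mc independent data records of length N; one sample-mean estimate per record
    estimates = (theta_0 + sigma * rng.standard_normal((n_mc, N))).mean(axis=1)
    print(f"N = {N:4d}: spread (std. dev.) of theta_hat_N = {estimates.std():.4f}")
```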
All the results stated in this chapter (and in general estimation theory) assume that the observed
data is stochastic. It is further assumed that we have a vector of observations y characterized
by a joint probability density function f(y; θ) or f(y|θ), where θ is, as usual, the vector of
parameters. Note that the assumption of randomness on y does not limit the applicability of these
results to identification, since measurements contain random effects as well. When required (in later chapters),
the results are applied by replacing the density functions of observed data with their conditional
densities (or with density functions of prediction errors).
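As a concrete special case (illustrative only, not a restriction imposed by the text), if the N observations are independent and identically distributed Gaussian with mean θ and known variance σ², the joint density above factorizes as

```latex
\[
f(\mathbf{y};\theta) \;=\; \prod_{k=1}^{N} \frac{1}{\sqrt{2\pi\sigma^{2}}}
\exp\!\left(-\frac{(y_{k}-\theta)^{2}}{2\sigma^{2}}\right),
\qquad
\mathbf{y} = \begin{bmatrix} y_{1} & y_{2} & \cdots & y_{N} \end{bmatrix}^{T}.
\]
```

Viewed as a function of θ for given data, such a density is what is referred to as the likelihood.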