14
Estimation Methods: Part I
This chapter is the first of two chapters that review four powerful estimation philosophies.
Two of these, namely the least squares and maximum likelihood estimation methods,
attract more attention than the rest. The present chapter is devoted mainly to the presentation
of the least squares class of methods with a brief review of the method of moments and its
generalized class. The subsequent chapter reviews the principles of maximum likelihood and
Bayesian estimation techniques. The main objective is to present the foundational ideas with
illustrative examples. Properties of these estimators are discussed in detail. The presentation
is mainly generic; applications to dynamic models are presented in Part IV. A thorough
understanding of these methods is necessary for choosing the appropriate estimator for a given
model and application.
14.1 INTRODUCTION
In the previous two chapters (Chapters 12 and 13) we learnt the general concepts of estimators
and the means to analyze their goodness. An important message from those chapters is that there
exists more than one way to estimate the parameters (unknowns). Each estimator differs in the
objective function and the assumptions that it makes on the data, which in turn have a direct impact
on the quality of the estimates. The Cramér-Rao (C-R) inequality states the conditions under which an efficient
estimator can be found; at the very least, one should use a best linear unbiased estimator (BLUE). Chapter 12
showed how different objectives or estimation cost functions lead to different forms of estimators
with varying solutions and properties. In principle, there exist innumerable methods for estimation.
However, it suffices to study four classes of methods, given their wide usage, their universal appeal
and the fact that most existing methods can be cast into one of these forms.
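As a quick reminder (the notation here, with $f(\mathbf{y};\theta)$ for the p.d.f. of the data and $\mathcal{I}(\theta)$ for the Fisher information, is assumed for illustration and may differ from that of Chapter 13), the scalar form of the C-R inequality states that for any unbiased estimator $\hat{\theta}$ of $\theta$,
$$
\mathrm{var}(\hat{\theta}) \;\geq\; \frac{1}{\mathcal{I}(\theta)}, \qquad \mathcal{I}(\theta) = -E\left[\frac{\partial^{2} \ln f(\mathbf{y};\theta)}{\partial \theta^{2}}\right].
$$
An estimator that attains this lower bound is said to be efficient.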
The four classes of methods are the method of moments (MoM) or its generalized version (GMM),
least squares (LS), maximum likelihood estimation (MLE) and Bayesian methods. The first three
classes of estimators belong to the family of extremum estimators, i.e., estimators that maximize or
minimize a cost criterion. Among these, of particular interest are the LS and MLE methods because
of their wide applicability and their versatility. The method of moments is less sophisticated than
these two approaches, but can offer good starting solutions for initializing non-linear algorithms
such as non-linear LS and MLE. Bayesian methods, on the other hand, are relatively recent and are yet to
gain the same breadth of application as the LS and MLE methods. Nevertheless, they hold considerable promise for the future.
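To make the notion of an extremum estimator concrete, each of the first three methods can be cast as the optimizer of a cost criterion $V_N(\theta)$ built from $N$ observations. The schematic below is a generic sketch; the symbols $\varepsilon_k(\theta)$ (prediction error) and $L(\theta)$ (likelihood) are assumed notation, made precise in the sections and chapter that follow.
$$
\hat{\theta}_{N} = \arg\min_{\theta} V_{N}(\theta), \qquad V_{N}^{\mathrm{LS}}(\theta) = \sum_{k=1}^{N} \varepsilon_{k}^{2}(\theta), \qquad V_{N}^{\mathrm{MLE}}(\theta) = -\ln L(\theta; y_{1}, \ldots, y_{N}).
$$
The MoM fits the same template, with $V_N$ measuring the mismatch between sample and theoretical moments.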
Keeping in view the vastness and the depth of these techniques, for instructional purposes, the
presentation is divided into two chapters. In the first part, i.e., this chapter, the focus is on the least
squares methods, with a brief tour of the MoM, while the second part, contained in Chapter 15, is
devoted to MLE and Bayesian estimation techniques. The division of the material is also justified
from the viewpoint that neither LS nor MoM demands knowledge of the p.d.f. for estimation
purposes, whereas MLE and Bayesian methods are set up entirely in a probabilistic framework.
In both chapters, the purpose is to lay down the basic ideas governing these methods, derive the
generic solutions and shed light on the overall performance of these estimators.
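As a small illustration of this distinction, the sketch below estimates the mean of a data set in three ways: by matching the first sample moment (MoM), by minimizing the sum of squared deviations (LS) and by maximizing a log-likelihood that presumes a Gaussian p.d.f. (MLE). The example, including the choice of data and the use of numpy and scipy, is purely illustrative and is not drawn from the text.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simulated data, purely for illustration
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=200)

# MoM: equate the first theoretical moment (the mean) to the sample moment.
# No p.d.f. is assumed.
theta_mom = np.mean(y)

# LS: minimize the sum of squared deviations. No p.d.f. is assumed.
theta_ls = minimize_scalar(lambda theta: np.sum((y - theta) ** 2)).x

# MLE: maximize the Gaussian log-likelihood (equivalently, minimize its
# negative). A p.d.f. must be assumed; the noise variance is fixed at its
# sample value for simplicity.
sigma2 = np.var(y)

def neg_log_likelihood(theta):
    n = len(y)
    return 0.5 * np.sum((y - theta) ** 2) / sigma2 \
        + 0.5 * n * np.log(2.0 * np.pi * sigma2)

theta_mle = minimize_scalar(neg_log_likelihood).x

print(theta_mom, theta_ls, theta_mle)
```

The three estimates coincide here because, for Gaussian errors, the LS and ML criteria attain their optima at the same point.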
The chapters to follow present applications of these methods to the estimation of signal properties
(Chapter 16), time-series models (Chapter 19) and input-output models (Chapters 20 and 21). In
the application to parameter estimation of dynamic models, specific issues arise. These aspects are
discussed in the respective chapters of Part IV. An interesting fact as we shall learn later is that