# 8 PARAMETRIC POINT ESTIMATION

## 8.1 INTRODUCTION

In this chapter we study the theory of point estimation. Suppose, for example, that a random variable *X* is known to have a normal distribution *N*(*μ*, *σ*^{2}), but we do not know one of the parameters, say *μ*. Suppose further that a sample *X*_{1}, *X*_{2}, …, *X*_{n} is taken on *X*. The problem of point estimation is to pick a (one-dimensional) statistic *T*(*X*_{1}, *X*_{2}, …, *X*_{n}) that best estimates the parameter *μ*. The numerical value of *T* when the realization is *x*_{1}, *x*_{2}, …, *x*_{n} is frequently called an *estimate* of *μ*, while the statistic *T* is called an *estimator* of *μ*. If both *μ* and *σ*^{2} are unknown, we seek a joint statistic as an estimator of (*μ*, *σ*^{2}).
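As a concrete illustration of the estimator/estimate distinction, the sketch below (not from the text) uses the familiar sample mean as an estimator *T* of *μ*, and the sample variance as an estimator of *σ*^{2}. The parameter values `mu = 5.0`, `sigma = 2.0`, and sample size `n = 1000` are hypothetical choices for the simulation.

```python
import random
import statistics

# Hypothetical "true" parameters of the normal population (assumed for illustration).
mu, sigma, n = 5.0, 2.0, 1000

# Draw a realization x_1, ..., x_n of the sample X_1, ..., X_n.
random.seed(0)
sample = [random.gauss(mu, sigma) for _ in range(n)]

# The estimator T(X_1, ..., X_n) = (1/n) * sum(X_i); its value on this
# realization is an estimate of mu.
estimate_mu = statistics.fmean(sample)

# If sigma^2 is also unknown, the sample variance (with n - 1 divisor)
# serves as the companion estimator in the joint statistic for (mu, sigma^2).
estimate_var = statistics.variance(sample)

print(estimate_mu, estimate_var)
```

For a large sample such as this, both realized values should land near the true parameters (*μ* = 5, *σ*^{2} = 4), though which estimator is "best" in a precise sense is exactly the question the chapter develops.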

In Section 8.2 we formally describe the problem of parametric point estimation. Since the class of all estimators is too large in most problems, it is not possible to find the "best" estimator in this class. One narrows the search somewhat by requiring that the estimators have certain specified desirable properties. We describe some of these properties and also outline some criteria for comparing estimators.

Section 8.3 deals, in detail, with some important properties of statistics such as sufficiency, completeness, and ancillarity. We use these properties in later sections to facilitate our search for optimal estimators. Sufficiency, completeness, ...
