6 Theory of Inference Based on the Likelihood Function

6.1 Review of Likelihood-Based Estimation for Complete Data

6.1.1 Maximum Likelihood Estimation

Many methods of estimation for incomplete data can be based on the likelihood function under specific modeling assumptions. In this section, we review the basic theory of inference based on the likelihood function and describe how it is implemented in the incomplete-data setting. We begin by considering maximum likelihood and Bayes estimation for complete data sets. Only basic results are given, and mathematical details are omitted. For more detailed material, see, for example, Cox and Hinkley (1974) and Gelman et al. (2013).

Suppose that Y denotes the data, where Y may be scalar, vector-valued, or matrix-valued according to context. The data are assumed to be generated by a model described by a probability or density function p_Y(Y = y ∣ θ) = f_Y(y ∣ θ), indexed by a scalar or vector parameter θ, where θ is known only to lie in a parameter space Ω_θ. The “natural” parameter space for θ is the set of values of θ for which f_Y(y ∣ θ) is a proper density – for example, the whole real line for means, the positive real line for variances, or the interval from zero to one for probabilities. Unless stated otherwise, we assume the natural parameter space for θ. Given the model and parameter θ, f_Y(y ∣ θ) is a function of Y that gives the probabilities or densities of various Y values.
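To make the notation concrete, the following sketch (not from the text) takes the univariate normal model as an example of f_Y(y ∣ θ), with θ = (μ, σ²) and natural parameter space μ on the real line and σ² > 0; the function name f_Y and the particular values used are purely illustrative.

```python
# Illustrative sketch: the normal model N(mu, sigma^2) as an example of
# f_Y(y | theta), viewed as a function of y for a fixed parameter theta.
import numpy as np
from scipy.stats import norm

def f_Y(y, theta):
    """Density of Y under theta = (mu, sigma2), evaluated at y."""
    mu, sigma2 = theta
    assert sigma2 > 0, "theta must lie in the natural parameter space"
    return norm.pdf(y, loc=mu, scale=np.sqrt(sigma2))

theta = (0.0, 1.0)                     # fixed parameter value
y_values = np.array([-1.0, 0.0, 2.5])  # various possible Y values
print(f_Y(y_values, theta))            # densities of those Y values given theta
```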

Definition 6.1 The likelihood function L_Y(θ ∣ y) is any ...
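The definition is truncated here. Under the standard convention that L_Y(θ ∣ y) is any function of θ ∈ Ω_θ proportional to f_Y(y ∣ θ) for the observed data y, a minimal numerical sketch of maximum likelihood estimation for the normal example above might look as follows; the simulated data, the use of scipy.optimize.minimize, and the log-variance parameterization are illustrative choices, not prescribed by the text.

```python
# Hedged sketch: maximize the log-likelihood of an i.i.d. normal sample over
# theta = (mu, log sigma2); parameterizing by log sigma2 keeps the search
# inside the natural parameter space sigma2 > 0.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=200)   # simulated complete data

def negative_loglik(params, y):
    """Negative log-likelihood as a function of theta for fixed data y."""
    mu, log_sigma2 = params
    sigma = np.sqrt(np.exp(log_sigma2))
    return -np.sum(norm.logpdf(y, loc=mu, scale=sigma))

result = minimize(negative_loglik, x0=np.array([0.0, 0.0]), args=(y,))
mu_hat, sigma2_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma2_hat)   # close to the sample mean and (biased) sample variance
```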
