Chapter 15 Maximum likelihood estimation

1 INTRODUCTION

The method of maximum likelihood (ML) estimation has great intuitive appeal and generates estimators with desirable asymptotic properties. The estimators are obtained by maximization of the likelihood function, and the asymptotic precision of the estimators is measured by the inverse of the information matrix. Thus, both the first and the second differential of the likelihood function need to be found, and this provides an excellent example of the use of our techniques.
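As a brief sketch of why both differentials are needed (an illustrative summary added here, writing $\Lambda_n$ for the loglikelihood defined in Section 2 below and $\mathcal{F}_n$ as an assumed notation for the information matrix): when the maximum is attained at an interior point of Γ and $\Lambda_n$ is differentiable, the ML estimator $\hat{\gamma}_n$ solves the first-order condition obtained from the first differential, while the information matrix is the expectation of minus the second derivative (Hessian) of the loglikelihood,

$$\frac{\partial \Lambda_n(\gamma)}{\partial \gamma}\bigg|_{\gamma = \hat{\gamma}_n} = 0, \qquad \mathcal{F}_n(\gamma_0) = -\,\mathrm{E}\!\left[\frac{\partial^2 \Lambda_n(\gamma)}{\partial \gamma\, \partial \gamma'}\right]_{\gamma = \gamma_0},$$

and, under standard regularity conditions, $\hat{\gamma}_n$ is asymptotically normal with variance matrix $\mathcal{F}_n^{-1}(\gamma_0)$.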

2 THE METHOD OF MAXIMUM LIKELIHOOD (ML)

Let {y1, y2, …} be a sequence of random variables, not necessarily independent or identically distributed. The joint density function of y = (y1, … , yn) ∈ ℝn is denoted by hn(·; γ0) and is known except for γ0, the true value of the parameter vector to be estimated. We assume that γ0 ∈ Γ, where Γ (the parameter space) is a subset of a finite‐dimensional Euclidean space. For every (fixed) y ∈ ℝn, the real‐valued function

$$L_n(\gamma) \equiv L_n(\gamma; y) = h_n(y; \gamma), \qquad \gamma \in \Gamma,$$

is called the likelihood function, and its logarithm

$$\Lambda_n(\gamma) = \log L_n(\gamma)$$

is called the loglikelihood function.
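As a concrete illustration (an example added here, not part of the original text), let y1, …, yn be independent N(μ, σ²) random variables with γ = (μ, σ²)′ and σ² > 0. Then

$$L_n(\gamma) = \prod_{i=1}^{n} (2\pi\sigma^2)^{-1/2} \exp\!\left(-\frac{(y_i - \mu)^2}{2\sigma^2}\right), \qquad \Lambda_n(\gamma) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \mu)^2.$$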

For fixed y ∈ ℝn, every value $\hat{\gamma}_n = \hat{\gamma}_n(y) \in \Gamma$ with

$$L_n(\hat{\gamma}_n) = \sup_{\gamma \in \Gamma} L_n(\gamma) \qquad (1)$$

is called an ML estimate of γ0. In general, there is no guarantee that an ML ...
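Continuing the normal illustration above (again an added example), condition (1) can be solved in closed form: provided the observations are not all equal, setting the first differential of $\Lambda_n$ to zero and checking that the stationary point attains the supremum gives

$$\hat{\mu}_n = \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i, \qquad \hat{\sigma}_n^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y})^2.$$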
