15 The pillars of applied statistics I – estimation
There are two pillars of statistical theory upon which all applied work in statistical inference rests. In this chapter we shall focus on estimation; in the next chapter, we shall look at hypothesis testing. Ronald Fisher, Jerzy Neyman and Egon Pearson, among the most famous statisticians of the past (their names appear in FIGURE 22.2), laid the foundations of modern methods of statistical inference in the 1920s and 1930s. They refined estimation procedures proposed by earlier thinkers, and invented terminology and methods of their own. This was an era of fast-moving developments in statistical theory.
For relevant historical background, a valuable resource is Jeff Miller’s website at [15.1], titled Earliest Known Uses of Some of the Words of Mathematics. The entry for ‘Estimation’ informs us that the terms ‘estimation’ and ‘estimate’, together with three criteria for defining a good estimator – ‘consistency’, ‘efficiency’ and ‘sufficiency’ – were first used by Fisher (1922), online at [15.2]. Fisher defined the field in a way that sounds quite familiar to us today: ‘Problems of estimation are those in which it is required to estimate the value of one or more of the population parameters from a random sample of the population.’ In the same article, he presented ‘maximum likelihood’ as a method of (point) estimation with some very desirable statistical properties.
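To make Fisher's idea concrete, here is a minimal sketch of maximum likelihood estimation in Python. It is not drawn from Fisher's article or from this book: the data are hypothetical, and the grid search is a deliberately simple stand-in for calculus. It estimates the success probability p of a Bernoulli distribution and compares the result with the closed-form answer, the sample mean.

```python
# A minimal, illustrative sketch of maximum likelihood estimation.
# The data below are hypothetical, chosen only for illustration.
import numpy as np

def neg_log_likelihood(p, data):
    """Negative log-likelihood of i.i.d. Bernoulli(p) observations."""
    data = np.asarray(data)
    return -np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))

data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # ten hypothetical trials, 7 successes

# Evaluate the negative log-likelihood over a grid of candidate values
# and pick the minimiser -- a brute-force stand-in for differentiation.
grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmin([neg_log_likelihood(p, data) for p in grid])]

print(f"grid-search MLE:          {p_hat:.2f}")           # ~0.70
print(f"closed form (sample mean): {np.mean(data):.2f}")  # 0.70
```

The agreement between the two answers reflects a standard result: for Bernoulli data, the likelihood is maximised exactly at the sample proportion of successes.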
Neyman, who subsequently pioneered the technique of interval estimation, introduced the confidence interval in the 1930s as a complement to Fisher's point estimation.
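As an illustration of interval estimation, here is a minimal sketch, again not drawn from the sources above: a large-sample 95% confidence interval for a population mean, computed from simulated data using the familiar formula of the sample mean plus or minus 1.96 estimated standard errors (1.96 being the 97.5% quantile of the standard normal distribution).

```python
# A minimal, illustrative sketch of a 95% confidence interval for a mean.
# The sample is simulated; in practice it would be observed data.
import numpy as np

rng = np.random.default_rng(seed=1)
sample = rng.normal(loc=50, scale=10, size=100)  # simulated observations

mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(sample.size)   # estimated standard error

lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI for the mean: ({lower:.2f}, {upper:.2f})")
```

In Neyman's formulation, the interval, not the parameter, is the random quantity: across repeated samples, intervals constructed this way cover the true mean about 95% of the time.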