7.1 The likelihood principle

7.1.1 Introduction

This section would logically come much earlier in the book than it is placed, but it is important to have some examples of Bayesian procedures firmly in place before considering this material. The basic result is due to Birnbaum (1962), and a more detailed consideration of these issues can be found in Berger and Wolpert (1988).

The nub of the argument here is that, in drawing any conclusion from an experiment, only the actual observation x that was made (and not the other possible outcomes that might have occurred) is relevant. This is in contrast to methods by which, for example, a null hypothesis is rejected because the probability of a value as large as or larger than the one actually observed is small, an approach that leads to Jeffreys’ criticism mentioned in Section 4.1 when we first considered hypothesis tests, namely, that ‘a hypothesis that may be true may be rejected because it has not predicted observable results that have not occurred’. Virtually all of the ideas discussed in this book abide by this principle, which is known as the likelihood principle (there are some exceptions; for example, Jeffreys’ rule is not in accordance with it). We shall show that it follows from two other principles, called the conditionality principle and the sufficiency principle, both of which are hard to argue against.
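As a concrete illustration of the contrast, consider the well-known stopping-rule example. An experimenter observes 9 successes and 3 failures in a sequence of Bernoulli trials with success probability θ and wishes to test θ = 1/2. If the number of trials n = 12 was fixed in advance, the number of successes is binomial; if instead the trials continued until the third failure, it is negative binomial. The two sampling schemes give different tail probabilities P(X ≥ 9), so a tail-area test treats them differently, yet both likelihoods are proportional to θ^9 (1 − θ)^3, so the likelihood principle requires identical conclusions about θ. The short Python sketch below (assuming the SciPy library is available) checks both facts numerically.

import numpy as np
from scipy import stats

successes, failures = 9, 3      # observed data under both sampling schemes
theta0 = 0.5                    # null value of the success probability

# Scheme A: n = 12 trials fixed in advance, so the number of successes is
# binomial; one-sided tail area P(X >= 9 | theta = 1/2).
p_binomial = stats.binom.sf(successes - 1, successes + failures, theta0)

# Scheme B: sample until the 3rd failure, so the number of successes before
# the 3rd failure is negative binomial (SciPy counts 'failures before the
# r-th success', so the roles of success and failure are swapped here).
p_negbinomial = stats.nbinom.sf(successes - 1, failures, 1 - theta0)

print(p_binomial, p_negbinomial)    # roughly 0.073 versus 0.033

# The likelihoods, however, are proportional as functions of theta:
# both equal a constant times theta^9 (1 - theta)^3.
thetas = np.linspace(0.05, 0.95, 10)
ratio = (stats.binom.pmf(successes, successes + failures, thetas)
         / stats.nbinom.pmf(successes, failures, 1 - thetas))
print(ratio)                        # constant, equal to C(12,9)/C(11,2) = 4

A tail-area test thus reports different strengths of evidence under the two schemes even though the data and the likelihood are the same; this is exactly the kind of dependence on outcomes that did not occur which the likelihood principle rules out.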

In this section, we shall write x for a particular piece of data, not necessarily one-dimensional, the density of which depends on an unknown parameter θ.
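In outline, the principle towards which these definitions lead can be stated as follows (a standard formulation in terms of the likelihood l(θ | x), that is, the density of x regarded as a function of θ): if two possible observations x and y, possibly arising from different experiments concerning the same parameter θ, satisfy

    l(θ | x) = c l(θ | y)    for all θ,

where c > 0 does not depend on θ, then the conclusions to be drawn about θ from observing x should be the same as those drawn from observing y.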
