Chapter 11. Hypothesis Testing

Back to the Euro problem

In “The Euro problem” I presented a problem from MacKay’s Information Theory, Inference, and Learning Algorithms:

A statistical statement appeared in “The Guardian” on Friday January 4, 2002:

When spun on edge 250 times, a Belgian one-euro coin came up heads 140 times and tails 110. ‘It looks very suspicious to me,’ said Barry Blight, a statistics lecturer at the London School of Economics. ‘If the coin were unbiased, the chance of getting a result as extreme as that would be less than 7%.’

But do these data give evidence that the coin is biased rather than fair?

We estimated the probability that the coin would land face up, but we didn’t really answer MacKay’s question: Do the data give evidence that the coin is biased?

In Chapter 4 I proposed that data are in favor of a hypothesis if the data are more likely under the hypothesis than under the alternative or, equivalently, if the Bayes factor — the ratio of the likelihoods — is greater than 1.
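As a minimal illustration of this criterion (not code from the book), we can compare the likelihood of the Euro data under the fair hypothesis with its likelihood under one hypothetical biased value of x; here 0.56, the observed frequency of heads, is my choice for illustration:

```python
# Sketch: Bayes factor comparing a specific biased coin (x = 0.56,
# a hypothetical choice) against a fair coin (x = 0.5) for the Euro data.
heads, tails = 140, 110

def likelihood(x):
    # Probability of the observed heads/tails counts given
    # heads-probability x (the binomial coefficient cancels in the ratio).
    return x**heads * (1 - x)**tails

bayes_factor = likelihood(0.56) / likelihood(0.5)
print(bayes_factor)  # greater than 1: the data favor this biased hypothesis
```

Because 0.56 was chosen after seeing the data, this ratio overstates the evidence for bias; handling that honestly is the point of the sections that follow.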

In the Euro example, we have two hypotheses to consider: I’ll use F for the hypothesis that the coin is fair and B for the hypothesis that it is biased.

If the coin is fair, it is easy to compute the likelihood of the data, p(D|F). In fact, we already wrote the function that does it.

    def Likelihood(self, data, hypo):
        # hypo is the probability of heads in percent; data is (heads, tails)
        x = hypo / 100.0
        heads, tails = data
        like = x**heads * (1-x)**tails
        return like

To use it we can create a Euro suite ...
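To make the computation concrete, here is a standalone sketch (not the book's `thinkbayes` module, which defines `Euro` as a `Suite` subclass): we instantiate the class and evaluate the likelihood of the data under the fair hypothesis, hypo = 50.

```python
# Standalone sketch of evaluating p(D|F); the real Euro class in the
# book inherits from thinkbayes.Suite, which is omitted here.
class Euro:
    def Likelihood(self, data, hypo):
        # hypo is the probability of heads in percent; data is (heads, tails)
        x = hypo / 100.0
        heads, tails = data
        return x**heads * (1 - x)**tails

suite = Euro()
data = 140, 110
like_f = suite.Likelihood(data, 50)  # equals 0.5 ** 250, a tiny number
```

The result is astronomically small, which is expected: any particular sequence of 250 spins is improbable. What matters for the hypothesis test is not this number by itself but its ratio to the likelihood under the biased hypothesis.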
