Chapter 2
The logic of Bayesian networks and influence diagrams

2.1 Reasoning with graphical models

2.1.1 Beyond detective stories

Sherlock Holmes was a lucky guy. He had Conan Doyle at his disposal to arrange the plot in such a way that the mechanism of eliminative induction worked neatly, the very mechanism that the simple-minded Watson was apparently unable to appreciate, as Holmes remarked in the novel The Sign of Four (Conan Doyle 1953, p. 111).

‘You will not apply my precept’, he said, shaking his head. ‘How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?’

Holmes' friend did not really deserve such a rebuke: the ‘precept’ is not easy to apply. The philosopher of science John Earman has observed, with respect to Holmes' dictum, that

(…) the presupposition of the question can be given a respectable Bayesian gloss, namely, no matter how small the prior probability of the hypothesis, the posterior probability of the hypothesis goes to unity if all of the competing hypotheses are eliminated. This gloss fails to work if the Bayesian agent has been so unfortunate as to assign the true hypothesis a zero prior. (Earman 1992, p. 163)

In other words, the gloss fails to work if the agent has been so unfortunate as to choose the wrong model. This does not usually happen in literary fiction.
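Earman's point can be made concrete with a small numerical sketch (not from the book; the hypothesis labels and prior values are invented for illustration). Eliminating a hypothesis amounts to conditioning on evidence that has likelihood zero under it; Bayes' theorem then renormalises the priors of the survivors, so even a tiny prior is driven to one once every rival is gone, unless the true hypothesis itself started with a zero prior.

```python
def posterior(priors, eliminated):
    """Renormalise priors after eliminating hypotheses.

    This is Bayes' theorem for evidence with likelihood 0 under the
    eliminated hypotheses and 1 under the rest.
    """
    surviving = {h: p for h, p in priors.items() if h not in eliminated}
    total = sum(surviving.values())
    if total == 0:
        # Earman's caveat: if the true hypothesis was assigned a zero
        # prior, elimination leaves nothing to renormalise.
        raise ValueError("every surviving hypothesis had zero prior")
    return {h: p / total for h, p in surviving.items()}


# Hypothetical case: H1 is highly improbable a priori.
priors = {"H1": 0.001, "H2": 0.499, "H3": 0.5}
post = posterior(priors, eliminated={"H2", "H3"})
print(post)  # H1's posterior is 1.0 despite its tiny prior

# The unfortunate agent: the true hypothesis H1 gets a zero prior,
# so eliminating its rivals raises an error instead of vindicating it.
bad_priors = {"H1": 0.0, "H2": 0.6, "H3": 0.4}
# posterior(bad_priors, eliminated={"H2", "H3"})  # would raise ValueError
```

However improbable H1 was at the outset, it becomes certain once the impossible alternatives are removed, which is exactly the 'respectable Bayesian gloss' on Holmes' dictum.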

The classic English detective story is the paradigm of eliminative induction at work. The suspects in the Colonel's ...
