4 Bayesian Statistics

To anyone sympathetic with the current neo‐Bernoullian neo‐Bayesian Ramseyesque Finettist Savageous movement in statistics, the subject of testing goodness of fit is something of an embarrassment.

F. J. Anscombe (1962)

4.1 The Bayesian Paradigm

There are several paradigms for approaching statistical inference, but the two dominant ones are frequentist (sometimes called classical or traditional) and Bayesian. The overview in the previous chapter covered mainly classical approaches. According to the Bayesian paradigm, the unobservable parameters in a statistical model are treated as random. When no data are available, a prior distribution is used to quantify our knowledge about the parameter. When data are available, we can update our prior knowledge using the conditional distribution of the parameters given the data. The transition from the prior to the posterior is accomplished via Bayes' theorem.

Suppose that before the experiment our knowledge about the parameter $\theta$ is described by the prior distribution $\pi(\theta)$. The data come from the assumed model (likelihood), which depends on the parameter and is denoted by $f(x \mid \theta)$. Bayes' theorem updates the prior to the posterior by accounting for the data,

$$\pi(\theta \mid x) = \frac{f(x \mid \theta)\,\pi(\theta)}{m(x)}, \qquad m(x) = \int f(x \mid \theta)\,\pi(\theta)\,\mathrm{d}\theta,$$

where $m(x)$ is the marginal distribution of the data.

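The prior-to-posterior update can be made concrete on a discrete grid of candidate parameter values, where the integral defining the marginal becomes a sum. The following is a minimal Python sketch (not from the book, which uses R): it assumes a hypothetical normal likelihood with known standard deviation and a uniform prior over the grid, and the function names `normal_pdf` and `posterior_on_grid` are illustrative.

```python
import math

def normal_pdf(x, mean, sd):
    # Density of Normal(mean, sd^2) evaluated at x
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_on_grid(x, thetas, prior, sigma=1.0):
    # Unnormalized posterior: likelihood f(x | theta) times prior pi(theta)
    unnorm = [normal_pdf(x, t, sigma) * p for t, p in zip(thetas, prior)]
    m_x = sum(unnorm)  # discrete analogue of the marginal m(x)
    return [u / m_x for u in unnorm]

# Uniform prior over a grid of candidate theta values from -5 to 5
thetas = [i / 10 for i in range(-50, 51)]
prior = [1.0 / len(thetas)] * len(thetas)

# Posterior after observing a single data point x = 1.2
post = posterior_on_grid(x=1.2, thetas=thetas, prior=prior)
```

Because the prior is uniform here, the posterior is simply the normalized likelihood, and its mode sits at the grid point closest to the observation; with an informative prior the posterior would be pulled toward the prior's center of mass.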