9.3 Data augmentation by Monte Carlo
9.3.1 The genetic linkage example revisited
We can illustrate this technique with the example on genetic linkage we considered in connection with the EM algorithm. Recall that we found that the likelihood of the augmented data was

p(y | η) ∝ η^(y1 + x4) (1 − η)^(x2 + x3).
In this method, we suppose that at each stage we have a ‘current’ distribution for η, which initially is the prior distribution. At all stages this has to be a proper distribution, so we may as well take our prior to be the uniform distribution Be(1, 1), which in any case differs little from the reference prior Be(0, 0). At the tth stage in the imputation step, we pick m possible values η^(1), … , η^(m) of η by some (pseudo-)random mechanism with the current density, and then for each of these values η^(i) we generate a value y^(i) for the augmented data, which in this particular example simply means picking a value y1^(i) from a binomial distribution of index x1 and parameter η^(i)/(η^(i) + 2). Since we had a Be(1, 1) prior, this gives a posterior

p(η | x, y^(i)) ∼ Be(y1^(i) + x4 + 1, x2 + x3 + 1).
In the posterior step, we take as our new current distribution for η the equally weighted mixture

(1/m) Σ p(η | x, y^(i))     (i = 1, … , m)

of these beta distributions, and we then return to the imputation step, iterating until the current distribution settles down.
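The two steps above can be sketched in code. The following is a minimal illustration, not taken from the book: the function names and the choices m = 50 and 25 iterations are mine, and it uses only the Python standard library, drawing the binomial by summing Bernoulli trials.

```python
import random

# Observed counts for the genetic linkage example: x = (x1, x2, x3, x4).
x1, x2, x3, x4 = 125, 18, 20, 34

def binomial(n, p):
    """Draw from B(n, p) by summing n Bernoulli trials (standard library only)."""
    return sum(random.random() < p for _ in range(n))

def data_augmentation(m=50, steps=25, seed=1):
    """Alternate imputation and posterior steps, starting from the Be(1, 1) prior."""
    random.seed(seed)
    # The current distribution is represented by m draws of eta;
    # initially these come from the uniform Be(1, 1) prior.
    etas = [random.random() for _ in range(m)]
    for _ in range(steps):
        new_etas = []
        for _ in range(m):
            # Sample eta from the current (equally weighted mixture) distribution.
            eta = random.choice(etas)
            # Imputation step: augment the data by drawing
            # y1 ~ B(x1, eta / (eta + 2)).
            y1 = binomial(x1, eta / (eta + 2))
            # Posterior step: given y1 the conditional posterior is
            # Be(y1 + x4 + 1, x2 + x3 + 1); draw a new eta from it.
            new_etas.append(random.betavariate(y1 + x4 + 1, x2 + x3 + 1))
        etas = new_etas
    return etas

draws = data_augmentation()
print(sum(draws) / len(draws))  # approximate posterior mean of eta
```

Because each pass resamples η from the current mixture of beta distributions, after enough iterations the m draws approximate samples from the posterior of η.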