3 Model Fit, Model Comparison and Outlier Detection
3.1 Introduction
An important part of any statistical analysis is an assessment of how well the predictions from a particular model fit the observed data. If the model does not describe the data well, then any outputs based on predictions from that model, such as parameter estimates, uncertainty in parameter estimates, expected net benefit, optimal treatment strategy and the uncertainty in the optimal strategy, will be a poor reflection of the evidence base. In the context of comparative effectiveness research and health technology assessment, it is therefore essential that we aim to identify well-fitting models, so that any judgements made as to the most effective or cost-effective intervention are a fair reflection of the available evidence.
Methods for assessing how well the predictions from a particular model fit the observed data are well established in the field of frequentist statistics (McCullagh and Nelder, 1989). Many of these ideas translate naturally into Bayesian inference (Spiegelhalter et al., 2002; Gelman et al., 2004) and can be computed using WinBUGS. Because a Bayesian analysis estimates a posterior distribution for the model parameters, model predictions for the observed data also have posterior distributions, and so too do measures of model fit. In this chapter, we describe the posterior mean residual deviance as a measure of global model fit. We then describe how the posterior mean deviances and the ...
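As a concrete illustration, the fragment below sketches how the residual deviance contributions are often coded in WinBUGS for arm-level binomial data; it is a minimal sketch rather than the code developed later in the book, and the node names (r, n, p, rhat, dev, resdev, totresdev, ns, na) are illustrative assumptions. For each arm, the observed number of events is compared with the model-predicted number, and the posterior mean of the total residual deviance can be compared with the number of unconstrained data points as a check of global fit.

  # Sketch: residual deviance contributions for a binomial likelihood (assumed node names)
  for (i in 1:ns) {                                   # ns trials
    for (k in 1:na[i]) {                              # na[i] arms in trial i
      r[i,k] ~ dbin(p[i,k], n[i,k])                   # observed events out of n[i,k]
      rhat[i,k] <- p[i,k] * n[i,k]                    # model-predicted number of events
      dev[i,k] <- 2 * (r[i,k] * (log(r[i,k]) - log(rhat[i,k]))
                  + (n[i,k] - r[i,k]) * (log(n[i,k] - r[i,k])
                  - log(n[i,k] - rhat[i,k])))         # deviance residual for arm k of trial i
    }
    resdev[i] <- sum(dev[i, 1:na[i]])                 # residual deviance for trial i
  }
  totresdev <- sum(resdev[])                          # total residual deviance

Monitoring totresdev during the MCMC run gives its posterior distribution; a posterior mean close to the number of data points (here, the total number of trial arms) is consistent with an adequately fitting model, whereas a much larger value suggests lack of fit.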