Chapter 6. Sequential Monte Carlo Inference

In the last chapter, we introduced the class of dynamic linear models (DLMs), which made it possible to learn about latent states and error variances through a set of closed-form expressions, while accommodating parameter and model uncertainty in a conditional framework. These methods relied on the linearity of the underlying state-space model, as well as the assumption that the errors were normally distributed. Though good approximations may be obtained under these conditions, there are a number of situations, particularly those involving financial data and models, in which these foundational assumptions prove restrictive. In particular, the class of DLMs defined the state space such that all parameters in the state and observation equations, apart from the error variances, were assumed known a priori. It is often the case that we would like to estimate unknown parameters in the state equations as well.

Sequential Monte Carlo (SMC) methods offer tractable solutions for both of these situations in state-space time series models. Using SMC methods, one can estimate states in nonlinear, non-Gaussian models. As with DLMs, state learning with SMC may be undertaken online, solving the filtering problem, or retrospectively, solving the smoothing problem. In addition, one can expand the set of unknown quantities targeted for inference to include both states and parameters, allowing filtered and smoothed parameter estimates to be obtained for the ...

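To make the filtering idea concrete, the sketch below implements a bootstrap particle filter, the simplest SMC filtering scheme, for a hypothetical nonlinear, non-Gaussian state-space model with a stochastic-volatility-style observation equation. The model, parameter values, and function names are illustrative assumptions for this sketch and are not taken from the chapter.

```python
# Bootstrap particle filter sketch for an assumed model:
#   state:       x_t = phi * x_{t-1} + w_t,     w_t ~ N(0, sigma_w^2)
#   observation: y_t = exp(x_t / 2) * v_t,      v_t ~ N(0, 1)
import numpy as np

rng = np.random.default_rng(42)

def simulate(T=200, phi=0.9, sigma_w=1.0):
    """Simulate states and observations from the assumed model."""
    x = np.zeros(T)
    y = np.zeros(T)
    x[0] = rng.normal(0.0, sigma_w / np.sqrt(1.0 - phi**2))  # stationary draw
    y[0] = np.exp(x[0] / 2.0) * rng.normal()
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sigma_w * rng.normal()
        y[t] = np.exp(x[t] / 2.0) * rng.normal()
    return x, y

def bootstrap_filter(y, n_particles=1000, phi=0.9, sigma_w=1.0):
    """Return filtered state means E[x_t | y_{1:t}] via the bootstrap particle filter."""
    T = len(y)
    # Initialize particles from the stationary distribution of the AR(1) state.
    particles = rng.normal(0.0, sigma_w / np.sqrt(1.0 - phi**2), size=n_particles)
    filtered_means = np.zeros(T)
    for t in range(T):
        # Propagate particles through the state equation (the bootstrap proposal).
        if t > 0:
            particles = phi * particles + sigma_w * rng.normal(size=n_particles)
        # Weight by the observation likelihood: y_t | x_t ~ N(0, exp(x_t)).
        log_w = -0.5 * (np.log(2.0 * np.pi) + particles
                        + y[t] ** 2 * np.exp(-particles))
        w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
        w /= w.sum()
        filtered_means[t] = np.sum(w * particles)
        # Multinomial resampling to combat weight degeneracy.
        particles = rng.choice(particles, size=n_particles, replace=True, p=w)
    return filtered_means

x_true, y_obs = simulate()
x_hat = bootstrap_filter(y_obs)
print("RMSE of filtered means:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```

Even in this minimal form, the three steps that recur throughout the chapter are visible: propagate particles through the state equation, reweight them by the observation likelihood, and resample to keep the particle set representative.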