9 Deep Learning‐Based Filters
9.1 Introduction
Different classes of Bayesian filters, which were discussed in Chapters 5 and 6, rely on the availability of a fairly accurate state‐space model. Without domain knowledge, however, data‐driven approaches must be followed to build state‐space models from raw data. Learning filtering models through supervised learning, which relies on labeled datasets, may be impractical. Hence, generative models are required with the ability to learn and infer hidden structures and latent states directly from data. This chapter explores the potential of variational inference for learning different classes of Bayesian filters. Furthermore, deep neural networks are deployed to construct state‐space models from data. Such models can be tractably learned by optimizing a lower bound on the likelihood of the data. These deep learning‐based filters are able to handle severe nonlinearities with temporal and spatial dependencies. They will also be very helpful for counterfactual analysis, which refers to inferring the probability of an event occurring given circumstances that are different from the empirically observed ones.
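To make the idea of "optimizing a lower bound on the likelihood" concrete, the following is a minimal sketch of a Monte Carlo estimate of the evidence lower bound (ELBO) for a simple latent-variable model. The model here (a standard-normal prior, a diagonal-Gaussian approximate posterior, and a fixed linear-Gaussian decoder `W`) is an illustrative assumption, not the specific architecture developed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo(x, mu_q, log_var_q, log_lik_fn, n_samples=100):
    """Monte Carlo estimate of the ELBO for a model with prior
    p(z) = N(0, I) and approximate posterior
    q(z|x) = N(mu_q, diag(exp(log_var_q)))."""
    # Analytic KL divergence KL(q(z|x) || p(z)) between diagonal Gaussians.
    kl = 0.5 * np.sum(np.exp(log_var_q) + mu_q**2 - 1.0 - log_var_q)
    # Reparameterized samples z = mu + sigma * eps, with eps ~ N(0, I).
    eps = rng.standard_normal((n_samples, mu_q.size))
    z = mu_q + np.exp(0.5 * log_var_q) * eps
    # Expected reconstruction term E_q[log p(x|z)], estimated by sampling.
    log_lik = np.mean([log_lik_fn(x, zi) for zi in z])
    return log_lik - kl

# Toy decoder (hypothetical): p(x|z) = N(x; W z, I) with a fixed linear map.
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]])

def gaussian_log_lik(x, z):
    diff = x - W @ z
    return -0.5 * (diff @ diff + len(x) * np.log(2.0 * np.pi))

x = np.array([0.3, -0.1, 0.2])
val = elbo(x, mu_q=np.zeros(2), log_var_q=np.zeros(2),
           log_lik_fn=gaussian_log_lik)
print(val)
```

In a deep learning-based filter, the decoder and the posterior parameters `mu_q` and `log_var_q` would be produced by neural networks and trained by gradient ascent on this bound; here they are fixed only to keep the sketch self-contained.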
In what follows, after reviewing variational inference and amortized variational inference, different deep learning‐based filtering algorithms are presented, which use supervised or unsupervised learning. In the formulation of filters, let us consider a nonlinear dynamic system with the latent state vector , the ...