Nonlinear Filters

Book description

Discover the utility of deep learning and (deep) reinforcement learning in deriving filtering algorithms with this insightful and powerful new resource

Nonlinear Filters: Theory and Applications delivers an insightful view of state and parameter estimation by merging ideas from control theory, statistical signal processing, and machine learning. Taking an algorithmic approach, the book covers both classic and machine learning-based filtering algorithms.

Readers of Nonlinear Filters will benefit greatly from the wide spectrum of topics presented, including stability, robustness, computability, and algorithmic sufficiency. Readers will also enjoy:

  • Organization that allows the book to act as a stand-alone, self-contained reference
  • A thorough exploration of the notion of observability, nonlinear observers, and the theory of optimal nonlinear filtering that bridges the gap between different science and engineering disciplines
  • A profound account of Bayesian filters, including the Kalman filter and its variants as well as the particle filter
  • A rigorous derivation of the smooth variable structure filter as a predictor-corrector estimator, formulated on the basis of a stability theorem and used to confine the estimated states to a neighborhood of their true values
  • A concise tutorial on deep learning and reinforcement learning
  • A detailed presentation of the expectation maximization algorithm and its machine learning-based variants, used for joint state and parameter estimation
  • Guidelines for constructing nonparametric Bayesian models from parametric ones

Perfect for researchers, professors, and graduate students in engineering, computer science, applied mathematics, and artificial intelligence, Nonlinear Filters: Theory and Applications will also earn a place in the libraries of those studying or practicing in fields involving pandemic diseases, cybersecurity, information fusion, augmented reality, autonomous driving, urban traffic networks, navigation and tracking, robotics, power systems, hybrid technologies, and finance.

Table of contents

  1. Cover
  2. Title Page
  3. Copyright
  4. Dedication
  5. List of Figures
  6. List of Tables
  7. Preface
  8. Acknowledgments
  9. Acronyms
  10. 1 Introduction
    1. 1.1 State of a Dynamic System
    2. 1.2 State Estimation
    3. 1.3 Construals of Computing
    4. 1.4 Statistical Modeling
    5. 1.5 Vision for the Book
  11. 2 Observability
    1. 2.1 Introduction
    2. 2.2 State‐Space Model
    3. 2.3 The Concept of Observability
    4. 2.4 Observability of Linear Time‐Invariant Systems
    5. 2.5 Observability of Linear Time‐Varying Systems
    6. 2.6 Observability of Nonlinear Systems
    7. 2.7 Observability of Stochastic Systems
    8. 2.8 Degree of Observability
    9. 2.9 Invertibility
    10. 2.10 Concluding Remarks
  12. 3 Observers
    1. 3.1 Introduction
    2. 3.2 Luenberger Observer
    3. 3.3 Extended Luenberger‐Type Observer
    4. 3.4 Sliding‐Mode Observer
    5. 3.5 Unknown‐Input Observer
    6. 3.6 Concluding Remarks
  13. 4 Bayesian Paradigm and Optimal Nonlinear Filtering
    1. 4.1 Introduction
    2. 4.2 Bayes' Rule
    3. 4.3 Optimal Nonlinear Filtering
    4. 4.4 Fisher Information
    5. 4.5 Posterior Cramér–Rao Lower Bound
    6. 4.6 Concluding Remarks
  14. 5 Kalman Filter
    1. 5.1 Introduction
    2. 5.2 Kalman Filter
    3. 5.3 Kalman Smoother
    4. 5.4 Information Filter
    5. 5.5 Extended Kalman Filter
    6. 5.6 Extended Information Filter
    7. 5.7 Divided‐Difference Filter
    8. 5.8 Unscented Kalman Filter
    9. 5.9 Cubature Kalman Filter
    10. 5.10 Generalized PID Filter
    11. 5.11 Gaussian‐Sum Filter
    12. 5.12 Applications
    13. 5.13 Concluding Remarks
  15. 6 Particle Filter
    1. 6.1 Introduction
    2. 6.2 Monte Carlo Method
    3. 6.3 Importance Sampling
    4. 6.4 Sequential Importance Sampling
    5. 6.5 Resampling
    6. 6.6 Sample Impoverishment
    7. 6.7 Choosing the Proposal Distribution
    8. 6.8 Generic Particle Filter
    9. 6.9 Applications
    10. 6.10 Concluding Remarks
  16. 7 Smooth Variable‐Structure Filter
    1. 7.1 Introduction
    2. 7.2 The Switching Gain
    3. 7.3 Stability Analysis
    4. 7.4 Smoothing Subspace
    5. 7.5 Filter Corrective Term for Linear Systems
    6. 7.6 Filter Corrective Term for Nonlinear Systems
    7. 7.7 Bias Compensation
    8. 7.8 The Secondary Performance Indicator
    9. 7.9 Second‐Order Smooth Variable Structure Filter
    10. 7.10 Optimal Smoothing Boundary Design
    11. 7.11 Combination of SVSF with Other Filters
    12. 7.12 Applications
    13. 7.13 Concluding Remarks
  17. 8 Deep Learning
    1. 8.1 Introduction
    2. 8.2 Gradient Descent
    3. 8.3 Stochastic Gradient Descent
    4. 8.4 Natural Gradient Descent
    5. 8.5 Neural Networks
    6. 8.6 Backpropagation
    7. 8.7 Backpropagation Through Time
    8. 8.8 Regularization
    9. 8.9 Initialization
    10. 8.10 Convolutional Neural Network
    11. 8.11 Long Short‐Term Memory
    12. 8.12 Hebbian Learning
    13. 8.13 Gibbs Sampling
    14. 8.14 Boltzmann Machine
    15. 8.15 Autoencoder
    16. 8.16 Generative Adversarial Network
    17. 8.17 Transformer
    18. 8.18 Concluding Remarks
  18. 9 Deep Learning‐Based Filters
    1. 9.1 Introduction
    2. 9.2 Variational Inference
    3. 9.3 Amortized Variational Inference
    4. 9.4 Deep Kalman Filter
    5. 9.5 Backpropagation Kalman Filter
    6. 9.6 Differentiable Particle Filter
    7. 9.7 Deep Rao–Blackwellized Particle Filter
    8. 9.8 Deep Variational Bayes Filter
    9. 9.9 Kalman Variational Autoencoder
    10. 9.10 Deep Variational Information Bottleneck
    11. 9.11 Wasserstein Distributionally Robust Kalman Filter
    12. 9.12 Hierarchical Invertible Neural Transport
    13. 9.13 Applications
    14. 9.14 Concluding Remarks
  19. 10 Expectation Maximization
    1. 10.1 Introduction
    2. 10.2 Expectation Maximization Algorithm
    3. 10.3 Particle Expectation Maximization
    4. 10.4 Expectation Maximization for Gaussian Mixture Models
    5. 10.5 Neural Expectation Maximization
    6. 10.6 Relational Neural Expectation Maximization
    7. 10.7 Variational Filtering Expectation Maximization
    8. 10.8 Amortized Variational Filtering Expectation Maximization
    9. 10.9 Applications
    10. 10.10 Concluding Remarks
  20. 11 Reinforcement Learning‐Based Filter
    1. 11.1 Introduction
    2. 11.2 Reinforcement Learning
    3. 11.3 Variational Inference as Reinforcement Learning
    4. 11.4 Application
    5. 11.5 Concluding Remarks
  21. 12 Nonparametric Bayesian Models
    1. 12.1 Introduction
    2. 12.2 Parametric vs Nonparametric Models
    3. 12.3 Measure‐Theoretic Probability
    4. 12.4 Exchangeability
    5. 12.5 Kolmogorov Extension Theorem
    6. 12.6 Extension of Bayesian Models
    7. 12.7 Conjugacy
    8. 12.8 Construction of Nonparametric Bayesian Models
    9. 12.9 Posterior Computability
    10. 12.10 Algorithmic Sufficiency
    11. 12.11 Applications
    12. 12.12 Concluding Remarks
  22. References
  23. Index
  24. Wiley End User License Agreement

Product information

  • Title: Nonlinear Filters
  • Author(s): Peyman Setoodeh, Saeid Habibi, Simon Haykin
  • Release date: April 2022
  • Publisher(s): Wiley
  • ISBN: 9781118835814