2 BAYESIAN ESTIMATION

2.1 INTRODUCTION

In this chapter we motivate the idea of Bayesian estimation from a probabilistic perspective; that is, we perform the required estimation using the underlying probability densities or mass functions. We start with the “batch” approach and evolve to Bayesian sequential techniques. We discuss the most popular formulations: the maximum a posteriori (MAP), maximum likelihood (ML), minimum variance (MV) or, equivalently, minimum mean-squared error (MMSE), and least-squares (LS) methods. Bayesian sequential techniques are then developed. The main idea is to develop the proper perspective for the subsequent chapters and to construct a solid foundation for the techniques that follow.

2.2 BATCH BAYESIAN ESTIMATION

Suppose we are trying to estimate a random parameter X from data Y = y. Then the associated conditional density Pr(X|Y = y) is called the posterior density, because the estimate is conditioned “after (post) the measurements” have been acquired. Estimators based on this a posteriori density are usually called Bayesian because they are constructed from Bayes’ theorem, since Pr(X|Y) is difficult to obtain directly. That is, Bayes’ rule is defined as

\[
\Pr(X \mid Y) = \frac{\Pr(Y \mid X)\,\Pr(X)}{\Pr(Y)} \tag{2.1}
\]

where Pr(X) is called the prior density (it captures what is known before the measurement), Pr(Y|X) is called the likelihood (it measures how likely the data Y are for a given X), and Pr(Y) is called the evidence (it normalizes the posterior to assure that its integral is unity). Bayesian ...
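To make the mechanics of (2.1) concrete, the following sketch (ours, not from the text) applies Bayes’ rule on a discrete grid of candidate parameter values. The grid, the prior masses, the Gaussian measurement model, and the noise level sigma are all hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical discrete grid for the random parameter X and its
# prior mass function Pr(X); values chosen purely for illustration.
x_values = np.array([0.0, 1.0, 2.0])
prior = np.array([0.5, 0.3, 0.2])

def likelihood(y, x, sigma=1.0):
    """Pr(Y = y | X = x) under an assumed Gaussian model y = x + noise."""
    return np.exp(-0.5 * ((y - x) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

y = 1.4                                  # acquired measurement Y = y
joint = likelihood(y, x_values) * prior  # numerator of (2.1): Pr(Y|X) Pr(X)
evidence = joint.sum()                   # Pr(Y): normalizing constant
posterior = joint / evidence             # Pr(X|Y = y); sums to unity

x_map = x_values[np.argmax(posterior)]   # MAP estimate: posterior mode
x_mmse = np.dot(x_values, posterior)     # MMSE estimate: posterior mean
print(posterior, x_map, x_mmse)
```

The same posterior yields two of the formulations listed in Section 2.1: its mode gives the MAP estimate, while its mean gives the MMSE (minimum variance) estimate.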
