9.18 EXPECTATION–MAXIMIZATION ALGORITHM

The expectation–maximization (EM) algorithm is an iterative technique for finding the ML estimate in situations where there are unobservable factors influencing the samples. In conventional ML estimation, the likelihood function or, equivalently, the log-likelihood function is used to estimate θ from the observed samples {Xn}. Suppose the samples are incomplete in the sense that there are other random variables correlated with {Xn}, but these are not directly measurable. Let those random variables be denoted by Y.

Consider the joint pdf fX,Y(x, y; θ) for measurable X and unmeasurable Y. If Y were also measurable, then the ML estimator for θ would be the function of X and Y that maximizes the joint pdf. However, since only X is observable, θ can be estimated by maximizing only the marginal pdf fX(x; θ), which can be expressed as

fX(x; θ) = ∫ fX,Y(x, y; θ) dy.   (9.273)
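The marginalization in (9.273) can be carried out numerically for simple models, and the resulting incomplete-data likelihood maximized directly by a grid search. The sketch below is purely illustrative and not from the text: it assumes a toy model in which the latent Y is standard normal and X given Y = y is normal with mean θ + y and unit variance, so the marginal of X is N(θ, 2) and the ML estimate of θ is the sample mean.

```python
import math

def joint_pdf(x, y, theta):
    """Toy joint pdf f_{X,Y}(x, y; theta): Y ~ N(0, 1) is latent,
    and X | Y = y ~ N(theta + y, 1).  (Illustrative model only.)"""
    return (math.exp(-y ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
            * math.exp(-(x - theta - y) ** 2 / 2.0) / math.sqrt(2.0 * math.pi))

def marginal_loglik(xs, theta, y_grid):
    """Log of the incomplete-data likelihood: for each sample, the
    marginal f_X(x; theta) is obtained by numerically integrating
    the joint pdf over y, as in Eq. (9.273)."""
    dy = y_grid[1] - y_grid[0]
    ll = 0.0
    for x in xs:
        fx = sum(joint_pdf(x, y, theta) for y in y_grid) * dy
        ll += math.log(fx)
    return ll

xs = [1.8, 2.4, 1.9, 2.1, 2.3]                     # observed samples
y_grid = [-5.0 + 0.05 * i for i in range(201)]     # quadrature grid for y
thetas = [0.01 * i for i in range(100, 300)]       # candidate theta values
best = max(thetas, key=lambda t: marginal_loglik(xs, t, y_grid))
```

Because the marginal here is N(θ, 2), the grid search lands on the sample mean of the observations, matching the closed-form ML answer; in less tractable models this brute-force marginalization is exactly what EM avoids.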

The influence of the nonmeasurable Y on the estimate is ignored in conventional ML estimation.
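Rather than ignoring Y, the EM algorithm alternates between an E-step, which computes the expected complete-data log-likelihood given the observations and the current parameter estimate, and an M-step, which maximizes that expectation. The following is a minimal sketch, not the text's derivation: it assumes a two-component Gaussian mixture with known unit variances and equal mixing weights, where the latent Y is the unobserved component label of each sample and the parameters are the two component means.

```python
import math
import random

def em_two_gaussians(x, iters=50, sigma=1.0):
    """Minimal EM sketch (illustrative assumptions, not from the text):
    a two-component Gaussian mixture with known common variance
    sigma**2 and equal mixing weights.  Y is the hidden component
    label; EM iteratively maximizes the incomplete-data likelihood
    of the observed samples x alone."""
    mu1, mu2 = min(x), max(x)  # crude but deterministic initial guesses
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each sample,
        # i.e. the conditional expectation of the label given x and (mu1, mu2)
        r = []
        for xi in x:
            p1 = math.exp(-(xi - mu1) ** 2 / (2.0 * sigma ** 2))
            p2 = math.exp(-(xi - mu2) ** 2 / (2.0 * sigma ** 2))
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means maximize the expected
        # complete-data log-likelihood computed in the E-step
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / sum(r)
        mu2 = (sum((1.0 - ri) * xi for ri, xi in zip(r, x))
               / sum(1.0 - ri for ri in r))
    return mu1, mu2

# Usage on synthetic data drawn from means 0 and 5 (hypothetical example):
random.seed(1)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])
m1, m2 = em_two_gaussians(data)
```

Each iteration is guaranteed not to decrease the incomplete-data likelihood, which is the property that makes EM attractive when direct maximization of (9.273) is intractable.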
