The expectation–maximization (EM) algorithm is an iterative technique for finding the ML estimate in situations where there are unobservable factors influencing the samples. In conventional ML estimation, the likelihood function or, equivalently, the log-likelihood function is used to estimate θ from the samples {Xn}. Suppose the samples are incomplete in the sense that there are other random variables correlated with {Xn}, but these are not directly measurable. Let those unobservable random variables be denoted by {Yn}.
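As a point of reference for the conventional case, the following sketch computes the ML estimates of the mean and variance of a Gaussian from fully observed samples by maximizing the log-likelihood (the closed-form maximizers are the sample mean and the biased sample variance). The function names are illustrative, not from the text:

```python
import math

def gaussian_mle(samples):
    """ML estimates (mu, var) for i.i.d. Gaussian samples.

    For the Gaussian log-likelihood, setting the derivatives with
    respect to mu and var to zero gives the sample mean and the
    average squared deviation (note: divides by n, not n - 1).
    """
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

def log_likelihood(samples, mu, var):
    """Gaussian log-likelihood of the samples at parameters (mu, var)."""
    n = len(samples)
    return (-0.5 * n * math.log(2 * math.pi * var)
            - sum((x - mu) ** 2 for x in samples) / (2 * var))
```

For any fixed variance, the log-likelihood as a function of μ peaks at the sample mean, so perturbing the estimate can only lower it.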

Consider the joint pdf f_X,Y(x, y; θ) for measurable X and unmeasurable Y. If Y were also measurable, then the ML estimator for θ would be the function of X and Y that maximizes the joint pdf. However, since only X is observable, it is possible to estimate θ by maximizing only the marginal pdf f_X(x; θ), which can be expressed as

f_X(x; θ) = ∫ f_X,Y(x, y; θ) dy.
The influence of the nonmeasurable Y on the estimate is ignored in conventional ML estimation.
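To make the role of the unobservable Y concrete, the sketch below applies the EM idea to a standard illustrative case: a two-component 1-D Gaussian mixture, where Y is the hidden component label for each sample. The E-step computes the posterior responsibility of each component given the current parameters, and the M-step performs responsibility-weighted ML updates. This is a generic textbook instance, not code from this book, and the initialization and iteration count are arbitrary choices:

```python
import math
import random

def normal_pdf(x, mu, var):
    """Gaussian pdf evaluated at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(xs, iters=50):
    """EM for a two-component 1-D Gaussian mixture.

    The hidden variable Y is the component label of each sample;
    since Y is unobservable, EM replaces it with its posterior
    probability (the "responsibility") at each iteration.
    """
    # Crude initialization: place the two means at the data extremes.
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior probability of each component for each sample.
        resp = []
        for x in xs:
            p = [w[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: weighted ML updates of weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return w, mu, var
```

Running this on data drawn from two well-separated Gaussians recovers means close to the true component means, even though no sample carries its component label.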
