APPENDIX E: EXPECTATION–CONDITIONAL MAXIMIZATION (ECM) ALGORITHM

The expectation-maximization (EM) algorithm is a widely applied iterative method in modern statistics for finding maximum likelihood estimates (MLEs) (Dempster et al., 1977; McLachlan and Krishnan, 2008). The EM algorithm is numerically stable (i.e., it monotonically increases the likelihood function), but it can be extremely slow to converge in certain situations. Even before the article by Dempster et al. (1977), estimation for mixture models had been recognized as a natural application of the EM algorithm (McLachlan and Peel, 2000, pp. 47–48; Redner and Walker, 1984). If we interpret our mixture model as an incomplete-data problem and let $y_i$ denote the "missing" group indicator ($y_i = 1$ for an observation arising from the Gompertz distribution and $y_i = 0$ for an observation arising from the Weibull distribution), so that the hypothetical complete data for subject $i$ are given by the vector $(a_i, \delta_i, x_i, y_i)$, then, using the factorization $P[(\delta_i, x_i, y_i) \mid a_i] = P[y_i \mid a_i] \times P[(\delta_i, x_i) \mid y_i, a_i]$, the complete-data likelihood function taking right censoring into account and conditioning on age at entry into the study is

$$
L_c \;=\; \prod_{i=1}^{n} \left\{ P[y_i = 1 \mid a_i]\, \frac{\left[f_1(x_i)\right]^{\delta_i} \left[S_1(x_i)\right]^{1-\delta_i}}{S_1(a_i)} \right\}^{y_i} \left\{ P[y_i = 0 \mid a_i]\, \frac{\left[f_2(x_i)\right]^{\delta_i} \left[S_2(x_i)\right]^{1-\delta_i}}{S_2(a_i)} \right\}^{1-y_i} \tag{E.1}
$$

where

$$
P[y_i = 1 \mid a_i] \;=\; \frac{p\,S_1(a_i)}{p\,S_1(a_i) + (1-p)\,S_2(a_i)}, \qquad P[y_i = 0 \mid a_i] \;=\; 1 - P[y_i = 1 \mid a_i], \tag{E.2}
$$

with $f_1$, $S_1$ and $f_2$, $S_2$ denoting the density and survival functions of the Gompertz and Weibull distributions, respectively, and $p$ the mixing proportion.
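As a concrete illustration, the following is a minimal Python sketch of the complete-data log-likelihood (E.1)–(E.2), computed on the log scale for numerical stability. The Gompertz hazard $\lambda e^{\theta x}$ and the two-parameter Weibull survival function $\exp[-(x/\sigma)^{\kappa}]$ are assumed parametrizations for illustration only and may differ from those used elsewhere in the book; all function names are hypothetical.

```python
# Sketch of the complete-data log-likelihood (E.1)-(E.2).
# Assumed parametrizations (not necessarily the book's):
#   Gompertz: hazard lam * exp(theta * x), so log S(x) = -(lam/theta)(e^{theta x} - 1)
#   Weibull:  log S(x) = -(x / sigma)^kappa
import numpy as np

def gompertz_logsf(x, lam, theta):
    return -(lam / theta) * np.expm1(theta * x)

def gompertz_logpdf(x, lam, theta):
    # density = hazard * survival
    return np.log(lam) + theta * x + gompertz_logsf(x, lam, theta)

def weibull_logsf(x, kappa, sigma):
    return -((x / sigma) ** kappa)

def weibull_logpdf(x, kappa, sigma):
    return (np.log(kappa / sigma) + (kappa - 1.0) * np.log(x / sigma)
            + weibull_logsf(x, kappa, sigma))

def complete_data_loglik(params, a, x, delta, y):
    """Log of (E.1). a, x: ages at entry and exit; delta: 1 if death observed,
    0 if right-censored; y: group indicator (1 = Gompertz, 0 = Weibull)."""
    p, lam, theta, kappa, sigma = params
    ls1_a = gompertz_logsf(a, lam, theta)
    ls2_a = weibull_logsf(a, kappa, sigma)
    # (E.2): log P[y_i = 1 | a_i] and log P[y_i = 0 | a_i]
    log_den = np.logaddexp(np.log(p) + ls1_a, np.log1p(-p) + ls2_a)
    log_pi = np.log(p) + ls1_a - log_den
    log_1mpi = np.log1p(-p) + ls2_a - log_den
    # group-specific terms f(x)^delta * S(x)^(1-delta) / S(a), on the log scale
    term1 = (log_pi + delta * gompertz_logpdf(x, lam, theta)
             + (1 - delta) * gompertz_logsf(x, lam, theta) - ls1_a)
    term2 = (log_1mpi + delta * weibull_logpdf(x, kappa, sigma)
             + (1 - delta) * weibull_logsf(x, kappa, sigma) - ls2_a)
    return np.sum(y * term1 + (1 - y) * term2)
```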

By introducing a function, denoted $\pi(a_i)$, for the conditional probability $P[y_i = 1 \mid a_i]$ in (E.2), which involves the five parameters and age at entry into the study, the conditional expectation of the missing indicator $y_i$ given the observed data $(a_i, \delta_i, x_i)$, required in the E-step of the algorithm, can be written in closed form.
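For mixtures of this kind, the E-step replaces each missing $y_i$ by its conditional expectation given the observed data and the current parameter values, with $\pi(a_i)$ serving as the prior membership probability. The sketch below, which reuses the helper functions and the assumed parametrizations from the previous block (and so, again, is not the book's exact formulation), shows one way this computation could be written.

```python
def e_step(params, a, x, delta):
    """Posterior expectation of y_i given (a_i, delta_i, x_i) at the current
    parameter values; reuses the log-survival/log-density helpers above."""
    p, lam, theta, kappa, sigma = params
    ls1_a = gompertz_logsf(a, lam, theta)
    ls2_a = weibull_logsf(a, kappa, sigma)
    # denominator of pi(a_i) in (E.2), on the log scale
    log_den = np.logaddexp(np.log(p) + ls1_a, np.log1p(-p) + ls2_a)
    # log of pi(a_i) * (Gompertz term) and of (1 - pi(a_i)) * (Weibull term)
    lw1 = (np.log(p) + ls1_a - log_den
           + delta * gompertz_logpdf(x, lam, theta)
           + (1 - delta) * gompertz_logsf(x, lam, theta) - ls1_a)
    lw2 = (np.log1p(-p) + ls2_a - log_den
           + delta * weibull_logpdf(x, kappa, sigma)
           + (1 - delta) * weibull_logsf(x, kappa, sigma) - ls2_a)
    return np.exp(lw1 - np.logaddexp(lw1, lw2))
```

In the ECM algorithm (Meng and Rubin, 1993), the M-step of EM is replaced by a sequence of simpler conditional maximization (CM) steps, each maximizing the expected complete-data log-likelihood over one subset of the parameters while holding the remaining parameters at their current values; here, for example, the Gompertz parameters and the Weibull parameters could be updated in separate CM-steps.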
