CHAPTER 11 Extensions of Generalized Linear Models

This final chapter introduces alternatives to maximum likelihood (ML) and Bayes for fitting linear and generalized linear models (GLMs). We also present an extension of the GLM that permits an additive predictor in place of the linear predictor. A complete exposition of these topics is beyond the scope of this book. We aim here merely to present a brief overview and give you references for further study.

Section 11.1 presents alternative ways to estimate model parameters. For the linear model, M-estimation methods minimize a function of the residuals, with the sum of squared residuals as one special case. Some such estimates are more robust than least squares because they are less affected by severe outliers or by contamination of the data. Regularization methods modify ML to give sensible answers in situations that are unstable, for instance because of collinearity. For the GLM, the penalized likelihood regularization method adds a penalty term to the log-likelihood function, yielding estimates that tend to have smaller variance than ML estimators.
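To make the two criteria concrete, here is a minimal sketch in standard notation; the particular choices shown (Huber's $\rho$, a power-form penalty) are illustrative assumptions, not the only options the chapter treats. An M-estimator for the linear model chooses $\hat{\boldsymbol{\beta}}$ to minimize

\[
\sum_{i=1}^{n} \rho\left(y_i - \mathbf{x}_i^{\top}\boldsymbol{\beta}\right),
\]

where taking $\rho(u) = u^2$ recovers least squares, while a $\rho$ that grows more slowly in $|u|$, such as Huber's, downweights severe outliers. Penalized likelihood instead maximizes

\[
L^{*}(\boldsymbol{\beta}) \;=\; L(\boldsymbol{\beta}) \;-\; \lambda \sum_{j} |\beta_j|^{q},
\]

where $L(\boldsymbol{\beta})$ is the ordinary log-likelihood, $\lambda \ge 0$ controls the strength of the penalty, and $q = 2$ (ridge) and $q = 1$ (lasso) are common special cases. Shrinking $\hat{\boldsymbol{\beta}}$ toward $\mathbf{0}$ introduces some bias but tends to reduce variance.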

Regularization methods are especially useful when the number p of model parameters is very large. Such datasets are common in genomics, biomedical imaging, functional magnetic resonance imaging, tomography, signal processing, image analysis, market basket data, and portfolio allocation in finance. Sometimes p is even larger than the number of observations n. Section 11.2 discusses the fitting ...
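As a small illustration of a regularized fit when p exceeds n, the sketch below simulates a sparse linear model and fits it with the lasso via scikit-learn. The library choice, the simulated design, and all parameter values (n, p, the true coefficients, the penalty weight alpha) are our assumptions for illustration, not the book's own example.

```python
# A minimal sketch: lasso-penalized least squares with p > n.
# scikit-learn's Lasso minimizes
#   (1 / (2n)) * ||y - X @ beta||^2  +  alpha * ||beta||_1
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 200                                 # more parameters than observations
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]    # only 5 true signals
y = X @ beta_true + rng.standard_normal(n)

fit = Lasso(alpha=0.1, max_iter=10_000).fit(X, y)   # alpha plays the role of lambda
print("nonzero coefficients:", np.sum(fit.coef_ != 0))
```

Ordinary least squares has no unique solution here, since the n x p design matrix cannot have full column rank when p > n; the L1 penalty makes the problem well posed and sets most estimated coefficients exactly to zero.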
