Chapter 8. Linear models

Linear models relate two sets of random variables through linear relationships. They are very general and appear routinely in many statistical applications. A first set of variables, called response variables, is to be predicted from a second set, called predictors or covariates. In the standard approach, linear combinations of the predictors are used to build a predictor function that approximates the responses, implicitly assuming that the predictors are real random variables obeying the Euclidean geometry of real space. In this approach, the predictor function is transformed, if necessary, by some nonlinear function, known as the link function, and errors or residuals are measured as Euclidean differences between the responses and the (transformed) predictor function. There is an extensive literature on general linear models (e.g., Anderson, 1984). The two sets of variables may have very different characteristics (categorical, real, discrete), and the possible choices of link function are also multiple. Here we are interested in cases where the responses or the predictors have a compositional character, and we pay special attention to the space in which operations are performed. When the response is compositional, we must be aware that residuals should be computed with the perturbation difference, and their size with the Aitchison norm or squared compositional distances, that is, within the framework of the Aitchison geometry of the simplex and not within the framework of the ordinary Euclidean geometry of real space.
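As a minimal sketch of this last point (not code from the text), the following Python snippet illustrates how a compositional residual and its size might be computed: the residual is the perturbation difference between the observed and fitted compositions, and its size is the Aitchison norm, evaluated here through the centered log-ratio (clr) representation. All function names and the numeric values are hypothetical.

```python
import numpy as np

def closure(x):
    """Rescale a positive vector to the unit simplex (components sum to 1)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb_diff(y, y_hat):
    """Perturbation difference y (-) y_hat = closure(y_1/yhat_1, ..., y_D/yhat_D)."""
    return closure(np.asarray(y, dtype=float) / np.asarray(y_hat, dtype=float))

def clr(x):
    """Centered log-ratio transform: ln(x_i / g(x)), with g(x) the geometric mean."""
    lx = np.log(np.asarray(x, dtype=float))
    return lx - lx.mean()

def aitchison_norm(x):
    """Aitchison norm of a composition = Euclidean norm of its clr coefficients."""
    return np.linalg.norm(clr(x))

# Compositional residual and its size for one observation (hypothetical values)
y     = closure([0.50, 0.30, 0.20])   # observed composition
y_hat = closure([0.45, 0.35, 0.20])   # fitted composition
resid = perturb_diff(y, y_hat)        # residual as a composition in the simplex
print(resid, aitchison_norm(resid))   # residual and its Aitchison size
```

Because clr(y (-) y_hat) = clr(y) - clr(y_hat), the Aitchison norm of this residual equals the compositional (Aitchison) distance between the observed and fitted compositions, which is the quantity the text says should replace the ordinary Euclidean difference.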
