CHAPTER 2 Linear Models: Least Squares Theory

The next two chapters consider fitting and inference for the ordinary linear model. For n independent observations y1, ..., yn with μi = E(yi) and variances var(yi), denote the covariance matrix of y = (y1, ..., yn)T by

V = var(y) = diag[var(y1), ..., var(yn)].

Let X denote the n × p model matrix, where xij is the value of explanatory variable j for observation i. In this chapter we will learn about model fitting when

μ = Xβ   and   V = σ2I,

where β is a p × 1 parameter vector with p ≤ n and I is the n × n identity matrix. The covariance matrix V is then a diagonal matrix with common value σ2 for the variance. With the additional assumption of a normal random component, this is the normal linear model, which is a generalized linear model (GLM) with identity link function. We will add the normality assumption in the next chapter. Here, though, we will obtain many results about fitting linear models and comparing models that do not require distributional assumptions. ...
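As a concrete illustration (not from the text), the setup above can be sketched numerically: we simulate data from a normal linear model μ = Xβ with V = σ2I and recover β by ordinary least squares. All names, dimensions, and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n observations, p parameters, with p <= n.
n, p = 100, 3

# Model matrix X: an intercept column plus two explanatory variables.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])

# Assumed true parameter vector beta (p x 1) and error variance sigma^2.
beta = np.array([1.0, 2.0, -0.5])
sigma = 1.0

# Normal linear model: y has mean X beta and covariance sigma^2 I,
# i.e., independent errors with common variance.
y = X @ beta + rng.normal(scale=sigma, size=n)

# Ordinary least squares: beta_hat minimizes ||y - X beta||^2,
# equivalently solves the normal equations (X^T X) beta = X^T y.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # close to the true beta for moderate n
```

With n = 100 independent observations, the least squares estimate is typically close to the true β; the distributional results needed for formal inference are deferred to the normality assumption of the next chapter.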
