CHAPTER 2 Linear Models: Least Squares Theory
The next two chapters consider fitting and inference for the ordinary linear model. For $n$ independent observations $y_1, \dots, y_n$ with $\mu_i = E(y_i)$ and $\mathbf{y} = (y_1, \dots, y_n)^{\mathsf{T}}$, denote the covariance matrix by

$$\mathbf{V} = \operatorname{var}(\mathbf{y}).$$

Let $\mathbf{X} = (x_{ij})$ denote the $n \times p$ model matrix, where $x_{ij}$ is the value of explanatory variable $j$ for observation $i$. In this chapter we will learn about model fitting when

$$E(\mathbf{y}) = \mathbf{X}\boldsymbol{\beta}, \qquad \mathbf{V} = \sigma^2 \mathbf{I},$$

where $\boldsymbol{\beta}$ is a $p \times 1$ parameter vector with $p \le n$ and $\mathbf{I}$ is the $n \times n$ identity matrix. The covariance matrix is then a diagonal matrix with common value $\sigma^2$ for the variance. With the additional assumption of a normal random component, this is the normal linear model, which is a generalized linear model (GLM) with identity link function. We will add the normality assumption in the next chapter. Here, though, we will obtain many results about fitting linear models and comparing models that do not require distributional assumptions. ...
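To make the setup concrete, the following is a minimal sketch (not from the book) in Python with NumPy: it simulates data satisfying $E(\mathbf{y}) = \mathbf{X}\boldsymbol{\beta}$ with $\mathbf{V} = \sigma^2 \mathbf{I}$ and computes the least squares estimate of $\boldsymbol{\beta}$. The simulated data, variable names, and parameter values are illustrative assumptions, not examples from the text.

```python
# Sketch: ordinary least squares under the model E(y) = X beta, var(y) = sigma^2 I.
# All data here are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

n, p = 50, 3                       # n observations, p parameters (p <= n)
X = np.column_stack([np.ones(n),   # intercept column
                     rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])   # hypothetical true parameter vector
y = X @ beta_true + rng.normal(scale=1.0, size=n)  # common variance sigma^2 = 1

# Least squares estimate: beta_hat minimizes ||y - X beta||^2.
# lstsq solves this via a stable factorization rather than forming the
# normal equations (X^T X) beta = X^T y explicitly.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = X @ beta_hat
residuals = y - fitted

# An unbiased estimate of sigma^2 divides the residual sum of squares
# by the n - p degrees of freedom.
sigma2_hat = residuals @ residuals / (n - p)

print("beta_hat:", beta_hat)
print("sigma^2 estimate:", sigma2_hat)
```

Note that no normality assumption is used here: least squares requires only the mean and covariance structure above, consistent with the distribution-free results this chapter develops.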