Chapter 14
Further topics in the linear model
1 INTRODUCTION
In Chapter 13, we derived the ‘best’ affine unbiased estimator of β in the linear regression model (y, Xβ, σ²V) under various assumptions about the ranks of X and V. In this chapter, we discuss some other topics relating to the linear model.
Sections 14.2–14.7 are devoted to constructing the ‘best’ quadratic estimator of σ². The multivariate analog is discussed in Section 14.8. The estimator
\[
\hat{\sigma}^2 = \frac{1}{n-r}\, y'(I_n - XX^{+})y, \qquad r = r(X), \tag{1}
\]
known as the least-squares (LS) estimator of σ², is the best quadratic unbiased estimator of σ² in the model (y, Xβ, σ²I). But if var(y) ≠ σ²Iₙ, then σ̂² in (1) will, in general, be biased. Bounds for this bias which do not depend on X are obtained in Sections 14.9 and 14.10.
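As a numerical illustration of (1) and of the bias that arises when var(y) ≠ σ²Iₙ, the following is a minimal simulation sketch. It is not taken from the book; the design matrix X, the matrix V, and all simulation settings below are illustrative assumptions.

```python
# Sketch: the LS estimator (1) of sigma^2, unbiased under var(y) = sigma^2 * I_n,
# but generally biased under var(y) = sigma^2 * V with V != I_n.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
sigma2 = 2.0

X = rng.standard_normal((n, k))          # assumed to have full column rank, r(X) = k
beta = np.array([1.0, -0.5, 0.25])
M = np.eye(n) - X @ np.linalg.pinv(X)    # I_n - X X^+, the residual-maker matrix

# An arbitrary non-scalar variance matrix V (illustrative choice).
V = np.diag(np.linspace(0.2, 3.0, n))
L = np.linalg.cholesky(V)

def sigma2_hat(y):
    """Least-squares estimator (1): y'(I_n - XX^+)y / (n - r(X))."""
    return y @ M @ y / (n - k)

reps = 20000
est_iid = np.mean([sigma2_hat(X @ beta + np.sqrt(sigma2) * rng.standard_normal(n))
                   for _ in range(reps)])
est_het = np.mean([sigma2_hat(X @ beta + np.sqrt(sigma2) * L @ rng.standard_normal(n))
                   for _ in range(reps)])

print("true sigma^2:                ", sigma2)
print("mean of (1) under sigma^2 I :", round(est_iid, 3))   # close to sigma^2
print("mean of (1) under sigma^2 V :", round(est_het, 3))   # generally off
# Since MX = 0, E[y'My] = sigma^2 tr(MV), so the bias is sigma^2 (tr(MV)/(n-k) - 1),
# which depends on X through M.
print("theoretical mean under V    :", round(sigma2 * np.trace(M @ V) / (n - k), 3))
```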
The statistical analysis of the disturbances ε = y − Xβ is taken up in Sections 14.11–14.14, where predictors that are best linear unbiased with scalar variance matrix (BLUS) and best linear unbiased with fixed variance matrix (BLUF) are derived.
Finally, we show how matrix differential calculus can be useful in sensitivity analysis. In particular, we study the sensitivities of the posterior moments of β in a Bayesian framework.
2 BEST QUADRATIC UNBIASED ESTIMATION OF σ²
Let (y, Xβ, σ²V) be the linear regression model. In the previous chapter, we considered the estimation ...
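As a sketch of the setup behind the section title (a standard quadratic-form computation, added for orientation rather than taken from the text above): a quadratic estimator of σ² has the form y'Ay with A symmetric, and in the model (y, Xβ, σ²V) its expectation is
\[
\mathrm{E}(y'Ay) = \sigma^{2}\,\mathrm{tr}(AV) + \beta'X'AX\beta,
\]
so that unbiasedness for every β and σ² requires X'AX = 0 and tr(AV) = 1; a ‘best’ quadratic unbiased estimator then minimizes the variance of y'Ay subject to these two constraints.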