9 Eigenvalues and Eigenvectors
This chapter represents something of a change of gear in the study of matrices. So far, we have learned how to perform various calculations in matrix notation and to exploit the fairly simple rules of matrix algebra. Most importantly, we have used linearity to calculate the expected values of matrix functions of random variables, and so to compute the means and variances of estimators. However, all these calculations could in principle have been done in ‘sigma’ notation with scalar quantities. Matrix algebra merely confers the benefits of simplicity and economy in what would otherwise be seriously complicated calculations.
What happens in this chapter is different in kind because the methods don't really have any counterparts in scalar algebra. One enters a novel and rather magical world where seemingly intractable problems turn out to have feasible solutions. Careful attention to detail will be necessary to keep abreast of the new ideas. This is a relatively technical chapter, and some of the arguments are quite intricate. This material does not all need to be completely absorbed in order to make use of the results in the chapters to come, and readers who are happy to take such results on trust may want to skip or browse it at first reading. The material essential for understanding least squares inference relates to the diagonalization of symmetric idempotent matrices.
9.1 The Characteristic Equation
Let $\mathbf{A}$ be square, $n \times n$, and consider for a scalar $\lambda$ the scalar equation
$$\det(\mathbf{A} - \lambda \mathbf{I}_n) = 0.$$
This is known as the characteristic equation of $\mathbf{A}$.
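To make the definition concrete, here is a minimal numerical sketch in Python with NumPy (the language and the example matrix are our own choices, not from the text). It shows that the roots of the characteristic polynomial $\det(\lambda \mathbf{I}_n - \mathbf{A})$ coincide with the eigenvalues returned by a standard routine:

```python
import numpy as np

# A small symmetric matrix, chosen as a hypothetical example.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# The characteristic equation det(A - lambda*I) = 0 defines the eigenvalues.
# For this 2x2 case it expands to (2 - lam)**2 - 1 = 0, so lam = 1 or lam = 3.
coeffs = np.poly(A)             # coefficients of the characteristic polynomial
eigenvalues = np.roots(coeffs)  # its roots are the eigenvalues

print(np.sort(eigenvalues))           # [1. 3.]
print(np.sort(np.linalg.eigvals(A)))  # the direct routine agrees: [1. 3.]
```

Solving the polynomial directly is only practical for small matrices; numerical libraries compute eigenvalues by other means, but the characteristic equation is what defines them.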