Chapter 17: Topics in psychometrics
17.1 INTRODUCTION
We now turn to some optimization problems that occur in psychometrics. Most of these are concerned with the eigenstructure of variance matrices, that is, with their eigenvalues and eigenvectors. Sections 17.2–17.6 deal with principal components analysis. Here, a set of p scalar random variables x1, …, xp is transformed linearly and orthogonally into an equal number of new random variables v1, …, vp. The transformation is such that the new variables are uncorrelated. The first principal component v1 is the normalized linear combination of the x variables with maximum variance, the second principal component v2 is the normalized linear combination having maximum variance out of all linear combinations uncorrelated with v1, and so on. One hopes that the first few components account for a large proportion of the variance of the x variables. Another way of looking at principal components analysis is to approximate the variance matrix of x, say Ω, which is assumed known, ‘as well as possible’ by another positive semidefinite matrix of lower rank. If Ω is not known, we use an estimate S of Ω based on a sample of x and try to approximate S rather than Ω.
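To make the construction concrete, the two maximizations described above can be written as constrained optimization problems; the weight vectors $a_1, a_2, \dots$ below are illustrative notation, not taken from the chapter:

$$
v_1 = a_1' x, \qquad a_1 = \operatorname*{arg\,max}_{a'a = 1} \; a' \Omega a,
$$
$$
v_2 = a_2' x, \qquad a_2 = \operatorname*{arg\,max}_{a'a = 1,\; a'\Omega a_1 = 0} \; a' \Omega a,
$$

and so on, where the constraint $a'\Omega a_1 = 0$ expresses that $a'x$ is uncorrelated with $v_1$. The solutions $a_1, \dots, a_p$ are orthonormal eigenvectors of $\Omega$, and $\operatorname{var}(v_i) = a_i' \Omega a_i$ equals the $i$-th largest eigenvalue of $\Omega$.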
Instead of approximating S, which depends on the observation matrix X (containing the sample values of x), we can also attempt to approximate X directly. For example, we could approximate X by a matrix of lower rank, say $\hat{X}$ of rank $r$. Employing a singular-value decomposition we can write $\hat{X} = U \Lambda^{1/2} V'$, where $U'U = V'V = I_r$ and $\Lambda$ is an $r \times r$ diagonal matrix containing the positive eigenvalues of $\hat{X}'\hat{X}$.
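As a numerical illustration of this rank reduction, here is a minimal sketch using NumPy; the matrix dimensions, the target rank r, and all variable names are illustrative choices, not taken from the text.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))   # observation matrix: n = 100 observations on p = 5 variables
r = 2                               # target rank of the approximation (illustrative choice)

# Singular-value decomposition X = U diag(s) V'
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the r largest singular values and the corresponding singular vectors
X_hat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

print(np.linalg.matrix_rank(X_hat))       # 2: the approximation has rank r
print(np.linalg.norm(X - X_hat, 'fro'))   # size of the approximation error (Frobenius norm)

Retaining the r largest singular values in this way gives, by the Eckart–Young theorem, the best rank-r approximation of X in the Frobenius norm, which is the sense in which a low-rank approximation accounts for as much of X as possible.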