Chapter 7. Matrix Applications

I hope that now, after the past two theory-heavy chapters, you feel like you just finished an intense workout at the gym: exhausted but energized. This chapter should feel like a bike ride through the hills in the countryside: effortful at times but offering a fresh and inspiring perspective.

The applications in this chapter are loosely built on those in Chapter 4. I did this to create some common threads that bind the chapters on vectors and on matrices, and because I want you to see that although the concepts and applications become more complex as you progress through linear algebra, the foundations are still built on the same simple principles, such as linear weighted combinations and the dot product.

Multivariate Data Covariance Matrices

In Chapter 4, you learned how to compute the Pearson correlation coefficient as the vector dot product between two data variables, divided by the product of the vector norms. That formula was for two variables (e.g., height and weight); what if you have multiple variables (e.g., height, weight, age, weekly exercise…)?
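As a quick refresher before generalizing, here is a minimal NumPy sketch of that bivariate formula; the data values are made up purely for illustration:

    import numpy as np

    # hypothetical measurements of two variables (e.g., height and weight)
    x = np.array([170., 165., 180., 175., 160.])
    y = np.array([ 68.,  62.,  80.,  74.,  55.])

    # mean-center each variable so the dot product reflects co-variation
    xm = x - x.mean()
    ym = y - y.mean()

    # Pearson correlation: dot product divided by the product of the norms
    r = np.dot(xm, ym) / (np.linalg.norm(xm) * np.linalg.norm(ym))

    print(r)  # same value as np.corrcoef(x, y)[0, 1]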

You could imagine writing a double for loop over all of the variables, and applying the bivariate correlation formula to all pairs of variables. But that is cumbersome and inelegant, and therefore antithetical to the spirit of linear algebra. The purpose of this section is to show you how to compute covariance and correlation matrices from multivariate datasets.
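To preview where this is going, here is a sketch contrasting the double for loop with a single pass of matrix arithmetic; the random data matrix is invented for illustration, and the matrix expressions anticipate the formulas derived in this section:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))   # hypothetical dataset: 100 observations, 4 variables
    n, k = X.shape

    # cumbersome: double for loop over all pairs of variables
    R_loop = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            R_loop[i, j] = np.corrcoef(X[:, i], X[:, j])[0, 1]

    # elegant: matrix arithmetic on the mean-centered data
    Xc = X - X.mean(axis=0)          # mean-center each column
    C  = Xc.T @ Xc / (n - 1)         # covariance matrix
    s  = np.sqrt(np.diag(C))         # per-variable standard deviations
    R  = C / np.outer(s, s)          # correlation matrix

    print(np.allclose(R_loop, R))    # True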

Let’s start with covariance. Covariance quantifies the extent to which two variables vary together: it is computed as the dot product between two mean-centered variables, scaled by n − 1 (n being the number of observations).
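A minimal sketch of that computation, again with made-up numbers, checked against NumPy’s built-in np.cov:

    import numpy as np

    x = np.array([1., 3., 2., 5., 4.])
    y = np.array([2., 4., 1., 6., 3.])

    # covariance: dot product of the mean-centered variables, scaled by n - 1
    n = len(x)
    cov_xy = np.dot(x - x.mean(), y - y.mean()) / (n - 1)

    print(cov_xy)  # same value as np.cov(x, y)[0, 1]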
