# Chapter 8. Matrix Inverse

We are moving toward solving matrix equations. Matrix equations are like regular equations (e.g., solve for *x* in 4*x* = 8) but…they have matrices. By this point in the book, you are well aware that things get complicated when matrices get involved. Nonetheless, we must embrace that complexity, because solving matrix equations is a huge part of data science.

The matrix inverse is central to solving matrix equations in practical applications, including fitting statistical models to data (think of general linear models and regression). By the end of this chapter, you will understand what the matrix inverse is, when it can and cannot be computed, how to compute it, and how to interpret it.

## The Matrix Inverse

The inverse of matrix $\mathbf{A}$ is another matrix ${\mathbf{A}}^{-1}$ (pronounced “A inverse”) that multiplies $\mathbf{A}$ to produce the identity matrix. In other words, ${\mathbf{A}}^{-1}\mathbf{A}=\mathbf{I}$. That is how you “cancel” a matrix. Another conceptualization is that we want to linearly transform a matrix into the identity matrix; the matrix inverse contains that linear transformation, and matrix multiplication is the mechanism of applying that transformation to the matrix.
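As a quick numerical sketch of this definition (using NumPy and an arbitrary invertible matrix chosen for illustration), we can compute an inverse and confirm that ${\mathbf{A}}^{-1}\mathbf{A}$ reproduces the identity matrix, up to floating-point rounding error:

```python
import numpy as np

# An example square, full-rank (hence invertible) matrix
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Compute the inverse
Ainv = np.linalg.inv(A)

# A^{-1} A should equal the identity matrix (to floating-point precision)
print(np.allclose(Ainv @ A, np.eye(2)))  # True
```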

But why do we even need to invert matrices? We need to “cancel” a matrix in order to solve problems that can be expressed in the form $\mathbf{A}\mathbf{x}=\mathbf{b}$, where $\mathbf{A}$ and $\mathbf{b}$ are known quantities and we want to solve for $\mathbf{x}$. The solution has the following general form:

$$\mathbf{x}={\mathbf{A}}^{-1}\mathbf{b}$$
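That general form can be sketched in NumPy with example values for $\mathbf{A}$ and $\mathbf{b}$ (the specific numbers here are illustrative assumptions). Note that in practice, `np.linalg.solve` is usually preferred over explicitly inverting $\mathbf{A}$, because it is numerically more stable:

```python
import numpy as np

# Known quantities: coefficient matrix A and vector b (example values)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

# Solve Ax = b via the inverse: x = A^{-1} b
x = np.linalg.inv(A) @ b

# Preferred in practice: solve without forming the inverse explicitly
x2 = np.linalg.solve(A, b)

print(np.allclose(A @ x, b))  # True: x satisfies the equation
print(np.allclose(x, x2))     # True: both approaches agree
```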

Get *Practical Linear Algebra for Data Science* now with the O’Reilly learning platform.

O’Reilly members experience books, live events, courses curated by job role, and more from O’Reilly and nearly 200 top publishers.