
Linear Algebra

Module 16 Diagonalization

In this module you will learn
  • How to diagonalize a matrix.
  • When a matrix can and cannot be diagonalized.
Suppose \(\mathcal{T}\) is a linear transformation and \(\vec v_{1}\) and \(\vec v_{2}\) are eigenvectors with eigenvalues \(\lambda_{1}\) and \(\lambda_{2}\text{.}\) With this setup, for any \(\vec a\in\Span\Set{\vec v_1,\vec v_2}\text{,}\) we can compute \(\mathcal{T}(\vec a)\) with minimal effort.
Let’s get specific. Define \(\mathcal{T}:\R^{2}\to\R^{2}\) to be the linear transformation with matrix \(M=\mat{1&2\\3&2}\text{.}\) Let \(\vec v_{1}=\mat{-1\\1}\) and \(\vec v_{2}=\mat{2\\3}\text{,}\) and notice that \(\vec v_{1}\) is an eigenvector for \(\mathcal{T}\) with eigenvalue \(-1\) and that \(\vec v_{2}\) is an eigenvector for \(\mathcal{T}\) with eigenvalue \(4\text{.}\) Let \(\vec a=\vec v_{1}+\vec v_{2}\text{.}\)
Now,
\begin{equation*} \mathcal{T}(\vec a)=\mathcal{T}(\vec v_{1}+\vec v_{2})=\mathcal{T}(\vec v_{1})+\mathcal{T}(\vec v_{2})=-\vec v_{1}+4\vec v_{2}. \end{equation*}
We didn’t need to refer to the entries of \(M\) to compute \(\mathcal{T}(\vec a)\text{.}\)
Exploring further, let \(\mathcal{V}=\Set{\vec v_1,\vec v_2}\) and notice that \(\mathcal{V}\) is a basis for \(\R^{2}\text{.}\) By definition \([\vec a]_{\mathcal{V}}=\mat{1\\1}\text{,}\) and so we just computed
\begin{equation*} \mathcal{T}\mat{1\\1}_{\mathcal{V}}= \mat{-1\\4}_{\mathcal{V}}. \end{equation*}
When represented in the \(\mathcal{V}\) basis, computing \(\mathcal{T}\) is easy. In general,
\begin{equation*} \mathcal{T}(\alpha\vec v_{1}+\beta\vec v_{2})=\alpha\mathcal{T}(\vec v_{1})+\beta\mathcal{T}(\vec v_{2})=-\alpha\vec v_{1}+4\beta\vec v_{2}, \end{equation*}
and so
\begin{equation*} \mathcal{T}\mat{\alpha\\\beta}_{\mathcal{V}}= \mat{-\alpha\\4\beta}_{\mathcal{V}}. \end{equation*}
In other words, \(\mathcal{T}\text{,}\) when acting on vectors written in the \(\mathcal{V}\) basis, just multiplies each coordinate by an eigenvalue. This is enough information to determine the matrix for \(\mathcal{T}\) in the \(\mathcal{V}\) basis:
\begin{equation*} [\mathcal{T}]_{\mathcal{V}}=\mat{-1&0\\0&4}. \end{equation*}
The matrix representations \([\mathcal{T}]_{\mathcal{E}}=\mat{1&2\\3&2}\) and \([\mathcal{T}]_{\mathcal{V}}=\mat{-1&0\\0&4}\) are equally valid, but writing \(\mathcal{T}\) in the \(\mathcal{V}\) basis gives a very simple matrix!
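As a sanity check, the computations above can be reproduced numerically. The following sketch (using NumPy, which is not part of the text) verifies the eigenvector equations and the diagonal representation of \(\mathcal{T}\) in the \(\mathcal{V}\) basis:

```python
import numpy as np

# Matrix of T in the standard basis, and the two eigenvectors from the text.
M = np.array([[1, 2], [3, 2]])
v1 = np.array([-1, 1])   # eigenvector with eigenvalue -1
v2 = np.array([2, 3])    # eigenvector with eigenvalue 4

# Verify T(v1) = -v1 and T(v2) = 4*v2.
print(np.allclose(M @ v1, -v1))     # True
print(np.allclose(M @ v2, 4 * v2))  # True

# Change-of-basis matrix from the V basis to the standard basis:
# its columns are v1 and v2.
P = np.column_stack([v1, v2])
D = np.diag([-1, 4])

# [T]_V = P^{-1} M P should be the diagonal matrix D.
print(np.allclose(np.linalg.inv(P) @ M @ P, D))  # True
```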

Section 16.1 Diagonalization

Recall that two matrices are similar if they represent the same transformation but in possibly different bases. The process of diagonalizing a matrix \(A\) is that of finding a diagonal matrix that is similar to \(A\text{,}\) and you can bet that this process is closely related to eigenvectors/values.
Let \(\mathcal{T}:\R^{n}\to\R^{n}\) be a linear transformation and suppose that \(\mathcal{B}=\Set{\vec b_1,\ldots,\vec b_n}\) is a basis so that
\begin{equation*} [\mathcal{T}]_{\mathcal{B}}= \matc{\alpha_1&0&\cdots &0\\0&\alpha_2&\cdots &0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\alpha_n} \end{equation*}
is a diagonal matrix. This means that \(\vec b_{1},\ldots,\vec b_{n}\) are eigenvectors for \(\mathcal{T}\text{!}\) The proof goes as follows:
\begin{equation*} [\mathcal{T}]_{\mathcal{B}}[\vec b_{1}]_{\mathcal{B}}= \matc{\alpha_1&0&\cdots &0\\0&\alpha_2&\cdots &0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\alpha_n}\matc{1\\0\\\vdots\\0}= \matc{\alpha_1\\0\\\vdots\\0}=\alpha_{1}[\vec b_{1}]_{\mathcal{B}}=[\alpha_{1}\vec b_{1}]_{\mathcal{B}}, \end{equation*}
and in general
\begin{equation*} [\mathcal{T}]_{\mathcal{B}}[\vec b_{i}]_{\mathcal{B}}= \alpha_{i}[\vec b_{i}]_{\mathcal{B}}=[\alpha_{i}\vec b_{i}]_{\mathcal{B}}. \end{equation*}
Therefore, for \(i=1,\ldots,n\text{,}\) we have
\begin{equation*} \mathcal{T}\vec b_{i}=\alpha_{i}\vec b_{i}. \end{equation*}
Since \(\mathcal{B}\) is a basis, \(\vec b_{i}\neq \vec 0\) for every \(i\text{,}\) and so each \(\vec b_{i}\) is an eigenvector for \(\mathcal{T}\) with corresponding eigenvalue \(\alpha_{i}\text{.}\)
We’ve just shown that if a linear transformation \(\mathcal{T}:\R^{n}\to\R^{n}\) can be represented by a diagonal matrix, then there must be a basis for \(\R^{n}\) consisting of eigenvectors for \(\mathcal{T}\text{.}\) The converse is also true.
Suppose again that \(\mathcal{T}:\R^{n}\to\R^{n}\) is a linear transformation and that \(\mathcal{B}=\Set{\vec b_1,\ldots,\vec b_n}\) is a basis of eigenvectors for \(\mathcal{T}\) with corresponding eigenvalues \(\alpha_{1},\ldots,\alpha_{n}\text{.}\) By definition,
\begin{equation*} \mathcal{T}(\vec b_{i})=\alpha_{i}\vec b_{i}, \end{equation*}
and so
\begin{equation*} \mathcal{T}\matc{k_1\\k_2\\\vdots\\k_n}_{\mathcal{B}}= \matc{\alpha_1k_1\\\alpha_2k_2\\\vdots\\\alpha_nk_n}_{\mathcal{B}}\qquad\text{which is equivalent to}\qquad [\mathcal{T}]_{\mathcal{B}}\matc{k_1\\k_2\\\vdots\\k_n}= \matc{\alpha_1k_1\\\alpha_2k_2\\\vdots\\\alpha_nk_n}. \end{equation*}
The only matrix that does this is
\begin{equation*} [\mathcal{T}]_{\mathcal{B}}= \matc{\alpha_1&0&\cdots &0\\0&\alpha_2&\cdots &0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\alpha_n}, \end{equation*}
which is a diagonal matrix.
What we’ve shown is summarized by the following theorem: a linear transformation \(\mathcal{T}:\R^{n}\to\R^{n}\) can be represented by a diagonal matrix if and only if there exists a basis for \(\R^{n}\) consisting of eigenvectors for \(\mathcal{T}\text{.}\)
Now that we have a handle on representing a linear transformation by a diagonal matrix, let’s tackle the problem of diagonalizing a matrix itself.

Definition 16.1.2. Diagonalizable.

A matrix is diagonalizable if it is similar to a diagonal matrix.
Suppose \(A\) is an \(n\times n\) matrix. \(A\) induces some transformation \(\mathcal{T}_{A}:\R^{n}\to\R^{n}\text{.}\) By definition, this means \(A=[\mathcal{T}_{A}]_{\mathcal{E}}\text{.}\) The matrix \(B\) is similar to \(A\) if there is some basis \(\mathcal{V}\) so that \(B=[\mathcal{T}_{A}]_{\mathcal{V}}\text{.}\) Using change-of-basis matrices, we see
\begin{equation*} A=\BasisChange{\mathcal{V}}{\mathcal{E}}[\mathcal{T}_{A}]_{\mathcal{V}}\BasisChange{\mathcal{E}}{\mathcal{V}}=\BasisChange{\mathcal{V}}{\mathcal{E}}B\BasisChange{\mathcal{E}}{\mathcal{V}}. \end{equation*}
In other words, \(A\) and \(B\) are similar if there is some invertible change-of-basis matrix \(P\) so
\begin{equation*} A=PBP^{-1}. \end{equation*}
Based on our earlier discussion, \(B\) will be a diagonal matrix if and only if \(P\) is the change-of-basis matrix for a basis of eigenvectors. In this case, we know \(B\) will be the diagonal matrix with eigenvalues along the diagonal (in the proper order).

Example 16.1.3.

Let \(A=\mat{1&2&5\\-11&14&5\\-3&2&9}\) be a matrix and notice that \(\vec v_{1}=\mat{5\\5\\1}\text{,}\) \(\vec v_{2}=\mat{1\\1\\1}\text{,}\) and \(\vec v_{3}=\mat{1\\3\\1}\) are eigenvectors for \(A\text{.}\) Diagonalize \(A\text{.}\)

Solution.

First, we find the eigenvalues that correspond to the eigenvectors \(\vec v_{1}, \vec v_{2}\text{,}\) and \(\vec v_{3}\text{.}\) Computing,
\begin{equation*} A\vec v_{1}=\matc{20\\20\\4}=4\vec v_{1},\qquad A\vec v_{2}=\mat{8\\8\\8}=8\vec v_{2},\qquad\text{and}\qquad A\vec v_{3}=\mat{12\\36\\12}=12\vec v_{3}, \end{equation*}
and so the eigenvalue corresponding to \(\vec v_{1}\) is \(4\text{,}\) to \(\vec v_{2}\) is \(8\text{,}\) and to \(\vec v_{3}\) is \(12\text{.}\)
The change-of-basis matrix which converts from the \(\Set{\vec v_1,\vec v_2,\vec v_3}\) basis to the standard basis is
\begin{equation*} P = \mat{5&1&1\\5&1&3\\1&1&1}, \end{equation*}
and
\begin{equation*} P^{-1}= \mat{\frac{1}{4}&0&-\frac{1}{4}\\\frac{1}{4}&-\frac{1}{2}&\frac{5}{4}\\-\frac{1}{2}&\frac{1}{2}&0}. \end{equation*}
Define \(D\) to be the \(3\times 3\) matrix with the eigenvalues of \(A\) along the diagonal (in the order, \(4,8,12\)). That is, the matrix \(A\) written in the basis of eigenvectors is
\begin{equation*} D = \mat{4&0&0\\0&8&0\\0&0&12}. \end{equation*}
We now know
\begin{equation*} A = PDP^{-1}= \mat{5&1&1\\5&1&3\\1&1&1}\mat{4&0&0\\0&8&0\\0&0&12}\mat{\frac{1}{4}&0&-\frac{1}{4}\\\frac{1}{4}&-\frac{1}{2}&\frac{5}{4}\\-\frac{1}{2}&\frac{1}{2}&0}, \end{equation*}
and that \(D\) is the diagonalized form of \(A\text{.}\)
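The factorization in this example is easy to verify numerically. A sketch with NumPy (not part of the text), using the matrices exactly as given above:

```python
import numpy as np

# A, its eigenvectors (as columns of P), and D, as in Example 16.1.3.
A = np.array([[1, 2, 5], [-11, 14, 5], [-3, 2, 9]])
P = np.array([[5, 1, 1], [5, 1, 3], [1, 1, 1]])  # columns are v1, v2, v3
D = np.diag([4, 8, 12])

# Check the eigenvector equations A v_i = lambda_i v_i, column by column.
for i, lam in enumerate([4, 8, 12]):
    v = P[:, i]
    print(np.allclose(A @ v, lam * v))  # True, three times

# Check the factorization A = P D P^{-1}.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```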

Section 16.2 Non-diagonalizable Matrices

Is every matrix diagonalizable? Unfortunately the world is not that sweet. But we have a tool to tell whether a matrix is diagonalizable: checking whether there is a basis of eigenvectors.

Example 16.2.1.

Is the matrix \(R=\mat{0&-1\\1&0}\) diagonalizable?

Solution.

Computing, \(\Char(R)=\lambda^{2}+1\) has no real roots. Therefore, \(R\) has no real eigenvalues. Consequently, \(R\) has no real eigenvectors, and so \(R\) is not diagonalizable. (If we allow complex eigenvalues, then \(R\) is diagonalizable and is similar to the matrix \(\mat{i&0\\0&-i}\text{.}\) So, to be more precise, we might say \(R\) is not real diagonalizable.)
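Although \(R\) has no real eigenvalues, numerical software reports its complex ones. A quick check with NumPy (not part of the text):

```python
import numpy as np

# The rotation matrix R from Example 16.2.1 has no real eigenvalues,
# but its complex eigenvalues are i and -i.
R = np.array([[0, -1], [1, 0]])
eigenvalues = np.linalg.eigvals(R)
print(sorted(eigenvalues, key=lambda z: z.imag))  # approximately [-1j, 1j]
```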

Example 16.2.2.

Is the matrix \(D=\mat{5&0\\0&5}\) diagonalizable?

Solution.

For every vector \(\vec v\in \R^{2}\text{,}\) we have \(D\vec v=5\vec v\text{,}\) and so every non-zero vector in \(\R^{2}\) is an eigenvector for \(D\text{.}\) Thus, \(\mathcal{E}=\Set{\xhat, \yhat}\) is a basis of eigenvectors for \(\R^{2}\text{,}\) and so \(D\) is diagonalizable. (Of course, every square matrix is similar to itself and \(D\) is already diagonal, so of course it’s diagonalizable.)

Example 16.2.3.

Is the matrix \(J=\mat{5&1\\0&5}\) diagonalizable?

Solution.

Computing, \(\Char(J)=(5-\lambda)^{2}\) which has a double root at 5. Therefore, 5 is the only eigenvalue of \(J\text{.}\) The eigenvectors of \(J\) all lie in
\begin{equation*} \Null(J-5I)=\Span\Set{\mat{1\\0}}. \end{equation*}
Since this is a one-dimensional space, there is no basis for \(\R^{2}\) consisting of eigenvectors for \(J\text{.}\) Therefore, \(J\) is not diagonalizable.
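The dimension count in this example can be verified with the rank-nullity theorem. A sketch with NumPy (not part of the text):

```python
import numpy as np

# J from Example 16.2.3: a repeated eigenvalue 5, but only a
# one-dimensional eigenspace, so no basis of eigenvectors exists.
J = np.array([[5, 1], [0, 5]])

# Geometric multiplicity = nullity of J - 5I = 2 - rank(J - 5I).
geometric_mult = 2 - np.linalg.matrix_rank(J - 5 * np.eye(2))
print(geometric_mult)  # 1, so J is not diagonalizable
```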

Example 16.2.4.

Is the matrix \(K=\mat{5&1\\0&2}\) diagonalizable?

Solution.

Computing, \(\Char(K)=(5-\lambda)(2-\lambda)\) which has roots at 5 and 2. Therefore, 5 and 2 are the eigenvalues of \(K\text{.}\) The eigenvectors of \(K\) lie in one of
\begin{equation*} \Null(K-5I) = \Span\Set{\mat{1\\0}}\qquad\text{or}\qquad\Null(K-2I) = \Span\Set{\mat{-1\\3}}. \end{equation*}
Picking one eigenvector from each null space, we have that \(\Set{\mat{1\\0},\mat{-1\\3}}\) is a basis for \(\R^{2}\) consisting of eigenvectors of \(K\text{.}\) Thus, \(K\) is diagonalizable.
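A quick numerical confirmation with NumPy (not part of the text), using the eigenvectors found above as the columns of a change-of-basis matrix:

```python
import numpy as np

# K from Example 16.2.4, with one eigenvector per eigenvalue as columns of P.
K = np.array([[5, 1], [0, 2]])
P = np.array([[1, -1], [0, 3]])  # columns: eigenvectors for 5 and 2
D = np.diag([5, 2])

# K = P D P^{-1}, so K is diagonalizable.
print(np.allclose(K, P @ D @ np.linalg.inv(P)))  # True
```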

Takeaway 16.2.5.

Not all matrices are diagonalizable, but you can check if an \(n\times n\) matrix is diagonalizable by determining whether there is a basis of eigenvectors for \(\R^{n}\text{.}\)

Section 16.3 Geometric and Algebraic Multiplicities

When analyzing linear transformations or matrices, we’re often interested in studying the subspaces where vectors are stretched by only one eigenvalue. These are called the eigenspaces.

Definition 16.3.1. Eigenspace.

Let \(A\) be an \(n\times n\) matrix with eigenvalues \(\lambda_{1},\ldots,\lambda_{m}\text{.}\) The eigenspace of \(A\) corresponding to the eigenvalue \(\lambda_{i}\) is the null space of \(A-\lambda_{i}I\text{.}\) That is, it is the space spanned by all eigenvectors that have the eigenvalue \(\lambda_{i}\text{.}\)
The geometric multiplicity of an eigenvalue \(\lambda_{i}\) is the dimension of the corresponding eigenspace. The algebraic multiplicity of \(\lambda_{i}\) is the number of times \(\lambda_{i}\) occurs as a root of the characteristic polynomial of \(A\) (i.e., the number of times \(x-\lambda_{i}\) occurs as a factor).
Now is the time when linear algebra and regular algebra (the solving of non-linear equations) combine. We know that every root of the characteristic polynomial of a matrix gives an eigenvalue for that matrix. Since the degree of the characteristic polynomial of an \(n\times n\) matrix is always \(n\text{,}\) the fundamental theorem of algebra tells us exactly how many roots to expect, provided we count roots with multiplicity and allow complex roots.
Recall that the multiplicity of a root of a polynomial is the power of that root in the factored polynomial. So, for example, \(p(x)=(4-x)^{3}(5-x)\) has a root of \(4\) with multiplicity \(3\) and a root of \(5\) with multiplicity \(1\text{.}\)
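The multiplicities in this example polynomial can be recovered numerically. A sketch with NumPy (not part of the text); the coefficients below are the hand-expanded form of \(p\text{,}\) namely \(x^{4}-17x^{3}+108x^{2}-304x+320\text{:}\)

```python
import numpy as np

# p(x) = (4-x)^3 (5-x) expanded in descending powers of x.
coefficients = [1, -17, 108, -304, 320]
roots = np.roots(coefficients)

# The root 4 appears three times and the root 5 once,
# matching the multiplicities read off from the factored form.
print(np.sort(roots.real))  # approximately [4, 4, 4, 5]
```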

Example 16.3.2.

Let \(R=\mat{0&-1\\1&0}\) and find the geometric and algebraic multiplicity of each eigenvalue of \(R\text{.}\)

Solution.

Computing, \(\Char(R)=\lambda^{2}+1\text{,}\) which has no real roots. Therefore, \(R\) has no real eigenvalues, and so there are no multiplicities to report. (If we allow complex eigenvalues, then the eigenvalues \(i\) and \(-i\) both have geometric and algebraic multiplicity \(1\text{.}\))

Example 16.3.3.

Let \(D=\mat{5&0\\0&5}\) and find the geometric and algebraic multiplicity of each eigenvalue of \(D\text{.}\)

Solution.

Computing, \(\Char(D)=(5-\lambda)^{2}\text{,}\) so \(5\) is an eigenvalue of \(D\) with algebraic multiplicity \(2\text{.}\) The eigenspace of \(D\) corresponding to 5 is \(\R^{2}\text{.}\) Thus, the geometric multiplicity of \(5\) is \(2\text{.}\)

Example 16.3.4.

Let \(J=\mat{5&1\\0&5}\) and find the geometric and algebraic multiplicity of each eigenvalue of \(J\text{.}\)

Solution.

Computing, \(\Char(J)=(5-\lambda)^{2}\text{,}\) so \(5\) is an eigenvalue of \(J\) with algebraic multiplicity \(2\text{.}\) The eigenspace of \(J\) corresponding to \(5\) is \(\Span\Set{\mat{1\\0}}\text{.}\) Thus, the geometric multiplicity of \(5\) is \(1\text{.}\)

Example 16.3.5.

Let \(K=\mat{5&1\\0&2}\) and find the geometric and algebraic multiplicity of each eigenvalue of \(K\text{.}\)

Solution.

Computing, \(\Char(K)=(5-\lambda)(2-\lambda)\text{,}\) so \(5\) and \(2\) are eigenvalues of \(K\text{,}\) both with algebraic multiplicity \(1\text{.}\) The eigenspace of \(K\) corresponding to \(5\) is \(\Span\Set{\mat{1\\0}}\) and the eigenspace corresponding to \(2\) is \(\Span\Set{\mat{-1\\3}}\text{.}\) Thus, both \(5\) and \(2\) have a geometric multiplicity of \(1\text{.}\)
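The geometric multiplicities in Examples 16.3.3–16.3.5 all follow the same recipe: the dimension of \(\Null(A-\lambda I)\) is \(n\) minus the rank of \(A-\lambda I\text{.}\) A sketch with NumPy (not part of the text; the helper function name is ours):

```python
import numpy as np

def geometric_multiplicity(A, eigenvalue):
    """Dimension of Null(A - eigenvalue*I), via the rank-nullity theorem."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - eigenvalue * np.eye(n))

D = np.array([[5, 0], [0, 5]])
J = np.array([[5, 1], [0, 5]])
K = np.array([[5, 1], [0, 2]])

print(geometric_multiplicity(D, 5))  # 2
print(geometric_multiplicity(J, 5))  # 1
print(geometric_multiplicity(K, 5), geometric_multiplicity(K, 2))  # 1 1
```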
Two facts are worth recording. First, the geometric multiplicity of an eigenvalue is always at most its algebraic multiplicity. Second, if complex eigenvalues are allowed, the fundamental theorem of algebra guarantees that the algebraic multiplicities of the eigenvalues of an \(n\times n\) matrix sum to \(n\text{.}\) We can now deduce the following: an \(n\times n\) matrix is diagonalizable if and only if the geometric multiplicities of its eigenvalues sum to \(n\text{.}\)

Proof.

Let \(A\) be an \(n\times n\) matrix with eigenvalues \(\lambda_{1},\ldots,\lambda_{k}\text{.}\) Let \(E_{1},\ldots,E_{k}\) be bases for the eigenspaces corresponding to \(\lambda_{1},\ldots,\lambda_{k}\text{.}\) We will start by showing \(E=E_{1}\cup\cdots\cup E_{k}\) is a linearly independent set using the following two lemmas.
No New Eigenvalue Lemma. Suppose that \(\vec v_{1},\ldots,\vec v_{k}\) are linearly independent eigenvectors of a matrix \(A\text{,}\) and let \(\lambda_{1},\ldots,\lambda_{k}\) be the corresponding eigenvalues. Then, any eigenvector for \(A\) contained in \(\Span\Set{\vec v_1,\ldots,\vec v_k}\) must have one of \(\lambda_{1},\ldots,\lambda_{k}\) as its eigenvalue.
The proof goes as follows. Suppose \(\vec v=\sum_{i\leq k}\alpha_{i}\vec v_{i}\) is an eigenvector for \(A\) with eigenvalue \(\lambda\text{.}\) We now compute \(A\vec v\) in two different ways: once by using the fact that \(\vec v\) is an eigenvector, and again by using the fact that \(\vec v\) is a linear combination of other eigenvectors. Observe
\begin{equation*} A\vec v=\lambda \vec v=\lambda\left(\sum_{i\leq k}\alpha_{i}\vec v_{i}\right) =\sum_{i\leq k}\alpha_{i}\lambda\vec v_{i} \end{equation*}
and
\begin{equation*} A\vec v=A\left(\sum_{i\leq k}\alpha_{i}\vec v_{i}\right) =\sum_{i\leq k}\alpha_{i} A\vec v_{i} =\sum_{i\leq k}\alpha_{i}\lambda_{i}\vec v_{i}. \end{equation*}
We now have
\begin{equation*} \vec 0=A\vec v-A\vec v = \sum_{i\leq k}\alpha_{i}\lambda\vec v_{i} -\sum_{i\leq k}\alpha_{i}\lambda_{i}\vec v_{i} =\sum_{i\leq k}\alpha_{i}(\lambda-\lambda_{i})\vec v_{i}. \end{equation*}
Because \(\vec v_{1},\ldots,\vec v_{k}\) are linearly independent, we know \(\alpha_{i}(\lambda-\lambda_{i})=0\) for all \(i\leq k\text{.}\) Further, because \(\vec v\) is non-zero (it’s an eigenvector), we know at least one \(\alpha_{i}\) is non-zero. Therefore \(\lambda-\lambda_{i}=0\) for at least one \(i\text{.}\) In other words, \(\lambda=\lambda_{i}\) for at least one \(i\text{,}\) which is what we set out to show. (You may notice that we’ve proved something stronger than we needed: if an eigenvector is a linear combination of linearly independent eigenvectors, the only non-zero coefficients of that linear combination must belong to eigenvectors with the same eigenvalue.)
Basis Extension Lemma. Let \(P=\Set{\vec p_1,\ldots,\vec p_a}\) and \(Q=\Set{\vec q_1,\ldots,\vec q_b}\) be linearly independent sets, and suppose \(P\cup\Set{\vec q}\) is linearly independent for all non-zero \(\vec q\in\Span Q\text{.}\) Then \(P\cup Q\) is linearly independent.
To show this, suppose \(\vec 0=\alpha_{1}\vec p_{1}+\cdots+\alpha_{a}\vec p_{a}+\beta_{1}\vec q_{1}+\cdots+\beta_{b}\vec q_{b}\) is a linear combination of vectors in \(P\cup Q\text{.}\) Let \(\vec q=\beta_{1}\vec q_{1}+\cdots+\beta_{b}\vec q_{b}\text{.}\) First, note that \(\vec q\) must be the zero vector. If not, \(\vec 0=\alpha_{1}\vec p_{1}+\cdots+\alpha_{a}\vec p_{a}+\vec q\) is a non-trivial linear combination of vectors in \(P\cup\Set{\vec q}\text{,}\) which contradicts the assumption that \(P\cup\Set{\vec q}\) is linearly independent. Since we’ve established \(\vec 0=\vec q=\beta_{1}\vec q_{1}+\cdots+\beta_{b}\vec q_{b}\text{,}\) we conclude \(\beta_{1}=\cdots=\beta_{b}=0\) because \(Q\) is linearly independent. It follows that since \(\vec 0=\alpha_{1}\vec p_{1}+\cdots+\alpha_{a}\vec p_{a}+\vec q =\alpha_{1}\vec p_{1}+\cdots+\alpha_{a}\vec p_{a}+\vec 0\text{,}\) we must have that \(\alpha_{1}=\cdots=\alpha_{a}=0\) because \(P\) is linearly independent. This shows that the only way to express \(\vec 0\) as a linear combination of vectors in \(P\cup Q\) is as the trivial linear combination, and so \(P\cup Q\) is linearly independent.
Now we can put our lemmas to good use. We will use induction to show that \(E=E_{1}\cup\cdots\cup E_{k}\) is linearly independent. By assumption \(E_{1}\) is linearly independent. Now, suppose \(U=E_{1}\cup\cdots\cup E_{j}\) is linearly independent. By construction, every non-zero vector \(\vec v\in\Span E_{j+1}\) is an eigenvector for \(A\) with eigenvalue \(\lambda_{j+1}\text{.}\) Therefore, since \(\lambda_{j+1}\neq \lambda_{i}\) for \(1\leq i\leq j\text{,}\) we may apply the No New Eigenvalue Lemma to see that \(\vec v\notin\Span U\text{.}\) It follows that \(U\cup\Set{\vec v}\) is linearly independent. Since \(E_{j+1}\) is itself linearly independent, we may now apply the Basis Extension Lemma to deduce that \(U\cup E_{j+1}\) is linearly independent. This shows that \(E=E_{1}\cup\cdots\cup E_{k}\) is linearly independent.
To conclude notice that by construction, \(\text{geometric mult}(\lambda_{i})=\abs{E_i}\text{.}\) Since \(E=E_{1}\cup\cdots\cup E_{k}\) is linearly independent, the \(E_{i}\)’s must be disjoint and so \(\sum\text{geometric mult}(\lambda_{i})=\sum \abs{E_i}=\abs{E}\text{.}\) If \(\sum\text{geometric mult}(\lambda_{i})=n\text{,}\) then \(E\subseteq\R^{n}\) is a linearly independent set of \(n\) vectors and so is a basis for \(\R^{n}\text{.}\) Finally, because we have a basis for \(\R^{n}\) consisting of eigenvectors for \(A\text{,}\) we know \(A\) is diagonalizable.
Conversely, if there is a basis \(E\) for \(\R^{n}\) consisting of eigenvectors, we must have a linearly independent set of \(n\) eigenvectors. Grouping these eigenvectors by eigenvalue, an application of the No New Eigenvalue Lemma shows that each group must actually be a basis for its eigenspace. Thus, the sum of the geometric multiplicities must be \(n\text{.}\)
Finally, if complex eigenvalues are allowed, the algebraic multiplicities sum to \(n\text{.}\) Since the algebraic multiplicities bound the geometric multiplicities, the only way for the geometric multiplicities to sum to \(n\) is if corresponding geometric and algebraic multiplicities are equal.
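The result just proved suggests a computational test for diagonalizability: sum the geometric multiplicities and compare with \(n\text{.}\) A sketch with NumPy (not part of the text); `is_diagonalizable` is a hypothetical helper that assumes all the distinct real eigenvalues of the matrix are supplied:

```python
import numpy as np

def is_diagonalizable(A, eigenvalues, tol=1e-10):
    """Check whether the geometric multiplicities of the supplied
    (distinct) eigenvalues sum to n, which by the theorem above is
    equivalent to A being diagonalizable."""
    n = A.shape[0]
    total = sum(n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
                for lam in eigenvalues)
    return bool(total == n)

J = np.array([[5, 1], [0, 5]])   # Example 16.2.3: not diagonalizable
K = np.array([[5, 1], [0, 2]])   # Example 16.2.4: diagonalizable

print(is_diagonalizable(J, [5]))     # False
print(is_diagonalizable(K, [5, 2]))  # True
```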

Exercises 16.4 Exercises

1.

For each of the matrices below, find the geometric and algebraic multiplicity of each eigenvalue.
  1. \(\displaystyle A=\mat{2&0\\-2&1}\)
  2. \(\displaystyle B=\mat{3&0\\0&3}\)
  3. \(\displaystyle C=\mat{3&0\\3&0}\)
  4. \(\displaystyle D=\mat{0&3/2&4\\0&1&0\\-1&1&4}\)
  5. \(\displaystyle E=\mat{2&1/2&0\\0&1&0\\0&1/2&2}\)
Solution.
  1. For \(A\text{:}\) \(\lambda=1\text{,}\) Algebraic: 1, Geometric: 1; \(\lambda=2\text{,}\) Algebraic: 1, Geometric: 1
  2. For \(B\text{:}\) \(\lambda=3\text{,}\) Algebraic: 2, Geometric: 2
  3. For \(C\text{:}\) \(\lambda=0\text{,}\) Algebraic: 1, Geometric: 1; \(\lambda=3\text{,}\) Algebraic: 1, Geometric: 1
  4. For \(D\text{:}\) \(\lambda=1\text{,}\) Algebraic: 1, Geometric: 1; \(\lambda=2\text{,}\) Algebraic: 2, Geometric: 1
  5. For \(E\text{:}\) \(\lambda=2\text{,}\) Algebraic: 2, Geometric: 2; \(\lambda=1\text{,}\) Algebraic: 1, Geometric: 1

2.

For each matrix from question 16.4.1, diagonalize the matrix if possible. Otherwise explain why the matrix cannot be diagonalized.
Solution.
  1. \(\displaystyle \mat{2&0\\0&1}\)
  2. \(\displaystyle \mat{3&0\\0&3}\)
  3. \(\displaystyle \mat{3&0\\0&0}\)
  4. Not diagonalizable.
  5. \(\displaystyle \mat{2&0&0\\0&2&0\\0&0&1}\)

3.

Give an example of a \(4\times 4\) matrix with \(2\) and \(7\) as its only eigenvalues.
Solution.
\(\mat{2&0&0&0\\0&2&0&0\\0&0&7&0\\0&0&0&7}\)

4.

Can the geometric multiplicity of an eigenvalue ever be \(0\text{?}\) Explain.
Solution.
No. If \(\lambda\) is an eigenvalue of a matrix \(A\text{,}\) then \(\det(A - \lambda I)=0\) and therefore \(A - \lambda I\) is not invertible. Specifically, \(\Nullity(A - \lambda I) \geq 1\) and hence there exists at least one eigenvector for the eigenvalue \(\lambda\text{.}\) Therefore the geometric multiplicity of \(\lambda\) is at least one.

5.

  1. Show that if \(\vec v_{1}\) and \(\vec v_{2}\) are eigenvectors for a matrix \(M\) corresponding to different eigenvalues, then \(\vec v_{1}\) and \(\vec v_{2}\) are linearly independent.
  2. If possible, give an example of a non-diagonalizable \(3\times 3\) matrix where \(1\) and \(-1\) are the only eigenvalues.
  3. If possible, give an example of a non-diagonalizable \(2\times 2\) matrix where \(1\) and \(-1\) are the only eigenvalues.
Solution.
  1. Let \(\vec v_{1}\) and \(\vec v_{2}\) be eigenvectors for a matrix \(M\) corresponding to distinct eigenvalues \(\lambda_{1}\) and \(\lambda_{2}\) respectively. Let \(a,b \in \R\) be such that \(a \vec v_{1}+ b\vec v_{2}= \vec 0\text{.}\) Multiplying both sides by \(M - \lambda_{1}I\text{,}\) we get
    \begin{equation*} b(\lambda_{2}- \lambda_{1}) \vec v_{2}= \vec 0. \end{equation*}
    Since \(\vec v_{2}\) is an eigenvector, it is nonzero. Hence, either \(b=0\) or \(\lambda_{2}- \lambda_{1}= 0\text{.}\) Since \(\lambda_{1}\neq\lambda_{2}\) we know \(\lambda_{2}- \lambda_{1}\neq 0\) and so \(b=0\text{.}\)
    We have deduced that \(a \vec v_{1}= \vec 0\text{.}\) However, since \(\vec v_{1}\) is nonzero (because it is an eigenvector), we must have that \(a = 0\text{.}\) This means that \(\vec v_{1}\) and \(\vec v_{2}\) are linearly independent.
  2. \(\displaystyle \mat{1&1&0\\0&1&0\\0&0&-1}\)
  3. This is impossible. Suppose that for some matrix \(M\text{,}\) \(\vec v_{1}\) is an eigenvector corresponding to \(1\) and \(\vec v_{2}\) is an eigenvector corresponding to \(-1.\) By 16.4.5.a, \(\vec v_{1}\) and \(\vec v_{2}\) are linearly independent and thus form a basis for \(\R^{2}\text{.}\) Since \(\R^{2}\) has a basis consisting of eigenvectors of \(M\text{,}\) we know \(M\) is diagonalizable.