
Linear Algebra

Module 14 Determinants

In this module you will learn
  • The definition of the determinant of a linear transformation and of a matrix.
  • How to interpret the determinant as a change-of-volume factor.
  • How to relate the determinant of \(S\circ T\) to the determinant of \(S\) and of \(T\text{.}\)
  • How to compute the determinants of elementary matrices and how to compute determinants of large matrices using row reduction.
Linear transformations transform vectors, but they also change sets.
Figure 14.0.1.
It turns out to be particularly useful to track by how much a linear transformation changes area/volume. This number (which is associated with a linear transformation with the same domain and codomain) is called the determinant. (Strictly speaking, this number is almost the determinant; the only difference is that the determinant might have a \(\pm\) in front.)

Section 14.1 Volumes

In this module, most examples will be in \(\R^{2}\) because they’re easier to draw. The definitions given extend to \(\R^{n}\) for any \(n\text{;}\) however, we need to establish some conventions to properly express these ideas in English. In English, we say that a two-dimensional figure has an area and a three-or-higher-dimensional figure has a volume. In this section, we will use the term volume to also mean area where appropriate.
To measure how volume changes, we need to compare input volumes and output volumes. The easiest volume to compute is that of the unit \(n\)-cube, which has a special notation.

Definition 14.1.1. Unit \(n\)-cube.

The unit \(n\)-cube is the \(n\)-dimensional cube with sides given by the standard basis vectors and lower-left corner located at the origin. That is
\begin{equation*} C_{n}=\Set{\vec x\in\R^n:\vec x=\sum_{i=1}^n\alpha_i\vec e_i\text{ for some }\alpha_1,\ldots,\alpha_n\in[0,1]}=[0,1]^{n}. \end{equation*}
\(C_{2}\) should look familiar as the unit square in \(\R^{2}\) with lower-left corner at the origin.
Figure 14.1.2.
\(C_{n}\) always has volume \(1\) (indeed, the fact that the volume of \(C_{n}\) is \(1\) is true by definition), and by analyzing the image of \(C_{n}\) under a linear transformation, we can see by how much a given transformation changes volume.

Example 14.1.3.

Let \(\mathcal{T}:\R^{2}\to\R^{2}\) be defined by \(\mathcal{T}\mat{x\\y}=\matc{2x-y\\x+\tfrac{1}{2}y}\text{.}\) Find the volume of \(\mathcal{T}(C_{2})\text{.}\)

Solution.

Recall that \(C_{2}\) is the unit square in \(\R^{2}\) with sides given by \(\xhat = \mat{1\\0}\) and \(\yhat = \mat{0\\1}\text{.}\) Applying the linear transformation \(\mathcal{T}\) to \(\xhat\) and \(\yhat\text{,}\) we obtain
\begin{equation*} \mathcal{T}(\xhat)=\mat{2\\1}\qquad\text{and}\qquad \mathcal{T}(\yhat)=\mat{-1\\\frac{1}{2}}. \end{equation*}
Plotting \(\mat{2\\1}\) and \(\mat{-1\\\frac{1}{2}}\text{,}\) we see \(\mathcal{T}(C_{2})\) is a parallelogram with base \(\sqrt{5}\) and height \(\frac{2\sqrt{5}}{5}\text{.}\)
Figure 14.1.4.
Therefore, the volume of \(\mathcal{T}(C_{2})\) is \(\sqrt{5}\cdot\frac{2\sqrt{5}}{5}=2\text{.}\)
Let \(\Vol(X)\) stand for the volume of the set \(X\text{.}\) Given a linear transformation \(\mathcal{S}:\R^{n}\to\R^{n}\text{,}\) we can define a number
\begin{equation*} \VolChange(\mathcal{S})=\frac{\Vol(\mathcal{S}(C_{n}))}{\Vol(C_{n})}=\frac{\Vol(\mathcal{S}(C_{n}))}{1}=\Vol(\mathcal{S}(C_{n})). \end{equation*}
A priori, \(\VolChange(\mathcal{S})\) only describes how \(\mathcal{S}\) changes the volume of \(C_{n}\text{.}\) However, because \(\mathcal{S}\) is a linear transformation, \(\VolChange(\mathcal{S})\) actually describes how \(\mathcal{S}\) changes the volume of any figure.
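As a quick numeric sanity check of the first example (a sketch in Python, not part of the text; `shoelace_area` and `T` are hypothetical helper names), we can compute \(\Vol(\mathcal{T}(C_{2}))\) directly from the vertices of the image parallelogram:

```python
def shoelace_area(verts):
    """Area of a polygon given its vertices in order (shoelace formula)."""
    n = len(verts)
    s = sum(verts[i][0] * verts[(i + 1) % n][1] - verts[i][1] * verts[(i + 1) % n][0]
            for i in range(n))
    return abs(s) / 2

def T(v):
    """The map from Example 14.1.3: T(x, y) = (2x - y, x + y/2)."""
    x, y = v
    return (2 * x - y, x + y / 2)

# The image of the unit square C_2 is the parallelogram with vertices
# 0, T(e1), T(e1) + T(e2), T(e2), walked around the boundary.
v1, v2 = T((1, 0)), T((0, 1))
parallelogram = [(0, 0), v1, (v1[0] + v2[0], v1[1] + v2[1]), v2]
print(shoelace_area(parallelogram))  # → 2.0
```

This agrees with the base-times-height computation in the example.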
A full proof of the above theorem requires calculus and limits, but the linear algebra ideas are based on the following theorems.

Proof.

Fix \(\mathcal{T}:\R^{n}\to\R^{n}\text{,}\) \(X\subseteq \R^{n}\text{,}\) and \(\vec p\in \R^{n}\text{.}\) Combining linearity with the definition of set addition, we see
\begin{equation*} \mathcal{T}(X+\Set{\vec p}) = \mathcal{T}(X)+\mathcal{T}(\Set{\vec p}) = \mathcal{T}(X)+\Set{\mathcal{T}(\vec p)} \end{equation*}
and so \(\mathcal{T}(X+\Set{\vec p})\) is just a translation of \(\mathcal{T}(X)\text{.}\) Since translations don’t change volume, \(\mathcal{T}(X+\Set{\vec p})\) and \(\mathcal{T}(X)\) must have the same volume.
Rather than giving a formal proof of the above theorem, let’s make a motivating picture.
Figure 14.1.8.
The argument now goes: there are \(k^{n}\) copies of \(B_{n}\) (a copy of \(C_{n}\) shrunk by a factor of \(k\) in each direction) in \(C_{n}\) and \(k^{n}\) copies of \(\mathcal{T}(B_{n})\) in \(\mathcal{T}(C_{n})\text{.}\) Thus,
\begin{equation*} \VolChange(\mathcal{T}) =\frac{\Vol(\mathcal{T}(C_{n}))}{\Vol(C_{n})}=\frac{k^{n}\Vol(\mathcal{T}(B_{n}))}{k^{n}\Vol(B_{n})}=\frac{\Vol(\mathcal{T}(B_{n}))}{\Vol(B_{n})}. \end{equation*}
Now we can finally show that for a linear transformation \(\mathcal{T}:\R^{n}\to\R^{n}\text{,}\) the number “\(\VolChange(\mathcal{T})\)” actually corresponds to how much \(\mathcal{T}\) changes the volume of any figure.
The argument goes as follows: for a figure \(X\subseteq \R^{n}\text{,}\) we can fill it with shrunken and translated copies, \(B_{n}\text{,}\) of \(C_{n}\text{.}\) The same number of copies of \(\mathcal{T}(B_{n})\) fit inside \(\mathcal{T}(X)\) as do \(B_{n}\)’s fit inside \(X\text{.}\) Therefore, the change in volume between \(\mathcal{T}(X)\) and \(X\) must be the same as the change in volume between \(\mathcal{T}(B_{n})\) and \(B_{n}\text{,}\) which is \(\VolChange(\mathcal{T})\text{.}\)
Figure 14.1.9.

Section 14.2 The Determinant

The determinant of a linear transformation \(\mathcal{T}:\R^{n}\to\R^{n}\) is almost the same as \(\VolChange(\mathcal{T})\text{,}\) but with one twist: orientation.

Definition 14.2.1. Determinant.

The determinant of a linear transformation \(\mathcal{T}:\R^{n}\to \R^{n}\text{,}\) denoted \(\det(\mathcal{T})\) or \(\Abs{\mathcal{T}}\), is the oriented volume of the image of the unit \(n\)-cube. The determinant of a square matrix is the determinant of its induced transformation.
We need to understand what the term oriented volume means. We’ve previously defined the orientation of a basis, and we can use the orientation of a basis to define whether a linear transformation is orientation preserving or orientation reversing.

Definition 14.2.2. Orientation Preserving Linear Transformation.

Let \(\mathcal{T}:\R^{n}\to\R^{n}\) be a linear transformation. We say \(\mathcal{T}\) is orientation preserving if the ordered basis \(\Set{\mathcal{T}(\vec e_1),\ldots, \mathcal{T}(\vec e_n)}\) is positively oriented and we say \(\mathcal{T}\) is orientation reversing if the ordered basis \(\Set{\mathcal{T}(\vec e_1),\ldots, \mathcal{T}(\vec e_n)}\) is negatively oriented. If \(\Set{\mathcal{T}(\vec e_1),\ldots, \mathcal{T}(\vec e_n)}\) is not a basis for \(\R^{n}\text{,}\) then \(\mathcal{T}\) is neither orientation preserving nor orientation reversing.
Figure 14.2.3.
Figure 14.2.4.
In the figure above, \(\mathcal{T}\) is orientation preserving and \(\mathcal{S}\) is orientation reversing.
For an arbitrary linear transformation \(\mathcal{Q}:\R^{n}\to\R^{n}\) and a set \(X\subseteq\R^{n}\text{,}\) we define the oriented volume of \(\mathcal{Q}(X)\) to be \(+\Vol\mathcal{Q}(X)\) if \(\mathcal{Q}\) is orientation preserving and \(-\Vol\mathcal{Q}(X)\) if \(\mathcal{Q}\) is orientation reversing.

Example 14.2.5.

Let \(\mathcal{T}:\R^{2}\to\R^{2}\) be defined by \(\mathcal{T}\mat{x\\y}=\matc{2x-y\\x+\tfrac{1}{2}y}\text{.}\) Find \(\det(\mathcal{T})\text{.}\)

Solution.

This is the same \(\mathcal{T}\) as from the previous example where we computed \(\Vol\mathcal{T}(C_{2})=2\text{.}\) Since \(\mathcal{T}\) is orientation preserving, we conclude that \(\det(\mathcal{T})=2\text{.}\)

Example 14.2.6.

Let \(\mathcal{S}:\R^{2}\to\R^{2}\) be defined by \(\mathcal{S}\mat{x\\y}=\mat{-x+y\\x+y}\text{.}\) Find \(\det(\mathcal{S})\text{.}\)

Solution.

By drawing a picture, we see that \(\mathcal{S}(C_{2})\) is a square and \(\Vol\mathcal{S}(C_{2})=2\text{.}\) However, \(\mathcal{S}(\xhat) = \mat{-1\\1}\) and \(\mathcal{S}(\yhat) = \mat{1\\1}\) form a negatively oriented basis, and so \(\mathcal{S}\) is orientation reversing. Therefore, \(\det(\mathcal{S}) = - \Vol\mathcal{S}(C_{2}) = -2\text{.}\)
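For \(2\times 2\) examples like this one, the oriented volume can be checked with the 2D cross product: the signed area of the parallelogram with sides \((a,c)\) and \((b,d)\) is \(ad-bc\text{,}\) positive exactly when the pair is positively oriented. A minimal sketch (the helper name is ours, not the text’s):

```python
def signed_area(v1, v2):
    """Signed area of the parallelogram with sides v1 and v2 (2D cross
    product): positive when (v1, v2) is positively oriented, else negative."""
    return v1[0] * v2[1] - v1[1] * v2[0]

# S from Example 14.2.6: S(x, y) = (-x + y, x + y).
Sx, Sy = (-1, 1), (1, 1)  # images of x-hat and y-hat
print(signed_area(Sx, Sy))  # → -2
```

The negative sign records that \(\mathcal{S}\) is orientation reversing, matching \(\det(\mathcal{S})=-2\text{.}\)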

Example 14.2.7.

Let \(\mathcal{P}:\R^{2}\to\R^{2}\) be projection onto the line with equation \(x+2y=0\text{.}\) Find \(\det(\mathcal{P})\text{.}\)

Solution.

Because \(\mathcal{P}\) projects everything to a line, we know \(\mathcal{P}(C_{2})\) must be a line segment and therefore has volume zero. Thus \(\det(\mathcal{P})=0\text{.}\)

Section 14.3 Determinants of Composition

Volume changes are naturally multiplicative. If a linear transformation \(\mathcal{T}\) changes volume by a factor of \(\alpha\) and \(\mathcal{S}\) changes volume by a factor of \(\beta\text{,}\) then \(\mathcal{S}\circ \mathcal{T}\) changes volume by a factor of \(\beta\alpha\text{.}\) Thus, determinants must also be multiplicative. (To fully argue this, we need to show that the composition of two orientation-reversing transformations is orientation preserving.)
Figure 14.3.1.
This means that we can compute the determinant of a complicated transformation by breaking it up into simpler ones and computing the determinant of each piece.

Section 14.4 Determinants of Matrices

The determinant of a matrix is defined as the determinant of its induced transformation. This means the determinant is multiplicative with respect to matrix multiplication (because it is multiplicative with respect to function composition).
We will derive an algorithm for finding the determinant of a matrix by considering the determinant of elementary matrices. But first, consider the following theorem.

Proof.

Let \(M\) be an \(n\times n\) matrix and let \(\mathcal{T}_{M}\) be its induced transformation. We know the sides of \(\mathcal{T}_{M}(C_{n})\) are given by \(\Set{\mathcal{T}_{M}(\vec e_1),\ldots,\mathcal{T}_{M}(\vec e_n)}\text{.}\) And, by definition,
\begin{equation*} [\mathcal{T}_{M}(\vec e_{i})]_{\mathcal{E}}= M[\vec e_{i}]_{\mathcal{E}}= \text{ $i$th column of $M$.} \end{equation*}
Therefore \(\mathcal{T}_{M}(C_{n})\) is the parallelepiped whose sides are given by the columns of \(M\text{.}\)
This means we can think about the determinant of a matrix by considering its columns. Now we are ready to consider the determinants of the elementary matrices!
There are three types of elementary matrices corresponding to the three elementary row operations. For each one, we need to understand how the induced transformation changes volume.
Multiply a row by a non-zero constant \(\alpha\text{.}\) Let \(E_{m}\) be such an elementary matrix. Scaling one row of \(I\) is equivalent to scaling one column of \(I\text{,}\) and so the columns of \(E_{m}\) specify a parallelepiped that is scaled by \(\alpha\) in one direction.
For example, if
\begin{equation*} E_{m}=\mat{1&0&0\\0&1&0\\0&0&\alpha}\qquad\text{then}\qquad \Set{\vec e_1,\vec e_2,\vec e_3}\mapsto\Set{\vec e_1,\vec e_2,\alpha\vec e_3}. \end{equation*}
Thus \(\det(E_{m})=\alpha\text{.}\)
Swap two rows. Let \(E_{s}\) be such an elementary matrix. Swapping two rows of \(I\) is equivalent to swapping two columns of \(I\text{,}\) so \(E_{s}\) is \(I\) with two columns swapped. This reverses the orientation of the basis given by the columns.
For example, if
\begin{equation*} E_{s}=\mat{0&1&0\\1&0&0\\0&0&1}\qquad\text{then}\qquad \Set{\vec e_1,\vec e_2,\vec e_3}\mapsto\Set{\vec e_2,\vec e_1,\vec e_3}. \end{equation*}
Thus \(\det(E_{s})=-1\text{.}\)
Add a multiple of one row to another. Let \(E_{a}\) be such an elementary matrix. The columns of \(E_{a}\) are the same as the columns of \(I\) except for one column, where \(\vec e_{i}\) is replaced with \(\vec e_{i}+\alpha\vec e_{j}\text{.}\) This has the effect of shearing \(C_{n}\) in the \(\vec e_{j}\) direction.
Figure 14.4.3.
Figure 14.4.4.
Since \(C_{n}\) is sheared in a direction parallel to one of its other sides, its volume is not changed. Thus \(\det(E_{a})=1\text{.}\)

Takeaway 14.4.5.

The determinants of elementary matrices are all easy to compute and the determinant of the most-used type of elementary matrix is \(1\text{.}\)
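Assuming NumPy is available, the three determinant values derived above can be spot-checked numerically on sample \(3\times 3\) elementary matrices (a sketch, not part of the text):

```python
import numpy as np

alpha = 5.0
# Scale one row by alpha:
E_m = np.diag([1.0, 1.0, alpha])
# Swap two rows:
E_s = np.array([[0.0, 1, 0], [1, 0, 0], [0, 0, 1]])
# Add 2 times row 1 to row 3:
E_a = np.array([[1.0, 0, 0], [0, 1, 0], [2, 0, 1]])

for E in (E_m, E_s, E_a):
    print(round(np.linalg.det(E), 6))
# → 5.0, then -1.0, then 1.0
```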
Now, by decomposing a matrix into the product of elementary matrices, we can use the multiplicative property of the determinant (and the formulas for the determinants of the different types of elementary matrices) to compute the determinant of an invertible matrix.

Example 14.4.6.

Use elementary matrices to find the determinant of \(A=\mat{1&2\\3&4}\text{.}\)

Solution.

We can row-reduce \(A\) with the following steps.
\begin{equation*} \mat{1&2\\3&4}\to \mat{1&2\\0&-2}\to \mat{1&2\\0&1}\to \mat{1&0\\0&1}. \end{equation*}
The elementary matrices corresponding to these steps are
\begin{equation*} E_{1}=\mat{1&0\\-3&1}\qquad E_{2}=\mat{1&0\\0&-\frac{1}{2}}\qquad\text{and}\qquad E_{3}=\mat{1&-2\\0&1}, \end{equation*}
and so \(E_{3} E_{2} E_{1} A = I\text{.}\) Therefore
\begin{equation*} A=E_{1}^{-1}E_{2}^{-1}E_{3}^{-1}I=E_{1}^{-1}E_{2}^{-1}E_{3}^{-1}= \mat{1&0\\3&1}\mat{1&0\\0&-2}\mat{1&2\\0&1}. \end{equation*}
Using the fact that the determinant is multiplicative, we get
\begin{align*} \det(A)&=\det\left(\mat{1&0\\3&1}\mat{1&0\\0&-2}\mat{1&2\\0&1}\right)\\ &=\det\left(\mat{1&0\\3&1}\right)\det\left(\mat{1&0\\0&-2}\right)\det\left(\mat{1&2\\0&1}\right)\\ &=(1)(-2)(1) = -2. \end{align*}
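Both the factorization and the determinant computation from this example can be verified numerically (a sketch assuming NumPy; variable names are ours):

```python
import numpy as np

A = np.array([[1.0, 2], [3, 4]])
# The inverse elementary matrices found in the example.
E1_inv = np.array([[1.0, 0], [3, 1]])
E2_inv = np.array([[1.0, 0], [0, -2]])
E3_inv = np.array([[1.0, 2], [0, 1]])

print(np.allclose(E1_inv @ E2_inv @ E3_inv, A))  # → True
print(round(np.linalg.det(A), 6))                # → -2.0
```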

Section 14.5 Determinants and Invertibility

We can use elementary matrices to compute the determinant of any invertible matrix by decomposing it into the product of elementary matrices. But, what about non-invertible matrices?
Let \(M\) be an \(n\times n\) matrix that is not invertible. Then, we must have \(\Nullity(M)>0\) and \(\Dim(\Col(M))=\Rank(M)<n\text{.}\) Geometrically, this means there is at least one line of vectors, \(\Null(M)\text{,}\) that gets collapsed to \(\vec 0\text{,}\) and the column space of \(M\) must be “flattened” (i.e., it has lost a dimension). Therefore, the volume of the parallelepiped given by the columns of \(M\) must be zero, and so \(\det(M)=0\text{.}\)
Based on this argument, we have the following theorem.

Proof.

If \(A\) is invertible, \(A=E_{1}\cdots E_{k}\text{,}\) where \(E_{1},\ldots,E_{k}\) are elementary matrices, and so
\begin{equation*} \det(A)=\det(E_{1}\cdots E_{k}) = \det(E_{1})\cdots \det(E_{k}). \end{equation*}
All elementary matrices have non-zero determinants, and so \(\det(A)\neq 0\text{.}\)
Conversely, if \(A\) is not invertible, \(\Rank(A)<n\text{,}\) which means the parallelepiped given by the columns of \(A\) is “flattened” and has zero volume.
We now have another way to tell if a matrix is invertible! But, for an invertible matrix \(A\text{,}\) how do \(\det(A)\) and \(\det(A^{-1})\) relate? Well, by definition
\begin{equation*} AA^{-1}=I, \end{equation*}
and so
\begin{equation*} \det(AA^{-1})=\det(A)\det(A^{-1})= \det(I)=1, \end{equation*}
which gives
\begin{equation*} \det(A^{-1})=\frac{1}{\det(A)}. \end{equation*}
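This reciprocal relationship is easy to check numerically (a sketch assuming NumPy; the matrix is an arbitrary invertible example, not from the text):

```python
import numpy as np

A = np.array([[2.0, 3], [1, 5]])  # an arbitrary invertible matrix
d = np.linalg.det(A)
d_inv = np.linalg.det(np.linalg.inv(A))
print(round(d, 6), round(d_inv, 6), round(d * d_inv, 6))  # → 7.0 0.142857 1.0
```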

Section 14.6 Determinants and Transposes

Somewhat mysteriously, we have the following theorem.
Volume Theorem II can be concisely stated as \(\det(A)=\det(A^{T})\text{,}\) and joins other strange transpose-related facts (like \(\Rank(A)=\Rank(A^{T})\)).
We can prove Volume Theorem II using elementary matrices.

Proof.

Suppose \(A\) is not invertible. Then, neither is \(A^{T}\) and so \(\det(A)=\det(A^{T})=0\text{.}\)
Suppose \(A\) is invertible and \(A=E_{1}\cdots E_{k}\) where \(E_{1},\ldots, E_{k}\) are elementary matrices. We then have
\begin{equation*} A^{T} = E_{k}^{T}\cdots E_{1}^{T}, \end{equation*}
which follows from the fact that the transpose reverses the order of matrix multiplication (i.e., \((XY)^{T}=Y^{T}X^{T}\)). However, for each \(E_{i}\text{,}\) we may observe that \(E_{i}^{T}\) is another elementary matrix of the same type and with the same determinant. Therefore,
\begin{align*} \det(A^{T}) = \det(E_{k}^{T}\cdots E_{1}^{T})&=\det(E_{k}^{T})\cdots \det(E_{i}^{T})\\ &= \det(E_{k})\cdots \det(E_{1})\\ &= \det(E_{1})\cdots \det(E_{k}) = \det(E_{1}\cdots E_{k})=\det(A). \end{align*}
The key observations for this proof are that (i) \(\det(E_{i}^{T})=\det(E_{i})\) and (ii) since the \(\det(E_{i})\)’s are just scalars, the order in which they are multiplied doesn’t matter.
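Volume Theorem II can also be spot-checked numerically on a random matrix (a sketch assuming NumPy; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # a random 4x4 matrix
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))  # → True
```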

Exercises 14.7 Exercises

1.

Let \(\mathcal{T}:\R^{2}\to\R^{2}\) be defined by \(\mathcal{T}\mat{x\\y}=\matc{3x-y\\x-\tfrac{1}{4}y}\text{.}\) Find the volume of \(\mathcal{T}(C_{2})\text{.}\)
Solution.
The volume of \(\mathcal{T}(C_{2})\) is equal to the absolute value of the determinant of \(\mathcal{T}\text{.}\) We have that
\begin{equation*} [\mathcal{T}]_{\mathcal{E}}= \mat{ 3 & - 1 \\ 1 & -1/4 }, \end{equation*}
so \(\det \mathcal{T} = -3/4 + 1 = 1/4\text{.}\) Since this number is positive, it is also the desired volume.

2.

Let \(\mathcal{S}:\R^{3}\to\R^{3}\) be defined by \(\mathcal{S}\mat{x\\y\\z}=\matc{2x+y+z\\x-\tfrac{1}{2}y\\z}\text{.}\) Find the volume of \(\mathcal{S}(C_{3})\text{.}\)
Solution.
We start by computing the determinant of \(\mathcal{S}\text{.}\) The determinant of \(\mathcal{S}\) can be computed from \([\mathcal{S}]_{\mathcal{E}}\text{,}\) which is given by
\begin{equation*} [\mathcal{S}]_{\mathcal{E}}= \mat{ 2 & 1 & 1 \\ 1 & -1/2 & 0 \\ 0 & 0 & 1 }. \end{equation*}
Since determinant is preserved by row operations of the form “add a multiple of one row to another”, we can partially row reduce \([\mathcal{S}]_{\mathcal{E}}\) (using only that row operation) without changing the determinant. Thus, the determinant of \([\mathcal{S}]_{\mathcal{E}}\) is the same as the determinant of
\begin{equation*} \mat{ 2 & 1 & 1 \\ 0 & -1 & -1/2 \\ 0 & 0 & 1 }. \end{equation*}
This matrix is triangular, so the determinant is just the product of the entries on the diagonal. Therefore, \(\det [\mathcal{S}]_{\mathcal{E}}= -2\text{.}\) But volume is non-negative, so the volume of \(\mathcal{S}(C_{3})\) is \(2\text{.}\)
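The strategy used in this solution, row reducing while tracking how each operation scales the determinant, can be sketched as a small routine (`det_by_row_reduction` is a hypothetical helper, not from the text):

```python
def det_by_row_reduction(rows):
    """Determinant via Gaussian elimination: swapping rows flips the sign,
    adding a multiple of one row to another changes nothing, and the
    determinant is then the sign times the product of the pivots."""
    M = [list(map(float, r)) for r in rows]
    n, sign = len(M), 1.0
    for j in range(n):
        p = next((i for i in range(j, n) if M[i][j] != 0), None)
        if p is None:
            return 0.0              # no pivot in this column: not invertible
        if p != j:
            M[j], M[p] = M[p], M[j]
            sign = -sign            # each swap contributes a factor of -1
        for i in range(j + 1, n):
            c = M[i][j] / M[j][j]
            M[i] = [a - c * b for a, b in zip(M[i], M[j])]
    det = sign
    for j in range(n):
        det *= M[j][j]
    return det

# [S]_E from this exercise:
print(det_by_row_reduction([[2, 1, 1], [1, -0.5, 0], [0, 0, 1]]))  # → -2.0
```

Taking the absolute value of the result gives the volume, \(2\text{.}\)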

3.

Let \(\mathcal{T}:\R^{2}\to\R^{2}\) be defined by \(\mathcal{T}\mat{x\\y}=\matc{x+2y\\-x-y}\text{.}\)
  1. Draw \(\mathcal{E}\) and \(\mathcal{T}(\mathcal{E})\) and then determine whether \(\mathcal{T}\) is orientation preserving or orientation reversing.
  2. Find \(\det(\mathcal{T})\text{.}\)
Solution.
  1. Computing, we see \(\mathcal{T}(\xhat) = \mat{1\\-1}\) and \(\mathcal{T}(\yhat)=\mat{2\\-1}\text{.}\) Drawing these two vectors, we see that \(\mathcal{T}(\xhat),\mathcal{T}(\yhat)\) can be continuously transformed back into \(\xhat,\yhat\) while staying linearly independent the whole time. Therefore \(\mathcal{T}\) is orientation preserving.
  2. \(\det \mathcal{T}\) is equal to the determinant of the matrix
    \begin{equation*} [\mathcal{T}]_{\mathcal{E}}=\mat{ 1 & 2 \\ -1 & -1 }, \end{equation*}
    which is \(1\text{.}\)

4.

For each linear transformation defined below, find its determinant.
  1. \(\mathcal{S}:\R^{2}\to\R^{2}\text{,}\) where \(\mathcal{S}\) shortens every vector by a factor of \(\tfrac{2}{3}\text{.}\)
  2. \(\mathcal{R}:\R^{2}\to\R^{2}\text{,}\) where \(\mathcal{R}\) is rotation counter-clockwise by \(90^{\circ}\text{.}\)
  3. \(\mathcal{F}:\R^{2}\to\R^{2}\text{,}\) where \(\mathcal{F}\) is reflection across the line \(y=-x\text{.}\)
  4. \(\mathcal{G}:\R^{2}\to\R^{2}\text{,}\) where \(\mathcal{G}(\vec x)=\mathcal{P}(\vec x)+ \mathcal{Q}(\vec x)\) and where \(\mathcal{P}\) is projection onto the line \(y=x\) and \(\mathcal{Q}\) is projection onto the line \(y=-\tfrac{1}{2}x\text{.}\)
  5. \(\mathcal{T}:\R^{3}\to\R^{3}\text{,}\) where \(\mathcal{T}\mat{x\\y\\z}=\matc{x-y+z\\z+x-\tfrac{1}{3}y\\z}\text{.}\)
  6. \(\mathcal{J}:\R^{3}\to\R^{3}\text{,}\) where \(\mathcal{J}\mat{x\\y\\z}=\matc{0\\0\\x+y+z}\text{.}\)
  7. \(\mathcal{K}\circ \mathcal{H}:\R^{2}\to\R^{2}\text{,}\) where \(\mathcal{H}\mat{x\\y}=\matc{x+2y\\-x-y}\text{,}\) and \(\mathcal{K}\mat{x\\y}=\matc{-x-2y\\x+y}\text{.}\)
Solution.
  1. The matrix for \(\mathcal{S}\) in any basis is \(\mat{ 2/3 & 0 \\ 0 & 2/3 }\text{,}\) so the determinant is \(4/9\text{.}\)
  2. \(\mathcal{R}\) does not change volume or orientation so its determinant is \(1\text{.}\)
  3. \(\mathcal{F}\) does not change volume but it reverses orientation so its determinant is \(-1\text{.}\)
  4. Though the determinants of \(\mathcal{P}\) and \(\mathcal{Q}\) are both \(0\text{,}\) the determinant of \(\mathcal{G}\) is not zero! We can compute the standard matrix for \(\mathcal{G}\) by noticing \(\mathcal{G}(\xhat)=\mat{13/10\\1/10}\) and \(\mathcal{G}(\yhat)=\mat{1/10\\7/10}\text{.}\) Therefore
    \begin{equation*} [\mathcal{G}]_{\mathcal{E}}=\mat{13/10&1/10\\1/10&7/10} \end{equation*}
    and so \(\det\mathcal{G}=9/10\text{.}\)
  5. The matrix \([\mathcal{T}]_{\mathcal{E}}\) is given by
    \begin{equation*} [\mathcal{T}]_{\mathcal{E}}= \mat{ 1 & -1 & 1 \\ 1 & -1/3 & 1 \\ 0 & 0 & 1 }. \end{equation*}
    Subtracting the first row from the second gives the matrix
    \begin{equation*} \mat{ 1 & -1 & 1 \\ 0 & 2/3 & 0 \\ 0 & 0 & 1 }, \end{equation*}
    which has the same determinant as \([\mathcal{T}]_{\mathcal{E}}\text{,}\) and since this matrix is upper triangular, its determinant is simply \(2/3\text{.}\)
  6. The map \(\mathcal{J}\) maps every vector in \(\R^{3}\) into \(\Span\Set{\mat{ 0 \\ 0 \\ 1}}\text{,}\) hence \(\mathcal{J}\) is not invertible. Therefore, \(\det \mathcal{J} = 0\text{.}\)
  7. The determinant of the composition of the two maps is just the product of the determinants of the two maps. The matrices for \(\mathcal{H}\) and \(\mathcal{K}\) (with respect to \(\mathcal{E}\)) are
    \begin{equation*} \mat{ 1 & 2 \\ -1 & -1 }\qquad\text{and}\qquad \mat{ -1 & -2 \\ 1 & 1 } \end{equation*}
    and each has determinant one, so the determinant of \(\mathcal{K}\circ\mathcal{H}\) is also \(1\text{.}\)

5.

Let \(A=\mat{2&3\\1&5}\text{.}\)
  1. Use elementary matrices to find \(\det(A)\text{.}\)
  2. Draw a picture of the parallelogram given by the rows of \(A\text{.}\)
  3. Draw a picture of the parallelogram given by the columns of \(A\text{.}\)
  4. How do the areas of the parallelograms drawn in parts 14.7.5.b and 14.7.5.c relate?
Solution.
  1. Put \(E_{1}= \mat{ 1 & 0 \\ -1/2 & 1 }\text{.}\) Then
    \begin{equation*} E_{1}A = \mat{ 2 & 3 \\ 0 & 7/2 }. \end{equation*}
    Put \(E_{2}= \mat{ 1 & 0 \\ 0 & 2/7 }\text{.}\) Then
    \begin{equation*} E_{2}E_{1}A = \mat{ 2 & 3 \\ 0 & 1 }. \end{equation*}
    Put \(E_{3}= \mat{ 1 & -3 \\ 0 & 1 }\text{.}\) Then
    \begin{equation*} E_{3}E_{2}E_{1}A = \mat{ 2 & 0 \\ 0 & 1 }. \end{equation*}
    Finally, put \(E_{4}= \mat{ 1/2 & 0 \\ 0 & 1 }\text{.}\) Then
    \begin{equation*} E_{4}E_{3}E_{2}E_{1}A = \mat{ 1 & 0 \\ 0 & 1 }. \end{equation*}
    Therefore,
    \begin{equation*} \det A = \det E_{1}^{-1}\det E_{2}^{-1}\det E_{3}^{-1}\det E_{4}^{-1}= 7. \end{equation*}
  4. They have the same area.

6.

Let \(A=\mat{1&2&0\\0&2&1\\1&2&3}\text{.}\)
  1. Use elementary matrices to find \(\det(A)\text{.}\)
  2. Find \(\det(A^{-1})\text{.}\)
  3. Find \(\det(A^{T})\text{,}\) and compare your answer with 14.7.6.a. Are they the same? Explain.
Solution.
  1. By row reducing and keeping track of our steps, we see that
    \begin{equation*} E_{5}E_{4}E_{3}E_{2}E_{1} A = I \end{equation*}
    where
    \begin{equation*} E_{1}= \mat{ 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 }\qquad E_{2}= \mat{ 1 & 0 & 0 \\ 0 & 1/2 & 0 \\ 0 & 0 & 1 } \end{equation*}
    \begin{equation*} E_{3}= \mat{ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1/3 }\qquad E_{4}= \mat{ 1 & 0 & 0 \\ 0 & 1 & -1/2 \\ 0 & 0 & 1 } \end{equation*}
    \begin{equation*} E_{5}= \mat{ 1 & -2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 } \end{equation*}
    Therefore \(A=E_{1}^{-1}E_{2}^{-1}E_{3}^{-1}E_{4}^{-1}E_{5}^{-1}\text{.}\) By thinking about the relationship between elementary matrices and determinants, we see that \(\det E_{1}^{-1}=\det E_{4}^{-1}=\det E_{5}^{-1}= 1\) and that \(\det E_{2}^{-1}= 2\) and \(\det E_{3}^{-1}= 3\text{.}\) Therefore \(\det A = 6\text{.}\)
  2. \(\det(A^{-1}) = 1/\det(A) = 1/6\text{.}\)
  3. We have
    \begin{equation*} A^{T}= \mat{ 1 & 0 & 1 \\ 2 & 2 & 2 \\ 0 & 1 & 3 }. \end{equation*}
    Note that
    \begin{equation*} (E_{1}A)^{T}= A^{T}E_{1}^{T}= \mat{ 1 & 0 & 0 \\ 2 & 2 & 0 \\ 0 & 1 & 3 }. \end{equation*}
    This matrix is lower triangular, so the determinant is equal to the product of the diagonal entries, which is still \(6\text{.}\) Further \(\det(E_{1}^{T}) = 1\text{,}\) and so \(\det(A^{T}) = 6\text{.}\)

7.

Let \(A\) be an \(n \times n\) matrix that can be decomposed into the product of elementary matrices.
  1. What is \(\Rank(A)\text{?}\) Justify your answer.
  2. What is \(\Null(A^{-1})\text{?}\) Justify your answer.
Solution.
  1. The rank of \(A\) is \(n\text{.}\) Each elementary matrix has non-zero determinant, so \(A\text{,}\) being a product of elementary matrices, also has non-zero determinant; a square matrix with non-zero determinant is invertible and therefore has full rank.
  2. The nullspace of \(A^{-1}\) is trivial (i.e. equal to \(\Set{\vec 0}\)), since \(A^{-1}\) is invertible.

8.

Anna and Ella are studying the relationship between determinant and volume. In particular, they are studying \(\mathcal{S}:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}\) defined by \(\mathcal{S}\mat{x \\ y \\ z}=\mat{4x \\ 2z \\ 0}\text{,}\) and \(\mathcal{T}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2}\) defined by \(\mathcal{T}\mat{x \\ y \\ z}=\mat{2x \\ 8z}\text{.}\)
For each conversation below, (a) evaluate Anna and Ella’s arguments as correct, mostly correct, or incorrect; (b) point out where each argument makes correct/incorrect statements; (c) give a correct numerical value for the determinant or explain why it doesn’t exist.
  1. Anna says:
    Since the image of \(C_{3}\) under \(\mathcal{S}\) is the parallelepiped generated by \(\mat{4 \\ 0 \\ 0}\text{,}\) \(\mat{0 \\ 0 \\ 0}\text{,}\) and \(\mat{0 \\ 2 \\ 0}\text{,}\) which is a 2-dimensional parallelogram, the volume of \(\mathcal{S}(C_{3})\) is just the area of this parallelogram, which is 8. Thus, \(\det(\mathcal{S})=8\text{.}\)
    Ella says:
    \(\det(\mathcal{S})\) is undefined, because \(\mathcal{S}\) is not invertible.
  2. Anna says:
    Since the image of \(C_{3}\) under \(\mathcal{T}\) is the parallelepiped generated by \(\mat{2 \\ 0}\text{,}\) \(\mat{0 \\ 0}\text{,}\) and \(\mat{0 \\ 8}\text{,}\) which is a parallelogram in \(\mathbb{R}^{2}\text{,}\) the signed volume of \(\mathcal{T}(C_{3})\) is just the signed area of this parallelogram, which is 16. Thus, \(\det(\mathcal{T})=16\text{.}\)
    Ella says:
    \(\det(\mathcal{T})\) is undefined, because \(\det(\mathcal{T})\) is only defined when the domain and codomain of \(\mathcal{T}\) are the same.
Solution.
  1. Anna’s argument is incorrect.
    Reason: Since \(\mathcal{S}\) is a linear transformation on \(\R^{3}\text{,}\) its determinant is given by the signed change of 3-dimensional volume. Anna’s argument is incorrect because she considered the 2-dimensional volume of \(\mathcal{S}(C_{3})\text{.}\)
    Ella’s argument is incorrect.
    Reason: The determinant is defined for all linear transformations from \(\R^{n}\) to \(\R^{n}\text{,}\) no matter whether it is invertible or not.
    Finally, \(\det(\mathcal{S})=0\text{:}\) since \(\mathcal{S}(C_{3})\) is a 2-dimensional object in \(\R^{3}\text{,}\) its 3-dimensional volume is \(0\text{.}\) Therefore, \(\VolChange(\mathcal{S})=0\text{,}\) and we conclude that \(\det(\mathcal{S})=0\text{.}\)
  2. Anna’s argument is incorrect.
    Reason: The determinant function is only defined for linear transformations with same domain and codomain.
    Ella’s argument is correct.
    Finally, \(\det(\mathcal{T})\) is undefined, because the domain and codomain of \(\mathcal{T}\) are not the same.