The definition of the determinant of a linear transformation and of a matrix.
How to interpret the determinant as a change-of-volume factor.
How to relate the determinant of \(S\circ T\) to the determinant of \(S\) and of \(T\text{.}\)
How to compute the determinants of elementary matrices and how to compute determinants of large matrices using row reduction.
Linear transformations transform vectors, but they also change sets.
Figure 14.0.1.
It turns out to be particularly useful to track by how much a linear transformation changes area/volume. This number (which is associated with a linear transformation with the same domain and codomain) is called the determinant. (This number is almost the determinant; the only difference is that the determinant might have a \(\pm\) in front.)
Section 14.1 Volumes
In this module, most examples will be in \(\R^{2}\) because they’re easier to draw. The definitions given extend to \(\R^{n}\) for any \(n\text{;}\) however, we need to establish some conventions to properly express these ideas in English. In English, we say that a two-dimensional figure has an area and a three-or-higher-dimensional figure has a volume. In this section, we will use the term volume to also mean area where appropriate.
To measure how volume changes, we need to compare input volumes and output volumes. The easiest volume to compute is that of the unit \(n\)-cube, which has a special notation.
Definition 14.1.1. Unit \(n\)-cube.
The unit \(n\)-cube is the \(n\)-dimensional cube with sides given by the standard basis vectors and lower-left corner located at the origin. That is
\begin{equation*}
C_{n}=\Set{\vec x\in\R^n:\vec x=\sum_{i=1}^n\alpha_i\vec e_i\text{ for some }\alpha_1,\ldots,\alpha_n\in[0,1]}=[0,1]^{n}.
\end{equation*}
\(C_{2}\) should look familiar as the unit square in \(\R^{2}\) with lower-left corner at the origin.
Figure 14.1.2.
\(C_{n}\) always has volume \(1\) (in fact, the volume of \(C_{n}\) being \(1\) is true by definition), and by analyzing the image of \(C_{n}\) under a linear transformation, we can see by how much a given transformation changes volume.
Example 14.1.3.
Let \(\mathcal{T}:\R^{2}\to\R^{2}\) be defined by \(\mathcal{T}\mat{x\\y}=\matc{2x-y\\x+\tfrac{1}{2}y}\text{.}\) Find the volume of \(\mathcal{T}(C_{2})\text{.}\)
Solution.
Recall that \(C_{2}\) is the unit square in \(\R^{2}\) with sides given by \(\xhat = \mat{1\\0}\) and \(\yhat = \mat{0\\1}\text{.}\) Applying the linear transformation \(\mathcal{T}\) to \(\xhat\) and \(\yhat\text{,}\) we obtain
\begin{equation*}
\mathcal{T}(\xhat)=\mat{2\\1}\qquad\text{and}\qquad\mathcal{T}(\yhat)=\matc{-1\\\tfrac{1}{2}}.
\end{equation*}
Plotting \(\mat{2\\1}\) and \(\mat{-1\\\frac{1}{2}}\text{,}\) we see \(\mathcal{T}(C_{2})\) is a parallelogram with base \(\sqrt{5}\) and height \(\frac{2\sqrt{5}}{5}\text{.}\)
Figure 14.1.4.
Therefore, the volume of \(\mathcal{T}(C_{2})\) is 2.
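The computation above can be double-checked numerically. The following sketch (my own code, not part of the text) uses the fact that the area of a parallelogram with sides \((a_1,a_2)\) and \((b_1,b_2)\) is \(\abs{a_1b_2-a_2b_1}\text{:}\)

```python
# A sketch verifying the example numerically (helper names are my own).
# The image of the unit square under T is the parallelogram with sides
# T(e1) = (2, 1) and T(e2) = (-1, 1/2); its area is |a1*b2 - a2*b1|.
def parallelogram_area(u, v):
    """Area of the parallelogram spanned by 2D vectors u and v."""
    return abs(u[0] * v[1] - u[1] * v[0])

def T(x, y):
    return (2 * x - y, x + 0.5 * y)

print(parallelogram_area(T(1, 0), T(0, 1)))  # 2.0
```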
Let \(\Vol(X)\) stand for the volume of the set \(X\text{.}\) Given a linear transformation \(\mathcal{S}:\R^{n}\to\R^{n}\text{,}\) we can define a number
\begin{equation*}
\VolChange(\mathcal{S})=\frac{\Vol(\mathcal{S}(C_{n}))}{\Vol(C_{n})}=\Vol(\mathcal{S}(C_{n})).
\end{equation*}
A priori, \(\VolChange(\mathcal{S})\) only describes how \(\mathcal{S}\) changes the volume of \(C_{n}\text{.}\) However, because \(\mathcal{S}\) is a linear transformation, \(\VolChange(\mathcal{S})\) actually describes how \(\mathcal{S}\) changes the volume of any figure.
Theorem 14.1.5.
Let \(\mathcal{T}:\R^{n}\to\R^{n}\) be a linear transformation and let \(X\subseteq \R^{n}\) be a subset with volume \(\alpha\text{.}\) Then the volume of \(\mathcal{T}(X)\) is \(\alpha\!\cdot\!\VolChange(\mathcal{T})\text{.}\)
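As a quick numerical sanity check of this theorem (a sketch, not part of the text; the triangle is an arbitrary choice), we can apply the transformation from Example 14.1.3, which changes volume by a factor of \(2\text{,}\) to a triangle of area \(3\) and confirm the image has area \(6\text{:}\)

```python
# Shoelace-style area of a triangle with vertices p, q, r.
def triangle_area(p, q, r):
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (q[1] - p[1]) * (r[0] - p[0])) / 2

def T(p):  # the transformation from Example 14.1.3 (volume change factor 2)
    x, y = p
    return (2 * x - y, x + 0.5 * y)

X = [(0, 0), (3, 0), (1, 2)]   # a triangle with area 3
TX = [T(p) for p in X]         # its image under T
print(triangle_area(*X), triangle_area(*TX))  # 3.0 6.0
```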
A full proof of the above theorem requires calculus and limits, but the linear algebra ideas are based on the following theorems.
Theorem 14.1.6.
Suppose \(\mathcal{T}:\R^{n}\to\R^{n}\) is a linear transformation, \(X\subseteq \R^{n}\) is a subset, and the volume of \(\mathcal{T}(X)\) is \(\alpha\text{.}\) Then for any \(\vec p\in \R^{n}\text{,}\) the volume of \(\mathcal{T}(X+\Set{\vec p})\) is \(\alpha\text{.}\)
Proof.
Fix \(\mathcal{T}:\R^{n}\to\R^{n}\text{,}\) \(X\subseteq \R^{n}\text{,}\) and \(\vec p\in \R^{n}\text{.}\) Combining linearity with the definition of set addition, we see
\begin{equation*}
\mathcal{T}(X+\Set{\vec p})=\mathcal{T}(X)+\Set{\mathcal{T}(\vec p)},
\end{equation*}
and so \(\mathcal{T}(X+\Set{\vec p})\) is just a translation of \(\mathcal{T}(X)\text{.}\) Since translations don’t change volume, \(\mathcal{T}(X+\Set{\vec p})\) and \(\mathcal{T}(X)\) must have the same volume.
Theorem 14.1.7.
Fix \(k\) and let \(B_{n}\) be \(C_{n}\) scaled to have side lengths \(\frac{1}{k}\text{,}\) and let \(\mathcal{T}:\R^{n}\to\R^{n}\) be a linear transformation. Then
\begin{equation*}
\frac{\Vol(\mathcal{T}(B_{n}))}{\Vol(B_{n})}=\VolChange(\mathcal{T}).
\end{equation*}
Rather than giving a formal proof of the above theorem, let’s make a motivating picture.
Figure 14.1.8.
The argument now goes: there are \(k^{n}\) copies of \(B_{n}\) in \(C_{n}\) and \(k^{n}\) copies of \(\mathcal{T}(B_{n})\) in \(\mathcal{T}(C_{n})\text{.}\) Thus,
\begin{equation*}
\frac{\Vol(\mathcal{T}(B_{n}))}{\Vol(B_{n})}=\frac{k^{n}\Vol(\mathcal{T}(B_{n}))}{k^{n}\Vol(B_{n})}=\frac{\Vol(\mathcal{T}(C_{n}))}{\Vol(C_{n})}=\VolChange(\mathcal{T}).
\end{equation*}
Now we can finally show that for a linear transformation \(\mathcal{T}:\R^{n}\to\R^{n}\text{,}\) the number “\(\VolChange(\mathcal{T})\)” actually corresponds to the factor by which \(\mathcal{T}\) changes the volume of any figure.
The argument goes as follows: for a figure \(X\subseteq \R^{n}\text{,}\) we can fill it with shrunken and translated copies, \(B_{n}\text{,}\) of \(C_{n}\text{.}\) The same number of copies of \(\mathcal{T}(B_{n})\) fit inside \(\mathcal{T}(X)\) as do \(B_{n}\)’s fit inside \(X\text{.}\) Therefore, the change in volume between \(\mathcal{T}(X)\) and \(X\) must be the same as the change in volume between \(\mathcal{T}(B_{n})\) and \(B_{n}\text{,}\) which is \(\VolChange(\mathcal{T})\text{.}\)
Figure 14.1.9.
Section 14.2 The Determinant
The determinant of a linear transformation \(\mathcal{T}:\R^{n}\to\R^{n}\) is almost the same as \(\VolChange(\mathcal{T})\text{,}\) but with one twist: orientation.
Definition 14.2.1. Determinant.
The determinant of a linear transformation \(\mathcal{T}:\R^{n}\to \R^{n}\text{,}\) denoted \(\det(\mathcal{T})\) or \(\Abs{\mathcal{T}}\), is the oriented volume of the image of the unit \(n\)-cube. The determinant of a square matrix is the determinant of its induced transformation.
We need to understand what the term oriented volume means. We’ve previously defined the orientation of a basis, and we can use the orientation of a basis to define whether a linear transformation is orientation preserving or orientation reversing.
Definition 14.2.2. Orientation Preserving Linear Transformation.
Let \(\mathcal{T}:\R^{n}\to\R^{n}\) be a linear transformation. We say \(\mathcal{T}\) is orientation preserving if the ordered basis \(\Set{\mathcal{T}(\vec e_1),\ldots, \mathcal{T}(\vec e_n)}\) is positively oriented and we say \(\mathcal{T}\) is orientation reversing if the ordered basis \(\Set{\mathcal{T}(\vec e_1),\ldots, \mathcal{T}(\vec e_n)}\) is negatively oriented. If \(\Set{\mathcal{T}(\vec e_1),\ldots, \mathcal{T}(\vec e_n)}\) is not a basis for \(\R^{n}\text{,}\) then \(\mathcal{T}\) is neither orientation preserving nor orientation reversing.
Figure 14.2.3.
Figure 14.2.4.
In the figure above, \(\mathcal{T}\) is orientation preserving and \(\mathcal{S}\) is orientation reversing.
For an arbitrary linear transformation \(\mathcal{Q}:\R^{n}\to\R^{n}\) and a set \(X\subseteq\R^{n}\text{,}\) we define the oriented volume of \(\mathcal{Q}(X)\) to be \(+\Vol\mathcal{Q}(X)\) if \(\mathcal{Q}\) is orientation preserving and \(-\Vol\mathcal{Q}(X)\) if \(\mathcal{Q}\) is orientation reversing.
Example 14.2.5.
Let \(\mathcal{T}:\R^{2}\to\R^{2}\) be defined by \(\mathcal{T}\mat{x\\y}=\matc{2x-y\\x+\tfrac{1}{2}y}\text{.}\) Find \(\det(\mathcal{T})\text{.}\)
Solution.
This is the same \(\mathcal{T}\) as from the previous example where we computed \(\Vol\mathcal{T}(C_{2})=2\text{.}\) Since \(\mathcal{T}\) is orientation preserving, we conclude that \(\det(\mathcal{T})=2\text{.}\)
Example 14.2.6.
Let \(\mathcal{S}:\R^{2}\to\R^{2}\) be defined by \(\mathcal{S}\mat{x\\y}=\mat{-x+y\\x+y}\text{.}\) Find \(\det(\mathcal{S})\text{.}\)
Solution.
By drawing a picture, we see that \(\mathcal{S}(C_{2})\) is a square and \(\Vol\mathcal{S}(C_{2})=2\text{.}\) However, \(\mathcal{S}(\xhat) = \mat{-1\\1}\) and \(\mathcal{S}(\yhat) = \mat{1\\1}\) form a negatively oriented basis, and so \(\mathcal{S}\) is orientation reversing. Therefore, \(\det(\mathcal{S}) = - \Vol\mathcal{S}(C_{2}) = -2\text{.}\)
Example 14.2.7.
Let \(\mathcal{P}:\R^{2}\to\R^{2}\) be projection onto the line with equation \(x+2y=0\text{.}\) Find \(\det(\mathcal{P})\text{.}\)
Solution.
Because \(\mathcal{P}\) projects everything to a line, we know \(\mathcal{P}(C_{2})\) must be a line segment and therefore has volume zero. Thus \(\det(\mathcal{P})=0\text{.}\)
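The three examples above can be spot-checked numerically: the signed area of the parallelogram with sides \(\mathcal{Q}(\vec e_1)\) and \(\mathcal{Q}(\vec e_2)\) is the \(2\times 2\) determinant formula \(a_1b_2-a_2b_1\text{.}\) In this sketch (my own code, not part of the text), the images of \(\xhat,\yhat\) under \(\mathcal{T}\) are taken from Example 14.1.3, and the images under the projection \(\mathcal{P}\) are computed from the standard projection formula \(\operatorname{proj}_{\vec d}(\vec v)=\frac{\vec v\cdot\vec d}{\vec d\cdot\vec d}\vec d\) with \(\vec d=\mat{2\\-1}\text{:}\)

```python
from fractions import Fraction as F  # exact rational arithmetic

def det2(u, v):
    """Signed area of the parallelogram spanned by 2D vectors u and v."""
    return u[0] * v[1] - u[1] * v[0]

# Example 14.2.5: T(e1) = (2, 1), T(e2) = (-1, 1/2)  ->  det = 2
print(det2((F(2), F(1)), (F(-1), F(1, 2))))            # 2
# Example 14.2.6: S(e1) = (-1, 1), S(e2) = (1, 1)     ->  det = -2
print(det2((F(-1), F(1)), (F(1), F(1))))               # -2
# Example 14.2.7: projecting e1, e2 onto x + 2y = 0 gives parallel
# vectors (4/5, -2/5) and (-2/5, 1/5), a degenerate parallelogram
print(det2((F(4, 5), F(-2, 5)), (F(-2, 5), F(1, 5))))  # 0
```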
Section 14.3 Determinants of Composition
Volume changes are naturally multiplicative. If a linear transformation \(\mathcal{T}\) changes volume by a factor of \(\alpha\) and \(\mathcal{S}\) changes volume by a factor of \(\beta\text{,}\) then \(\mathcal{S}\circ \mathcal{T}\) changes volume by a factor of \(\beta\alpha\text{.}\) Thus, determinants must also be multiplicative (to fully argue this, we need to show that the composition of two orientation-reversing transformations is orientation preserving).
Figure 14.3.1.
Theorem 14.3.2.
Let \(\mathcal{T}:\R^{n}\to\R^{n}\) and \(\mathcal{S}:\R^{n}\to\R^{n}\) be linear transformations. Then
\begin{equation*}
\det(\mathcal{S}\circ\mathcal{T})=\det(\mathcal{S})\det(\mathcal{T}).
\end{equation*}
This means that we can compute the determinant of a complicated transformation by breaking it up into simpler ones and computing the determinant of each piece.
Section 14.4 Determinants of Matrices
The determinant of a matrix is defined as the determinant of its induced transformation. That means the determinant is multiplicative with respect to matrix multiplication (because it’s multiplicative with respect to function composition).
Theorem 14.4.1.
Let \(A\) and \(B\) be \(n\times n\) matrices. Then
\begin{equation*}
\det(AB)=\det(A)\det(B).
\end{equation*}
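Multiplicativity is easy to check numerically in the \(2\times 2\) case. The sketch below (my own code; the matrices are arbitrary choices, not from the text) verifies it on one example:

```python
def det2(M):
    """Determinant of a 2x2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, -1], [1, 3]]   # det = 7
B = [[1, 4], [0, 2]]    # det = 2
print(det2(matmul2(A, B)), det2(A) * det2(B))  # 14 14
```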
We will derive an algorithm for finding the determinant of a matrix by considering the determinant of elementary matrices. But first, consider the following theorem.
Theorem 14.4.2. Volume Theorem I.
For a square matrix \(M\text{,}\) \(\det(M)\) is the oriented volume of the parallelepiped (the \(n\)-dimensional analog of a parallelogram) given by the column vectors.
Proof.
Let \(M\) be an \(n\times n\) matrix and let \(\mathcal{T}_{M}\) be its induced transformation. We know the sides of \(\mathcal{T}_{M}(C_{n})\) are given by \(\Set{\mathcal{T}_{M}(\vec e_1),\ldots,\mathcal{T}_{M}(\vec e_n)}\text{.}\) And, by definition,
\begin{equation*}
\mathcal{T}_{M}(\vec e_{i})=M\vec e_{i}=(\text{the \(i\)th column of \(M\)}).
\end{equation*}
Therefore \(\mathcal{T}_{M}(C_{n})\) is the parallelepiped whose sides are given by the columns of \(M\text{.}\)
This means we can think about the determinant of a matrix by considering its columns. Now we are ready to consider the determinants of the elementary matrices!
There are three types of elementary matrices corresponding to the three elementary row operations. For each one, we need to understand how the induced transformation changes volume.
Multiply a row by a non-zero constant \(\alpha\text{.}\) Let \(E_{m}\) be such an elementary matrix. Scaling one row of \(I\) is equivalent to scaling one column of \(I\text{,}\) and so the columns of \(E_{m}\) specify a parallelepiped that is scaled by \(\alpha\) in one direction. Thus \(\det(E_{m})=\alpha\text{.}\)
Swap two rows. Let \(E_{s}\) be such an elementary matrix. Swapping two rows of \(I\) is equivalent to swapping two columns of \(I\text{,}\) so \(E_{s}\) is \(I\) with two columns swapped. This reverses the orientation of the basis given by the columns, so \(\det(E_{s})=-1\text{.}\)
Add a multiple of one row to another. Let \(E_{a}\) be such an elementary matrix. The columns of \(E_{a}\) are the same as the columns of \(I\) except that one column where \(\vec e_{i}\) is replaced with \(\vec e_{i}+\alpha\vec e_{j}\text{.}\) This has the effect of shearing \(C_{n}\) in the \(\vec e_{j}\) direction.
Figure 14.4.3.
Figure 14.4.4.
Since \(C_{n}\) is sheared in a direction parallel to one of its other sides, its volume is not changed. Thus \(\det(E_{a})=1\text{.}\)
Takeaway 14.4.5.
The determinants of elementary matrices are all easy to compute and the determinant of the most-used type of elementary matrix is \(1\text{.}\)
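The three determinants can be confirmed numerically in the \(2\times 2\) case. In this sketch (my own code; the values \(\alpha=5\) and the multiple \(3\) are arbitrary choices), each elementary matrix's determinant is computed as the signed area spanned by its columns:

```python
def det2(M):
    """Determinant of a 2x2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

alpha = 5
E_m = [[alpha, 0], [0, 1]]   # multiply row 1 by alpha
E_s = [[0, 1], [1, 0]]       # swap rows 1 and 2
E_a = [[1, 0], [3, 1]]       # add 3 * (row 1) to row 2

print(det2(E_m), det2(E_s), det2(E_a))  # 5 -1 1
```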
Now, by decomposing a matrix into the product of elementary matrices, we can use the multiplicative property of the determinant (and the formulas for the determinants of the different types of elementary matrices) to compute the determinant of an invertible matrix.
Example 14.4.6.
Use elementary matrices to find the determinant of \(A=\mat{1&2\\3&4}\text{.}\)
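One way the computation can go is sketched below (my own code, not the text's worked solution): row reduce \(A\) to triangular form using only the operation "add a multiple of one row to another," which has determinant \(1\) and so leaves \(\det(A)\) unchanged, then multiply the diagonal entries of the resulting triangular matrix.

```python
A = [[1.0, 2.0], [3.0, 4.0]]

# R2 -> R2 - 3*R1: this corresponds to multiplying by an elementary
# matrix of the "add a multiple of one row to another" type (det = 1),
# so the determinant is unchanged.
A[1] = [A[1][j] - 3 * A[0][j] for j in range(2)]   # A is now [[1, 2], [0, -2]]

# A is triangular, so det(A) is the product of its diagonal entries.
det = A[0][0] * A[1][1]
print(det)  # -2.0
```

This agrees with the direct formula \(1\cdot 4-2\cdot 3=-2\text{.}\)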
We can use elementary matrices to compute the determinant of any invertible matrix by decomposing it into the product of elementary matrices. But, what about non-invertible matrices?
Let \(M\) be an \(n\times n\) matrix that is not invertible. Then, we must have \(\Nullity(M)>0\) and \(\Dim(\Col(M))=\Rank(M)<n\text{.}\) Geometrically, this means there is at least one line of vectors, \(\Null(M)\text{,}\) that gets collapsed to \(\vec 0\text{,}\) and the column space of \(M\) must be “flattened” (i.e., it has lost a dimension). Therefore, the volume of the parallelepiped given by the columns of \(M\) must be zero, and so \(\det(M)=0\text{.}\)
Based on this argument, we have the following theorem.
Theorem 14.5.1.
Let \(A\) be an \(n\times n\) matrix. \(A\) is invertible if and only if \(\det(A)\neq 0\text{.}\)
Proof.
If \(A\) is invertible, \(A=E_{1}\cdots E_{k}\text{,}\) where \(E_{1},\ldots,E_{k}\) are elementary matrices, and so
\begin{equation*}
\det(A)=\det(E_{1})\cdots\det(E_{k}).
\end{equation*}
All elementary matrices have non-zero determinants, and so \(\det(A)\neq 0\text{.}\)
Conversely, if \(A\) is not invertible, \(\Rank(A)<n\text{,}\) which means the parallelepiped given by the columns of \(A\) is “flattened” and has zero volume.
We now have another way to tell if a matrix is invertible! But, for an invertible matrix \(A\text{,}\) how do \(\det(A)\) and \(\det(A^{-1})\) relate? Well, by definition
\begin{equation*}
\det(A)\det(A^{-1})=\det(AA^{-1})=\det(I)=1,
\end{equation*}
and so \(\det(A^{-1})=\frac{1}{\det(A)}\text{.}\)
Somewhat mysteriously, we have the following theorem.
Theorem 14.6.1. Volume Theorem II.
The determinant of a square matrix \(A\) is equal to the oriented volume of the parallelepiped given by the rows of \(A\text{.}\)
Volume Theorem II can be concisely stated as \(\det(A)=\det(A^{T})\text{,}\) and joins other strange transpose-related facts (like \(\Rank(A)=\Rank(A^{T})\)).
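The identity \(\det(A)=\det(A^{T})\) is easy to spot-check numerically. The sketch below (my own code; the matrix is an arbitrary choice, and the cofactor-expansion routine is a standalone helper rather than the row-reduction algorithm developed above) verifies it on a \(3\times 3\) example:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[2, 0, 1], [1, 3, -1], [0, 4, 5]]
print(det(A), det(transpose(A)))  # 42 42
```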
We can prove Volume Theorem II using elementary matrices.
Proof.
Suppose \(A\) is not invertible. Then, neither is \(A^{T}\) and so \(\det(A)=\det(A^{T})=0\text{.}\)
Suppose \(A\) is invertible and \(A=E_{1}\cdots E_{k}\) where \(E_{1},\ldots, E_{k}\) are elementary matrices. We then have
\begin{equation*}
\det(A^{T})=\det(E_{k}^{T}\cdots E_{1}^{T})=\det(E_{k}^{T})\cdots\det(E_{1}^{T}),
\end{equation*}
which follows from the fact that the transpose reverses the order of matrix multiplication (i.e., \((XY)^{T}=Y^{T}X^{T}\)). However, for each \(E_{i}\text{,}\) we may observe that \(E_{i}^{T}\) is another elementary matrix of the same type and with the same determinant. Therefore,
\begin{equation*}
\det(A^{T})=\det(E_{k})\cdots\det(E_{1})=\det(E_{1})\cdots\det(E_{k})=\det(A).
\end{equation*}
The key observations for this proof are that (i) \(\det(E_{i}^{T})=\det(E_{i})\) and (ii) since the \(\det(E_{i})\)’s are just scalars, the order in which they are multiplied doesn’t matter.
Section 14.7 Exercises
1.
Let \(\mathcal{T}:\R^{2}\to\R^{2}\) be defined by \(\mathcal{T}\mat{x\\y}=\matc{3x-y\\x-\tfrac{1}{4}y}\text{.}\) Find the volume of \(\mathcal{T}(C_{2})\text{.}\)
Solution.
The volume of \(\mathcal{T}(C_{2})\) is equal to the absolute value of the determinant of \(\mathcal{T}\text{.}\) We have that
\begin{equation*}
[\mathcal{T}]_{\mathcal{E}}=\mat{3&-1\\1&-\tfrac{1}{4}},
\end{equation*}
so \(\det \mathcal{T} = -3/4 + 1 = 1/4\text{.}\) Since this number is positive, it is also the desired volume.
2.
Let \(\mathcal{S}:\R^{3}\to\R^{3}\) be defined by \(\mathcal{S}\mat{x\\y\\z}=\matc{2x+y+z\\x-\tfrac{1}{2}y\\z}\text{.}\) Find the volume of \(\mathcal{S}(C_{3})\text{.}\)
Solution.
We start by computing the determinant of \(\mathcal{S}\text{.}\) The determinant of \(\mathcal{S}\) can be computed from \([\mathcal{S}]_{\mathcal{E}}\text{,}\) which is given by
\begin{equation*}
[\mathcal{S}]_{\mathcal{E}}=\mat{2&1&1\\1&-\tfrac{1}{2}&0\\0&0&1}.
\end{equation*}
Since determinant is preserved by row operations of the form “add a multiple of one row to another”, we can partially row reduce \([\mathcal{S}]_{\mathcal{E}}\) (using only that row operation) without changing the determinant. Thus, the determinant of \([\mathcal{S}]_{\mathcal{E}}\) is the same as the determinant of
\begin{equation*}
\mat{2&1&1\\0&-1&-\tfrac{1}{2}\\0&0&1}.
\end{equation*}
This matrix is triangular, so the determinant is just the product of the entries on the diagonal. Therefore, \(\det [\mathcal{S}]_{\mathcal{E}}= -2\text{.}\) But volume is non-negative, so the volume of \(\mathcal{S}(C_{3})\) is \(2\text{.}\)
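The arithmetic in this solution can be mirrored in a short sketch (my own code, not part of the text): perform the single row operation \(R_2\to R_2-\tfrac{1}{2}R_1\) and multiply the diagonal of the resulting triangular matrix.

```python
S = [[2.0, 1.0, 1.0], [1.0, -0.5, 0.0], [0.0, 0.0, 1.0]]  # [S]_E

# R2 -> R2 - (1/2) R1: does not change the determinant
S[1] = [S[1][j] - 0.5 * S[0][j] for j in range(3)]  # row 2 becomes [0, -1, -0.5]

# S is now triangular: det = product of the diagonal entries
det = S[0][0] * S[1][1] * S[2][2]
print(det, abs(det))  # -2.0 2.0
```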
3.
Let \(\mathcal{T}:\R^{2}\to\R^{2}\) be defined by \(\mathcal{T}\mat{x\\y}=\matc{x+2y\\-x-y}\text{.}\)
Draw \(\mathcal{E}\) and \(\mathcal{T}(\mathcal{E})\) and then determine whether \(\mathcal{T}\) is orientation preserving or orientation reversing.
Find \(\det(\mathcal{T})\text{.}\)
Solution.
Computing, we see \(\mathcal{T}(\xhat) = \mat{1\\-1}\) and \(\mathcal{T}(\yhat)=\mat{2\\-1}\text{.}\) Drawing these two vectors, we see that \(\mathcal{T}(\xhat),\mathcal{T}(\yhat)\) can be continuously transformed back into \(\xhat,\yhat\) while staying linearly independent the whole time. Therefore \(\mathcal{T}\) is orientation preserving.
\(\det \mathcal{T}\) is equal to the determinant of the matrix \(\mat{1&2\\-1&-1}\text{,}\) which is \((1)(-1)-(2)(-1)=1\text{.}\)
4.
For each linear transformation defined below, find its determinant.
\(\mathcal{S}:\R^{2}\to\R^{2}\text{,}\) where \(\mathcal{S}\) shortens every vector by a factor of \(\tfrac{2}{3}\text{.}\)
\(\mathcal{R}:\R^{2}\to\R^{2}\text{,}\) where \(\mathcal{R}\) is rotation counter-clockwise by \(90^{\circ}\text{.}\)
\(\mathcal{F}:\R^{2}\to\R^{2}\text{,}\) where \(\mathcal{F}\) is reflection across the line \(y=-x\text{.}\)
\(\mathcal{G}:\R^{2}\to\R^{2}\text{,}\) where \(\mathcal{G}(\vec x)=\mathcal{P}(\vec x)+ \mathcal{Q}(\vec x)\) and where \(\mathcal{P}\) is projection onto the line \(y=x\) and \(\mathcal{Q}\) is projection onto the line \(y=-\tfrac{1}{2}x\text{.}\)
\(\mathcal{T}:\R^{3}\to\R^{3}\text{,}\) where \(\mathcal{T}\mat{x\\y\\z}=\matc{x-y+z\\z+x-\tfrac{1}{3}y\\z}\text{.}\)
\(\mathcal{J}:\R^{3}\to\R^{3}\text{,}\) where \(\mathcal{J}\mat{x\\y\\z}=\matc{0\\0\\x+y+z}\text{.}\)
\(\mathcal{K}\circ \mathcal{H}:\R^{2}\to\R^{2}\text{,}\) where \(\mathcal{H}\mat{x\\y}=\matc{x+2y\\-x-y}\text{,}\) and \(\mathcal{K}\mat{x\\y}=\matc{-x-2y\\x+y}\text{.}\)
Solution.
The matrix for \(\mathcal{S}\) in any basis is \(\mat{ 2/3 & 0 \\ 0 & 2/3 }\text{,}\) so the determinant is \(4/9\text{.}\)
\(\mathcal{R}\) does not change volume or orientation so its determinant is \(1\text{.}\)
\(\mathcal{F}\) does not change volume but it reverses orientation so its determinant is \(-1\text{.}\)
Though the determinants of \(\mathcal{P}\) and \(\mathcal{Q}\) are both \(0\text{,}\) the determinant of \(\mathcal{G}\) is not zero! We can compute the standard matrix for \(\mathcal{G}\) by noticing \(\mathcal{G}(\xhat)=\mat{13/10\\1/10}\) and \(\mathcal{G}(\yhat)=\mat{1/10\\7/10}\text{.}\) Therefore
\begin{equation*}
[\mathcal{G}]_{\mathcal{E}}=\mat{13/10&1/10\\1/10&7/10},
\end{equation*}
and so \(\det \mathcal{G}=\tfrac{13}{10}\cdot\tfrac{7}{10}-\tfrac{1}{10}\cdot\tfrac{1}{10}=\tfrac{9}{10}\text{.}\)
Adding \(-1\) times the first row of \([\mathcal{T}]_{\mathcal{E}}=\mat{1&-1&1\\1&-\tfrac{1}{3}&1\\0&0&1}\) to the second row produces
\begin{equation*}
\mat{1&-1&1\\0&\tfrac{2}{3}&0\\0&0&1},
\end{equation*}
which has the same determinant as \([\mathcal{T}]_{\mathcal{E}}\text{,}\) and since this matrix is upper triangular, its determinant is simply \(2/3\text{.}\)
The map \(\mathcal{J}\) maps every vector in \(\R^{3}\) into \(\Span\Set{\mat{ 0 \\ 0 \\ 1}}\text{,}\) hence \(\mathcal{J}\) is not invertible. Therefore, \(\det \mathcal{J} = 0\text{.}\)
The determinant of the composition of the two maps is just the product of the determinants of the two maps. The matrices for \(\mathcal{K}\) and \(\mathcal{H}\) (with respect to \(\mathcal{E}\)) are
\begin{equation*}
[\mathcal{K}]_{\mathcal{E}}=\mat{-1&-2\\1&1}\qquad\text{and}\qquad[\mathcal{H}]_{\mathcal{E}}=\mat{1&2\\-1&-1},
\end{equation*}
each of which has determinant \(1\text{.}\) Therefore \(\det(\mathcal{K}\circ\mathcal{H})=1\cdot 1=1\text{.}\)
Therefore \(A=E_{1}^{-1}E_{2}^{-1}E_{3}^{-1}E_{4}^{-1}E_{5}^{-1}\text{.}\) By thinking about the relationship between elementary matrices and determinants, we see that \(\det E_{1}^{-1}=\det E_{4}^{-1}=\det E_{5}^{-1}= 1\) and that \(\det E_{2}^{-1}= 2\) and \(\det E_{3}^{-1}= 3\text{.}\) Therefore \(\det A = 6\text{.}\)
This matrix is lower triangular, so the determinant is equal to the product of the diagonal entries, which is still \(6\text{.}\) Further \(\det(E_{1}^{T}) = 1\text{,}\) and so \(\det(A^{T}) = 6\text{.}\)
7.
Let \(A\) be an \(n \times n\) matrix that can be decomposed into the product of elementary matrices.
What is \(\Rank(A)\text{?}\) Justify your answer.
What is \(\Null(A^{-1})\text{?}\) Justify your answer.
Solution.
The rank of \(A\) is equal to \(n\text{.}\) Since \(A\) can be expressed as a product of elementary matrices, each of which has non-zero determinant, \(\det(A)\neq 0\text{.}\) Therefore \(A\) is invertible and \(\Rank(A)=n\text{.}\)
The nullspace of \(A^{-1}\) is trivial (i.e. equal to \(\Set{\vec 0}\)), since \(A^{-1}\) is invertible.
8.
Anna and Ella are studying the relationship between determinant and volume. In particular, they are studying \(\mathcal{S}:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}\) defined by \(\mathcal{S}\mat{x \\ y \\ z}=\mat{4x \\ 2z \\ 0}\text{,}\) and \(\mathcal{T}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2}\) defined by \(\mathcal{T}\mat{x \\ y \\ z}=\mat{2x \\ 8z}\text{.}\)
For each conversation below, (a) evaluate Anna and Ella’s arguments as correct, mostly correct, or incorrect; (b) point out where each argument makes correct/incorrect statements; (c) give a correct numerical value for the determinant or explain why it doesn’t exist.
Anna says:
Since the image of \(C_{3}\) under \(\mathcal{S}\) is the parallelepiped generated by \(\mat{4 \\ 0 \\ 0}\text{,}\) \(\mat{0 \\ 0 \\ 0}\text{,}\) and \(\mat{0 \\ 2 \\ 0}\text{,}\) which is a 2-dimensional parallelogram, the volume of \(\mathcal{S}(C_{3})\) is just the area of this parallelogram, which is 8. Thus, \(\det(\mathcal{S})=8\text{.}\)
Ella says:
\(\det(\mathcal{S})\) is undefined, because \(\mathcal{S}\) is not invertible.
Anna says:
Since the image of \(C_{3}\) under \(\mathcal{T}\) is the parallelepiped generated by \(\mat{2 \\ 0}\text{,}\)\(\mat{0 \\ 0}\text{,}\) and \(\mat{0 \\ 8}\text{,}\) which is a parallelogram in \(\mathbb{R}^{2}\text{,}\) the signed volume of \(\mathcal{T}(C_{3})\) is just the signed area of this parallelogram, which is 16. Thus, \(\det(\mathcal{T})=16\text{.}\)
Ella says:
\(\det(\mathcal{T})\) is undefined, because \(\det(\mathcal{T})\) is only defined when the domain and codomain of \(\mathcal{T}\) are the same.
Solution.
Anna’s argument is incorrect.
Reason: Since \(\mathcal{S}\) is a linear transformation on \(\R^{3}\text{,}\) its determinant is given by the signed change of 3-dimensional volume. Anna’s argument is incorrect because she considered the 2-dimensional volume of \(\mathcal{S}(C_{3})\text{.}\)
Ella’s argument is incorrect.
Reason: The determinant is defined for all linear transformations from \(\R^{n}\) to \(\R^{n}\text{,}\) no matter whether it is invertible or not.
Finally, \(\det(\mathcal{S})=0\text{:}\) since \(\mathcal{S}(C_{3})\) is a 2-dimensional object in \(\R^{3}\text{,}\) its 3-dimensional volume is \(0\text{.}\) Therefore \(\VolChange(\mathcal{S})=0\text{,}\) and we conclude that \(\det(\mathcal{S})=0\text{.}\)
Anna’s argument is incorrect.
Reason: The determinant function is only defined for linear transformations with the same domain and codomain.
Ella’s argument is correct.
Finally, \(\det(\mathcal{T})\) is undefined, because the domain and codomain of \(\mathcal{T}\) are not the same.