Appendix D Formulas for \(2\times 2\) and \(3\times 3\) Determinants
In this appendix you will learn:
A practical formula for \(2\times 2\) determinants.
How to calculate \(3\times 3\) determinants using the diagonal method.
How to calculate \(3\times 3\) determinants using the cofactor expansion method.
Module 14 discusses the theory of determinants and gives a general algorithm for computing determinants by using elementary matrices. But, since \(2\times 2\) and \(3\times 3\) matrices arise so often in day-to-day life (the day-to-day life of a mathematics student, at least!), it is worth learning some special-purpose formulas for computing the determinants of \(2\times 2\) and \(3\times 3\) matrices.
It should be noted that these formulas are special. Though there do exist formulas for determinants of \(n\times n\) matrices, they are exponentially more complex than the formulas for \(2\times 2\) and \(3\times 3\) matrices. As such, determinants of large matrices are usually computed using row reduction/elementary matrices and not formulas (general determinant formulas are primarily useful as theoretical tools for writing proofs).
Section D.1 Computing \(2\times 2\) Determinants
For a \(2\times 2\) matrix, we can calculate its determinant directly from its entries.
Theorem D.1.1.
Let \(M=\mat{a & b \\ c & d}\text{.}\) Then,
\begin{equation*}
\det(M)=ad-bc.
\end{equation*}
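The formula is a one-liner in code. The following Python sketch (the function name `det2` is our own, not from the text) computes a \(2\times 2\) determinant directly from its entries:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]] via ad - bc."""
    return a * d - b * c

# The identity matrix has determinant 1.
print(det2(1, 0, 0, 1))  # 1
```
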
The \(2\times 2\) determinant formula can be deduced from Volume Theorem I. Let \(M=\mat{a&b\\c&d}\) and let \(\vec c_{1}=\mat{a\\c}\) and \(\vec c_{2}=\mat{b\\d}\) be the columns of \(M\text{.}\) We need to compute the area of the parallelogram \(\mathcal{P}\text{,}\) with sides \(\vec c_{1}\) and \(\vec c_{2}\text{.}\)
Figure D.1.2.
We can compute the area of \(\mathcal{P}\) by computing the area of a rectangle that contains \(\mathcal{P}\) and subtracting off any area that we “over counted”.
Figure D.1.3.
Figure D.1.4.
Thus,
\begin{equation*}
\Vol(\mathcal{P}) = \text{area of big rectangle}- \text{area of little rectangles}- \text{area of triangles}.
\end{equation*}
Using the coordinates for \(\vec c_{1}\) and \(\vec c_{2}\text{,}\) we get
\begin{equation}
\Vol(\mathcal{P}) = \underbrace{(a+b)(c+d)}_{\text{area of big rectangle}}
- \underbrace{2bc}_{\text{area of little rectangles}}
- \underbrace{\left(2\cdot\tfrac{ac}{2} + 2\cdot\tfrac{bd}{2}\right)}_{\text{area of triangles}}
= ad-bc.\tag{D.1.1}
\end{equation}
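For completeness, here is the algebra behind the final equality in Equation (D.1.1): expanding and cancelling,

```latex
\begin{aligned}
(a+b)(c+d) - 2bc - 2\cdot\tfrac{ac}{2} - 2\cdot\tfrac{bd}{2}
  &= ac + ad + bc + bd - 2bc - ac - bd \\
  &= ad - bc.
\end{aligned}
```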
Figure D.1.5.
Equation (D.1.1) is beautiful and simple, but its derivation should give you pause. Volume Theorem I refers to oriented volume and we didn’t make any reference to orientation in our figures! Indeed, we played tricks with pictures. We drew \(\vec c_{1}\) and \(\vec c_{2}\) in a right-handed orientation in the first quadrant, even though the vectors \(\mat{a\\c}\) and \(\mat{b\\d}\) could be in any quadrant (and one or both could even be the zero vector)! To fully justify Equation (D.1.1), we need to consider cases based on all the possible ways \(\vec c_{1}\) and \(\vec c_{2}\) can form a parallelogram. However, it turns out that every case gives the same answer: \(\det\left(\mat{a&b\\c&d}\right)=ad-bc\text{.}\)
Example D.1.6.
Directly compute the determinant of \(M=\mat{1 & 6 \\ 2 & 7}\) using the \(2\times 2\) formula. Then, find the determinant of \(M\) after decomposing it into the product of elementary matrices.
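As a check on the first part (a sketch, not a full solution; the elementary-matrix computation is left to the reader), the \(2\times 2\) formula gives

```latex
\det(M) = (1)(7) - (6)(2) = 7 - 12 = -5.
```

Both methods must produce this same value.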
Section D.2 Computing \(3\times 3\) Determinants
Fortunately, there is a clever mnemonic for remembering the \(3\times 3\) determinant formula, called the Rule of Sarrus or the diagonal trick.
Rule of Sarrus
Let \(M=\mat{a & b & c \\ d & e & f \\ g & h & i}\text{.}\) To compute the determinant of \(M\) using the Rule of Sarrus, apply the following four steps.
Step 1.
Augment \(M\) with copies of its first two columns.
\begin{equation*}
\left[\begin{array}{ccc|cc}a & b & c & a & b \\ d & e & f & d & e \\ g & h & i & g & h\end{array}\right]
\end{equation*}
Step 2.
Multiply the entries along each of the three diagonals of the new matrix, and then add the results. These are called the diagonal products.
Figure D.2.2.
\begin{equation*}
\text{sum of diagonal products}={ aei}+{ bfg}+{ cdh}.
\end{equation*}
Step 3.
Multiply the entries along each of the three anti-diagonals. These are called the anti-diagonal products; they will be subtracted in the final step.
Figure D.2.3.
\begin{equation*}
\text{negated sum of anti-diagonal products}=-gec-hfa-idb.
\end{equation*}
Step 4.
Add the diagonal products and subtract the anti-diagonal products to get the determinant.
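The four steps above can be sketched in code. This Python function (our own illustration, not part of the text) applies the Rule of Sarrus to a \(3\times 3\) matrix given as a list of rows:

```python
def sarrus(M):
    """Determinant of a 3x3 matrix M (a list of three rows) by the Rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = M
    # Step 2: sum of the diagonal products.
    diagonals = a * e * i + b * f * g + c * d * h
    # Step 3: sum of the anti-diagonal products.
    anti_diagonals = g * e * c + h * f * a + i * d * b
    # Step 4: add the diagonal products and subtract the anti-diagonal products.
    return diagonals - anti_diagonals

print(sarrus([[1, 1, 1], [0, 2, -1], [3, 5, 1]]))  # -2
```
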
It may be tempting to apply the Rule of Sarrus to \(4\times 4\) and larger matrices, but don’t do it! There is a formula for \(4\times 4\) determinants, but it’s not given by the Rule of Sarrus.
Because your curiosity is never-ending, here’s the formula. For a \(4\times 4\) matrix \(A=[a_{ij}]\text{,}\) we have \(\det(A)= a_{1 1}a_{2 2}a_{3 3}a_{4 4}- a_{1 1}a_{2 2}a_{3 4}a_{4 3}- a_{1 1}a_{2 3}a_{3 2}a_{4 4}+ a_{1 1}a_{2 3}a_{3 4}a_{4 2}+ a_{1 1}a_{2 4}a_{3 2}a_{4 3}- a_{1 1}a_{2 4}a_{3 3}a_{4 2}- a_{1 2}a_{2 1}a_{3 3}a_{4 4}+ a_{1 2}a_{2 1}a_{3 4}a_{4 3}+ a_{1 2}a_{2 3}a_{3 1}a_{4 4}- a_{1 2}a_{2 3}a_{3 4}a_{4 1}- a_{1 2}a_{2 4}a_{3 1}a_{4 3}+ a_{1 2}a_{2 4}a_{3 3}a_{4 1}+ a_{1 3}a_{2 1}a_{3 2}a_{4 4}- a_{1 3}a_{2 1}a_{3 4}a_{4 2}- a_{1 3}a_{2 2}a_{3 1}a_{4 4}+ a_{1 3}a_{2 2}a_{3 4}a_{4 1}+ a_{1 3}a_{2 4}a_{3 1}a_{4 2}- a_{1 3}a_{2 4}a_{3 2}a_{4 1}- a_{1 4}a_{2 1}a_{3 2}a_{4 3}+ a_{1 4}a_{2 1}a_{3 3}a_{4 2}+ a_{1 4}a_{2 2}a_{3 1}a_{4 3}- a_{1 4}a_{2 2}a_{3 3}a_{4 1}- a_{1 4}a_{2 3}a_{3 1}a_{4 2}+ a_{1 4}a_{2 3}a_{3 2}a_{4 1}\text{.}\) This formula involves \(24\) products. The \(5\times 5\) formula involves \(120\) products and the \(6\times 6\) formula involves \(720\) products. It only gets worse from there.
Like the \(2\times 2\) formula for determinants, we can derive the \(3\times 3\) formula directly from the definition. However, it takes quite a bit more work. (If you’re interested in proving the \(3\times 3\) determinant formula, try using the elementary matrix approach rather than computing the volume of a parallelepiped directly.)
Section D.3 Determinant Formulas and Orientation
Determinants and orientation are connected, and our determinant formulas (if we accept them as true) give us an alternative way to determine the orientation of a basis.
Let \(\mathcal{B}=\Set{\vec b_1,\vec b_2}\) be an ordered basis for \(\R^{2}\text{,}\) and let \(M=[\vec b_{1}|\vec b_{2}]\) be the matrix whose columns are \(\vec b_{1}\) and \(\vec b_{2}\text{.}\) Since \(\mathcal{B}\) is linearly independent, we know that \(\det(M)\neq 0\text{.}\) Further, applying the definition of the determinant, we know that \(\det(M)>0\) means \(\mathcal{B}\) is a right-handed basis and \(\det(M)<0\) means \(\mathcal{B}\) is a left-handed basis.
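Under this sign test, classifying a 2D ordered pair of vectors is mechanical. A minimal Python sketch (the function name `handedness` is our own):

```python
def handedness(b1, b2):
    """Classify the ordered pair of 2D vectors (b1, b2) by the sign of det([b1|b2])."""
    # Columns of the matrix are b1 and b2, so det = b1_x * b2_y - b2_x * b1_y.
    det = b1[0] * b2[1] - b2[0] * b1[1]
    if det > 0:
        return "right-handed"
    if det < 0:
        return "left-handed"
    return "not a basis"

print(handedness([1, 2], [-3, 2]))  # right-handed
```
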
Example D.3.1.
Use a determinant to decide whether the ordered basis \(\Set{\mat{1\\2},\mat{-3\\2}}\) is left-handed or right-handed.
Solution.
Let \(A = \mat{1 & -3 \\ 2 & 2}\) be the matrix whose columns are the elements of the given ordered basis.
Using the formula for \(2 \times 2\) determinants gives us
\begin{equation*}
\det(A)=(1)(2)-(-3)(2)=2+6=8>0,
\end{equation*}
and so we conclude \(\Set{\mat{1\\2},\mat{-3\\2}}\) is a right-handed basis.
Recall the ordered basis \(\mathcal{Q}=\Set{\xhat, \vec u_{\theta}}\) where \(\vec u_{\theta} = \mat{\cos\theta\\\sin\theta}\) is the unit vector which forms an angle of \(\theta\) with the positive \(x\)-axis.
Figure D.3.2.
Visually, we can see that \(\mathcal{Q}\) should be right-handed when \(\theta\in(0,\pi)\text{,}\) left-handed when \(\theta\in(\pi,2\pi)\text{,}\) and not a basis when \(\theta=0\) or \(\theta=\pi\text{.}\)
But what does the determinant say?
Computing the determinant of the matrix \(Q=[\xhat|\,\vec u_{\theta}]\) directly using the \(2\times 2\) determinant formula, we get
\begin{equation*}
\det(Q)=\det\left(\mat{1 & \cos\theta \\ 0 & \sin\theta}\right)=(1)(\sin\theta)-(\cos\theta)(0)=\sin\theta.
\end{equation*}
Notice that \(\det(Q)=\sin\theta>0\) when \(\theta\in(0,\pi)\text{,}\) \(\det(Q)=\sin\theta < 0\) when \(\theta\in(\pi,2\pi)\text{,}\) and \(\det(Q)=\sin\theta=0\) when \(\theta\in\Set{0,\pi}\text{.}\)
The determinant supports our intuition.
Exercises D.4
1.
For each matrix given below, calculate its determinant using both row reduction/elementary matrices and the \(2\times 2\) determinant formula.
\(\displaystyle \mat{1 & 0 \\ 2 & 4}\)
\(\displaystyle \mat{1 & 5 \\ 1 & 5}\)
\(\displaystyle \mat{1 & 0 \\ 0 & 1}\)
\(\displaystyle \mat{1 & 1 \\ 0 & 0}\)
Solution.
\(\displaystyle 4\)
\(\displaystyle 0\)
\(\displaystyle 1\)
\(\displaystyle 0\)
2.
For each matrix given below, calculate its determinant using both row reduction/elementary matrices and the \(3\times 3\) determinant formula.
3.
For each ordered set given below, use a determinant to decide whether it is a right-handed basis, a left-handed basis, or not a basis.
Solution.
Since \(\det\left(\mat{1 & 1 \\ -3 & 2}\right)=5>0\text{,}\) the ordered set \(\Set{\mat{1 \\ -3},\mat{1 \\ 2}}\) is a right-handed basis.
Since \(\det\left(\mat{1 & -2 \\ -3 & 6}\right)=0\text{,}\) the set \(\Set{\mat{1 \\ -3},\mat{-2 \\ 6}}\) is not linearly independent and so is not a basis.
Since \(\det\left(\mat{1 & 1 & 1 \\ 0 & 2 & -1 \\ 3 & 5 & 1}\right)=-2< 0\text{,}\) the ordered set \(\Set{\mat{1 \\ 0 \\ 3},\mat{1 \\ 2 \\ 5},\mat{1 \\ -1 \\ 1}}\) is a left-handed basis.
Since \(\det\left(\mat{1 & 1 & 1 \\ 4 & 2 & 1 \\ 9 & 3 & 1}\right)=-2<0\text{,}\) the ordered set \(\Set{\mat{1 \\ 4 \\ 9},\mat{1 \\ 2 \\ 3},\mat{1 \\ 1 \\ 1}}\) is a left-handed basis.
Since \(\det\left(\mat{4 & 4 & 2 \\ 2 & 2 & 1 \\ 4 & 0 & 6}\right)=0\text{,}\) the set \(\Set{\mat{4 \\ 2 \\ 4},\mat{4 \\ 2 \\ 0},\mat{2 \\ 1 \\ 6}}\) is not linearly independent and so is not a basis.
4.
Find all values of \(a,b\in \R\) so that the ordered set \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is (a) a right-handed basis, (b) a left-handed basis, (c) not a basis.
Solution.
Before answering, note that \(\det\left(\mat{a^2 & ab \\ ab & b}\right)=a^{2}b-a^{2}b^{2}=a^{2}( b -b^{2})\text{.}\)
If \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is a right-handed basis, then \(\det\left(\mat{a^2 & ab \\ ab & b}\right)=a^{2}( b-b^{2})>0\text{.}\) This implies that \(a^{2}\) and \(b-b^{2}\) are both nonzero and have the same sign. Since \(a^{2}\ge 0\text{,}\) we must have \(a^{2}>0\) and \(b-b^{2}>0\text{.}\) The roots of \(b-b^{2}\) are \(b=0\) and \(b=1\text{,}\) so \(b-b^{2}>0\) implies \(0<b<1\text{.}\) Therefore, \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is a right-handed basis when \(a\ne 0\) and \(0<b<1\text{.}\)
If \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is a left-handed basis, then \(\det\left(\mat{a^2 & ab \\ ab & b}\right)=a^{2}( b-b^{2})<0\text{.}\) This implies that \(a^{2}\) and \(b-b^{2}\) are both nonzero and have different signs. Since \(a^{2}\ge 0\text{,}\) this implies that \(a^{2}>0\) and \(b-b^{2}<0\text{.}\) The roots of \(b-b^{2}\) are \(b=0\) and \(b=1\text{,}\) so \(b-b^{2}<0\) implies \(b<0\) or \(b>1\text{.}\) Therefore, \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is a left-handed basis when \(a\ne 0\) and either \(b<0\) or \(b>1\text{.}\)
If \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is not a basis, then \(\det\left(\mat{a^2 & ab \\ ab & b}\right)=a^{2}( b-b^{2})=0\text{.}\) This implies that \(a^{2}=0\) or \(b-b^{2}=0\text{.}\) Therefore, \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is not a basis if one of the following conditions holds: \(a=0\text{,}\)\(b=1\text{,}\) or \(b=0\text{.}\)
5.
Let \(M=\mat{a & b \\ c & d}\text{.}\) The adjugate matrix (sometimes called the classical adjoint) of \(M\text{,}\) notated \(M^{\text{adj}}\text{,}\) is the matrix given by \(M^{\text{adj}}=\mat{d & -b \\ -c & a}\text{.}\) Prove that if \(M\) is invertible, then \(\displaystyle M^{-1}=\frac{M^{\text{adj}}}{\det(M)}\text{.}\)
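A numeric spot-check of this identity (a sanity check, not a proof): the Python sketch below (the function name `inverse_via_adjugate` is our own) builds \(M^{\text{adj}}/\det(M)\) for a sample invertible matrix and multiplies it against \(M\text{,}\) which should give the identity matrix.

```python
def inverse_via_adjugate(a, b, c, d):
    """Inverse of M = [[a, b], [c, d]] computed as adj(M)/det(M); assumes det != 0."""
    det = a * d - b * c
    # Adjugate: swap the diagonal entries and negate the off-diagonal entries.
    return [[d / det, -b / det], [-c / det, a / det]]

M = [[1, 6], [2, 7]]
Minv = inverse_via_adjugate(1, 6, 2, 7)
# M times Minv should be (approximately, in floating point) the identity matrix.
product = [[sum(M[i][k] * Minv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(product)
```
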
6.
For each statement below, determine whether it is true or false. Justify your answer.
A \(2\times 2\) matrix \(M\) has determinant \(1\) if and only if \(M= I_{2\times 2}\text{.}\)
A \(3\times 3\) matrix \(M\) has determinant \(1\) if and only if \(\VolChange (\mathcal{T}_{M})\) is equal to 1, where \(\mathcal{T}_{M}\) is the transformation given by \(\mathcal{T}_{M}(\vec x)=M\vec x\text{.}\)
For vectors \(\vec a,\vec b\in \R^{2}\text{,}\) it is always the case that \(\det([\vec a|\vec b])=-\det([\vec b|\vec a])\text{.}\)
For a \(2\times 2\) or \(3\times 3\) matrix \(M\text{,}\) multiplying a single entry of \(M\) by \(4\) will change \(\det(M)\) by a factor of \(4\text{.}\)
For a square matrix \(A\text{,}\) it is always the case that \(\det(A^{T}A)\geq 0\text{.}\)
Solution.
False. A counterexample is \(M=\mat{2 & 0 \\ 0 & \frac{1}{2}}\text{.}\)
False. A counterexample is \(M=\mat{-1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1}\text{.}\)\(M\) does not change volume, but it does reverse orientation.
True. Note that \([\vec a|\vec b]\) is just \([\vec b|\vec a]\) with its columns swapped. The oriented volume of the parallelogram generated by \(\vec a\) and \(\vec b\) is equal to the negative of the oriented volume of the parallelogram generated by \((\vec b,\vec a)\text{.}\) Using Volume Theorem I, we have \(\det([\vec a|\vec b])=-\det([\vec b|\vec a])\text{.}\)
False. A counterexample is \(M=\mat{1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1}\text{.}\) We multiply the \((1,2)\)-entry by \(4\) to get another matrix \(M'\text{.}\) Note that \(M'\) is still \(\mat{1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 }\text{,}\) and \(\det(M)=1=\det(M')\ne 4\det(M)\text{.}\)
True. Note that \(\det(A^{T}A)=\det(A^{T})\det(A)=\det(A)\det(A)=\det(A)^{2}\ge 0\text{.}\)