
Linear Algebra

Appendix D Formulas for \(2\times 2\) and \(3\times 3\) Determinants

In this appendix you will learn:
  • A practical formula for \(2\times 2\) determinants.
  • How to calculate \(3\times 3\) determinants using the diagonal method.
  • How to calculate \(3\times 3\) determinants using the cofactor expansion method.
Module 14 discusses the theory of determinants and gives a general algorithm for computing determinants by using elementary matrices. But, since \(2\times 2\) and \(3\times 3\) matrices arise so often in day-to-day life (the day-to-day life of a mathematics student, at least!), it is worth learning some special-purpose formulas for computing the determinants of \(2\times 2\) and \(3\times 3\) matrices.
It should be noted that these formulas are special. Though there do exist formulas for determinants of \(n\times n\) matrices, they are exponentially more complex than the formulas for \(2\times 2\) and \(3\times 3\) matrices. As such, determinants of large matrices are usually computed using row reduction/elementary matrices and not formulas (general determinant formulas are primarily useful as theoretical tools for writing proofs).

Section D.1 Computing \(2\times 2\) Determinants

For a \(2\times 2\) matrix, we can calculate its determinant directly from its entries.
The \(2\times 2\) determinant formula can be deduced from Volume Theorem I. Let \(M=\mat{a&b\\c&d}\) and let \(\vec c_{1}=\mat{a\\c}\) and \(\vec c_{2}=\mat{b\\d}\) be the columns of \(M\text{.}\) We need to compute the area of the parallelogram \(\mathcal{P}\) with sides \(\vec c_{1}\) and \(\vec c_{2}\text{.}\)
Figure D.1.2.
We can compute the area of \(\mathcal{P}\) by computing the area of a rectangle that contains \(\mathcal{P}\) and subtracting off any area that we “over counted”.
Figure D.1.3.
Figure D.1.4.
Thus,
\begin{equation*} \Vol(\mathcal{P}) = \text{area of big rectangle}- \text{area of little rectangles}- \text{area of triangles}. \end{equation*}
Using the coordinates for \(\vec c_{1}\) and \(\vec c_{2}\text{,}\) we get
\begin{equation} \Vol(\mathcal{P}) = \underbrace{(a+b)(d+c)}_{\text{area of big rectangle}} - \underbrace{2bc}_{\text{area of little rectangles}} - \underbrace{\left(2\tfrac{ac}{2} + 2\tfrac{bd}{2}\right)}_{\text{area of triangles}} = ad-bc.\tag{D.1.1} \end{equation}
Figure D.1.5.
Equation (D.1.1) is beautiful and simple, but its derivation should give you pause. Volume Theorem I refers to oriented volume and we didn’t make any reference to orientation in our figures! Indeed, we played tricks with pictures. We drew \(\vec c_{1}\) and \(\vec c_{2}\) in a right-handed orientation in the first quadrant, even though the vectors \(\mat{a\\c}\) and \(\mat{b\\d}\) could be in any quadrant (and one or both could even be the zero vector)! To fully justify Equation (D.1.1), we need to consider cases based on all the possible ways \(\vec c_{1}\) and \(\vec c_{2}\) can form a parallelogram. However, it turns out that every case gives the same answer: \(\det\left(\mat{a&b\\c&d}\right)=ad-bc\text{.}\)
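The formula \(\det\left(\mat{a&b\\c&d}\right)=ad-bc\) translates directly into code. Here is a minimal Python sketch; the function name `det2` and the nested-list representation of matrices are our own choices, not notation from the text:

```python
# A direct transcription of the 2x2 determinant formula det(M) = ad - bc.
def det2(M):
    """Determinant of a 2x2 matrix stored as [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    return a * d - b * c

print(det2([[1, 6], [2, 7]]))   # -> -5
```

Using plain nested lists keeps the sketch dependency-free; a numerical library would of course work just as well.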

Example D.1.6.

Directly compute the determinant of \(M=\mat{1 & 6 \\ 2 & 7}\) using the \(2\times 2\) formula. Then, find the determinant of \(M\) after decomposing it into the product of elementary matrices.

Solution.

Using the \(2\times 2\) formula, we get
\begin{equation*} \det(M)=(1)(7)-(2)(6)=-5. \end{equation*}
Alternatively, row reducing and keeping track of the elementary matrices for each step, we see
\begin{equation*} \underbrace{\mat{1 & -6 \\ 0 & 1}}_{\textstyle E_3}\underbrace{\mat{1 & 0 \\ 0 & -\frac{1}{5}}}_{\textstyle E_2}\underbrace{\mat{1 & 0 \\ -2 & 1}}_{\textstyle E_1}M = \mat{1&0\\0&1}, \end{equation*}
and so
\begin{equation*} M = \mat{1 & 0 \\ 2 & 1}\mat{1 & 0 \\ 0 & -5}\mat{1 & 6 \\ 0 & 1}= E_{1}^{-1}E_{2}^{-1}E_{3}^{-1}. \end{equation*}
\(E_{1}^{-1}\) and \(E_{3}^{-1}\) both have determinant \(1\text{,}\) and \(E_{2}^{-1}\) has determinant \(-5\text{.}\) Thus,
\begin{equation*} \det(M)=\det(E_{1}^{-1})\det(E_{2}^{-1})\det(E_{3}^{-1})=(1)(-5)(1)=-5, \end{equation*}
which is exactly what we got using the formula.
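The elementary-matrix bookkeeping in this solution can be checked mechanically. The sketch below (our own, using Python's `fractions` module to avoid floating-point error) multiplies out \(E_3 E_2 E_1 M\) and confirms the product is the identity:

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E1 = [[1, 0], [-2, 1]]                  # R2 -> R2 - 2*R1
E2 = [[1, 0], [0, Fraction(-1, 5)]]     # R2 -> -(1/5)*R2
E3 = [[1, -6], [0, 1]]                  # R1 -> R1 - 6*R2
M  = [[1, 6], [2, 7]]

# E3 * E2 * E1 * M should be the identity, as in the worked solution.
print(matmul(E3, matmul(E2, matmul(E1, M))) == [[1, 0], [0, 1]])   # -> True
```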

Section D.2 Computing \(3\times 3\) Determinants

The formula for a \(3\times 3\) matrix is more complicated than the \(2\times 2\) formula.
Fortunately, there is a clever mnemonic for remembering this formula called the Rule of Sarrus or the diagonal trick.
Rule of Sarrus
Let \(M=\mat{a & b & c \\ d & e & f \\ g & h & i}\text{.}\) To compute the determinant of \(M\) using the Rule of Sarrus, apply the following four steps.
Step 1.
Augment \(M\) with copies of its first two columns.
\begin{equation*} \left[\begin{array}{ccc|cc}a & b & c & a & b \\ d & e & f & d & e \\ g & h & i & g & h\end{array}\right] \end{equation*}
Step 2.
Multiply together the entries along each of the three diagonals of the new matrix, and then add these products. These are called the diagonal products.
Figure D.2.2.
\begin{equation*} \text{sum of diagonal products}=aei+bfg+cdh. \end{equation*}
Step 3.
Multiply together the entries along each of the three anti-diagonals. These are called the anti-diagonal products; they will be subtracted.
Figure D.2.3.
\begin{equation*} \text{difference of anti-diagonal products}=-gec-hfa-idb. \end{equation*}
Step 4.
Add the diagonal products and subtract the anti-diagonal products to get the determinant.
\begin{equation*} \det(M)=aei+bfg+cdh-gec-hfa-idb. \end{equation*}
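The four steps above collapse into a short function. This is a Python sketch assuming the matrix is stored as nested lists; the name `sarrus` is our own:

```python
# A sketch of the Rule of Sarrus for a 3x3 matrix stored as nested lists.
def sarrus(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    diagonals      = a*e*i + b*f*g + c*d*h   # Step 2: diagonal products
    anti_diagonals = g*e*c + h*f*a + i*d*b   # Step 3: anti-diagonal products
    return diagonals - anti_diagonals        # Step 4

print(sarrus([[1, 4, 0], [-2, 3, 1], [0, 2, 1]]))   # -> 9
```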

Example D.2.4.

Use the diagonal trick to compute \(\det\left(\mat{1 & 4 & 0 \\ -2 & 3 & 1 \\ 0 & 2 & 1}\right)\text{.}\)

Solution.

Figure D.2.5.
\begin{equation*} \text{sum of diagonal products}={ (1)(3) (1)}+{ (4)(1) (0)}+{(0)(-2)(2)}=3+0+0. \end{equation*}
Figure D.2.6.
\begin{equation*} \text{difference of anti-diagonal products}=-{(0) (3) (0)}-{(2)(1) (1)}-{ (1) (-2)(4)}=-0-2-(-8). \end{equation*}
Thus,
\begin{equation*} \det\left(\mat{1 & 4 & 0 \\ -2 & 3 & 1 \\ 0 & 2 & 1}\right)=3\ +0\ +0\quad-0\ -2\ -(-8)=9. \end{equation*}
It may be tempting to apply the Rule of Sarrus to \(4\times 4\) and larger matrices, but don't do it! There is a formula for \(4\times 4\) determinants, but it's not given by the Rule of Sarrus. (Because your curiosity is never-ending, here's the formula. For a \(4\times 4\) matrix \(A=[a_{ij}]\text{,}\) we have \(\det(A)= a_{1 1}a_{2 2}a_{3 3}a_{4 4}- a_{1 1}a_{2 2}a_{3 4}a_{4 3}- a_{1 1}a_{2 3}a_{3 2}a_{4 4}+ a_{1 1}a_{2 3}a_{3 4}a_{4 2}+ a_{1 1}a_{2 4}a_{3 2}a_{4 3}- a_{1 1}a_{2 4}a_{3 3}a_{4 2}- a_{1 2}a_{2 1}a_{3 3}a_{4 4}+ a_{1 2}a_{2 1}a_{3 4}a_{4 3}+ a_{1 2}a_{2 3}a_{3 1}a_{4 4}- a_{1 2}a_{2 3}a_{3 4}a_{4 1}- a_{1 2}a_{2 4}a_{3 1}a_{4 3}+ a_{1 2}a_{2 4}a_{3 3}a_{4 1}+ a_{1 3}a_{2 1}a_{3 2}a_{4 4}- a_{1 3}a_{2 1}a_{3 4}a_{4 2}- a_{1 3}a_{2 2}a_{3 1}a_{4 4}+ a_{1 3}a_{2 2}a_{3 4}a_{4 1}+ a_{1 3}a_{2 4}a_{3 1}a_{4 2}- a_{1 3}a_{2 4}a_{3 2}a_{4 1}- a_{1 4}a_{2 1}a_{3 2}a_{4 3}+ a_{1 4}a_{2 1}a_{3 3}a_{4 2}+ a_{1 4}a_{2 2}a_{3 1}a_{4 3}- a_{1 4}a_{2 2}a_{3 3}a_{4 1}- a_{1 4}a_{2 3}a_{3 1}a_{4 2}+ a_{1 4}a_{2 3}a_{3 2}a_{4 1}\text{.}\) This formula involves \(24\) products. The \(5\times 5\) formula involves \(120\) products and the \(6\times 6\) formula involves \(720\) products. It only gets worse from there.)
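For the curious, the general \(n!\)-term formula (often called the Leibniz formula) can be sketched in a few lines of Python. The function name and the inversion-counting sign computation are our own additions, not anything from the text:

```python
from itertools import permutations

def leibniz_det(A):
    """Determinant as a signed sum over all n! permutations of the columns."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # The sign of a permutation is (-1)^(number of inversions).
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        prod = 1
        for row, col in enumerate(perm):
            prod *= A[row][col]
        total += sign * prod
    return total

print(leibniz_det([[1, 6], [2, 7]]))   # -> -5
```

For a \(4\times 4\) matrix, the loop visits exactly \(24\) signed products, which is why this approach scales so badly compared to row reduction.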
Like the \(2\times 2\) formula for determinants, we can derive the \(3\times 3\) formula directly from the definition. However, it takes quite a bit more work. (If you're interested in proving the \(3\times 3\) determinant formula, try using the elementary matrix approach rather than computing the volume of a parallelepiped directly.)

Section D.3 Determinant Formulas and Orientation

Determinants and orientation are connected, and our determinant formulas (if we accept them as true) give us an alternative way to determine the orientation of a basis.
Let \(\mathcal{B}=\Set{\vec b_1,\vec b_2}\) be an ordered basis for \(\R^{2}\text{,}\) and let \(M=[\vec b_{1}|\vec b_{2}]\) be the matrix whose columns are \(\vec b_{1}\) and \(\vec b_{2}\text{.}\) Since \(\mathcal{B}\) is linearly independent, we know that \(\det(M)\neq 0\text{.}\) Further, applying the definition of the determinant, we know
\begin{equation*} \det(M)>0 \end{equation*}
means that \(\mathcal{B}\) is a right-handed basis and \(\det(M)<0\) means \(\mathcal{B}\) is a left-handed basis.
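This sign test is easy to automate. A Python sketch follows; the helper name `handedness` and the return strings are our own choices:

```python
# Classify an ordered pair of vectors in R^2 by the sign of det([b1 | b2]).
def handedness(b1, b2):
    det = b1[0] * b2[1] - b2[0] * b1[1]   # 2x2 formula ad - bc
    if det > 0:
        return "right-handed"
    if det < 0:
        return "left-handed"
    return "not a basis"

print(handedness([1, 2], [-3, 2]))   # -> right-handed (det = 8)
```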

Example D.3.1.

Use a determinant to decide whether the ordered basis \(\Set{\mat{1\\2},\mat{-3\\2}}\) is left-handed or right-handed.

Solution.

Let \(A = \mat{1 & -3 \\ 2 & 2}\) be the matrix whose columns are the elements of the given ordered basis.
Using the formula for \(2 \times 2\) determinants gives us
\begin{equation*} \det(A) = (1)(2) - (2)(-3) = 8 > 0 \end{equation*}
and so we conclude \(\Set{\mat{1\\2},\mat{-3\\2}}\) is a right-handed basis.
Recall the ordered basis \(\mathcal{Q}=\Set{\xhat, \vec u_{\theta}}\) where \(\vec u_{\theta} = \mat{\cos\theta\\\sin\theta}\) is the unit vector which forms an angle of \(\theta\) with the positive \(x\)-axis.
Figure D.3.2.
Visually, we can see that \(\mathcal{Q}\) should be right-handed when \(\theta\in(0,\pi)\text{,}\) left-handed when \(\theta\in(\pi,2\pi)\text{,}\) and not a basis when \(\theta=0\) or \(\theta=\pi\text{.}\)
But what does the determinant say?
Computing the determinant of the matrix \(Q=[\xhat|\,\vec u_{\theta}]\) directly using the \(2\times 2\) determinant formula, we get
\begin{equation*} \det(Q) = \det([\xhat|\,\vec u_{\theta}]) = \det\left(\mat{1&\cos\theta\\0&\sin\theta}\right) = \sin\theta. \end{equation*}
Notice that \(\det(Q)=\sin\theta>0\) when \(\theta\in(0,\pi)\text{,}\) \(\det(Q)=\sin\theta < 0\) when \(\theta\in(\pi,2\pi)\) and \(\det(Q)=\sin\theta=0\) when \(\theta\in\Set{0,\pi}\text{.}\)
The determinant supports our intuition.
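As a quick numerical spot-check of \(\det(Q)=\sin\theta\text{,}\) here is a Python sketch using the standard `math` module; the sample angles are our own choices:

```python
import math

# det([xhat | u_theta]) computed with the 2x2 formula should equal sin(theta).
def det_Q(theta):
    Q = [[1, math.cos(theta)], [0, math.sin(theta)]]
    return Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]

print(det_Q(math.pi / 2) > 0)      # theta in (0, pi): right-handed -> True
print(det_Q(3 * math.pi / 2) < 0)  # theta in (pi, 2pi): left-handed -> True
```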

Exercises D.4 Exercises

1.

For each matrix given below, calculate its determinant using both row reduction/elementary matrices and the \(2\times 2\) determinant formula.
  1. \(\displaystyle \mat{1 & 0 \\ 2 & 4}\)
  2. \(\displaystyle \mat{1 & 5 \\ 1 & 5}\)
  3. \(\displaystyle \mat{1 & 0 \\ 0 & 1}\)
  4. \(\displaystyle \mat{1 & 1 \\ 0 & 0}\)
Solution.
  1. \(\displaystyle 4\)
  2. \(\displaystyle 0\)
  3. \(\displaystyle 1\)
  4. \(\displaystyle 0\)

2.

For each matrix given below, calculate its determinant using both row reduction/elementary matrices and the \(3\times 3\) determinant formula.
  1. \(\displaystyle \mat{1 & 0 & 0 \\ 1 & 0 & 2 \\ 1 & 6 & 5}\)
  2. \(\displaystyle \mat{1 & -4 & 1 \\ 2 & 6 & 5 \\ 2 & 2 & 3}\)
  3. \(\displaystyle \mat{-1 & 0 & -8 \\ 1 & -3 & -8 \\ 1 & -2 & -1}\)
  4. \(\displaystyle \mat{2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3}\)
  5. \(\displaystyle \mat{0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0}\)
Solution.
  1. \(\displaystyle -12\)
  2. \(\displaystyle -16\)
  3. \(\displaystyle 5\)
  4. \(\displaystyle 12\)
  5. \(\displaystyle 0\)

3.

For each ordered set given below, use a determinant to decide whether it is a right-handed basis, a left-handed basis, or not a basis.
  1. \(\displaystyle \Set{\mat{1 \\ -3},\mat{1 \\ 2}}\)
  2. \(\displaystyle \Set{\mat{1 \\ -3},\mat{-2 \\ 6}}\)
  3. \(\displaystyle \Set{\mat{1 \\ 0 \\ 3},\mat{1 \\ 2 \\ 5},\mat{1 \\ -1 \\ 1}}\)
  4. \(\displaystyle \Set{\mat{1 \\ 4 \\ 9},\mat{1 \\ 2 \\ 3},\mat{1 \\ 1 \\ 1}}\)
  5. \(\displaystyle \Set{\mat{4 \\ 2 \\ 4},\mat{4 \\ 2 \\ 0},\mat{2 \\ 1 \\ 6}}\)
Solution.
  1. Since \(\det\left(\mat{1 & 1 \\ -3 & 2}\right)=5>0\text{,}\) the ordered set \(\Set{\mat{1 \\ -3},\mat{1 \\ 2}}\) is a right-handed basis.
  2. Since \(\det\left(\mat{1 & -2 \\ -3 & 6}\right)=0\text{,}\) the set \(\Set{\mat{1 \\ -3},\mat{-2 \\ 6}}\) is not linearly independent and so is not a basis.
  3. Since \(\det\left(\mat{1 & 1 & 1 \\ 0 & 2 & -1 \\ 3 & 5 & 1}\right)=-2< 0\text{,}\) the ordered set \(\Set{\mat{1 \\ 0 \\ 3},\mat{1 \\ 2 \\ 5},\mat{1 \\ -1 \\ 1}}\) is a left-handed basis.
  4. Since \(\det\left(\mat{1 & 1 & 1 \\ 4 & 2 & 1 \\ 9 & 3 & 1}\right)=-2<0\text{,}\) the ordered set \(\Set{\mat{1 \\ 4 \\ 9},\mat{1 \\ 2 \\ 3},\mat{1 \\ 1 \\ 1}}\) is a left-handed basis.
  5. Since \(\det\left(\mat{4 & 4 & 2 \\ 2 & 2 & 1 \\ 4 & 0 & 6}\right)=0\text{,}\) the set \(\Set{\mat{4 \\ 2 \\ 4},\mat{4 \\ 2 \\ 0},\mat{2 \\ 1 \\ 6}}\) is not linearly independent and so is not a basis.

4.

Find all values of \(a,b\in \R\) so that the ordered set \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is (a) a right-handed basis, (b) a left-handed basis, (c) not a basis.
Solution.
Before answering, note that \(\det\left(\mat{a^2 & ab \\ ab & b}\right)=a^{2}b-a^{2}b^{2}=a^{2}(b-b^{2})\text{.}\)
  1. If \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is a right-handed basis, then \(\det\left(\mat{a^2 & ab \\ ab & b}\right)=a^{2}( b-b^{2})>0\text{.}\) This implies that \(a^{2}\) and \(b-b^{2}\) are both nonzero and have the same sign. Since \(a^{2}\ge 0\text{,}\) we must have \(a^{2}>0\) and \(b-b^{2}>0\text{.}\) The roots of \(b-b^{2}\) are \(b=0\) and \(b=1\text{,}\) so \(b-b^{2}>0\) implies \(0<b<1\text{.}\) Therefore, \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is a right-handed basis when \(a\ne 0\) and \(0<b<1\text{.}\)
  2. If \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is a left-handed basis, then \(\det\left(\mat{a^2 & ab \\ ab & b}\right)=a^{2}(b-b^{2})<0\text{.}\) This implies that \(a^{2}\) and \(b-b^{2}\) are both nonzero and have different signs. Since \(a^{2}\ge 0\text{,}\) this implies that \(a^{2}>0\) and \(b-b^{2}<0\text{.}\) The roots of \(b-b^{2}\) are \(b=0\) and \(b=1\text{,}\) so \(b-b^{2}<0\) implies \(b<0\) or \(b>1\text{.}\) Therefore, \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is a left-handed basis when \(a\ne 0\) and either \(b<0\) or \(b>1\text{.}\)
  3. If \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is not a basis, then \(\det\left(\mat{a^2 & ab \\ ab & b}\right)=a^{2}( b-b^{2})=0\text{.}\) This implies that \(a^{2}=0\) or \(b-b^{2}=0\text{.}\) Therefore, \(\Set{\mat{a^2 \\ ab},\mat{ab \\ b}}\) is not a basis if one of the following conditions holds: \(a=0\text{,}\) \(b=1\text{,}\) or \(b=0\text{.}\)

5.

Let \(M=\mat{a & b \\ c & d}\text{.}\) The adjugate matrix (sometimes called the classical adjoint) of \(M\text{,}\) notated \(M^{\text{adj}}\text{,}\) is the matrix given by \(M^{\text{adj}}=\mat{d & -b \\ -c & a}\text{.}\) Prove that if \(M\) is invertible, then \(\displaystyle M^{-1}=\frac{M^{\text{adj}}}{\det(M)}\text{.}\)
Solution.
Since
\begin{equation*} \frac{M^{\text{adj}}}{\det(M)}M=\frac{1}{ad-bc}\mat{da-bc & db-bd \\ ac-ca & ad-cb} \end{equation*}
\begin{equation*} =\mat{1 & 0 \\ 0 & 1}=I_{2\times 2} \end{equation*}
and
\begin{equation*} M\frac{M^{\text{adj}}}{\det(M)}=\frac{1}{ad-bc}\mat{ad-bc & ba-ab \\ cd-dc & da-cb} \end{equation*}
\begin{equation*} =\mat{1 & 0 \\ 0 & 1}=I_{2\times 2}, \end{equation*}
we conclude that
\begin{equation*} M^{-1}=\frac{M^{\text{adj}}}{\det(M)}. \end{equation*}
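The adjugate-inverse formula is also easy to state in code. Here is a sketch with our own function name, using exact `Fraction` arithmetic so the entries stay rational:

```python
from fractions import Fraction

# Sketch of the 2x2 adjugate-inverse formula M^{-1} = M^adj / det(M).
def inverse_2x2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

M_inv = inverse_2x2([[1, 6], [2, 7]])   # det(M) = -5, so entries are fifths
print(M_inv)
```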

6.

For each statement below, determine whether it is true or false. Justify your answer.
  1. A \(2\times 2\) matrix \(M\) has determinant \(1\) if and only if \(M= I_{2\times 2}\text{.}\)
  2. A \(3\times 3\) matrix \(M\) has determinant \(1\) if and only if \(\VolChange (\mathcal{T}_{M})\) is equal to 1, where \(\mathcal{T}_{M}\) is the transformation given by \(\mathcal{T}_{M}(\vec x)=M\vec x\text{.}\)
  3. For vectors \(\vec a,\vec b\in \R^{2}\text{,}\) it is always the case that \(\det([\vec a|\vec b])=-\det([\vec b|\vec a])\text{.}\)
  4. For a \(2\times 2\) or \(3\times 3\) matrix \(M\text{,}\) multiplying a single entry of \(M\) by \(4\) will change \(\det(M)\) by a factor of \(4\text{.}\)
  5. For a square matrix \(A\text{,}\) it is always the case that \(\det(A^{T}A)\geq 0\text{.}\)
Solution.
  1. False. A counterexample is \(M=\mat{2 & 0 \\ 0 & \frac{1}{2}}\text{.}\)
  2. False. A counterexample is \(M=\mat{-1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1}\text{.}\) \(M\) does not change volume, but it does reverse orientation.
  3. True. Note that \([\vec a|\vec b]\) is just \([\vec b|\vec a]\) with its columns swapped. The oriented volume of the parallelogram generated by \(\vec a\) and \(\vec b\) is equal to the negative of the oriented volume of the parallelogram generated by \((\vec b,\vec a)\text{.}\) Using Volume Theorem I, we have \(\det([\vec a|\vec b])=-\det([\vec b|\vec a])\text{.}\)
  4. False. A counterexample is \(M=\mat{1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1}\text{.}\) We multiply the \((1,2)\)-entry by \(4\) to get another matrix \(M'\text{.}\) Note that \(M'\) is still \(\mat{1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 }\text{,}\) and \(\det(M)=1=\det(M')\ne 4\det(M)\text{.}\)
  5. True. Note that \(\det(A^{T}A)=\det(A^{T})\det(A)=\det(A)\det(A)=\det(A)^{2}\ge 0\text{.}\)
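Statement 5 can also be spot-checked numerically. The sketch below (our own helper names and sample matrix) computes \(\det(A^{T}A)\) and \(\det(A)^{2}\) for a \(2\times 2\) example and confirms they agree:

```python
# Spot-check of statement 5: det(A^T A) = det(A)^2 >= 0, for a 2x2 example.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def transpose_times(A):
    """Compute A^T A for a 2x2 matrix A stored as nested lists."""
    At = [[A[j][i] for j in range(2)] for i in range(2)]
    return [[sum(At[i][k] * A[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 6], [2, 7]]   # our own sample matrix; det(A) = -5
print(det2(transpose_times(A)), det2(A) ** 2)   # -> 25 25
```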