
Linear Algebra

Appendix B Systems of Linear Equations II

In this appendix you will learn
  • How to put a matrix into reduced row echelon form.
  • How to use free variables to write down the complete solution to a system of linear equations.
  • That a system of linear equations has 0, 1, or infinitely many solutions.
  • How solution sets to systems of linear equations relate to intersecting hyperplanes.
Consider the system
\begin{equation} \left\{ \begin{array}{rcrl} x&-&2y&=0\\2x&-&4y&=0 \end{array}\right..\tag{B.0.7} \end{equation}
Notice that every solution to the first equation is also a solution to the second equation. Applying row reduction, we get the system
\begin{equation*} \left\{\begin{array}{rcrl}x&-&2y&=0\\0x&+&0y&=0\end{array}\right., \end{equation*}
but that second equation, \(0x+0y=0\text{,}\) is funny. It is always true, no matter the choice of \(x\) and \(y\text{.}\) It adds no new information! In retrospect, it might be obvious that both equations from System (B.0.7) contain the same information, making one equation redundant.
System (B.0.7) is an example of an underdetermined system of equations, meaning there is not enough information to uniquely determine the value of each variable. Its solution set is a line, which we can find by graphing.
Figure B.0.29.
From this picture, we could describe the complete solution to System (B.0.7) in vector form by
\begin{equation*} \vec x = t\mat{2\\1}. \end{equation*}
But, what about a more complicated system? The system
\begin{equation*} \left\{\begin{array}{rcrcrl}x&+&y&+&z&=1\\&&y&-&z&=2\end{array}\right. \end{equation*}
is also underdetermined. It has a complete solution described by
\begin{equation*} \mat{x\\y\\z}=t\mat{-2\\1\\1}+ \mat{-1\\2\\0}, \end{equation*}
but this is much harder to find by graphing. Fortunately, we won’t have to graph. Row reduction, combined with the notion of free variables, will provide a solution.

Section B.1 Reduced Row Echelon Form

Before we tackle complete solutions for underdetermined systems, we need to talk about reduced row echelon form, which is abbreviated rref. (Reduced row echelon form is alternatively called row reduced echelon form; whether you say “reduced row” or “row reduced” makes no difference to the math!) The reduced row echelon form of a matrix is the simplest (in terms of reading off solutions) form a matrix can be turned into via elementary row operations.

Definition B.1.1. Reduced Row Echelon Form (RREF).

A matrix \(X\) is in reduced row echelon form if the following conditions hold:
  • The first non-zero entry in every row is a \(1\text{;}\) these entries are called pivots or leading ones.
  • Above and below each leading one are zeros.
  • The leading ones form an echelon (staircase) pattern. That is, if row \(i\) has a leading one, every leading one appearing in row \(j>i\) also appears to the right of the leading one in row \(i\text{.}\)
  • All rows of zeros occur at the bottom of \(X\text{.}\)
Columns of a reduced row echelon form matrix that contain pivots are called pivot columns. (If a matrix is augmented, we usually do not refer to the augmented column as a pivot column, even if it contains a pivot.)

Example B.1.2.

Which of the following matrices are in reduced row echelon form? For those that are, identify which columns are pivot columns. For those that are not, which condition(s) fail?
\begin{equation*} A= \left[\begin{array}{cccc}1 & 0 & 0 & 2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 7\end{array}\right]\qquad B= \left[\begin{array}{cccc}1 & 0 & 0 & 8\\ 0 & 1 & 3 & 7\\ 0 & 2 & 1 & 4\\\end{array}\right]\qquad C= \left[\begin{array}{cccc}1 & 0 & 0 & 2\\ 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 8\end{array}\right]\qquad D= \left[\begin{array}{cccc}0 & 1 & 3 & 6\\ 1 & 0 & 0 & 9\\ 0 & 0 & 1 & 4\\\end{array}\right] \end{equation*}

Solution.

\(A\) is not in reduced row echelon form because the second row of \(A\) is a row of zeros but does not occur at the bottom.
\begin{equation*} A= \left[\begin{array}{cccc}1 & 0 & 0 & 2\\ { 0} & { 0} & { 0} & { 0}\\ 0 & 0 & 1 & 7\end{array}\right] \end{equation*}
\(B\) is not in reduced row echelon form for two reasons: (i) the first non-zero entry in the third row is not a \(1\text{,}\) and (ii) the entry below the leading one in the second row is not zero.
\begin{equation*} B= \left[\begin{array}{cccc}1 & 0 & 0 & 8\\ 0 & 1 & 3 & 7\\ 0 & { 2} & 1 & 4\\\end{array}\right] \end{equation*}
\(C\) is in reduced row echelon form, and the first, second, and third columns are the pivot columns of \(C\text{.}\)
\begin{equation*} C= \left[\begin{array}{cccc}{ 1} & 0 & 0 & 2\\ 0 & { 1} & 0 & 1\\ 0 & 0 & { 1} & 8\end{array}\right] \end{equation*}
\(D\) is not in reduced row echelon form for two reasons: (i) the entries above the leading one in the third row are not all zeros, and (ii) the leading one in the second row appears to the left of the leading one in the first row.
\begin{equation*} D= \left[\begin{array}{cccc}0 & { 1} & { 3} & 6\\ { 1} & 0 & 0 & 9\\ 0 & 0 & 1 & 4\\\end{array}\right] \end{equation*}
We’ve encountered the reduced row echelon form of a matrix already in the examples of Appendix A. Recall the system
\begin{equation*} \left\{\begin{array}{rcrcrl}t&+&2s&-&2r&=-15\\2t&+&s&-&5r&=-21\\t&-&4s&+&r&=18 \end{array}\right.\qquad\text{with augmented matrix}\qquad X=\left[\begin{array}{rrr|r}1&2&-2 & -15\\ 2&1&-5&-21\\ 1&-4&1&18\end{array}\right]. \end{equation*}
The matrix \(X\) could be row reduced to
\begin{equation*} X'=\left[\begin{array}{rrr|r}1&2&-2 & -15\\ 0&-3&-1&9\\ 0&0&5&15\end{array}\right], \end{equation*}
which was suitable for solving the system. However, \(X'\) is not in reduced row echelon form (the leading non-zero entries must all be ones). We can further row reduce \(X'\) to
\begin{equation*} X''=\left[\begin{array}{rrr|r}1&0&0 & -1\\ 0&1&0&-4\\ 0&0&1&3\end{array}\right]. \end{equation*}
\(X''\) is the reduced row echelon form of \(X\text{,}\) and reading off the solution to the original system from \(X''\) is as simple as can be!
Every matrix, \(M\text{,}\) has a unique reduced row echelon form, written \(\Rref(M)\text{,}\) which can be obtained from \(M\) by applying elementary row operations. There are many ways to compute the reduced row echelon form of a matrix, but the following algorithm always works.

Definition B.1.3. Row Reduction Algorithm.

Let \(M\) be a matrix.
  1. If \(M\) takes the form \(M=[\vec 0|M']\) (that is, its first column is all zeros), apply the algorithm to \(M'\text{.}\)
  2. If not, perform a row-swap (if needed) so the upper-left entry of \(M\) is non-zero.
  3. Let \(\alpha\) be the upper-left entry of \(M\text{.}\) Perform the row operation \(\text{row}_{1}\mapsto \tfrac{1}{\alpha}\text{row}_{1}\text{.}\) The upper-left entry of \(M\) is now \(1\) and is called a pivot.
  4. Use row operations of the form \(\text{row}_{i}\mapsto \text{row}_{i}+\beta\,\text{row}_{1}\) to zero every entry below the pivot.
  5. Now, \(M\) has the form
    \begin{equation*} M=\left[\begin{array}{c|c}1 & ??\\ \hline \bigmathstrut \vec 0 & M'\end{array}\right], \end{equation*}
    where \(M'\) is a submatrix of \(M\text{.}\) Apply the algorithm to \(M'\text{.}\)
The resulting matrix is in pre-reduced row echelon form. To put the matrix in reduced row echelon form, additionally apply step 6.
  6. Use row operations of the form \(\text{row}_{i}\mapsto \text{row}_{i}+\beta\,\text{row}_{j}\) to zero every entry above each pivot.
Though there might be more efficient ways, and you might encounter ugly fractions, the row reduction algorithm will always convert a matrix to its reduced row echelon form.
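The algorithm above translates directly into code. The following sketch in Python (the function and variable names are our own, not from the text) uses exact rational arithmetic via `fractions.Fraction`, so the ugly fractions come out exact rather than rounded:

```python
from fractions import Fraction

def rref(rows):
    """Return the reduced row echelon form of a matrix.

    A direct (unoptimized) transcription of the row reduction algorithm:
    skip zero columns, swap up a non-zero entry, scale it to 1, zero
    below it, move to the next submatrix, then finish with step 6
    (zeroing above each pivot).
    """
    M = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    pivots = []          # (row, col) of each leading one
    r = 0                # top row of the current working submatrix
    for c in range(ncols):
        if r == nrows:
            break
        # Steps 1-2: find a non-zero entry in column c at or below row r.
        pivot_row = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot_row is None:
            continue     # column of zeros: pass to the next column
        M[r], M[pivot_row] = M[pivot_row], M[r]        # row swap
        # Step 3: scale the pivot row so its leading entry is 1.
        alpha = M[r][c]
        M[r] = [x / alpha for x in M[r]]
        # Step 4: zero every entry below the pivot.
        for i in range(r + 1, nrows):
            beta = M[i][c]
            M[i] = [a - beta * b for a, b in zip(M[i], M[r])]
        pivots.append((r, c))
        r += 1
    # Step 6: zero every entry above each pivot.
    for (pr, pc) in pivots:
        for i in range(pr):
            beta = M[i][pc]
            M[i] = [a - beta * b for a, b in zip(M[i], M[pr])]
    return M
```

Running `rref` on the matrices row reduced by hand in this appendix reproduces the same reduced row echelon forms.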

Example B.1.4.

Apply the row-reduction algorithm to the matrix
\begin{equation*} M=\mat{0&0&0&-2&-2\\0&1&2&3&2\\0&2&4&5&3}. \end{equation*}

Solution.

First notice that \(M\) starts with a column of zeros, so we will focus on the right side of \(M\text{.}\) We will draw a line to separate it.
\begin{equation*} M=\left[\begin{array}{r|rrrr}0&0&0&-2&-2\\0&1&2&3&2\\0&2&4&5&3\end{array}\right] \end{equation*}
Next, we perform a row swap to bring a non-zero entry to the upper left.
\begin{equation*} \left[\begin{array}{r|rrrr}0&0&0&-2&-2\\0&1&2&3&2\\0&2&4&5&3\end{array}\right] \xrightarrow{\text{row}_1\leftrightarrow\text{row}_2}\left[\begin{array}{r|rrrr}0&1&2&3&2\\0&0&0&-2&-2\\0&2&4&5&3\end{array}\right] \end{equation*}
The upper-left entry is already a \(1\text{,}\) so we can use it to zero all entries below.
\begin{equation*} \left[\begin{array}{r|rrrr}0&1&2&3&2\\0&0&0&-2&-2\\0&2&4&5&3\end{array}\right] \xrightarrow{\text{row}_3\mapsto\text{row}_3-2\text{row}_1}\left[\begin{array}{r|rrrr}0&1&2&3&2\\0&0&0&-2&-2\\0&0&0&-1&-1\end{array}\right] \end{equation*}
Now we work on the submatrix.
\begin{equation*} \left[\begin{array}{rr|rrr}0&1&2&3&2\\ \hline \bigmathstrut 0&0&0&-2&-2\\0&0&0&-1&-1\end{array}\right] \end{equation*}
Again, the submatrix has a first column of zeros, so we pass to a sub-submatrix.
\begin{equation*} \left[\begin{array}{rrr|rr}0&1&2&3&2\\ \hline \bigmathstrut 0&0&0&-2&-2\\0&0&0&-1&-1\end{array}\right] \end{equation*}
Now we turn the upper left entry into a \(1\) and use that pivot to zero all entries below.
\begin{equation*} \left[\begin{array}{rrr|rr}0&1&2&3&2\\ \hline \bigmathstrut 0&0&0&-2&-2\\0&0&0&-1&-1\end{array}\right] \xrightarrow{\text{row}_2\mapsto\tfrac{-1}{2}\text{row}_2}\left[\begin{array}{rrr|rr}0&1&2&3&2\\ \hline \bigmathstrut 0&0&0&1&1\\0&0&0&-1&-1\end{array}\right] \xrightarrow{\text{row}_3\mapsto\text{row}_3+\text{row}_2}\left[\begin{array}{rrr|rr}0&1&2&3&2\\ \hline \bigmathstrut 0&0&0&1&1\\0&0&0&0&0\end{array}\right] \end{equation*}
The matrix is now in pre-reduced row echelon form. To put it in reduced row echelon form, we zero above each pivot.
\begin{equation*} \mat{ 0&1&2&3&2\\ 0&0&0&1&1\\0&0&0&0&0 }\xrightarrow{\text{row}_1\mapsto\text{row}_1-3\text{row}_2}\mat{ 0&1&2&0&-1\\ 0&0&0&1&1\\0&0&0&0&0 } \end{equation*}
All matrices, whether augmented or not, have a reduced row echelon form. Correctly applying the row reduction algorithm takes practice, but being able to row reduce a matrix is the analogue of “knowing your multiplication tables” for linear algebra.

Section B.2 Free Variables & Complete Solutions

By now we are very familiar with the system
\begin{equation*} \left\{\begin{array}{rcrcrl}x&+&2y&-&2z&=-15\\2x&+&y&-&5z&=-21\\x&-&4y&+&z&=18 \end{array}\right., \end{equation*}
which has a unique solution \((x,y,z)=(-1,-4,3)\text{.}\) We can compute this by row reducing the associated augmented matrix:
\begin{equation*} \Rref\left(\left[\begin{array}{rrr|r}1&2&-2 & -15\\ 2&1&-5&-21\\ 1&-4&1&18\end{array}\right]\right) \qquad=\qquad \left[\begin{array}{rrr|r}1&0&0 & -1\\ 0&1&0&-4\\ 0&0&1&3\end{array}\right], \end{equation*}
which corresponds to the system
\begin{equation*} \left\{\begin{array}{rcrcrl}x\quad &&&&&=-1\\&&y\quad &&&=-4\\&&&&z&=3 \end{array}\right., \end{equation*}
from which the solution is immediate. But what happens when there isn’t a unique solution?
Consider the system
\begin{equation} \left\{ \begin{array}{rcrl} x&+&3y&=2\\2x&+&6y&=4 \end{array}\right..\tag{B.2.1} \end{equation}
When using an augmented matrix to solve this system, we run into an issue.
\begin{equation*} \left[\begin{array}{rr|r}1&3 & 2\\ 2&6&4\\\end{array}\right] \qquad\Rrefto\qquad \left[\begin{array}{rr|r}1&3&2\\ 0&0&0\\\end{array}\right] \end{equation*}
From the reduced row echelon form, we’re left with the equation \(x+3y=2\text{,}\) which isn’t exactly a solution. Effectively, the original system had only one equation’s worth of information, so we cannot solve for both \(x\) and \(y\) based on the original system. To get ourselves out of this pickle, we will use a notational trick: introduce the arbitrary equation \(y=t\text{.}\) (This equation is called arbitrary because it introduces no new information about the original variables; the restrictions on \(x\) and \(y\) aren’t changed by introducing the fact \(y=t\text{.}\)) Now, because we’ve already done row-reduction, we see
\begin{equation*} \left\{\begin{array}{rcrl}x&+&3y&=2\\2x&+&6y&=4\\&&y&=t\end{array}\right.\qquad\Rrefto\qquad \left\{\begin{array}{rcrl}x&+&3y&=2\\&&y&=t\end{array}\right.. \end{equation*}
Here we’ve omitted the equation \(0=0\) since it adds no information. Now, we can solve for \(x\) and \(y\) in terms of \(t\text{.}\)
\begin{equation*} \vec x=\mat{x\\y}= \matc{2-3t\\t}=t\mat{-3\\1}+\mat{2\\0}. \end{equation*}
Notice that \(t\) here stands for an arbitrary real number. Any choice of \(t\) produces a valid solution to the original system (go ahead, pick some values for \(t\) and see what happens). We call \(t\) a parameter and \(y\) a free variable. (We call \(y\) free because we may pick it to be anything we want and still produce a solution to the system.) Notice further that
\begin{equation*} \vec x=t\mat{-3\\1}+\mat{2\\0} \end{equation*}
is the vector form of the line \(x+3y=2\text{.}\)
Though you can usually make many choices about which variables are free variables, one choice always works: pick all variables corresponding to non-pivot columns to be free variables. For this reason, we refer to non-pivot non-augmented columns of a row-reduced matrix as free variable columns.
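The recipe "one parameter per free variable column" can itself be mechanized. Below is a sketch in Python (the helper name and return convention are our own invention) that reads the complete solution off an augmented matrix that is already in reduced row echelon form, producing a particular solution plus one direction vector per free variable:

```python
from fractions import Fraction

def complete_solution(R):
    """Read the complete solution off an augmented matrix in RREF.

    Returns (p, dirs): p is a particular solution (all free variables
    set to 0) and dirs holds one direction vector per free variable,
    so every solution is p + t1*dirs[0] + t2*dirs[1] + ...
    Returns None when the system is inconsistent.
    """
    R = [[Fraction(x) for x in row] for row in R]
    n = len(R[0]) - 1                        # number of variables
    pivots = {}                              # pivot column -> its row
    for i, row in enumerate(R):
        lead = next((j for j, x in enumerate(row) if x != 0), None)
        if lead is None:
            continue                         # row of zeros
        if lead == n:                        # pivot in augmented column
            return None                      # the row reads 0 = 1
        pivots[lead] = i
    free = [j for j in range(n) if j not in pivots]
    # Particular solution: set every free variable to 0.
    p = [R[pivots[j]][n] if j in pivots else Fraction(0) for j in range(n)]
    dirs = []
    for f in free:                           # one direction per free variable
        d = [Fraction(0)] * n
        d[f] = Fraction(1)                   # the arbitrary equation x_f = t
        for j, i in pivots.items():
            d[j] = -R[i][f]                  # solve each pivot variable for t
        dirs.append(d)
    return p, dirs
```

For instance, on the row-reduced matrix of the system \(x+3y=2\) above, this returns the particular solution \((2,0)\) and the single direction \((-3,1)\text{,}\) matching the vector form found by hand.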

Example B.2.1.

Use row reduction to find the complete solution to \(\left\{\begin{array}{rcrcrl}x&+&y&+&z&=1\\&&y&-&z&=2\end{array}\right.\)

Solution.

The corresponding augmented matrix for the system is
\begin{equation*} A=\left[\begin{array}{ccc|c}1 & 1 & 1 & 1\\ 0 & 1 & -1 & 2\end{array}\right]. \end{equation*}
\(A\) is already in pre-reduced row echelon form, so we only need to zero above each pivot.
\begin{equation*} \left[\begin{array}{ccc|c}1 & 1 & 1 & 1\\ 0 & 1 & -1 & 2\end{array}\right] \xrightarrow{\text{row}_1\mapsto\text{row}_1-\text{row}_2}\left[\begin{array}{ccc|c}1 & 0 & 2 & -1\\ 0 & 1 & -1 & 2\end{array}\right] =\Rref(A). \end{equation*}
The third column of \(\Rref(A)\) is a free variable column, so we introduce the arbitrary equation \(z=t\) and solve the system in terms of \(t\text{:}\)
\begin{equation*} \left\{\begin{array}{rcrcrl}x&&&+&2z&=-1\\&&y&-&z&=2\\&&&&z&=t\end{array}\right.. \end{equation*}
Written in vector form, the complete solution is
\begin{equation*} \mat{x\\y\\z}= \matc{-1-2t\\2+t\\t}= t\mat{-2\\1\\1}+\mat{-1\\2\\0}, \end{equation*}
and written as a set, the solution set is
\begin{equation*} \Set{\vec x\in\R^3 \given \vec x= t\mat{-2\\1\\1}+\mat{-1\\2\\0}\text{ for some } t\in\R}. \end{equation*}
Consider the (somewhat strange) system of one equation
\begin{equation*} \left\{\begin{array}{rcrcrl}0x&+&0y&+&z&=1\end{array}\right.. \end{equation*}
The solution set for this system is the \(xy\)-plane in \(\R^{3}\) shifted up by one unit. We can use row reduction and free variables to see this.
The system corresponds to the augmented matrix
\begin{equation*} \left[\begin{array}{ccc|c}0&0&1&1\end{array}\right] \end{equation*}
which is already in reduced row echelon form. Its third column is the only pivot column, making columns \(1\) and \(2\) free variable columns (remember, we don’t count augmented columns as free variable columns). Thus, we introduce two arbitrary equations, \(x=t\) and \(y=s\text{,}\) and solve the new system
\begin{equation*} \left\{\begin{array}{rcrcrl}0x&+&0y&+&z&=1\\x&&&&&=t\\&&y&&&=s\end{array}\right. \end{equation*}
for \((x,y,z)\text{,}\) which gives
\begin{equation*} \mat{x\\y\\z}= \mat{t\\s\\1}= t\mat{1\\0\\0}+s\mat{0\\1\\0}+\mat{0\\0\\1}. \end{equation*}
Using row reduction and free variables, we can find complete solutions to very complicated systems. The process is straightforward enough that even a computer can do it! (Computers usually don’t follow the algorithm outlined here because they have to deal with rounding error. But, there is a modification of the row reduction algorithm called row reduction with partial pivoting which solves some issues with rounding error.)
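The partial pivoting idea is easy to sketch: instead of taking the first available non-zero entry as the pivot, take the entry of largest absolute value in the column, which keeps rounding error from being amplified by division by tiny numbers. A minimal floating-point illustration in Python (our own names; a teaching sketch, not a library-grade routine):

```python
def rref_partial_pivot(rows):
    """Row reduction with partial pivoting, in floating point.

    At each step the pivot is the entry of largest absolute value in
    the current column, limiting the growth of rounding error. Zeroing
    above and below in one pass yields the RREF directly.
    """
    M = [[float(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    r = 0                                    # next pivot row
    for c in range(ncols):
        if r == nrows:
            break
        # Partial pivoting: swap up the row with the largest |entry|.
        p = max(range(r, nrows), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) < 1e-12:             # effectively a column of zeros
            continue
        M[r], M[p] = M[p], M[r]
        alpha = M[r][c]
        M[r] = [x / alpha for x in M[r]]     # scale the pivot row to get a 1
        for i in range(nrows):               # zero the rest of the column
            if i != r:
                beta = M[i][c]
                M[i] = [a - beta * b for a, b in zip(M[i], M[r])]
        r += 1
    return M
```

Because the arithmetic is inexact, answers from this routine should be compared up to a small tolerance rather than for exact equality.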

Example B.2.2.

Consider the system of equations in the variables \(x\text{,}\) \(y\text{,}\) \(z\text{,}\) and \(w\text{:}\)
\begin{equation*} \left\{\begin{array}{rcrcrcrl}&&&&&-&2w&=-2\\&&y&+&2z&+&3w&=2\\&&2y&+&4z&+&5w&=3\end{array}\right. \end{equation*}
Find the solution set for this system.

Solution.

The augmented matrix corresponding to this system is
\begin{equation*} M=\left[\begin{array}{cccc|c}0&0&0&-2&-2\\0&1&2&3&2\\0&2&4&5&3\end{array}\right], \end{equation*}
which we’ve row reduced in a previous example:
\begin{equation*} \Rref(M) = \left[\begin{array}{cccc|c}0&1&2&0&-1\\ 0&0&0&1&1\\ 0&0&0&0&0\end{array}\right]. \end{equation*}
Here, columns \(1\) and \(3\) are free variable columns, so we introduce the equations \(x=t\) and \(z=s\text{.}\) Now, solving the system
\begin{equation*} \left\{\begin{array}{rcrcrcrl}&&y&+&2z&&&=-1\\&&&&&&w&=1\\x&&&&&&&=t\\&&&&z&&&=s\end{array}\right. \end{equation*}
for \((x,y,z,w)\text{,}\) gives
\begin{equation*} \mat{x\\y\\z\\w}= \matc{t\\-1-2s\\s\\1}= t\mat{1\\0\\0\\0}+s\mat{0\\-2\\1\\0}+\mat{0\\-1\\0\\1}. \end{equation*}
Thus, the solution set to the system is
\begin{equation*} \Set{\vec x\in\R^4\given \vec x = t\mat{1\\0\\0\\0}+s\mat{0\\-2\\1\\0}+\mat{0\\-1\\0\\1} \text{ for some }t,s\in\R}. \end{equation*}

Section B.3 Free Variables & Inconsistent Systems

If you need a free variable/parameter to describe the complete solution to a system of linear equations, the system necessarily has an infinite number of solutions—one coming from every choice of value for your free variable/parameter. However, one still needs to be careful when deciding from an augmented matrix whether a system of linear equations has an infinite number of solutions.
Consider the augmented matrices \(A\) and \(B\text{,}\) which are given in reduced row echelon form.
\begin{equation*} A=\left[\begin{array}{cc|c}1&2&-1\\0&0&0\end{array}\right] \qquad B=\left[\begin{array}{cc|c}1&2&0\\0&0&1\end{array}\right] \end{equation*}
Both matrices lack a pivot in their second column. However, \(A\) corresponds to a system with an infinite solution set, while \(B\) corresponds to an inconsistent system with an empty solution set. We can debate whether it is appropriate to say that \(B\) has a free variable column (on the one hand, the second column fits the description; on the other hand, you cannot make any choices when picking a solution, since there are no solutions), but one thing is clear: when evaluating the number of solutions to a system, you must pay attention to whether or not the system is consistent.
Putting everything together, we can fully classify the number of solutions to a system of linear equations based on pivots/free variables.
Consistent | Pivots                                    | Number of Solutions
False      | At least one column doesn’t have a pivot  | 0
True       | Every column has a pivot                  | 1
True       | At least one column doesn’t have a pivot  | Infinitely many
This information is so important, we will also codify it in a theorem.
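The classification in the table is also mechanical enough to write down in code. A short Python sketch (the helper name is ours) that classifies an augmented matrix already in reduced row echelon form:

```python
def count_solutions(R):
    """Classify an augmented matrix in RREF: return 0, 1, or 'infinite'.

    A pivot in the augmented column means some row reads 0 = 1, so the
    system is inconsistent. Otherwise, the count depends on whether
    every variable column has a pivot (unique solution) or at least one
    is a free variable column (infinitely many solutions).
    """
    n = len(R[0]) - 1                        # number of variables
    pivot_cols = set()
    for row in R:
        lead = next((j for j, x in enumerate(row) if x != 0), None)
        if lead is None:
            continue                         # row of zeros
        if lead == n:
            return 0                         # pivot in augmented column
        pivot_cols.add(lead)
    return 1 if len(pivot_cols) == n else 'infinite'
```

Applied to the matrices \(A\) and \(B\) above, it reports infinitely many solutions for \(A\) and zero solutions for \(B\text{.}\)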

Section B.4 The Geometry of Systems of Equations

Consider the system of equations
\begin{equation} \left\{ \begin{array}{rcrll} x&-&2y&=0&\quad \qquad\text{row}_{1}\\x&+&y&=3&\quad \qquad\text{row}_{2} \end{array}\right.\tag{B.4.1} \end{equation}
The only values of \(x\) and \(y\) that satisfy both equations are \((x,y)=(2,1)\text{.}\) However, each row, viewed in isolation, specifies a line in \(\R^{2}\text{.}\) Call the line coming from the first row \(\ell_{1}\) and the line coming from the second row \(\ell_{2}\text{.}\)
Figure B.4.1.
These two lines intersect exactly at the point \(\mat{2\\1}\text{.}\) And, of course they should: by definition, a solution to a system of equations satisfies all equations, so a solution to System (B.4.1) is a point that lies on both \(\ell_{1}\) and \(\ell_{2}\text{.}\) In other words, solutions lie in \(\ell_{1}\cap \ell_{2}\text{.}\)

Takeaway B.4.2.

Geometrically, a solution to a system of equations is the intersection of objects specified by the individual equations.
This perspective sheds some light on inconsistent systems. The system
\begin{equation*} \left\{\begin{array}{rcrll}x&-&2y&=0&\quad \qquad\text{row}_{1}\\2x&-&4y&=2&\quad \qquad\text{row}_{2} \end{array}\right. \end{equation*}
is inconsistent. And, when we graph the lines specified by the rows, we see that they are parallel and never intersect. Thus, the solution set is empty.

Section B.5 Planes & Hyperplanes

Consider the solution set to a single linear equation viewed in isolation. For example, in the three-variable case, we might consider
\begin{equation*} x+2y-z=3. \end{equation*}
The solution set to this equation is a plane. Why? For starters, writing down the complete solution involves picking two free variables. Suppose we pick \(y=t\) and \(z=s\text{.}\) Then, before we even do a calculation, we know the complete solution will be described in vector form by
\begin{equation*} \vec x=t\vec d_{1}+s\vec d_{2}+\vec p, \end{equation*}
where \(\vec d_{1}\text{,}\) \(\vec d_{2}\text{,}\) and \(\vec p\) come from doing the usual computations. But, that is the vector form of a plane!
In general, a single equation in \(n\) variables requires \(n-1\) free variables to describe its complete solution. The only exception is the trivial equation, \(0x_{1}+\cdots +0x_{n}=0\text{,}\) which requires \(n\) free variables. For the sake of brevity, from now on we will assume that a linear equation in \(n\) variables means a non-trivial linear equation in \(n\) variables.
Applying this knowledge, we can construct a table for systems consisting of a single linear equation.
Number of Variables | Number of Free Variables | Complete Solution
2                   | 1                        | Line in \(\R^{2}\)
3                   | 2                        | Plane in \(\R^{3}\)
4                   | 3                        | Volume in \(\R^{4}\)
Notice that the dimension of the solution set (a line being one dimensional, a plane being two dimensional, and a volume being three dimensional) is always one less than the dimension of the ambient space (\(\R^{2}\text{,}\) \(\R^{3}\text{,}\) \(\R^{4}\)). (Another way to describe these sets would be to say that they have co-dimension 1.) Such sets are called hyperplanes because they are flat and plane-like. However, unlike a plane, the dimension of a hyperplane need not be two.
With our newfound geometric intuition, we can understand solutions to systems of linear equations in a different way. The solution set to a system of linear equations of two variables must be the result of intersecting lines. Therefore, the only options are: a point, a line, or the empty set. The solution to a system of linear equations of three variables is similarly restricted. It can only be: a point, a line, a plane, or the empty set.
Figure B.5.1.
Figure B.5.2.
In higher dimensions, the story is the same: solution sets are formed by intersecting hyperplanes and we can use algebra to precisely describe these sets of intersection.

Exercises B.6 Exercises

1.

Find the complete solution to the following systems.
  1. \(\displaystyle \left\{\begin{array}{crcrcrcrl}&4x&+&6y&+&3z&-&10w&=6\\&5x&+&2y&+&z&-&7w&=2\\-&6x&+&2y&+&z&+&4w&=2\end{array}\right.\)
  2. \(\displaystyle \left\{\begin{array}{rcrcrcrl}2x&+&2y&+&z&&&=-1\\&&y&-&4z&+&2w&=3\\x&-&y&-&3z&-&4w&=5\end{array}\right.\)
  3. \(\displaystyle \left\{\begin{array}{crcrcrl}&x&+&y&-&2z&=-5\\-&4x&+&y&+&5z&=3\end{array}\right.\)
  4. \(\displaystyle \left\{\begin{array}{crcrcrl}&3x&-&2y&&&=-4\\&x&+&y&+&3z&=3\\-&4x&+&y&-&3z&=1\end{array}\right.\)
  5. \(\displaystyle \left\{\begin{array}{rcrcrl}x&-&y&+&2z&=-1\\2x&+&y&+&4z&=1\\3x&-&4y&+&3z&=-2\end{array}\right.\)
  6. \(\displaystyle \left\{\begin{array}{rcrcrl}2x&&&+&z&=8\\x&+&y&+&z&=4\\x&+&3y&+&2z&=4\\3x&+&2y&+&4z&=9\end{array}\right.\)
Solution.
  1. Let
    \begin{equation*} X= \left[\begin{array}{cccc|c}4 & 6 & 3 & -10 & 6\\ 5 & 2 & 1 & -7 & 2\\ -6 & 2 & 1 & 4 & 2\end{array}\right] \end{equation*}
    be the augmented matrix corresponding to the system.
    By row reduction,
    \begin{equation*} \Rref(X)= \left[\begin{array}{cccc|c}1 & 0 & 0 & -1 & 0\\ 0 & 1 & 1/2 & -1 & 1\\ 0 & 0 & 0 & 0 & 0\end{array}\right]. \end{equation*}
    The third and fourth column of \(\Rref(X)\) are free variable columns, so we introduce the arbitrary equations \(z=t\) and \(w=s\) and solve the following system in terms of \(t\) and \(s\text{:}\)
    \begin{equation*} \left\{\begin{array}{rcrcrcrl}x&&&&&-&w&=0\\&&y&+&(1/2)z&-&w&=1\\&&&&z&&&=t\\&&&&&&w&=s\end{array}\right.. \end{equation*}
    Written in vector form, the complete solution is
    \begin{equation*} \mat{x\\y\\z\\w}= \matc{s\\1-(1/2)t+s\\t\\s}=t\mat{0\\-1/2\\1\\0}+s\mat{1\\1\\0\\1}+\mat{0\\1\\0\\0}. \end{equation*}
  2. Let
    \begin{equation*} X= \left[\begin{array}{cccc|c}2 & 2 & 1 & 0 & -1\\ 0 & 1 & -4 & 2 & 3\\ 1 & -1 & -3 & -4 & 5\end{array}\right] \end{equation*}
    be the augmented matrix corresponding to the system.
    By row reduction,
    \begin{equation*} \Rref(X)= \left[\begin{array}{cccc|c}1 & 0 & 0 & -2 & 1\\ 0 & 1 & 0 & 2 & -1\\ 0 & 0 & 1 & 0 & -1\end{array}\right]. \end{equation*}
    The fourth column of \(\Rref(X)\) is a free variable column, so we introduce the arbitrary equation \(w=t\) and solve the following system in terms of \(t\text{:}\)
    \begin{equation*} \left\{\begin{array}{rcrcrcrl}x&&&&&-&2w&=1\\&&y&&&+&2w&=-1\\&&&&z&&&=-1\\&&&&&&w&=t\end{array}\right.. \end{equation*}
    Written in vector form, the complete solution is
    \begin{equation*} \mat{x\\y\\z\\w}= \matc{1+2t\\-1-2t\\-1\\t}= t\mat{2\\-2\\0\\1}+\mat{1\\-1\\-1\\0}. \end{equation*}
  3. Let
    \begin{equation*} X= \left[\begin{array}{ccc|c}1 & 1 & -2 & -5\\ -4 & 1 & 5 & 3\end{array}\right] \end{equation*}
    be the augmented matrix corresponding to the system.
    By row reduction,
    \begin{equation*} \Rref(X)= \left[\begin{array}{ccc|c}1 & 0 & -7/5 & -8/5\\ 0 & 1 & -3/5 & -17/5\end{array}\right]. \end{equation*}
    The third column of \(\Rref(X)\) is a free variable column, so we introduce the arbitrary equation \(z=t\) and solve the following system in terms of \(t\text{:}\)
    \begin{equation*} \left\{\begin{array}{rcrcrl}x&&&-&(7/5)z&=-8/5\\&&y&-&(3/5)z&=-17/5\\&&&&z&=t\end{array}\right.. \end{equation*}
    Written in vector form, the complete solution is
    \begin{equation*} \mat{x\\y\\z}= \matc{-8/5+(7/5)t\\-17/5+(3/5)t\\t}= t\mat{7/5\\3/5\\1}+\mat{-8/5\\-17/5\\0}. \end{equation*}
  4. Let
    \begin{equation*} X= \left[\begin{array}{ccc|c}3 & -2 & 0 & -4\\ 1 & 1 & 3 & 3\\ -4 & 1 & -3 & 1\end{array}\right] \end{equation*}
    be the augmented matrix corresponding to the system.
    By row reduction,
    \begin{equation*} \Rref(X)= \left[\begin{array}{ccc|c}1 & 0 & 6/5 & 2/5\\ 0 & 1 & 9/5 & 13/5\\ 0 & 0 & 0 & 0\end{array}\right]. \end{equation*}
    The third column of \(\Rref(X)\) is a free variable column, so we introduce the arbitrary equation \(z=t\) and solve the following system in terms of \(t\text{:}\)
    \begin{equation*} \left\{\begin{array}{rcrcrl}x&&&+&(6/5)z&=2/5\\&&y&+&(9/5)z&=13/5\\&&&&z&=t\end{array}\right.. \end{equation*}
    Written in vector form, the complete solution is
    \begin{equation*} \mat{x\\y\\z}= \matc{2/5-(6/5)t\\13/5-(9/5)t\\t}= t\mat{-6/5\\-9/5\\1}+\mat{2/5\\13/5\\0}. \end{equation*}
  5. Let
    \begin{equation*} X= \left[\begin{array}{ccc|c}1 & -1 & 2 & -1\\ 2 & 1 & 4 & 1\\ 3 & -4 & 3 & -2\end{array}\right] \end{equation*}
    be the augmented matrix corresponding to the system.
    By row reduction,
    \begin{equation*} \Rref(X)= \left[\begin{array}{ccc|c}1 & 0 & 0 & 4/3\\ 0 & 1 & 0 & 1\\ 0 & 0 & 1 & -2/3\end{array}\right]. \end{equation*}
    Written in vector form, the complete solution is
    \begin{equation*} \mat{x\\y\\z}= \mat{4/3\\1\\-2/3}. \end{equation*}
  6. Let
    \begin{equation*} X= \left[\begin{array}{ccc|c}2 & 0 & 1 & 8\\ 1 & 1 & 1 & 4\\ 1 & 3 & 2 & 4\\ 3 & 2 & 4 & 9\end{array}\right] \end{equation*}
    be the augmented matrix corresponding to the system.
    By row reduction,
    \begin{equation*} \Rref(X)= \left[\begin{array}{ccc|c}1 & 0 & 0 & 5\\ 0 & 1 & 0 & 1\\ 0 & 0 & 1 & -2\\ 0 & 0 & 0 & 0\end{array}\right]. \end{equation*}
    Written in vector form, the complete solution is
    \begin{equation*} \mat{x\\y\\z}= \mat{5\\1\\-2}. \end{equation*}

2.

For each system of linear equations given below: (i) write down its augmented matrix, (ii) use the row reduction algorithm to determine if it is consistent or not, and (iii) for each consistent system, give the complete solution.
  1. \(\displaystyle \left\{\begin{array}{crcrcrl}-&10x_{1}&-&4x_{2}&+&4x_{3}&=28\\&3x_{1}&+&x_{2}&-&x_{3}&=-8\\&x_{1}&+&x_{2}&-&\frac{1}{2}x_{3}&=-3\end{array}\right.\)
  2. \(\displaystyle \left\{\begin{array}{rcrcrl}3x_{1}&-&2x_{2}&+&4x_{3}&=54\\5x_{1}&-&3x_{2}&+&6x_{3}&=88\\x_{1}&&&&&=-3\end{array}\right.\)
  3. \(\displaystyle \left\{\begin{array}{rcrl}x&+&2y&=5\end{array}\right.\)
  4. \(\displaystyle \left\{\begin{array}{rl}4x&=6\\2x&=3\end{array}\right.\)
  5. \(\displaystyle \left\{\begin{array}{rcrcrcrl}x_{1}&+&2x_{2}&+&4x_{3}&-&3x_{4}&=0\\3x_{1}&+&5x_{2}&+&6x_{3}&-&4x_{4}&=1\\4x_{1}&+&5x_{2}&-&2x_{3}&+&3x_{4}&=3\end{array}\right.\)
  6. \(\displaystyle \left\{\begin{array}{rcrcrcrl}x_{1}&-&x_{2}&+&5x_{3}&+&x_{4}&=1\\x_{1}&+&x_{2}&-&2x_{3}&+&3x_{4}&=3\\3x_{1}&-&x_{2}&+&8x_{3}&+&x_{4}&=5\\x_{1}&+&3x_{2}&-&9x_{3}&+&7x_{4}&=5\end{array}\right.\)
  7. \(\displaystyle \left\{\begin{array}{rcrcrl}0x&+&0y&+&0z&=0\end{array}\right.\)
Solution.
(a) i.
\begin{equation*} \left[\begin{array}{ccc|c}-10 & -4 & 4 & 28\\ 3 & 1 & -1 & -8\\ 1 & 1 & -1/2 & -3\end{array}\right] \end{equation*}
(a) ii.
\begin{align*} &\left[ \begin{array}{ccc|c} -10 & -4 & 4 & 28\\ 3 & 1 & -1 & -8\\ 1 & 1 & -1/2 & -3 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{ccc|c} 1 & 1 & -1/2 & -3\\ 3 & 1 & -1 & -8\\ -10 & -4 & 4 & 28 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{ccc|c} 1 & 1 & -1/2 & -3\\ 0 & -2 & 1/2 & 1\\ 0 & 6 & -1 & -2 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{ccc|c} 1 & 1 & -1/2 & -3\\ 0 & 1 & -1/4 & -1/2\\ 0 & 6 & -1 & -2 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{ccc|c} 1 & 1 & -1/2 & -3\\ 0 & 1 & -1/4 & -1/2\\ 0 & 0 & 1/2 & 1 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{ccc|c} 1 & 1 & -1/2 & -3\\ 0 & 1 & -1/4 & -1/2\\ 0 & 0 & 1 & 2 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{ccc|c} 1 & 0 & 0 & -2\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 2 \end{array}\right] \end{align*}
(a) iii.
This system of linear equations is consistent. Its complete solution is
\begin{equation*} \mat{x_1\\x_2\\x_3}=\mat{-2\\0\\2}. \end{equation*}
(b) i.
\begin{equation*} \left[\begin{array}{ccc|c}3 & -2 & 4 & 54\\ 5 & -3 & 6 & 88\\ 1 & 0 & 0 & -3\end{array}\right] \end{equation*}
(b) ii.
\begin{align*} &\left[ \begin{array}{ccc|c} 3 & -2 & 4 & 54\\ 5 & -3 & 6 & 88\\ 1 & 0 & 0 & -3 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{ccc|c} 1 & 0 & 0 & -3\\ 5 & -3 & 6 & 88\\ 3 & -2 & 4 & 54 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{ccc|c} 1 & 0 & 0 & -3\\ 0 & -3 & 6 & 103\\ 0 & -2 & 4 & 63 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{ccc|c} 1 & 0 & 0 & -3\\ 0 & 1 & -2 & -103/3\\ 0 & -2 & 4 & 63 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{ccc|c} 1 & 0 & 0 & -3\\ 0 & 1 & -2 & -103/3\\ 0 & 0 & 0 & -17/3 \end{array}\right] \end{align*}
(b) iii.
This system of linear equations is inconsistent.
(c) i.
\begin{equation*} \left[\begin{array}{cc|c}1 & 2 & 5\end{array}\right] \end{equation*}
(c) ii.
The augmented matrix of this system of linear equations is already in reduced row echelon form.
(c) iii.
This system of linear equations is consistent. Its complete solution is
\begin{equation*} \mat{x\\y}=t\mat{-2\\1}+\mat{5\\0}. \end{equation*}
(d) i.
\begin{equation*} \left[\begin{array}{c|c}4 & 6\\ 2 & 3\end{array}\right] \end{equation*}
(d) ii.
\begin{equation*} \left[\begin{array}{c|c}4 & 6\\ 2 & 3\end{array}\right] \rightarrow \left[\begin{array}{c|c}1 & 3/2\\ 2 & 3\end{array}\right] \rightarrow \left[\begin{array}{c|c}1 & 3/2\\ 0 & 0\end{array}\right] \end{equation*}
(d) iii.
This system of linear equations is consistent. Its complete solution is \(x=3/2\text{.}\)
(e) i.
\begin{equation*} \left[\begin{array}{cccc|c}1 & 2 & 4 & -3 & 0\\ 3 & 5 & 6 & -4 & 1\\ 4 & 5 & -2 & 3 & 3\end{array}\right] \end{equation*}
(e) ii.
\begin{align*} &\left[ \begin{array}{cccc|c} 1 & 2 & 4 & -3 & 0\\ 3 & 5 & 6 & -4 & 1\\ 4 & 5 & -2 & 3 & 3 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & 2 & 4 & -3 & 0\\ 0 & -1 & -6 & 5 & 1\\ 0 & -3 & -18 & 15 & 3 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & 2 & 4 & -3 & 0\\ 0 & 1 & 6 & -5 & -1\\ 0 & -3 & -18 & 15 & 3 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & 2 & 4 & -3 & 0\\ 0 & 1 & 6 & -5 & -1\\ 0 & 0 & 0 & 0 & 0 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & 0 & -8 & 7 & 2\\ 0 & 1 & 6 & -5 & -1\\ 0 & 0 & 0 & 0 & 0 \end{array}\right] \end{align*}
(e) iii.
This system of linear equations is consistent. Its complete solution is
\begin{equation*} \mat{x_1\\x_2\\x_3\\x_4}=t\mat{8\\-6\\1\\0}+s\mat{-7\\5\\0\\1}+\mat{2\\-1\\0\\0}. \end{equation*}
(f) i.
\begin{equation*} \left[\begin{array}{cccc|c}1 & -1 & 5 & 1 & 1\\ 1 & 1 & -2 & 3 & 3\\ 3 & -1 & 8 & 1 & 5\\ 1 & 3 & -9 & 7 & 5\end{array}\right] \end{equation*}
(f) ii.
\begin{align*} &\left[ \begin{array}{cccc|c} 1 & -1 & 5 & 1 & 1\\ 1 & 1 & -2 & 3 & 3\\ 3 & -1 & 8 & 1 & 5\\ 1 & 3 & -9 & 7 & 5 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & -1 & 5 & 1 & 1\\ 0 & 2 & -7 & 2 & 2\\ 3 & -1 & 8 & 1 & 5\\ 1 & 3 & -9 & 7 & 5 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & -1 & 5 & 1 & 1\\ 0 & 2 & -7 & 2 & 2\\ 0 & 2 & -7 & -2 & 2\\ 0 & 4 & -14 & 6 & 4 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & -1 & 5 & 1 & 1\\ 0 & 1 & -7/2 & 1 & 1\\ 0 & 2 & -7 & -2 & 2\\ 0 & 4 & -14 & 6 & 4 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & -1 & 5 & 1 & 1\\ 0 & 1 & -7/2 & 1 & 1\\ 0 & 0 & 0 & -4 & 0\\ 0 & 0 & 0 & 2 & 0 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & -1 & 5 & 1 & 1\\ 0 & 1 & -7/2 & 1 & 1\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 2 & 0 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & -1 & 5 & 1 & 1\\ 0 & 1 & -7/2 & 1 & 1\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 \end{array}\right]\\ \rightarrow&\left[ \begin{array}{cccc|c} 1 & 0 & 3/2 & 0 & 2\\ 0 & 1 & -7/2 & 0 & 1\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 \end{array}\right] \end{align*}
(f) iii.
This system of linear equations is consistent. Its complete solution is
\begin{equation*} \mat{x_1\\x_2\\x_3\\x_4}=t\mat{-3/2\\7/2\\1\\0}+\mat{2\\1\\0\\0}. \end{equation*}
(g) i.
\begin{equation*} \left[\begin{array}{ccc|c}0 & 0 & 0 & 0\end{array}\right] \end{equation*}
(g) ii.
The augmented matrix of this system of linear equations is already in reduced row echelon form.
(g) iii.
This system of linear equations is consistent. Its complete solution is
\begin{equation*} \mat{x\\y\\z}=t\mat{1\\0\\0}+s\mat{0\\1\\0}+r\mat{0\\0\\1}. \end{equation*}
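Assuming the SymPy library is available, the hand reductions above can be double-checked in a few lines: the `Matrix.rref` method computes the reduced row echelon form directly and also reports which columns are pivot columns.

```python
from sympy import Matrix, Rational

# Part (a): rref() returns the reduced row echelon form together with
# the indices of the pivot columns.
A = Matrix([[-10, -4, 4, 28],
            [3, 1, -1, -8],
            [1, 1, Rational(-1, 2), -3]])
R_a, pivots_a = A.rref()
# R_a matches the final matrix above, giving the unique solution (-2, 0, 2).

# Part (b): a pivot in the augmented (last) column signals inconsistency.
B = Matrix([[3, -2, 4, 54],
            [5, -3, 6, 88],
            [1, 0, 0, -3]])
R_b, pivots_b = B.rref()
# Column index 3 (the augmented column) appears among pivots_b,
# so the system in part (b) is inconsistent.
```

The same check applies to the remaining parts: a pivot in the augmented column means the system is inconsistent, while a free (non-pivot) coefficient column signals infinitely many solutions.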

3.

  1. Let \(\vec v_{1}=\mat{1\\1\\-2\\4}\text{,}\) \(\vec v_{2}=\mat{1\\4\\0\\2}\) and \(\vec v_{3}=\mat{-2\\-2\\4\\-8}\text{.}\)
    Set up and solve a system of linear equations whose solution will determine if the vectors \(\vec v_{1}\text{,}\) \(\vec v_{2}\) and \(\vec v_{3}\) are linearly independent.
  2. Let \(\vec v_{1}=\mat{1\\2\\3}\text{,}\) \(\vec v_{2}=\mat{-2\\1\\0}\) and \(\vec v_{3}=\mat{2\\7\\1}\text{.}\)
    Set up and solve a system of linear equations whose solution will determine if the vectors \(\vec v_{1}\text{,}\) \(\vec v_{2}\) and \(\vec v_{3}\) span \(\R^{3}\text{.}\)
  3. Let \(\ell_{1}\) and \(\ell_{2}\) be described in vector form by
    \begin{equation*} \overbrace{\vec x=t\mat{1\\3}+\mat{1\\1}}^{\displaystyle \ell_1}\quad \overbrace{\vec x=t\mat{2\\1}+\mat{3\\4}}^{\displaystyle \ell_2}. \end{equation*}
    Set up and solve a system of linear equations whose solution will determine if the lines \(\ell_{1}\) and \(\ell_{2}\) intersect.
  4. Let \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) be described in vector form by
    \begin{equation*} \begin{aligned}&\mathcal{P}_{1}:\quad \vec x=t\mat{1\\-1\\0}+s\mat{-1\\-1\\2},\\&\mathcal{P}_{2}:\quad \vec x=t\mat{1\\-1\\1}+s\mat{-1\\3\\-2}+\mat{0\\1\\-1}.\end{aligned} \end{equation*}
    Set up and solve a system of linear equations whose solution will determine if the planes \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) intersect.
Solution.
  1. The vectors \(\vec v_{1}\text{,}\) \(\vec v_{2}\) and \(\vec v_{3}\) are linearly independent if
    \begin{equation*} x\vec v_{1}+y\vec v_{2}+z\vec v_{3}=\vec 0 \end{equation*}
    only has the trivial solution.
    This vector equation is equivalent to the system of linear equations
    \begin{equation*} \left\{\begin{array}{crcrcrl}&x&+&y&-&2z&=0\\&x&+&4y&-&2z&=0\\-&2x&&&+&4z&=0\\&4x&+&2y&-&8z&=0\end{array}\right.. \end{equation*}
    The complete solution to this system is
    \begin{equation*} \mat{x\\y\\z}=t\mat{2\\0\\1}. \end{equation*}
    In particular, \((x, y, z)=(2, 0, 1)\) is a non-trivial solution to this system, so the vectors \(\vec v_{1}\text{,}\) \(\vec v_{2}\) and \(\vec v_{3}\) are linearly dependent.
  2. By definition, the vectors \(\vec v_{1}\text{,}\) \(\vec v_{2}\text{,}\) and \(\vec v_{3}\) span \(\R^{3}\) if every vector can be written as a linear combination of \(\vec v_{1}\text{,}\) \(\vec v_{2}\text{,}\) and \(\vec v_{3}\text{.}\) In other words, the equation
    \begin{equation*} x\vec v_{1}+y\vec v_{2}+z\vec v_{3}=\mat{a\\b\\c} \end{equation*}
    is consistent for all choices of \(a\text{,}\) \(b\text{,}\) and \(c\text{.}\) This vector equation is equivalent to the system of linear equations
    \begin{equation*} \left\{\begin{array}{rcrcrl}x&-&2y&+&2z&=a\\2x&+&y&+&7z&=b\\3x&&&+&z&=c\end{array}\right.. \end{equation*}
    Row reducing, we find that every column of the coefficient matrix is a pivot column. In particular, there is a pivot in every row, so the system is consistent for every choice of \(a\text{,}\) \(b\text{,}\) and \(c\text{.}\) Therefore, \(\Span\Set{\vec v_1,\vec v_2,\vec v_3}=\R^{3}\text{.}\)
  3. The lines \(\ell_{1}\) and \(\ell_{2}\) intersect when their \(x\) and \(y\)-coordinates are equal. We first set the parameter variable of \(\ell_{1}\) to \(t\) and the parameter variable of \(\ell_{2}\) to \(s\text{.}\) Then, equating the coordinates gives the system of linear equations
    \begin{equation*} \left\{\begin{array}{rcrl}t&-&2s&=2\\3t&-&s&=3\end{array}\right.. \end{equation*}
    The solution to this system is
    \begin{equation*} \mat{t\\s}=\mat{4/5\\-3/5}. \end{equation*}
    Since \(\vec x=\mat{9/5\\17/5}\) when \(t=4/5\) (or \(s=-3/5\)), the intersection of \(\ell_{1}\) and \(\ell_{2}\) is the point \(\mat{9/5\\17/5}\text{.}\)
  4. The planes \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) intersect when their coordinates are equal. Relabeling the parameter variables for \(\mathcal{P}_{2}\) as \(q\) and \(r\) and equating both vector forms, we get the following system of linear equations:
    \begin{equation*} \left\{\begin{array}{crcrcrcrl}&t&-&s&-&q&+&r&=0\\-&t&-&s&+&q&-&3r&=1\\&&&2s&-&q&+&2r&=-1\end{array}\right.. \end{equation*}
    The complete solution to this system is
    \begin{equation*} \mat{t\\s\\q\\r}=u\mat{-2\\-1\\0\\1}+\mat{-1/2\\-1/2\\0\\0}. \end{equation*}
    Thus there are infinitely many points in \(\mathcal{P}_{1}\cap \mathcal{P}_{2}\text{.}\)
    To find these points, we substitute \(q=0\) and \(r=u\) into the vector form of \(\mathcal{P}_{2}\text{.}\) This shows us that \(\mathcal{P}_{1}\cap \mathcal{P}_{2}\) can be expressed in vector form by
    \begin{equation*} \mat{x\\y\\z}=u\mat{-1\\3\\-2}+\mat{0\\1\\-1}. \end{equation*}
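Assuming SymPy is available, the conclusions above can be sketched programmatically: a nontrivial nullspace certifies linear dependence, a nonzero determinant certifies that three vectors span \(\R^{3}\text{,}\) and `linsolve` finds the intersection parameters for the two lines.

```python
from sympy import Matrix, Rational, linsolve, symbols

# Exercise 3.1: v1, v2, v3 are linearly dependent iff the matrix with
# these vectors as columns has a nontrivial nullspace.
V = Matrix([[1, 1, -2],
            [1, 4, -2],
            [-2, 0, 4],
            [4, 2, -8]])
basis = V.nullspace()
# One basis vector, proportional to (2, 0, 1), as found above.

# Exercise 3.2: the coefficient matrix has nonzero determinant, so
# x*v1 + y*v2 + z*v3 = (a, b, c) is consistent for every (a, b, c).
W = Matrix([[1, -2, 2],
            [2, 1, 7],
            [3, 0, 1]])

# Exercise 3.3: solve t - 2s = 2, 3t - s = 3 for the intersection,
# then substitute t back into the vector form of line 1.
t, s = symbols('t s')
(sol,) = linsolve((Matrix([[1, -2], [3, -1]]), Matrix([2, 3])), [t, s])
point = sol[0] * Matrix([1, 3]) + Matrix([1, 1])
# point is (9/5, 17/5), matching the intersection found above.
```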

4.

Presented below are some students’ arguments for question B.6.3. Evaluate whether their reasoning is totally correct, mostly correct, or incorrect. If their reasoning is not totally correct, point out what mistake(s) they made and how they might be fixed.
(a) i.
Consider the vector equation
\begin{equation*} x\vec v_{1}+y\vec v_{2}+z\vec v_{3}=\vec 0 \end{equation*}
where \(x, y, z\in\R\text{.}\)
Since \((x, y, z)=(0, 0, 0)\) is a solution to the equation, the equation has the trivial solution. Therefore, the vectors \(\vec v_{1}\text{,}\) \(\vec v_{2}\) and \(\vec v_{3}\) are linearly independent.
(a) ii.
Consider the vector equation
\begin{equation*} x\vec v_{1}+y\vec v_{2}+z\vec v_{3}=\vec 0 \end{equation*}
where \(x, y, z\in\R\text{.}\)
Notice that \((x, y, z)=(-2, 0, -1)\) is a solution to the equation. Since \(y=0\) in this solution, it is a trivial solution and therefore the vectors \(\vec v_{1}\text{,}\) \(\vec v_{2}\) and \(\vec v_{3}\) are linearly independent.
(c) i.
The lines \(\ell_{1}\) and \(\ell_{2}\) intersect when their \(x\) and \(y\)-coordinates are equal. Equating \(x\) and \(y\)-coordinates gives
\begin{equation*} \left\{\begin{array}{rcrl}t&+&1&=2t+3\\3t&+&1&=t+4\end{array}\right.. \end{equation*}
This system is equivalent to
\begin{equation*} \left\{\begin{array}{rl}t&=-2\\2t&=3\end{array}\right.. \end{equation*}
Since this system is inconsistent, the lines \(\ell_{1}\) and \(\ell_{2}\) do not intersect.
(c) ii.
The lines \(\ell_{1}\) and \(\ell_{2}\) intersect when their \(x\) and \(y\)-coordinates are equal. Equating \(x\) and \(y\)-coordinates gives
\begin{equation*} \left\{\begin{array}{rcrl}t&+&1&=2s+3\\3t&+&1&=s+4\end{array}\right.. \end{equation*}
This system is equivalent to
\begin{equation*} \left\{\begin{array}{rcrl}t&-&2s&=2\\3t&-&s&=3\end{array}\right., \end{equation*}
and the solution is
\begin{equation*} \mat{t\\s}=\mat{4/5\\-3/5}. \end{equation*}
Therefore the lines \(\ell_{1}\) and \(\ell_{2}\) intersect at \(\mat{4/5\\3/5}\text{.}\)
(d) i.
Notice that
\begin{equation*} \vec x=\mat{1/2\\-1/2\\0}=1/2\mat{1\\-1\\0}+0\mat{-1\\-1\\2} \end{equation*}
and
\begin{equation*} \vec x=\mat{1/2\\-1/2\\0}=0\mat{1\\-1\\1}-1/2\mat{-1\\3\\-2}+\mat{0\\1\\-1}. \end{equation*}
So, \(\vec x=\mat{1/2\\-1/2\\0}\) is a point on \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\text{.}\) Therefore the planes \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) intersect.
(d) ii.
Notice that \(\vec x=\mat{1\\0\\0}\) is a point in \(\mathcal{P}_{2}\text{,}\) but this point is not in \(\mathcal{P}_{1}\text{.}\) Therefore the planes do not intersect.
Solution.
  1. The reasoning is incorrect. The solution \((x, y, z)=(0, 0, 0)\) is the trivial solution to the vector equation, and it is always a solution to the homogeneous equation
    \begin{equation*} \alpha_{1}\vec v_{1}+\cdots+\alpha_{k}\vec v_{k}=\vec 0 \end{equation*}
    no matter what \(\vec v_{1}, \dots, \vec v_{k}\) are.
    To determine whether a set of vectors is linearly independent, we must check that the trivial solution is the only solution to the vector equation; that is, that no non-trivial solution to the vector equation exists.
  2. The reasoning is incorrect. The trivial solution is the solution in which every variable equals zero, so \((x, y, z)=(-2, 0, -1)\) is not a trivial solution. In fact, since it is a non-trivial solution to the vector equation, it shows that the vectors \(\vec v_{1}\text{,}\) \(\vec v_{2}\) and \(\vec v_{3}\) are linearly dependent.
  3. The reasoning is incorrect. When equating coordinates of two different vector forms, the parameter variables need to be given different names.
    A valid system of linear equations is
    \begin{equation*} \left\{\begin{array}{rcrl}t&-&2s&=2\\3t&-&s&=3\end{array}\right.. \end{equation*}
    Here we have set the parameter \(t\) in the vector form of \(\ell_{2}\) to \(s\text{.}\)
  4. The reasoning is incorrect. The solution \(\mat{4/5\\-3/5}\) to the system of linear equations gives the values of \(t\) and \(s\) at the intersection. To find the intersection of \(\ell_{1}\) and \(\ell_{2}\text{,}\) the value \(t=4/5\) or \(s=-3/5\) must be substituted into the vector form of \(\ell_{1}\) or \(\ell_{2}\text{.}\)
  5. The reasoning is correct. Since \(\vec x=\mat{1/2\\-1/2\\0}\) is a point on both \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\text{,}\) it is in the intersection of \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\text{,}\) so the planes \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) intersect. However, we cannot determine whether the intersection is a line or a plane from a single point, so we still need to set up and solve an appropriate system of linear equations to describe the full intersection.
  6. The reasoning is incorrect. Finding one point that is in \(\mathcal{P}_{2}\) but not in \(\mathcal{P}_{1}\) shows that \(\mathcal{P}_{1}\) does not intersect \(\mathcal{P}_{2}\) at that point, but does not rule out the possibility that \(\mathcal{P}_{1}\) intersects \(\mathcal{P}_{2}\) at a different point.
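Assuming SymPy is available, the membership claims in arguments (d) i and (d) ii can be checked by solving for the plane parameters with `linsolve`; an empty solution set means the point does not lie on the plane.

```python
from sympy import Matrix, Rational, S, linsolve, symbols

t, s = symbols('t s')
# Direction vectors of P1 as columns; P1 passes through the origin.
P1 = Matrix([[1, -1], [-1, -1], [0, 2]])

# Argument (d) i: the point (1/2, -1/2, 0) lies on P1.
p = Matrix([Rational(1, 2), Rational(-1, 2), 0])
on_P1 = linsolve((P1, p), [t, s])
# Solvable: p = (1/2)*(1,-1,0) + 0*(-1,-1,2).

# Argument (d) ii: the point (1, 0, 0) is not on P1 ...
q = Matrix([1, 0, 0])
q_on_P1 = linsolve((P1, q), [t, s])
# ... the system is inconsistent, so q_on_P1 is the empty set.

# ... but (1, 0, 0) does lie on P2 (subtract the offset (0,1,-1) first).
P2 = Matrix([[1, -1], [-1, 3], [1, -2]])
q_on_P2 = linsolve((P2, q - Matrix([0, 1, -1])), [t, s])
```

As the text notes, the second check only shows that \((1, 0, 0)\) is not a point of intersection; it says nothing about other points of \(\mathcal{P}_{2}\text{.}\)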