Module 3 Spans, Translated Spans, and Linear Independence/Dependence
In this module you will learn
The definition of span and how to visualize spans.
How to express lines/planes/volumes through the origin as spans.
How to express lines/planes/volumes not through the origin as translated spans using set addition.
Geometric and algebraic definitions of linear independence and linear dependence.
How to find linearly independent subsets.
Let \(\vec u=\mat{1\\1}\) and \(\vec v=\mat{1\\-2}\text{.}\) Can the vector \(\vec w=\mat{2\\5}\) be obtained as a linear combination of \(\vec u\) and \(\vec v\text{?}\)
By drawing a picture, the answer appears to be yes.
Figure 3.0.1.
Figure 3.0.2.
Algebraically, we can use the definition of a linear combination to set up a system of equations. We know \(\vec w\) can be expressed as a linear combination of \(\vec u\) and \(\vec v\) if and only if the vector equation
\begin{equation*}
\vec w = \mat{2\\5}=\alpha\mat{1\\1}+\beta\mat{1\\-2}=\alpha \vec u+\beta \vec v
\end{equation*}
has a solution. By inspection, we see \(\alpha=3\) and \(\beta=-1\) solve this equation.
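This kind of check is easy to automate. The following sketch (assuming NumPy is available) solves the vector equation above as a \(2\times 2\) linear system:

```python
import numpy as np

# Solve alpha*u + beta*v = w as a 2x2 linear system whose
# coefficient matrix has u and v as columns.
u = np.array([1.0, 1.0])
v = np.array([1.0, -2.0])
w = np.array([2.0, 5.0])
A = np.column_stack([u, v])
alpha, beta = np.linalg.solve(A, w)   # approximately 3 and -1
assert np.allclose(alpha * u + beta * v, w)
```

The same computation works for any target vector in place of \(\vec w\text{.}\)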
After initial success, we might ask the following: what are all the locations in \(\R^{2}\) that can be obtained as a linear combination of \(\vec u\) and \(\vec v\text{?}\) Geometrically, it appears any location can be reached. To verify this algebraically, consider the vector equation
\begin{equation}
\mat{x\\y}=\alpha\mat{1\\1}+\beta\mat{1\\-2}.\tag{3.0.1}
\end{equation}
Subtracting the second equation from the first, we get \(x-y=3\beta\) and so \(\beta=(x-y)/3\text{.}\) Plugging \(\beta\) into the first equation and solving, we get \(\alpha=(2x+y)/3\text{.}\) Thus, equation (3.0.1) always has the solution
\begin{equation*}
(\alpha,\beta)=\left(\frac{2x+y}{3},\;\frac{x-y}{3}\right),
\end{equation*}
and so every vector in \(\R^{2}\) is a linear combination of \(\vec u\) and \(\vec v\text{.}\)
There is a formal term for the set of vectors that can be obtained as linear combinations of others: span.
Definition 3.0.3. Span.
The span of a set of vectors \(V\) is the set of all linear combinations of vectors in \(V\text{.}\) That is,
\begin{equation*}
\Span V = \Set{\vec v \given \vec v=\alpha_1\vec v_1+\alpha_2\vec v_2 + \cdots +\alpha_n\vec v_n \text{ for some } \vec v_1,\vec v_2,\ldots,\vec v_n\in V \text{ and scalars }\alpha_1,\alpha_2,\ldots,\alpha_n}.
\end{equation*}
Additionally, we define \(\Span\Set{}= \Set{\vec 0}\text{.}\)
We just showed above that \(\Span\Set{\mat{1\\1},\mat{1\\-2}}=\R^{2}\text{.}\) Alternatively, we may use span as a verb and say the set \(\Set{\mat{1\\1},\mat{1\\-2}}\) spans \(\R^{2}\text{.}\)
Example 3.0.4.
Let \(\vec u=\mat{-1\\2}\) and \(\vec v=\mat{1\\-2}\text{.}\) Find \(\Span\Set{\vec u,\vec v}\text{.}\)
Solution.
By the definition of span,
\begin{equation*}
\Span\Set{\vec u,\vec v}= \Set{\vec x\given \vec x=\alpha\vec u+\beta\vec v\text{ for some }\alpha,\beta\in\R}.
\end{equation*}
We need to determine for which \(x\) and \(y\) the vector equation \(\mat{x\\y}= \alpha\mat{-1\\2}+\beta\mat{1\\-2}\) is consistent.
From the first and second coordinates, we get the system
\begin{align*}
x &= -\alpha+\beta\\
y &= 2\alpha-2\beta.
\end{align*}
Adding 2 times the first equation to the second, we get \(2x+y=0\) and so \(y=-2x\text{.}\) Therefore, if \(\mat{x\\y}\) makes the above system consistent, we must have
\begin{equation*}
\mat{x\\y}=\mat{t\\-2t}=t\vec v
\end{equation*}
for some \(t\text{.}\) Thus,
\begin{equation*}
\Span\Set{\vec u,\vec v}= \Set{\vec x\given \vec x=t\vec v\text{ for some }t}=\Span\Set{\vec v},
\end{equation*}
which is a line through the origin with direction \(\vec v\text{.}\)
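A quick numerical way to see this (a sketch assuming NumPy) is to compute the rank of the matrix whose columns are \(\vec u\) and \(\vec v\text{:}\) rank \(1\) means the span is a line.

```python
import numpy as np

# u and v are parallel (v = -u), so the matrix [u v] has rank 1
# and Span{u, v} is a line.
u = np.array([-1, 2])
v = np.array([1, -2])
rank = np.linalg.matrix_rank(np.column_stack([u, v]))
print(rank)  # 1
```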
Example 3.0.5.
Let \(\vec a=\mat{1\\2\\1}\text{,}\)\(\vec b=\mat{0\\1\\0}\text{,}\) and \(\vec c=\mat{1\\1\\2}\text{.}\) Show that \(\R^{3}=\Span\Set{\vec a,\vec b,\vec c}\text{.}\)
Solution.
If the equation
\begin{equation*}
\vec x=\mat{x\\y\\z}= \alpha_{1}\mat{1\\2\\1}+\alpha_{2}\mat{0\\1\\0}+\alpha_{3}\mat{1\\1\\2}= \alpha_{1}\vec a+\alpha_{2}\vec b+\alpha_{3}\vec c
\end{equation*}
is always consistent, then any vector in \(\R^{3}\) can be obtained as a linear combination of \(\vec a, \vec b\text{,}\) and \(\vec c\text{.}\) The coordinates give the system \(\alpha_{1}+\alpha_{3}=x\text{,}\) \(2\alpha_{1}+\alpha_{2}+\alpha_{3}=y\text{,}\) and \(\alpha_{1}+2\alpha_{3}=z\text{,}\) which we can solve to find \((\alpha_{1},\alpha_{2},\alpha_{3})=(2x-z,\;y-3x+z,\;z-x)\text{.}\) Since a solution exists for every choice of \(x\text{,}\) \(y\text{,}\) and \(z\text{,}\) the equation is always consistent, and so \(\R^{3}=\Span\Set{\vec a,\vec b,\vec c}\text{.}\)
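The consistency argument can also be checked numerically; a sketch (assuming NumPy, with an arbitrary target vector of our own choosing):

```python
import numpy as np

# Columns of A are the vectors a, b, c from the example.
A = np.column_stack([[1, 2, 1], [0, 1, 0], [1, 1, 2]])
print(np.linalg.matrix_rank(A))      # 3, so the columns span R^3

x = np.array([5.0, -2.0, 7.0])       # an arbitrary target vector
coeffs = np.linalg.solve(A, x)       # the alphas for this target
assert np.allclose(A @ coeffs, x)
```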
A plane \(\mathcal{P}\) through the origin with direction vectors \(\vec d_1\) and \(\vec d_2\) can be written in vector form as \(\vec x=t\vec d_1+s\vec d_2\text{;}\) in other words,
\begin{equation*}
\mathcal{P}= \Set{\vec x\given \vec x=t\vec d_1+s\vec d_2\text{ for some }t,s\in \R}=\Span\Set{\vec d_1,\vec d_2}.
\end{equation*}
If the “\(\vec p\)” in our vector form is \(\vec 0\text{,}\) then that vector form actually defines a span. This means (if you accept that every line/plane through the origin has a vector form) that every line/plane through the origin can be written as a span. Conversely, if \(X=\Span\Set{\vec v_1,\ldots,\vec v_n}\) is a span, we know \(\vec 0=0\vec v_{1}+\cdots+0\vec v_{n}\in X\text{,}\) and so every span passes through the origin.
As it turns out, spans exactly describe points, lines, planes, and volumes 1 through the origin.
1 We use the word volume to indicate the higher-dimensional analogue of a plane.
Example 3.1.1.
The line \(\ell_{1}\subseteq\R^{2}\) is described by the equation \(x+2y=0\) and the line \(\ell_{2}\subseteq \R^{2}\) is described by the equation \(4x-2y=6\text{.}\) If possible, describe \(\ell_{1}\) and \(\ell_{2}\) using spans.
Since every point on \(\ell_{1}\) satisfies \(x=-2y\text{,}\) the points of \(\ell_{1}\) are exactly those of the form \(t\mat{-2\\1}\text{,}\) and so \(\ell_{1}=\Span\Set{\mat{-2\\1}}\text{.}\) However, \(\ell_{2}\) does not pass through \(\vec 0\text{,}\) and so \(\ell_{2}\) cannot be written as a span.
Takeaway 3.1.2.
Lines and planes through the origin, and only lines and planes through the origin, can be expressed as spans.
Section 3.2 Set Addition
We’re going to work around the fact that only objects which pass through the origin can be written as spans, but first let’s take a detour and learn about set addition.
Definition 3.2.1. Set Addition.
If \(A\) and \(B\) are sets of vectors, then the set sum of \(A\) and \(B\text{,}\) denoted \(A+B\text{,}\) is
\begin{equation*}
A+B=\Set{\vec x \given \vec x=\vec a+\vec b\text{ for some }\vec a\in A\text{ and } \vec b\in B}.
\end{equation*}
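For finite sets, this definition can be computed directly. A minimal sketch (the helper name `set_sum` is our own) for sets of coordinate tuples:

```python
def set_sum(A, B):
    """Return {a + b : a in A, b in B} for sets of coordinate tuples."""
    return {tuple(x + y for x, y in zip(a, b)) for a in A for b in B}

A = {(1, 3), (0, -1)}
B = {(-1, -1), (0, 2), (0, 0)}
print(sorted(set_sum(A, B)))
# Adding the empty set yields the empty set, since there are no
# pairs (a, b) to sum over.
assert set_sum(A, set()) == set()
```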
Set sums are very different from regular sums despite using the same symbol, “\(+\)”. 1 However, they are very useful. Let \(C=\Set{\vec x\in\R^2\given \norm{\vec x}=1}\) be the unit circle centered at the origin, and consider the sets \(X\text{,}\) \(Y\text{,}\) and \(Z\) pictured in the figures below.
Rewriting, we see \(X=\Set{\vec x+\yhat\given \norm{\vec x}=1}\) is just \(C\) translated by \(\yhat\text{.}\) Similarly, \(Y=\Set{\vec x+\vec v\given \norm{\vec x}=1\text{ and }\vec v=3\xhat\text{ or }\vec v=\yhat}=(C+\Set{3\xhat})\cup (C+\Set{\yhat})\text{,}\) and so \(Y\) is the union of two translated copies of \(C\text{.}\) 2 Finally, \(Z\) is the union of three translated copies of \(C\text{.}\)
1 For example, \(A+\Set{}=\Set{}\text{,}\) which might seem counterintuitive for an “addition” operation.
2 If you want to stretch your mind, consider what \(C+C\) is as a set.
Figure 3.2.2.
Figure 3.2.3.
Figure 3.2.4.
Section 3.3 Translated Spans
Set addition allows us to easily create parallel lines and planes by translation. For example, consider the lines \(\ell_{1}\) and \(\ell_{2}\) given in vector form as \(\vec x=t\vec d\) and \(\vec x=t\vec d+\vec p\text{,}\) respectively, where \(\vec d=\mat{2\\1}\) and \(\vec p=\mat{-1\\1}\text{.}\) These lines differ from each other by a translation. That is, every point in \(\ell_{2}\) can be obtained by adding \(\vec p\) to a corresponding point in \(\ell_{1}\text{.}\) Using the idea of set addition, we can express this relationship by the equation
\begin{equation*}
\ell_{2}=\ell_{1}+\Set{\vec p}.
\end{equation*}
Note: it would be incorrect to write “\(\ell_{2}=\ell_{1}+\vec p\)”. Because \(\ell_{1}\) is a set and \(\vec p\) is not a set, “\(\ell_{1}+\vec p\)” does not make mathematical sense.
Example 3.3.3.
Recall \(\ell_{2}\subseteq\R^{2}\) is the line described by the equation \(4x-2y=6\text{.}\) Describe \(\ell_{2}\) as a translated span.
Solution.
We can express \(\ell_{2}\) in vector form with the equation
\begin{equation*}
\vec x=t\mat{1\\2}+\mat{0\\-3},
\end{equation*}
and so
\begin{equation*}
\ell_{2}=\Span\Set{\mat{1\\2}}+\Set{\mat{0\\-3}}.
\end{equation*}
We can now see translated spans provide an alternative notation to vector form for specifying lines and planes. If \(Q\) is described in vector form by
\begin{equation*}
\vec x=t_{1}\vec d_{1}+\cdots+t_{n}\vec d_{n}+\vec p,
\end{equation*}
then \(Q=\Span\Set{\vec d_1,\ldots,\vec d_n}+\Set{\vec p}\text{.}\)
Since \(\vec w=\vec u+\vec v\text{,}\) we know that \(\vec w\in\Span\Set{\vec u,\vec v}\text{.}\) Geometrically, this is also clear because \(\Span\Set{\vec u,\vec v}\) is the \(xy\)-plane in \(\R^{3}\) and \(\vec w\) lies on that plane.
What about \(\Span\Set{\vec u,\vec v,\vec w}\text{?}\) Intuitively, since \(\vec w\) is already a linear combination of \(\vec u\) and \(\vec v\text{,}\) we can’t get anywhere new by taking linear combinations of \(\vec u\text{,}\) \(\vec v\text{,}\) and \(\vec w\) compared to linear combinations of just \(\vec u\) and \(\vec v\text{.}\) So \(\Span\Set{\vec u,\vec v}=\Span\Set{\vec u,\vec v,\vec w}\text{.}\)
Can we prove this from the definitions? Yes! Suppose \(\vec r\in \Span\Set{\vec u,\vec v,\vec w}\text{.}\) By definition,
\begin{equation*}
\vec r=\alpha\vec u+\beta\vec v+\gamma\vec w
\end{equation*}
for some \(\alpha,\beta,\gamma\in\R\text{.}\) Since \(\vec w=\vec u+\vec v\text{,}\) we see
Thus, \(\Span\Set{\vec u,\vec v,\vec w}\subseteq \Span\Set{\vec u,\vec v}\text{.}\) Conversely, if \(\vec s\in\Span\Set{\vec u,\vec v}\text{,}\) by definition,
\begin{equation*}
\vec s=a\vec u+b\vec v=a\vec u+b\vec v+0\vec w
\end{equation*}
for some \(a,b\in \R\text{,}\) and so \(\vec s\in\Span\Set{\vec u,\vec v,\vec w}\text{.}\) Thus \(\Span\Set{\vec u,\vec v}\subseteq\Span\Set{\vec u,\vec v,\vec w}\text{.}\) We conclude \(\Span\Set{\vec u,\vec v}=\Span\Set{\vec u,\vec v,\vec w}\text{.}\)
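Redundancy can also be detected numerically: appending a redundant vector as a new matrix column leaves the rank unchanged. A sketch assuming NumPy, with the concrete (hypothetical) choice \(\vec u=\xhat\) and \(\vec v=\yhat\text{,}\) consistent with the \(xy\)-plane picture above:

```python
import numpy as np

# w = u + v is a linear combination of u and v, so adding it as a
# column does not increase the rank: Span{u, v} = Span{u, v, w}.
u = np.array([1, 0, 0])
v = np.array([0, 1, 0])
w = u + v
r2 = np.linalg.matrix_rank(np.column_stack([u, v]))
r3 = np.linalg.matrix_rank(np.column_stack([u, v, w]))
print(r2, r3)  # 2 2
```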
In this case, \(\vec w\) was a redundant vector—it wasn’t needed for the span. When a set contains a redundant vector, we call the set linearly dependent.
Definition 3.4.1. Linearly Dependent & Independent (Geometric).
The vectors \(\vec v_{1},\ldots,\vec v_{n}\) are linearly dependent if for at least one \(i\text{,}\) \(\vec v_{i}\in\Span\Set{\vec v_1,\ldots,\vec v_{i-1},\vec v_{i+1},\ldots,\vec v_n}\text{;}\) that is, if at least one vector is redundant. Otherwise, the vectors are called linearly independent.
We will also refer to sets of vectors (for example \(\Set{\vec v_1,\ldots,\vec v_n}\)) as being linearly independent or linearly dependent. For technical reasons, we didn’t state the definition in terms of sets. 1
1 The issue is, every element of a set is unique. Clearly, the vectors \(\vec v\) and \(\vec v\) are linearly dependent, but \(\Set{\vec v,\vec v}=\Set{\vec v}\text{,}\) and so \(\Set{\vec v,\vec v}\) is technically a linearly independent set. This issue would be resolved by talking about multisets instead of sets, but it isn’t worth the hassle.
The geometric definition of linear dependence says that the vectors \(\vec v_{1},\ldots,\vec v_{n}\) are linearly dependent if you can remove at least one vector without changing the span. In other words, \(\vec v_{1},\ldots,\vec v_{n}\) are linearly dependent if there is a redundant vector.
Example 3.4.2.
Let \(\vec a=\mat{1\\2}\text{,}\)\(\vec b=\mat{2\\3}\text{,}\)\(\vec c=\mat{4\\6}\text{,}\) and \(\vec d=\mat{4\\5}\text{.}\) Determine whether \(\Set{\vec a,\vec b,\vec c,\vec d}\) is linearly independent or linearly dependent.
Solution.
By inspection, we see \(\vec c=2\vec b\text{.}\) Therefore, \(\vec c\in\Span\Set{\vec a,\vec b,\vec d}\text{,}\) and so \(\vec c\) can be removed without changing the span. We conclude \(\Set{\vec a,\vec b,\vec c,\vec d}\) is linearly dependent.
Example 3.4.3.
The planes \(\mathcal{P}\) and \(\mathcal{Q}\) are given in vector form by \(\vec x=t\mat{1\\2\\1}+s\mat{2\\2\\1}\) and \(\vec x=t\mat{3\\4\\2}+s\mat{2\\2\\1}\text{,}\) respectively. Determine if \(\mathcal{P}\) and \(\mathcal{Q}\) are the same plane.
Solution.
We could answer this question using techniques from Module 2, but for variety, let’s see if we can answer the question using spans and linear dependence.
Let \(\vec a_{1}=\mat{1\\2\\1}\) and \(\vec a_{2}=\mat{2\\2\\1}\) be direction vectors for \(\mathcal{P}\) and let \(\vec b_{1}=\mat{3\\4\\2}\) and \(\vec b_{2}=\mat{2\\2\\1}\) be direction vectors for \(\mathcal{Q}\) and notice \(\mathcal{P}=\Span\Set{\vec a_1,\vec a_2}\) and \(\mathcal{Q}=\Span\Set{\vec b_1,\vec b_2}\text{.}\)
By definition \(\mathcal{P}=\mathcal{Q}\) if (i) every point in \(\mathcal{P}\) is a point in \(\mathcal{Q}\) and (ii) every point in \(\mathcal{Q}\) is a point in \(\mathcal{P}\text{.}\)
Focusing on (i), let \(\vec p=t\vec a_{1}+s\vec a_{2}\in \mathcal{P}\) be an arbitrary point in \(\mathcal{P}\text{.}\) We need to show \(\vec p\in\mathcal{Q}\text{.}\) Since \(\Set{\vec b_1,\vec b_2}\) is linearly independent and \(\mathcal{Q}=\Span\Set{\vec b_1,\vec b_2}\text{,}\) showing \(\vec p\in\mathcal{Q}\) is equivalent to showing \(\Set{\vec p,\vec b_1,\vec b_2}\) is a linearly dependent set.
Instead of analyzing \(\vec p\) directly, notice that the sets \(\Set{\vec a_1,\vec b_1,\vec b_2}\) and \(\Set{\vec a_2,\vec b_1,\vec b_2}\) are both linearly dependent sets: \(\vec a_{1}=\vec b_{1}-\vec b_{2}\in\Span\Set{\vec b_1,\vec b_2}\) and \(\vec a_{2}=\vec b_{2}\in\Span\Set{\vec b_1,\vec b_2}\text{.}\) Therefore, \(\vec a_{1}\in\mathcal{Q}\) and \(\vec a_{2}\in\mathcal{Q}\text{.}\) Since both \(\vec a_{1}\) and \(\vec a_{2}\) are in \(\mathcal{Q}\) and \(\mathcal{Q}=\Span\Set{\vec b_1, \vec b_2}\) is itself a span, we know that every linear combination of \(\vec a_{1}\) and \(\vec a_{2}\) must be in \(\mathcal{Q}\text{.}\) In particular, \(\vec p=t\vec a_{1}+s\vec a_{2}\in\mathcal{Q}\text{,}\) which is what we wanted to show.
We can show (ii) similarly by observing that \(\vec b_{1}\in\mathcal{P}\) and \(\vec b_{2}\in \mathcal{P}\) and so any point \(\vec q=t\vec b_{1}+s\vec b_{2}\in\mathcal{Q}\) must also be in \(\mathcal{P}\text{.}\)
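The containment arguments above can be mirrored numerically (a sketch assuming NumPy): a vector lies in the span of two independent columns exactly when appending it as a third column leaves the rank at \(2\text{.}\)

```python
import numpy as np

a1, a2 = np.array([1, 2, 1]), np.array([2, 2, 1])
b1, b2 = np.array([3, 4, 2]), np.array([2, 2, 1])
A = np.column_stack([a1, a2])
B = np.column_stack([b1, b2])

# Each direction vector of P lies in Span{b1, b2} and vice versa,
# so the two planes coincide.
same = all(
    np.linalg.matrix_rank(np.column_stack([M, x])) == 2
    for M, x in [(B, a1), (B, a2), (A, b1), (A, b2)]
)
print(same)  # True
```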
We can also think of linear independence/dependence from an algebraic perspective. Suppose the vectors \(\vec u\text{,}\) \(\vec v\text{,}\) and \(\vec w\) satisfy
\begin{equation}
\vec w=\vec u+\vec v.\tag{3.4.1}
\end{equation}
The set \(\Set{\vec u,\vec v,\vec w}\) is linearly dependent since \(\vec w\in\Span\Set{\vec u,\vec v}\text{,}\) but equation (3.4.1) can be rearranged to get
\begin{equation}
\vec u+\vec v-\vec w=\vec 0.\tag{3.4.2}
\end{equation}
Here we have expressed \(\vec 0\) as a linear combination of \(\vec u\text{,}\)\(\vec v\text{,}\) and \(\vec w\text{.}\) By itself, this is nothing special. After all, we know \(\vec 0=0\vec u+0\vec v+0\vec w\) is a linear combination of \(\vec u\text{,}\)\(\vec v\text{,}\) and \(\vec w\text{.}\) However, the right side of equation (3.4.2) has non-zero coefficients, which makes the linear combination non-trivial.
Definition 3.4.4. Trivial Linear Combination.
The linear combination \(\alpha_{1}\vec v_{1}+\cdots+\alpha_{n}\vec v_{n}\) is called trivial if \(\alpha_{1}=\cdots=\alpha_{n}=0\text{.}\) If at least one \(\alpha_{i}\neq 0\text{,}\) the linear combination is called non-trivial.
We can always write \(\vec 0\) as a linear combination of vectors if we let all the coefficients be zero, but it turns out we can only write \(\vec 0\) as a non-trivial linear combination of vectors if those vectors are linearly dependent. This is the inspiration for another definition of linear independence/dependence.
Definition 3.4.5. Linearly Dependent & Independent (Algebraic).
The vectors \(\vec v_{1},\vec v_{2},\ldots,\vec v_{n}\) are linearly dependent if there is a non-trivial linear combination of \(\vec v_{1},\ldots,\vec v_{n}\) that equals the zero vector. Otherwise they are linearly independent.
The idea of a “redundant vector” coming from the geometric definition of linear dependence is easy to visualize, but it can be hard to prove things with—checking for linear independence with the geometric definition involves verifying for every vector that it is not in the span of the others. The algebraic definition on the other hand is less obvious, but the reasoning is easier. You only need to analyze solutions to one equation!
Example 3.4.6.
Let \(\vec u=\mat{1\\2}\text{,}\)\(\vec v=\mat{2\\3}\text{,}\) and \(\vec w=\mat{4\\5}\text{.}\) Use the algebraic definition of linear independence to determine whether \(\Set{\vec u,\vec v,\vec w}\) is linearly independent or dependent.
Solution.
We need to determine if there is a non-trivial solution to
\begin{equation*}
x\vec u+y\vec v+z\vec w=x\mat{1\\2}+y\mat{2\\3}+z\mat{4\\5}=\vec 0,
\end{equation*}
which is equivalent to the system \(x+2y+4z=0\) and \(2x+3y+5z=0\text{.}\) This homogeneous system has two equations in three unknowns, so it has infinitely many solutions.
In particular, \((x,y,z)=(2,-3,1)\) is a non-trivial solution to this system. Therefore \(\Set{\vec u,\vec v,\vec w}\) is linearly dependent.
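We can verify the claimed solution directly; a minimal sketch assuming NumPy:

```python
import numpy as np

u, v, w = np.array([1, 2]), np.array([2, 3]), np.array([4, 5])
# The claimed non-trivial solution (x, y, z) = (2, -3, 1):
combo = 2 * u - 3 * v + 1 * w
print(combo)  # [0 0]
```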
Theorem 3.4.7.
The geometric and algebraic definitions of linear independence are equivalent.
Proof.
To show the two definitions are equivalent, we need to show that geometric \(\implies\) algebraic and algebraic \(\implies\) geometric.
(geometric \(\implies\) algebraic) Suppose \(\vec v_{1},\ldots,\vec v_{n}\) are linearly dependent by the geometric definition. That means that for some \(i\text{,}\) we have
\begin{equation*}
\vec v_{i}=\alpha_{1}\vec v_{1}+\cdots+\alpha_{i-1}\vec v_{i-1}+\alpha_{i+1}\vec v_{i+1}+\cdots+\alpha_{n}\vec v_{n}
\end{equation*}
for some scalars \(\alpha_{1},\ldots,\alpha_{n}\text{.}\) Rearranging, we get
\begin{equation*}
\alpha_{1}\vec v_{1}+\cdots+(-1)\vec v_{i}+\cdots+\alpha_{n}\vec v_{n}=\vec 0.
\end{equation*}
This must be a non-trivial linear combination because the coefficient of \(\vec v_{i}\) is \(-1\neq 0\text{.}\) Therefore, \(\vec v_{1},\ldots,\vec v_{n}\) is linearly dependent by the algebraic definition.
(algebraic \(\implies\) geometric) Suppose \(\vec v_{1},\ldots,\vec v_{n}\) are linearly dependent by the algebraic definition. That means there exist \(\alpha_{1},\ldots,\alpha_{n}\text{,}\) not all zero, so that
\begin{equation*}
\alpha_{1}\vec v_{1}+\cdots+\alpha_{n}\vec v_{n}=\vec 0.
\end{equation*}
Fix \(i\) so that \(\alpha_{i}\neq 0\text{.}\) Solving for \(\vec v_{i}\text{,}\) we see \(\vec v_{i}=-\frac{1}{\alpha_{i}}\sum_{j\neq i}\alpha_{j}\vec v_{j}\in\Span\Set{\vec v_j\given j\neq i}\text{,}\) and so \(\vec v_{i}\) can be removed without changing the span. Therefore, \(\vec v_{1},\ldots,\vec v_{n}\) are linearly dependent by the geometric definition.
Recall the vectors \(\vec u\text{,}\) \(\vec v\text{,}\) and \(\vec w=\vec u+\vec v\) from before, which satisfy the non-trivial relationship \(\vec u+\vec v-\vec w=\vec 0\text{.}\) Since \(\vec u+\vec v-\vec w=\vec 0\) is a non-trivial relationship giving \(\vec 0\text{,}\) we can use it to generate others. For example,
\begin{equation*}
2\vec u+2\vec v-2\vec w=\vec 0\qquad\text{and}\qquad -5\vec u-5\vec v+5\vec w=\vec 0
\end{equation*}
are both different non-trivial linear combinations that give \(\vec 0\text{.}\) In other words, if the equation \(\alpha\vec u+\beta \vec v+\gamma \vec w=\vec 0\) has a non-trivial solution, it has infinitely many non-trivial solutions. Conversely, if the equation \(\alpha\vec u+\beta \vec v+\gamma \vec w=\vec 0\) has infinitely many solutions, one of them has to be non-trivial!
Equations where one side is \(\vec 0\) show up often and are called homogeneous equations.
Definition 3.5.1. Homogeneous System.
A system of linear equations or a vector equation in the variables \(\alpha_{1}\text{,}\) …, \(\alpha_{n}\) is called homogeneous if it takes the form
\begin{equation*}
\alpha_{1}\vec v_{1}+\alpha_{2}\vec v_{2}+\cdots+\alpha_{n}\vec v_{n}=\vec 0.
\end{equation*}
Every homogeneous system is consistent, since setting every variable to zero gives a solution; consequently, a homogeneous system has a non-trivial solution if and only if it has infinitely many solutions. This fact has a practical application: suppose you wanted to decide if the vectors \(\vec a\text{,}\) \(\vec b\text{,}\) and \(\vec c\) were linearly dependent. You could (i) find a non-trivial solution to \(x\vec a+y\vec b+z\vec c=\vec 0\text{,}\) or (ii) merely show that \(x\vec a+y\vec b+z\vec c=\vec 0\) has more than one solution. Sometimes one is easier than the other.
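Option (i) can be carried out mechanically. One common numerical approach (a sketch, not the text's method) extracts a null-space vector from the singular value decomposition; here we reuse the dependent vectors \(\vec u\text{,}\) \(\vec v\text{,}\) \(\vec w\) from Example 3.4.6:

```python
import numpy as np

u, v, w = np.array([1.0, 2.0]), np.array([2.0, 3.0]), np.array([4.0, 5.0])
M = np.column_stack([u, v, w])      # 2x3, so a free variable exists
coeffs = np.linalg.svd(M)[2][-1]    # right-singular vector spanning
                                    # the null space of M
assert np.allclose(M @ coeffs, 0)   # a homogeneous solution...
assert not np.allclose(coeffs, 0)   # ...that is non-trivial
```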
The equation
\begin{equation*}
\vec x=t_{1}\vec d_{1}+t_{2}\vec d_{2}
\end{equation*}
represents a plane in vector form whenever \(\vec d_{1}\) and \(\vec d_{2}\) are non-zero, non-parallel vectors. In other words, \(\vec x=t_{1}\vec d_{1}+t_{2}\vec d_{2}\) represents a plane whenever \(\Set{\vec d_1,\vec d_2}\) is linearly independent.
Does this reasoning work for lines too? The equation
\begin{equation*}
\vec x=t\vec d
\end{equation*}
represents a line in vector form precisely when \(\vec d\neq \vec 0\text{.}\) And \(\Set{\vec d}\) is linearly independent exactly when \(\vec d\neq \vec 0\text{.}\)
This reasoning generalizes to volumes. The equation
\begin{equation*}
\vec x=t_{1}\vec d_{1}+t_{2}\vec d_{2}+t_{3}\vec d_{3}
\end{equation*}
represents a volume in vector form exactly when \(\Set{\vec d_1,\vec d_2,\vec d_3}\) is linearly independent. To see this, suppose \(\Set{\vec d_1,\vec d_2,\vec d_3}\) were linearly dependent. That means one or more vectors could be removed from \(\Set{\vec d_1,\vec d_2,\vec d_3}\) without changing its span. Therefore, if \(\Set{\vec d_1,\vec d_2,\vec d_3}\) is linearly dependent, \(\vec x=t_{1}\vec d_{1}+t_{2}\vec d_{2}+t_{3}\vec d_{3}\) at best represents a plane (though it could be a line or a point).
We now have a way of testing the validity of a vector-form representation of a line/plane/volume. Just check whether the chosen direction vectors are linearly independent!
Takeaway 3.6.1.
When writing an object in vector form, the direction vectors must always be linearly independent.
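This check is easy to automate. A sketch (assuming NumPy; the helper name `directions_ok` is our own): direction vectors are linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors.

```python
import numpy as np

def directions_ok(*ds):
    """True if ds could serve as direction vectors in a vector form."""
    return np.linalg.matrix_rank(np.column_stack(ds)) == len(ds)

print(directions_ok([1, 0, 0], [0, 1, 0]))   # True: a valid plane
print(directions_ok([1, 2, 1], [2, 4, 2]))   # False: parallel directions
```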
Exercises 3.7
1.
Let \(A=\Set{\mat{1\\2\\0},\mat{0\\1\\0},\mat{1\\1\\0}}\text{.}\)
Is \(A\) linearly independent or dependent?
Describe the span of \(A\text{.}\)
Can \(A\) be extended (i.e., can vectors be added to \(A\)) so that \(A\) spans all of \(\R^{3}\text{?}\)
Solution.
\(A\) is linearly dependent.
\(\Span(A)\) is the plane \(\Set{\mat{x \\y \\0}\given x,y\in\R}\text{.}\)
Yes. If \(A'=A\cup\Set{\mat{0\\0\\1}}\text{,}\) then \(\Span(A')=\R^{3}\text{.}\)
2.
For each set below, determine whether it spans a point, line, plane, volume, or other.
3.
For each set in question 3.7.2, determine whether it is linearly independent or dependent.
Is the set \(\Set{\mat{1\\2\\3},\mat{5\\6\\7}, \mat{9\\10\\11}, \mat{13\\14\\15}}\) linearly independent or dependent?
Can you find a set of \(n+1\) vectors in \(\R^{n}\) that is linearly independent? Explain.
Solution.
Linearly Independent
Linearly Dependent
Linearly Independent
Linearly Dependent
Linearly Independent
Linearly Independent
Linearly Dependent
Linearly Independent
Linearly Independent
Linearly Dependent
Linearly Dependent
No. The solutions to the vector equation \(\alpha_{1}\vec x_{1}+\alpha_{2}\vec x_{2}+\cdots+\alpha_{n+1}\vec x_{n+1}=\vec 0\) for \(\alpha_{1},\alpha_{2},\ldots,\alpha_{n+1}\in \R\) are the solutions to a system of \(n\) equations in \(n+1\) variables. This system is consistent since \(\alpha_{1}=\alpha_{2}=\cdots=\alpha_{n+1}=0\) is a solution. The reduced row echelon form of the corresponding augmented matrix has at least one free variable column since there are more columns than rows. Hence there are infinitely many solutions, and in particular there exists a non-trivial solution to the above vector equation.
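The counting argument in this solution can be illustrated numerically (a sketch assuming NumPy; the data is random and hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 4))      # 4 vectors in R^3, as columns
rank = np.linalg.matrix_rank(M)      # at most 3: more columns than rows
coeffs = np.linalg.svd(M)[2][-1]     # a null-space vector of M, giving
                                     # a non-trivial dependence
print(rank, np.allclose(M @ coeffs, 0))
```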
4.
If possible, express the following lines in \(\R^{2}\) as spans. Otherwise, justify why the line cannot be expressed as a span.
\(\displaystyle x=0\)
\(\displaystyle 2x+3y=0\)
\(\displaystyle 5x-4y=0\)
\(\displaystyle -x-y=-1\)
\(\displaystyle 9x-15y=8\)
For each line in question 3.7.4.a that cannot be expressed as a span, express it as a translated span.
Each equation below specifies a line or a plane in \(\R^{3}\text{.}\) If possible, express the specified line or plane as a span. Otherwise, justify why it cannot be expressed as a span.
\(\displaystyle 2x-y+z=4\)
\(\displaystyle x+6y-z=0\)
\(\displaystyle x+3z=0\)
\(\displaystyle y=1\)
\(x=0\) and \(z=0\)
\(2x-y=2\) and \(z=-1\)
For lines or planes in question 3.7.4.c that cannot be expressed as spans, express as a translated span.
Solution.
\(\displaystyle \Span\Set{\mat{0\\-1}}\)
\(\displaystyle \Span\Set{\mat{3\\-2}}\)
\(\displaystyle \Span\Set{\mat{4\\5}}\)
Not possible since \(\mat{0\\0}\) is not on the line.
Not possible since \(\mat{0\\0}\) is not on the line.
Determine if the following planes, expressed in vector form, are the same plane.
\(\vec x = t\mat{1\\2}+ s\mat{2\\7}\) and \(\vec x = t\mat{3\\5}+ s\mat{8\\4}\text{.}\)
\(\vec x = t\mat{2\\2\\3}+ s\mat{1\\0\\5}\) and \(\vec x = t\mat{1\\0\\5}+ s\mat{4\\2\\13}\text{.}\)
\(\vec x = t\mat{1\\2\\1}+ s\mat{2\\2\\1}\) and \(\vec x = t\mat{0\\1\\0}+ s\mat{1\\2\\1}\text{.}\)
Solution.
Same plane (\(\R^{2}\)).
Same plane (\(2\mat{1\\0\\5}+ \mat{2\\2\\3}= \mat{4\\2\\13}\)).
Different plane (\(\mat{2\\2\\1}\) is not in the second plane).
6.
Show that the set \(\Set{\mat{2\\0\\7},\mat{1\\1\\1},\mat{6\\4\\11}}\) is linearly dependent in two ways. First, using the geometric definition of linear dependence and then using the algebraic definition.
Solution.
The set is linearly dependent since: \(\mat{6\\4\\11}= \mat{2\\0\\7}+ 4\mat{1\\1\\1}\) (geometric) or \(\mat{2\\0\\7}+ 4\mat{1\\1\\1}+ (-1)\mat{6\\4\\11}=\vec 0\) (algebraic).
7.
Choose vectors \(\vec p\text{,}\)\(\vec d_{1}\text{,}\)\(\vec d_{2}\text{,}\)\(\vec d_{3}\) in \(\R^{4}\) such that the vector equation \(\vec x = t_{1}\vec d_{1}+ t_{2}\vec d_{2}+ t_{3}\vec d_{3}+\vec p\) specifies:
8.
Classify the sets \(A=\Set{}\) and \(B=\Set{\vec 0}\) as linearly independent or dependent.
Solution.
The set \(A\) is linearly independent. Since \(A\) contains no vectors, there is no way to write \(\vec 0\) as a non-trivial linear combination of vectors in \(A\text{.}\) However, \(B\) is linearly dependent, since \(\vec 0=7\vec 0\) is a non-trivial linear combination of vectors in \(B\) that gives the zero vector.
9.
Let \(S=\Set{\mat{1\\3},\mat{0\\-1}}\) and let \(T=\Set{\mat{-1\\-1},\mat{0\\2},\mat{0\\0}}\text{.}\) Draw the sets \(S\text{,}\)\(T\text{,}\) and \(T+S\text{.}\)
Solution.
Figure 3.7.1.
Figure 3.7.2.
Figure 3.7.3.
10.
Let \(S=\Set{\mat{1\\1},\mat{0\\-1}, \mat{0\\0}}\text{.}\)
Draw \(S\text{,}\)\(S+S\text{,}\) and \((S+S)+S\text{.}\)
Is \((S+S)+S=S+(S+S)\text{?}\) Does the expression \(S+S+S\) make sense?
Draw \(S+S+S+S+\cdots\text{.}\)
Solution.
The set \(S\text{:}\)
Figure 3.7.4.
The set \(S+S\text{:}\)
Figure 3.7.5.
The set \((S+S)+S\text{:}\)
Figure 3.7.6.
The sets \((S+S)+S\) and \(S+(S+S)\) are equal since vector addition is associative. This means the sum is well-defined regardless of how we group it, so we can drop the parentheses and the expression \(S+S+S\) makes sense.
Figure 3.7.7.
Notice that we obtain the orange points when we add an additional \(S\) to the set sum. Since we are considering the infinite sum \(S+S+S+S+\cdots\text{,}\) the process continues infinitely.
11.
Let \(D\subseteq\R^{2}\) be the unit disk centered at the origin and let \(L\subseteq \R^{2}\) be the line segment from \((0,0)\) to \((0,2)\text{.}\)
How many points are in \(D\text{,}\)\(L\text{,}\) and \(D+L\text{?}\)
Draw \(D+L\text{.}\)
Find the area of \(D+L\text{.}\)
Suppose \(S\subseteq\R^{2}\) makes a smiley face when drawn and the “thickness” of each line composing this smiley face is \(0.01\) units. Can you find a set \(A\) so that the set \(S+A\) represents a smiley face where the lines have a thickness of \(0.05\text{?}\) If so, give an example of such an \(A\text{.}\) Otherwise, explain why it is impossible.
Solution.
There are infinitely many points in each of \(D\text{,}\) \(L\text{,}\) and \(D+L\text{.}\)
Figure 3.7.8.
\(D+L\) can be decomposed into two half-disks of radius \(1\) and a square with side length \(2\text{.}\) The area of \(D+L\) is then \(\pi+4\text{.}\)
We can take \(A\) to be a circle of radius \(0.02\) units.
12.
Let \(\vec v_{1}, \vec v_{2}, \vec v_{3}\) be vectors. For each of the following statements, justify whether the statement is true or false.
If \(\vec v_{1}\) can be written as a linear combination of \(\vec v_{2}\) and \(\vec v_{3}\text{,}\) then \(\Set{\vec v_1,\vec v_2,\vec v_3}\) is linearly dependent.
If \(\Set{\vec v_1,\vec v_2,\vec v_3}\) is linearly dependent, then \(\vec v_{1}\) can be written as a linear combination of \(\vec v_{2}\) and \(\vec v_{3}\text{.}\)
If \(\vec v_{1}=k\vec v_{2}\) for some real number \(k\text{,}\) then \(\Set{\vec v_1,\vec v_2}\) is linearly dependent.
If \(\vec v_{1}\) is not a scalar multiple of \(\vec v_{2}\text{,}\) then \(\Set{\vec v_1,\vec v_2,\vec v_3}\) is linearly independent.
All spans contain \(\vec 0\text{.}\)
Solution.
True. This follows from the geometric definition of linear dependence.
False. \(\Set{\mat{1\\0},\mat{0\\0},\mat{0\\1}}\) is linearly dependent, but \(\mat{1\\0}\) is not a linear combination of \(\mat{0\\0}\) and \(\mat{0\\1}\text{.}\)
True. \(\vec{v}_{1}\) is a linear combination of \(\vec{v}_{2}\) and so \(\Set{\vec {v}_{1},\vec{v}_{2}}\) is linearly dependent by the definition of linear dependence.
False. \(\Set{\mat{1\\0},\mat{0\\0},\mat{0\\1}}\) is linearly dependent, but \(\mat{1\\0}\) is not a scalar multiple of \(\mat{0\\1}\text{.}\)
True. The linear combination of any finite set with all coefficients zero is \(\vec{0}\text{.}\)