Linear Algebra and Its Applications, Exercise 2.3.15

Exercise 2.3.15. If the vector space V has dimension k, show that

a) if a set of k vectors in V is linearly independent then that set forms a basis

b) if a set of k vectors in V spans V then that set forms a basis

Answer: a) Assume that we have a set of linearly independent vectors v_1, \dotsc, v_k in V, and suppose that this set does not span V. By theorem 2L (page 86) we can then extend the set to form a basis by adding additional vectors v_{k+1}, \dotsc, v_{k+p}. But if this expanded set of vectors is a basis for V, then the dimension of V is (by definition) k+p > k, which contradicts the assumption that the dimension of V is k. Therefore the linearly independent set v_1, \dotsc, v_k must span V and is a basis for it.

b) Assume that we have a set of vectors w_1, \dotsc, w_k that spans V, and suppose that this set is not linearly independent (and thus not a basis). By theorem 2L (page 86) we can reduce this set to form a basis by removing one or more vectors, leaving a new set of l vectors with l < k. But since the dimension of V is k, there must exist some set of k vectors that is a basis for V. If the reduced set of l vectors is a basis for V and the other set of k vectors is also a basis, then by theorem 2K we must have l = k, not l < k. We therefore conclude that the spanning set w_1, \dotsc, w_k is in fact linearly independent and is thus a basis for V.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.3.14

Exercise 2.3.14. Suppose we have the following matrix

A = \begin{bmatrix} 1&2&1 \\ 0&0&4 \end{bmatrix}

How can you extend the rows of A to create a basis for \mathbf{R}^3? How can you reduce the columns of A to create a basis for \mathbf{R}^2?

Answer: As defined A is in echelon form but has only two pivots, in the first and third columns. We can add another row to A to provide a pivot for the second column:

A' = \begin{bmatrix} 1&2&1 \\ 0&1&0 \\ 0&0&4 \end{bmatrix}

Since A' is in echelon form and has pivots in every column, the columns are linearly independent. Since A' also has pivots in every row the rows are also linearly independent. (See exercise 2.3.5.) The three rows also span \mathbf{R}^3 with any vector v = (v_1, v_2, v_3) in \mathbf{R}^3 expressible as

v = v_1 \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} + (v_2 - 2v_1) \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + \frac{1}{4} (v_3 - v_1) \begin{bmatrix} 0 \\ 0 \\ 4 \end{bmatrix}

Since the three rows of A' are linearly independent and span \mathbf{R}^3 they form a basis for \mathbf{R}^3.
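As a quick numerical check (using NumPy; this is not part of the original exercise), we can confirm that the three rows of A' are linearly independent and verify the expansion formula for a sample vector:

```python
import numpy as np

# Rows of A' as the rows of a 3x3 matrix; a nonzero determinant
# confirms they are linearly independent, hence a basis for R^3.
rows = np.array([[1, 2, 1],
                 [0, 1, 0],
                 [0, 0, 4]], dtype=float)
assert np.linalg.det(rows) != 0

# Verify the expansion of an arbitrary sample vector v in this basis.
v = np.array([5.0, -2.0, 7.0])
v1, v2, v3 = v
combo = v1 * rows[0] + (v2 - 2*v1) * rows[1] + (v3 - v1) / 4 * rows[2]
assert np.allclose(combo, v)
```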

Turning to the columns of A, since A has three columns but only two pivots (in the first and third columns) the three columns must be linearly dependent, and in fact the second column is twice the first. We can therefore remove the second column to form the following 2 by 2 matrix:

A'' = \begin{bmatrix} 1&1 \\ 0&4 \end{bmatrix}

Since A'' is in echelon form and has pivots in all columns the columns are linearly independent. They also span \mathbf{R}^2 with any vector w = (w_1, w_2) expressible as

w = (w_1 - \frac{1}{4}w_2) \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \frac{1}{4}w_2 \begin{bmatrix} 1 \\ 4 \end{bmatrix}

Since the two columns of A'' are linearly independent and span \mathbf{R}^2 they form a basis for \mathbf{R}^2.
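A similar sanity check (again with NumPy, not part of the exercise) verifies the expansion formula for the columns of A'' on a sample vector:

```python
import numpy as np

# Columns of A''; check the expansion formula for a sample w in R^2.
c1 = np.array([1.0, 0.0])
c2 = np.array([1.0, 4.0])
w = np.array([3.0, 10.0])
w1, w2 = w
combo = (w1 - w2/4) * c1 + (w2/4) * c2
assert np.allclose(combo, w)
```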

Linear Algebra and Its Applications, Exercise 2.3.13

Exercise 2.3.13. What are the dimensions of the following spaces?

a) vectors in \mathbf{R}^4 with components that sum to zero

b) the nullspace associated with the 4 by 4 identity matrix

c) the space of all 4 by 4 matrices

Answer: a) If the components of a vector v sum to zero then we have

v_1 + v_2 + v_3 + v_4 = 0 \rightarrow v_4 = -v_1 - v_2 - v_3

Such a vector v can be expressed as the linear combination of three vectors as follows:

v = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ -v_1 - v_2 - v_3 \end{bmatrix} = v_1 \begin{bmatrix} 1 \\ 0 \\ 0 \\ -1 \end{bmatrix} + v_2 \begin{bmatrix} 0 \\ 1 \\ 0 \\ -1 \end{bmatrix} + v_3 \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix}

The three vectors are linearly independent (as can be easily seen by doing elimination on a matrix whose columns are the vectors) and since they span the space they are a basis for it. The dimension of the space is therefore 3.
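The elimination argument can be confirmed numerically (a NumPy check, not part of the original exercise): the matrix whose columns are the three vectors has rank 3, and each vector's components sum to zero, so each lies in the subspace:

```python
import numpy as np

# Columns of B are the three vectors spanning the sum-to-zero subspace.
B = np.array([[1, 0, 0, -1],
              [0, 1, 0, -1],
              [0, 0, 1, -1]], dtype=float).T
assert np.linalg.matrix_rank(B) == 3   # linearly independent

# Each column sums to zero, so each basis vector is in the subspace.
assert np.allclose(B.sum(axis=0), 0)
```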

b) The nullspace of the 4 by 4 identity matrix I contains solutions to Ix = 0 or

\begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}

This is equivalent to

x_1 \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}

But since these vectors are linearly independent the only solution is x_1 = x_2 = x_3 = x_4 = 0 or x = (0, 0, 0, 0). The nullspace consists only of the zero vector and (by convention) its dimension is zero.
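A brief numerical confirmation (NumPy, not part of the exercise) that the identity has full rank, so its nullspace has dimension 4 - 4 = 0:

```python
import numpy as np

# The 4x4 identity has full rank, so its nullspace is {0}.
I = np.eye(4)
assert np.linalg.matrix_rank(I) == 4

# Solving Ix = 0 (least squares) returns only the zero vector.
x, *_ = np.linalg.lstsq(I, np.zeros(4), rcond=None)
assert np.allclose(x, 0)
```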

c) The space of all 4 by 4 matrices has as a basis the set of matrices in which one element is one and the rest are zero; for example

\begin{bmatrix} 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix} \quad \begin{bmatrix} 0&1&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix}

\cdots \quad \begin{bmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&1&0 \end{bmatrix} \quad \begin{bmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&1 \end{bmatrix}

There are 16 such matrices in this basis, so the dimension of the space is 16.

Linear Algebra and Its Applications, Exercise 2.3.12

Exercise 2.3.12. Suppose that the set of vectors v_1, v_2, v_3, and v_4 is a basis for \mathbf{R}^4 and that W is a subspace of \mathbf{R}^4. Provide a counterexample to the conjecture that some subset of v_1, v_2, v_3, and v_4 is necessarily a basis for W.

Answer: Suppose that v_1, v_2, v_3, and v_4 are equal to the vectors (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), and (0, 0, 0, 1), and suppose that W is the subspace consisting of all vectors whose first two elements are equal to each other and whose last two elements are equal to each other; i.e., vectors in W are of the form (a, a, c, c). (It is fairly simple to verify that W is in fact a subspace, so I omit that here.) In this case none of the vectors v_1, v_2, v_3, and v_4 is in the subspace W, so none of them can be part of a basis for W.

Instead we could use, for example, the vectors (1, 1, 0, 0) and (0, 0, 1, 1) as a basis for the subspace W, with any vector w = (a, a, c, c) in the subspace expressible as

w = a \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} + c \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}
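As a quick check (using NumPy, not part of the original exercise), we can verify the representation for a sample vector in W and confirm that a standard basis vector such as e_1 does not have the (a, a, c, c) pattern:

```python
import numpy as np

# The two proposed basis vectors for W.
u = np.array([1, 1, 0, 0])
v = np.array([0, 0, 1, 1])

# A sample vector (a, a, c, c) in W is the combination a*u + c*v.
a, c = 2.0, -3.0
w = np.array([a, a, c, c])
assert np.allclose(a*u + c*v, w)

# The standard basis vector e_1 fails the (a, a, c, c) pattern.
e1 = np.array([1, 0, 0, 0])
assert not (e1[0] == e1[1] and e1[2] == e1[3])
```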

Linear Algebra and Its Applications, Exercise 2.3.11

Exercise 2.3.11. Consider the subspace of \mathbf{R}^3 consisting of all vectors whose first two components are equal. Find two different bases for this subspace.

Answer: All vectors in the subspace are of the form (a, a, c). One basis for the subspace consists of the vectors (1, 1, 0) and (0, 0, 1) with any vector v = (a, a, c) in the subspace expressible as

v = a \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + c \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

Another basis for the subspace consists of the vectors (1, 1, 1) and (0, 0, -1) with any vector v = (a, a, c) in the subspace expressible as

v = a \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + (a - c) \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix}
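Both representations can be checked numerically (a NumPy sketch, not part of the exercise) on a sample vector (a, a, c):

```python
import numpy as np

# A sample vector (a, a, c) in the subspace.
a, c = 3.0, -5.0
v = np.array([a, a, c])

# First basis: (1, 1, 0) and (0, 0, 1) with weights a and c.
assert np.allclose(a * np.array([1, 1, 0]) + c * np.array([0, 0, 1]), v)

# Second basis: (1, 1, 1) and (0, 0, -1) with weights a and a - c.
assert np.allclose(a * np.array([1, 1, 1]) + (a - c) * np.array([0, 0, -1]), v)
```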

Linear Algebra and Its Applications, Exercise 2.3.10

Exercise 2.3.10. The set of all 2 by 2 matrices forms a vector space under the standard rules for adding two matrices and multiplying a matrix by a scalar. Find a basis for the space and describe the subspace spanned by the set of all matrices U in echelon form.

Answer: Any 2 by 2 matrix

A = \begin{bmatrix} a&b \\ c&d \end{bmatrix}

can be represented as a linear combination of four matrices as follows:

A = \begin{bmatrix} a&b \\ c&d \end{bmatrix} = a \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} + b \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} + c \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix} + d \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix}

These four matrices are linearly independent and form a basis for the space of all 2 by 2 matrices.
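The decomposition can be verified numerically (NumPy, not part of the original exercise) by rebuilding a sample matrix from the four basis matrices:

```python
import numpy as np

# The four basis matrices, each with a single 1 entry.
E11 = np.array([[1, 0], [0, 0]])
E12 = np.array([[0, 1], [0, 0]])
E21 = np.array([[0, 0], [1, 0]])
E22 = np.array([[0, 0], [0, 1]])

# Reconstruct a sample 2x2 matrix from the basis.
a, b, c, d = 2.0, -1.0, 3.0, 5.0
A = a*E11 + b*E12 + c*E21 + d*E22
assert np.allclose(A, [[a, b], [c, d]])
```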

The set of 2 by 2 echelon matrices consists of all matrices U of the form

\begin{bmatrix} a&b \\ 0&d \end{bmatrix}

where any of a, b, or d may be zero. The subspace spanned is the set of all upper triangular matrices, and the three matrices

\begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} \qquad \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} \qquad \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix}

serve as a basis for the subspace.

Linear Algebra and Its Applications, Exercise 2.3.9

Exercise 2.3.9. Give a basis for the column space of the matrix

U = \begin{bmatrix} 0&1&4&3 \\ 0&0&2&2 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix}

and express the other columns of U in terms of it. Find a matrix A that is reduced by elimination to the same U but has a different column space than U.

Answer: The pivots of U are in the second and third columns, so those columns are linearly independent and can serve as a basis for the column space. The first column is the zero vector (the trivial combination with both weights zero), and we can express the fourth column as a linear combination of the second and third columns as follows:

\begin{bmatrix} 3 \\ 2 \\ 0 \\ 0 \end{bmatrix} = - \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 4 \\ 2 \\ 0 \\ 0 \end{bmatrix}

We can construct a matrix A for which elimination produces U simply by having the first two rows of A be the same as those of U and then making the third row of A equal to the second row of U instead of being set to zeros:

A = \begin{bmatrix} 0&1&4&3 \\ 0&0&2&2 \\ 0&0&2&2 \\ 0&0&0&0 \end{bmatrix}

The column space of A is spanned by the basis vectors (1, 0, 0, 0) and (4, 2, 2, 0) and is different from the column space of U: every vector in the column space of U has third component zero, while (4, 2, 2, 0) does not.
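A numerical check of the column-space claims (NumPy, not part of the exercise): U has rank 2, its pivot columns are independent, and the fourth column is the stated combination of the second and third:

```python
import numpy as np

U = np.array([[0, 1, 4, 3],
              [0, 0, 2, 2],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)

# The pivot columns (second and third) are a basis for the column space.
assert np.linalg.matrix_rank(U[:, [1, 2]]) == 2

# Fourth column = -(second column) + (third column); first column is zero.
assert np.allclose(U[:, 3], -U[:, 1] + U[:, 2])
assert np.allclose(U[:, 0], 0)
```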

Linear Algebra and Its Applications, Exercise 2.3.8

Exercise 2.3.8. Describe the column space of the matrix

A = \begin{bmatrix} 1&2 \\ 3&6 \end{bmatrix}

and give a basis for it. Do the same for A^2.

Answer: The second column of A is twice the first column, so that the two vectors are linearly dependent. The column space consists of any vector of the form c (1, 3) where c is any real number; geometrically the column space is a line passing through the origin and the point (1, 3). The vector (1, 3) serves as a basis for the space.

We have

A^2 = \begin{bmatrix} 1&2 \\ 3&6 \end{bmatrix} \begin{bmatrix} 1&2 \\ 3&6 \end{bmatrix} = \begin{bmatrix} 7&14 \\ 21&42 \end{bmatrix}

Again we have the second column equal to twice the first, so the two vectors are linearly dependent. Also, we have (7, 21) = 7(1, 3) so that the first column of A^2 is a linear combination of the first column of A. The matrix A^2 therefore has the same column space as A and the vector (1, 3) can serve as its basis.
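As a quick confirmation (using NumPy; not part of the original exercise), both A and A^2 have rank 1, and the columns of A^2 are multiples of (1, 3):

```python
import numpy as np

A = np.array([[1, 2], [3, 6]], dtype=float)
A2 = A @ A
assert np.allclose(A2, [[7, 14], [21, 42]])

# Both matrices have rank 1, and (7, 21) = 7 * (1, 3).
assert np.linalg.matrix_rank(A) == 1 == np.linalg.matrix_rank(A2)
assert np.allclose(A2[:, 0], 7 * np.array([1, 3]))
```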

Linear Algebra and Its Applications, Exercise 2.3.7

Exercise 2.3.7. For each of the following, state whether the vector b is in the subspace spanned by w_1, \dotsc, w_l. (Construct a matrix A with w_1, \dotsc, w_l as the columns, and try to solve Ax = b.)

a) w_1 = (1, 1, 0), w_2 = (2, 2, 1), w_3 = (0, 0, 2), b = (3, 4, 5)

b) w_1 = (1, 2, 0), w_2 = (2, 5, 0), w_3 = (0, 0, 2), w_4 = (0, 0, 0), any b

Answer: a) We construct the matrix

A = \begin{bmatrix} 1&2&0 \\ 1&2&0 \\ 0&1&2 \end{bmatrix}

and do Gaussian elimination on the left and right hand sides of Ax = b:

\begin{bmatrix} 1&2&0&\vline&3 \\ 1&2&0&\vline&4 \\ 0&1&2&\vline&5 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&0&\vline&3 \\ 0&0&0&\vline&1 \\ 0&1&2&\vline&5 \end{bmatrix}

\Rightarrow \begin{bmatrix} 1&2&0&\vline&3 \\ 0&1&2&\vline&5 \\ 0&0&0&\vline&1 \end{bmatrix}

From the third equation we have the contradiction 0 = 1 and we conclude that Ax = b has no solution in this case. The vector b = (3, 4, 5) is therefore not in the subspace spanned by w_1, w_2,  and w_3.
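This conclusion can be confirmed numerically (NumPy, not part of the exercise) via the standard rank test: Ax = b is solvable exactly when the augmented matrix has the same rank as A:

```python
import numpy as np

# Columns of A are w_1, w_2, w_3.
A = np.array([[1, 2, 0],
              [1, 2, 0],
              [0, 1, 2]], dtype=float)
b = np.array([3.0, 4.0, 5.0])

# Ax = b is solvable iff rank(A) == rank([A | b]).
Ab = np.column_stack([A, b])
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(Ab) == 3   # larger, so b is not in the span
```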

b) We construct the matrix

A = \begin{bmatrix} 1&2&0&0 \\ 2&5&0&0 \\ 0&0&2&0 \end{bmatrix}

and do Gaussian elimination on the left and right hand sides of Ax = b:

\begin{bmatrix} 1&2&0&0&\vline&b_1 \\ 2&5&0&0&\vline&b_2 \\ 0&0&2&0&\vline&b_3 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&0&0&\vline&b_1 \\ 0&1&0&0&\vline&b_2 - 2b_1 \\ 0&0&2&0&\vline&b_3 \end{bmatrix}

The pivots are in columns 1 through 3 and the basic variables are x_1, x_2, and x_3 with x_4 being a free variable. The first three columns of the original matrix are thus linearly independent; the fourth column is a linear combination of the others. Since the first three columns w_1, w_2, and w_3 are vectors in \mathbf{R}^3 and are linearly independent, they span all of \mathbf{R}^3 and thus any vector b = (b_1, b_2, b_3) can be represented as a linear combination of w_1, w_2, and w_3 using suitable weights.

Going further, from the third equation we have 2x_3 = b_3 or x_3 = b_3/2. From the second equation we have x_2 = b_2 - 2b_1. From the first equation we have

x_1 + 2x_2 = b_1 \rightarrow x_1 + 2(b_2 - 2b_1) = b_1

\rightarrow x_1 + 2b_2 -4b_1 = b_1 \rightarrow x_1 = 5b_1 - 2b_2

The equation Ax = b is equivalent to

\begin{bmatrix} 1&2&0&0 \\ 2&5&0&0 \\ 0&0&2&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}

\rightarrow x_1 \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} 2 \\ 5 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} 0 \\ 0 \\ 2 \end{bmatrix} + x_4 \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}

Dropping the zero vector w_4 we can therefore represent any vector b in \mathbf{R}^3 as a linear combination of w_1, w_2, and w_3 as follows:

\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = (5b_1 - 2b_2) \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} + (b_2 - 2b_1) \begin{bmatrix} 2 \\ 5 \\ 0 \end{bmatrix} + (b_3/2) \begin{bmatrix} 0 \\ 0 \\ 2 \end{bmatrix}
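The weight formulas can be verified numerically (a NumPy check, not part of the original exercise) on a sample b:

```python
import numpy as np

w1 = np.array([1.0, 2.0, 0.0])
w2 = np.array([2.0, 5.0, 0.0])
w3 = np.array([0.0, 0.0, 2.0])

# A sample vector b in R^3; apply the derived weights.
b = np.array([4.0, -1.0, 6.0])
b1, b2, b3 = b
combo = (5*b1 - 2*b2) * w1 + (b2 - 2*b1) * w2 + (b3/2) * w3
assert np.allclose(combo, b)
```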

Linear Algebra and Its Applications, Exercise 2.3.6

Exercise 2.3.6. What are the geometric entities (e.g., line, plane, etc.) spanned by the following sets of vectors:

a) (0, 0, 0), (0, 1, 0), and (0, 2, 0)

b) (0, 0, 1), (0, 1, 1), and (0, 2, 1)

c) the combined set of six vectors above (which vectors are a basis for the space spanned?)

d) all vectors with positive components

Answer: a) Both the first vector and the third vector are multiples of the second vector: we have (0, 0, 0) = 0 \cdot (0, 1, 0) and (0, 2, 0) = 2 \cdot (0, 1, 0), so the space spanned by the set of vectors is simply the line passing through the origin and the point (0, 1, 0). This line is the y axis in \mathbf{R}^3.

b) Since all of the vectors have first (x) component equal to zero, they lie in the yz plane. There is no scalar c for which (0, 1, 1) = c (0, 0, 1), so those two vectors are linearly independent; the third vector, however, is a nontrivial linear combination of the first two: (0, 2, 1) = 2(0, 1, 1) - (0, 0, 1). Together the first two vectors span the yz plane, since any vector (0, y, z) equals y(0, 1, 1) + (z - y)(0, 0, 1).

c) The vectors from (a) can all be expressed as linear combinations of the first two vectors from (b): (0, 0, 0) = 0 (0, 1, 1), (0, 1, 0) = (0, 1, 1) - (0, 0, 1), and (0, 2, 0) = 2(0, 1, 1) - 2(0, 0, 1). Thus the space spanned by all six vectors is the same as that spanned by the vectors in (b), namely the yz plane, and the first two vectors from (b) are a basis for the space.

d) Among the vectors with all components positive are (1, 1, 1), (1, 2, 1), and (1, 1, 2). These three vectors are linearly independent, so they span \mathbf{R}^3 and serve as a basis for the space. The set of all vectors with positive components therefore spans \mathbf{R}^3 (even though that set is not itself a subspace).
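The independence of three such strictly positive vectors can be checked numerically (NumPy, not part of the exercise; the particular vectors here are just one convenient choice):

```python
import numpy as np

# Three vectors with strictly positive components; a nonzero determinant
# shows they are linearly independent, hence they span all of R^3.
P = np.array([[1, 1, 1],
              [1, 2, 1],
              [1, 1, 2]], dtype=float)
assert np.linalg.det(P) != 0
assert np.linalg.matrix_rank(P) == 3
```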
