Linear Algebra and Its Applications, Exercise 3.1.1

Exercise 3.1.1. For x = (1, 4, 0, 2) and y = (2, -2, 1, 3) what is the length of each vector and their inner product?

Answer: We have

\|x\|^2 = 1 \cdot 1 + 4 \cdot 4 + 0 \cdot 0 + 2 \cdot 2

= 1 + 16 + 0 + 4 = 21

and

\|y\|^2 = 2 \cdot 2 + (-2) \cdot (-2) + 1 \cdot 1 + 3 \cdot 3

= 4 + 4 + 1 + 9 = 18

so that \|x\| = \sqrt{21} and \|y\| = \sqrt{18}.

The inner product of x and y is then

x^Ty = 1 \cdot 2 + 4 \cdot (-2) + 0 \cdot 1 + 2 \cdot 3

= 2 - 8 + 0 + 6 = 0

Note that x and y are thus orthogonal.
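
As a quick numerical check (my own addition, not part of the original exercise), the lengths and inner product can be verified with NumPy:

    import numpy as np

    x = np.array([1, 4, 0, 2])
    y = np.array([2, -2, 1, 3])

    print(x @ x, y @ y)        # 21 18, the squared lengths
    print(np.linalg.norm(x))   # 4.5826 ≈ sqrt(21)
    print(np.linalg.norm(y))   # 4.2426 ≈ sqrt(18)
    print(x @ y)               # 0, so x and y are orthogonal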

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Completing Chapter 2 of Linear Algebra and Its Applications

Yesterday I posted the final worked-out solution for the exercises from chapter 2 of Gilbert Strang’s Linear Algebra and Its Applications, Third Edition. My first post for chapter 2 was for exercise 2.1.1 almost exactly 29 months ago. This is almost twice as long as it took me to post solutions for the exercises in chapter 1, so the time to complete the entire book is receding even farther into the future. At this point I’m managing to post two exercises a week, and I think I can keep that up indefinitely; whether I can increase the pace is another matter.

As with chapter 1, I found that working through the exercises for the posts has again improved my understanding of the material. Among other things, in chapter 2 this resulted in my doing a series of posts on reflections and rotations and a post on composition of linear transformations (a topic I felt could have been handled better in the book).

Expect the next post, for the first exercise of chapter 3, in a few days.


Linear Algebra and Its Applications, Review Exercise 2.33

Review exercise 2.33. Consider the following PA = LU factorization:

\begin{bmatrix} 0&1&0&0 \\ 1&0&0&0 \\ 0&0&0&1 \\ 0&0&1&0 \end{bmatrix} \begin{bmatrix} 0&0&1&-3&2 \\ 2&-1&4&2&1 \\ 4&-2&9&1&4 \\ 2&-1&5&-1&5 \end{bmatrix}

= \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 1&1&1&0 \\ 2&1&0&1 \end{bmatrix} \begin{bmatrix} 2&-1&4&2&1 \\ 0&0&1&-3&2 \\ 0&0&0&0&2 \\ 0&0&0&0&0 \end{bmatrix}

a) What is the rank of A?

b) Find a basis for the row space of A.

c) Are rows 1, 2, and 3 of A linearly independent: true or false?

d) Find a basis for the column space of A.

e) What is the dimension of the left nullspace of A?

f) Find the general solution to Ax = 0.

Answer: a) The echelon matrix U has three pivots (in columns 1, 3, and 5) and thus has rank r = 3. The rank of A is the same as the rank of U, namely r = 3.

b) The row space of A is the same as the row space of U. The first three rows of U are linearly independent and thus can serve as a basis for the row space of U and thus the row space of A:

\begin{bmatrix} 2 \\ -1 \\ 4 \\ 2 \\ 1 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 1 \\ -3 \\ 2 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 2 \end{bmatrix}

c) Rows 1 and 2 of A are clearly linearly independent. If row 3 is a linear combination of rows 1 and 2 then the coefficient for row 2 must be 2 since row 1 has a zero in the first two columns and cannot contribute to those entries. If we take 2 times row 2 and then add row 1 (i.e., using 1 as the coefficient for row 1) we have

\begin{bmatrix} 0 \\ 0 \\ 1 \\ -3 \\ 2 \end{bmatrix} + 2 \begin{bmatrix} 2 \\ -1 \\ 4 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 4 \\ -2 \\ 9\\ 1 \\ 4 \end{bmatrix}

The result is equal to row 3, so row 3 is a linear combination of rows 1 and 2. The three rows are not linearly independent.

d) As noted above, the echelon matrix U has pivots in columns 1, 3, and 5, so those three columns are linearly independent. The corresponding columns of A are then also linearly independent and form a basis for the column space of A:

\begin{bmatrix} 0 \\ 2 \\ 4 \\ 2 \end{bmatrix} \qquad \begin{bmatrix} 1 \\ 4 \\ 9 \\ 5 \end{bmatrix} \qquad \begin{bmatrix} 2 \\ 1 \\ 4 \\ 5 \end{bmatrix}

e) The rank of A is r = 3 so the dimension of the left nullspace of A is m - r = 4 - 3 = 1.

f) The general solution Ax = 0 is the same as the general solution to Ux = 0. From the echelon matrix U we see that x_1, x_3 and x_5 are basic variables and that x_2 and x_4 are free variables.

We first set x_2 = 1 and x_4 = 0. From row 3 of U we have x_5 = 0. From row 2 we have x_3 - 3x_4 + 2x_5 = x_3 - 3 \cdot 0 + 2 \cdot 0 = 0 or x_3 = 0. From row 1 we have

2x_1 - x_2 + 4x_3 + 2x_4 + x_5 = 2x_1 - 1 + 4 \cdot 0 + 2 \cdot 0 + 0 = 0

or x_1 = \frac{1}{2}. So one solution is (\frac{1}{2}, 1, 0, 0, 0).

We next set x_2 = 0 and x_4 = 1. From row 3 of U we have x_5 = 0. From row 2 we have x_3 - 3x_4 + 2x_5 = x_3 - 3 \cdot 1 + 2 \cdot 0 = 0 or x_3 = 3. From row 1 we have

2x_1 - x_2 + 4x_3 + 2x_4 + x_5 = 2x_1 - 0 + 4 \cdot 3 + 2 \cdot 1 + 0 = 0

or x_1 = -7. So a second solution is (-7, 0, 3, 1, 0).

We have two solutions to Ux = 0 and thus to Ax = 0. The general solution is then

x = x_2 \begin{bmatrix} \frac{1}{2} \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} -7 \\ 0 \\ 3 \\ 1 \\ 0 \end{bmatrix}
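
For readers who want to check this numerically, here is a short NumPy sketch (my own addition; the matrices are copied from the exercise):

    import numpy as np

    P = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]])
    A = np.array([[0,0,1,-3,2],[2,-1,4,2,1],[4,-2,9,1,4],[2,-1,5,-1,5]])
    L = np.array([[1,0,0,0],[0,1,0,0],[1,1,1,0],[2,1,0,1]])
    U = np.array([[2,-1,4,2,1],[0,0,1,-3,2],[0,0,0,0,2],[0,0,0,0,0]])

    print(np.array_equal(P @ A, L @ U))   # True: PA = LU
    print(np.linalg.matrix_rank(A))       # 3

    # The two special solutions found above should satisfy Ax = 0
    x1 = np.array([0.5, 1, 0, 0, 0])
    x2 = np.array([-7, 0, 3, 1, 0])
    print(A @ x1, A @ x2)                 # both zero vectors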


Linear Algebra and Its Applications, Review Exercise 2.32

Review exercise 2.32. a) Find the subspace of \mathbb{R}^6 such that for any vector x in the subspace we have x_1+x_2 = x_3+x_4 = x_5+x_6.

b) Find a matrix for which this subspace is the nullspace.

c) Find a matrix for which this subspace is the column space.

Answer: a) We have three equations that must be satisfied for any vector x in the subspace:

\begin{array}{rcrcrcr} x_1&+&x_2&=&x_3&+&x_4 \\ x_3&+&x_4&=&x_5&+&x_6 \\ x_1&+&x_2&=&x_5&+&x_6 \end{array}

These equations are equivalent to the following linear system

\begin{array}{rcrcrcrcrcrcr} x_1&+&x_2&-&x_3&-&x_4&&&&&=&0 \\ &&&&x_3&+&x_4&-&x_5&-&x_6&=&0 \\ x_1&+&x_2&&&&&-&x_5&-&x_6&=&0 \end{array}

which in turn is equivalent to the matrix equation

\begin{bmatrix} 1&1&-1&-1&0&0 \\ 0&0&1&1&-1&-1 \\ 1&1&0&0&-1&-1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{bmatrix} = 0

To solve the system we use Gaussian elimination. We first multiply row 1 by 1 and subtract the result from row 3:

\begin{bmatrix} 1&1&-1&-1&0&0 \\ 0&0&1&1&-1&-1 \\ 1&1&0&0&-1&-1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&-1&-1&0&0 \\ 0&0&1&1&-1&-1 \\ 0&0&1&1&-1&-1 \end{bmatrix}

Then we multiply row 2 by 1 and subtract the result from row 3:

\begin{bmatrix} 1&1&-1&-1&0&0 \\ 0&0&1&1&-1&-1 \\ 0&0&1&1&-1&-1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&-1&-1&0&0 \\ 0&0&1&1&-1&-1 \\ 0&0&0&0&0&0 \end{bmatrix}

The resulting echelon matrix has pivots in columns 1 and 3, so x_1 and x_3 are basic variables and x_2, x_4, x_5, and x_6 are free variables.

We first set x_2 = 1 and x_4 = x_5 = x_6 = 0. From the second row of the echelon matrix we have x_3 + x_4 - x_5 - x_6 = x_3 + 0 - 0 - 0 = 0 or x_3 = 0. From the first row we have x_1 + x_2 - x_3 - x_4 = x_1 + 1 - 0 - 0 = 0 or x_1 = -1. One solution to the system is thus (-1, 1, 0, 0, 0, 0).

We next set x_4 = 1 and x_2 = x_5 = x_6 = 0. From the second row of the echelon matrix we have x_3 + x_4 - x_5 - x_6 = x_3 + 1 - 0 - 0 = 0 or x_3 = -1. From the first row we have x_1 + x_2 - x_3 - x_4 = x_1 + 0 + 1 - 1 = 0 or x_1 = 0. A second solution to the system is thus (0, 0, -1, 1, 0, 0).

We next set x_5 = 1 and x_2 = x_4 = x_6 = 0. From the second row of the echelon matrix we have x_3 + x_4 - x_5 - x_6 = x_3 + 0 - 1 - 0 = 0 or x_3 = 1. From the first row we have x_1 + x_2 - x_3 - x_4 = x_1 + 0 - 1 + 0 = 0 or x_1 = 1. A third solution to the system is thus (1, 0, 1, 0, 1, 0).

Finally we set x_6 = 1 and x_2 = x_4 = x_5 = 0. From the second row of the echelon matrix we have x_3 + x_4 - x_5 - x_6 = x_3 + 0 - 0 - 1 = 0 or x_3 = 1. From the first row we have x_1 + x_2 - x_3 - x_4 = x_1 + 0 - 1 - 0 = 0 or x_1 = 1. A fourth and final solution to the system is thus (1, 0, 1, 0, 0, 1).

Any linear combination of the four solution vectors is also a solution to the original system, and is thus in the subspace also. The four solution vectors together constitute a basis for the subspace:

\begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ -1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}

b) In solving the matrix equation

\begin{bmatrix} 1&1&-1&-1&0&0 \\ 0&0&1&1&-1&-1 \\ 1&1&0&0&-1&-1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{bmatrix} = 0

the solutions we found are in the nullspace of the matrix

A = \begin{bmatrix} 1&1&-1&-1&0&0 \\ 0&0&1&1&-1&-1 \\ 1&1&0&0&-1&-1 \end{bmatrix}

Since the solutions formed a basis for the subspace, any vector in the subspace is in the nullspace of the matrix. Similarly any vector in the nullspace is in the subspace as well.

(Either a vector in the nullspace is a linear combination of the four basis vectors above or it is not. In the former case the vector is in the subspace. In the latter case the vector is linearly independent of the four basis vectors. But this is impossible since the matrix has two pivots and thus rank r = 2, the dimension of the nullspace is 6-2 = 4, and there cannot be more than four linearly independent vectors in the nullspace.)

Since every vector in the subspace is in the nullspace of A and every vector in the nullspace of A is in the subspace, the subspace is equal to the nullspace of A.

Note that the subspace is also the nullspace of the echelon matrix

B = \begin{bmatrix} 1&1&-1&-1&0&0 \\ 0&0&1&1&-1&-1 \end{bmatrix}

This is the same as the echelon matrix found via Gaussian elimination, with the last row of zeros removed.

c) We can construct a matrix C whose column space is the subspace simply by setting the columns of C to be the basis vectors of the subspace:

C = \begin{bmatrix} -1&0&1&1 \\ 1&0&0&0 \\ 0&-1&1&1 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}
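
As a sketch of how parts b) and c) might be verified (my own addition, using SymPy):

    from sympy import Matrix

    A = Matrix([[1, 1, -1, -1, 0, 0],
                [0, 0, 1, 1, -1, -1],
                [1, 1, 0, 0, -1, -1]])

    # A.nullspace() returns a basis for N(A); it spans the same
    # 4-dimensional subspace as the basis found above
    print(len(A.nullspace()))   # 4

    C = Matrix([[-1, 0, 1, 1], [1, 0, 0, 0], [0, -1, 1, 1],
                [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    print(A * C)                # zero matrix: each column of C is in N(A)
    print(C.rank())             # 4: the columns of C span the subspace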


Linear Algebra and Its Applications, Review Exercise 2.31

Review exercise 2.31. Consider the rank-one matrix A = uv^T. Under what conditions would A^2 = 0?

Answer: In order for A^2 to exist A must be a square matrix; otherwise we could not multiply A by A since the number of columns of the first matrix would not match the number of rows of the second.

Since A is an n by n matrix the vectors u and v both have n entries, with u = (u_1, \ldots, u_n) and v = (v_1, \ldots, v_n).

We then have

A^2 = (uv^T)^2 = uv^Tuv^T = u(v^Tu)v^T

Since v^T is 1 by n and u is n by 1 their product v^Tu is a 1 by 1 matrix, or in other words a scalar value

c = \sum_{i=1}^n v_iu_i = \sum_{i=1}^n u_iv_i

We thus have

A^2 = u(v^Tu)v^T = ucv^T = cuv^T = cA

Since A = uv^T has rank one it is necessarily nonzero, so A^2 = cA = 0 if and only if the scalar c = \sum_{i=1}^n u_iv_i = v^Tu is zero. In other words, A^2 = 0 exactly when u and v are orthogonal. (Note that u and v must themselves both be nonzero, since otherwise A would be the zero matrix and would not have rank one.)

For example, consider the case where u = (1, 1) and v = (1, -1) so that

A = uv^T = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \begin{bmatrix} 1&-1 \end{bmatrix} = \begin{bmatrix} 1&-1 \\ 1&-1 \end{bmatrix}

We then have

c = \sum_{i=1}^n u_iv_i = 1 \cdot 1 + 1 \cdot (-1) = 1 - 1 = 0

and

A^2 = \begin{bmatrix} 1&-1 \\ 1&-1 \end{bmatrix} \begin{bmatrix} 1&-1 \\ 1&-1 \end{bmatrix} = \begin{bmatrix} 0&0 \\ 0&0 \end{bmatrix} = 0
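
A quick numerical confirmation of this example (my own addition):

    import numpy as np

    u = np.array([[1], [1]])
    v = np.array([[1], [-1]])

    A = u @ v.T                # the rank-one matrix uv^T
    c = (v.T @ u).item()       # the scalar c = v^T u

    print(c)                   # 0, since u and v are orthogonal
    print(A @ A)               # the 2x2 zero matrix: A^2 = cA = 0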


Linear Algebra and Its Applications, Review Exercise 2.30

Review exercise 2.30. Suppose that the matrix A is a square matrix.

a) Show that the nullspace of A^2 contains the nullspace of A.

b) Show that the column space of A contains the column space of A^2.

Answer: a) Suppose x is in the nullspace of A, so that Ax = 0. We then have A^2x = A(Ax) = A\cdot 0 = 0 so that x is also in the nullspace of A^2. Since this is true for every x in the nullspace of A, the nullspace of A is a subset of the nullspace of A^2 or (stated differently) the nullspace of A^2 contains the nullspace of A.

b) Suppose that b is in the column space of A^2. Then there exists some x for which A^2x = b (in other words, b is a linear combination of the columns of A^2, with the entries of x being the coefficients).

Let y = Ax. Then we have Ay = AAx = A^2x = b. Since there exists a y for which Ay = b we see that b is also in the column space of A. Since this is true for every b in the column space of A^2, the column space of A^2 is a subset of the column space of A or (stated differently) the column space of A contains the column space of A^2.
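
Here is a small numerical illustration of both containments (my own addition; the singular matrix is an arbitrary choice):

    import numpy as np

    # A singular 3x3 matrix: row 3 = row 1 + row 2
    A = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [5., 7., 9.]])

    # a) x in N(A) implies x in N(A^2), since A^2 x = A(Ax) = A 0 = 0
    x = np.array([1., -2., 1.])
    print(A @ x)                  # zero vector: x is in N(A)
    print(A @ (A @ x))            # zero vector: x is in N(A^2)

    # b) b = A^2 z in C(A^2) equals Ay with y = Az, so b is in C(A)
    z = np.array([1., 1., 1.])
    b, y = A @ (A @ z), A @ z
    print(np.allclose(A @ y, b))  # True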


Linear Algebra and Its Applications, Review Exercise 2.29

Review exercise 2.29. The following matrices

A_1 = \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} \qquad A_2 = \begin{bmatrix} 1&0 \\ 2&1 \end{bmatrix} \qquad A_3 = \begin{bmatrix} 0&1 \\ -1&0 \end{bmatrix}

represent linear transformations in the xy plane with e_1 = (1, 0) and e_2 = (0, 1) as a basis. Describe the effect of each transformation.

Answer: When the matrix A_1 is applied to the vector e_1 = (1, 0) we obtain

A_1e_1 = \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

When the matrix A_1 is applied to the vector e_2 = (0, 1) we obtain

A_1e_2 = \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \end{bmatrix}

The effect of the transformation represented by A_1 is to reflect all vectors through the x-axis.

When the matrix A_2 is applied to the vector e_1 = (1, 0) we obtain

A_2e_1 = \begin{bmatrix} 1&0 \\ 2&1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}

When the matrix A_2 is applied to the vector e_2 = (0, 1) we obtain

A_2e_2 = \begin{bmatrix} 1&0 \\ 2&1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}

When the matrix A_2 is applied to the vector v_1 = (1, 2) we obtain

A_2v_1 = \begin{bmatrix} 1&0 \\ 2&1 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \\ 4 \end{bmatrix}

The transformation represented by A_2 is a shearing transformation that leaves all vectors on the y-axis unchanged but changes the y coordinates of all other vectors in proportion to their distance from the y-axis.

When the matrix A_3 is applied to the vector e_1 = (1, 0) we obtain

A_3e_1 = \begin{bmatrix} 0&1 \\ -1&0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \end{bmatrix}

When the matrix A_3 is applied to the vector e_2 = (0, 1) we obtain

A_3e_2 = \begin{bmatrix} 0&1 \\ -1&0 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

The effect of the transformation represented by A_3 is to rotate all vectors through an angle of -90 degrees. (Or, in other words, clockwise through 90 degrees.)
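
The three transformations can be explored numerically as follows (my own addition):

    import numpy as np

    A1 = np.array([[1, 0], [0, -1]])   # reflection through the x-axis
    A2 = np.array([[1, 0], [2, 1]])    # shear parallel to the y-axis
    A3 = np.array([[0, 1], [-1, 0]])   # rotation through -90 degrees

    e1 = np.array([1, 0])
    e2 = np.array([0, 1])

    for A in (A1, A2, A3):
        print(A @ e1, A @ e2)   # images of the basis vectors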


Linear Algebra and Its Applications, Review Exercise 2.28

Review exercise 2.28. a) If A is an m by n matrix with linearly independent rows, what is the rank of A? The column space of A? The left nullspace of A?

b) If A is an 8 by 10 matrix and the nullspace of A has dimension 2, show that the system Ax = b has a solution for any b.

Answer: a) If A has m rows and they are linearly independent then the rank of A is r = m.

Since the rank is m there are also m linearly independent columns of A and the column space is \mathbb{R}^m. (In other words, the m linearly independent columns span \mathbb{R}^m.)

The dimension of the left nullspace is then m-r = m-m = 0. The left nullspace thus contains only the zero vector.

b) Since the dimension of the nullspace of a matrix is n-r in general, if the dimension of the nullspace of A is 2 we have 2 = 10-r or r = 8. Since the rank of A is equal to the number of rows of A, by 20Q on page 96 there exists a solution to the system Ax = b for every b.
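
A numerical sketch of part b) (my own addition; a random 8 by 10 matrix almost surely has rank 8 and thus a 2-dimensional nullspace):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 10))
    print(np.linalg.matrix_rank(A))   # 8, so the nullspace has dimension 2

    # With r = m = 8 the system Ax = b is solvable for every b;
    # least squares finds a solution with zero residual
    b = rng.standard_normal(8)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(A @ x, b))      # True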


Linear Algebra and Its Applications, Review Exercise 2.27

Review exercise 2.27. Find bases for the four fundamental subspaces of each of the following matrices:

A_1 = \begin{bmatrix} 1&2&0&3 \\ 0&2&2&2 \\ 0&0&0&0 \\ 0&0&0&4 \end{bmatrix} \qquad A_2 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} 1&4 \end{bmatrix}

Answer: If we put A_1 in echelon form (by exchanging rows 3 and 4) the resulting matrix would have pivots in columns 1, 2, and 4. Columns 1, 2, and 4 of A_1 are thus linearly independent and form a basis for the column space of A_1:

\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 2 \\ 2 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 3 \\ 2 \\ 0 \\ 4 \end{bmatrix}

The rank of A_1 is therefore r = 3 and the dimension of the nullspace of A_1 is therefore n-r = 4-3 = 1.

In the system

A_1x = \begin{bmatrix} 1&2&0&3 \\ 0&2&2&2 \\ 0&0&0&0 \\ 0&0&0&4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = 0

it is apparent from inspection that x_3 is a free variable and the rest are basic.

Setting x_3 = 1, from the fourth equation we have 4x_4 = 0 or x_4 = 0, from the second equation we have x_2 + x_3 + x_4 = x_2 + 1 + 0 = 0 or x_2 = -1, and from the first equation we have x_1 + 2x_2 + 3x_4 = x_1 - 2 + 0 = 0 or x_1 = 2. The vector

\begin{bmatrix} 2 \\ -1 \\ 1 \\ 0 \end{bmatrix}

is thus a solution to the homogeneous system A_1x = 0 and a basis for the nullspace of A_1.

Since the rank of A_1 is r = 3 the dimension of the row space of A_1 is also 3; from inspection it is apparent that rows 1, 2, and 4 of A_1 are linearly independent and are a basis for the row space of A_1:

\begin{bmatrix} 1 \\ 2 \\ 0 \\ 3 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 2 \\ 2 \\ 2 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 0 \\ 4 \end{bmatrix}

The dimension of the left nullspace of A_1 is therefore m-r = 4-3 = 1.

To find the left nullspace we must find a solution to the system y^TA_1 = 0 or (alternately) A_1^Ty = 0. To do this we do Gaussian elimination on the matrix

A_1^T = \begin{bmatrix} 1&0&0&0 \\ 2&2&0&0 \\ 0&2&0&0 \\ 3&2&0&4 \end{bmatrix}

We begin by multiplying row 1 by 2 and subtracting the result from row 2:

\begin{bmatrix} 1&0&0&0 \\ 2&2&0&0 \\ 0&2&0&0 \\ 3&2&0&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0&0 \\ 0&2&0&0 \\ 0&2&0&0 \\ 3&2&0&4 \end{bmatrix}

We then multiply row 1 by 3 and subtract the result from row 4:

\begin{bmatrix} 1&0&0&0 \\ 0&2&0&0 \\ 0&2&0&0 \\ 3&2&0&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0&0 \\ 0&2&0&0 \\ 0&2&0&0 \\ 0&2&0&4 \end{bmatrix}

We then multiply row 2 by 1 and subtract the result from row 3:

\begin{bmatrix} 1&0&0&0 \\ 0&2&0&0 \\ 0&2&0&0 \\ 0&2&0&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0&0 \\ 0&2&0&0 \\ 0&0&0&0 \\ 0&2&0&4 \end{bmatrix}

and then multiply row 2 by 1 and subtract the result from row 4:

\begin{bmatrix} 1&0&0&0 \\ 0&2&0&0 \\ 0&0&0&0 \\ 0&2&0&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0&0 \\ 0&2&0&0 \\ 0&0&0&0 \\ 0&0&0&4 \end{bmatrix}

Although the resulting matrix is not in echelon form, it is apparent from inspection that y_3 is a free variable and the rest are basic.

Setting y_3 = 1, from the fourth equation we have 4y_4 = 0 or y_4 = 0, from the second equation we have 2y_2 = 0 or y_2 = 0, and from the first equation we have y_1 = 0. The vector

\begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}

is thus a solution to the homogeneous system A_1^Ty = 0 and a basis for the left nullspace of A_1.

We now turn to

A_2 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} 1&4 \end{bmatrix} = \begin{bmatrix} 1&4 \\ 1&4 \\ 1&4 \end{bmatrix}

This matrix has rank r = 1 and the first column

\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}

is a basis for the column space of A_2.

Since the rank of A_2 is r = 1 the dimension of the nullspace is n-r = 2-1 = 1. In the homogeneous system A_2x = 0 or

\begin{bmatrix} 1&4 \\ 1&4 \\ 1&4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0

Gaussian elimination produces the following

\begin{bmatrix} 1&4 \\ 1&4 \\ 1&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&4 \\ 0&0 \\ 0&0 \end{bmatrix}

so that x_2 is a free variable and x_1 is basic.

Setting x_2 = 1 we have x_1 + 4x_2 = x_1 + 4 = 0 or x_1 = -4. The vector

\begin{bmatrix} -4 \\ 1 \end{bmatrix}

is thus a solution to A_2x = 0 and a basis for the nullspace of A_2.

Since the rank of A_2 is r = 1 the dimension of the row space of A_2 is also 1; from inspection it is apparent that row 1 of A_2 is a basis for the row space of A_2:

\begin{bmatrix} 1 \\ 4 \end{bmatrix}

The dimension of the left nullspace of A_2 is therefore m-r = 3-1 = 2.

To find the left nullspace we must find a solution to the system y^TA_2 = 0 or (alternately) A_2^Ty = 0. To do this we do Gaussian elimination on the matrix

A_2^T = \begin{bmatrix} 1&1&1 \\ 4&4&4 \end{bmatrix}

We multiply row 1 by 4 and subtract the result from row 2:

\begin{bmatrix} 1&1&1 \\ 4&4&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&1 \\ 0&0&0 \end{bmatrix}

The resulting echelon matrix has a pivot in column 1, with y_2 and y_3 being free variables. Setting y_2 = 1 and y_3 = 0, from the first equation we have y_1 + y_2 + y_3 = y_1 + 1 + 0 = 0 or y_1 = -1. Setting y_2 = 0 and y_3 = 1, from the first equation we have y_1 + y_2 + y_3 = y_1 + 0 + 1 = 0 or y_1 = -1. The vectors

\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}

are thus solutions to A_2^Ty =0 and form a basis for the left nullspace of A_2.
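
SymPy can produce bases for all four subspaces directly, which makes a convenient check on the hand computations above (my own addition):

    from sympy import Matrix

    A1 = Matrix([[1, 2, 0, 3], [0, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 4]])

    print(A1.rank())          # 3
    print(A1.columnspace())   # basis for the column space
    print(A1.nullspace())     # basis for the nullspace: (2, -1, 1, 0)
    print(A1.rowspace())      # basis for the row space
    print(A1.T.nullspace())   # basis for the left nullspace: (0, 0, 1, 0)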


Linear Algebra and Its Applications, Review Exercise 2.26

Review exercise 2.26. State whether the following statements are true or false:

a) For every subspace S of \mathbb{R}^4 there exists a matrix A for which the nullspace of A is S.

b) For any matrix A, if both A and its transpose A^T have the same nullspace then A is a square matrix.

c) The transformation from \mathbb{R}^1 to \mathbb{R}^1 that transforms x into mx+b (for some scalars m and b) is a linear transformation.

Answer: a) For the statement to be true, for any subspace S of \mathbb{R}^4 we must be able to find a matrix A such that \mathcal{N}(A) = S. (In other words, for any vector x in S we have Ax = 0 and for any x such that Ax = 0 the vector x is an element of S.)

What would such a matrix A look like? First, since S is a subspace of \mathbb{R}^4 any vector x in S has four entries: x = (x_1, x_2, x_3, x_4). If we have Ax = 0 then the matrix A must have four columns; otherwise the product Ax would not be defined.

Second, note that the nullspace of A and the column space of A are related: If the column space has dimension r then the nullspace of A has dimension n-r where n is the number of columns of A. Since A has four columns (from the previous paragraph) the dimension of \mathcal{N}(A) is n-r = 4-r.

Put another way, if s is the dimension of S and S is the nullspace of A then we must have s = n-r = 4-r or r = 4-s. So if the dimension of S is s=0 then we have r = 4-s = 4, if the dimension of S is s=1 then we have r = 3, and so on.

Finally, if the matrix A has rank r then A has both r linearly independent columns and r linearly independent rows. As noted above, the number of columns of A (linearly independent or otherwise) is fixed by the requirement that the nullspace of A should be a subspace of \mathbb{R}^4. So A must always have four columns, and must have at least r = 4-s rows.

We now have five possible cases to consider, and can approach them as follows:

  • S has dimension 0 or 4. In both these cases there is only one possible subspace S and we can easily find a matrix A meeting the specified criteria.
  • S has dimension 1, 2, or 3. In each of these cases there are many possible subspaces S (an infinite number, in fact). For a given S we do the following:
    1. Start with a basis for S. (Any basis will do.)
    2. Consider the effect on the basis vectors if they were to be in the nullspace of some matrix A meeting the criteria above.
    3. Take the corresponding system of linear equations, re-express it as a system involving the entries of A as unknowns and the entries of the basis vectors as coefficients, and show that we can solve the system to find the unknowns.
    4. Show that all other vectors in S are also in the nullspace of A.
    5. Show that any vector in the nullspace of A must also be in S.

We now proceed to the individual cases:

S has dimension s=4. We then have S = \mathbb{R}^4. (If S has dimension 4 then its basis has four linearly independent vectors. If S \ne \mathbb{R}^4 then there must be some vector v in \mathbb{R}^4 but not in S, and that vector must be linearly independent of the vectors in the basis of S. But it is impossible to have five linearly independent vectors in a 4-dimensional vector space, so we conclude that S = \mathbb{R}^4.)

We then must have Ax = 0 for any vector x in S = \mathbb{R}^4. This is true when A is equal to the zero matrix. As noted above A must have exactly four columns and at least r = 4-s = 4-4 = 0 rows. So one possible value for A is

A = \begin{bmatrix} 0&0&0&0 \end{bmatrix}

(The matrix A could have additional rows as well, as long as they are all zeros.)

We have thus found a matrix A such that any vector x in S = \mathbb{R}^4 is also in \mathcal{N}(A). Going the other way, any vector x in \mathcal{N}(A) must have four entries (in order for A to be able to multiply it) so that any such vector x is also in \mathbb{R}^4 = S.

So if S is a 4-dimensional subspace (namely \mathbb{R}^4) then a matrix A exists such that S is the nullspace of A.

S has dimension s=0. The only 0-dimensional subspace of \mathbb{R}^4 is the one consisting only of the zero vector x = (0, 0, 0, 0). (If S contained only a single nonzero vector then it would not be closed under multiplication, since multiplying that vector by the scalar 0 would produce a vector not in S. If S were not closed under multiplication then it would not be a subspace.)

In this case the matrix A would have to have rank r = 4-s = 4-0 = 4. If r = 4 then all four columns of A would have to be linearly independent and A would have to have at least four linearly independent rows. Suppose we choose the four elementary vectors e_1 through e_4 as the columns, so that

A = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}

If Ax = 0 we then have

Ax = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = 0

so that the only solution is x = (0, 0, 0, 0). We have thus again found a matrix A for which \mathcal{N}(A) = S.

(Note that any other matrix of rank r = 4 would have worked as well: The product Ax is a linear combination of the columns of A, with the coefficients being x_1 through x_4. If the columns of A are linearly independent then that linear combination can be zero only if all the coefficients x_1 through x_4 are zero.)

Having disposed of the easy cases, we now proceed to the harder ones.

S has dimension s=3. In this case we are looking for a matrix A with rank 4-3 = 1 such that \mathcal{N}(A) = S. The matrix A thus must have only one linearly independent column and (more important for our purposes) only one linearly independent row. We need only find a matrix that is 1 by 4. (If desired we can construct suitable matrices that are 2 by 4, 3 by 4, etc., by adding additional rows that are multiples of the first row.)

Since the dimension of S is 3, any three linearly independent vectors in S form a basis for S; we pick an arbitrary set of such vectors u, v, and w. For S to be equal to \mathcal{N}(A) we must have Au = 0, Av = 0, and Aw = 0. We are looking for a matrix A that is 1 by 4, so these equations correspond to the following:

\begin{bmatrix} a_{11}&a_{12}&a_{13}&a_{14} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} = 0

\begin{bmatrix} a_{11}&a_{12}&a_{13}&a_{14} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \end{bmatrix} = 0

\begin{bmatrix} a_{11}&a_{12}&a_{13}&a_{14} \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{bmatrix} = 0

These can in turn be rewritten as the following system of equations:

\begin{array}{rcrcrcrcl} u_1a_{11}&+&u_2a_{12}&+&u_3a_{13}&+&u_4a_{14}&=&0 \\ v_1a_{11}&+&v_2a_{12}&+&v_3a_{13}&+&v_4a_{14}&=&0 \\ w_1a_{11}&+&w_2a_{12}&+&w_3a_{13}&+&w_4a_{14}&=&0 \end{array}

This is a system of three equations in four unknowns, equivalent to the matrix equation By = 0 where

By = \begin{bmatrix} u_1&u_2&u_3&u_4 \\ v_1&v_2&v_3&v_4 \\ w_1&w_2&w_3&w_4 \end{bmatrix} \begin{bmatrix} a_{11} \\ a_{12} \\ a_{13} \\ a_{14} \end{bmatrix}

Since the vectors u, v, and w are linearly independent (because they form a basis for the 3-dimensional subspace S) and they form the rows of the matrix B, the rank of B is r = 3. The nullspace of B therefore has dimension 4 - 3 = 1, so the system By = 0 has a nonzero solution y. (A nonzero solution is what we need: the trivial solution y = 0 would give the zero matrix, which has rank 0 rather than 1.) But y is simply the first and only row of the matrix we were looking for, so we have found a nonzero matrix A for which u, v, and w are in the nullspace of A.

If x is a vector in S then x can be expressed as a linear combination of the basis vectors u, v, and w for some set of coefficients c_1, c_2, and c_3. We then have

Ax = A(c_1u+c_2v+c_3w)

= c_1Au + c_2Av + c_3Aw

= c_1 \cdot 0 + c_2 \cdot 0 + c_3 \cdot 0 = 0

So any vector x in S is also an element in the nullspace of A.

Suppose that y is a vector in the nullspace of A and y is not in S. Since y is not in S it cannot be expressed as a linear combination solely of the basis vectors u, v, and w; rather we must have

y = c_1u+c_2v+c_3w + c_4z

where z is some vector that is linearly independent of u, v, and w.

If y is in the nullspace of A then we have Ay = 0 so

0 = Ay = A(c_1u+c_2v+c_3w+c_4z)

= c_1Au + c_2Av + c_3Aw + c_4Az

= c_1 \cdot 0 + c_2 \cdot 0 + c_3 \cdot 0 + c_4Az = c_4Az

If c_4Az = 0 then either c_4 = 0 or Az = 0. If c_4 = 0 then we have

y = c_1u+c_2v+c_3w

so that y is actually an element of S, contrary to our supposition. If Az = 0 then z is an element of the nullspace of A. But \mathcal{N}(A) has dimension 3 and already contains the three linearly independent vectors u, v, and w. The fourth vector z cannot be both an element of \mathcal{N}(A) and also linearly independent of u, v, and w.

Our assumption that y is in the nullspace of A but is not in S has thus led to a contradiction. We conclude that any element of \mathcal{N}(A) is also in S. We previously showed that any element of S is also in \mathcal{N}(A), so we conclude that S = \mathcal{N}(A).

For any 3-dimensional subspace S of \mathbb{R}^4 we can therefore find a matrix A such that S is the nullspace of A.

S has dimension s=2. In this case we are looking for a matrix A with rank 4-2 = 2 such that \mathcal{N}(A) = S. The matrix A thus must have only two linearly independent columns and only two linearly independent rows. We thus look for a matrix that is 2 by 4.

Since the dimension of S is 2, any two linearly independent vectors in S form a basis for S; we pick an arbitrary set of such vectors u and v. For S to be equal to \mathcal{N}(A) we must have Au = 0 and Av = 0. We are looking for a matrix A that is 2 by 4, so these equations correspond to the following:

\begin{bmatrix} a_{11}&a_{12}&a_{13}&a_{14} \\ a_{21}&a_{22}&a_{23}&a_{24} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} = 0

\begin{bmatrix} a_{11}&a_{12}&a_{13}&a_{14} \\ a_{21}&a_{22}&a_{23}&a_{24} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \end{bmatrix} = 0

These can in turn be rewritten as the following system of four equations in eight unknowns:

\begin{array}{rrcr} u_1a_{11}+u_2a_{12}+u_3a_{13}+u_4a_{14}&&=&0 \\ &u_1a_{21}+u_2a_{22}+u_3a_{23}+u_4a_{24}&=&0 \\ v_1a_{11}+v_2a_{12}+v_3a_{13}+v_4a_{14}&&=&0 \\ &v_1a_{21}+v_2a_{22}+v_3a_{23}+v_4a_{24}&=&0 \end{array}

or By = 0 where

B = \begin{bmatrix} u_1&u_2&u_3&u_4&0&0&0&0 \\ 0&0&0&0&u_1&u_2&u_3&u_4 \\ v_1&v_2&v_3&v_4&0&0&0&0 \\ 0&0&0&0&v_1&v_2&v_3&v_4 \end{bmatrix}

Since the four rows are linearly independent (this follows from the linear independence of u and v) the rank of B is 4, so the nullspace of B has dimension 8 - 4 = 4 and the system By = 0 has nonzero solutions. In fact each row of A need only be orthogonal to u and v, and the vectors orthogonal to both u and v form a 2-dimensional subspace of \mathbb{R}^4; choosing two linearly independent vectors from that subspace as the rows of A produces a matrix of rank 2 for which the basis vectors u and v are members of the nullspace.

Since the basis vectors of S are in \mathcal{N}(A) all other elements of S are in \mathcal{N}(A) also. By the same argument as in the 3-dimensional case, any vector y in \mathcal{N}(A) must be in S also; otherwise a contradiction occurs. Thus we conclude that S = \mathcal{N}(A).

For any 2-dimensional subspace S of \mathbb{R}^4 we can therefore find a matrix A such that S is the nullspace of A.

S has dimension s=1. In this case we are looking for a matrix A with rank 4-1 = 3 such that \mathcal{N}(A) = S. The matrix A thus must have only three linearly independent columns and only three linearly independent rows. We thus look for a matrix that is 3 by 4.

Since the dimension of S is 1, any nonzero vector u in S forms a basis for S. For S to be equal to \mathcal{N}(A) we must have Au = 0. We are looking for a matrix A that is 3 by 4, so this equation corresponds to the following:

\begin{bmatrix} a_{11}&a_{12}&a_{13}&a_{14} \\ a_{21}&a_{22}&a_{23}&a_{24} \\ a_{31}&a_{32}&a_{33}&a_{34} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} = 0

This can be rewritten as a system By = 0 of three equations in twelve unknowns (the entries a_{ij}). Each row of A need only be orthogonal to u, and the vectors orthogonal to u form a 3-dimensional subspace of \mathbb{R}^4; choosing three linearly independent vectors from that subspace as the rows of A produces a matrix of rank 3 with Au = 0. Since u is a basis for S any other vector in S is also in the nullspace of A.

By the same argument as in the 3-dimensional case, any vector y in \mathcal{N}(A) must be in S also; otherwise a contradiction occurs. Thus we conclude that S = \mathcal{N}(A).

For any 1-dimensional subspace S of \mathbb{R}^4 we can therefore find a matrix A such that S is the nullspace of A.

Any subspace of \mathbb{R}^4 must have dimension from 0 through 4. We have thus shown that for any subspace S of \mathbb{R}^4 we can find a matrix A such that \mathcal{N}(A) = S. The statement is true.
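
The construction used in the three harder cases can be carried out concretely. The sketch below (my own addition, using SymPy, with an arbitrarily chosen 3-dimensional subspace) builds B from a basis of S and uses a basis of the nullspace of B as the rows of A:

    from sympy import Matrix

    # A 3-dimensional subspace S of R^4 spanned by u, v, w
    u = Matrix([1, 0, 0, 1])
    v = Matrix([0, 1, 0, 1])
    w = Matrix([0, 0, 1, 1])

    B = Matrix.hstack(u, v, w).T         # rows of B are the basis vectors

    # Any y with By = 0 is orthogonal to u, v, and w, so using such
    # y's as the rows of A puts u, v, and w into N(A)
    A = Matrix.hstack(*B.nullspace()).T

    print(A)                     # here a 1 by 4 matrix of rank 4 - 3 = 1
    print(A * u, A * v, A * w)   # all zero: S is contained in N(A)
    print(len(A.nullspace()))    # 3 = dim S, so N(A) = S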

b) Suppose that for some m by n matrix A both A and its transpose A^T have the same nullspace.

The rank r of A is also the rank of A^T. The dimension of \mathcal{N}(A) is then n-r, and the dimension of \mathcal{N}(A^T) is m-r. Since \mathcal{N}(A) = \mathcal{N}(A^T) we then have n - r = m - r so that m = n.

The number of rows m of A is the same as the number of columns n of A so that A (and thus A^T) is a square matrix. The statement is true.

c) If f(x) represents the transformation in question, with f(x) = mx+b, we have

f(x+y) = m(x+y) + b = mx + my + b

f(x) + f(y) = (mx+b) + (my+b) = mx + my + 2b

These two quantities are not the same unless b=0, so for b \ne 0 the transformation is not linear. The statement is false.
