Linear Algebra and Its Applications, Exercise 3.1.21

Exercise 3.1.21. If P is the plane in \mathbb{R}^3 described by x+2y-z = 6, what is the equation for the plane P' parallel to P through the origin? What is a vector perpendicular to P'? Find a matrix A for which P' is the nullspace, and a matrix B for which P' is the row space.

Answer: One way to approach this problem is to find the general solution to the equation x+2y-z = 6 and express it as the sum of a particular solution and a homogeneous solution. The corresponding homogeneous system is x+2y-z = 0, which is represented by the matrix

A = \begin{bmatrix} 1&2&-1 \end{bmatrix}

This system has x as a basic variable and y and z as free variables. Setting y = 1 and z = 0 we have x+2y-z = x+2-0 = 0 or x = -2. So (-2, 1, 0) is one solution to the homogeneous system. Setting y = 0 and z = 1 we have x+2y-z = x+0-1 = 0 or x = 1. So (1, 0, 1) is a second solution to the homogeneous system. These two vectors are linearly independent, lie in the nullspace of A, and serve as a basis for it.

To find a particular solution we set y = z = 0 in the original equation, so that x+2y-z = x+0-0 = 6 or x = 6. So (6, 0, 0) is a particular solution, and the general solution to the equation is then the sum of the particular solution and the homogeneous solutions:

\begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix} + y \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix} + z \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}

The plane P is the set of all solutions to x+2y-z = 6. The parallel plane P' through the origin has equation x+2y-z = 0; it is spanned by the vectors (-2, 1, 0) and (1, 0, 1) and is the nullspace of the matrix A above. The plane P is parallel to P' and is offset from it by the vector (6, 0, 0).
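For readers who want to check this numerically, here is a short NumPy sketch (not part of the original solution) confirming that the particular solution lies on P and that the homogeneous basis vectors lie on P':

```python
import numpy as np

a = np.array([1, 2, -1])           # coefficients of x + 2y - z
particular = np.array([6, 0, 0])   # particular solution
basis = np.array([[-2, 1, 0],      # basis for the homogeneous solutions
                  [1, 0, 1]])

assert a @ particular == 6         # (6, 0, 0) satisfies x + 2y - z = 6
assert np.all(basis @ a == 0)      # basis vectors satisfy x + 2y - z = 0,
                                   # so particular + y*(-2,1,0) + z*(1,0,1)
                                   # always satisfies x + 2y - z = 6
print("general solution verified")
```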

To find a vector perpendicular to P' (and to P), note that from the homogeneous system x+2y-z = 0 the vector (1, 2, -1) (the single row of the matrix A above) is orthogonal to (-2, 1, 0) and (1, 0, 1), the basis vectors for the nullspace of A. The vector (1, 2, -1) is therefore perpendicular to the plane P', the nullspace of A spanned by (-2, 1, 0) and (1, 0, 1).

Using the vectors (-2, 1, 0) and (1, 0, 1) that serve as a basis for the plane P' we can construct a matrix B for which P' is the row space:

B = \begin{bmatrix} -2&1&0 \\ 1&0&1 \end{bmatrix}
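As a final sanity check (again a NumPy sketch, not from the book), we can confirm that each row of B lies in the nullspace of A and that the normal vector (1, 2, -1) is orthogonal to both rows of B:

```python
import numpy as np

A = np.array([[1, 2, -1]])
B = np.array([[-2, 1, 0],
              [1, 0, 1]])

assert np.all(A @ B.T == 0)   # each row of B solves Ax = 0, so the row
                              # space of B is contained in the nullspace
                              # of A (and both are planes, so they agree)

n = np.array([1, 2, -1])      # the row of A, normal to P'
assert np.all(B @ n == 0)     # n is perpendicular to both basis vectors
print("P' is the nullspace of A and the row space of B")
```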

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 3.1.20

Exercise 3.1.20. Suppose S is a subspace of \mathbb{R}^n. Show that (S^\perp)^\perp = S. What does this mean?

Answer: We first consider the case where S = \{0\}; in other words, S contains only the zero vector. From exercise 3.1.18 we know that S^\perp = \{0\}^\perp = \mathbb{R}^n. The only vector that is orthogonal to all vectors in \mathbb{R}^n is the zero vector, so the zero vector is the only member of (S^\perp)^\perp = (\mathbb{R}^n)^\perp. We thus have S = (S^\perp)^\perp when S = \{0\}.

We next consider the case where S \ne \{0\}; in other words, S contains at least one nonzero vector (and thus has dimension at least 1). Then there must exist a linearly independent set of one or more basis vectors v_1 through v_m that span S. Consider the m by n matrix A that has the basis vectors v_1 through v_m as its rows. The row space of A is then the set of all linear combinations of v_1 through v_m and is thus equal to S itself.

Per 3D on page 138 (the Fundamental Theorem of Linear Algebra, part 2) the nullspace of A is the orthogonal complement of the row space and the row space of A is the orthogonal complement of the nullspace of A. But the row space is S, so the nullspace of A is the orthogonal complement S^\perp of S. The orthogonal complement of the nullspace of A is then (S^\perp)^\perp and is equal to the row space S.

So for all subspaces S of \mathbb{R}^n we have S = (S^\perp)^\perp.

To recap: For S = \{0\} we have S = (S^\perp)^\perp trivially. Any nonzero subspace S is the row space of some matrix A, and S^\perp is the nullspace of that matrix. The fact that S = (S^\perp)^\perp then follows from the fact that the row space of a matrix is the orthogonal complement of its nullspace and vice versa. In other words, the basis vectors of S and S^\perp are mutually orthogonal and together form a basis for \mathbb{R}^n, so no nonzero vector can be orthogonal to both S and S^\perp.
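A small numerical illustration may help make this concrete. The following sketch (using SciPy's null_space routine; the subspace S chosen here is an arbitrary example, not part of the exercise) computes S^\perp as a nullspace and then takes the complement again:

```python
# Illustrating (S-perp)-perp = S numerically; S is an arbitrary example.
import numpy as np
from scipy.linalg import null_space

# Let S be the subspace of R^3 spanned by the rows of A.
A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  3.0]])

# S-perp is the nullspace of A; make its basis vectors the rows of a matrix.
S_perp = null_space(A).T            # 1 x 3, since rank(A) = 2

# (S-perp)-perp is then the nullspace of that matrix.
S_perp_perp = null_space(S_perp).T  # 2 x 3, same dimension as S

# The double complement spans the same subspace as the rows of A:
# stacking them together does not increase the rank.
combined = np.vstack([A, S_perp_perp])
assert np.linalg.matrix_rank(combined) == np.linalg.matrix_rank(A)
print("(S-perp)-perp = S for this example")
```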


Linear Algebra and Its Applications, Exercise 3.1.19

Exercise 3.1.19. State whether each of the following is true or false:

(a) If the subspaces V and W are orthogonal, then V^\perp and W^\perp are also orthogonal.

(b) If V is orthogonal to W and W orthogonal to Z then V is orthogonal to Z.

Answer: (a) In \mathbb{R}^n suppose that V = W = \{0\}. Then since the zero vector is orthogonal to itself, V is orthogonal to W = V. However, we have V^\perp = W^\perp = \mathbb{R}^n, and \mathbb{R}^n is not orthogonal to itself, so V^\perp and W^\perp are not orthogonal. The statement is false.

(b) Suppose that V = Z = \mathbb{R}^n and W = \{0\}. Then V is orthogonal to W and W is orthogonal to Z but V is not orthogonal to Z because \mathbb{R}^n is not orthogonal to itself. The statement is false.


Linear Algebra and Its Applications, Exercise 3.1.18

Exercise 3.1.18. Suppose that S = \{0\} is the subspace of \mathbb{R}^4 containing only the origin. What is the orthogonal complement of S (S^\perp)? What is S^\perp if S is the subspace of \mathbb{R}^4 spanned by the vector (0, 0, 0, 1)?

Answer: Every vector is orthogonal to the zero vector. (In other words, v^T0 = 0 for all v.) So the orthogonal complement of S = \{0\} is the entire vector space, or in this case S^\perp = \mathbb{R}^4.

If S is spanned by the vector (0, 0, 0, 1) then all vectors in S are of the form (0, 0, 0, d). Any vector whose last entry is zero is orthogonal to vectors in S. In other words, for vectors of the form (a, b, c, 0) the inner product with a vector in S is

a \cdot 0 + b \cdot 0 + c \cdot 0 + 0 \cdot d = 0+0+0+0 = 0

The space of vectors of the form (a, b, c, 0) is spanned by the vectors (1, 0, 0, 0), (0, 1, 0, 0), and (0, 0, 1, 0), which are linearly independent and form a basis for the subspace. Thus S^\perp is the subspace of \mathbb{R}^4 with basis vectors (1, 0, 0, 0), (0, 1, 0, 0), and (0, 0, 1, 0).
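For those who want to verify this numerically, here is a short sketch using SciPy's null_space routine (not part of the original answer):

```python
# Compute the orthogonal complement of span{(0, 0, 0, 1)} in R^4.
import numpy as np
from scipy.linalg import null_space

A = np.array([[0.0, 0.0, 0.0, 1.0]])  # S is the row space of A
comp = null_space(A)                   # columns form a basis for S-perp

# The complement is 3-dimensional and every basis vector has last entry 0,
# matching the basis (1,0,0,0), (0,1,0,0), (0,0,1,0) found above.
assert comp.shape == (4, 3)
assert np.allclose(comp[3, :], 0)
print("S-perp consists of the vectors of the form (a, b, c, 0)")
```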


Linear Algebra and Its Applications, Exercise 3.1.17

Exercise 3.1.17. Suppose that V and W are subspaces of \mathbb{R}^n and are orthogonal complements. Is there a matrix A such that the row space of A is V and the nullspace of A is W? If so, show how to construct A using the basis vectors for V.

Answer: Suppose v_1 through v_m are the basis vectors for V. (Since V is a subspace of \mathbb{R}^n we will have m \le n.) Let A be an m by n matrix with row 1 equal to v_1, row 2 equal to v_2, and so on through row m equal to v_m.

The row space of A is the set of all linear combinations of the rows, which is equivalent to the set of all linear combinations of v_1 through v_m. But since v_1 through v_m is a basis set for V they span V and the set of all linear combinations of v_1 through v_m is V itself. The row space of A is therefore equal to V.

Now consider the nullspace of A, consisting of all vectors x such that Ax = 0. In that case the inner product of x with each row of A must be zero, or (put another way) x must be orthogonal to each row of A. Since the rows of A are v_1 through v_m, this means that any vector x in the nullspace of A is orthogonal to all of v_1 through v_m.

Since x is orthogonal to all of v_1 through v_m it is also orthogonal to all linear combinations of v_1 through v_m, and since v_1 through v_m form a basis set for V this means that x is orthogonal to any vector in V. All vectors in the nullspace are thus orthogonal to V. Moreover, any vector orthogonal to V (i.e., orthogonal to all vectors in V) must be orthogonal to all of v_1 through v_m and is therefore in the nullspace of A.

The nullspace of A thus contains all vectors orthogonal to V. But the set of all vectors orthogonal to V is W, the orthogonal complement to V. The nullspace of A is therefore equal to W.
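The construction is easy to carry out numerically. The following sketch uses a hypothetical basis for V in \mathbb{R}^4 (the exercise is stated abstractly, so these particular vectors are just an example):

```python
# Build A from an assumed basis of V and check its nullspace against V.
import numpy as np
from scipy.linalg import null_space

v1 = np.array([1.0, 0.0, 2.0, 0.0])  # hypothetical basis vectors for V
v2 = np.array([0.0, 1.0, 0.0, 3.0])
A = np.vstack([v1, v2])               # rows of A are the basis of V,
                                      # so the row space of A is V

W = null_space(A)                     # columns span the nullspace of A

# Every nullspace vector is orthogonal to every row of A, hence to all
# of V; the nullspace is the orthogonal complement W.
assert np.allclose(A @ W, 0)
print("nullspace of A is orthogonal to V")
```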


Linear Algebra and Its Applications, Exercise 3.1.16

Exercise 3.1.16. Describe the set of all vectors orthogonal to the vectors (1, 4, 4, 1) and (2, 9, 8, 2).

Answer: If a vector x is orthogonal to the vectors (1, 4, 4, 1) and (2, 9, 8, 2) then its inner products with those vectors must be zero, so that

1 \cdot x_1 + 4 \cdot x_2 + 4 \cdot x_3 + 1 \cdot x_4 = 0

and

2 \cdot x_1 + 9 \cdot x_2 + 8 \cdot x_3 + 2 \cdot x_4 = 0

This is a system of two equations in four unknowns:

\begin{array}{rcrcrcrcl} x_1&+&4x_2&+&4x_3&+&x_4&=&0 \\ 2x_1&+&9x_2&+&8x_3&+&2x_4&=&0 \end{array}

and is equivalent to the system Ax = 0 where

A = \begin{bmatrix} 1&4&4&1 \\ 2&9&8&2 \end{bmatrix}

In other words, the problem of finding all vectors orthogonal to (1, 4, 4, 1) and (2, 9, 8, 2) is equivalent to the problem of finding the nullspace of the 2 by 4 matrix A for which (1, 4, 4, 1) and (2, 9, 8, 2) are the rows.

We use Gaussian elimination to solve the system, starting by subtracting 2 times row 1 from row 2:

\begin{bmatrix} 1&4&4&1 \\ 2&9&8&2 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&4&4&1 \\ 0&1&0&0 \end{bmatrix}

The resulting echelon matrix has two pivots, in columns 1 and 2, so x_1 and x_2 are basic variables and x_3 and x_4 are free variables.

Setting x_3 = 1 and x_4 = 0, from row 2 we have x_2 = 0 and from row 1 we then have x_1 + 4x_2 + 4x_3 + x_4 = x_1 + 0 + 4 + 0 = 0 or x_1 = -4. So (-4, 0, 1, 0) is one solution to Ax = 0. Setting x_3 = 0 and x_4 = 1, from row 2 we have x_2 = 0 again and from row 1 we have x_1 + 4x_2 + 4x_3 + x_4 = x_1 + 0 + 0 + 1 = 0 or x_1 = -1. So (-1, 0, 0, 1) is a second solution to Ax = 0.

The vectors (-4, 0, 1, 0) and (-1, 0, 0, 1) together form a basis for the nullspace of A. That nullspace, i.e., all linear combinations of (-4, 0, 1, 0) and (-1, 0, 0, 1), is the set of all vectors orthogonal to the vectors (1, 4, 4, 1) and (2, 9, 8, 2).
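A quick NumPy check (not part of the original answer) confirms that both basis vectors solve Ax = 0:

```python
import numpy as np

A = np.array([[1, 4, 4, 1],
              [2, 9, 8, 2]])
basis = np.array([[-4, 0, 1, 0],
                  [-1, 0, 0, 1]])

# Both vectors solve Ax = 0, so every linear combination of them is
# orthogonal to (1, 4, 4, 1) and (2, 9, 8, 2).
assert np.all(A @ basis.T == 0)
print("nullspace basis verified")
```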


Linear Algebra and Its Applications, Exercise 3.1.15

Exercise 3.1.15. Is there a matrix such that the vector (1, 2, 1) is in the row space of the matrix and the vector (1, -2, 1) is in the nullspace of the matrix?

Answer: The row space of any matrix A is the orthogonal complement to the nullspace of A, so that any vector in the nullspace must be orthogonal to any vector in the row space and vice versa. In other words, the inner products of any such pairs of vectors must be zero.

Suppose that the vector (1, 2, 1) is in the row space of some matrix A. The inner product of (1, 2, 1) with (1, -2, 1) is

1 \cdot 1 + 2 \cdot (-2) + 1 \cdot 1 = 1 - 4 + 1 = -2

Since the inner product of the two vectors is not zero, the vector (1, -2, 1) cannot be in the nullspace of A.

Similarly, assume that (1, -2, 1) is in the nullspace of some matrix B. Then because the inner product with (1, 2, 1) is nonzero, the vector (1, 2, 1) cannot be in the row space of B.

We conclude that no matrix exists for which (1, 2, 1) is in the row space and (1, -2, 1) is in the nullspace.
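The whole argument rests on a single inner product, which takes one line to check numerically (a NumPy sketch):

```python
import numpy as np

u = np.array([1, 2, 1])    # candidate row space vector
v = np.array([1, -2, 1])   # candidate nullspace vector

# The inner product is -2, not 0, so no such matrix exists.
assert u @ v == -2
print("inner product:", u @ v)
```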


Linear Algebra and Its Applications, Exercise 3.1.14

Exercise 3.1.14. Given two vectors x and y in \mathbb{R}^n, show that their difference x-y is orthogonal to their sum x+y if and only if their lengths \|x\| and \|y\| are the same.

Answer: First we assume that x-y is orthogonal to x+y. This means that their inner product must be zero, or

0 = \sum_{i=1}^n (x_i-y_i)(x_i+y_i)

= \sum_{i=1}^n (x_i^2 - y_ix_i + x_iy_i - y_i^2)

= \sum_{i=1}^n (x_i^2 - y_i^2)

= \sum_{i=1}^n x_i^2 - \sum_{i=1}^n y_i^2

= \|x\|^2 - \|y\|^2

So we have \|x\|^2 - \|y\|^2 = 0, which implies that \|x\|^2 = \|y\|^2 or \|x\| = \|y\| (keeping in mind that the length of a vector must be nonnegative).

So if x-y is orthogonal to x+y then we must have \|x\| = \|y\|.

Now suppose that \|x\| = \|y\|. This implies that \|x\|^2 = \|y\|^2 or \|x\|^2 - \|y\|^2 = 0. We then use the definitions of \|x\| and \|y\| to run the argument above in reverse:

0 = \sum_{i=1}^n x_i^2 - \sum_{i=1}^n y_i^2

= \sum_{i=1}^n (x_i^2 - y_i^2)

= \sum_{i=1}^n (x_i^2 - y_ix_i + x_iy_i - y_i^2)

= \sum_{i=1}^n (x_i-y_i)(x_i+y_i)

Since \sum_{i=1}^n (x_i-y_i)(x_i+y_i) = 0 we see that x-y is orthogonal to x+y.

So if \|x\| = \|y\| then x-y must be orthogonal to x+y.

Combining the two proofs above, we have shown that for any two vectors x and y in \mathbb{R}^n, x-y is orthogonal to x+y if and only if \|x\| = \|y\|.
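Here is a small numerical spot check of the equivalence (a NumPy sketch with example vectors; it illustrates the result but of course does not replace the proof):

```python
import numpy as np

x = np.array([3.0, 4.0])    # length 5
y = np.array([5.0, 0.0])    # also length 5: equal lengths ...
assert np.isclose((x - y) @ (x + y), 0)        # ... so x-y is orthogonal to x+y

y2 = np.array([1.0, 0.0])   # length 1, different from x ...
assert not np.isclose((x - y2) @ (x + y2), 0)  # ... so not orthogonal
print("equivalence holds on these examples")
```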


Linear Algebra and Its Applications, Exercise 3.1.13

Exercise 3.1.13. Provide a picture showing the action of A^T in sending the column space of A to the row space and the left nullspace to zero.

Answer: I’m leaving this post as a placeholder until I have time to illustrate this.

The basic idea is that just as the row space and the nullspace are orthogonal complements, so are the column space and the left nullspace. Thus a given vector y should be decomposable into a column space component y_c and a left nullspace component y_{ln}.

Multiplying y_{ln} by A^T produces A^Ty_{ln} = 0, since y_{ln} being in the left nullspace means y_{ln}^TA = 0^T, or equivalently A^Ty_{ln} = 0. Multiplying y_c by A^T produces A^Ty_c, which is a linear combination of the columns of A^T. But the columns of A^T are the rows of A, so A^Ty_c is a linear combination of the rows of A and is therefore in the row space of A.

So multiplication by A^T sends every vector in the left nullspace of A to zero, and every vector in the column space of A into the row space of A.
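In lieu of the picture, the following sketch demonstrates the same action numerically. It uses a hypothetical rank-1 matrix (chosen so that both the column space and the left nullspace are proper subspaces of \mathbb{R}^2):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])       # rank 1: column space = span{(1,3)},
                                 # left nullspace = span{(-3,1)}
y = np.array([1.0, 0.0])

ln = null_space(A.T)             # orthonormal basis for the left nullspace
y_ln = ln @ (ln.T @ y)           # left nullspace component of y
y_c = y - y_ln                   # column space component of y

assert np.allclose(A.T @ y_ln, 0)   # A^T sends y_ln to zero
r = A.T @ y_c                       # A^T sends y_c into the row space:
assert np.linalg.matrix_rank(np.vstack([A, r])) == 1
print("A^T maps y_c into the row space and y_ln to zero")
```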


Linear Algebra and Its Applications, Exercise 3.1.12

Exercise 3.1.12. For the matrix

A = \begin{bmatrix} 1&0&2 \\ 1&1&4 \end{bmatrix}

find a basis for the nullspace and show that it is orthogonal to the row space. Take the vector x = (3, 3, 3) and express it as the sum of a nullspace component x_n and a row space component x_r.

Answer: We use Gaussian elimination to solve the system Ax = 0. We start by subtracting 1 times row 1 from row 2:

\begin{bmatrix} 1&0&2 \\ 1&1&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&2 \\ 0&1&2 \end{bmatrix}

The resulting echelon matrix has two pivots, in columns 1 and 2, so x_1 and x_2 are basic variables and x_3 is a free variable. Setting x_3 = 1, from the second row we have x_2 + 2x_3 = x_2 + 2 = 0 or x_2 = -2. From the first row we then have x_1 + 2x_3 = x_1 + 2 = 0 or x_1 = -2. The vector (-2, -2, 1) is therefore a solution to Ax = 0 and a basis for the nullspace.
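A quick NumPy check (not part of the original answer) confirms that (-2, -2, 1) solves Ax = 0 and spans the nullspace:

```python
import numpy as np

A = np.array([[1, 0, 2],
              [1, 1, 4]])
n = np.array([-2, -2, 1])

assert np.all(A @ n == 0)                 # n solves Ax = 0
assert np.linalg.matrix_rank(A) == 2      # rank 2, so the nullspace has
                                          # dimension 3 - 2 = 1; n spans it
print("nullspace basis verified")
```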

Since (-2, -2, 1) is in the nullspace so is its negative, (2, 2, -1), which is an alternative basis for the nullspace. Note that the inner product of u = (2, 2, -1) with v = (1, 0, 2) is

u^Tv = 2 \cdot 1 + 2 \cdot 0 + (-1) \cdot 2 = 2 + 0 - 2 = 0

and that the inner product of u = (2, 2, -1) with w = (1, 1, 4) is

u^Tw = 2 \cdot 1 + 2 \cdot 1 + (-1) \cdot 4 = 2 + 2 - 4 = 0

The basis of the nullspace u = (2, 2, -1) is thus orthogonal to each of the basis vectors for the row space, v = (1, 0, 2) and w = (1, 1, 4). The row space as a whole consists of all linear combinations c_1v+c_2w of those basis vectors, where c_1 and c_2 are scalars.

We then have

u^T(c_1v+c_2w) = c_1u^Tv + c_2u^Tw = c_1 \cdot 0 + c_2 \cdot 0 = 0

so that the basis u = (2, 2, -1) of the nullspace is orthogonal to all vectors in the row space.

Note that we have

x = (3, 3, 3) = (2, 2, -1) + (1, 1, 4) = u + w

Since the vector u = (2, 2, -1) is in the nullspace and the vector w = (1, 1, 4) is in the row space (being the second row of A), we have x = x_n + x_r where x_n = u = (2, 2, -1) and x_r = w = (1, 1, 4).
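Finally, a short NumPy sketch verifying the decomposition:

```python
import numpy as np

A = np.array([[1, 0, 2],
              [1, 1, 4]])
x = np.array([3, 3, 3])
x_n = np.array([2, 2, -1])   # nullspace component
x_r = np.array([1, 1, 4])    # row space component (second row of A)

assert np.all(x_n + x_r == x)   # the two parts sum to x
assert np.all(A @ x_n == 0)     # x_n is in the nullspace
assert x_n @ x_r == 0           # the components are orthogonal
print("x = x_n + x_r verified")
```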
