Linear Algebra and Its Applications, Exercise 3.1.11

Exercise 3.1.11. Fredholm’s alternative to the fundamental theorem of linear algebra states that for any matrix A and vector b either

1) Ax = b

has a solution or

2) A^Ty = 0, y^Tb \ne 0

has a solution, but not both. Show that assuming both (1) and (2) have solutions leads to a contradiction.

Answer: Suppose that for a given matrix A and vector b both (1) and (2) have solutions. In other words, Ax = b for some x and A^Ty = 0 for some y with y^Tb \ne 0.

Since A^Ty = 0 we also have (A^Ty)^T = 0. But (A^Ty)^T = y^T(A^T)^T = y^TA, so that y^TA = 0. Multiplying both sides on the right by x we have y^TAx = 0. But Ax = b, so the equation y^TAx = 0 reduces to y^Tb = 0, contrary to our original assumption that y^Tb \ne 0.

We have thus shown that (1) and (2) cannot both have solutions at the same time: either (1) has a solution or (2) does, but not both.
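
As a quick numerical illustration (a NumPy sketch I am adding, using a sample rank-deficient matrix of my own choosing rather than one from the book), once Ax = b is solvable, every solution y of A^Ty = 0 satisfies y^Tb = 0:

import numpy as np

# Sample rank-1 matrix (an arbitrary choice for illustration)
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
x = np.array([1.0, 1.0])
b = A @ x                      # b lies in the column space of A, so (1) holds

# y spans the left nullspace of A: A^T y = 0
y = np.array([2.0, -1.0])
print(A.T @ y)                 # [0. 0.] -- y solves A^T y = 0
print(y @ b)                   # 0.0     -- y^T b vanishes, so (2) fails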

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 3.1.10

Exercise 3.1.10. Given the two vectors (1, 1, 2) and (1, 2, 3) find a homogeneous system in three unknowns whose solutions are the linear combinations of the vectors.

Answer: In the previous exercise 3.1.9 we showed that the plane spanned by the vectors (1, 1, 2) and (1, 2, 3) is orthogonal to the line passing through the origin and (-1, -1, 1). This means that for any vector x in that plane the inner product of x and (-1, -1, 1) is zero, or

(-1) \cdot x_1 + (-1) \cdot x_2 + 1 \cdot x_3 = 0

So we have -x_1 - x_2 + x_3 = 0 for all vectors x in the plane spanned by (1, 1, 2) and (1, 2, 3). But that plane is simply the set of all linear combinations of (1, 1, 2) and (1, 2, 3). So -x_1 - x_2 + x_3 = 0 is a homogeneous system (of one equation) in three unknowns whose solutions are exactly the linear combinations of (1, 1, 2) and (1, 2, 3).

(Note that we can multiply the system by -1 on both sides to obtain an equivalent system x_1 + x_2 - x_3 = 0.)
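
As a quick check (a NumPy sketch I am adding; the coefficients c1 and c2 are arbitrary choices), every linear combination of the two vectors satisfies the equation:

import numpy as np

v1 = np.array([1.0, 1.0, 2.0])
v2 = np.array([1.0, 2.0, 3.0])
c1, c2 = 2.5, -4.0                 # arbitrary coefficients
x = c1 * v1 + c2 * v2              # a generic vector in the plane
print(-x[0] - x[1] + x[2])         # 0.0 for every choice of c1 and c2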

Linear Algebra and Its Applications, Exercise 3.1.9

Exercise 3.1.9. For the plane in \mathbb{R}^3 spanned by the vectors (1, 1, 2) and (1, 2, 3) find the orthogonal complement (i.e., the line in \mathbb{R}^3 perpendicular to the plane). Note that this can be done by solving the system Ax = 0 where the two vectors are the rows of A.

Answer: We have

A = \begin{bmatrix} 1&1&2 \\ 1&2&3 \end{bmatrix}

We solve the system Ax = 0 using Gaussian elimination, starting by subtracting 1 times row 1 from row 2:

\begin{bmatrix} 1&1&2 \\ 1&2&3 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&2 \\ 0&1&1 \end{bmatrix}

Since the resulting echelon matrix has pivots in columns 1 and 2, we have x_1 and x_2 as basic variables and x_3 as a free variable. Setting x_3 = 1, from the second row we have x_2 + x_3 = x_2 + 1 = 0 or x_2 = -1. From the first row we have x_1 + x_2 + 2x_3 = x_1 - 1 + 2 = 0 or x_1 = -1.

So (-1, -1, 1) is a solution to Ax = 0 and a basis for the orthogonal complement, which consists of the line through the origin and (-1, -1, 1).
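
As a quick numerical check (a NumPy sketch, not part of the original solution), we can confirm that (-1, -1, 1) is orthogonal to both spanning vectors:

import numpy as np

# The rows of A are the two spanning vectors
A = np.array([[1.0, 1.0, 2.0],
              [1.0, 2.0, 3.0]])
x = np.array([-1.0, -1.0, 1.0])
print(A @ x)                       # [0. 0.] -- x is orthogonal to both rows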

Linear Algebra and Its Applications, Exercise 3.1.8

Exercise 3.1.8. Suppose that V and W are orthogonal subspaces. Show that their intersection consists only of the zero vector.

Answer: If V and W are orthogonal then we have v^Tw = 0 for any vectors v in V and w in W. Suppose that x is an element of both V and W. Then taking v = w = x gives x^Tx = 0. But x^Tx = \|x\|^2, so \|x\| = 0 and thus x = 0.

So if V and W are orthogonal then V \cap W = \{0\}.

Linear Algebra and Its Applications, Exercise 3.1.7

Exercise 3.1.7. For the matrix

A = \begin{bmatrix} 1&2&1 \\ 2&4&3 \\ 3&6&4 \end{bmatrix}

find vectors x and y such that x is orthogonal to the row space of A and y is orthogonal to the column space of A.

Answer: The nullspace of A is orthogonal to the row space of A. We can therefore find a suitable vector x by solving the system Ax = 0.

To solve the system we perform Gaussian elimination. We start by subtracting 2 times row 1 from row 2:

\begin{bmatrix} 1&2&1 \\ 2&4&3 \\ 3&6&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&1 \\ 0&0&1 \\ 3&6&4 \end{bmatrix}

and then subtract 3 times row 1 from row 3:

\begin{bmatrix} 1&2&1 \\ 0&0&1 \\ 3&6&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&1 \\ 0&0&1 \\ 0&0&1 \end{bmatrix}

Finally we subtract row 2 from row 3:

\begin{bmatrix} 1&2&1 \\ 0&0&1 \\ 0&0&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&1 \\ 0&0&1 \\ 0&0&0 \end{bmatrix}

The echelon matrix has 2 pivots in columns 1 and 3, so x_1 and x_3 are basic variables and x_2 is a free variable.

Setting x_2 = 1, from row 2 we have x_3 = 0, and from row 1 we have x_1 + 2x_2 + x_3 = x_1 + 2 + 0 = 0 or x_1 = -2. So the vector x = (-2, 1, 0) is a solution to the system, a basis for the nullspace of A, and a vector orthogonal to the row space of A.
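
As a quick check (a NumPy sketch added here, not part of the original solution), multiplying A by x confirms that x lies in the nullspace and is thus orthogonal to every row:

import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 3.0],
              [3.0, 6.0, 4.0]])
x = np.array([-2.0, 1.0, 0.0])
print(A @ x)                       # [0. 0. 0.]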

The left nullspace of A is orthogonal to the column space of A. We can therefore find a suitable vector y by solving the system A^Ty = 0.

To solve the system we perform Gaussian elimination. We start by subtracting 2 times row 1 from row 2:

\begin{bmatrix} 1&2&3 \\ 2&4&6 \\ 1&3&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&3 \\ 0&0&0 \\ 1&3&4 \end{bmatrix}

and then subtract 1 times row 1 from row 3:

\begin{bmatrix} 1&2&3 \\ 0&0&0 \\ 1&3&4 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&3 \\ 0&0&0 \\ 0&1&1 \end{bmatrix}

Finally we exchange rows 2 and 3:

\begin{bmatrix} 1&2&3 \\ 0&0&0 \\ 0&1&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&3 \\ 0&1&1 \\ 0&0&0 \end{bmatrix}

The resulting echelon matrix has 2 pivots in columns 1 and 2, so y_1 and y_2 are basic variables and y_3 is a free variable.

Setting y_3 = 1, from row 2 we have y_2 + y_3 = y_2 + 1 = 0 or y_2 = -1. From row 1 we have y_1 + 2y_2 + 3y_3 = y_1 - 2 + 3 = 0 or y_1 = -1. So the vector y = (-1, -1, 1) is a solution to the system, a basis for the left nullspace of A, and a vector orthogonal to the column space of A.
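
Again as a quick check (a NumPy sketch of my own), multiplying A^T by y confirms that y lies in the left nullspace and is thus orthogonal to every column of A:

import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 3.0],
              [3.0, 6.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0])
print(A.T @ y)                     # [0. 0. 0.]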

Linear Algebra and Its Applications, Exercise 3.1.6

Exercise 3.1.6. What vectors are orthogonal to (1, 1, 1) and (1, -1, 0) in \mathbb{R}^3? From these vectors create a set of three orthonormal vectors (mutually orthogonal with unit length).

Answer: If x = (x_1, x_2, x_3) is a vector orthogonal to both (1, 1, 1) and (1, -1, 0) then the inner product of x with both vectors must be zero. This corresponds to the system Ax = 0 where

A = \begin{bmatrix} 1&1&1 \\ 1&-1&0 \end{bmatrix}

To solve the system we perform Gaussian elimination. We start by subtracting 1 times row 1 from row 2:

\begin{bmatrix} 1&1&1 \\ 1&-1&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&1 \\ 0&-2&-1 \end{bmatrix}

The echelon matrix has 2 pivots in columns 1 and 2, so x_1 and x_2 are basic variables and x_3 is a free variable.

Setting x_3 = 1, from row 2 we have -2x_2 - x_3 = -2x_2 - 1 = 0 or x_2 = -\frac{1}{2}. From row 1 we have x_1 + x_2 + x_3 = x_1 - \frac{1}{2} + 1 = 0 or x_1 = -\frac{1}{2}. So the vector (-\frac{1}{2}, -\frac{1}{2}, 1) is a solution to the system and thus a vector orthogonal to (1, 1, 1) and (1, -1, 0).

Note that scalar multiples of the vector (-\frac{1}{2}, -\frac{1}{2}, 1) are all orthogonal to (1, 1, 1) and (1, -1, 0). Also note that the vectors (1, 1, 1) and (1, -1, 0) are orthogonal to one another.

To produce orthonormal vectors we can take the vectors above and divide them by their lengths. The length of (1, 1, 1) is \sqrt{1^2+1^2+1^2} = \sqrt{3}, the length of (1, -1, 0) is \sqrt{1^2 + (-1)^2 + 0^2} = \sqrt{2}, and the length of (-\frac{1}{2}, -\frac{1}{2}, 1) is

\sqrt{(-\frac{1}{2})^2 + (-\frac{1}{2})^2 + 1^2} = \sqrt{\frac{3}{2}}

The three orthonormal vectors are then as follows:

\begin{bmatrix} \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{3}} \end{bmatrix} \qquad \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \\ 0 \end{bmatrix} \qquad \begin{bmatrix} -\frac{1}{\sqrt{6}} \\ -\frac{1}{\sqrt{6}} \\ \frac{2}{\sqrt{6}} \end{bmatrix}
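
As a quick check (a NumPy sketch I am adding), stacking the three vectors as the columns of a matrix Q and computing Q^TQ should give the identity matrix if the columns are orthonormal:

import numpy as np

q1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
q2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
q3 = np.array([-0.5, -0.5, 1.0]) / np.sqrt(1.5)
Q = np.column_stack([q1, q2, q3])
print(np.round(Q.T @ Q, 10))       # 3x3 identity matrix, up to rounding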

Linear Algebra and Its Applications, Exercise 3.1.5

Exercise 3.1.5. Of the following vectors

v_1 = \begin{bmatrix} 1 \\ 2 \\ -2 \\ 1 \end{bmatrix} \qquad v_2 = \begin{bmatrix} 4 \\ 0 \\ 4 \\ 0 \end{bmatrix} \qquad v_3 = \begin{bmatrix} 1 \\ -1 \\ -1 \\ -1 \end{bmatrix}

which are orthogonal to one another?

Answer: We have the following inner products among the vectors:

v_1^Tv_2 = 1 \cdot 4 + 2 \cdot 0 + (-2) \cdot 4 + 1 \cdot 0 = 4 + 0 - 8 + 0 = -4

v_1^Tv_3 = 1 \cdot 1 + 2 \cdot (-1) + (-2) \cdot (-1) + 1 \cdot (-1) = 1 - 2 + 2 - 1 = 0

v_2^Tv_3 = 4 \cdot 1 + 0 \cdot (-1) + 4 \cdot (-1) + 0 \cdot (-1) = 4 + 0 - 4 + 0 = 0

So v_1 and v_2 are orthogonal to v_3 but not to each other.
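
These inner products are easy to check numerically (a NumPy sketch, not part of the original solution):

import numpy as np

v1 = np.array([1, 2, -2, 1])
v2 = np.array([4, 0, 4, 0])
v3 = np.array([1, -1, -1, -1])
print(v1 @ v2, v1 @ v3, v2 @ v3)   # -4 0 0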

Linear Algebra and Its Applications, Exercise 3.1.4

Exercise 3.1.4. If B is an invertible matrix, explain why row i of B and column j of B^{-1} are orthogonal when i \ne j.

Answer: We have BB^{-1} = I. The identity matrix I has ones on the diagonal (i.e., when i = j) and zeros otherwise (when i \ne j).

By the rules of matrix multiplication the ij entry of the matrix product BB^{-1} is equal to the inner product of row i of B with column j of B^{-1}. Since BB^{-1} = I that inner product must be zero when i \ne j, so that row i of B and column j of B^{-1} are orthogonal vectors in that case.
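
As a quick illustration (a NumPy sketch with a sample invertible matrix of my own choosing, not from the exercise):

import numpy as np

B = np.array([[2.0, 1.0],
              [1.0, 1.0]])
Binv = np.linalg.inv(B)            # [[1, -1], [-1, 2]]
print(B[0] @ Binv[:, 1])           # 0.0 -- row 1 of B vs column 2 of B^{-1}
print(np.round(B @ Binv, 10))      # the identity matrix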

Linear Algebra and Its Applications, Exercise 3.1.3

Exercise 3.1.3. In the xy plane two lines are perpendicular if the product of their slopes is -1. Use this fact to derive the condition for two vectors (x_1, x_2) and (y_1, y_2) being orthogonal.

Answer: The line through the origin and (x_1, x_2) has slope x_2/x_1, and the line through the origin and (y_1, y_2) has slope y_2/y_1 (assuming x_1 and y_1 are nonzero). If the two lines are perpendicular we have

(x_2/x_1)(y_2/y_1) = -1

Multiplying both sides of the equation by x_1y_1 we have

x_2y_2 = -x_1y_1

or

x_1y_1 + x_2y_2 = 0

which is the condition for the two vectors (x_1, x_2) and (y_1, y_2) being orthogonal.
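
As a quick numerical example (a NumPy sketch I am adding; the two vectors are arbitrary choices), take vectors whose lines have slopes 2 and -1/2:

import numpy as np

x = np.array([1.0, 2.0])               # slope x_2/x_1 = 2
y = np.array([-2.0, 1.0])              # slope y_2/y_1 = -1/2
print((x[1] / x[0]) * (y[1] / y[0]))   # -1.0 -- the lines are perpendicular
print(x @ y)                           # 0.0  -- x_1y_1 + x_2y_2 = 0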

Linear Algebra and Its Applications, Exercise 3.1.2

Exercise 3.1.2. For \mathbb{R}^2 give an example of linearly independent vectors that are not mutually orthogonal, as well as mutually orthogonal vectors that are not linearly independent.

Answer: The vectors (1, 0) and (1, 1) are linearly independent, since the second vector cannot be expressed as a scalar times the first vector. However, the two vectors are not orthogonal since their inner product is 1 \cdot 1 + 0 \cdot 1 = 1.

The zero vector (0, 0) is orthogonal to every vector in \mathbb{R}^2, including itself, but any set of vectors that contains it is linearly dependent.
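
Both examples are easy to verify numerically (a NumPy sketch, not part of the original solution):

import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
print(np.linalg.det(np.column_stack([u, v])))   # 1.0 -> linearly independent
print(u @ v)                                    # 1.0 -> not orthogonal
print(np.zeros(2) @ v)                          # 0.0 -> zero vector is orthogonal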
