## Linear Algebra and Its Applications, Exercise 3.3.13

Exercise 3.3.13. Using least squares, find the line that is the best fit to the following measurements:

$b = 4$ at $t = -2$

$b = 3$ at $t = -1$

$b = 1$ at $t = 0$

$b = 0$ at $t = 2$

Also, given the matrix

$A = \begin{bmatrix} 1&-2 \\ 1&-1 \\ 1&0 \\ 1&2 \end{bmatrix}$

find the projection of $b = (4, 3, 1, 0)$ onto the column space $\mathcal{R}(A)$.

Answer: Assuming that the line in question has the form $C + Dt$ the problem can be expressed as that of finding a solution to the system

$\begin{bmatrix} 1&-2 \\ 1&-1 \\ 1&0 \\ 1&2 \end{bmatrix} \begin{bmatrix} C \\ D \end{bmatrix} = \begin{bmatrix} 4 \\ 3 \\ 1 \\ 0 \end{bmatrix}$

or $Ax = b$, where $x = (C, D)$.

In this case there is no exact solution, so we look for the least squares solution $\bar{x} = (\bar{C}, \bar{D})$ that minimizes the length of the error vector $b-A\bar{x}$. The error is smallest when $b-A\bar{x}$ is orthogonal to the column space of $A$, that is, when it lies in the left nullspace of $A$. We then have $A^T(b - A\bar{x}) = 0$, so that $\bar{x}$ is a solution to the system $A^TA\bar{x} = A^Tb$.

We have

$A^TA = \begin{bmatrix} 1&1&1&1 \\ -2&-1&0&2 \end{bmatrix} \begin{bmatrix} 1&-2 \\ 1&-1 \\ 1&0 \\ 1&2 \end{bmatrix} = \begin{bmatrix} 4&-1 \\ -1&9 \end{bmatrix}$

and

$A^Tb = \begin{bmatrix} 1&1&1&1 \\ -2&-1&0&2 \end{bmatrix} \begin{bmatrix} 4 \\ 3 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 8 \\ -11 \end{bmatrix}$

so that the system $A^TA\bar{x} = A^Tb$ reduces to

$\begin{bmatrix} 4&-1 \\ -1&9 \end{bmatrix} \begin{bmatrix} \bar{C} \\ \bar{D} \end{bmatrix} = \begin{bmatrix} 8 \\ -11 \end{bmatrix}$

or

$\setlength\arraycolsep{0.2em}\begin{array}{rcrcl}4\bar{C}&-&\bar{D}&=&8 \\ -\bar{C}&+&9\bar{D}&=&-11 \end{array}$

expressed as a system of equations.

Multiplying the first equation by $\frac{1}{4}$ and adding it to the second equation produces the system

$\setlength\arraycolsep{0.2em}\begin{array}{rcrcl}4\bar{C}&-&\bar{D}&=&8 \\ &&\frac{35}{4}\bar{D}&=&-9 \end{array}$

From the second equation we have $\bar{D} = -\frac{36}{35}$. Substituting that value into the first equation we have $4\bar{C} + \frac{36}{35} = 8$ or $\bar{C} = \frac{1}{4}(8 - \frac{36}{35}) = \frac{61}{35}$.

The line of best fit is therefore $\frac{61}{35} - \frac{36}{35}t$.
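
As a numerical sanity check, here is a short NumPy sketch (the code is my own addition, not part of the book) that solves the normal equations for this data and confirms the result against NumPy's built-in least squares routine:

```python
import numpy as np

# Measurements b = (4, 3, 1, 0) at times t = (-2, -1, 0, 2); the model is C + D*t
A = np.array([[1, -2],
              [1, -1],
              [1,  0],
              [1,  2]], dtype=float)
b = np.array([4, 3, 1, 0], dtype=float)

# Solve the normal equations A^T A x = A^T b
x_bar = np.linalg.solve(A.T @ A, A.T @ b)
print(x_bar * 35)   # [ 61. -36.], i.e. C = 61/35 and D = -36/35

# np.linalg.lstsq minimizes ||b - Ax|| directly and agrees
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_bar, x_lstsq))   # True
```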

Given the matrix

$A = \begin{bmatrix} 1&-2 \\ 1&-1 \\ 1&0 \\ 1&2 \end{bmatrix}$

the projection matrix $P$ onto the column space of $A$ can be computed as

$P = A(A^TA)^{-1}A^T$

From above we have

$A^TA = \begin{bmatrix} 4&-1 \\ -1&9 \end{bmatrix}$

so that its inverse is

$(A^TA)^{-1} = \frac{1}{4 \cdot 9 - (-1)(-1)} \begin{bmatrix} 9&-(-1) \\ -(-1)&4 \end{bmatrix}$

$= \frac{1}{35} \begin{bmatrix} 9&1 \\ 1&4 \end{bmatrix}$

We then have

$P = A(A^TA)^{-1}A^T$

$= \begin{bmatrix} 1&-2 \\ 1&-1 \\ 1&0 \\ 1&2 \end{bmatrix} \frac{1}{35} \begin{bmatrix} 9&1 \\ 1&4 \end{bmatrix} \begin{bmatrix} 1&1&1&1 \\ -2&-1&0&2 \end{bmatrix}$

$= \frac{1}{35} \begin{bmatrix} 1&-2 \\ 1&-1 \\ 1&0 \\ 1&2 \end{bmatrix} \begin{bmatrix} 7&8&9&11 \\ -7&-3&1&9 \end{bmatrix}$

$= \frac{1}{35} \begin{bmatrix} 21&14&7&-7 \\ 14&11&8&2 \\ 7&8&9&11 \\ -7&2&11&29 \end{bmatrix}$

The projection of the vector $b = (4, 3, 1, 0)$ onto the column space of $A$ is then

$Pb = \frac{1}{35} \begin{bmatrix} 21&14&7&-7 \\ 14&11&8&2 \\ 7&8&9&11 \\ -7&2&11&29 \end{bmatrix} \begin{bmatrix} 4 \\ 3 \\ 1 \\ 0 \end{bmatrix}$

$= \frac{1}{35} \begin{bmatrix} 133 \\ 97 \\ 61 \\ -11 \end{bmatrix} = \begin{bmatrix} \frac{133}{35} \\ \frac{97}{35} \\ \frac{61}{35} \\ -\frac{11}{35} \end{bmatrix}$

The vector $Pb$ corresponds to the points on the least squares line of best fit $\bar{C} + \bar{D}t$ at the times $t = (-2, -1, 0, 2)$:

$\frac{61}{35} - \frac{36}{35} (-2) = \frac{133}{35}$

$\frac{61}{35} - \frac{36}{35} (-1) = \frac{97}{35}$

$\frac{61}{35} - \frac{36}{35} (0) = \frac{61}{35}$

$\frac{61}{35} - \frac{36}{35} (2) = -\frac{11}{35}$
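
The projection computation can be checked the same way; this NumPy sketch (again my own addition) reconstructs $P$, the projection $Pb$, and the fitted line values:

```python
import numpy as np

A = np.array([[1, -2],
              [1, -1],
              [1,  0],
              [1,  2]], dtype=float)
b = np.array([4, 3, 1, 0], dtype=float)

# Projection matrix onto the column space of A
P = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.round(35 * P))      # the integer matrix above (to be divided by 35)

# Projection of b, and the fitted line evaluated at each time t
print(35 * (P @ b))          # [133.  97.  61. -11.]
t = A[:, 1]
print(61/35 - 36/35 * t)     # the same four values, now as decimals
```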

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fifth Edition and the accompanying free online course, and Dr Strang’s other books.

## Linear Algebra and Its Applications, Exercise 3.3.12

Exercise 3.3.12. Given the subspace $V$ spanned by the two vectors $(1, 1, 0, 1)$ and $(0, 0, 1, 0)$ find the following:

a) a set of basis vectors for $V^\perp$

b) the matrix $P$ that projects onto $V$

c) the vector in $V$ that has the minimum distance to the vector $b = (0, 1, 0, -1)$ in $V^\perp$

Answer: a) The subspace $V$ is the column space $\mathcal{R}(A)$ for the matrix

$A = \begin{bmatrix} 1&0 \\ 1&0 \\ 0&1 \\ 1&0 \end{bmatrix}$

for which the vectors $(1, 1, 0, 1)$ and $(0, 0, 1, 0)$ are the columns. The orthogonal complement $V^\perp$ corresponds to the left nullspace $\mathcal{N}(A^T)$, so we can find a basis for $V^\perp$ by solving the system of equations corresponding to $A^Ty =0$:

$\begin{bmatrix} 1&1&0&1 \\ 0&0&1&0 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} = 0$

in matrix form or

$\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcrcl}y_{1}&+&y_{2}&+&&&y_{4}&=&0 \\ &&&&y_{3}&&&=&0 \end{array}$

expressed as a system of equations.

The matrix $A^T$ is already in echelon form, with $y_1$ and $y_3$ as basic variables and $y_2$ and $y_4$ as free variables. Setting $y_2 = 1$ and $y_4 = 0$ we have $y_1 = -1$ and $y_3 = 0$. Setting $y_2 = 0$ and $y_4 = 1$ we again have $y_1 = -1$ and $y_3 = 0$. So two solutions to $A^Ty = 0$ are $(-1, 1, 0, 0)$ and $(-1, 0, 0, 1)$ and these form a basis for $V^\perp = \mathcal{N}(A^T)$.
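
As a quick numerical check (the NumPy code is my addition), both basis vectors are indeed in the left nullspace of $A$:

```python
import numpy as np

A = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [1, 0]], dtype=float)

# Both basis vectors for V-perp satisfy A^T y = 0
for y in ([-1, 1, 0, 0], [-1, 0, 0, 1]):
    print(A.T @ np.array(y, dtype=float))   # [0. 0.] both times
```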

b) Since $V$ is the column space for the matrix $A$ above, the projection matrix onto $V$ is $P = A(A^TA)^{-1}A^T$. We have

$A^TA = \begin{bmatrix} 1&1&0&1 \\ 0&0&1&0 \end{bmatrix} \begin{bmatrix} 1&0 \\ 1&0 \\ 0&1 \\ 1&0 \end{bmatrix} = \begin{bmatrix} 3&0 \\ 0&1 \end{bmatrix}$

Since $A^TA$ is a diagonal matrix its inverse is simply

$(A^TA)^{-1} = \begin{bmatrix} \frac{1}{3}&0 \\ 0&1 \end{bmatrix}$

We then have

$P = A(A^TA)^{-1}A^T$

$= \begin{bmatrix} 1&0 \\ 1&0 \\ 0&1 \\ 1&0 \end{bmatrix} \begin{bmatrix} \frac{1}{3}&0 \\ 0&1 \end{bmatrix} \begin{bmatrix} 1&1&0&1 \\ 0&0&1&0 \end{bmatrix}$

$= \begin{bmatrix} \frac{1}{3}&0 \\ \frac{1}{3}&0 \\ 0&1 \\ \frac{1}{3}&0 \end{bmatrix} \begin{bmatrix} 1&1&0&1 \\ 0&0&1&0 \end{bmatrix}$

$= \begin{bmatrix} \frac{1}{3}&\frac{1}{3}&0&\frac{1}{3} \\ \frac{1}{3}&\frac{1}{3}&0&\frac{1}{3} \\ 0&0&1&0 \\ \frac{1}{3}&\frac{1}{3}&0&\frac{1}{3} \end{bmatrix}$

c) The vector in $V$ closest to the vector $b = (0, 1, 0, -1)$ is simply the projection $Pb$ of $b$ onto $V$. But since $b$ is in $V^\perp$ it is orthogonal to all vectors in $V$ and its projection $Pb$ onto $V$ is the zero vector.

This can also be seen by explicitly doing the matrix multiplication:

$Pb = \begin{bmatrix} \frac{1}{3}&\frac{1}{3}&0&\frac{1}{3} \\ \frac{1}{3}&\frac{1}{3}&0&\frac{1}{3} \\ 0&0&1&0 \\ \frac{1}{3}&\frac{1}{3}&0&\frac{1}{3} \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} = 0$
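
The same result in NumPy (my addition), building $P$ from the formula and applying it to $b$:

```python
import numpy as np

A = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [1, 0]], dtype=float)

# Projection matrix onto V = R(A)
P = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.round(P, 4))        # the matrix of thirds computed above

# b lies in V-perp, so its projection onto V is the zero vector
b = np.array([0, 1, 0, -1], dtype=float)
print(P @ b)                 # [0. 0. 0. 0.]
```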

UPDATE: Corrected a typo in the statement of question (b).

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fifth Edition and the accompanying free online course, and Dr Strang’s other books.

## Linear Algebra and Its Applications, Exercise 3.3.11

Exercise 3.3.11. Suppose that $S$ is a subspace with orthogonal complement $S^{\perp}$, with $P$ a projection matrix onto $S$ and $Q$ a projection matrix onto $S^{\perp}$. What are $P+Q$ and $PQ$? Also, show that $P-Q$ is its own inverse.

Answer: Given any vector $v$ we have $(P+Q)v = Pv + Qv$ where $Pv$ is the projection of $v$ onto $S$ and $Qv$ is the projection of $v$ onto $S^{\perp}$. Since $S$ and $S^\perp$ are orthogonal complements the sum of the projections $Pv$ and $Qv$ is equal to $v$ itself. So we have $(P+Q)v = Pv + Qv = v$ and since this is true for all $v$ we have $P+Q = I$.

This can be proved more formally as follows: Consider the matrix $I-Q$. It is a projection matrix, and it projects onto $S$. Moreover, the projection matrix onto a given subspace is unique. Since $P$ and $I-Q$ are both projection matrices onto $S$ we therefore have $P = I-Q$, or $P+Q = I$. The full proof is given below.

Since $P = I - Q$ we have

$PQ = (I - Q)Q = IQ - Q^2 = Q - Q = 0$.

(Applying $PQ$ to a vector $v$ first projects $v$ onto $S^\perp$ and then projects the resulting vector onto $S$. But projecting any vector in $S^\perp$ onto $S$ will produce the zero vector, since $S^\perp$ and $S$ are orthogonal.)

Finally, we have

$(P-Q)(P-Q) = [(I-Q) - Q][(I-Q) - Q]$

$= (I - 2Q)(I - 2Q) = I^2 - 2Q - 2Q + 4Q^2$

$= I - 4Q + 4Q = I$

where the last step uses the fact that $Q^2 = Q$, since $Q$ is a projection matrix.

Since $(P-Q)(P-Q) = I$ we have $(P-Q)^{-1} = P-Q$ so that $P-Q$ is its own inverse.
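
These three identities are easy to confirm numerically. The following NumPy sketch uses an arbitrary example subspace $S$ of $\mathbb R^3$ (the particular matrix $A$ is my own choice for illustration) and builds $Q$ independently of $P$ from an orthonormal basis of $S^\perp$:

```python
import numpy as np

# An example subspace S of R^3 spanned by two independent columns; the
# particular matrix is an arbitrary choice for illustration only.
A = np.array([[1, 0],
              [1, 1],
              [0, 2]], dtype=float)
P = A @ np.linalg.inv(A.T @ A) @ A.T      # projects onto S

# An orthonormal basis N for S-perp from the full SVD of A; Q = N N^T
U, _, _ = np.linalg.svd(A)
N = U[:, 2:]                 # columns of U beyond rank(A) = 2 span S-perp
Q = N @ N.T                  # projects onto S-perp

print(np.allclose(P + Q, np.eye(3)))               # True: P + Q = I
print(np.allclose(P @ Q, np.zeros((3, 3))))        # True: PQ = 0
print(np.allclose((P - Q) @ (P - Q), np.eye(3)))   # True: P - Q is its own inverse
```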

Here is the full proof that if $P$ a projection matrix onto $S$ and $Q$ a projection matrix onto $S^{\perp}$ then $P+Q = I$:

We first show that $I-Q$ is a projection matrix. The identity matrix $I$ is symmetric, and since $Q$ is a projection matrix it is also symmetric. The difference $I-Q$ is therefore symmetric as well, so that we have $(I-Q)^T = I-Q$.

We also have

$(I-Q)^2 = (I-Q)(I-Q) = I^2 - IQ - QI + Q^2$

$= I - 2Q + Q = I-Q$

where again we use $Q^2 = Q$.

Since $(I-Q)^T = I-Q$ and $(I-Q)^2 = I-Q$ we see that $I-Q$ is a projection matrix.

Onto what subspace does $I-Q$ project? For any vector $v$ we have $(I-Q)v = v - Qv$. Since $Q$ is a projection matrix the vector $Qv$ is in the space onto which $Q$ projects, and the vector $v - Qv$ is orthogonal to that space. But $Q$ projects onto $S^\perp$ so $v-Qv$ must therefore be in $(S^\perp)^\perp = S$.

We have thus shown that $I-Q$ is a projection matrix that projects onto $S$. We now show that any such projection matrix is unique.

Suppose that, like $P$, the matrix $P'$ is also a projection matrix onto $S$. Consider the vector $(P - P')v$ for any vector $v$. We have

$\|(P - P')v\|^2 = [(P-P')v]^T[(P-P')v]$

$= v^T(P-P')^T(P-P')v$

where we take advantage of the fact that $(AB)^T = B^TA^T$.

But since $P$ and $P'$ are projection matrices they are symmetric, and therefore their difference $P-P'$ is also symmetric. We thus have

$\|(P - P')v\|^2 = v^T(P-P')(P-P')v$

$= v^T[P^2 - PP' - P'P + (P')^2]v$

But since $P$ and $P'$ are projection matrices we have $P = P^2$ and $P' = (P')^2$. We thus have

$\|(P - P')v\|^2 = v^T[P - PP' - P'P + P']v$

Since $P$ is a projection matrix onto $S$ we know that $Pw = w$ for any vector $w$ in $S$. (In other words, when applied to any vector $w$ in $S$ the projection matrix $P$ projects that vector onto itself.) The same is true for $P'$ since it is also a projection matrix onto $S$. We thus have $Pw = P'w = w$ for all $w$ in $S$.

Given any vector $v$ (not necessarily in $S$) we then have $PPv = P'Pv$ since $Pv$ is a vector in $S$. Since $P$ is a projection matrix we have $P^2 = P$ and thus $PPv = Pv$, so that $Pv = P'Pv$ for all vectors $v$. This implies that $P = P'P$.

Similarly for any vector $v$ we have $P'P'v = PP'v$ since $P'v$ is a vector in $S$. Since $P'$ is a projection matrix we have $(P')^2 = P'$ and thus $P'P'v = P'v$, so that $P'v = PP'v$ for all vectors $v$. This implies that $P' = PP'$.

Since $P = P'P$ and $P' = PP'$ we have

$\|(P - P')v\|^2 = v^T(P - P' - P + P')v$

$= v^T \cdot 0 \cdot v = 0$

Since $\|(P - P')v\|^2 = 0$ we have $(P-P')v = 0$, and since this is true for any vector $v$ we must have $P-P' = 0$ or $P = P'$. A projection matrix $P$ onto a subspace $S$ is therefore unique.

Thus since $P$ is a projection matrix onto $S$ and $I-Q$ is also a projection matrix onto $S$ we have $P = I-Q$ or $P+Q=I$.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fifth Edition and the accompanying free online course, and Dr Strang’s other books.

## Linear Algebra and Its Applications, Exercise 3.3.10

Exercise 3.3.10. Given mutually orthogonal vectors $a_1$, $a_2$, and $b$ and the matrix $A$ with columns $a_1$ and $a_2$, what are $A^TA$ and $A^Tb$? What is the projection of $b$ onto the plane formed by $a_1$ and $a_2$?

Answer: We have

$A^TA = \begin{bmatrix} a_1^T \\ a_2^T \end{bmatrix} \begin{bmatrix} a_1&a_2 \end{bmatrix} = \begin{bmatrix} a_1^Ta_1&a_1^Ta_2 \\ a_2^Ta_1&a_2^Ta_2 \end{bmatrix} = \begin{bmatrix} \|a_1\|^2&0 \\ 0&\|a_2\|^2 \end{bmatrix}$

where the zero entries are the result of $a_1$ and $a_2$ being orthogonal.

Similarly we have

$A^Tb = \begin{bmatrix} a_1^T \\ a_2^T \end{bmatrix} b = \begin{bmatrix} a_1^Tb \\ a_2^Tb \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} = 0$

where the zero entries are the result of $a_1$ and $a_2$ being orthogonal to $b$.

Since $b$ is orthogonal to both $a_1$ and $a_2$ it is orthogonal to any linear combination of $a_1$ and $a_2$ and therefore is orthogonal to the plane spanned by $a_1$ and $a_2$. The projection of $b$ onto that plane is therefore the zero vector.

This also follows from the formula for the projection matrix $P$ corresponding to the matrix $A$. The projection $p$ of $b$ onto the column space of $A$ (the space spanned by $a_1$ and $a_2$) is

$p = Pb = A(A^TA)^{-1}A^Tb = A(A^TA)^{-1} \cdot 0 = 0$

since $A^Tb = 0$ as discussed above.
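
Here is a concrete numerical instance (the particular orthogonal vectors are my own choice for illustration):

```python
import numpy as np

# Three mutually orthogonal vectors in R^3 (a made-up example)
a1 = np.array([1.0, 1.0, 0.0])
a2 = np.array([1.0, -1.0, 0.0])
b  = np.array([0.0, 0.0, 5.0])
A = np.column_stack([a1, a2])

print(A.T @ A)   # diag(||a1||^2, ||a2||^2) = [[2. 0.] [0. 2.]]
print(A.T @ b)   # [0. 0.]
print(A @ np.linalg.inv(A.T @ A) @ (A.T @ b))   # [0. 0. 0.]: zero projection
```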

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fifth Edition and the accompanying free online course, and Dr Strang’s other books.

## Linear Algebra and Its Applications, Exercise 3.3.9

Exercise 3.3.9. Suppose that $P$ is a matrix such that $P = P^TP$.

a) Show that $P$ is a projection matrix.

b) If $P = 0$ then what is the subspace onto which $P$ projects?

Answer: a) To show that $P$ is a projection matrix we must show that $P = P^2$ and also that $P = P^T$. We have

$P^T = (P^TP)^T = P^T(P^T)^T = P^TP = P$

Since $P^T = P$ we then have

$P^2 = P P = P^T P = P$

Since $P = P^T = P^2$ the matrix $P$ is a projection matrix.

b) If $P = 0$ then for all vectors $v$ we have $P v = 0$. So $P$ projects onto the subspace consisting of the zero vector.
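
A small NumPy check (my addition): any concrete projection matrix, such as the one projecting onto the line through $(1, 1)$, satisfies $P = P^TP$ along with symmetry and idempotence:

```python
import numpy as np

# Projection onto the line through a = (1, 1) -- an arbitrary example
a = np.array([[1.0], [1.0]])
P = a @ a.T / (a.T @ a)                 # projection onto span{a}

print(np.allclose(P, P.T @ P))          # True: P = P^T P
print(np.allclose(P, P.T))              # True: P is symmetric
print(np.allclose(P, P @ P))            # True: P is idempotent
```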

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fifth Edition and the accompanying free online course, and Dr Strang’s other books.

## Linear Algebra and Its Applications, Exercise 3.3.8

Exercise 3.3.8. Suppose that $P$ is a projection matrix from $\mathbb R^n$ onto a subspace $S$ with dimension $k$. What is the column space of $P$? What is its rank?

Answer: Suppose that $b$ is an arbitrary vector in $\mathbb R^n$. From the definition of $P$ we know that $Pb$ is a vector in $S$. But $Pb$ is a linear combination of the columns of $P$, so that $Pb$ is also in the column space $\mathcal{R}(P)$. Since any vector $w$ in $S$ can be expressed as $Pb$ for some $b$ (take $b = w$, since $Pw = w$), all vectors in $S$ are also in $\mathcal{R}(P)$, so that $S \subseteq \mathcal{R}(P)$.

Now suppose that $v$ is an arbitrary vector in the column space $\mathcal{R}(P)$. Then $v$ can be expressed as a linear combination of the columns of $P$ for some set of coefficients $a_1, a_2, \dots, a_n$. Consider the vector $w = (a_1, a_2, \dots, a_n)$. We then have $v = Pw$ by the definition of $w$. But if $v = Pw$ for some $w$ then $v$ is in $S$. So all vectors in $\mathcal{R}(P)$ are also in $S$ and thus $\mathcal{R}(P) \subseteq S$.

Since $S \subseteq \mathcal{R}(P)$ and $\mathcal{R}(P) \subseteq S$ we then have $S = \mathcal{R}(P)$: The column space of $P$ is $S$.

The rank of $P$ is the dimension of its column space. But since $S$ is the column space of $P$, the rank of $P$ is $k$, the dimension of $S$.
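
A quick numerical illustration (the example matrix is my own choice): the projection onto a two-dimensional subspace of $\mathbb R^4$ has rank 2:

```python
import numpy as np

# P projects onto the 2-dimensional subspace S of R^4 spanned by the
# columns of A (an arbitrary example); its rank equals dim S = 2.
A = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [0, 0]], dtype=float)
P = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.linalg.matrix_rank(P))   # 2
```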

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fifth Edition and the accompanying free online course, and Dr Strang’s other books.

## Linear Algebra and Its Applications, Exercise 3.3.7

Exercise 3.3.7. Given the two vectors $a_1 = (1, 0, 1)$ and $a_2 = (1, 1, -1)$ find the projection matrix $P$ that projects onto the subspace spanned by $a_1$ and $a_2$.

Answer: The subspace spanned by $a_1$ and $a_2$ is the column space $\mathcal{R}(A)$ where

$A = \begin{bmatrix} 1&1 \\ 0&1 \\ 1&-1 \end{bmatrix}$

The projection matrix onto the subspace is then $P = A(A^TA)^{-1}A^T$. We have

$A^TA = \begin{bmatrix} 1&0&1 \\ 1&1&-1 \end{bmatrix} \begin{bmatrix} 1&1 \\ 0&1 \\ 1&-1 \end{bmatrix} = \begin{bmatrix} 2&0 \\ 0&3 \end{bmatrix}$

Since $A^TA$ is a diagonal matrix we can compute its inverse by simply taking the reciprocals of the diagonal entries:

$(A^TA)^{-1} = \begin{bmatrix} \frac{1}{2}&0 \\ 0&\frac{1}{3} \end{bmatrix}$

We then have

$P = A(A^TA)^{-1}A^T = \begin{bmatrix} 1&1 \\ 0&1 \\ 1&-1 \end{bmatrix} \begin{bmatrix} \frac{1}{2}&0 \\ 0&\frac{1}{3} \end{bmatrix} \begin{bmatrix} 1&0&1 \\ 1&1&-1 \end{bmatrix}$

$= \begin{bmatrix} \frac{1}{2}&\frac{1}{3} \\ 0&\frac{1}{3} \\ \frac{1}{2}&-\frac{1}{3} \end{bmatrix} \begin{bmatrix} 1&0&1 \\ 1&1&-1 \end{bmatrix} = \begin{bmatrix} \frac{5}{6}&\frac{1}{3}&\frac{1}{6} \\ \frac{1}{3}&\frac{1}{3}&-\frac{1}{3} \\ \frac{1}{6}&-\frac{1}{3}&\frac{5}{6} \end{bmatrix}$
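
As a check (the code is my own addition), NumPy reproduces this matrix:

```python
import numpy as np

A = np.array([[1,  1],
              [0,  1],
              [1, -1]], dtype=float)
P = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.round(6 * P))   # [[ 5.  2.  1.] [ 2.  2. -2.] [ 1. -2.  5.]], i.e. 6 times P above
```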

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fifth Edition and the accompanying free online course, and Dr Strang’s other books.