Linear Algebra and Its Applications, Review Exercise 1.28

Review exercise 1.28. Compute the following matrices:

\begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix}^n \qquad \begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix}^{-1} \qquad \begin{bmatrix} 1&0&0 \\ l&1&0 \\ 0&m&1\end{bmatrix}^{-1}

Answer: For the first matrix we have

\begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix} \begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 2l&1&0 \\ 2m&0&1\end{bmatrix}

Based on this, we can guess at the answer for raising the matrix to the power of n, and try to prove it by induction. Assume that for some k \ge 2 we have

\begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix}^k = \begin{bmatrix} 1&0&0 \\ kl&1&0 \\ km&0&1\end{bmatrix}

Then for k+1 we have

\begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix}^{k+1} = \begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix}^k \begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix}

= \begin{bmatrix} 1&0&0 \\ kl&1&0 \\ km&0&1\end{bmatrix} \begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix} = \begin{bmatrix} 1&0&0 \\ (k+1)l&1&0 \\ (k+1)m&0&1\end{bmatrix}

By induction we then see that for all n \ge 2 we have

\begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix}^n = \begin{bmatrix} 1&0&0 \\ nl&1&0 \\ nm&0&1\end{bmatrix}

Note that this equation also holds true for n = 1 and n = 0, and we might guess that it holds true for n = -1 as well. We can test this as follows:

\begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix} \begin{bmatrix} 1&0&0 \\ -l&1&0 \\ -m&0&1\end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1\end{bmatrix}

and

\begin{bmatrix} 1&0&0 \\ -l&1&0 \\ -m&0&1\end{bmatrix} \begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1\end{bmatrix}

Therefore we have

\begin{bmatrix} 1&0&0 \\ l&1&0 \\ m&0&1\end{bmatrix}^{-1} = \begin{bmatrix} 1&0&0 \\ -l&1&0 \\ -m&0&1\end{bmatrix}
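
As an optional sanity check (not something the exercise asks for), we can verify both the general power formula and the inverse symbolically in Python with SymPy, keeping l and m as symbols:

```python
import sympy as sp

l, m = sp.symbols('l m')
E = sp.Matrix([[1, 0, 0],
               [l, 1, 0],
               [m, 0, 1]])

# The n-th power just multiplies l and m by n (checked here for n = 0 through 4).
for n in range(5):
    expected = sp.Matrix([[1, 0, 0],
                          [n * l, 1, 0],
                          [n * m, 0, 1]])
    assert sp.simplify(E**n - expected) == sp.zeros(3, 3)

# The inverse flips the signs of l and m (the n = -1 case of the same formula).
assert sp.simplify(E.inv() - sp.Matrix([[1, 0, 0],
                                        [-l, 1, 0],
                                        [-m, 0, 1]])) == sp.zeros(3, 3)
```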

We can find the inverse of the last matrix by Gauss-Jordan elimination. We start by multiplying the first row by l and subtracting it from the second row:

\begin{bmatrix} 1&0&0&\vline&1&0&0 \\ l&1&0&\vline&0&1&0 \\ 0&m&1&\vline&0&0&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0&\vline&1&0&0 \\ 0&1&0&\vline&-l&1&0 \\ 0&m&1&\vline&0&0&1 \end{bmatrix}

We then multiply the second row by m and subtract it from the third row:

\begin{bmatrix} 1&0&0&\vline&1&0&0 \\ 0&1&0&\vline&-l&1&0 \\ 0&m&1&\vline&0&0&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0&\vline&1&0&0 \\ 0&1&0&\vline&-l&1&0 \\ 0&0&1&\vline&lm&-m&1 \end{bmatrix}

We thus have

\begin{bmatrix} 1&0&0 \\ l&1&0 \\ 0&m&1\end{bmatrix}^{-1} = \begin{bmatrix} 1&0&0 \\ -l&1&0 \\ lm&-m&1\end{bmatrix}
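
Again as an optional check, SymPy confirms that this matrix is a two-sided inverse:

```python
import sympy as sp

l, m = sp.symbols('l m')
B = sp.Matrix([[1, 0, 0],
               [l, 1, 0],
               [0, m, 1]])
B_inv = sp.Matrix([[1, 0, 0],
                   [-l, 1, 0],
                   [l * m, -m, 1]])
assert sp.simplify(B * B_inv - sp.eye(3)) == sp.zeros(3, 3)
assert sp.simplify(B_inv * B - sp.eye(3)) == sp.zeros(3, 3)
```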

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Posted in linear algebra | Leave a comment

Linear Algebra and Its Applications, Review Exercise 1.27

Review exercise 1.27. State whether the following are true or false. If true explain why, and if false provide a counterexample.

(1) If a matrix A can be factored as A = L_1U_1 = L_2U_2 where L_1 and L_2 are lower triangular with unit diagonals and U_1 and U_2 are upper triangular, then L_1 = L_2 and U_1 = U_2. (In other words, the factorization A = LU is unique.)

(2) If for a matrix A we have A^2 + A = I then A is invertible and A^{-1} = A + I.

(3) If all the diagonal entries of a matrix A are zero then A is singular.

Answer: (1) True. Since L_1 and L_2 are lower triangular with unit diagonals, both matrices are invertible, and their inverses are also lower triangular with unit diagonals. (See the proof of this below at the end of the post.) Similarly, since U_1 and U_2 are upper triangular with nonzero diagonals (their diagonal entries are the pivots of A, which we assume to be nonzero), those matrices are invertible as well, and their inverses are also upper triangular with nonzero diagonals.

We then start with the equation L_1U_1 = L_2U_2 and multiply both sides by L_2^{-1} on the left and by U_1^{-1} on the right:

L_2^{-1}(L_1U_1)U_1^{-1} = L_2^{-1}(L_2U_2)U_1^{-1}

This equation reduces to L_2^{-1}L_1 = U_2U_1^{-1} = B where B is the product matrix. Now since B is the product of two lower triangular matrices with unit diagonal (i.e., L_2^{-1} and L_1), it itself is a lower triangular matrix with unit diagonal. Since B is the product of two upper triangular matrices (i.e., U_2 and U_1^{-1}), it is also an upper triangular matrix. Since B is both lower triangular and upper triangular it must be a diagonal matrix, and since its diagonal entries are all 1 we must have B = I.

Since L_2^{-1}L_1 = I we can multiply both sides on the left by L_2 to obtain  L_2L_2^{-1}L_1 = L_2I or L_1 = L_2. Similarly since U_2U_1^{-1} = I we can multiply both sides on the right by U_1 to obtain  U_2U_1^{-1}U_1 = IU_1 or U_2 = U_1. So the factorization A = LU is unique.

(2) True. Assume A^2 + A = I. We have I = A^2 + A = A(A + I) so that A + I is a right inverse for A. We also have I = A^2 + A = (A + I)A so that A + I is a left inverse for A. Since A+I is both a left and right inverse of A we know that A is invertible and that A^{-1} = A + I.
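
To illustrate (2) with a concrete example (a matrix of my own choosing, not one from the book): the matrix A with rows (0, 1) and (1, -1) satisfies A^2 + A = I, and a quick NumPy check confirms that A^{-1} = A + I:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, -1.0]])      # example matrix chosen so that A^2 + A = I
I = np.eye(2)
assert np.allclose(A @ A + A, I)               # A^2 + A = I
assert np.allclose(np.linalg.inv(A), A + I)    # hence A^{-1} = A + I
```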

(3) False. The matrix

A = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix}

has zeros on the diagonal but is nonsingular. In fact we have

A^{-1} = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} = A

Proof of the result used in (1) above: Assume L is a lower triangular matrix with unit diagonal. That L is invertible can be seen from Gauss-Jordan elimination (here shown in a 4 by 4 example, although the argument generalizes to all n \ge 2):

\begin{bmatrix} 1&0&0&0&\vline&1&0&0&0 \\ x&1&0&0&\vline&0&1&0&0 \\ x&x&1&0&\vline&0&0&1&0 \\ x&x&x&1&\vline&0&0&0&1 \end{bmatrix}

Note that in forward elimination each of the diagonal entries of L will remain as is, since each of these entries has only zeros above it; each of the zero entries above the diagonal will remain unchanged as well, for the same reason. When forward elimination completes the left hand matrix will be the identity matrix since it will have zeros below the diagonal (from forward elimination), ones on the diagonal (as noted above), and zeros above the diagonal (also as noted above). Gauss-Jordan elimination is thus guaranteed to complete successfully, so L is invertible.

The right-hand matrix at the end of Gauss-Jordan elimination will be the inverse of L. That matrix was produced from the identity matrix by forward elimination only, since backward elimination was not necessary. For the same reason noted above for the left-hand matrix, forward elimination will preserve the unit diagonal in the right-hand matrix and the zeros above it, with the only possible non-zero entries occurring below the unit diagonal. We thus see that if L is a lower-triangular matrix with unit diagonal then it is invertible and its inverse L^{-1} is also a lower-triangular matrix with unit diagonal.

Assume U is an upper triangular matrix with nonzero diagonal. That U is invertible can be seen from Gauss-Jordan elimination (again shown in a 4 by 4 example):

\begin{bmatrix} x&x&x&x&\vline&1&0&0&0 \\ 0&x&x&x&\vline&0&1&0&0 \\ 0&0&x&x&\vline&0&0&1&0 \\ 0&0&0&x&\vline&0&0&0&1 \end{bmatrix}

Note that forward elimination is not necessary: There are nonzero entries on the diagonal and zeros below them, so we have pivots in every column; therefore the left-hand matrix U is nonsingular and is guaranteed to have an inverse.

We can find that inverse by doing backward elimination to eliminate the entries above the diagonal in the left-hand matrix, and then dividing by the pivots. Note that since backward elimination starts with all zero entries below the diagonal in the right-hand matrix, it will not produce any nonzero entries below the diagonal in that matrix. Also, the diagonal entries in the right-hand matrix are not affected by backward elimination, for the same reason. After backward elimination completes the diagonal entries in the right-hand matrix will still be ones, and any nonzero entries produced will be above the diagonal. Dividing by the pivots in the left-hand matrix will then produce nonzero entries in the diagonal of the right-hand matrix.

The final right-hand matrix after completion of Gauss-Jordan elimination will therefore be an upper triangular matrix with nonzero diagonal entries. We thus see that if U is an upper-triangular matrix with nonzero diagonal then it is invertible and its inverse U^{-1} is also an upper-triangular matrix with nonzero diagonal.
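
As a numerical illustration of both facts (using randomly generated example matrices in NumPy, not anything from the book): the inverse of a unit lower triangular matrix is again unit lower triangular, and the inverse of an upper triangular matrix with nonzero diagonal is again upper triangular with nonzero diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit lower triangular L: its inverse is again unit lower triangular.
L = np.tril(rng.standard_normal((4, 4)), k=-1) + np.eye(4)
L_inv = np.linalg.inv(L)
assert np.allclose(np.triu(L_inv, k=1), 0)    # still zero above the diagonal
assert np.allclose(np.diag(L_inv), 1)         # still a unit diagonal

# Upper triangular U with nonzero diagonal: its inverse is again upper
# triangular with nonzero diagonal.
U = np.triu(rng.standard_normal((4, 4)), k=1) + np.diag([2.0, -3.0, 1.5, 4.0])
U_inv = np.linalg.inv(U)
assert np.allclose(np.tril(U_inv, k=-1), 0)        # still zero below the diagonal
assert np.all(np.abs(np.diag(U_inv)) > 1e-12)      # diagonal entries stay nonzero
```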

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Posted in linear algebra | Leave a comment

Linear Algebra and Its Applications, Review Exercise 1.26

Review exercise 1.26. (a) Given a 3 by 3 matrix A what vector x would make the product Ax have 1 times column 1 of A plus 2 times column 3?

(b) Construct a matrix A for which the sum of column 1 and 2 times column 3 is zero, and show that A is singular. Why?

Answer: (a) If we choose

x = \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix}

then Ax will consist of column 1 of A plus 2 times column 3. In essence if

A = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ a_{21}&a_{22}&a_{23} \\ a_{31}&a_{32}&a_{33} \end{bmatrix}

then

Ax = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ a_{21}&a_{22}&a_{23} \\ a_{31}&a_{32}&a_{33} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} = 1 \cdot \begin{bmatrix} a_{11} \\ a_{21} \\ a_{31} \end{bmatrix} + 0 \cdot \begin{bmatrix} a_{12} \\ a_{22} \\ a_{32} \end{bmatrix} + 2 \cdot \begin{bmatrix} a_{13} \\ a_{23} \\ a_{33} \end{bmatrix}
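
For a quick numerical illustration (with an arbitrary example matrix of my own choosing), NumPy confirms that Ax is 1 times column 1 plus 2 times column 3:

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)     # any 3 by 3 matrix works here
x = np.array([1.0, 0.0, 2.0])
assert np.allclose(A @ x, 1 * A[:, 0] + 2 * A[:, 2])   # column 1 plus 2 times column 3
```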

(b) The following matrix has column 1 plus 2 times column 3 equal to 0.

A = \begin{bmatrix} 2&1&-1 \\ -4&2&2 \\ 6&0&-3 \end{bmatrix}

Elimination of this matrix proceeds as follows:

\begin{bmatrix} 2&1&-1 \\ -4&2&2 \\ 6&0&-3 \end{bmatrix} \rightarrow \begin{bmatrix} 2&1&-1 \\ 0&4&0 \\ 0&-3&0 \end{bmatrix} \rightarrow \begin{bmatrix} 2&1&-1 \\ 0&4&0 \\ 0&0&0 \end{bmatrix}

Since there is no pivot in row 3 the matrix is singular. The problem is that since column 3 is a multiple of column 1 (being equal to -\frac{1}{2} times column 1) the elimination steps that produce zeros in the (2, 1) and (3, 1) positions will also produce zeros in the (2, 3) and (3, 3) positions, so that there is no possible pivot in column 3.
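
As an optional check, NumPy confirms both the column relation and the singularity of this matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [-4.0, 2.0, 2.0],
              [6.0, 0.0, -3.0]])
assert np.allclose(A[:, 0] + 2 * A[:, 2], 0)   # column 1 + 2 * column 3 = 0
assert np.isclose(np.linalg.det(A), 0)         # so A is singular
assert np.linalg.matrix_rank(A) == 2           # elimination finds only two pivots
```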

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Posted in linear algebra | Leave a comment

Linear Algebra and Its Applications, Review Exercise 1.25

Review exercise 1.25. Given the matrix A where

A = \begin{bmatrix} 1&0&0 \\ 2&1&0 \\ 0&5&1 \end{bmatrix} \begin{bmatrix} 1&2&0 \\ 0&1&5 \\ 0&0&1 \end{bmatrix}

what multiple of row 2 was subtracted from row 3 in elimination? Explain why A is invertible, symmetric, and tridiagonal. What are the pivots?

Answer: From the above we see that A has been factored into A = LL^T. Since L contains the multipliers used in elimination, we can look at l_{32} = 5 to determine that 5 times row 2 was subtracted from row 3 in elimination.

Since A can be factored into the product of a lower triangular matrix L and an upper triangular matrix L^T, both of which have unit (and hence nonzero) diagonals and are therefore invertible, we know that A is invertible. Since A = LL^T we have A^T = (LL^T)^T = (L^T)^TL^T = LL^T = A so A is also symmetric. Since L and L^T are bidiagonal, their product A is tridiagonal. Finally, since we can express A as A = LDL^T where D = I, we see that the pivots (i.e., the diagonal entries of D) are all 1.
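
As an optional numerical check of these claims:

```python
import numpy as np

L = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 5.0, 1.0]])
A = L @ L.T                                         # A = L L^T
assert np.allclose(A, A.T)                          # symmetric
assert np.allclose(A, np.triu(np.tril(A, 1), -1))   # tridiagonal: only the three central bands
assert not np.isclose(np.linalg.det(A), 0)          # invertible
print(np.diag(L.T))   # pivots, i.e. the diagonal of U = L^T: [1. 1. 1.]
```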

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Posted in linear algebra | Leave a comment

Linear Algebra and Its Applications, Review Exercise 1.24

Review exercise 1.24. The equation u + 2v - w = 6 defines a plane in 3-space. Find equations that define the following:

(a) a plane parallel to the first plane but going through the origin

(b) a second plane that (like the original plane) contains the points (6, 0, 0) and (2, 2, 0)

(c) a third plane that intersects the original plane and the one from (b) in the point (4, 1, 0)

Answer: (a) A plane passing through the origin must correspond to an equation that holds true for u = v = w = 0. The equation u + 2v - w = 0 satisfies this condition, and produces a plane parallel to the original plane.

(b) The points (6, 0, 0) and (2, 2, 0) satisfy the original equation u + 2v - w = 6 and are in the plane defined by that equation. For both these points we have w = 0. Therefore if we take the equation for the original plane and change the term involving w we can produce a new and different equation corresponding to a new plane containing these points. One such equation is u + 2v + w = 6.

(c) For the point (4, 1, 0) we have w = 0, and so as in (b) the equation we produce may contain any term involving w. We simply need to ensure that the equation is satisfied when u = 4 and v = 1. One such equation is 2u - 2v + w = 6. Since the point (4, 1, 0) satisfies this equation, the original equation, and the equation from (b) above, it lies in the intersection of all three planes.
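
As a quick check that the chosen points really lie on the stated planes (a small Python sketch, not part of the exercise):

```python
import numpy as np

planes = {
    "original": (np.array([1, 2, -1]), 6),   # u + 2v - w = 6
    "(b)":      (np.array([1, 2, 1]), 6),    # u + 2v + w = 6
    "(c)":      (np.array([2, -2, 1]), 6),   # 2u - 2v + w = 6
}

def on_plane(name, point):
    coeffs, rhs = planes[name]
    return np.isclose(coeffs @ np.array(point), rhs)

# (6, 0, 0) and (2, 2, 0) lie on the original plane and on the plane from (b).
for p in [(6, 0, 0), (2, 2, 0)]:
    assert on_plane("original", p) and on_plane("(b)", p)

# (4, 1, 0) lies on all three planes.
assert all(on_plane(name, (4, 1, 0)) for name in planes)
```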

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Posted in linear algebra | Leave a comment

Linear Algebra and Its Applications, Review Exercise 1.23

Review exercise 1.23. Evaluate the following matrix expressions for n = 2 and n = 3

\begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix}^n \qquad \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^n \qquad \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^{-1}

and then find the general expression for the first two matrices for any n \ge 1.

Answer: We have

\begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix}^2 = \begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix} \begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix} = \begin{bmatrix} 4&6\\ 0&0 \end{bmatrix}

and

\begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix}^3 = \begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix} \begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix}^2 = \begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix} \begin{bmatrix} 4&6 \\ 0&0 \end{bmatrix} = \begin{bmatrix} 8&12 \\ 0&0 \end{bmatrix}

So the general equation appears to be

\begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix}^n = \begin{bmatrix} 2^n&2^{n-1} \cdot 3 \\ 0&0 \end{bmatrix}

We can prove this by induction: Assume that the above equation holds true for some k. Then for k+1 we have

\begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix}^{k+1} = \begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix} \begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix}^k = \begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix} \begin{bmatrix} 2^k&2^{k-1} \cdot 3 \\ 0&0 \end{bmatrix}

= \begin{bmatrix} 2^{k+1}&2^k \cdot 3 \\ 0&0 \end{bmatrix} = \begin{bmatrix} 2^{k+1}&2^{(k+1)-1} \cdot 3 \\ 0&0 \end{bmatrix}

So if the equation holds true for k it holds true for k+1 as well. Also, if we define A^1 = A for any matrix A then for n = 1 we have

\begin{bmatrix} 2^1&2^0 \cdot 3 \\ 0&0 \end{bmatrix} = \begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix} = \begin{bmatrix} 2&3 \\ 0&0 \end{bmatrix}^1

so by induction the equation above holds true for all n \ge 1.

Turning to the second matrix, we have

\begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^2 = \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix} \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix} = \begin{bmatrix} 4&9 \\ 0&1 \end{bmatrix}

and

\begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^3 = \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix} \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^2 = \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix} \begin{bmatrix} 4&9 \\ 0&1 \end{bmatrix} = \begin{bmatrix} 8&21 \\ 0&1 \end{bmatrix}

The (1, 1) entry of each matrix appears to be 2^n in general, but the (1, 2) entry is more complicated. The expression 3(2^n - 1) looks as if it might work; for n = 2 this is 3 (4 - 1) = 3 \cdot 3 = 9 and for n = 3 this is 3(8 - 1) = 3 \cdot 7 = 21.

We use induction to try to prove this. Assume that for some k we have

\begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^k = \begin{bmatrix} 2^k&3(2^k - 1) \\ 0&1 \end{bmatrix}

We then have

\begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^{k+1} = \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix} \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^k = \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix} \begin{bmatrix} 2^k&3(2^k - 1) \\ 0&1 \end{bmatrix}

= \begin{bmatrix} 2 \cdot 2^k&2 \cdot 3(2^k - 1) + 3\\ 0&1 \end{bmatrix} = \begin{bmatrix} 2^{k+1}&3(2 \cdot 2^k - 2) + 3\\ 0&1 \end{bmatrix}

= \begin{bmatrix} 2^{k+1}&3(2^{k+1} - 2 +1) \\ 0&1 \end{bmatrix} = \begin{bmatrix} 2^{k+1}&3(2^{k+1} - 1) \\ 0&1 \end{bmatrix}

So if the equation holds true for k it holds true for k+1 as well. Also, for k = 1 we have

\begin{bmatrix} 2^1&3(2^1 - 1) \\ 0&1 \end{bmatrix} = \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix} = \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^1

so the equation above holds true for all n \ge 1.

Finally for the third matrix we have

\begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^{-1} = \frac{1}{2 \cdot 1 - 3 \cdot 0} \begin{bmatrix} 1&-3 \\ 0&2 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1&-3 \\ 0&2 \end{bmatrix} = \begin{bmatrix} \frac{1}{2}&-\frac{3}{2} \\ 0&1 \end{bmatrix}

Note that

\begin{bmatrix} 2^{-1}&3(2^{-1} - 1) \\ 0&1 \end{bmatrix} = \begin{bmatrix} \frac{1}{2}&3(\frac{1}{2} - 1) \\ 0&1 \end{bmatrix} = \begin{bmatrix} \frac{1}{2}&-\frac{3}{2} \\ 0&1 \end{bmatrix} = \begin{bmatrix} 2&3 \\ 0&1 \end{bmatrix}^{-1}

so that the equation above holds true for n = -1 also.
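
As an optional numerical check of both closed-form expressions and the inverse (using NumPy's matrix_power):

```python
import numpy as np
from numpy.linalg import matrix_power, inv

A = np.array([[2.0, 3.0], [0.0, 0.0]])
B = np.array([[2.0, 3.0], [0.0, 1.0]])

for n in range(1, 8):
    assert np.allclose(matrix_power(A, n), [[2**n, 2**(n - 1) * 3], [0, 0]])
    assert np.allclose(matrix_power(B, n), [[2**n, 3 * (2**n - 1)], [0, 1]])

# The inverse of B, which also matches the second formula with n = -1.
assert np.allclose(inv(B), [[0.5, -1.5], [0.0, 1.0]])
assert np.allclose(inv(B), [[2.0**-1, 3 * (2.0**-1 - 1)], [0, 1]])
```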

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Posted in linear algebra | Leave a comment

Linear Algebra and Its Applications, Review Exercise 1.22

Review exercise 1.22. Answer the following questions:

(a) If A has an inverse, does A^T also have an inverse? If so, what is it?

(b) If A is both invertible and symmetric, what is the transpose of A^{-1}?

(c) Illustrate (a) and (b) when A = \begin{bmatrix} 2&1 \\ 1&1 \end{bmatrix}.

Answer: (a) Assume A is invertible, so that A^{-1} exists. Then we have AA^{-1} = I and therefore (AA^{-1})^T = I^T = I. But (AA^{-1})^T = (A^{-1})^TA^T so we have (A^{-1})^TA^T = I and (A^{-1})^T is a left inverse for A^T.

Similarly we have A^{-1}A = I and therefore (A^{-1}A)^T = I^T = I. But (A^{-1}A)^T = A^T(A^{-1})^T so we have A^T(A^{-1})^T = I and (A^{-1})^T is a right inverse for A^T.

Since (A^{-1})^T is both a left inverse and a right inverse for A^T we see that A^T is invertible and (A^T)^{-1} = (A^{-1})^T.

(b) If A is symmetric then we have A = A^T. If A is also invertible then A^{-1} exists and from (a) above we know that (A^T)^{-1} = (A^{-1})^T. We then have (A^{-1})^T = (A^T)^{-1} = A^{-1}. Since (A^{-1})^T = A^{-1} we see that A^{-1} is also symmetric if A is.

(c) If we have

A = \begin{bmatrix} 2&1 \\ 1&1 \end{bmatrix} = A^T

then

A^{-1} = \frac{1}{2 \cdot 1 - 1 \cdot 1} \begin{bmatrix} 1&-1 \\ -1&2 \end{bmatrix} = \begin{bmatrix} 1&-1 \\ -1&2 \end{bmatrix} = (A^{-1})^T

Note that A^{-1} is symmetric just as A is.
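
A short NumPy check of (a) and (b) on this example:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
A_inv = np.linalg.inv(A)
assert np.allclose(np.linalg.inv(A.T), A_inv.T)   # (A^T)^{-1} = (A^{-1})^T
assert np.allclose(A_inv, A_inv.T)                # A^{-1} is symmetric, like A
assert np.allclose(A_inv, [[1, -1], [-1, 2]])     # the inverse computed above
```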

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Posted in linear algebra | 1 Comment

Linear Algebra and Its Applications, Review Exercise 1.21

Review exercise 1.21. Given the 2 by 2 matrix

D = \begin{bmatrix} 2&0 \\ 0&5 \end{bmatrix}

describe the rows of DA and the columns of AD.

Answer: When multiplying A from the left by D to produce DA the first row of DA will be 2 times the first row of A and the second row of DA will be 5 times the second row of A. For example

\begin{bmatrix} 2&0 \\ 0&5 \end{bmatrix} \begin{bmatrix} 1&2 \\ 3&4 \end{bmatrix} = \begin{bmatrix} 2&4 \\ 15&20 \end{bmatrix}

When multiplying A from the right by D to produce AD the first column of AD will be 2 times the first column of A and the second column of AD will be 5 times the second column of A. For example

\begin{bmatrix} 1&2 \\ 3&4 \end{bmatrix} \begin{bmatrix} 2&0 \\ 0&5 \end{bmatrix} = \begin{bmatrix} 2&10 \\ 6&20 \end{bmatrix}
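
A small NumPy sketch confirming both examples:

```python
import numpy as np

D = np.diag([2.0, 5.0])
A = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose(D @ A, [[2, 4], [15, 20]])    # DA: rows of A scaled by 2 and 5
assert np.allclose(A @ D, [[2, 10], [6, 20]])    # AD: columns of A scaled by 2 and 5
```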

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Posted in linear algebra | 1 Comment

Linear Algebra and Its Applications, Review Exercise 1.20

Review exercise 1.20. The set of n by n permutation matrices constitutes a group.

(a) How many 4 by 4 permutation matrices are there? How many n by n permutation matrices are there?

(b) For the group of 3 by 3 permutation matrices, what does k have to be in order for P^k = I for all matrices P in the group?

Answer: (a) A permutation matrix has a single 1 entry in each row (with all the other entries in that row being zero) and each row must have that single 1 entry in a different column.

If we construct a 4 by 4 permutation matrix, we have 4 choices for the column in which the 1 entry can be in the first row. Having chosen that column, we have only 3 choices for the column having the 1 entry in the second row (because we cannot choose the same column as in the first row). Having chosen the columns in which the 1 entry is placed in rows 1 and 2, we then have only 2 choices for the column in row 3, and then only 1 choice for the column in row 4 (all the other columns already having been used). The number of possible 4 by 4 permutation matrices is therefore 4 \cdot 3 \cdot 2 \cdot 1 = 24.

Similarly, for the group of n by n permutation matrices we have n choices for where to put the 1 entry in row 1, n-1 choices for where to put the 1 entry in row 2, n-2 choices for where to put the 1 entry in row 3, and so on until we have only 1 choice for where to put the 1 entry in row n. The number of possible n by n permutation matrices is therefore n \cdot (n-1) \cdot (n-2) \cdots 3 \cdot 2 \cdot 1 = n!.
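
As an optional check, we can count the permutation matrices by brute force in Python, building each one by reordering the rows of the identity matrix:

```python
from itertools import permutations
from math import factorial
import numpy as np

def perm_matrices(n):
    """All n by n permutation matrices, one per ordering of the rows of I."""
    return [np.eye(n)[list(p)] for p in permutations(range(n))]

assert len(perm_matrices(4)) == 24 == factorial(4)
assert len(perm_matrices(3)) == 6 == factorial(3)
```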

(b) There are 3! or six 3 by 3 permutation matrices, of which one is the identity matrix I for which I^k = I for any k \ge 1.

Of the remaining five 3 by 3 permutation matrices, three do simple row exchanges:

  • exchange row 1 and row 2
  • exchange row 1 and row 3
  • exchange row 2 and row 3

For each of these matrices repeating the operation reverses the effect of the exchange, so we have P^2 = I. We then also have P^4 = P^2P^2 = I \cdot I = I and in general P^k = I for any even k.

The remaining two 3 by 3 permutation matrices do forward and reverse shifts of rows:

  • move row 1 to row 2, row 2 to row 3, and row 3 to row 1
  • move row 1 to row 3, row 2 to row 1, and row 3 to row 2

Applying each of these permutation matrices three times shifts all rows back into their original places, so we have P^3 = I. We then have P^6 = P^3P^3 = I \cdot I = I, and in general P^k = I for any k that is a multiple of 3.

Combining the results above, we see that k = 6 is the smallest k for which P^k = I for all 3 by 3 permutation matrices P.
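
As an optional brute-force check (using the same row-reordering construction as in the sketch for part (a)), NumPy confirms that k = 6 works for every 3 by 3 permutation matrix and that no smaller k does:

```python
from itertools import permutations
import numpy as np
from numpy.linalg import matrix_power

perms = [np.eye(3)[list(p)] for p in permutations(range(3))]   # all six 3 by 3 P's
I = np.eye(3)

# k = 6 gives P^6 = I for every P ...
assert all(np.allclose(matrix_power(P, 6), I) for P in perms)
# ... and no smaller k works for all of them at once.
for k in range(1, 6):
    assert not all(np.allclose(matrix_power(P, k), I) for P in perms)
```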

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Posted in linear algebra | 1 Comment

Linear Algebra and Its Applications, Review Exercise 1.19

Review exercise 1.19. Solve the following systems of equations using elimination and back substitution:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&+&v&+&w&=&0 \\ u&+&2v&+&3w&=&0 \\ 3u&+&5v&+&7w&=&1 \end{array}    and    \setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} u&+&v&+&w&=&0 \\ u&+&v&+&3w&=&0 \\ 3u&+&5v&+&7w&=&1 \end{array}

Answer: We start with the system

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&+&v&+&w&=&0 \\ u&+&2v&+&3w&=&0 \\ 3u&+&5v&+&7w&=&1 \end{array}

and subtract 1 times the first equation from the second and 3 times the first equation from the third to obtain the following system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&+&v&+&w&=&0 \\ &&v&+&2w&=&0 \\ &&2v&+&4w&=&1 \end{array}

We can then subtract 2 times the second equation from the third to produce the following system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&+&v&+&w&=&0 \\ &&v&+&2w&=&0 \\ &&&&0&=&1 \end{array}

We have reached a contradiction, so this system has no solution.

We next look at the system

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} u&+&v&+&w&=&0 \\ u&+&v&+&3w&=&0 \\ 3u&+&5v&+&7w&=&1 \end{array}

We subtract 1 times the first equation from the second and 3 times the first equation from the third to obtain the following system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} u&+&v&+&w&=&0 \\ &&&&2w&=&0 \\ &&2v&+&4w&=&1 \end{array}

From the second equation we have w = 0. Substituting w into the third equation yields v = \frac{1}{2}. Substituting v and w into the first equation yields u = -v - w = -\frac{1}{2}. The solution is therefore

\setlength\arraycolsep{0.2em}\begin{array}{rcr}u&=&-\frac{1}{2} \\ v&=&\frac{1}{2} \\ w&=&0 \end{array}
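
As an optional NumPy check: the first coefficient matrix is singular with the right-hand side outside its column space (so there is no solution), while the second system has exactly the solution found above.

```python
import numpy as np

# First system: singular coefficient matrix, inconsistent right-hand side.
A1 = np.array([[1.0, 1.0, 1.0],
               [1.0, 2.0, 3.0],
               [3.0, 5.0, 7.0]])
b1 = np.array([0.0, 0.0, 1.0])
assert np.linalg.matrix_rank(A1) == 2
assert np.linalg.matrix_rank(np.column_stack([A1, b1])) == 3   # no solution

# Second system: nonsingular, so it has a unique solution.
A2 = np.array([[1.0, 1.0, 1.0],
               [1.0, 1.0, 3.0],
               [3.0, 5.0, 7.0]])
b2 = np.array([0.0, 0.0, 1.0])
assert np.allclose(np.linalg.solve(A2, b2), [-0.5, 0.5, 0.0])
```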

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Posted in linear algebra | Leave a comment