Linear Algebra and Its Applications, Exercise 1.6.10

Exercise 1.6.10. Determine the inverses of the following matrices:

A_1 = \begin{bmatrix} 0&0&0&1 \\ 0&0&2&0 \\ 0&3&0&0 \\ 4&0&0&0 \end{bmatrix} \quad A_2 = \begin{bmatrix} 1&0&0&0 \\ -\frac{1}{2}&1&0&0 \\ 0&-\frac{2}{3}&1&0 \\ 0&0&-\frac{3}{4}&1 \end{bmatrix} \quad A_3 = \begin{bmatrix} a&b&0&0 \\ c&d&0&0 \\ 0&0&a&b \\ 0&0&c&d \end{bmatrix}

Answer: The first matrix has the form of a diagonal matrix flipped horizontally, so it is worth checking whether its inverse has a form analogous to the inverse of a diagonal matrix. By analogy the inverse should also be a reverse-diagonal matrix. A bit of trial and error shows that each entry on the reverse diagonal should be not the reciprocal of the corresponding entry of the original matrix, but rather the reciprocal of the corresponding entry of the original matrix’s transpose.

Forming this matrix B we have

A_1B = \begin{bmatrix} 0&0&0&1 \\ 0&0&2&0 \\ 0&3&0&0 \\ 4&0&0&0 \end{bmatrix} \begin{bmatrix} 0&0&0&\frac{1}{4} \\ 0&0&\frac{1}{3}&0 \\ 0&\frac{1}{2}&0&0 \\ 1&0&0&0 \end{bmatrix} = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} = I

and

BA_1 = \begin{bmatrix} 0&0&0&\frac{1}{4} \\ 0&0&\frac{1}{3}&0 \\ 0&\frac{1}{2}&0&0 \\ 1&0&0&0 \end{bmatrix} \begin{bmatrix} 0&0&0&1 \\ 0&0&2&0 \\ 0&3&0&0 \\ 4&0&0&0 \end{bmatrix} = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} = I

so that

A_1^{-1} = B = \begin{bmatrix} 0&0&0&\frac{1}{4} \\  0&0&\frac{1}{3}&0 \\ 0&\frac{1}{2}&0&0 \\  1&0&0&0 \end{bmatrix}
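
For readers who want to double-check this numerically, here is a quick NumPy sketch (not part of the exercise) confirming that B inverts A_1 on both sides:

import numpy as np

A1 = np.array([[0, 0, 0, 1],
               [0, 0, 2, 0],
               [0, 3, 0, 0],
               [4, 0, 0, 0]], dtype=float)

# reverse-diagonal matrix of reciprocals, following the pattern described above
B = np.array([[0, 0, 0, 1/4],
              [0, 0, 1/3, 0],
              [0, 1/2, 0, 0],
              [1, 0, 0, 0]])

print(np.allclose(A1 @ B, np.eye(4)))   # True
print(np.allclose(B @ A1, np.eye(4)))   # True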

For the second matrix we use Gauss-Jordan elimination:

\begin{bmatrix} 1&0&0&0&1&0&0&0 \\ -\frac{1}{2}&1&0&0&0&1&0&0 \\  0&-\frac{2}{3}&1&0&0&0&1&0 \\ 0&0&-\frac{3}{4}&1&0&0&0&1  \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&0&1&0&0&0 \\ 0&1&0&0&\frac{1}{2}&1&0&0 \\   0&-\frac{2}{3}&1&0&0&0&1&0 \\  0&0&-\frac{3}{4}&1&0&0&0&1  \end{bmatrix}

\rightarrow \begin{bmatrix} 1&0&0&0&1&0&0&0 \\ 0&1&0&0&\frac{1}{2}&1&0&0  \\   0&0&1&0&\frac{1}{3}&\frac{2}{3}&1&0 \\   0&0&-\frac{3}{4}&1&0&0&0&1  \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&0&1&0&0&0 \\  0&1&0&0&\frac{1}{2}&1&0&0  \\    0&0&1&0&\frac{1}{3}&\frac{2}{3}&1&0 \\   0&0&0&1&\frac{1}{4}&\frac{1}{2}&\frac{3}{4}&1  \end{bmatrix}

At this point the left half of the augmented matrix is already the identity, so forward elimination is complete and no reverse elimination or scaling of pivots is needed. We therefore have

A_2^{-1} = \begin{bmatrix} 1&0&0&0 \\ \frac{1}{2}&1&0&0  \\ \frac{1}{3}&\frac{2}{3}&1&0 \\ \frac{1}{4}&\frac{1}{2}&\frac{3}{4}&1   \end{bmatrix}
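
If you want to reproduce the elimination numerically, here is a minimal Gauss-Jordan sketch in NumPy (it does no row exchanges, which this particular matrix does not need; the array entries are those of A_2 from the exercise):

import numpy as np

A2 = np.array([[1, 0, 0, 0],
               [-1/2, 1, 0, 0],
               [0, -2/3, 1, 0],
               [0, 0, -3/4, 1]])

aug = np.hstack([A2, np.eye(4)])   # augment [A_2 | I]
for i in range(4):
    aug[i] /= aug[i, i]                      # scale the pivot row
    for j in range(4):
        if j != i:
            aug[j] -= aug[j, i] * aug[i]     # clear column i in every other row

print(aug[:, 4:])   # the right block is now A_2^{-1}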

The third 4 by 4 matrix is composed of two 2 by 2 nonzero submatrices (in the upper left and lower right) and two 2 by 2 zero submatrices (in the upper right and lower left). Consider a second 4 by 4 matrix B also containing two 2 by 2 nonzero submatrices and two 2 by 2 zero submatrices, in the same positions as the original matrix:

B = \begin{bmatrix} b_{11}&b_{12}&0&0 \\ b_{21}&b_{22}&0&0 \\ 0&0&b_{33}&b_{34} \\ 0&0&b_{43}&b_{44} \end{bmatrix}

In order for

A_3B = \begin{bmatrix} a&b&0&0 \\ c&d&0&0 \\ 0&0&a&b \\ 0&0&c&d \end{bmatrix} \begin{bmatrix} b_{11}&b_{12}&0&0 \\  b_{21}&b_{22}&0&0 \\ 0&0&b_{33}&b_{34} \\  0&0&b_{43}&b_{44} \end{bmatrix} = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}

we must have

\begin{bmatrix} a&b \\ c&d \end{bmatrix} \begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix} = \begin{bmatrix} a&b \\ c&d \end{bmatrix} \begin{bmatrix} b_{33}&b_{34} \\ b_{43}&b_{44} \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}

Each 2 by 2 diagonal block of B must therefore be the inverse of \begin{bmatrix} a&b \\ c&d \end{bmatrix}, and applying the standard formula for the inverse of a 2 by 2 matrix gives

B = 1/(ad - bc) \begin{bmatrix} d&-b&0&0 \\ -c&a&0&0 \\ 0&0&d&-b \\ 0&0&-c&a \end{bmatrix}

assuming that (ad - bc) is nonzero. We then have

A_3B = 1/(ad - bc) \begin{bmatrix} a&b&0&0 \\ c&d&0&0 \\ 0&0&a&b \\ 0&0&c&d \end{bmatrix} \begin{bmatrix} d&-b&0&0 \\  -c&a&0&0 \\ 0&0&d&-b \\ 0&0&-c&a  \end{bmatrix}

= 1/(ad - bc) \begin{bmatrix} ad - bc&-ab+ab&0&0 \\  cd-cd&ad-bc&0&0 \\ 0&0&ad-bc&-ab+ab \\ 0&0&cd-cd&ad-bc  \end{bmatrix} = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}

and

BA_3 = 1/(ad - bc) \begin{bmatrix} d&-b&0&0 \\   -c&a&0&0 \\ 0&0&d&-b \\ 0&0&-c&a   \end{bmatrix} \begin{bmatrix} a&b&0&0 \\  c&d&0&0 \\ 0&0&a&b \\ 0&0&c&d  \end{bmatrix}

= 1/(ad - bc) \begin{bmatrix} ad - bc&-ac+ac&0&0 \\ -bd+bd&ad-bc&0&0 \\ 0&0&ad-bc&bd-bd \\  0&0&-ac+ac&ad-bc  \end{bmatrix} = \begin{bmatrix}  1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\  0&0&0&1 \end{bmatrix}

so that

A_3^{-1} = B = 1/(ad - bc) \begin{bmatrix} d&-b&0&0 \\    -c&a&0&0 \\ 0&0&d&-b \\ 0&0&-c&a    \end{bmatrix}
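
As a numerical spot check (the values of a, b, c, d below are arbitrary choices with ad - bc nonzero; the exercise itself keeps them symbolic):

import numpy as np

a, b, c, d = 2.0, 3.0, 1.0, 4.0   # arbitrary test values, ad - bc = 5
A3 = np.array([[a, b, 0, 0],
               [c, d, 0, 0],
               [0, 0, a, b],
               [0, 0, c, d]])
B = np.array([[d, -b, 0, 0],
              [-c, a, 0, 0],
              [0, 0, d, -b],
              [0, 0, -c, a]]) / (a*d - b*c)

print(np.allclose(A3 @ B, np.eye(4)))   # True
print(np.allclose(B @ A3, np.eye(4)))   # True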

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 1.6.9

Exercise 1.6.9. Given the singular matrix

A = \begin{bmatrix} 2&1&4&6 \\ 0&3&8&5 \\ 0&0&0&7 \\ 0&0&0&9 \end{bmatrix}

show that A has no inverse. If it did have an inverse then multiplying the third row of A^{-1} by the columns of A should give the third row of I. Explain why this is not possible.

Answer: For the 4 by 4 case the third row of I is

\begin{bmatrix} 0&0&1&0 \end{bmatrix}

A zero value for the first entry in this row would be produced by multiplying the third row of A^{-1} by the first column of A. The last three entries in the first column of A are zero and thus would produce zero terms in the product. Since the first entry of the first column of A is nonzero, in order for the first entry of the product to be zero the first entry of the third row of A^{-1} must be zero.

A zero value for the second entry in the third row of I would be produced by multiplying the third row of A^{-1} by the second column of A. The last two entries in the second column of A are zero, and thus would produce zero terms in the product. Similarly the first entry of the third row of A^{-1} is zero (from the previous paragraph) and would also produce a zero term in the product when multiplying the first entry of the second column of A. Since the first term and the last two terms are zero, and the second entry in the second column of A is nonzero, in order for the product to be zero the second entry of the third row of A^{-1} must also be zero.

A value of 1 for the third entry in the third row of I would be produced by multiplying the third row of A^{-1} by the third column of A. The last two entries in the third column of A are zero, and thus would produce zero terms in the product. Similarly the first and second entries of the third row of A^{-1} are zero (from the previous two paragraphs) and would also produce zero terms in the product when multiplying the first and second entries of the third column of A.

But this means that the product of the third row of A^{-1} and the third column of A must be zero, which means that the product of A^{-1} and A cannot be I. We conclude therefore that A^{-1} does not exist and that A is not invertible.
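
A quick numerical check (not part of the exercise) agrees: the determinant of A is zero and its rank is only 3, so no inverse exists:

import numpy as np

A = np.array([[2, 1, 4, 6],
              [0, 3, 8, 5],
              [0, 0, 0, 7],
              [0, 0, 0, 9]], dtype=float)

print(np.linalg.det(A))          # 0.0, so A is singular
print(np.linalg.matrix_rank(A))  # 3 (the last two rows are dependent)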

Linear Algebra and Its Applications, Exercise 1.6.8

Exercise 1.6.8. The matrix

A = \begin{bmatrix} 1&1 \\ 3&3 \end{bmatrix}

has no inverse. Demonstrate this by trying to solve the following:

\begin{bmatrix} 1&1 \\ 3&3 \end{bmatrix} \begin{bmatrix} a&b \\ c&d \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}

Answer: Multiplying the first row of the first matrix by the first column of the second matrix gives

a + c = 1

However multiplying the second row of the first matrix by the first column of the second matrix gives us

3a + 3c = 0 \rightarrow a + c = 0

We thus have a contradiction, and conclude that the matrix is not invertible.
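
The same conclusion can be reached numerically; the determinant is zero, and asking NumPy for the inverse raises an error (a sketch, not part of the exercise):

import numpy as np

A = np.array([[1.0, 1.0],
              [3.0, 3.0]])

print(np.linalg.det(A))   # 0.0
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print(err)            # "Singular matrix"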

Linear Algebra and Its Applications, Exercise 1.6.7

Exercise 1.6.7. Find three 2 by 2 matrices A such that

A^2 = I

and A is neither I nor -I.

Answer: We first note that the permutation matrix obtained by exchanging the two rows of I is its own inverse:

\begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}

Note that this also follows from the result of exercise 1.6.2 that

PP^T = I

where P is a permutation matrix, since the 2 by 2 matrix above is a permutation of I and is equal to its own transpose.

Using trial and error we can find a second such matrix:

\begin{bmatrix} 2&-1 \\ 3&-2 \end{bmatrix} \begin{bmatrix} 2&-1 \\ 3&-2 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}

and a third:

\begin{bmatrix} \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}} \end{bmatrix}= \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}
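
All three candidates can be checked at once with a few lines of NumPy (a sanity check, not part of the exercise):

import numpy as np

candidates = [
    np.array([[0, 1], [1, 0]]),                # the row-exchange permutation matrix
    np.array([[2, -1], [3, -2]]),              # found by trial and error
    np.array([[1, 1], [1, -1]]) / np.sqrt(2),  # the third matrix above
]
for A in candidates:
    print(np.allclose(A @ A, np.eye(2)))       # True for each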

Linear Algebra and Its Applications, Exercise 1.6.6

Exercise 1.6.6. Invert the following matrices using the Gauss-Jordan method:

A_1 = \begin{bmatrix} 1&0&0 \\ 1&1&1 \\ 0&0&1 \end{bmatrix} \quad A_2 = \begin{bmatrix} 2&-1&0 \\ -1&2&-1 \\ 0&-1&2 \end{bmatrix} \quad A_3 = \begin{bmatrix} 0&0&1 \\ 0&1&1 \\ 1&1&1 \end{bmatrix}

Answer: For the first matrix Gauss-Jordan elimination proceeds as follows: We first subtract 1 times the first row from the second row:

\begin{bmatrix} 1&0&0&1&0&0 \\ 1&1&1&0&1&0 \\ 0&0&1&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&1&0&0 \\ 0&1&1&-1&1&0 \\ 0&0&1&0&0&1 \end{bmatrix}

This completes the process of forward elimination. We then start reverse elimination by subtracting 1 times the third row from the second row:

\begin{bmatrix} 1&0&0&1&0&0 \\ 0&1&1&-1&1&0 \\ 0&0&1&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&1&0&0 \\ 0&1&0&-1&1&-1 \\ 0&0&1&0&0&1 \end{bmatrix}

This completes reverse elimination. Since the left matrix is now the identity matrix we are done, and we have

A_1^{-1} = \begin{bmatrix} 1&0&0 \\ -1&1&-1 \\ 0&0&1 \end{bmatrix}

For the second matrix we begin forward elimination by subtracting -1/2 times the first row from the second row:

\begin{bmatrix} 2&-1&0&1&0&0 \\ -1&2&-1&0&1&0 \\ 0&-1&2&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 2&-1&0&1&0&0 \\ 0&\frac{3}{2}&-1&\frac{1}{2}&1&0 \\ 0&-1&2&0&0&1 \end{bmatrix}

and then multiply the second row by -2/3 and subtract it from the third row:

\begin{bmatrix} 2&-1&0&1&0&0 \\ 0&\frac{3}{2}&-1&\frac{1}{2}&1&0 \\ 0&-1&2&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 2&-1&0&1&0&0 \\ 0&\frac{3}{2}&-1&\frac{1}{2}&1&0 \\ 0&0&\frac{4}{3}&\frac{1}{3}&\frac{2}{3}&1 \end{bmatrix}

This completes forward elimination, so we begin reverse elimination by multiplying the third row by -3/4 and subtracting it from the second row:

\begin{bmatrix} 2&-1&0&1&0&0 \\ 0&\frac{3}{2}&-1&\frac{1}{2}&1&0 \\ 0&0&\frac{4}{3}&\frac{1}{3}&\frac{2}{3}&1 \end{bmatrix} \rightarrow \begin{bmatrix} 2&-1&0&1&0&0 \\ 0&\frac{3}{2}&0&\frac{3}{4}&\frac{3}{2}&\frac{3}{4} \\ 0&0&\frac{4}{3}&\frac{1}{3}&\frac{2}{3}&1 \end{bmatrix}

We continue by multiplying the second row by -2/3 and subtracting it from the first row:

\begin{bmatrix} 2&-1&0&1&0&0 \\ 0&\frac{3}{2}&0&\frac{3}{4}&\frac{3}{2}&\frac{3}{4} \\ 0&0&\frac{4}{3}&\frac{1}{3}&\frac{2}{3}&1 \end{bmatrix} \rightarrow \begin{bmatrix} 2&0&0&\frac{3}{2}&1&\frac{1}{2} \\ 0&\frac{3}{2}&0&\frac{3}{4}&\frac{3}{2}&\frac{3}{4} \\ 0&0&\frac{4}{3}&\frac{1}{3}&\frac{2}{3}&1 \end{bmatrix}

In the final step we multiply the first row by 1/2, the second row by 2/3, and the third row by 3/4:

\begin{bmatrix} 2&0&0&\frac{3}{2}&1&\frac{1}{2} \\ 0&\frac{3}{2}&0&\frac{3}{4}&\frac{3}{2}&\frac{3}{4} \\ 0&0&\frac{4}{3}&\frac{1}{3}&\frac{2}{3}&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&\frac{3}{4}&\frac{1}{2}&\frac{1}{4} \\ 0&1&0&\frac{1}{2}&1&\frac{1}{2} \\ 0&0&1&\frac{1}{4}&\frac{1}{2}&\frac{3}{4} \end{bmatrix}

to obtain

A_2^{-1} = \begin{bmatrix} \frac{3}{4}&\frac{1}{2}&\frac{1}{4} \\ \frac{1}{2}&1&\frac{1}{2} \\ \frac{1}{4}&\frac{1}{2}&\frac{3}{4} \end{bmatrix}

For the third matrix we must first do a row exchange to get a nonzero pivot in the first row. We exchange the first row with the third:

\begin{bmatrix} 0&0&1&1&0&0 \\ 0&1&1&0&1&0 \\ 1&1&1&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&1&1&0&0&1 \\ 0&1&1&0&1&0 \\ 0&0&1&1&0&0\end{bmatrix}

This completes forward elimination, so we begin reverse elimination by subtracting 1 times the third row from the second row and 1 times the third row from the first row:

\begin{bmatrix} 1&1&1&0&0&1 \\ 0&1&1&0&1&0 \\ 0&0&1&1&0&0\end{bmatrix} \rightarrow \begin{bmatrix} 1&1&0&-1&0&1 \\ 0&1&0&-1&1&0 \\ 0&0&1&1&0&0\end{bmatrix}

Finally we subtract 1 times the second row from the first row:

\begin{bmatrix} 1&1&0&-1&0&1 \\ 0&1&0&-1&1&0 \\ 0&0&1&1&0&0\end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&0&-1&1 \\ 0&1&0&-1&1&0 \\ 0&0&1&1&0&0\end{bmatrix}

This completes elimination and we have

A_3^{-1} = \begin{bmatrix} 0&-1&1 \\ -1&1&0 \\ 1&0&0\end{bmatrix}
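
The three inverses can be verified numerically in a few lines (a sanity check, not part of the exercise):

import numpy as np

pairs = [
    (np.array([[1, 0, 0], [1, 1, 1], [0, 0, 1]]),
     np.array([[1, 0, 0], [-1, 1, -1], [0, 0, 1]])),
    (np.array([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]),
     np.array([[3, 2, 1], [2, 4, 2], [1, 2, 3]]) / 4),
    (np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]]),
     np.array([[0, -1, 1], [-1, 1, 0], [1, 0, 0]])),
]
for A, A_inv in pairs:
    print(np.allclose(A @ A_inv, np.eye(3)))   # True for all three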

Linear Algebra and Its Applications, Exercise 1.6.5

Exercise 1.6.5. For a matrix A assume that A^2 is invertible and has inverse B. Prove that A is also invertible, with inverse AB.

Answer: We have

(A^2)^{-1} = B \rightarrow A^2= B^{-1}

We then have

A(AB) = A^2B = B^{-1}B = I

We also have

(AB)A = BB^{-1}(AB)A = BA^2(AB)A = BA(A^2)BA

= BAB^{-1}BA = BAIA = BA^2 = BB^{-1} = I

So AB is both a left and right inverse of A, and thus

A^{-1} = AB
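
A numerical illustration of the result (a sketch using a random matrix, not part of the exercise): pick a random A, form B as the inverse of A^2, and check that AB inverts A from both sides.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))    # a random A; A^2 is almost surely invertible
B = np.linalg.inv(A @ A)           # B = (A^2)^{-1}

print(np.allclose(A @ (A @ B), np.eye(4)))   # A(AB) = I
print(np.allclose((A @ B) @ A, np.eye(4)))   # (AB)A = I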

Linear Algebra and Its Applications, Exercise 1.6.4

Exercise 1.6.4. (a) Given AB = AC, show that B = C if A is invertible.

(b) Given

A = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix}

find B and C such that AB = AC but B \ne C.

Answer: (a) If A is invertible then

AB = AC \rightarrow A^{-1}AB = A^{-1}AC \rightarrow B = C

(b) If B is the identity matrix I then we have AB = AI = A. If C is the matrix

C = \begin{bmatrix} 1&0 \\ 1&0 \end{bmatrix}

then we have

AC = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} \begin{bmatrix} 1&0 \\ 1&0 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} = A = AB

So AB = AC even though B and C are not equal.
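
Part (b) is easy to confirm numerically (a sketch, not part of the exercise):

import numpy as np

A = np.array([[1, 0], [0, 0]])
B = np.eye(2)
C = np.array([[1, 0], [1, 0]])

print(np.array_equal(A @ B, A @ C))   # True: AB = AC
print(np.array_equal(B, C))           # False: yet B and C differ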

Linear Algebra and Its Applications, Exercise 1.6.3

Exercise 1.6.3. Given AB = C, express A^{-1} in terms of B and C. Similarly, given PA = LU, express A^{-1} in terms of P, L, and U.

Answer: Assume that both B and C are invertible (see below). We then have

AB = C \rightarrow (AB)C^{-1} = CC^{-1} \rightarrow A(BC^{-1}) = I

and

AB = C \rightarrow (BC^{-1})(AB)B^{-1} = (BC^{-1})CB^{-1}

\rightarrow (BC^{-1})A(BB^{-1}) =  B(C^{-1}C)B^{-1} \rightarrow (BC^{-1})A = BB^{-1} = I

so that

A^{-1} = BC^{-1}

(Note that we need to assume that B is invertible, even though we do not use B^{-1} in the formula for A^{-1}. In the absence of this assumption we can at best prove that

(BC^{-1})AB = B

This equation would always be true if B were zero, in which case we couldn’t draw any conclusions about whether or not BC^{-1} was a left inverse for A.)

Assume L and U are invertible. (If P is a permutation matrix then we already know P is invertible from exercise 1.6.2.) We then have

PA = LU \rightarrow U^{-1}L^{-1}PA = U^{-1}L^{-1}LU = U^{-1}U = I

and

PA = LU \rightarrow PAU^{-1}L^{-1}P = LUU^{-1}L^{-1}P = LL^{-1}P = P

\rightarrow P^{-1}PAU^{-1}L^{-1}P = P^{-1}P \rightarrow AU^{-1}L^{-1}P = I

Since

(U^{-1}L^{-1}P)A = A(U^{-1}L^{-1}P) = I

we have

A^{-1} = U^{-1}L^{-1}P
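
Both formulas can be spot-checked numerically with randomly generated matrices (a sketch, not part of the exercise; the matrices below are arbitrary choices constructed so that AB = C and PA = LU hold by design):

import numpy as np

rng = np.random.default_rng(1)

# First formula: choose invertible B and C, set A = C B^{-1} so that AB = C
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # almost surely invertible
C = rng.standard_normal((3, 3)) + 3 * np.eye(3)
A = C @ np.linalg.inv(B)
print(np.allclose(np.linalg.inv(A), B @ np.linalg.inv(C)))   # True

# Second formula: choose P, L, U and set A = P^{-1} L U so that PA = LU
P = np.eye(3)[[2, 0, 1]]                                  # a permutation matrix
L = np.tril(rng.standard_normal((3, 3)), -1) + np.eye(3)  # unit lower triangular
U = np.triu(rng.standard_normal((3, 3))) + 3 * np.eye(3)  # upper triangular, nonzero diagonal
A = np.linalg.inv(P) @ L @ U
print(np.allclose(np.linalg.inv(A), np.linalg.inv(U) @ np.linalg.inv(L) @ P))   # True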

Linear Algebra and Its Applications, Exercise 1.6.2

Exercise 1.6.2. (a) Find the inverses of the following matrices:

P = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \quad  P = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix}

(b) Why does P^{-1} = P^T for any permutation matrix P?

Answer: (a) The first permutation matrix exchanges the first and third rows of the identity matrix I, while leaving the second row the same. If we apply the permutation matrix again then the first and third rows will be exchanged back into their former positions, and the resulting matrix will be the identity matrix again:

P P = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = I

So we have

P^{-1} = P = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix}

For the second matrix, we note that for the (1, 1) entry of the product of P and its inverse to be equal to one (as it would be if the product were the identity matrix), we must multiply the (1, 3) entry of P by one, which implies that the (3, 1) entry of the inverse is one, since we are multiplying the first row of P by the first column of its inverse.

Similarly, for the (2, 2) entry of the product to be one, we must multiply the (2, 1) entry of P by one, which implies that the (1, 2) entry of the inverse is one, since we are multiplying the second row of P by the second column of its inverse.

Finally, for the (3, 3) entry of the product to be one, the (3, 2) entry of P must multiply an entry of one in the (2,3) position of the inverse.

Constructing the matrix A where

A = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix}

we then have

P A = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = I

and

A P = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = I

so that

P^{-1} = A = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix}

(b) A permutation matrix P permutes the rows of a matrix it multiplies (on the left). Multiplying the identity matrix I by P (on the left) thus permutes the rows of I, and since PI = P, P itself is a permutation of the rows of I.

Now consider the matrix A consisting of P multiplied by the transpose of P. The (i, j) entry of A is found by multiplying row i of P by column j of its transpose. But column j of P’s transpose is simply row j of P, so the (i, j) entry of A equals row i of P times row j of P.

Each row of the identity matrix I contains a one in some column, with that column being different for each row. Since P is a permutation of I, this is true for P as well. So when i = j the (i, j) entry of A is the product of row i of P with itself, which in turn is a sum where all terms but one consist of zero multiplying zero, and the remaining term consists of one multiplying one. When i = j the (i, j) entry of A is therefore one.

On the other hand, when i does not equal j the (i, j) entry of A is the product of row i of P with a different row j of P. Since the two rows will have a one in different columns, this product is a sum where every term is either zero multiplying zero, zero multiplying one, or one multiplying zero. For non-equal i and j the (i, j) entry of A is therefore zero.

Since A has all zero entries except on the diagonal where i = j, A is the identity matrix and we have

P P^T = A = I

Now consider the matrix B produced by multiplying the transpose of P times P. The (i, j) entry of B is the product of row i of the transpose of P times column j of P, which in turn is simply column i of P times column j of P.

Each column of the identity matrix I contains a one in some row, with that row being different for each column. Since P is simply a permutation of the rows of I, this is true for P as well. So when i = j the (i, j) entry of B is the product of column i of P with itself, which in turn is a sum where all terms but one consist of zero multiplying zero, and the remaining term consists of one multiplying one. When i = j the (i, j) entry of B is therefore one.

On the other hand, when i does not equal j the (i, j) entry of B is the product of column i of P with a different column of P. Since the two columns will have a one in different rows, this product is a sum where every term is either zero multiplying zero, zero multiplying one, or one multiplying zero. For non-equal i and j the (i, j) entry of B is therefore zero.

Since B has all zero entries except on the diagonal where i = j, B is the identity matrix and we have

P^T P = B = I

Since the transpose of P is both a left inverse of P and a right inverse of P, the transpose of P is the (unique) inverse of P.

Here is a more formal proof: Since P is a permutation of the identity matrix I, for each row i of P there is a k for which

P_{ik} = 1, P_{ij} = 0, j \ne k

and

P_{lk} = 0, l \ne i

Consider the matrix A where

A = P P^T

We then have

A_{ii} = \sum_l P_{il} P^T_{li} = \sum_l P_{il} P_{il} = P_{ik}P_{ik} + \sum_{l \ne k} P_{il} P_{il} = 1 \cdot 1 + \sum_{l \ne k} 0 \cdot 0 = 1

For j \ne i we have

A_{ij} = \sum_l P_{il} P^T_{lj} = \sum_l P_{il} P_{jl}  = P_{ik} P_{jk} + \sum_{l \ne k} P_{il} P_{jl}

= 1 \cdot 0 + \sum_{l \ne k} 0 \cdot P_{jl}  = 0 + 0 = 0

Since for each row i we have

A_{ii} = 1, A_{ij} = 0, j \ne i

A is the identity matrix I and we have

P P^T = A = I.

Since P is a permutation of the identity matrix I, for each column j of P there is a k (which may be different from the one above) for which

P_{kj} = 1, P_{ij} = 0, i \ne k

and

P_{kl} = 0, l \ne j

Consider the matrix B where

B = P^T P

We then have

B_{jj} = \sum_l P^T_{jl} P_{lj} = \sum_l P_{lj} P_{lj} = P_{kj}P_{kj} + \sum_{l \ne k} P_{lj} P_{lj} = 1 \cdot 1 + \sum_{l \ne k} 0 \cdot 0 = 1

For i \ne j we have

B_{ij} = \sum_l P^T_{il} P_{lj} = \sum_l P_{li} P_{lj}  = P_{ki} P_{kj} + \sum_{l \ne k} P_{li} P_{lj}

= 0 \cdot 1 + \sum_{l \ne k} P_{li} \cdot 0  = 0 + 0 = 0

Since for each column j we have

B_{jj} = 1, B_{ij} = 0, i \ne j

B is the identity matrix I and we have

P^T P = B = I.

Since the transpose of P is both a left and right inverse of P it is the unique inverse of P, and we have

P^{-1} = P^T

for every permutation matrix P.
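
A short numerical confirmation (not part of the exercise), using the second matrix from part (a) and a larger random permutation matrix:

import numpy as np

P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
print(np.array_equal(P @ P.T, np.eye(3)))   # True: P^T is the inverse found above

rng = np.random.default_rng(0)
Q = np.eye(6)[rng.permutation(6)]           # a random 6 by 6 permutation matrix
print(np.allclose(Q @ Q.T, np.eye(6)) and np.allclose(Q.T @ Q, np.eye(6)))   # True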

Linear Algebra and Its Applications, Exercise 1.6.1

Exercise 1.6.1. Find the inverses of the following matrices:

A_1 = \begin{bmatrix} 0&2 \\ 3&0 \end{bmatrix} \quad A_2 = \begin{bmatrix} 2&0 \\ 4&2 \end{bmatrix} \quad A_3 = \begin{bmatrix} \cos \theta &-\sin \theta \\ \sin \theta &\cos \theta \end{bmatrix}

Answer: We can use the standard formula for the inverse of a 2 by 2 matrix A:

A = \begin{bmatrix} a&b \\ c&d \end{bmatrix} \quad A^{-1} = 1/(ad - bc) \begin{bmatrix} d&-b \\ -c&a \end{bmatrix}

First, we have

A_1^{-1} = 1/(0 \cdot 0 - 2 \cdot 3) \begin{bmatrix} 0&-2 \\ -3&0 \end{bmatrix} = -\frac{1}{6} \begin{bmatrix} 0&-2 \\ -3&0 \end{bmatrix} = \begin{bmatrix} 0&\frac{1}{3} \\ \frac{1}{2}&0 \end{bmatrix}

We then have

A_2^{-1} = 1/(2 \cdot 2 - 0 \cdot 4) \begin{bmatrix} 2&0 \\ -4&2 \end{bmatrix} = \frac{1}{4} \begin{bmatrix} 2&0 \\ -4&2 \end{bmatrix} = \begin{bmatrix} \frac{1}{2}&0 \\ -1&\frac{1}{2} \end{bmatrix}

Finally we have

A_3^{-1} = 1/(\cos^2 \theta + \sin^2 \theta) \begin{bmatrix} \cos \theta &\sin \theta \\ -\sin \theta &\cos \theta \end{bmatrix} = \begin{bmatrix} \cos \theta &\sin \theta \\ -\sin \theta &\cos \theta \end{bmatrix}
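
The 2 by 2 formula itself is easy to encode and check (a sketch, not part of the exercise; the helper inv2 and the angle value are just for this check):

import numpy as np

def inv2(A):
    # standard 2 by 2 inverse formula; assumes ad - bc is nonzero
    (a, b), (c, d) = A
    return np.array([[d, -b], [-c, a]]) / (a * d - b * c)

theta = 0.7   # an arbitrary angle for A_3
matrices = [np.array([[0.0, 2.0], [3.0, 0.0]]),
            np.array([[2.0, 0.0], [4.0, 2.0]]),
            np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])]
for A in matrices:
    print(np.allclose(A @ inv2(A), np.eye(2)))   # True for all three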
