Linear Algebra and Its Applications, Review Exercise 1.8

Review exercise 1.8. Given the following matrices:

E = \begin{bmatrix} 1&0&0 \\ 0&2&0 \\ 4&0&1 \end{bmatrix} \quad \text{or} \quad E = \begin{bmatrix} 1&1&1 \\ 0&0&0 \end{bmatrix} \quad \text{or} \quad E = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix}

for a matrix A how are the rows of EA related to the rows of A?

Answer: For the first matrix E, the product EA (defined whenever A has three rows) is a matrix in which:

  1. The first row is equal to the first row of A.
  2. The second row is equal to the second row of A multiplied by 2.
  3. The third row is equal to the sum of 4 times the first row of A and the third row of A.

For the second matrix E, the product EA (again defined whenever A has three rows) is a matrix with two rows in which:

  1. The first row is equal to the sum of all three rows of A.
  2. The second row is equal to zero.

The third matrix E is a permutation matrix that reverses the order of rows, so that in the product EA:

  1. The first row is equal to the third row of A.
  2. The second row is equal to the second row of A.
  3. The third row is equal to the first row of A.
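These row relationships are easy to spot-check numerically. A minimal NumPy sketch (the particular matrix A below is an arbitrary choice, since the exercise leaves A general):

```python
import numpy as np

# An arbitrary matrix A with three rows (the exercise leaves A general)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# The first and third matrices E from the exercise
E1 = np.array([[1, 0, 0],
               [0, 2, 0],
               [4, 0, 1]])
E3 = np.array([[0, 0, 1],
               [0, 1, 0],
               [1, 0, 0]])

EA1 = E1 @ A
assert np.allclose(EA1[0], A[0])             # row 1 of A unchanged
assert np.allclose(EA1[1], 2 * A[1])         # row 2 of A doubled
assert np.allclose(EA1[2], 4 * A[0] + A[2])  # 4*(row 1 of A) + row 3 of A

EA3 = E3 @ A
assert np.allclose(EA3, A[::-1])             # rows of A in reverse order
```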

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 1.7

Review exercise 1.7. For each of the 2 by 2 matrices containing only -1 or 1 as entries, determine whether the matrix is invertible or not.

Answer: A 2 by 2 matrix has four entries. If each entry can be either -1 or 1 then there are 16 possible matrices of this type:

A_0 = \begin{bmatrix} -1&-1 \\ -1&-1 \end{bmatrix} \quad A_1 = \begin{bmatrix} -1&-1 \\ -1&1 \end{bmatrix} \quad A_2 = \begin{bmatrix} -1&1 \\ -1&-1 \end{bmatrix} \quad A_3 = \begin{bmatrix} -1&1 \\ -1&1 \end{bmatrix}

A_4 = \begin{bmatrix} -1&-1 \\ 1&-1 \end{bmatrix} \quad A_5 = \begin{bmatrix} -1&-1 \\ 1&1 \end{bmatrix} \quad A_6 = \begin{bmatrix} -1&1 \\ 1&-1 \end{bmatrix} \quad A_7 = \begin{bmatrix} -1&1 \\ 1&1 \end{bmatrix}

A_8 = \begin{bmatrix} 1&-1 \\ -1&-1 \end{bmatrix} \quad A_9 = \begin{bmatrix} 1&-1 \\ -1&1 \end{bmatrix} \quad A_{10} = \begin{bmatrix} 1&1 \\ -1&-1 \end{bmatrix} \quad A_{11} = \begin{bmatrix} 1&1 \\ -1&1 \end{bmatrix}

A_{12} = \begin{bmatrix} 1&-1 \\ 1&-1 \end{bmatrix} \quad A_{13} = \begin{bmatrix} 1&-1 \\ 1&1 \end{bmatrix} \quad A_{14} = \begin{bmatrix} 1&1 \\ 1&-1 \end{bmatrix} \quad A_{15} = \begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix}

Of these matrices, the following eight matrices are singular:

A_0, A_3, A_5, A_6, A_{9}, A_{10}, A_{12}, A_{15}

This can be most easily shown by computing the value ad - bc, which is zero for all these matrices.
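The determinant check can be automated. A short sketch that reproduces the indexing scheme above, reading the entries a, b, c, d off the binary digits of the index (with a digit 0 standing for an entry of -1):

```python
# Entries of A_i read off the binary digits of i:
# bit 3 -> a, bit 1 -> b, bit 2 -> c, bit 0 -> d (digit 0 means -1)
def entry(bit):
    return 1 if bit else -1

singular = []
for i in range(16):
    a, b = entry(i >> 3 & 1), entry(i >> 1 & 1)
    c, d = entry(i >> 2 & 1), entry(i & 1)
    if a * d - b * c == 0:   # determinant ad - bc
        singular.append(i)

print(singular)  # -> [0, 3, 5, 6, 9, 10, 12, 15]
```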

The remaining eight matrices are nonsingular and have inverses as follows:

A_{1}^{-1} = \frac{1}{(-1) \cdot 1 - (-1) \cdot (-1)} \begin{bmatrix} 1&-(-1) \\ -(-1)&-1 \end{bmatrix} = -\frac{1}{2} \begin{bmatrix} 1&1 \\ 1&-1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -1&-1 \\ -1&1 \end{bmatrix} = \frac{1}{2} A_{1}

A_{2}^{-1} = \frac{1}{(-1) \cdot (-1) - 1 \cdot (-1)} \begin{bmatrix} -1&-1 \\ -(-1)&-1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -1&-1 \\ 1&-1 \end{bmatrix} = \frac{1}{2} A_{4}

A_{4}^{-1} = \frac{1}{(-1) \cdot (-1) - (-1) \cdot 1} \begin{bmatrix} -1&-(-1) \\ -1&-1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -1&1 \\ -1&-1 \end{bmatrix} = \frac{1}{2} A_{2}

A_{7}^{-1} = \frac{1}{-1 \cdot 1 - 1 \cdot 1} \begin{bmatrix} 1&-1 \\ -1&-1 \end{bmatrix} = -\frac{1}{2} \begin{bmatrix} 1&-1 \\ -1&-1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -1&1 \\ 1&1 \end{bmatrix} = \frac{1}{2} A_{7}

A_{8}^{-1} = \frac{1}{1 \cdot (-1) - (-1) \cdot (-1)} \begin{bmatrix} -1&-(-1) \\ -(-1)&1 \end{bmatrix} = -\frac{1}{2} \begin{bmatrix} -1&1 \\ 1&1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1&-1 \\ -1&-1 \end{bmatrix} = \frac{1}{2} A_{8}

A_{11}^{-1} = \frac{1}{1 \cdot 1 - 1 \cdot (-1)} \begin{bmatrix} 1&-1 \\ -(-1)&1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1&-1 \\ 1&1 \end{bmatrix} = \frac{1}{2} A_{13}

A_{13}^{-1} = \frac{1}{1 \cdot 1 - (-1) \cdot 1} \begin{bmatrix} 1&-(-1) \\ -1&1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1&1 \\ -1&1 \end{bmatrix} = \frac{1}{2} A_{11}

A_{14}^{-1} = \frac{1}{1 \cdot (-1) - 1 \cdot 1} \begin{bmatrix} -1&-1 \\ -1&1 \end{bmatrix} = -\frac{1}{2} \begin{bmatrix} -1&-1 \\ -1&1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1&1 \\ 1&-1 \end{bmatrix} = \frac{1}{2} A_{14}


Linear Algebra and Its Applications, Review Exercise 1.6

Review exercise 1.6. (a) For each of the 2 by 2 matrices containing only 0 or 1 as entries, determine whether the matrix is invertible or not. (b) Of the 10 by 10 matrices containing only 0 or 1 as entries, is a particular matrix chosen at random more likely to be invertible or not?

Answer: (a) A 2 by 2 matrix has four entries. If each entry can be either 0 or 1 then there are 16 possible matrices of this type:

A_0 = \begin{bmatrix} 0&0 \\ 0&0 \end{bmatrix} \quad A_1 = \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix} \quad A_2 = \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} \quad A_3 = \begin{bmatrix} 0&1 \\ 0&1 \end{bmatrix}

A_4 = \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix} \quad A_5 = \begin{bmatrix} 0&0 \\ 1&1 \end{bmatrix} \quad A_6 = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} \quad A_7 = \begin{bmatrix} 0&1 \\ 1&1 \end{bmatrix}

A_8 = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} \quad A_9 = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} \quad A_{10} = \begin{bmatrix} 1&1 \\ 0&0 \end{bmatrix} \quad A_{11} = \begin{bmatrix} 1&1 \\ 0&1 \end{bmatrix}

A_{12} = \begin{bmatrix} 1&0 \\ 1&0 \end{bmatrix} \quad A_{13} = \begin{bmatrix} 1&0 \\ 1&1 \end{bmatrix} \quad A_{14} = \begin{bmatrix} 1&1 \\ 1&0 \end{bmatrix} \quad A_{15} = \begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix}

Of these matrices, the following ten matrices are singular:

A_0, A_1, A_2, A_3, A_4, A_5, A_8, A_{10}, A_{12}, A_{15}

This can be most easily shown by computing the value ad - bc, which is zero for all these matrices.

The remaining six matrices are nonsingular and have inverses as follows:

A_{6}^{-1} = \frac{1}{0 \cdot 0 - 1 \cdot 1} \begin{bmatrix} 0&-1 \\ -1&0 \end{bmatrix} = \frac{1}{-1} \begin{bmatrix} 0&-1 \\ -1&0 \end{bmatrix} = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} = A_{6}

A_{7}^{-1} = \frac{1}{0 \cdot 1 - 1 \cdot 1} \begin{bmatrix} 1&-1 \\ -1&0 \end{bmatrix} = \frac{1}{-1} \begin{bmatrix} 1&-1 \\ -1&0 \end{bmatrix} = \begin{bmatrix} -1&1 \\ 1&0 \end{bmatrix}

A_{9}^{-1} = I^{-1} = I = A_{9}

A_{11}^{-1} = \frac{1}{1 \cdot 1 - 1 \cdot 0} \begin{bmatrix} 1&-1 \\ 0&1 \end{bmatrix} = \frac{1}{1} \begin{bmatrix} 1&-1 \\ 0&1 \end{bmatrix} = \begin{bmatrix} 1&-1 \\ 0&1 \end{bmatrix}

A_{13}^{-1} = \frac{1}{1 \cdot 1 - 0 \cdot 1} \begin{bmatrix} 1&0 \\ -1&1 \end{bmatrix} = \frac{1}{1} \begin{bmatrix} 1&0 \\ -1&1 \end{bmatrix} = \begin{bmatrix} 1&0 \\ -1&1 \end{bmatrix}

A_{14}^{-1} = \frac{1}{1 \cdot 0 - 1 \cdot 1} \begin{bmatrix} 0&-1 \\ -1&1 \end{bmatrix} = \frac{1}{-1} \begin{bmatrix} 0&-1 \\ -1&1 \end{bmatrix} = \begin{bmatrix} 0&1 \\ 1&-1 \end{bmatrix}

(b) The 10 by 10 matrices have 100 entries, and if each can be 0 or 1 that means there are 2^{100} possible matrices of this type. At present I don’t know of a good method to determine whether the majority of those matrices are invertible or not. I’ll update this post later if and when I have time to work on this some more.
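While a proof is left open above, a Monte Carlo experiment gives numerical evidence for part (b). A sketch using NumPy's rank computation (sample size and seed are arbitrary choices; this is an estimate, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 2000
invertible = 0
for _ in range(trials):
    M = rng.integers(0, 2, size=(10, 10))  # random 0/1 entries
    if np.linalg.matrix_rank(M) == 10:     # full rank <=> invertible
        invertible += 1

print(f"estimated fraction invertible: {invertible / trials:.3f}")
```

The estimate suggests that a random 10 by 10 matrix of 0s and 1s is much more likely to be invertible than not.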


Linear Algebra and Its Applications, Review Exercise 1.5

Review exercise 1.5. For each of the systems of equations in review exercise 1.4, factor the corresponding matrices into the forms A = LU or PA = LU.

Answer: For the first system

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&&&+&w&=&4 \\ u&+&v&&&=&3 \\ u&+&v&+&w&=&6 \end{array}

the corresponding matrix is

A = \begin{bmatrix} 1&0&1 \\ 1&1&0 \\ 1&1&1 \end{bmatrix}

and the final matrix after elimination is

U = \begin{bmatrix} 1&0&1 \\ 0&1&-1 \\ 0&0&1 \end{bmatrix}

The elimination steps were as follows:

  1. Subtract the first equation from the second equation (l_{21} = 1).
  2. Subtract the first equation from the third equation (l_{31} = 1).
  3. Subtract the second equation from the third equation (l_{32} = 1).

The matrix of multipliers is thus

L = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 1&1&1 \end{bmatrix}

and we have the following factorization:

A = \begin{bmatrix} 1&0&1 \\ 1&1&0 \\ 1&1&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 1&1&1 \end{bmatrix} \begin{bmatrix} 1&0&1 \\ 0&1&-1 \\ 0&0&1 \end{bmatrix} = LU

For the second system

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}&&v&+&w&=&0 \\ u&&&+&w&=&0 \\ u&+&v&&&=&6 \end{array}

the corresponding matrix is

A = \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 1&1&0 \end{bmatrix}

We did an initial exchange of the first and third rows, corresponding to multiplying by the permutation matrix

P = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix}

The final matrix after elimination was

U = \begin{bmatrix} 1&1&0 \\ 0&-1&1 \\ 0&0&2 \end{bmatrix}

The elimination steps were as follows:

  1. Subtract the first equation from the second (l_{21} = 1).
  2. Add the second equation to the third (l_{32} = -1).

The matrix of multipliers is thus

L = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 0&-1&1 \end{bmatrix}

and we have the following factorization:

PA = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 1&1&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 0&-1&1 \end{bmatrix} \begin{bmatrix} 1&1&0 \\ 0&-1&1 \\ 0&0&2 \end{bmatrix} = LU
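Both factorizations can be verified mechanically. A NumPy sketch multiplying out the matrices above:

```python
import numpy as np

# First system: A = LU
A1 = np.array([[1, 0, 1], [1, 1, 0], [1, 1, 1]])
L1 = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1]])
U1 = np.array([[1, 0, 1], [0, 1, -1], [0, 0, 1]])
assert np.array_equal(L1 @ U1, A1)

# Second system: PA = LU after exchanging rows 1 and 3
A2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
P = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
L2 = np.array([[1, 0, 0], [1, 1, 0], [0, -1, 1]])
U2 = np.array([[1, 1, 0], [0, -1, 1], [0, 0, 2]])
assert np.array_equal(P @ A2, L2 @ U2)
```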


Linear Algebra and Its Applications, Review Exercise 1.4

Review exercise 1.4. Solve the following systems of equations using elimination and back substitution:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&&&+&w&=&4 \\ u&+&v&&&=&3 \\ u&+&v&+&w&=&6 \end{array}    and    \setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}&&v&+&w&=&0 \\ u&&&+&w&=&0 \\ u&+&v&&&=&6 \end{array}

Answer: We start with the system

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&&&+&w&=&4 \\ u&+&v&&&=&3 \\ u&+&v&+&w&=&6 \end{array}

by subtracting the first equation from the second and third equations to obtain the following system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&&&+&w&=&4 \\ &&v&-&w&=&-1 \\ &&v&&&=&2 \end{array}

We can then subtract the second equation from the third to produce the following system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&&&+&w&=&4 \\ &&v&-&w&=&-1 \\ &&&&w&=&3 \end{array}

We then back-substitute the value of w into the second equation to obtain

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&&&+&w&=&4 \\ &&v&&&=&2 \\ &&&&w&=&3 \end{array}

and then back-substitute the value of w into the first equation to obtain

\setlength\arraycolsep{0.2em}\begin{array}{rcr}u&=&1 \\ v&=&2 \\ w&=&3 \end{array}

Note that in matrix terms the elimination sequence above corresponds to

\begin{bmatrix} 1&0&1 \\ 1&1&0 \\ 1&1&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&1 \\ 0&1&-1 \\ 0&1&0 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&1 \\ 0&1&-1 \\ 0&0&1 \end{bmatrix}

We next look at the system

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}&&v&+&w&=&0 \\ u&&&+&w&=&0 \\ u&+&v&&&=&6 \end{array}

which can be represented in matrix terms as

\begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 1&1&0 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 6 \end{bmatrix}

We can exchange the first and third equations to obtain the system

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} u&+&v&&&=&6 \\ u&&&+&w&=&0 \\ &&v&+&w&=&0 \end{array}

and then subtract the first equation from the second to obtain

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} u&+&v&&&=&6 \\ &&-v&+&w&=&-6 \\ &&v&+&w&=&0 \end{array}

We next add the second equation to the third to obtain

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} u&+&v&&&=&6 \\ &&-v&+&w&=&-6 \\ &&&&2w&=&-6 \end{array}

Solving the third equation yields w = -3, and back-substituting the value of w into the second equation yields v = 3. Finally, back-substituting the value of v into the first equation yields u = 3. The solution is therefore

\setlength\arraycolsep{0.2em}\begin{array}{rcr}u&=&3 \\ v&=&3 \\ w&=&-3 \end{array}

In matrix terms the sequence above corresponds to first multiplying by a permutation matrix to exchange rows 1 and 3:

\begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 1&1&0 \end{bmatrix} = \begin{bmatrix} 1&1&0 \\ 1&0&1 \\ 0&1&1 \end{bmatrix}

and then doing elimination:

\begin{bmatrix} 1&1&0 \\ 1&0&1 \\ 0&1&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&1&0 \\ 0&-1&1 \\ 0&1&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&1&0 \\ 0&-1&1 \\ 0&0&2 \end{bmatrix}
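The two solutions can be confirmed numerically. A sketch using NumPy's built-in solver (which performs the same elimination with pivoting internally):

```python
import numpy as np

# First system: u + w = 4, u + v = 3, u + v + w = 6
A1 = np.array([[1.0, 0.0, 1.0],
               [1.0, 1.0, 0.0],
               [1.0, 1.0, 1.0]])
b1 = np.array([4.0, 3.0, 6.0])
x1 = np.linalg.solve(A1, b1)
assert np.allclose(x1, [1.0, 2.0, 3.0])   # u = 1, v = 2, w = 3

# Second system: v + w = 0, u + w = 0, u + v = 6
A2 = np.array([[0.0, 1.0, 1.0],
               [1.0, 0.0, 1.0],
               [1.0, 1.0, 0.0]])
b2 = np.array([0.0, 0.0, 6.0])
x2 = np.linalg.solve(A2, b2)
assert np.allclose(x2, [3.0, 3.0, -3.0])  # u = 3, v = 3, w = -3
```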


Linear Algebra and Its Applications, Review Exercise 1.3

Review exercise 1.3. Find a 2 by 2 matrix A for which a_{12} = \frac{1}{2} and (a) A^2 = I, (b) A^{-1} = A^T, and (c) A^2 = A.

Answer: (a) We have a_{12} = \frac{1}{2} and A^2 = I. Since A^2 = I we see that A is its own inverse. Using the formula for the inverse of a 2 by 2 matrix we therefore have

A = \begin{bmatrix} a&\frac{1}{2} \\ c&d \end{bmatrix} = \frac{1}{ad - \frac{c}{2}} \begin{bmatrix} d&-\frac{1}{2} \\ -c&a \end{bmatrix} = A^{-1}

Since \frac{1}{2} = \frac{1}{ad - \frac{c}{2}} \cdot (-\frac{1}{2}), we have ad - \frac{c}{2} = -1. Then a = \frac{d}{ad - \frac{c}{2}} = -d or d = -a. Substituting for d we then have -a^2 - \frac{c}{2} = -1 or c = 2(1 - a^2). So we can freely choose a and then that value determines the values of c and d.

If we choose a = 1 then

A = \begin{bmatrix} 1&\frac{1}{2} \\ 0&-1 \end{bmatrix}

so that

A^2 = \begin{bmatrix} 1&\frac{1}{2} \\ 0&-1 \end{bmatrix} \begin{bmatrix} 1&\frac{1}{2} \\ 0&-1 \end{bmatrix}

= \begin{bmatrix} 1 \cdot 1 + \frac{1}{2} \cdot 0&1 \cdot \frac{1}{2} + \frac{1}{2} \cdot (-1) \\ 0 \cdot 1 + (-1) \cdot 0&0 \cdot \frac{1}{2} + (-1) \cdot (-1) \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = I

If we choose a = 0 then

A = \begin{bmatrix} 0&\frac{1}{2} \\ 2&0 \end{bmatrix}

and if we choose a = \frac{1}{2} then

A = \begin{bmatrix} \frac{1}{2}&\frac{1}{2} \\ \frac{3}{2}&-\frac{1}{2} \end{bmatrix}

There are an infinite number of other matrices for which a_{12} = \frac{1}{2} and A^2 = I, corresponding to other values of a = a_{11}.
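The one-parameter family for part (a) can be spot-checked for several values of a. A small NumPy sketch (the sample values of a are arbitrary):

```python
import numpy as np

def make_A(a):
    """Matrix with a12 = 1/2, d = -a, c = 2(1 - a^2), so that A^2 = I."""
    return np.array([[a, 0.5],
                     [2 * (1 - a * a), -a]])

# Any value of a gives a square root of the identity
for a in (1.0, 0.0, 0.5, -2.0, 3.7):
    A = make_A(a)
    assert np.allclose(A @ A, np.eye(2))
```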

(b) We have a_{12} = \frac{1}{2} and A^{-1} = A^T so that

AA^T = \begin{bmatrix} a&\frac{1}{2} \\ c&d \end{bmatrix} \begin{bmatrix} a&c \\ \frac{1}{2}&d \end{bmatrix}

= \begin{bmatrix} a^2 + \frac{1}{4}&ac + \frac{1}{2}d \\ ac + \frac{1}{2}d&c^2 + d^2 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}

Since a^2 + \frac{1}{4} = 1 we have a = \pm \frac{\sqrt{3}}{2}. We choose a = \frac{\sqrt{3}}{2}. Since ac + \frac{1}{2}d = \frac{\sqrt{3}}{2}c + \frac{1}{2}d = 0 we have d = -\sqrt{3}c. Finally, since c^2 + d^2 = c^2 + (-\sqrt{3}c)^2 = c^2 + 3c^2 = 1 we have c^2 = \frac{1}{4} or c = \pm \frac{1}{2}. We choose c = \frac{1}{2}.

Since a = \frac{\sqrt{3}}{2} and c = \frac{1}{2} we thus have

A = \begin{bmatrix} \frac{\sqrt{3}}{2}&\frac{1}{2} \\ \frac{1}{2}&-\frac{\sqrt{3}}{2} \end{bmatrix}

and

AA^T = \begin{bmatrix} \frac{\sqrt{3}}{2}&\frac{1}{2} \\ \frac{1}{2}&-\frac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} \frac{\sqrt{3}}{2}&\frac{1}{2} \\ \frac{1}{2}&-\frac{\sqrt{3}}{2} \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = I

Note that A = A^T in this case so that we also have A^2 = I. This value of A could have been derived from the formulas in (a) above by setting a = \frac{\sqrt{3}}{2}, so that d = -a = -\frac{\sqrt{3}}{2}, and c = 2(1 - a^2) = 2(1 - \frac{3}{4}) = \frac{1}{2}.

If we instead choose c = -\frac{1}{2} then

A = \begin{bmatrix} \frac{\sqrt{3}}{2}&\frac{1}{2} \\ -\frac{1}{2}&\frac{\sqrt{3}}{2} \end{bmatrix}

and

AA^T = \begin{bmatrix} \frac{\sqrt{3}}{2}&\frac{1}{2} \\ -\frac{1}{2}&\frac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} \frac{\sqrt{3}}{2}&-\frac{1}{2} \\ \frac{1}{2}&\frac{\sqrt{3}}{2} \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = I

Finally, if we choose a = -\frac{\sqrt{3}}{2} then since ac + \frac{1}{2}d = -\frac{\sqrt{3}}{2}c + \frac{1}{2}d = 0 we have d = \sqrt{3}c. Since c^2 + d^2 = c^2 + (\sqrt{3}c)^2 = c^2 + 3c^2 = 1 we again have c^2 = \frac{1}{4} or c = \pm \frac{1}{2}.

Thus a = -\frac{\sqrt{3}}{2}, c = \pm \frac{1}{2}, and d = \sqrt{3}c produces two other matrices for which AA^T = I (and also A^2 = I for the first one):

A = \begin{bmatrix} -\frac{\sqrt{3}}{2}&\frac{1}{2} \\ \frac{1}{2}&\frac{\sqrt{3}}{2} \end{bmatrix} \qquad A = \begin{bmatrix} -\frac{\sqrt{3}}{2}&\frac{1}{2} \\ -\frac{1}{2}&-\frac{\sqrt{3}}{2} \end{bmatrix}

The above four matrices are the only matrices for which a_{12} = \frac{1}{2} and AA^T = I.

(c) We have a_{12} = \frac{1}{2} and A^2 = A, so that

A^2 = \begin{bmatrix} a&\frac{1}{2} \\ c&d \end{bmatrix} \begin{bmatrix} a&\frac{1}{2} \\ c&d \end{bmatrix}

= \begin{bmatrix} a^2 + \frac{1}{2}c&\frac{1}{2}a + \frac{1}{2}d \\ ca + dc&\frac{1}{2}c + d^2 \end{bmatrix} = \begin{bmatrix} a&\frac{1}{2} \\ c&d \end{bmatrix} = A

Since \frac{1}{2}a + \frac{1}{2}d = \frac{1}{2} we have a+d =1 or d = 1-a. Since a^2 + \frac{1}{2}c = a we have c = 2(a - a^2). Thus we can choose any value for a and then c and d will be determined accordingly. For example, if we choose a = 1 we have d = 1 - 1 = 0 and c = 2(1 - 1^2) = 0. We then have

A = \begin{bmatrix} 1&\frac{1}{2} \\ 0&0 \end{bmatrix}

and

A^2 = \begin{bmatrix} 1&\frac{1}{2} \\ 0&0 \end{bmatrix} \begin{bmatrix} 1&\frac{1}{2} \\ 0&0 \end{bmatrix} = \begin{bmatrix} 1 \cdot 1 + \frac{1}{2} \cdot 0&1 \cdot \frac{1}{2} + \frac{1}{2} \cdot 0\\ 0 \cdot 1 + 0 \cdot \frac{1}{2}&0 \cdot \frac{1}{2} + 0 \cdot 0 \end{bmatrix} = \begin{bmatrix} 1&\frac{1}{2} \\ 0&0 \end{bmatrix} = A

Similarly if we choose a = 0 we have d = 1 - 0 = 1 and c = 2(0 - 0^2) = 0. We then have

A = \begin{bmatrix} 0&\frac{1}{2} \\ 0&1 \end{bmatrix}

and

A^2 = \begin{bmatrix} 0&\frac{1}{2} \\ 0&1 \end{bmatrix} \begin{bmatrix} 0&\frac{1}{2} \\ 0&1 \end{bmatrix} = \begin{bmatrix} 0&\frac{1}{2} \\ 0&1 \end{bmatrix} = A

Of particular interest is the matrix for which a = \frac{1}{2}. In that case we have d = 1 - \frac{1}{2} = \frac{1}{2} and c = 2(\frac{1}{2} - (\frac{1}{2})^2) = 2(\frac{1}{2} - \frac{1}{4}) = \frac{1}{2}. We then have

A = \begin{bmatrix} \frac{1}{2}&\frac{1}{2} \\ \frac{1}{2}&\frac{1}{2} \end{bmatrix}

and

A^2 = \begin{bmatrix} \frac{1}{2}&\frac{1}{2} \\ \frac{1}{2}&\frac{1}{2} \end{bmatrix} \begin{bmatrix} \frac{1}{2}&\frac{1}{2} \\ \frac{1}{2}&\frac{1}{2} \end{bmatrix}

= \begin{bmatrix} \frac{1}{4}+\frac{1}{4}&\frac{1}{4}+\frac{1}{4} \\ \frac{1}{4}+\frac{1}{4}&\frac{1}{4}+\frac{1}{4} \end{bmatrix} = \begin{bmatrix} \frac{1}{2}&\frac{1}{2} \\ \frac{1}{2}&\frac{1}{2} \end{bmatrix} = A

There are an infinite number of other matrices for which a_{12} = \frac{1}{2} and A^2 = A, corresponding to other values of a = a_{11}.
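The idempotent family can likewise be spot-checked. A small NumPy sketch (sample values of a again arbitrary):

```python
import numpy as np

def make_A(a):
    """Matrix with a12 = 1/2, d = 1 - a, c = 2(a - a^2), so that A^2 = A."""
    return np.array([[a, 0.5],
                     [2 * (a - a * a), 1 - a]])

# Any value of a gives an idempotent matrix
for a in (1.0, 0.0, 0.5, 2.0, -1.5):
    A = make_A(a)
    assert np.allclose(A @ A, A)
```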


Linear Algebra and Its Applications, Review Exercise 1.2

Review exercise 1.2. Given matrices A and B as follows

A = \begin{bmatrix} 1&0 \\ 2&1 \end{bmatrix} \qquad B = \begin{bmatrix} 1&2 \\ 0&1 \end{bmatrix}

find the products AB and BA, the inverses A^{-1}, B^{-1}, and (AB)^{-1}.

Answer: We have

AB = \begin{bmatrix} 1&0 \\ 2&1 \end{bmatrix} \begin{bmatrix} 1&2 \\ 0&1 \end{bmatrix} = \begin{bmatrix}1&2 \\ 2&5 \end{bmatrix}

BA = \begin{bmatrix} 1&2 \\ 0&1 \end{bmatrix} \begin{bmatrix} 1&0 \\ 2&1 \end{bmatrix} = \begin{bmatrix}5&2 \\ 2&1 \end{bmatrix}

A^{-1} = \frac{1}{1 \cdot 1 - 2 \cdot 0} \begin{bmatrix} 1&-0 \\ -2&1 \end{bmatrix} = \begin{bmatrix} 1&0 \\ -2&1 \end{bmatrix}

B^{-1} = \frac{1}{1 \cdot 1 - 2 \cdot 0} \begin{bmatrix} 1&-2 \\ -0&1 \end{bmatrix} = \begin{bmatrix} 1&-2 \\ 0&1 \end{bmatrix}

(AB)^{-1} = B^{-1}A^{-1} = \begin{bmatrix} 1&-2 \\ 0&1 \end{bmatrix} \begin{bmatrix} 1&0 \\ -2&1 \end{bmatrix} = \begin{bmatrix} 5&-2 \\ -2&1 \end{bmatrix}
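All five results can be verified with a few lines of NumPy:

```python
import numpy as np

A = np.array([[1, 0], [2, 1]])
B = np.array([[1, 2], [0, 1]])

assert np.array_equal(A @ B, [[1, 2], [2, 5]])
assert np.array_equal(B @ A, [[5, 2], [2, 1]])

A_inv = np.linalg.inv(A)
B_inv = np.linalg.inv(B)
assert np.allclose(A_inv, [[1, 0], [-2, 1]])
assert np.allclose(B_inv, [[1, -2], [0, 1]])

# (AB)^-1 = B^-1 A^-1: the inverses multiply in reverse order
assert np.allclose(np.linalg.inv(A @ B), B_inv @ A_inv)
assert np.allclose(B_inv @ A_inv, [[5, -2], [-2, 1]])
```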


Linear Algebra and Its Applications, Review Exercise 1.1

Review exercise 1.1. (a) Show the 3 by 3 matrices A and B for which a_{ij} = i - j and b_{ij} = i/j.

(b) Find the products AB, BA, and A^2 of the above matrices.

Answer: (a) We have

A = \begin{bmatrix} 1-1&1-2&1-3 \\ 2-1&2-2&2-3 \\ 3-1&3-2&3-3 \end{bmatrix} = \begin{bmatrix} 0&-1&-2 \\ 1&0&-1 \\ 2&1&0 \end{bmatrix}

and

B = \begin{bmatrix} 1/1&1/2&1/3 \\ 2/1&2/2&2/3 \\ 3/1&3/2&3/3 \end{bmatrix} = \begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3} \\ 2&1&\frac{2}{3} \\ 3&\frac{3}{2}&1 \end{bmatrix}

(b) We have

AB = \begin{bmatrix} 0&-1&-2 \\ 1&0&-1 \\ 2&1&0 \end{bmatrix} \begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3} \\ 2&1&\frac{2}{3} \\ 3&\frac{3}{2}&1 \end{bmatrix}

= \begin{bmatrix} 0-2-6&0-1-3&0-\frac{2}{3}-2 \\ 1+0-3&\frac{1}{2}+0-\frac{3}{2}&\frac{1}{3}+0-1 \\ 2+2+0&1+1+0&\frac{2}{3}+\frac{2}{3}+0 \end{bmatrix} = \begin{bmatrix} -8&-4&-\frac{8}{3} \\ -2&-1&-\frac{2}{3} \\ 4&2&\frac{4}{3} \end{bmatrix}

BA = \begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3} \\ 2&1&\frac{2}{3} \\ 3&\frac{3}{2}&1 \end{bmatrix} \begin{bmatrix} 0&-1&-2 \\ 1&0&-1 \\ 2&1&0 \end{bmatrix}

= \begin{bmatrix} 0+\frac{1}{2}+\frac{2}{3}&-1+0+\frac{1}{3}&-2-\frac{1}{2}+0 \\ 0+1+\frac{4}{3}&-2+0+\frac{2}{3}&-4-1+0 \\ 0+\frac{3}{2}+2&-3+0+1&-6-\frac{3}{2}+0 \end{bmatrix} = \begin{bmatrix} \frac{7}{6}&-\frac{2}{3}&-\frac{5}{2} \\ \frac{7}{3}&-\frac{4}{3}&-5 \\ \frac{7}{2}&-2&-\frac{15}{2} \end{bmatrix}

A^2 = \begin{bmatrix} 0&-1&-2 \\ 1&0&-1 \\ 2&1&0 \end{bmatrix} \begin{bmatrix} 0&-1&-2 \\ 1&0&-1 \\ 2&1&0 \end{bmatrix}

= \begin{bmatrix} 0-1-4&0+0-2&0+1+0 \\ 0+0-2&-1+0-1&-2+0+0 \\ 0+1+0&-2+0+0&-4-1+0 \end{bmatrix} = \begin{bmatrix} -5&-2&1 \\ -2&-2&-2 \\ 1&-2&-5 \end{bmatrix}

(As a point of interest, note that A = -A^T and A^2 = (A^2)^T.  This follows from equation 1M(i) in section 1.6: (A^2)^T = (AA)^T = A^TA^T = (-A)(-A) = AA = A^2.)
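The matrices, the products, and the symmetry observation can all be checked numerically by building A and B directly from the entry formulas:

```python
import numpy as np

n = 3
# a_ij = i - j and b_ij = i/j, with 1-based indices as in the exercise
A = np.array([[i - j for j in range(1, n + 1)] for i in range(1, n + 1)], dtype=float)
B = np.array([[i / j for j in range(1, n + 1)] for i in range(1, n + 1)])

assert np.allclose(A @ B, [[-8, -4, -8/3], [-2, -1, -2/3], [4, 2, 4/3]])
assert np.allclose(B @ A, [[7/6, -2/3, -5/2], [7/3, -4/3, -5], [7/2, -2, -15/2]])
assert np.allclose(A @ A, [[-5, -2, 1], [-2, -2, -2], [1, -2, -5]])

# A = -A^T (skew-symmetric), hence A^2 is symmetric
assert np.allclose(A, -A.T)
assert np.allclose(A @ A, (A @ A).T)
```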


Linear Algebra and Its Applications, Exercise 1.7.10

Exercise 1.7.10. When partial pivoting is used show that \lvert l_{ij} \rvert \le 1 for all multipliers l_{ij} in L. In addition, show that if \lvert a_{ij} \rvert \le 1 for all i and j then after producing zeros in the first column we have \lvert a_{ij} \rvert \le 2, and in general we have \lvert a_{ij} \rvert \le 2^k after producing zeros in column k. Finally, show a 3 by 3 matrix with \lvert a_{ij} \rvert \le 1 and \lvert l_{ij} \rvert \le 1 for which the last pivot is 4.

Answer: (a) With partial pivoting we choose the pivot p_{jj} for each row and column in turn so that the pivot is the largest (in absolute value) of all the candidate pivots in the same column. Thus \lvert p_{ij} \rvert \le \lvert p_{jj} \rvert where p_{ij} are all the other candidate pivots. The multipliers l_{ij} are then equal to the candidate pivots divided by the chosen pivot, so that l_{ij} = p_{ij}/p_{jj}. But since \lvert p_{ij} \rvert \le \lvert p_{jj} \rvert we have \lvert l_{ij} \rvert = \lvert p_{ij}/p_{jj} \rvert \le 1.

(b) Assume that \lvert a_{ij}\rvert \le 1 for all i and j. Without loss of generality assume that \lvert a_{i1} \rvert \le \lvert a_{11} \rvert for all i > 1. (If this were not true then we could simply employ partial pivoting and do a row exchange to ensure that this would be the case.) Also assume that a_{11} \ne 0 (i.e., the matrix is not singular). We then have l_{i1} = a_{i1}/a_{11} for i > 1 and thus \lvert l_{i1} \rvert = \lvert a_{i1}/a_{11} \rvert \le 1 for i > 1.

Let A' be the matrix produced after the first stage of elimination. Then for i = 1 we have a'_{ij} = a_{ij} and thus \lvert a'_{ij} \rvert = \lvert a_{ij} \rvert \le 1 and thus  \lvert a'_{ij} \rvert < 2. For i > 1 we have a'_{ij} = a_{ij} - l_{i1}a_{1j} so that \lvert a'_{ij} \rvert = \lvert a_{ij} - l_{i1}a_{1j} \rvert for i > 1. But we have \lvert a_{ij} \rvert \le 1 for all i (and thus \lvert a_{1j} \rvert \le 1) and \lvert l_{i1} \rvert \le 1 for i > 1. Thus for the product l_{i1}a_{1j} we have \lvert l_{i1}a_{1j} \rvert \le 1 for i > 1 and for the difference a_{ij} - l_{i1}a_{1j} we have \lvert a_{ij} - l_{i1}a_{1j} \rvert \le 2 for i > 1 (with the maximum difference occurring when a_{ij} = 1 and l_{i1}a_{1j} = -1 or vice versa).

So we have \lvert a'_{ij} \rvert < 2 for i = 1 and we also have \lvert a'_{ij} \rvert = \lvert a_{ij} - l_{i1}a_{1j} \rvert \le 2 for i > 1. For the matrix A' after the first stage of elimination we therefore have \lvert a'_{ij} \rvert \le 2 for all i and j.

Assume that after stage k of elimination we have produced the matrix A'' with \lvert a''_{ij} \rvert \le 2^k for all i and j and consider stage k+1 of elimination. In this stage our goal is to find an appropriate pivot for column k+1 and produce zeros in column k+1 for all rows below k+1. Without loss of generality assume that \lvert a''_{i,k+1} \rvert \le \lvert a''_{k+1,k+1} \rvert for all i > k+1 (otherwise we can do a row exchange as noted above). Also assume that a''_{k+1,k+1} \ne 0 (i.e., the matrix is not singular). We then have l_{i,k+1} = a''_{i,k+1}/a''_{k+1,k+1} for i > k+1 and thus \lvert l_{i,k+1} \rvert = \lvert a''_{i,k+1}/a''_{k+1,k+1} \rvert \le 1 for i > k+1.

Let A''' be the matrix produced after stage k+1 of elimination. Then for i \le k+1 we have a'''_{ij} = a''_{ij} and thus \lvert a'''_{ij} \rvert = \lvert a''_{ij} \rvert \le 2^k and thus  \lvert a'''_{ij} \rvert < 2^{k+1}. For i > k+1 we have a'''_{ij} = a''_{ij} - l_{i,k+1}a''_{k+1,j} so that \lvert a'''_{ij} \rvert = \lvert a''_{ij} - l_{i,k+1}a''_{k+1,j} \rvert for i > k+1. But we have \lvert a''_{ij} \rvert \le 2^k for all i (and thus \lvert a''_{k+1,j} \rvert \le 2^k) and \lvert l_{i,k+1} \rvert \le 1 for i > k+1. Thus for the product l_{i,k+1}a''_{k+1,j} we have \lvert l_{i,k+1}a''_{k+1,j} \rvert \le 2^k for i > k+1 and for the difference a''_{ij} - l_{i,k+1}a''_{k+1,j} we have \lvert a''_{ij} - l_{i,k+1}a''_{k+1,j} \rvert \le 2^{k+1} for i > k+1 (with the maximum difference occurring when a''_{ij} = 2^k and l_{i,k+1}a''_{k+1,j} = -2^k or vice versa).

So we have \lvert a'''_{ij} \rvert \le 2^k < 2^{k+1} for i \le k+1 and we also have \lvert a'''_{ij} \rvert = \lvert a''_{ij} - l_{i,k+1}a''_{k+1,j} \rvert \le 2^{k+1} for i > k+1. For the matrix A''' after stage k+1 of elimination (which produces zeros in column k+1) we therefore have \lvert a'''_{ij} \rvert \le 2^{k+1} for all i and j. For k = 1 we also have \lvert a'_{ij} \rvert \le 2^1 = 2 for all i and j after the first stage of elimination (which produces zeros in column 1). Therefore by induction if in the original matrix A we have \lvert a_{ij} \rvert \le 1 then for all k \ge 1, doing elimination with partial pivoting through stage k (which produces zeros in column k) yields a matrix A' for which \lvert a'_{ij} \rvert \le 2^k.
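The bound can be exercised numerically: run elimination with partial pivoting on random matrices whose entries satisfy \lvert a_{ij} \rvert \le 1 and check the entries after each stage. A sketch (matrix size, trial count, and seed are arbitrary choices):

```python
import numpy as np

def eliminate_partial_pivot(A):
    """Gaussian elimination with partial pivoting, yielding the working
    matrix after each stage k (zeros produced in column k)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))  # row of the largest candidate pivot
        U[[k, p]] = U[[p, k]]                # row exchange
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
        yield k + 1, U.copy()

rng = np.random.default_rng(1)
for _ in range(200):
    A = rng.uniform(-1.0, 1.0, size=(6, 6))  # entries with |a_ij| <= 1
    for k, U in eliminate_partial_pivot(A):
        # after stage k no entry exceeds 2^k in absolute value
        assert np.abs(U).max() <= 2.0 ** k + 1e-9
```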

(c) One way to construct the requested matrix is to start with the desired end result and work backward to find a matrix for which elimination will produce that result, assuming that the absolute value of the multiplier at each step is 1 (the largest absolute value allowed under our assumption) and that after stage k we have no value larger in absolute value than 2^k.

Thus in the final matrix (after stage 2 of elimination) we assume we have a pivot of 4 in column 3 but as yet unknown values for the other entries. The successive matrices at the various stages of elimination (the original matrix and the matrices after stage 1 and stage 2) would then be as follows:

\begin{bmatrix} x&x&x \\ x&x&x \\ x&x&x \end{bmatrix} \rightarrow \begin{bmatrix} x&x&x \\ 0&x&x \\ 0&x&x \end{bmatrix} \rightarrow \begin{bmatrix} x&x&x \\ 0&x&x \\ 0&0&4 \end{bmatrix}

We assume a multiplier of l_{32} = 1 for the single elimination step in stage 2, and prior to that step (i.e., after stage 1 of elimination) we can have no entry larger in absolute value than 2. The successive matrices at the various stages of elimination would then be as follows:

\begin{bmatrix} x&x&x \\ x&x&x \\ x&x&x \end{bmatrix} \rightarrow\begin{bmatrix} x&x&x \\ 0&x&-2 \\ 0&x&2 \end{bmatrix} \rightarrow \begin{bmatrix} x&x&x \\ 0&x&-2 \\ 0&0&4 \end{bmatrix}

so that subtracting row 2 from row 3 in stage 2 (i.e., using the multiplier l_{32} = 1) would produce the value 4 in the pivot position.

We now need to figure out how to produce a value of 2 in the (3,3) position and a value of -2 in the (2,3) position after stage 1 of elimination. Stage 1 consists of two elimination steps, and we assume a multiplier of 1 or -1 for each step; also, all entries in the original matrix can have an absolute value no greater than 1. The original matrix prior to stage 1 of elimination can then be as follows:

\begin{bmatrix} x&x&-1 \\ x&x&-1 \\ x&x&1 \end{bmatrix} \rightarrow\begin{bmatrix} x&x&-1 \\ 0&x&-2 \\ 0&x&2 \end{bmatrix} \rightarrow \begin{bmatrix} x&x&-1 \\ 0&x&-2 \\ 0&0&4 \end{bmatrix}

with the multiplier l_{21} = -1 and the multiplier l_{31} = 1 (i.e., in stage 1 of elimination we are adding row 1 to row 2 and subtracting row 1 from row 3).

Now that we have picked entries for column 3 in the original matrix and suitable multipliers for all elimination steps, we need to pick entries for column 1 and column 2 of the original matrix that are consistent with the chosen multipliers and ensure that elimination will produce nonzero pivots in columns 1 and 2.

We first pick values for the (2,2) and (3,2) entries in the matrix after stage 1 of elimination; since we are using the multiplier l_{32} = 1 those entries should be the same in order to produce a zero in the (3,2) position after stage 2. We’ll try picking a value of 1 for both entries:

\begin{bmatrix} x&x&-1 \\ x&x&-1 \\ x&x&1 \end{bmatrix} \rightarrow\begin{bmatrix} x&x&-1 \\ 0&1&-2 \\ 0&1&2 \end{bmatrix} \rightarrow \begin{bmatrix} x&x&-1 \\ 0&1&-2 \\ 0&0&4 \end{bmatrix}

From above we know that in stage 1 of elimination row 1 is added to row 2 (i.e., the multiplier l_{21} = -1) and row 1 is subtracted from row 3 (i.e., the multiplier l_{31} = 1). Since we have to end up with the same value in the (2,2) and (3,2) positions in either case, the easiest approach is to assume that the (1,2) position in the original matrix has a zero entry, so that adding or subtracting it doesn’t change the preexisting entries for rows 2 and 3:

\begin{bmatrix} x&0&-1 \\ x&1&-1 \\ x&1&1 \end{bmatrix} \rightarrow\begin{bmatrix} x&0&-1 \\ 0&1&-2 \\ 0&1&2 \end{bmatrix} \rightarrow \begin{bmatrix} x&0&-1 \\ 0&1&-2 \\ 0&0&4 \end{bmatrix}

Finally, we need entries in the first column such that adding row 1 to row 2 and subtracting row 1 from row 3 will produce zeros in the (2,1) and (3,1) positions:

\begin{bmatrix} 1&0&-1 \\ -1&1&-1 \\ 1&1&1 \end{bmatrix} \rightarrow\begin{bmatrix} 1&0&-1 \\ 0&1&-2 \\ 0&1&2 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&-1 \\ 0&1&-2 \\ 0&0&4 \end{bmatrix}

This completes our task. We now have a matrix

A = \begin{bmatrix} 1&0&-1 \\ -1&1&-1 \\ 1&1&1 \end{bmatrix}

for which \lvert a_{ij} \rvert \le 1 and \lvert l_{ij} \rvert \le 1, and for which elimination produces a final pivot of 4.
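As a check (a NumPy sketch of our own, not part of the book's solution), we can carry out the two stages of elimination on this A explicitly and confirm that the final pivot is 4:

```python
import numpy as np

A = np.array([[ 1., 0., -1.],
              [-1., 1., -1.],
              [ 1., 1.,  1.]])

U = A.copy()
# Stage 1: no row exchange is needed (the (1,1) entry already has the
# largest absolute value in column 1); l21 = -1 and l31 = 1.
U[1] -= (-1.0) * U[0]
U[2] -= ( 1.0) * U[0]
# Stage 2: l32 = 1.
U[2] -= ( 1.0) * U[1]
print(U)  # final pivot U[2, 2] is 4 = 2^2
```

All three multipliers have absolute value 1, and the largest entry doubles at each stage, so this small matrix attains the 2^k worst-case growth exactly.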


Linear Algebra and Its Applications, Exercise 1.7.9

Exercise 1.7.9. Given the matrix

A = \begin{bmatrix} .001&0 \\ 1&1000\end{bmatrix}

compare the pivots in standard elimination with those used in partial pivoting.

Answer: In the standard elimination process we would use the multiplier l_{21} = 1/.001 = 1000 to multiply the first row and subtract it from the second:

\begin{bmatrix} .001&0 \\ 1&1000\end{bmatrix} \rightarrow \begin{bmatrix} .001&0 \\ 0&1000 \end{bmatrix}

Here the pivots are .001 and 1000, a difference of six orders of magnitude.

With partial pivoting we would first do a row exchange:

\begin{bmatrix} .001&0 \\ 1&1000 \end{bmatrix} \rightarrow \begin{bmatrix} 1&1000 \\ .001&0 \end{bmatrix}

and then use the multiplier l_{21} = .001/1 = .001 to multiply the first row and subtract it from the second:

\begin{bmatrix} 1&1000 \\ .001&0 \end{bmatrix} \rightarrow \begin{bmatrix} 1&1000 \\ 0&-1 \end{bmatrix}

In this case the two pivots are 1 and -1 and are of the same order of magnitude.
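Both computations can be reproduced in a short NumPy sketch (ours, not the book's):

```python
import numpy as np

A = np.array([[0.001, 0.0],
              [1.0, 1000.0]])

# Standard elimination: l21 = 1/.001 = 1000.
U1 = A.copy()
U1[1] -= (U1[1, 0] / U1[0, 0]) * U1[0]
print(np.diag(U1))  # pivots 0.001 and 1000

# Partial pivoting: exchange the rows first, then l21 = .001.
U2 = A[[1, 0]].copy()
U2[1] -= (U2[1, 0] / U2[0, 0]) * U2[0]
print(np.diag(U2))  # pivots 1 and -1
```

The point of partial pivoting here is numerical: dividing by the tiny pivot .001 produces a huge multiplier (1000) that can amplify rounding errors, while the row exchange keeps every multiplier at most 1 in absolute value.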
