Linear Algebra and Its Applications, Exercise 1.5.19

Exercise 1.5.19. For the following two matrices, specify the values of a, b, and c for which elimination requires row exchanges, and the values for which the matrices in question are singular.

A = \begin{bmatrix} 1&2&0 \\ a&8&3 \\ 0&b&5 \end{bmatrix} \quad A = \begin{bmatrix} c&2 \\ 6&4 \end{bmatrix}

Answer: For the first matrix a row exchange would be necessary if the first step of elimination caused the entry in the (2,2) position to become 0. Since the entry in the (1,1) position is 1, the multiplier for the second row is a, and the first step leaves 8 - 2a in the (2,2) position. This is zero exactly when a = 4, since 8 - 4 \cdot 2 = 0:

\begin{bmatrix} 1&2&0 \\ 4&8&3 \\ 0&b&5 \end{bmatrix} \rightarrow \begin{bmatrix} 1&2&0 \\ 0&0&3 \\ 0&b&5 \end{bmatrix} \rightarrow \begin{bmatrix} 1&2&0 \\ 0&b&5 \\ 0&0&3 \end{bmatrix}

If we had both a = 4 and b = 0 then the matrix would be singular:

\begin{bmatrix} 1&2&0 \\ 4&8&3 \\ 0&0&5 \end{bmatrix} \rightarrow \begin{bmatrix} 1&2&0 \\ 0&0&3 \\ 0&0&5 \end{bmatrix}

However, these are not the only values of a and b for which the matrix is singular. Suppose we proceed with elimination. The first step would be to multiply the first row by l_1 = a and subtract the result from the second row:

\begin{bmatrix} 1&2&0 \\ a&8&3 \\ 0&b&5 \end{bmatrix} \rightarrow \begin{bmatrix} 1&2&0 \\ 0&8-2a&3 \\ 0&b&5 \end{bmatrix}

Assuming that a \ne 4, the next step of elimination would multiply the second row of the matrix by l_2 = b/(8-2a) and subtract the result from the third row:

\begin{bmatrix} 1&2&0 \\ 0&8-2a&3 \\ 0&b&5 \end{bmatrix} \rightarrow \begin{bmatrix} 1&2&0 \\ 0&8-2a&3 \\ 0&0&5-3b/(8-2a) \end{bmatrix}

If the (3,3) entry of the resulting matrix above is zero then the original matrix is singular. Setting 5 - 3b/(8-2a) = 0 gives 3b/(8-2a) = 5. Multiplying both sides by (8-2a) we have 3b = 5 \cdot (8-2a) = 40 - 10a, or 10a + 3b = 40.

So the first matrix is singular for all values of a and b satisfying 10a + 3b = 40. This includes the case a = 4, b = 0 mentioned above, the case a = 0, b = 40/3, the case a = 5, b = -10/3, and infinitely many others on the line b = -\frac{10}{3}a + \frac{40}{3}.
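This condition is easy to check numerically. The following is a quick sketch (assuming Python with NumPy is available; the helper det_A is mine) that evaluates the determinant, which expands to 40 - 10a - 3b, at a few points on and off the line:

    import numpy as np

    def det_A(a, b):
        # Determinant of [[1,2,0],[a,8,3],[0,b,5]]; expands to 40 - 10a - 3b.
        return np.linalg.det(np.array([[1, 2, 0],
                                       [a, 8, 3],
                                       [0, b, 5]], dtype=float))

    # Points on the line 10a + 3b = 40 give a zero determinant...
    for a in (4.0, 0.0, 5.0):
        b = (40 - 10 * a) / 3
        print(a, b, det_A(a, b))   # 0 up to roundoff in each case

    # ...and a point off the line does not.
    print(det_A(0.0, 0.0))         # approximately 40.0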

For the second matrix a row exchange would be necessary if c = 0:

\begin{bmatrix} 0&2 \\ 6&4 \end{bmatrix} \rightarrow \begin{bmatrix} 6&4 \\ 0&2 \end{bmatrix}

On the other hand, if c = 3 then the first elimination step produces zeros in both entries of the second row, and the matrix is singular:

\begin{bmatrix} 3&2 \\ 6&4 \end{bmatrix} \rightarrow \begin{bmatrix} 3&2 \\ 0&0 \end{bmatrix}
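A similar sketch (again assuming NumPy) confirms that c = 3 is the only value making the second matrix singular, since its determinant is 4c - 12:

    import numpy as np

    for c in (0.0, 1.0, 3.0):
        B = np.array([[c, 2],
                      [6, 4]], dtype=float)
        print(c, np.linalg.det(B))   # determinant 4c - 12: zero only at c = 3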

UPDATE: Prompted by Theodore’s comment, added a section deriving the complete set of a and b for which the first matrix is singular.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 1.5.18

Exercise 1.5.18. We have the following systems of linear equations:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}&&v&-&w&=&2 \\ u&-&v&&&=&2 \\ u&&&-&w&=&2 \end{array} and \setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}&&v&-&w&=&0 \\ u&-&v&&&=&0 \\ u&&&-&w&=&0 \end{array} and \setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}&&v&+&w&=&1 \\ u&+&v&&&=&1 \\ u&&&+&w&=&1 \end{array}

Which of the systems are singular, and which nonsingular? Which have no solutions? One solution? An infinite number of solutions?

Answer: For the first system we can add the first and second equations; together with the third equation this gives the following system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}&&v&-&w&=&2 \\ u&-&v&&&=&2 \\ u&&&-&w&=&2 \end{array} \rightarrow \setlength\arraycolsep{0.2em}\begin{array}{rcrcr}u&-&w&=&4 \\ u&-&w&=&2 \end{array}

This system is singular and has no solution.

For the second system we can again add the first and second equations to obtain an equivalent system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}&&v&-&w&=&0 \\ u&-&v&&&=&0 \\ u&&&-&w&=&0 \end{array} \rightarrow \setlength\arraycolsep{0.2em}\begin{array}{rcrcr}u&-&w&=&0 \\ u&-&w&=&0 \end{array} \rightarrow u = w

This system is also singular, and has infinitely many solutions.

For the third system we can subtract the first equation from the second:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}&&v&+&w&=&1 \\ u&+&v&&&=&1 \\ u&&&+&w&=&1 \end{array} \rightarrow \setlength\arraycolsep{0.2em}\begin{array}{rcrcr}u&-&w&=&0 \\ u&+&w&=&1 \end{array} \rightarrow u = \frac{1}{2}, v = \frac{1}{2}, w = \frac{1}{2}

This system is nonsingular and has one solution.
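These classifications can be double-checked numerically. The sketch below (assuming NumPy; the helper classify is mine) compares the rank of each coefficient matrix with the rank of the corresponding augmented matrix: unequal ranks mean no solution, equal but deficient ranks mean infinitely many solutions, and full rank means exactly one solution:

    import numpy as np

    def classify(A, b):
        # Compare the rank of A with the rank of the augmented matrix [A | b].
        r = np.linalg.matrix_rank(A)
        r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
        if r < r_aug:
            return "singular, no solution"
        if r < A.shape[1]:
            return "singular, infinitely many solutions"
        return f"nonsingular, one solution: {np.linalg.solve(A, b)}"

    A12 = np.array([[0, 1, -1], [1, -1, 0], [1, 0, -1]], dtype=float)  # first two systems
    A3  = np.array([[0, 1, 1], [1, 1, 0], [1, 0, 1]], dtype=float)     # third system

    print(classify(A12, np.array([2.0, 2.0, 2.0])))  # no solution
    print(classify(A12, np.array([0.0, 0.0, 0.0])))  # infinitely many solutions
    print(classify(A3,  np.array([1.0, 1.0, 1.0])))  # one solution: [0.5 0.5 0.5]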



Linear Algebra and Its Applications, Exercise 1.5.17

Exercise 1.5.17. If LPU order is used then rows are exchanged only at the end of elimination:

A = \begin{bmatrix} 1&1&1 \\ 1&1&3 \\ 2&5&8 \end{bmatrix} \rightarrow \begin{bmatrix} 1&1&1 \\ 0&0&2 \\ 0&3&6 \end{bmatrix} = PU = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 1&1&1 \\ 0&3&6 \\ 0&0&2 \end{bmatrix}

Specify what L is in the above case.

Answer: Since no row exchanges are done during elimination proper, the multipliers in L stay in their original places. In this case the multipliers are 1 in the (2,1) position and 2 in the (3,1) position, so we have

L = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix}

We can check this by computing LPU:

LPU = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 1&1&1 \\ 0&3&6 \\ 0&0&2 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix} \begin{bmatrix} 1&1&1 \\ 0&0&2 \\ 0&3&6 \end{bmatrix} = \begin{bmatrix} 1&1&1 \\ 1&1&3 \\ 2&5&8 \end{bmatrix}

We thus have LPU = A.
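The check is easy to reproduce in code; here is a minimal sketch (assuming NumPy):

    import numpy as np

    L = np.array([[1, 0, 0], [1, 1, 0], [2, 0, 1]], dtype=float)
    P = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)
    U = np.array([[1, 1, 1], [0, 3, 6], [0, 0, 2]], dtype=float)
    A = np.array([[1, 1, 1], [1, 1, 3], [2, 5, 8]], dtype=float)

    assert np.allclose(L @ P @ U, A)   # the product LPU reproduces A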



Linear Algebra and Its Applications, Exercise 1.5.16

Exercise 1.5.16. Find a 4 by 4 matrix (preferably a permutation matrix) that is nonsingular and for which elimination requires three row exchanges.

Answer: The following permutation matrix meets the requirement:

\begin{bmatrix} 0&0&0&1 \\ 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \end{bmatrix}

For this matrix elimination requires the following row exchanges:

  1. Exchange row 1 and row 2.
  2. Exchange row 2 and row 3.
  3. Exchange row 3 and row 4.

After the row exchanges the matrix is the identity matrix, which is nonsingular:

\begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}
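The three exchanges are easy to replay programmatically. Here is a sketch (assuming NumPy) that swaps the rows in the order listed above and confirms the result is the identity:

    import numpy as np

    P = np.array([[0, 0, 0, 1],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0]], dtype=float)

    M = P.copy()
    for i in range(3):                 # exchange rows (1,2), then (2,3), then (3,4)
        M[[i, i + 1]] = M[[i + 1, i]]

    assert np.allclose(M, np.eye(4))   # three exchanges produce I, so P is nonsingular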



Linear Algebra and Its Applications, Exercise 1.5.15

Exercise 1.5.15. Given the matrices

A = \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 2&3&4 \end{bmatrix} and A = \begin{bmatrix} 1&2&1 \\ 2&4&2 \\ 1&1&1 \end{bmatrix}

find their factors L, D, and U and associated permutation matrix P such that PA = LDU. Confirm that the factors are correct.

Answer: In the first step of elimination for the first matrix we must exchange rows 1 and 2, multiplying by the permutation matrix P_{12}. The intermediate matrices for U, L, and P are as shown:

\begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 2&3&4 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ ?&1&0 \\ ?&?&1 \end{bmatrix} \quad \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}

There is already a zero in the (2,1) position, so the corresponding multiplier is zero. We next subtract 2 times row 1 from row 3:

\begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&3&2 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&?&1 \end{bmatrix} \quad \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}

and then subtract 3 times row 2 from row 3:

\begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&-1 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&3&1 \end{bmatrix} \quad \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}

At this point elimination is complete and we have our final L and P:

\begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&-1 \end{bmatrix} \quad L = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&3&1 \end{bmatrix} \quad P = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}

We then need to determine D and U. We divide each row of the first matrix by the diagonal entry in that row to produce U, with the divisor then going into D:

U = \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&1 \end{bmatrix} \quad D = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&-1 \end{bmatrix} \quad L = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&3&1 \end{bmatrix} \quad P = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}

Finally, to confirm the factors are correct we compute LDU and PA:

PA = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 2&3&4 \end{bmatrix} = \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 2&3&4 \end{bmatrix}

LDU = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&3&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&-1 \end{bmatrix} \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&3&1 \end{bmatrix} \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&-1 \end{bmatrix}

= \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 2&3&4 \end{bmatrix} = PA

We now turn to the second matrix:

A = \begin{bmatrix} 1&2&1 \\ 2&4&2 \\ 1&1&1 \end{bmatrix}

As the first step of elimination we multiply row 1 by 2 and subtract it from row 2, producing the following intermediate matrices (which will become U and L respectively):

\begin{bmatrix} 1&2&1 \\ 0&0&0 \\ 1&1&1 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 2&1&0 \\ ?&?&1 \end{bmatrix}

Since elimination produced a zero in the pivot position, we need to exchange rows 2 and 3 using the permutation matrix P_{23}, and readjust the intermediate L to reflect the exchange:

\begin{bmatrix} 1&2&1 \\ 1&1&1 \\ 0&0&0 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ ?&1&0 \\ 2&?&1 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}

We then subtract 1 times row 1 from (the new) row 2:

\begin{bmatrix} 1&2&1 \\ 0&-1&0 \\ 0&0&0 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&?&1 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}

At this point elimination is complete and we have our final L and P respectively:

\begin{bmatrix} 1&2&1 \\ 0&-1&0 \\ 0&0&0 \end{bmatrix} \quad L = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix} \quad P = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}

We then need to determine D and U. We divide each row of the first matrix by the diagonal entry in that row (if it’s nonzero) to produce U, with the divisor then going into D:

U = \begin{bmatrix} 1&2&1 \\ 0&1&0 \\ 0&0&0 \end{bmatrix} \quad D = \begin{bmatrix} 1&0&0 \\ 0&-1&0 \\ 0&0&1 \end{bmatrix} \quad L = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix} \quad P = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}

We then compute PA and LDU to confirm the factorization is correct:

PA = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 1&2&1 \\ 2&4&2 \\ 1&1&1 \end{bmatrix} = \begin{bmatrix} 1&2&1 \\ 1&1&1 \\ 2&4&2 \end{bmatrix}

LDU = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&-1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 1&2&1 \\ 0&1&0 \\ 0&0&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix} \begin{bmatrix} 1&2&1 \\ 0&-1&0 \\ 0&0&0 \end{bmatrix}

= \begin{bmatrix} 1&2&1 \\ 1&1&1 \\ 2&4&2 \end{bmatrix} = PA
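Both factorizations can also be checked numerically; the sketch below (assuming NumPy; the helper check is just for illustration) confirms PA = LDU for each matrix:

    import numpy as np

    def check(A, P, L, D, U):
        # Confirm the factorization PA = LDU.
        return np.allclose(P @ A, L @ D @ U)

    A1 = np.array([[0, 1, 1], [1, 0, 1], [2, 3, 4]], dtype=float)
    P1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    L1 = np.array([[1, 0, 0], [0, 1, 0], [2, 3, 1]], dtype=float)
    D1 = np.diag([1.0, 1.0, -1.0])
    U1 = np.array([[1, 0, 1], [0, 1, 1], [0, 0, 1]], dtype=float)

    A2 = np.array([[1, 2, 1], [2, 4, 2], [1, 1, 1]], dtype=float)
    P2 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)
    L2 = np.array([[1, 0, 0], [1, 1, 0], [2, 0, 1]], dtype=float)
    D2 = np.diag([1.0, -1.0, 1.0])
    U2 = np.array([[1, 2, 1], [0, 1, 0], [0, 0, 0]], dtype=float)

    assert check(A1, P1, L1, D1, U1)
    assert check(A2, P2, L2, D2, U2)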



Linear Algebra and Its Applications, Exercise 1.5.14

Exercise 1.5.14. Find all possible 3 by 3 permutation matrices, along with their inverses.

Answer: The identity matrix I is the first possible permutation matrix, corresponding to not doing a row exchange at all; it is its own inverse:

I = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix}

I \cdot I = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = I

The permutation matrix P_{12} exchanges rows 1 and 2:

P_{12} = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}

A second exchange of rows 1 and 2 returns both rows to their original position, so P_{12} is its own inverse:

P_{12}P_{12} = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = I

The permutation matrix P_{13} exchanges rows 1 and 3:

P_{13} = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix}

A second exchange of rows 1 and 3 returns both rows to their original position, so P_{13} is also its own inverse:

P_{13} P_{13} = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = I

The permutation matrix P_{23} exchanges rows 2 and 3:

P_{23} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}

As with P_{12} and P_{13} (and for similar reasons) P_{23} is also its own inverse:

P_{23} P_{23} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = I

We can generate a fifth permutation matrix by exchanging rows 1 and 3 and then exchanging rows 2 and 3; this is equivalent to multiplying P_{23} by P_{13}:

P_5 = P_{23} P_{13} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix}

(Note that the order of multiplication is important here; since by convention we apply permutation matrices to the left, we put P_{13} on the right and then multiply it by P_{23} on the left.) The resulting matrix sends row 1 to row 2, row 2 to row 3, and row 3 to row 1.

We can generate a sixth permutation matrix by reversing the exchanges used in creating the previous permutation matrix; in other words, we first exchange rows 2 and 3, and then subsequently exchange rows 1 and 3. This is equivalent to multiplying P_{13} by P_{23}:

P_6 = P_{13} P_{23} = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix}

The resulting matrix sends row 1 to row 3, row 2 to row 1, and row 3 to row 2, reversing the effect of the fifth permutation matrix. The fifth and sixth matrices are therefore inverses of each other:

P_5 P_6 = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = I

P_6 P_5 = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = I

In permuting the rows of a 3 by 3 matrix, we have three possible choices for row 1 of the permuted matrix. (In other words, row 1 of the permuted matrix could be set to row 1, row 2, or row 3 of the original matrix.) Having made that choice, we have two choices remaining for row 2 of the permuted matrix. (For example, if we set row 1 of the permuted matrix to be row 3 of the original matrix, then row 2 of the permuted matrix could be set to either row 1 or row 2 of the original matrix.)

Having made the first two choices, there is only one choice remaining for row 3 of the permuted matrix. (For example, if we set row 1 of the permuted matrix to be row 3 of the original matrix and row 2 of the permuted matrix to be row 1 of the original matrix, then row 3 of the permuted matrix must be set to row 2 of the original matrix.) There are thus 3 \cdot 2 \cdot 1 = 6 possible ways to permute the rows of a 3 by 3 matrix. (This is a special case of the general result that there are n(n-1)(n-2) \cdots 2 \cdot 1 = n! ways to permute n items.)

We have found six 3 by 3 permutation matrices:

I = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} \quad P_{12} = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \quad P_{13} = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix}

P_{23} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \quad P_5 = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} \quad P_6 = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix}

These six matrices therefore constitute the complete set of 3 by 3 permutation matrices.
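The counting argument above is easy to verify by brute force. Here is a short sketch (assuming NumPy and Python's standard itertools module) that builds one permutation matrix per permutation of the three rows and confirms there are exactly six:

    import itertools
    import numpy as np

    # Each permutation of (0, 1, 2) reorders the rows of I in one of the 3! ways.
    perms = [np.eye(3)[list(p)] for p in itertools.permutations(range(3))]
    print(len(perms))   # 6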

Extra credit: From above we have six 3 by 3 permutation matrices: I, P_{12}, P_{13}, P_{23}, P_5 (equal to P_{23}P_{13}), and P_6 (equal to P_{13}P_{23}). There are 36 possible products of the six matrices (six possible choices for the left factor, and six for the right). We already know that IA = AI = A for any matrix A. We also know that any of P_{12}, P_{13}, and P_{23} times itself equals I, since each of these three matrices is its own inverse. Finally, we know that P_{23}P_{13} equals P_5, P_{13}P_{23} equals P_6, and P_5 P_6 equals P_6 P_5 equals I.

What about other products? We have

P_{12} P_{13} = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} = P_6

and

P_{13} P_{12} = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} = P_5

We also have

P_{12} P_{23} = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} = P_5

and

P_{23} P_{12} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} = P_6

We then have

P_{13} P_{23} = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} = P_6

and

P_{23} P_{13} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} = P_5

We next find the products of P_5 and P_6 with P_{12}, P_{13}, and P_{23} respectively. We start by multiplying by P_5 on the left:

P_5 P_{12} = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} = P_{13}

P_5 P_{13} = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} = P_{23}

P_5 P_{23} = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} = P_{12}

and then on the right:

P_{12} P_5 = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} = P_{23}

P_{13} P_5 = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} = P_{12}

P_{23} P_5 = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} = P_{13}

We can then use the results above to compute products involving P_6 by taking advantage of the fact that P_6 is the inverse of P_5:

P_5 P_{12} = P_{13} \Rightarrow P_6 P_5 P_{12} = P_6 P_{13} \Rightarrow I P_{12} = P_6 P_{13} \Rightarrow P_6 P_{13} = P_{12}
P_5 P_{13} = P_{23} \Rightarrow P_6 P_5 P_{13} = P_6 P_{23} \Rightarrow I P_{13} = P_6 P_{23} \Rightarrow P_6 P_{23} = P_{13}
P_5 P_{23} = P_{12} \Rightarrow P_6 P_5 P_{23} = P_6 P_{12} \Rightarrow I P_{23} = P_6 P_{12} \Rightarrow P_6 P_{12} = P_{23}
P_{12} P_5 = P_{23} \Rightarrow P_{12} P_5 P_6 = P_{23} P_6 \Rightarrow P_{12} I = P_{23} P_6 \Rightarrow P_{23} P_6 = P_{12}
P_{13} P_5 = P_{12} \Rightarrow P_{13} P_5 P_6 = P_{12} P_6 \Rightarrow P_{13} I = P_{12} P_6 \Rightarrow P_{12} P_6 = P_{13}
P_{23} P_5 = P_{13} \Rightarrow P_{23} P_5 P_6 = P_{13} P_6 \Rightarrow P_{23} I = P_{13} P_6 \Rightarrow P_{13} P_6 = P_{23}

Finally, we multiply P_5 and P_6 by themselves:

P_5 P_5 = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} = P_6

P_6 P_6 = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} = P_5

So the product of any two of the six 3 by 3 permutation matrices is itself one of the six permutation matrices. The following multiplication table summarizes the results above:

\setlength\arraycolsep{0.2em}\begin{array}{cccccccc}\times&\vline& I&P_{12}&P_{13}&P_{23}&P_5&P_6 \\ \hline I&\vline&I&P_{12}&P_{13}&P_{23}&P_5&P_6 \\ P_{12}&\vline&P_{12}&I&P_6&P_5&P_{23}&P_{13} \\ P_{13}&\vline&P_{13}&P_5&I&P_6&P_{12}&P_{23} \\ P_{23}&\vline&P_{23}&P_6&P_5&I&P_{13}&P_{12} \\ P_5&\vline&P_5&P_{13}&P_{23}&P_{12}&P_6&I \\ P_6&\vline&P_6&P_{23}&P_{12}&P_{13}&I&P_5 \end{array}
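The closure shown in this table can also be verified exhaustively with a short sketch (again assuming NumPy and itertools): every one of the 36 products is again one of the six permutation matrices.

    import itertools
    import numpy as np

    perms = [np.eye(3)[list(p)] for p in itertools.permutations(range(3))]

    for P in perms:
        for Q in perms:
            # Each product of two permutation matrices must itself be one of the six.
            assert any(np.allclose(P @ Q, R) for R in perms)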



Linear Algebra and Its Applications, Exercise 1.5.13

Exercise 1.5.13. Given the systems of equations

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}u&+&4v&+&2w&=&-2 \\ -2u&-&8v&+&3w&=&32 \\ &&v&+&w&=&1 \end{array}

and

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}&&v&+&w&=&0 \\ u&+&v&&&=&0 \\ u&+&v&+&w&=&1 \end{array}

solve both by elimination. Do row exchanges where necessary, and specify any permutation matrices required.

Answer: The first system of equations can be expressed as follows:

\begin{bmatrix} 1&4&2 \\ -2&-8&3 \\ 0&1&1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} -2 \\ 32 \\ 1 \end{bmatrix}

In the first step of elimination we subtract -2 times row 1 from row 2:

\begin{bmatrix} 1&4&2 \\ 0&0&7 \\ 0&1&1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} -2 \\ 28 \\ 1 \end{bmatrix}

We then need to exchange rows 2 and 3, so multiply both sides by the appropriate permutation matrix:

\begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 1&4&2 \\ 0&0&7 \\ 0&1&1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} -2 \\ 28 \\ 1 \end{bmatrix}

which produces the following:

\begin{bmatrix} 1&4&2 \\ 0&1&1 \\ 0&0&7 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \\ 28 \end{bmatrix}

We can then solve for u, v, and w:

7w = 28 \Rightarrow w = 4
v + w = 1 \Rightarrow v = -3
u + 4v + 2w = -2 \Rightarrow u - 12 + 8 = -2 \Rightarrow u = 2

So the solution is (2, -3, 4).

The second system of equations can be expressed as follows:

\begin{bmatrix} 0&1&1 \\ 1&1&0 \\ 1&1&1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

Since we have a zero pivot in row 1, we must immediately do a row exchange:

\begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 0&1&1 \\ 1&1&0 \\ 1&1&1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

which produces

\begin{bmatrix} 1&1&0 \\ 0&1&1 \\ 1&1&1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

We can then multiply row 1 by 1 and subtract from row 3:

\begin{bmatrix} 1&1&0 \\ 0&1&1 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

This completes elimination, so we solve for u, v, and w:

w = 1
v + w = 0 \Rightarrow v = -1
u + v = 0 \Rightarrow u = 1

So the solution is (1, -1, 1).
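As a sanity check, both solutions can be confirmed directly with a sketch like the following (assuming NumPy, whose solver performs its own row exchanges internally):

    import numpy as np

    A1 = np.array([[1, 4, 2], [-2, -8, 3], [0, 1, 1]], dtype=float)
    b1 = np.array([-2.0, 32.0, 1.0])
    print(np.linalg.solve(A1, b1))   # [ 2. -3.  4.]

    A2 = np.array([[0, 1, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
    b2 = np.array([0.0, 0.0, 1.0])
    print(np.linalg.solve(A2, b2))   # [ 1. -1.  1.]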



Linear Algebra and Its Applications, Exercise 1.5.12

Exercise 1.5.12. Could A be factored into the product UL where U is upper triangular and L is lower triangular, instead of being factored into the product LU? If so, how could this other factorization be carried out? Would U and L be the same in both cases?

Answer: The matrix A can be factored into UL by doing elimination in reverse: starting at row n of A and working upward through rows n-1, n-2, and so on, up to row 1, we produce zeros above the diagonal, starting with column n and working down to column 2. For example, in the first steps of such elimination a multiple of row n would be subtracted from row n-1 in order to produce a zero in position (n-1,n); a multiple of row n would then be subtracted from row n-2 in order to produce a zero in position (n-2,n); and so on. This process produces a lower triangular matrix.

The multipliers from the reverse elimination steps are used to create an upper triangular matrix, in a manner analogous to regular elimination: when subtracting a multiple of row n from row n-1 we put the multiplier in position (n-1,n) of the upper triangular matrix; when subtracting a multiple of row n from row n-2 we put the multiplier in position (n-2,n) of the upper triangular matrix; and so on. The upper triangular matrix has ones on the diagonal.
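Before turning to an example, here is a sketch of how this reverse elimination might be implemented (assuming NumPy, and assuming no zero pivots arise so that no row exchanges are needed; the function ul_factor is mine, not a library routine):

    import numpy as np

    def ul_factor(A):
        # Factor A = UL by eliminating entries above the diagonal, working
        # from the last column back to column 2 (0-based: n-1 down to 1).
        # U collects the multipliers and has ones on its diagonal; L is the
        # lower triangular matrix left over when elimination finishes.
        n = A.shape[0]
        L = A.astype(float).copy()
        U = np.eye(n)
        for j in range(n - 1, 0, -1):        # pivot row j clears column j above it
            for i in range(j):               # rows above the pivot row
                m = L[i, j] / L[j, j]        # multiplier that zeros the (i, j) entry
                L[i, :] -= m * L[j, :]
                U[i, j] = m                  # record the multiplier in U
        return U, L

    A = np.array([[2, 1, 1], [4, -6, 0], [-2, 7, 2]], dtype=float)
    U, L = ul_factor(A)
    assert np.allclose(U @ L, A)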

As an example, consider the matrix defined at the beginning of section 1.5:

A = \begin{bmatrix} 2&1&1 \\ 4&-6&0 \\ -2&7&2 \end{bmatrix}

The reverse elimination steps would go as follows. At each step we construct the upper triangular matrix using the multipliers used in the corresponding elimination step.

The first step is null because the entry in the (2,3) position of A is already zero. The intermediate matrices are as follows, with the upper triangular matrix having a zero in the (2,3) position and the other entries as yet undetermined:

\begin{bmatrix} 2&1&1 \\ 4&-6&0 \\ -2&7&2 \end{bmatrix} \quad \begin{bmatrix} 1&?&? \\ 0&1&0 \\ 0&0&1 \end{bmatrix}

For the second step we subtract \frac{1}{2} times row 3 from row 1, and put the multiplier \frac{1}{2} in the (1,3) position of the upper triangular matrix:

\begin{bmatrix} 3&-\frac{5}{2}&0 \\ 4&-6&0 \\ -2&7&2 \end{bmatrix} \quad \begin{bmatrix} 1&?&\frac{1}{2} \\ 0&1&0 \\ 0&0&1 \end{bmatrix}

For the third step we subtract \frac{5}{12} times row 2 from row 1, and put the multiplier \frac{5}{12} in the (1,2) position of the upper triangular matrix. This completes elimination and produces the final lower triangular and upper triangular matrices L and U respectively:

L = \begin{bmatrix} \frac{4}{3}&0&0 \\ 4&-6&0 \\ -2&7&2 \end{bmatrix} \quad U = \begin{bmatrix} 1&\frac{5}{12}&\frac{1}{2} \\ 0&1&0 \\ 0&0&1 \end{bmatrix}

We now compute the product UL:

UL = \begin{bmatrix} 1&\frac{5}{12}&\frac{1}{2} \\ 0&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} \frac{4}{3}&0&0 \\ 4&-6&0 \\ -2&7&2 \end{bmatrix} = \begin{bmatrix} 2&1&1 \\ 4&-6&0 \\ -2&7&2 \end{bmatrix} = A

So we have found U and L such that A = UL. Note that if we reverse the order of the factors we have

LU = \begin{bmatrix} \frac{4}{3}&0&0 \\ 4&-6&0 \\ -2&7&2 \end{bmatrix} \begin{bmatrix} 1&\frac{5}{12}&\frac{1}{2} \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = \begin{bmatrix} \frac{4}{3}&\frac{5}{9}&\frac{2}{3} \\ 4&-\frac{13}{3}&2 \\ -2&\frac{37}{6}&1 \end{bmatrix} \ne A

So if A = UL it does not necessarily follow that A = LU, or vice versa—and indeed we would not expect this in general, since matrix multiplication is not commutative.

Extra credit: The matrices L and U found via reverse elimination are not exactly in the form we have otherwise been using, since L does not have diagonal entries equal to 1. We can transform L to the proper form by factoring it into a diagonal matrix D times a new lower triangular matrix. We can then multiply the matrix U by D to obtain a new upper triangular matrix.

In our example we have

L = \begin{bmatrix} \frac{4}{3}&0&0 \\ 4&-6&0 \\ -2&7&2 \end{bmatrix} = \begin{bmatrix} \frac{4}{3}&& \\ &-6& \\ &&2 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ -\frac{2}{3}&1&0 \\ -1&\frac{7}{2}&1 \end{bmatrix} = DL'

so that

UL = U(DL') = (UD)L' = \begin{bmatrix} 1&\frac{5}{12}&\frac{1}{2} \\ 0&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} \frac{4}{3}&& \\ &-6& \\ &&2 \end{bmatrix} L'
\quad = \begin{bmatrix} \frac{4}{3}&-\frac{5}{2}&1 \\ 0&-6&0 \\ 0&0&2 \end{bmatrix} L' = U'L'

and

U'L' = \begin{bmatrix} \frac{4}{3}&-\frac{5}{2}&1 \\ 0&-6&0 \\ 0&0&2 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ -\frac{2}{3}&1&0 \\ -1&\frac{7}{2}&1 \end{bmatrix} = \begin{bmatrix} 2&1&1 \\ 4&-6&0 \\ -2&7&2 \end{bmatrix} = A

Thus U'L' is another factoring of A, this time into an upper triangular matrix and a lower triangular matrix with ones on the diagonal. Again if we reverse the order of multiplication then the product L'U' is no longer a factoring:

L'U' = \begin{bmatrix} 1&0&0 \\ -\frac{2}{3}&1&0 \\ -1&\frac{7}{2}&1 \end{bmatrix} \begin{bmatrix} \frac{4}{3}&-\frac{5}{2}&1 \\ 0&-6&0 \\ 0&0&2 \end{bmatrix} = \begin{bmatrix} \frac{4}{3}&-\frac{5}{2}&1 \\ -\frac{8}{9}&-\frac{13}{3}&-\frac{2}{3} \\ -\frac{4}{3}&-\frac{37}{2}&1 \end{bmatrix} \ne A



Linear Algebra and Its Applications, Exercise 1.5.11

Exercise 1.5.11. We have a system LUx = b with values for L, U, and b as follows:

\begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 1&0&1 \end{bmatrix} \begin{bmatrix} 2&4&4 \\ 0&1&2 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \\ 2 \end{bmatrix}

Solve for x without multiplying L and U to find A.

Answer: We can take advantage of the equations Lc = b and Ux = c to first solve for c and then for x. From Lc = b we have

\begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 1&0&1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \\ 2 \end{bmatrix}

Solving for c we have

c_1 = 2
c_1+c_2 = 0 \Rightarrow c_2 = -2
c_1+c_3 = 2 \Rightarrow c_3 = 0

From Ux = c we then have

\begin{bmatrix} 2&4&4 \\ 0&1&2 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 2 \\ -2 \\ 0 \end{bmatrix}

Solving for u, v, and w we have

w = 0
v+2w = -2 \Rightarrow v = -2
2u+4v+4w = 2 \Rightarrow 2u - 8 = 2 \Rightarrow u = 5

and the solution to the system above is

x =\begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 5 \\ -2 \\ 0 \end{bmatrix}
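The two triangular solves are also directly available in SciPy; here is a sketch reproducing the computation (assuming SciPy is installed):

    import numpy as np
    from scipy.linalg import solve_triangular

    L = np.array([[1, 0, 0], [1, 1, 0], [1, 0, 1]], dtype=float)
    U = np.array([[2, 4, 4], [0, 1, 2], [0, 0, 1]], dtype=float)
    b = np.array([2.0, 0.0, 2.0])

    c = solve_triangular(L, b, lower=True)   # forward substitution for Lc = b
    x = solve_triangular(U, c)               # back substitution for Ux = c
    print(c)   # [ 2. -2.  0.]
    print(x)   # [ 5. -2.  0.]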



Linear Algebra and Its Applications, Exercise 1.5.10

Exercise 1.5.10. (a) Both Lc = b and Ux = c take approximately n^2/2 multiplication-subtraction steps to solve. Explain why.

(b) Assume A is a 60 by 60 coefficient matrix. How many steps are required to use elimination to solve ten systems involving A?

Answer: (a) L is lower triangular, so we have

b = Lc \Rightarrow \begin{bmatrix} 1&&& \\ l_{21}&1&& \\ \vdots&\vdots&\ddots& \\ l_{n1}&l_{n2}&\cdots&1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}

The first entry c_1 can be found in a single operation, while solving for c_2 takes two operations, solving for c_3 three operations, and so on until solving for c_n takes n operations.

The total number of operations is then

1 + 2 + 3 + \cdots + n = n(n+1)/2 \approx n^2/2

We have a similar situation with Ux = c, since U is upper triangular:

Ux = c \Rightarrow \begin{bmatrix} u_{11}&u_{12}&\cdots&u_{1n} \\ &u_{22}&\cdots&u_{2n} \\ &&\ddots&\vdots \\ &&&u_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}

Here solving for x_n takes one step, solving for x_{n-1} two steps, and so on until solving for x_1 takes n steps. Again the total number of steps is n(n+1)/2 \approx n^2/2.

(b) As discussed on p. 15, doing elimination on an arbitrary n by n matrix takes approximately n^3/3 steps for large n. As a result of performing elimination on A we obtain its two factors L and U. Given the particular system Ax = b we can use Lc = b to solve for c in approximately n^2/2 steps, and can then use Ux = c to solve for x, also in approximately n^2/2 steps, for a total of n^2 steps in all.

For ten systems with the same 60 by 60 matrix the total number of steps is then approximately

n^3/3 + 10n^2 = 60^3/3 + 10 \cdot 60^2 = (60 \cdot 60^2)/3 + 10 \cdot 60^2
= 20 \cdot 60^2 + 10 \cdot 60^2 = 30 \cdot 60^2 = 30 \cdot 3,600 = 108,000
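The arithmetic is simple enough to check in a couple of lines of plain Python (a sketch; the counts are the approximations n^3/3 and n^2 used above):

    n, num_systems = 60, 10

    factor_cost = n**3 // 3          # one-time elimination producing L and U
    solve_cost = num_systems * n**2  # n^2/2 for Lc = b plus n^2/2 for Ux = c, per system
    print(factor_cost + solve_cost)  # 108000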

