## Linear Algebra and Its Applications, Exercise 1.5.15

Exercise 1.5.15. Given the matrices $A = \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 2&3&4 \end{bmatrix}$ and (taken separately) $A = \begin{bmatrix} 1&2&1 \\ 2&4&2 \\ 1&1&1 \end{bmatrix}$

find their factors L, D, and U and the associated permutation matrix P for each, such that $PA = LDU$. Confirm that the factors are correct.

Answer: In the first step of elimination for the first matrix, the (1,1) entry is zero and cannot serve as a pivot, so we must exchange rows 1 and 2 by multiplying by the permutation matrix $P_{12}$. The intermediate matrices for U, L, and P are as shown: $\begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 2&3&4 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ ?&1&0 \\ ?&?&1 \end{bmatrix} \quad \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}$

There is already a zero in the (2,1) position, so the corresponding multiplier is zero. We next subtract 2 times row 1 from row 3: $\begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&3&2 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&?&1 \end{bmatrix} \quad \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}$

and then subtract 3 times row 2 from row 3: $\begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&-1 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&3&1 \end{bmatrix} \quad \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}$

At this point elimination is complete and we have our final L and P: $\begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&-1 \end{bmatrix} \quad L = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&3&1 \end{bmatrix} \quad P = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}$

We then need to determine D and U. We divide each row of the first matrix by the diagonal entry in that row to produce U, with the divisor then going into D: $U = \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&1 \end{bmatrix} \quad D = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&-1 \end{bmatrix} \quad L = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&3&1 \end{bmatrix} \quad P = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}$
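This row-by-pivot split can also be checked numerically. The following NumPy sketch (not part of the original exercise) divides each row of the eliminated matrix by its pivot:

```python
import numpy as np

# Eliminated matrix from the steps above; its diagonal holds the pivots 1, 1, -1.
DU = np.array([[1., 0., 1.],
               [0., 1., 1.],
               [0., 0., -1.]])
pivots = np.diag(DU)         # extract the pivots from the diagonal
D = np.diag(pivots)          # the pivots go into the diagonal matrix D
U = DU / pivots[:, None]     # divide row i by pivot i, leaving 1s on U's diagonal
assert np.allclose(D @ U, DU)
assert np.array_equal(np.diag(U), [1., 1., 1.])
```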

Finally, to confirm the factors are correct we compute LDU and PA: $PA = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 2&3&4 \end{bmatrix} = \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 2&3&4 \end{bmatrix}$ $LDU = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&3&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&-1 \end{bmatrix} \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 2&3&1 \end{bmatrix} \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&-1 \end{bmatrix}$ $= \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 2&3&4 \end{bmatrix} = PA$
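As a sanity check, the same confirmation can be done with NumPy (a quick sketch, not from the book):

```python
import numpy as np

# First matrix and the factors found above.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [2, 3, 4]])
P = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])
L = np.array([[1, 0, 0],
              [0, 1, 0],
              [2, 3, 1]])
D = np.diag([1, 1, -1])
U = np.array([[1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]])
assert np.array_equal(P @ A, L @ D @ U)  # PA = LDU
```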

We now turn to the second matrix: $A = \begin{bmatrix} 1&2&1 \\ 2&4&2 \\ 1&1&1 \end{bmatrix}$

As the first step of elimination we subtract 2 times row 1 from row 2, producing the following intermediate matrices (which will become U and L respectively): $\begin{bmatrix} 1&2&1 \\ 0&0&0 \\ 1&1&1 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 2&1&0 \\ ?&?&1 \end{bmatrix}$

Since elimination produced a zero in the pivot position, we need to exchange rows 2 and 3 using the permutation matrix $P_{23}$, and readjust the intermediate L to reflect the exchange. (When rows are exchanged during elimination, the multipliers already recorded in those rows of L, below the diagonal, are exchanged too; here the multiplier 2 moves from the (2,1) position to the (3,1) position.) The updated matrices are: $\begin{bmatrix} 1&2&1 \\ 1&1&1 \\ 0&0&0 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ ?&1&0 \\ 2&?&1 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}$

We then subtract 1 times row 1 from (the new) row 2: $\begin{bmatrix} 1&2&1 \\ 0&-1&0 \\ 0&0&0 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&?&1 \end{bmatrix} \quad \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}$

At this point elimination is complete and we have our final L and P respectively: $\begin{bmatrix} 1&2&1 \\ 0&-1&0 \\ 0&0&0 \end{bmatrix} \quad L = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix} \quad P = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}$

We then need to determine D and U. We divide each row of the first matrix by the diagonal entry in that row (if it’s nonzero) to produce U, with the divisor then going into D. Since the third pivot is zero, row 3 of U remains all zeros and the (3,3) entry of D is simply set to 1: $U = \begin{bmatrix} 1&2&1 \\ 0&1&0 \\ 0&0&0 \end{bmatrix} \quad D = \begin{bmatrix} 1&0&0 \\ 0&-1&0 \\ 0&0&1 \end{bmatrix} \quad L = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix} \quad P = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}$

We then compute PA and LDU to confirm the factorization is correct: $PA = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 1&2&1 \\ 2&4&2 \\ 1&1&1 \end{bmatrix} = \begin{bmatrix} 1&2&1 \\ 1&1&1 \\ 2&4&2 \end{bmatrix}$ $LDU = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&-1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 1&2&1 \\ 0&1&0 \\ 0&0&0 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 2&0&1 \end{bmatrix} \begin{bmatrix} 1&2&1 \\ 0&-1&0 \\ 0&0&0 \end{bmatrix}$ $= \begin{bmatrix} 1&2&1 \\ 1&1&1 \\ 2&4&2 \end{bmatrix} = PA$
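The same NumPy check (again just a sketch, not from the book) confirms the second factorization:

```python
import numpy as np

# Second matrix and the factors found above.
A = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 1, 1]])
P = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]])
L = np.array([[1, 0, 0],
              [1, 1, 0],
              [2, 0, 1]])
D = np.diag([1, -1, 1])
U = np.array([[1, 2, 1],
              [0, 1, 0],
              [0, 0, 0]])
assert np.array_equal(P @ A, L @ D @ U)  # PA = LDU even with the zero third pivot
```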

NOTE: This continues a series of posts containing worked-out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr. Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr. Strang’s other books.

This entry was posted in linear algebra.

### 9 Responses to Linear Algebra and Its Applications, Exercise 1.5.15

1. Takashi Furuya says:

I find the part “readjust the intermediate L to reflect the exchange” unclear. If there was a need to exchange rows in the middle of performing elimination on some arbitrary matrix, what exactly is done to L to reflect this exchange?

• hecker says:

Thanks for your comment. You’re correct, I need to make this more clear. I’ll update the post when I have time.

2. Daniyar Yeralin says:

You have a mistake in your first part. Your U matrix is [ 1 0 1; 0 1 1; 0 0 1], but it should be U= [ 1 0 1; 0 1 1; 0 0 -1]

• Daniyar Yeralin says:

Ohhh no, it is all correct. Without the diagonal matrix, the upper triangular matrix is U = [ 1 0 1; 0 1 1; 0 0 -1 ]

• hecker says:

Thank you for reading this blog! Yes, if I remember correctly (it’s been a long time since I wrote this), in the LDU factoring the upper triangular matrix U is supposed to have 1 for all the diagonal entries. That’s why D is calculated the way it is.

3. a0617 says:

Can you tell me how to find the inverse of L for the first matrix A?

• hecker says:

You need to use Gauss-Jordan elimination to find the inverse of a matrix like L. I have examples of how to do this in the answers to exercise 1.6.6, review exercise 1.10, and especially 1.6.22 (which is for a lower triangular matrix like L).

Note that we’re guaranteed that L is invertible because it’s a lower triangular matrix with 1s on the diagonal. For a proof of this see the last part of the answer to exercise 1.27.
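For readers who want to check their hand computation, NumPy can invert L directly. This is only a sketch to verify the Gauss-Jordan result (L here is the factor from the first matrix in the post above):

```python
import numpy as np

# L from the first factorization in the post above.
L = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [2., 3., 1.]])
L_inv = np.linalg.inv(L)
# For this unit lower triangular L the inverse simply negates the multipliers.
assert np.allclose(L_inv, [[1, 0, 0], [0, 1, 0], [-2, -3, 1]])
assert np.allclose(L @ L_inv, np.eye(3))
```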

4. a0617 says:

Thank you. By the way, I’m still having some problems with the questions below, and I couldn’t find the answers in your posts. I’d appreciate it if you could show me the answers to these questions, which are all from Prof. G. Strang’s book.

Q1 Starting from a 3 by 3 matrix A with pivots 2, 7, 6, add a fourth row and column to produce M. What are the first three pivots for M, and why? What fourth row and column are sure to produce 9 as the fourth pivot?

Q2 Compare and discuss the operations (division or multiplication-subtraction) required for solving Ax=b by Gaussian elimination and back substitution and for solving Ax=b by factorizing A into LU and solving Lc=b and Ux=c.

Q3 Find a 3 by 3 permutation matrix with P^3 = I (but P not equal to I). Find a 4 by 4 permutation matrix P with P^4 not equal to I.

Q4
(a) Show that A (square matrix with no row exchanges) and A^T share the same pivots.
(b) If A is invertible and symmetric, what is the transpose of A^-1?
(c) Suppose R is rectangular (m by n) and A is symmetric (m by m), show that R^TAR is

Q5
If A is invertible, which properties of A remain true for A^-1?
(a) A is triangular. (b) A is symmetric. (c) A is tridiagonal. (d) All entries are whole numbers. (e) All entries are fractions (including numbers like 3/1).

• hecker says:

a0617: Are these questions from a different edition of Strang’s book? Some of these seem similar to questions in the Third Edition that I’ve already posted answers for. Here are some quick hints for the questions you posted:

Q1. I’m assuming that the 3×3 matrix A is already in echelon form, and we add a fourth row and column to make M. If the 3×3 matrix A has three pivots then all three rows (and columns) are linearly independent. If you add a fourth row (and column) to make a 4×4 matrix M then the fourth row is either a linear combination of the first three rows, or it’s not. If the fourth row is a linear combination of the first three rows then elimination will produce the same pivots in the first three rows and all zeros in the last row. If the fourth row is linearly independent then elimination will produce zeros in the first three columns of the fourth row, and a nonzero pivot in the fourth column of the last row; again the first three pivots will be the same.

As for producing a 9 in the fourth pivot position, suppose you put the value 9 in the fourth row and fourth column, i.e., the (4, 4) position. What other three values would you put in the fourth row and the fourth column to ensure that elimination did not disturb that value of 9? (Hint: The best way is for you not to have to do any elimination at all involving the fourth row, and for elimination not to affect the first three values in the fourth column.)

Q2. This is too complicated to deal with in a short note.

Q3. My answer for exercise 1.5.14 lists all six 3×3 permutation matrices. One of those is I, so that can’t be the answer. Three other permutation matrices are their own inverse, i.e., we have P^2 = I. But if P is not I and P^2 = I then P^3 = P^2 P = IP = P. So P^3 is not equal to I, and these can’t be the answer either. That leaves only two matrices for you to try.

For the 4×4 case try taking the 3×3 matrix that was the answer to the first part of Q3 and make a 4×4 matrix out of it by putting a 1 in the (4, 4) position. Then compute P^4.
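These hints can be verified numerically. The NumPy sketch below assumes the 3×3 cyclic shift as the candidate being tested (it is one of the two matrices the hint leaves); the exercise itself should still be worked by hand:

```python
import numpy as np

# A 3x3 cyclic permutation: not I, but applying it three times returns to I.
P3 = np.array([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]])
assert not np.array_equal(P3, np.eye(3))
assert np.array_equal(np.linalg.matrix_power(P3, 3), np.eye(3))

# Embed it in a 4x4 matrix with a 1 in the (4, 4) position, as suggested.
P4 = np.eye(4, dtype=int)
P4[:3, :3] = P3
# Since P3^4 = P3 != I, the 4x4 matrix satisfies P^4 != I.
assert not np.array_equal(np.linalg.matrix_power(P4, 4), np.eye(4))
```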

Q4. (a) This is related to the fact that if the rows are linearly independent the columns are too, and vice versa. But I don’t have time to look at this more closely.

(b) See part (2) of my answer to exercise 1.6.12.

(c) I presume the goal is to show that R^TAR is symmetric. I don’t have time to look at this.

Q5. See my answer to exercise 1.6.12.