Exercise 1.7.10. When partial pivoting is used show that |ℓ_ij| ≤ 1 for all multipliers ℓ_ij in L. In addition, show that if |a_ij| ≤ 1 for all i and j then after producing zeros in the first column we have |a_ij| ≤ 2, and in general we have |a_ij| ≤ 2^k after producing zeros in column k. Finally, show a 3 by 3 matrix with a_ii = 1 and |a_ij| ≤ 1 for which the last pivot is 4.
Answer: (a) With partial pivoting we choose the pivot for each row and column in turn so that the pivot is the largest (in absolute value) of all the candidate pivots in the same column. Thus |p| ≥ |q| where p is the chosen pivot and q is any of the other candidate pivots in the column. The multipliers are then equal to the candidate pivots divided by the chosen pivot, ℓ = q/p, so that |ℓ| = |q|/|p|. But since |q| ≤ |p| we have |ℓ| ≤ 1.
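To see this numerically, here is a short Python sketch (my own illustration, not from the book) of elimination with partial pivoting that records the multipliers in L and confirms that none exceed 1 in magnitude:

```python
import numpy as np

def lu_partial_pivoting(A):
    """LU with partial pivoting, returning perm, L, U with A[perm] = L @ U.
    A sketch for inspecting the multipliers, not a production routine."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    perm = np.arange(n)
    for k in range(n - 1):
        # Partial pivoting: bring the largest candidate pivot in column k
        # (rows k..n-1) up to the pivot position.
        p = k + np.argmax(np.abs(U[k:, k]))
        U[[k, p]] = U[[p, k]]
        L[[k, p], :k] = L[[p, k], :k]
        perm[[k, p]] = perm[[p, k]]
        for i in range(k + 1, n):
            # Multiplier = candidate pivot / chosen (largest) pivot.
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return perm, L, U

rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, size=(6, 6))
perm, L, U = lu_partial_pivoting(A)
print(np.max(np.abs(L)))   # never exceeds 1
```

Because each multiplier is a candidate pivot divided by the chosen (largest) pivot, every entry of L has absolute value at most 1, whatever matrix you feed in.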
(b) Assume that |a_ij| ≤ 1 for all i and j. Without loss of generality assume that |a_11| ≥ |a_i1| for all i > 1. (If this were not true then we could simply employ partial pivoting and do a row exchange to ensure that this would be the case.) Also assume that a_11 ≠ 0 (i.e., the matrix is not singular). We then have |a_i1/a_11| ≤ 1 for i = 2, …, n and thus |ℓ_i1| ≤ 1 for i = 2, …, n.
Let A′ be the matrix produced after the first stage of elimination. Then for i = 2, …, n we have a′_ij = a_ij − ℓ_i1 a_1j and thus |a′_ij| = |a_ij − ℓ_i1 a_1j| and thus |a′_ij| ≤ |a_ij| + |ℓ_i1||a_1j|. For i = 1 we have a′_1j = a_1j so that |a′_1j| ≤ 1 for all j. But we have |ℓ_i1| ≤ 1 for all i (and thus |ℓ_i1||a_1j| ≤ |a_1j|) and |a_1j| ≤ 1 for all j. Thus for the product we have |ℓ_i1 a_1j| ≤ 1 for all j and for the difference we have |a_ij − ℓ_i1 a_1j| ≤ 2 for all j (with the maximum difference occurring when a_ij = 1 and ℓ_i1 a_1j = −1 or vice versa).
So we have |a′_ij| ≤ 2 for i = 2, …, n and we also have |a′_1j| ≤ 1 ≤ 2 for i = 1. For the matrix A′ after the first stage of elimination we therefore have |a′_ij| ≤ 2 for all i and j.
Assume that after stage k−1 of elimination we have produced the matrix A^(k−1) with |a^(k−1)_ij| ≤ 2^(k−1) for all i and j, and consider stage k of elimination. In this stage our goal is to find an appropriate pivot for column k and produce zeros in column k for all rows below row k. Without loss of generality assume that |a^(k−1)_kk| ≥ |a^(k−1)_ik| for all i > k (otherwise we can do a row exchange as noted above). Also assume that a^(k−1)_kk ≠ 0 (i.e., the matrix is not singular). We then have |a^(k−1)_ik / a^(k−1)_kk| ≤ 1 for i = k+1, …, n and thus |ℓ_ik| ≤ 1 for i = k+1, …, n.
Let A^(k) be the matrix produced after stage k of elimination. Then for i = k+1, …, n we have a^(k)_ij = a^(k−1)_ij − ℓ_ik a^(k−1)_kj and thus |a^(k)_ij| = |a^(k−1)_ij − ℓ_ik a^(k−1)_kj| and thus |a^(k)_ij| ≤ |a^(k−1)_ij| + |ℓ_ik||a^(k−1)_kj|. For i = 1, …, k we have a^(k)_ij = a^(k−1)_ij so that |a^(k)_ij| ≤ 2^(k−1) for all j. But we have |ℓ_ik| ≤ 1 for all i (and thus |ℓ_ik||a^(k−1)_kj| ≤ |a^(k−1)_kj|) and |a^(k−1)_kj| ≤ 2^(k−1) for all j. Thus for the product we have |ℓ_ik a^(k−1)_kj| ≤ 2^(k−1) for all j and for the difference we have |a^(k−1)_ij − ℓ_ik a^(k−1)_kj| ≤ 2^(k−1) + 2^(k−1) = 2^k for all j (with the maximum difference occurring when a^(k−1)_ij = 2^(k−1) and ℓ_ik a^(k−1)_kj = −2^(k−1) or vice versa).
So we have |a^(k)_ij| ≤ 2^k for i = k+1, …, n and we also have |a^(k)_ij| ≤ 2^(k−1) ≤ 2^k for i = 1, …, k. For the matrix A^(k) after stage k of elimination (which produces zeros in column k) we therefore have |a^(k)_ij| ≤ 2^k for all i and j. For k = 1 we also have |a^(1)_ij| ≤ 2 for all i and j after the first stage of elimination (which produces zeros in column 1), as shown above. Therefore by induction, if in the original matrix we have |a_ij| ≤ 1 for all i and j and we do elimination with partial pivoting, then after stage k of elimination (which produces zeros in column k) we will have a matrix for which |a^(k)_ij| ≤ 2^k.
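The bound 2^k is not pessimistic: it is attained by a well-known worst-case matrix for partial pivoting, with 1 on the diagonal, −1 below it, and +1 in the last column. The following Python sketch (helper names are my own, not from the book) runs elimination with partial pivoting on this family and prints the last pivot, which doubles with every stage; for n = 3 the last pivot is 4, so this family also supplies an answer to part (c):

```python
import numpy as np

def growth_matrix(n):
    # 1 on the diagonal, -1 below it, +1 in the last column: the classic
    # worst case for element growth under partial pivoting.
    A = np.eye(n)
    A[np.tril_indices(n, -1)] = -1.0
    A[:, -1] = 1.0
    return A

def last_pivot(A):
    # Plain Gaussian elimination with partial pivoting; returns u_nn.
    U = A.astype(float).copy()
    n = U.shape[0]
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))
        U[[k, p]] = U[[p, k]]
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return U[-1, -1]

for n in (3, 4, 5, 8):
    print(n, last_pivot(growth_matrix(n)))   # last pivot is 2**(n-1)
```

Every stage of elimination adds the previous pivot row into the last column, so the last-column entries double at each stage, exactly matching the 2^k bound proved above.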
(c) One way to construct the requested matrix is to start with the desired end result and work backward to find a matrix for which elimination will produce that result, assuming that the absolute value of the multiplier at each step is 1 (the largest absolute value allowed under our assumption) and that after stage k we have no value larger in absolute value than 2^k.
Thus in the final matrix (after stage 2 of elimination) we assume we have a pivot of 4 in column 3 but as yet unknown values for the other entries. The successive matrices at the various stages of elimination (the original matrix and the matrices after stage 1 and stage 2) would then be as follows:

    | * * * |      | * * * |      | * * * |
    | * * * |  ->  | 0 * * |  ->  | 0 * * |
    | * * * |      | 0 * * |      | 0 0 4 |
We assume a multiplier of ℓ_32 = 1 for the single elimination step in stage 2, and prior to that step (i.e., after stage 1 of elimination) we can have no entry larger in absolute value than 2. The successive matrices at the various stages of elimination would then be as follows:

    | * * * |      | * *  * |      | * *  * |
    | * * * |  ->  | 0 * -2 |  ->  | 0 * -2 |
    | * * * |      | 0 *  2 |      | 0 0  4 |
so that subtracting row 2 from row 3 in stage 2 (i.e., using the multiplier ℓ_32 = 1) would produce the value 2 − (−2) = 4 in the pivot position.
We now need to figure out how to produce a value of 2 in the (3,3) position and a value of −2 in the (2,3) position after stage 1 of elimination. Stage 1 consists of two elimination steps, and we assume a multiplier of 1 or −1 for each step; also, all entries in the original matrix can have an absolute value no greater than 1. The original matrix prior to stage 1 of elimination can then be as follows:

    | * * -1 |
    | * * -1 |
    | * *  1 |
with the multiplier ℓ_21 = −1 and the multiplier ℓ_31 = 1 (i.e., in stage 1 of elimination we are adding row 1 to row 2 and subtracting row 1 from row 3): −1 + (−1) = −2 in the (2,3) position and 1 − (−1) = 2 in the (3,3) position.
Now that we have picked entries for column 3 in the original matrix and suitable multipliers for all elimination steps, we need to pick entries for column 1 and column 2 of the original matrix that are consistent with the chosen multipliers and ensure that elimination will produce nonzero pivots in columns 1 and 2.
We first pick values for the (2,2) and (3,2) entries in the matrix after stage 1 of elimination; since we are using the multiplier ℓ_32 = 1 those entries should be the same in order to produce a zero in the (3,2) position after stage 2. We’ll try picking a value of 1 for both entries:

    | * * -1 |
    | 0 1 -2 |
    | 0 1  2 |
From above we know that in stage 1 of elimination row 1 is added to row 2 (i.e., the multiplier ℓ_21 = −1) and row 1 is subtracted from row 3 (i.e., the multiplier ℓ_31 = 1). Since we have to end up with the same value in the (2,2) and (3,2) positions in either case, the easiest approach is to assume that the (1,2) position in the original matrix has a zero entry, so that adding or subtracting it doesn’t change the preexisting entries for rows 2 and 3:

    | * 0 -1 |
    | * 1 -1 |
    | * 1  1 |
Finally, we need entries in the first column such that adding row 1 to row 2 and subtracting row 1 from row 3 will produce zero:

    |  1 0 -1 |
    | -1 1 -1 |
    |  1 1  1 |
This completes our task. We now have a matrix

    A = |  1 0 -1 |
        | -1 1 -1 |
        |  1 1  1 |

for which a_ii = 1 and |a_ij| ≤ 1 for all i and j, and for which elimination produces a final pivot of 4.
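As a quick check of the construction, the following Python lines (assuming the matrix derived above) replay the three elimination steps and confirm that the pivots are 1, 1, and 4:

```python
import numpy as np

# The matrix constructed above: a_ii = 1 and |a_ij| <= 1.
A = np.array([[ 1.0, 0.0, -1.0],
              [-1.0, 1.0, -1.0],
              [ 1.0, 1.0,  1.0]])

U = A.copy()
U[1] += U[0]   # stage 1: add row 1 to row 2        (multiplier l_21 = -1)
U[2] -= U[0]   # stage 1: subtract row 1 from row 3 (multiplier l_31 =  1)
U[2] -= U[1]   # stage 2: subtract row 2 from row 3 (multiplier l_32 =  1)
print(np.diag(U))   # pivots: 1, 1, 4
```

All three multipliers have absolute value 1 and all original entries have absolute value at most 1, yet the final pivot reaches the maximum possible growth of 2^2 = 4.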
NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.
If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
Hi, it is very nice to see you again. Could you tell me why the number of operations required for factorizing an n by n 5-diagonal band matrix into L and U is proportional to n? How do you prove it? PS: the reference in the 4th edition of G. Strang’s book should be page 69, Figure 1.8.
I don’t have time for a full proof, but here’s a sketch of such a proof: Consider an n by n 5-band diagonal matrix A. Factoring A into L and U is done by employing Gaussian elimination on A, so we need to prove that the number of operations for Gaussian elimination is proportional to n.
Consider the first elimination step (the goal of which is to produce a pivot in column 1), and assume that we don’t have to do any row exchanges. Because A is a 5-diagonal band matrix we know that only the first five rows of A can have non-zero entries in column 1; all the other rows have zero in column 1. So we only have to do elimination for at most four rows (rows 2-5) to find a pivot in column 1.
The first elimination step involves finding a suitable multiplier for row 2, multiplying row 1 by that multiplier and then subtracting the result from row 2. But again because A is a 5-diagonal band matrix we also know that row 1 has at most five nonzero entries, in columns 1-5; the entries in row 1 for columns 6 through n are guaranteed to be zero. So for the first elimination step we have a division (to find the multiplier), 5 multiplications (multiplying the chosen multiplier by the nonzero entries of row 1 in columns 1-5), and 5 subtractions (to subtract the results from row 2), for a total of 11 operations. A similar argument shows that the number of operations for row 3 is 11, and the same for rows 4 and 5.
So producing the pivot in column 1 requires doing elimination on four rows (rows 2 through 5), with 11 operations per row, for a total of 4 x 11 = 44 operations. You also have to produce pivots in columns 2 through n in order to complete Gaussian elimination and produce L and U, so the total number of operations would be 44 x n, and the number of operations is thus proportional to n.
To make this proof truly rigorous you’d have to double-check the number of operations required for subsequent pivots in other columns, show that it never exceeds 44 operations (or thereabouts) to find each pivot, and account for any row exchanges required. For the row exchanges, note that in a computer program a row exchange does NOT need to be done by actually moving entire rows of values around in the in-memory representation of the matrix. Instead you can just keep a separate n-element list showing the order of the rows at each point in the overall elimination process, and do a row exchange by modifying no more than two entries in that list.
The bottom line is that you can show that producing each pivot requires no more than c operations, where c is a constant not dependent on n, so the total number of operations is guaranteed to be no more than c x n.
I thought a 5-diagonal matrix means there are 5 diagonals. Therefore, in column 1, only rows 1 to 3 have nonzero entries.
You’re correct, thanks for catching this error!
However this doesn’t really affect the overall sketch of the proof: Row 1 has only 3 nonzero entries, but row 2 has 4 nonzero entries and row 3 has 5 nonzero entries. So once you get to row 3 / column 3 in the elimination the sketch applies as is. The full proof would need to treat rows 1 and 2 as special cases, along with the last two rows of the matrix. In all these four cases the number of operations would be less than the number required for “regular” rows and columns (3 through n-2), so the overall number of operations is still proportional to n. (Or, more correctly, it is bounded by c times n, where c is some constant.)
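To make the linear operation count concrete, here is a rough Python sketch (my own construction, not from the book; the test matrix is made diagonally dominant so that no row exchanges are needed) that eliminates a 5-diagonal matrix while touching only the entries inside the band, counting each division, multiplication, and subtraction:

```python
import random

def banded_lu_ops(n, half_band=2):
    # Count arithmetic operations for LU (no pivoting) on a matrix whose
    # nonzeros lie within half_band diagonals of the main diagonal
    # (half_band = 2 gives a 5-diagonal matrix).
    A = [[random.uniform(1, 2) if abs(i - j) <= half_band else 0.0
          for j in range(n)] for i in range(n)]
    for i in range(n):
        A[i][i] += 10.0   # diagonally dominant: no row exchanges needed
    ops = 0
    for k in range(n - 1):
        # Only the (at most) half_band rows below the pivot have a nonzero
        # entry in column k, and the pivot row has at most half_band + 1
        # nonzero entries, so the work per stage is a small constant.
        for i in range(k + 1, min(k + half_band + 1, n)):
            l = A[i][k] / A[k][k]
            ops += 1
            for j in range(k, min(k + half_band + 1, n)):
                A[i][j] -= l * A[k][j]
                ops += 2
    return ops

for n in (100, 200, 400):
    print(n, banded_lu_ops(n))   # the count grows linearly with n
```

For this band width each stage costs at most 2 × (1 + 2 × 3) = 14 operations, so the total is about 14n: doubling n doubles the work, which is the claimed proportionality to n (the constant c of the earlier argument).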