Linear Algebra and Its Applications, Exercise 1.7.8

Exercise 1.7.8. Take the 10 by 10 Hilbert matrix A for which a_{ij} = 1/(i+j-1) and solve the equation Ax = (1, 0, \dotsc, 0). Make a minor change to A or b and see how the solution x changes.

Answer: We can use the open source statistical software R for this exercise. The following R commands define a 10 by 10 Hilbert matrix: the first defines A as a 10 by 10 matrix with all entries set to 0, and the second sets each entry a_{ij} to the proper value for a Hilbert matrix. (There may be a faster or more elegant way to do this in R; a one-line alternative is shown after the code. This way, however, is easy to understand.)

> A <- array(0, dim=c(10,10))
> for (i in 1:10) {for (j in 1:10) {A[i, j] <- 1/(i+j-1)}}
>
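
As an aside, one more compact construction (a sketch that builds the same matrix, using outer() to form the matrix of i + j values) is:

> A <- 1 / (outer(1:10, 1:10, "+") - 1)
>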

We then print the matrix to double-check that it appears to be correct (we use the options() function to limit the number of digits printed per entry and increase the width of the printable area, so that the entire matrix can be displayed more compactly):

> options(digits=3)
> options(width=90)
> A
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 1.000 0.500 0.3333 0.2500 0.2000 0.1667 0.1429 0.1250 0.1111 0.1000
[2,] 0.500 0.333 0.2500 0.2000 0.1667 0.1429 0.1250 0.1111 0.1000 0.0909
[3,] 0.333 0.250 0.2000 0.1667 0.1429 0.1250 0.1111 0.1000 0.0909 0.0833
[4,] 0.250 0.200 0.1667 0.1429 0.1250 0.1111 0.1000 0.0909 0.0833 0.0769
[5,] 0.200 0.167 0.1429 0.1250 0.1111 0.1000 0.0909 0.0833 0.0769 0.0714
[6,] 0.167 0.143 0.1250 0.1111 0.1000 0.0909 0.0833 0.0769 0.0714 0.0667
[7,] 0.143 0.125 0.1111 0.1000 0.0909 0.0833 0.0769 0.0714 0.0667 0.0625
[8,] 0.125 0.111 0.1000 0.0909 0.0833 0.0769 0.0714 0.0667 0.0625 0.0588
[9,] 0.111 0.100 0.0909 0.0833 0.0769 0.0714 0.0667 0.0625 0.0588 0.0556
[10,] 0.100 0.091 0.0833 0.0769 0.0714 0.0667 0.0625 0.0588 0.0556 0.0526
>

We next define b = (1, 0, \dotsc, 0):

> b <- array(0, dim=c(10))
> b[1] <- 1
> b
 [1] 1 0 0 0 0 0 0 0 0 0
>

We can now use the R solve() function to solve for x:

> x <- solve(A, b)
> x
[1]      100    -4950    79195  -600553  2522294 -6305679  9608580 -8750614  4375282
[10]  -923666
>

We then multiply A by x to verify that x is indeed a solution to Ax = b:

> A %*% x
           [,1]
 [1,]  1.00e+00
 [2,] -1.89e-10
 [3,] -2.04e-10
 [4,] -1.60e-10
 [5,] -1.02e-10
 [6,]  3.64e-11
 [7,]  2.18e-11
 [8,]  7.28e-11
 [9,]  0.00e+00
[10,]  0.00e+00
> 

Finally we tweak b very slightly and see how the solution to Ax = b changes:

> b2 <- b
> b2[10] <- .01
> b2
[1] 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01
> x2 <- solve(A, b2)
> x2
[1] -9.14e+03  8.26e+05 -1.82e+07  1.70e+08 -8.30e+08  2.32e+09 -3.87e+09  3.80e+09
[9] -2.02e+09  4.48e+08
>

Note that this very minor change to b (setting b_{10} = 0.01 instead of 0) caused the entries of the solution x to change by several orders of magnitude.
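
The exercise also allows perturbing A instead of b. The following sketch (not part of the session above; the choice of entry and perturbation size is arbitrary) shows the same kind of sensitivity:

> A2 <- A
> A2[10, 10] <- A2[10, 10] + 1e-4
> x3 <- solve(A2, b)
> x3
>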

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 1.7.7

Exercise 1.7.7. Take the 3 by 3 Hilbert matrix from the previous exercise

A = \begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3} \\ \frac{1}{2}&\frac{1}{3}&\frac{1}{4} \\ \frac{1}{3}&\frac{1}{4}&\frac{1}{5} \end{bmatrix}

and compute b assuming that x = (1, 1, 1) and x = (0, 6, -3.6) are solutions to Ax = b.

Answer: We first multiply A times x = (1, 1, 1):

b = Ax = \begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3} \\ \frac{1}{2}&\frac{1}{3}&\frac{1}{4} \\ \frac{1}{3}&\frac{1}{4}&\frac{1}{5} \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 + \frac{1}{2} + \frac{1}{3} \\ \frac{1}{2} + \frac{1}{3} + \frac{1}{4} \\ \frac{1}{3} + \frac{1}{4} + \frac{1}{5} \end{bmatrix} = \begin{bmatrix} \frac{11}{6} \\ \frac{13}{12} \\ \frac{47}{60} \end{bmatrix} \approx \begin{bmatrix} 1.83 \\ 1.08 \\ .783 \end{bmatrix}

expressing the final result to three significant digits. We then multiply A by x = (0, 6, -3.6), again expressing the final result to three significant digits:

b = Ax = \begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3} \\ \frac{1}{2}&\frac{1}{3}&\frac{1}{4} \\ \frac{1}{3}&\frac{1}{4}&\frac{1}{5} \end{bmatrix} \begin{bmatrix} 0 \\ 6 \\ -3.6 \end{bmatrix} = \begin{bmatrix} 0 + 3 - 1.2 \\ 0 + 2 - 0.9 \\ 0 + 1.5 - .72 \end{bmatrix} = \begin{bmatrix} 1.80 \\ 1.10 \\ .780 \end{bmatrix}

Note that the values of b are almost identical, though the values of the solutions x are very different. This indicates that the solution x to Ax = b is very sensitive to small differences in b.
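
These products can also be checked quickly in R (a sketch in the style of Exercise 1.7.8, not part of the original answer):

> H3 <- 1 / (outer(1:3, 1:3, "+") - 1)
> H3 %*% c(1, 1, 1)
> H3 %*% c(0, 6, -3.6)
>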

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 1.7.6

Exercise 1.7.6. Given the 3 by 3 Hilbert matrix

A = \begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3} \\ \frac{1}{2}&\frac{1}{3}&\frac{1}{4} \\ \frac{1}{3}&\frac{1}{4}&\frac{1}{5} \end{bmatrix}

find its inverse (i) using an exact calculation and (ii) rounding off all values to three digits.

Answer: (i) We first multiply the first row times \frac{1}{2} and subtract it from the second row, and then multiply the first row times \frac{1}{3} and subtract it from the third row:

\begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3}&\vline&1&0&0 \\ \frac{1}{2}&\frac{1}{3}&\frac{1}{4}&\vline&0&1&0 \\ \frac{1}{3}&\frac{1}{4}&\frac{1}{5}&\vline&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3}&\vline&1&0&0 \\ 0&\frac{1}{12}&\frac{1}{12}&\vline&-\frac{1}{2}&1&0 \\ 0&\frac{1}{12}&\frac{4}{45}&\vline&-\frac{1}{3}&0&1 \end{bmatrix}

We then multiply the second row by 1 and subtract it from the third row:

\begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3}&\vline&1&0&0 \\ 0&\frac{1}{12}&\frac{1}{12}&\vline&-\frac{1}{2}&1&0 \\ 0&\frac{1}{12}&\frac{4}{45}&\vline&-\frac{1}{3}&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3}&\vline&1&0&0 \\ 0&\frac{1}{12}&\frac{1}{12}&\vline&-\frac{1}{2}&1&0 \\ 0&0&\frac{1}{180}&\vline&\frac{1}{6}&-1&1 \end{bmatrix}

This completes forward elimination. We start backward elimination by multiplying the third row by 15 and subtracting it from the second row, and multiplying the third row by 60 and subtracting it from the first row:

\begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3}&\vline&1&0&0 \\ 0&\frac{1}{12}&\frac{1}{12}&\vline&-\frac{1}{2}&1&0 \\ 0&0&\frac{1}{180}&\vline&\frac{1}{6}&-1&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&\frac{1}{2}&0&\vline&-9&60&-60 \\ 0&\frac{1}{12}&0&\vline&-3&16&-15 \\ 0&0&\frac{1}{180}&\vline&\frac{1}{6}&-1&1 \end{bmatrix}

We then multiply the second row by 6 and subtract it from the first row:

\begin{bmatrix} 1&\frac{1}{2}&0&\vline&-9&60&-60 \\ 0&\frac{1}{12}&0&\vline&-3&16&-15 \\ 0&0&\frac{1}{180}&\vline&\frac{1}{6}&-1&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&\vline&9&-36&30 \\ 0&\frac{1}{12}&0&\vline&-3&16&-15 \\ 0&0&\frac{1}{180}&\vline&\frac{1}{6}&-1&1 \end{bmatrix}

This completes backward elimination. We then multiply the second row by 12 and the third row by 180:

\begin{bmatrix} 1&0&0&\vline&9&-36&30 \\ 0&\frac{1}{12}&0&\vline&-3&16&-15 \\ 0&0&\frac{1}{180}&\vline&\frac{1}{6}&-1&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&\vline&9&-36&30 \\ 0&1&0&\vline&-36&192&-180 \\ 0&0&1&\vline&30&-180&180 \end{bmatrix}

By exact calculation we therefore have

A^{-1} = \begin{bmatrix} 9&-36&30 \\ -36&192&-180 \\ 30&-180&180 \end{bmatrix}
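
A quick check in R (a sketch, not part of the original calculation) confirms that this is indeed the inverse:

> H3 <- 1 / (outer(1:3, 1:3, "+") - 1)
> H3inv <- rbind(c(9, -36, 30), c(-36, 192, -180), c(30, -180, 180))
> all.equal(H3 %*% H3inv, diag(3))   # TRUE
>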

(ii) We start by rounding all values of the matrix A to three digits:

\begin{bmatrix} 1&\frac{1}{2}&\frac{1}{3} \\ \frac{1}{2}&\frac{1}{3}&\frac{1}{4} \\ \frac{1}{3}&\frac{1}{4}&\frac{1}{5} \end{bmatrix} \rightarrow \begin{bmatrix} 1&.500&.333 \\ .500&.333&.250 \\ .333&.250&.200 \end{bmatrix}

We start forward elimination by multiplying the first row times .500 and subtracting it from the second row, and then multiplying the first row times .333 and subtracting it from the third row:

\begin{bmatrix} 1&.500&.333&\vline&1&0&0 \\ .500&.333&.250&\vline&0&1&0 \\ .333&.250&.200&\vline&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&.500&.333&\vline&1&0&0 \\ 0&.083&.084&\vline&-.500&1&0 \\ 0&.084&.089&\vline&-.333&0&1 \end{bmatrix}

We continue by multiplying the second row times 1.01 (.084/.083 rounded to three digits) and subtracting it from the third row:

\begin{bmatrix} 1&.500&.333&\vline&1&0&0 \\ 0&.083&.084&\vline&-.500&1&0 \\ 0&.084&.089&\vline&-.333&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&.500&.333&\vline&1&0&0 \\ 0&.083&.084&\vline&-.500&1&0 \\ 0&0&.004&\vline&.172&-1.01&1 \end{bmatrix}

At this point forward elimination is complete, and we start backward elimination by multiplying the third row by 21 (.084/.004) and subtracting it from the second row, and multiplying the third row by 83.2 (.333/.004 rounded to three digits) and subtracting it from the first row:

\begin{bmatrix} 1&.500&.333&\vline&1&0&0 \\ 0&.083&.084&\vline&-.500&1&0 \\ 0&0&.004&\vline&.172&-1.01&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&.500&0&\vline&-13.3&84.0&-83.2 \\ 0&.083&0&\vline&-4.11&22.2&-21 \\ 0&0&.004&\vline&.172&-1.01&1 \end{bmatrix}

We then multiply the second row by 6.02 (.500/.083 rounded to three digits) and subtract it from the first row:

\begin{bmatrix} 1&.500&0&\vline&-13.3&84.0&-83.2 \\ 0&.083&0&\vline&-4.11&22.2&-21 \\ 0&0&.004&\vline&.172&-1.01&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&\vline&11.4&-50&42.8 \\ 0&.083&0&\vline&-4.11&22.2&-21 \\ 0&0&.004&\vline&.172&-1.01&1 \end{bmatrix}

At this point backward elimination is complete, and we divide the second row by .083 and the third row by .004:

\begin{bmatrix} 1&0&0&\vline&11.4&-50&42.8 \\ 0&.083&0&\vline&-4.11&22.2&-21 \\ 0&0&.004&\vline&.172&-1.01&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&\vline&11.4&-50&42.8 \\ 0&1&0&\vline&-49.5&267&-253 \\ 0&0&1&\vline&43&-252&250 \end{bmatrix}

We then have the approximate inverse of A (left) compared with the exact inverse from part (i) (right):

A^{-1} \approx \begin{bmatrix} 11.4&-50&42.8 \\ -49.5&267&-253 \\ 43&-252&250 \end{bmatrix} \quad \rm vs. \quad A^{-1} = \begin{bmatrix} 9&-36&30 \\ -36&192&-180 \\ 30&-180&180 \end{bmatrix}

Note that the values computed with three-digit arithmetic are roughly 25 to 50 percent off from the exact values.
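
The same qualitative effect can be reproduced in R (a sketch, not part of the original answer). R carries full double precision through the elimination rather than rounding every intermediate result to three digits, so it does not reproduce the hand calculation exactly, but merely rounding the entries of A to three digits already distorts the inverse noticeably:

> H3 <- 1 / (outer(1:3, 1:3, "+") - 1)
> solve(H3)             # essentially the exact inverse
> solve(signif(H3, 3))  # inverse after rounding the entries of A to three digits
>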

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 1.7.5

Exercise 1.7.5. What would the difference matrix in equation (6) look like if the boundary conditions were u(0) = 1 and u(1) = 0 (instead of u(0) = 0 and u(1) = 0)?

Answer: The finite difference equation would still be as given in equation (5):

-u_{j+1} + 2u_j - u_{j-1} = h^2f(jh)

For j = 1 this equation would become

-u_2 + 2u_1 - u_0 = h^2 f(h) \rightarrow -u_2 + 2u_1 - 1 = h^2 f(h)

\rightarrow -u_2 + 2u_1 = h^2 f(h) + 1 = h^2 [f(h) + \frac{1}{h^2}]

Since h = \frac{1}{6} the corresponding 5 by 5 finite-difference matrix is then

\begin{bmatrix} 2&-1&&& \\ -1&2&-1&& \\ &-1&2&-1& \\ &&-1&2&-1 \\ &&&-1&2 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5 \end{bmatrix} = \frac{1}{36} \begin{bmatrix} f(\frac{1}{6}) + 36 \\ f(\frac{2}{6}) \\ f(\frac{3}{6}) \\ f(\frac{4}{6}) \\ f(\frac{5}{6}) \end{bmatrix}
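
For concreteness, here is a short R sketch (not part of the original answer) that builds this system; the choice f(x) = 1 is arbitrary and only for illustration:

> h <- 1/6
> K <- diag(2, 5)
> K[abs(row(K) - col(K)) == 1] <- -1
> f <- function(x) rep(1, length(x))   # placeholder f, an arbitrary choice
> rhs <- h^2 * f((1:5) * h)
> rhs[1] <- rhs[1] + 1                  # the boundary value u(0) = 1 moves to the right side
> u <- solve(K, rhs)
>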

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 1.7.4

Exercise 1.7.4. For the differential equation with f(x) = 4\pi^2 \sin 2\pi x the corresponding difference equation is

\begin{bmatrix} 2&-1&0 \\ -1&2&-1 \\ 0&-1&2 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \frac{\pi^2}{4} \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}

for h = \frac{1}{4}. Solve the above equation for u_1, u_2, u_3 and compare their values to the true solution u = \sin 2\pi x at x = \frac{1}{4}, x = \frac{1}{2}, and x = \frac{3}{4}.

Answer: We do Gaussian elimination on the difference matrix and the right-hand side:

\begin{bmatrix} 2&-1&0 \\ -1&2&-1 \\ 0&-1&2 \end{bmatrix} \quad \frac{\pi^2}{4} \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} \rightarrow \begin{bmatrix} 2&-1&0 \\ 0&\frac{3}{2} &-1 \\ 0&-1&2 \end{bmatrix} \quad \frac{\pi^2}{4} \begin{bmatrix} 1 \\ \frac{1}{2} \\ -1 \end{bmatrix}

\rightarrow \begin{bmatrix} 2&-1&0 \\ 0&\frac{3}{2} &-1 \\ 0&0&\frac{4}{3} \end{bmatrix} \quad \frac{\pi^2}{4} \begin{bmatrix} 1 \\ \frac{1}{2} \\ -\frac{2}{3} \end{bmatrix}

Solving for u_3 we have

\frac{4}{3}u_3 = \frac{\pi^2}{4}(-\frac{2}{3}) \rightarrow u_3 = \frac{3}{4} \frac{\pi^2}{4} (-\frac{2}{3}) = -\frac{\pi^2}{8} \approx -1.23

Solving for u_2 we have

\frac{3}{2}u_2 - u_3 = \frac{\pi^2}{4} \frac{1}{2} \rightarrow \frac{3}{2}u_2 = \frac{\pi^2}{8} - \frac{\pi^2}{8} = 0 \rightarrow u_2 = 0

Finally we solve for u_1:

2u_1 - u_2 = \frac{\pi^2}{4} \rightarrow 2u_1 = \frac{\pi^2}{4} + 0 \rightarrow u_1 = \frac{\pi^2}{8} \approx 1.23

For the exact solution u(x) = \sin 2\pi x we have

u(\frac{1}{4}) = \sin 2 \pi \frac{1}{4} = \sin \frac{\pi}{2} = 1

u(\frac{1}{2}) = \sin 2 \pi \frac{1}{2} = \sin \pi = 0

u(\frac{3}{4}) = \sin 2 \pi \frac{3}{4} = \sin \frac{3\pi}{2} = -1

Comparing the two, the approximations u_1 \approx 1.23, u_2 = 0, and u_3 \approx -1.23 agree with the true values 1, 0, and -1 in sign (and exactly at the midpoint), but overshoot the nonzero values by roughly 23 percent.
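
These values can also be checked with a short R sketch (in the style of Exercise 1.7.8; not part of the original answer):

> K <- rbind(c(2, -1, 0), c(-1, 2, -1), c(0, -1, 2))
> rhs <- (pi^2 / 4) * c(1, 0, -1)
> solve(K, rhs)                      # approximately (1.23, 0, -1.23)
> sin(2 * pi * c(1/4, 1/2, 3/4))     # true values (1, 0, -1)
>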

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 1.7.3

Exercise 1.7.3. Given the differential equation

- \frac{d^2u}{dx^2} = f(x), \quad \frac{du}{dx}(0) = \frac{du}{dx}(1) = 0

what is the finite-difference matrix for h = \frac{1}{6} (corresponding to a 5 by 5 matrix)? Note that you can replace the boundary condition \frac{du}{dx}(0) = 0 by u_1 - u_0 = 0 (and hence u_1 = u_0) and the boundary condition \frac{du}{dx}(1) = 0 by u_6 - u_5 = 0 (and hence u_6 = u_5).

Show that the finite-difference matrix applied to the vector (1, 1, 1, 1, 1) yields zero. Similarly, show that for any solution u(x) to the original differential equation, u(x) + 1 is also a solution.

Answer: We can express - \frac{d^2u}{dx^2} = f(x) in finite difference terms as

- [u_{j-1} - 2u_j + u_{j+1}]/h^2 = f(jh)

\rightarrow -u_{j-1} + 2u_j - u_{j+1} = h^2 f(jh)

We have h = \frac{1}{6} and are thus considering meshpoints x = jh, 0 \le j \le 6. The above equation can then be rewritten as

-u_{j-1} + 2u_j - u_{j+1} = (\frac{1}{6})^2 f(jh) = \frac{1}{36} f(jh)

where u_j is the approximation of u(x) at the point x = jh.

We already know that u_1 = u_0 so that for j = 1 the above equation becomes

-u_0 + 2u_1 - u_2 = \frac{1}{36} f(\frac{1}{6}) \rightarrow u_1 - u_2 = \frac{1}{36} f(\frac{1}{6})

For j = 2 the above equation becomes

-u_1 +2u_2 - u_3 = \frac{1}{36} f(\frac{2}{6}) = \frac{1}{36} f(\frac{1}{3})

The equation has a similar form for j = 3, j = 4, and j = 5. Since u_6 = u_5 for j = 5 the equation becomes

-u_4 + 2u_5 - u_6 = \frac{1}{36} f(\frac{5}{6}) \rightarrow -u_4 + u_5 = \frac{1}{36} f(\frac{5}{6})

The corresponding 5 by 5 finite-difference matrix is then

\begin{bmatrix} 1&-1&&& \\ -1&2&-1&& \\ &-1&2&-1& \\ &&-1&2&-1 \\ &&&-1&1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5 \end{bmatrix} = \frac{1}{36} \begin{bmatrix} f(\frac{1}{6}) \\ f(\frac{1}{3}) \\ f(\frac{1}{2}) \\ f(\frac{2}{3}) \\ f(\frac{5}{6}) \end{bmatrix}

If we apply the finite-difference matrix to the constant vector (1, 1, 1, 1, 1) we obtain the following:

\begin{bmatrix} 1&-1&&& \\ -1&2&-1&& \\ &-1&2&-1& \\ &&-1&2&-1 \\ &&&-1&1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 - 1 \\ -1 + 2 - 1 \\ -1 + 2 - 1 \\ -1 + 2 - 1 \\ -1 + 1 \end{bmatrix} = \begin{bmatrix} 0\\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
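
We can confirm this numerically with a short R sketch (following the style of Exercise 1.7.8; not part of the original answer):

> K <- diag(2, 5)
> K[abs(row(K) - col(K)) == 1] <- -1   # second-difference matrix
> K[1, 1] <- 1                          # boundary condition u_1 = u_0
> K[5, 5] <- 1                          # boundary condition u_6 = u_5
> K %*% rep(1, 5)                       # gives the zero vector
>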

Finally, assume that u(x) is a solution of - \frac{d^2u}{dx^2} = f(x) and let v(x) = u(x) + 1. Then we have

-\frac{d^2v}{dx^2} = -\frac{d^2}{dx^2} [u(x) + 1] = -\frac{d^2}{dx^2}u(x) - \frac{d^2}{dx^2}(1)

= -\frac{d^2u}{dx^2} - 0 = -\frac{d^2u}{dx^2} = f(x)

We also have

\frac{dv}{dx}(0) = \frac{d}{dx}[u(0) + 1] = \frac{d}{dx}u(0) + \frac{d}{dx}(1) = \frac{du}{dx}(0) + 0 = \frac{du}{dx}(0) = 0

and

\frac{dv}{dx}(1) = \frac{d}{dx}[u(1) + 1] = \frac{d}{dx}u(1) + \frac{d}{dx}(1) = \frac{du}{dx}(1) + 0 = \frac{du}{dx}(1) = 0

So v(x) = u(x) + 1 is also a solution if u(x) is.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 1.7.2

Exercise 1.7.2. Given the differential equation

-\frac{d^2u}{dx^2} + u = x, \quad u(0) = u(1) = 0

what is the finite difference matrix for h = \frac{1}{4} (corresponding to a 3 by 3 matrix)?

Answer: We have

-\frac{d^2u}{dx^2} + u \approx - [u(x-h) - 2u(x) + u(x+h)]/h^2 + u(x)

so that the equation above can be expressed in finite-difference terms as

-[u_{j-1} - 2u_j + u_{j+1}]/h^2 + u_j = jh

\rightarrow -u_{j-1} + 2u_j - u_{j+1} + h^2 u_j = h^2 jh

\rightarrow -u_{j-1} + (2 + h^2)u_j - u_{j+1} = h^3 j

replacing continuous values of x by a set of discrete values x = jh, so that u_j is the approximation of u(x) at the point x = jh.

In this case we have h = \frac{1}{4} and are thus considering meshpoints x = jh, 0 \le j \le 4. The above equation can then be rewritten as

-u_{j-1} + (2 + (\frac{1}{4})^2) u_j - u_{j+1} = (\frac{1}{4})^3 j

This equation can be simplified as follows:

-u_{j-1} + (2 + \frac{1}{16})u_j - u_{j+1} = \frac{1}{64}j

\rightarrow -u_{j-1} + \frac{33}{16}u_j - u_{j+1} = \frac{1}{64} j

We already know that u(0) = u(1) = 0 so that we have u_0 = u_4 = 0. For j = 1 the above equation becomes

-u_0 + \frac{33}{16} u_1 - u_2 = 1 \cdot \frac{1}{64} \rightarrow \frac{33}{16} u_1 - u_2 = \frac{1}{64}

\rightarrow 33 u_1 - 16 u_2 = \frac{1}{4}

For j = 2 the above equation becomes

-u_1 +\frac{33}{16} u_2 - u_3 = 2 \cdot \frac{1}{64} \rightarrow -u_1 + \frac{33}{16} u_2 - u_3 = \frac{2}{64}

\rightarrow -16u_1 + 33u_2 - 16u_3 = \frac{1}{2}

and for j = 3 the above equation becomes

-u_2 + \frac{33}{16} u_3 - u_4 = 3 \cdot \frac{1}{64} \rightarrow -u_2 + \frac{33}{16} u_3 = \frac{3}{64}

\rightarrow -16u_2 + 33u_3 = \frac{3}{4}

The corresponding 3 by 3 finite-difference matrix is then

\begin{bmatrix} 33&-16& \\ -16&33&-16 \\ &-16&33 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \begin{bmatrix} \frac{1}{4} \\ \frac{1}{2} \\ \frac{3}{4} \end{bmatrix}
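
Although the exercise only asks for the matrix, a short R sketch (not part of the original answer) can also solve the system for the approximations u_1, u_2, u_3:

> K <- rbind(c(33, -16, 0), c(-16, 33, -16), c(0, -16, 33))
> solve(K, c(1/4, 1/2, 3/4))
>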

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 1.7.1

Exercise 1.7.1. Find the LDU factorization for the matrix

A = \begin{bmatrix} 1&-1&&& \\ -1&2&-1&& \\ &-1&2&-1& \\ &&-1&2&-1 \\ &&&-1&2 \end{bmatrix}

(This is the same matrix as in equation (6) of section 1.7, only with a_{11} = 1 instead of a_{11} = 2.)

Answer: For the first step of elimination we multiply the first row times the multiplier l_{21} = -1 and subtract it from the second row:

\begin{bmatrix} 1&-1&&& \\ -1&2&-1&& \\ &-1&2&-1& \\ &&-1&2&-1 \\ &&&-1&2 \end{bmatrix} \rightarrow \begin{bmatrix} 1&-1&&& \\ &1&-1&& \\ &-1&2&-1& \\ &&-1&2&-1 \\ &&&-1&2 \end{bmatrix}

In the second step we multiply the second row by the multiplier l_{32} = -1 and subtract it from the third row:

\begin{bmatrix} 1&-1&&& \\ &1&-1&& \\ &-1&2&-1& \\ &&-1&2&-1 \\ &&&-1&2 \end{bmatrix} \rightarrow \begin{bmatrix} 1&-1&&& \\ &1&-1&& \\ &&1&-1& \\ &&-1&2&-1 \\ &&&-1&2 \end{bmatrix}

In the third step we multiply the third row by the multiplier l_{43} = -1 and subtract it from the fourth row:

\begin{bmatrix} 1&-1&&& \\ &1&-1&& \\ &&1&-1& \\ &&-1&2&-1 \\ &&&-1&2 \end{bmatrix} \rightarrow \begin{bmatrix} 1&-1&&& \\ &1&-1&& \\ &&1&-1& \\ &&&1&-1 \\ &&&-1&2 \end{bmatrix}

Finally we multiply the fourth row by the multiplier l_{54} = -1 and subtract it from the fifth row:

\begin{bmatrix} 1&-1&&& \\ &1&-1&& \\ &&1&-1& \\ &&&1&-1 \\ &&&-1&2 \end{bmatrix} \rightarrow \begin{bmatrix} 1&-1&&& \\ &1&-1&& \\ &&1&-1& \\ &&&1&-1 \\ &&&&1 \end{bmatrix}

Note that all steps feature the same multiplier. The final factorization is

A = LDU = \begin{bmatrix} 1&&&& \\ -1&1&&& \\ &-1&1&& \\ &&-1&1& \\ &&&-1&1 \end{bmatrix} \begin{bmatrix} 1&&&& \\ &1&&& \\ &&1&& \\ &&&1& \\ &&&&1 \end{bmatrix} \begin{bmatrix} 1&-1&&& \\ &1&-1&& \\ &&1&-1& \\ &&&1&-1 \\ &&&&1 \end{bmatrix}
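
A quick numerical check of the factorization in R (a sketch, not part of the original post):

> L <- diag(5)
> L[cbind(2:5, 1:4)] <- -1    # unit lower bidiagonal factor
> D <- diag(5)                 # all pivots are 1
> U <- t(L)                    # unit upper bidiagonal factor
> A <- diag(2, 5)
> A[cbind(2:5, 1:4)] <- -1
> A[cbind(1:4, 2:5)] <- -1
> A[1, 1] <- 1
> all.equal(L %*% D %*% U, A)  # TRUE
>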

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 1.6.23

Exercise 1.6.23. Assume that A and B are square matrices, and that I - BA is invertible. Show that I - AB is invertible as well. (Use the fact that B(I - AB) = (I - BA)B.)

Answer: First, since A and B are square matrices we know that both of the product matrices AB and BA exist and have the same number of rows and columns. We then have

B(I - AB) = B - BAB = (I - BA)B

Since we are assuming that the inverse of I - BA exists, we have

(I - BA)^{-1} B (I - AB) = (I - BA)^{-1} (I - BA) B = IB = B

Multiplying both sides of the resulting equation on the left by A and then adding I - AB to both sides, we have

A(I - BA)^{-1} B (I - AB) = AB

\rightarrow A(I - BA)^{-1} B (I - AB) + (I - AB) = AB + (I - AB)

\rightarrow [A(I - BA)^{-1} B + I] (I - AB) = AB - AB + I = I

So A(I - BA)^{-1} B + I is a left inverse for I - AB. We then multiply I - AB by A(I - BA)^{-1} B + I on the right:

(I - AB)[A(I - BA)^{-1} B + I]

= (I - AB)A(I - BA)^{-1} B + (I - AB)I

= (A - ABA)(I - BA)^{-1} B + (I - AB)

= A(I - BA)(I - BA)^{-1} B + (I - AB)

= AIB + (I - AB) = AB + I - AB = I

So A(I - BA)^{-1} B + I is also a right inverse for I - AB. Since A(I - BA)^{-1} B + I is both a left inverse and right inverse for I - AB we conclude that I - AB is invertible (with A(I - BA)^{-1} B + I as its inverse).

We have thus shown that if I - BA is invertible then I - AB is also invertible.
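
The identity can also be spot-checked numerically; here is a short R sketch with randomly generated matrices (any square A and B for which I - BA is invertible would do):

> set.seed(1)
> A <- matrix(rnorm(16), 4, 4)
> B <- matrix(rnorm(16), 4, 4)
> Id <- diag(4)
> Cinv <- A %*% solve(Id - B %*% A) %*% B + Id   # claimed inverse of I - AB
> all.equal(Cinv %*% (Id - A %*% B), Id)          # TRUE
> all.equal((Id - A %*% B) %*% Cinv, Id)          # TRUE
>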

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 1.6.22

Exercise 1.6.22. Find the inverse of the following lower triangular matrix:

A = \begin{bmatrix} 1&0&0&0 \\ \frac{1}{4}&1&0&0 \\ \frac{1}{3}&\frac{1}{3}&1&0 \\ \frac{1}{2}&\frac{1}{2}&\frac{1}{2}&1\end{bmatrix}

Answer: We use Gauss-Jordan elimination on the matrix A, starting by multiplying the first row by the multiplier \frac{1}{4} and subtracting it from the second row, multiplying the first row by the multiplier \frac{1}{3} and subtracting it from the third row, and multiplying the first row by the multiplier \frac{1}{2} and subtracting it from the fourth row:

\begin{bmatrix} 1&0&0&0&\vline&1&0&0&0 \\ \frac{1}{4}&1&0&0&\vline&0&1&0&0 \\ \frac{1}{3}&\frac{1}{3}&1&0&\vline&0&0&1&0 \\ \frac{1}{2}&\frac{1}{2}&\frac{1}{2}&1&\vline&0&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&0&\vline&1&0&0&0 \\ 0&1&0&0&\vline&-\frac{1}{4}&1&0&0 \\ 0&\frac{1}{3}&1&0&\vline&-\frac{1}{3}&0&1&0 \\ 0&\frac{1}{2}&\frac{1}{2}&1&\vline&-\frac{1}{2}&0&0&1 \end{bmatrix}

We then multiply the second row by the multiplier \frac{1}{3} and subtract it from the third row, and multiply the second row by the multiplier \frac{1}{2} and subtract it from the fourth row:

\begin{bmatrix} 1&0&0&0&\vline&1&0&0&0 \\ 0&1&0&0&\vline&-\frac{1}{4}&1&0&0 \\ 0&\frac{1}{3}&1&0&\vline&-\frac{1}{3}&0&1&0 \\ 0&\frac{1}{2}&\frac{1}{2}&1&\vline&-\frac{1}{2}&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&0&\vline&1&0&0&0 \\ 0&1&0&0&\vline&-\frac{1}{4}&1&0&0 \\ 0&0&1&0&\vline&-\frac{1}{4}&-\frac{1}{3}&1&0 \\ 0&0&\frac{1}{2}&1&\vline&-\frac{3}{8}&-\frac{1}{2}&0&1 \end{bmatrix}

Finally we multiply the third row by the multiplier \frac{1}{2} and subtract it from the fourth row:

\begin{bmatrix} 1&0&0&0&\vline&1&0&0&0 \\ 0&1&0&0&\vline&-\frac{1}{4}&1&0&0 \\ 0&0&1&0&\vline&-\frac{1}{4}&-\frac{1}{3}&1&0 \\ 0&0&\frac{1}{2}&1&\vline&-\frac{3}{8}&-\frac{1}{2}&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&0&\vline&1&0&0&0 \\ 0&1&0&0&\vline&-\frac{1}{4}&1&0&0 \\ 0&0&1&0&\vline&-\frac{1}{4}&-\frac{1}{3}&1&0 \\ 0&0&0&1&\vline&-\frac{1}{4}&-\frac{1}{3}&-\frac{1}{2}&1 \end{bmatrix}

Since the original matrix A is lower triangular and has ones on the diagonal, we do not need to do backward elimination or divide by pivots. We thus have

A^{-1} = \begin{bmatrix} 1&0&0&0 \\ -\frac{1}{4}&1&0&0 \\ -\frac{1}{4}&-\frac{1}{3}&1&0 \\ -\frac{1}{4}&-\frac{1}{3}&-\frac{1}{2}&1 \end{bmatrix}
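
A quick numerical check of the result in R (a sketch, not part of the original answer):

> A <- rbind(c(1, 0, 0, 0),
+            c(1/4, 1, 0, 0),
+            c(1/3, 1/3, 1, 0),
+            c(1/2, 1/2, 1/2, 1))
> Ainv <- rbind(c(1, 0, 0, 0),
+               c(-1/4, 1, 0, 0),
+               c(-1/4, -1/3, 1, 0),
+               c(-1/4, -1/3, -1/2, 1))
> all.equal(A %*% Ainv, diag(4))   # TRUE
>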

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
