Linear Algebra and Its Applications, Review Exercise 1.18

Review exercise 1.18. Given the following matrix

A = \begin{bmatrix} 1&v_1&0&0 \\ 0&v_2&0&0 \\ 0&v_3&1&0 \\ 0&v_4&0&1 \end{bmatrix}

(a) Factor A into the form A = LU (if v_2 \ne 0).

(b) Show that A^{-1} exists and has the same form as A.

Answer: (a) We start elimination by subtracting v_3/v_2 times the second row from the third row (l_{32} = v_3/v_2) and subtracting v_4/v_2 times the second row from the fourth row (l_{42} = v_4/v_2):

\begin{bmatrix} 1&v_1&0&0 \\ 0&v_2&0&0 \\ 0&v_3&1&0 \\ 0&v_4&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&v_1&0&0 \\ 0&v_2&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}

This completes elimination, and we have

L = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&v_3/v_2&1&0 \\ 0&v_4/v_2&0&1 \end{bmatrix} \quad \rm and \quad U = \begin{bmatrix} 1&v_1&0&0 \\ 0&v_2&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}

so that

A = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&v_3/v_2&1&0 \\ 0&v_4/v_2&0&1 \end{bmatrix} \begin{bmatrix} 1&v_1&0&0 \\ 0&v_2&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} = LU

(b) We use Gauss-Jordan elimination, starting with the same elimination steps as above:

\begin{bmatrix} 1&v_1&0&0&\vline&1&0&0&0 \\ 0&v_2&0&0&\vline&0&1&0&0 \\ 0&v_3&1&0&\vline&0&0&1&0 \\ 0&v_4&0&1&\vline&0&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&v_1&0&0&\vline&1&0&0&0 \\ 0&v_2&0&0&\vline&0&1&0&0 \\ 0&0&1&0&\vline&0&-v_3/v_2&1&0 \\ 0&0&0&1&\vline&0&-v_4/v_2&0&1 \end{bmatrix}

This completes forward elimination, and we start backward elimination by multiplying the second row by v_1/v_2 and subtracting it from the first:

\begin{bmatrix} 1&v_1&0&0&\vline&1&0&0&0 \\ 0&v_2&0&0&\vline&0&1&0&0 \\ 0&0&1&0&\vline&0&-v_3/v_2&1&0 \\ 0&0&0&1&\vline&0&-v_4/v_2&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&0&\vline&1&-v_1/v_2&0&0 \\ 0&v_2&0&0&\vline&0&1&0&0 \\ 0&0&1&0&\vline&0&-v_3/v_2&1&0 \\ 0&0&0&1&\vline&0&-v_4/v_2&0&1 \end{bmatrix}

This completes backward elimination, and we divide the second row by v_2:

\begin{bmatrix} 1&0&0&0&\vline&1&-v_1/v_2&0&0 \\ 0&v_2&0&0&\vline&0&1&0&0 \\ 0&0&1&0&\vline&0&-v_3/v_2&1&0 \\ 0&0&0&1&\vline&0&-v_4/v_2&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&0&\vline&1&-v_1/v_2&0&0 \\ 0&1&0&0&\vline&0&1/v_2&0&0 \\ 0&0&1&0&\vline&0&-v_3/v_2&1&0 \\ 0&0&0&1&\vline&0&-v_4/v_2&0&1 \end{bmatrix}

We thus have

A^{-1} = \begin{bmatrix} 1&-v_1/v_2&0&0 \\ 0&1/v_2&0&0 \\ 0&-v_3/v_2&1&0 \\ 0&-v_4/v_2&0&1 \end{bmatrix}

with A^{-1} having the same form as A.
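
As an informal numerical check (not part of the original exercise), here is a short NumPy sketch that builds A for arbitrary sample values v_1 = 3, v_2 = 2, v_3 = 5, v_4 = 7 (any values with v_2 \ne 0 would do) and verifies both the factorization from (a) and the inverse from (b):

import numpy as np

# Arbitrary sample values with v2 != 0 (hypothetical test data).
v1, v2, v3, v4 = 3.0, 2.0, 5.0, 7.0

A = np.array([[1.0, v1, 0.0, 0.0],
              [0.0, v2, 0.0, 0.0],
              [0.0, v3, 1.0, 0.0],
              [0.0, v4, 0.0, 1.0]])

# L and U from part (a).
L = np.array([[1.0, 0.0,     0.0, 0.0],
              [0.0, 1.0,     0.0, 0.0],
              [0.0, v3 / v2, 1.0, 0.0],
              [0.0, v4 / v2, 0.0, 1.0]])
U = np.array([[1.0, v1,  0.0, 0.0],
              [0.0, v2,  0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
assert np.allclose(L @ U, A)

# The inverse from part (b); it has the same pattern of entries as A.
A_inv = np.array([[1.0, -v1 / v2, 0.0, 0.0],
                  [0.0, 1.0 / v2, 0.0, 0.0],
                  [0.0, -v3 / v2, 1.0, 0.0],
                  [0.0, -v4 / v2, 0.0, 1.0]])
assert np.allclose(A @ A_inv, np.eye(4))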

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 1.17

Review exercise 1.17. Factor the following two symmetric matrices

A = \begin{bmatrix} 1&2&0 \\ 2&6&4 \\ 0&4&11 \end{bmatrix} \quad A = \begin{bmatrix} a&b \\ b&c \end{bmatrix}

into the form A = LDL^T.

Answer: We start elimination for the first matrix by subtracting 2 times the first row from the second row (l_{21} = 2):

\begin{bmatrix} 1&2&0 \\ 2&6&4 \\ 0&4&11 \end{bmatrix} \rightarrow \begin{bmatrix} 1&2&0 \\ 0&2&4 \\ 0&4&11 \end{bmatrix}

and then subtract 2 times the second row from the third row (l_{32} = 2):

\begin{bmatrix} 1&2&0 \\ 0&2&4 \\ 0&4&11 \end{bmatrix} \rightarrow \begin{bmatrix} 1&2&0 \\ 0&2&4 \\ 0&0&3 \end{bmatrix}

We then have

L = \begin{bmatrix} 1&0&0 \\ 2&1&0 \\ 0&2&1 \end{bmatrix}

and

U = \begin{bmatrix} 1&2&0 \\ 0&2&4 \\ 0&0&3 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&2&0 \\ 0&0&3 \end{bmatrix} \begin{bmatrix} 1&2&0 \\ 0&1&2 \\ 0&0&1 \end{bmatrix} = DL^T

We thus have

A = \begin{bmatrix} 1&0&0 \\ 2&1&0 \\ 0&2&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&2&0 \\ 0&0&3 \end{bmatrix} \begin{bmatrix} 1&2&0 \\ 0&1&2 \\ 0&0&1 \end{bmatrix} = LDL^T

For the second matrix (assuming a \ne 0) we subtract b/a times the first row from the second row (l_{21} = b/a):

\begin{bmatrix} a&b \\ b&c \end{bmatrix} \rightarrow \begin{bmatrix} a&b \\ 0&c - b^2/a \end{bmatrix}

We then have

L = \begin{bmatrix} 1&0 \\ b/a&1 \end{bmatrix} \quad \rm and \quad U = \begin{bmatrix} a&b \\ 0&c - b^2/a \end{bmatrix} = \begin{bmatrix} a&0 \\ 0&c - b^2/a \end{bmatrix} \begin{bmatrix} 1&b/a \\ 0&1 \end{bmatrix}

and thus

A = \begin{bmatrix} 1&0 \\ b/a&1 \end{bmatrix} \begin{bmatrix} a&0 \\ 0&c - b^2/a \end{bmatrix} \begin{bmatrix} 1&b/a \\ 0&1 \end{bmatrix} = LDL^T
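
As an informal check, the following NumPy sketch verifies the numeric factorization directly and spot-checks the symbolic 2 by 2 case at the arbitrary sample values a = 2, b = 3, c = 5 (any values with a \ne 0 would do):

import numpy as np

# The numeric matrix from the exercise and its factors.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 6.0, 4.0],
              [0.0, 4.0, 11.0]])
L = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])
D = np.diag([1.0, 2.0, 3.0])
assert np.allclose(L @ D @ L.T, A)

# The 2 by 2 case, checked at arbitrary sample values with a != 0.
a, b, c = 2.0, 3.0, 5.0
A2 = np.array([[a, b], [b, c]])
L2 = np.array([[1.0, 0.0], [b / a, 1.0]])
D2 = np.diag([a, c - b**2 / a])
assert np.allclose(L2 @ D2 @ L2.T, A2)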

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 1.16

Review exercise 1.16. Given the following system of equations:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcr} kx&+&y&=&1 \\ x&+&ky&=&1 \end{array}

what must k be for the system to have no solution? One solution? An infinite number of solutions?

Answer: If k = 0 then the system reduces to

\setlength\arraycolsep{0.2em}\begin{array}{rcrcr} &&y&=&1 \\ x&&&=&1 \end{array}

for which x = 1 and y = 1 is (obviously) the unique solution.

If k \ne 0 then we can multiply the first equation by 1/k and subtract it from the second equation to obtain the following system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcl} kx&+&y&=&1 \\ &&(k - \frac{1}{k})y&=&1 - \frac{1}{k} \end{array}

If k = 1 then this reduces to

\setlength\arraycolsep{0.2em}\begin{array}{rcrcl} 1 \cdot x&+&y&=&1 \\ &&(1 - \frac{1}{1})y&=&1 - \frac{1}{1} \end{array} \rightarrow \begin{array}{rcrcl} x&+&y&=&1 \\ &&0&=&0 \end{array}

This system has an infinite number of solutions, namely any value of (x, y) for which y = -x + 1.

If k \ne 0, k \ne 1, and k \ne -1 then we can solve for y as follows:

y = \frac{1 - \frac{1}{k}}{k - \frac{1}{k}} = \frac{k(1 - \frac{1}{k})}{k(k - \frac{1}{k})}

= \frac{k - 1}{k^2 - 1} = \frac{k - 1}{(k - 1)(k + 1)} = \frac{1}{k+1}

We then solve for x as follows:

kx = 1 - y = 1 - \frac{1}{k+1} = \frac{k+1}{k+1} - \frac{1}{k+1}

= \frac{k+1-1}{k+1} = \frac{k}{k+1}

\rightarrow x = \frac{1}{k} \frac{k}{k+1} = \frac{1}{k+1}

So for k \ne 0, k \ne 1, and k \ne -1 the system has the unique solution (\frac{1}{k+1}, \frac{1}{k+1}). For example, for k = 2 the unique solution is (\frac{1}{2+1}, \frac{1}{2+1}) = (\frac{1}{3}, \frac{1}{3}).

Note that the solution for k = 0 was (1, 1), which matches that given by our formula: (\frac{1}{0+1}, \frac{1}{0+1}) = (1, 1).

Finally, note that there is no solution for k = -1, for which the formula (\frac{1}{k+1}, \frac{1}{k+1}) would require dividing by zero. In this case the system of equations is

\setlength\arraycolsep{0.2em}\begin{array}{rcrcr} -x&+&y&=&1 \\ x&-&y&=&1 \end{array} \rightarrow \begin{array}{rcrcr} x&-&y&=&-1 \\ x&-&y&=&1 \end{array}

and produces the contradiction -1 = 1.
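
The case analysis is easy to spot-check numerically; here is a minimal NumPy sketch (informal, using arbitrary test values of k) illustrating the unique-solution case and the two singular cases:

import numpy as np

def coefficient_matrix(k):
    return np.array([[k, 1.0],
                     [1.0, k]])

b = np.array([1.0, 1.0])

# k = 2: unique solution (1/3, 1/3), matching the formula 1/(k+1).
print(np.linalg.solve(coefficient_matrix(2.0), b))

# k = 1 and k = -1 make the matrix singular (determinant k^2 - 1 = 0):
# infinitely many solutions for k = 1, no solution for k = -1.
for k in (1.0, -1.0):
    print(k, np.linalg.det(coefficient_matrix(k)))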

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 1.15

Review exercise 1.15. For the following n by n matrix A and its inverse

A = \begin{bmatrix} n&-1&\cdots&-1 \\ -1&n&\cdots&-1 \\ \vdots&\vdots&\ddots&\vdots \\ -1&-1&\cdots&n \end{bmatrix} \quad A^{-1} = \frac{1}{n+1} \begin{bmatrix} c&1&\cdots&1 \\ 1&c&\cdots&1 \\ \vdots&\vdots&\ddots&\vdots \\ 1&1&\cdots&c \end{bmatrix}

what is the value of c?

Answer: Since AA^{-1} = I, each diagonal entry of AA^{-1} must equal 1. Taking the dot product of the first row of A with the first column of A^{-1}, we must have

1 = \frac{1}{n+1} (nc + \sum_{k=2}^{n} (-1) \cdot 1)

= \frac{1}{n+1} (nc - \sum_{k=2}^{n} 1)

= \frac{1}{n+1} (nc - (n - 1)) = \frac{1}{n+1} \left( (c-1)n + 1 \right)

Multiplying both sides by n+1 we obtain n+1 = (c-1)n + 1, or n = (c-1)n, so that c - 1 = 1 and c = 2. (As a check, with c = 2 each off-diagonal entry of AA^{-1} is \frac{1}{n+1} \left( n - c - (n-2) \right) = \frac{1}{n+1} (2 - c) = 0, as required.)
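
Here is an informal NumPy spot-check of the result, using the arbitrary choice n = 5:

import numpy as np

n = 5      # arbitrary size for the check
c = 2.0    # the value found above

A = (n + 1) * np.eye(n) - np.ones((n, n))  # n on the diagonal, -1 elsewhere
A_inv = (np.ones((n, n)) + (c - 1.0) * np.eye(n)) / (n + 1)  # c on the diagonal, 1 elsewhere

assert np.allclose(A @ A_inv, np.eye(n))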

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 1.14

Review exercise 1.14. For each of the following find a 3 by 3 matrix B such that for any matrix A

(a) BA = 2A

(b) BA = 2B

(c) The first row of BA is the last row of A and the last row of BA is the first row of A.

(d) The first column of BA is the last column of A and the last column of BA is the first column of A.

Answer: (a) We have 2A = 2IA so that we can choose B = 2I

B = \begin{bmatrix} 2&0&0 \\ 0&2&0 \\ 0&0&2 \end{bmatrix}

(b) For all A we have 0 \cdot A = 0 = 2 \cdot 0 so that we can choose B = 0

B = \begin{bmatrix} 0&0&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix}

(c) We can choose B to be the permutation matrix that reverses the order of rows in A

B = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix}

For example

BA = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix} = \begin{bmatrix} 7&8&9 \\ 4&5&6 \\ 1&2&3 \end{bmatrix}

(d) The first column of BA is B times the first column of A. Since this computation does not involve the last column of A, in general it is impossible to find a matrix B such that the first column of BA is equal to the last column of A.

However note that we can reverse the order of columns in A by multiplying A on the right by the value of B from (c) above:

AB = \begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 3&2&1 \\ 6&5&4 \\ 9&8&7 \end{bmatrix}
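
All four parts are easy to verify numerically; here is a minimal NumPy sketch (informal) using the example matrix with entries 1 through 9:

import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)  # the example matrix with entries 1..9

B = 2.0 * np.eye(3)                     # (a) B = 2I gives BA = 2A
assert np.allclose(B @ A, 2.0 * A)

B = np.zeros((3, 3))                    # (b) B = 0 gives BA = 0 = 2B
assert np.allclose(B @ A, 2.0 * B)

P = np.eye(3)[::-1]                     # (c) the row-reversal permutation
assert np.allclose(P @ A, A[::-1, :])   # PA reverses the rows of A
assert np.allclose(A @ P, A[:, ::-1])   # (d) AP reverses the columns instead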

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 1.13

Review exercise 1.13. Given the following:

A = LU = \begin{bmatrix} 1&0&0 \\ 4&1&0 \\ 1&0&1 \end{bmatrix} \begin{bmatrix} 2&2&4 \\ 0&1&3 \\ 0&0&1 \end{bmatrix} \qquad b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

use the triangular systems Lc = b and Ux = c to find a solution to Ax = b.

Answer: We have

Lc = b \rightarrow \begin{bmatrix} 1&0&0 \\ 4&1&0 \\ 1&0&1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \rightarrow \setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} c_1&&&&&=&0 \\ 4c_1&+&c_2&&&=&0 \\ c_1&&&+&c_3&=&1 \end{array}

Since c_1 = 0 and 4c_1 + c_2 = 0 we have c_2 = 0 also. Since c_1 + c_3 = 1 we have c_3 = 1 so that

c = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

We then have

Ux = c \rightarrow \begin{bmatrix} 2&2&4 \\ 0&1&3 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \rightarrow \setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} 2x_1&+&2x_2&+&4x_3&=&0 \\ &&x_2&+&3x_3&=&0 \\ &&&&x_3&=&1 \end{array}

Since x_3 = 1 and x_2 + 3x_3 = 0 we have x_2 = -3. We then have 2x_1 + 2x_2 + 4x_3 = 2x_1 - 2 = 0 so that x_1 = 1 and

x = \begin{bmatrix} 1 \\ -3 \\ 1 \end{bmatrix}

Note that b is equal to the last column of the 3 by 3 identity matrix I. Since A is nonsingular we know that A^{-1} exists, and since we have Ax = b we see that the solution x is equal to the last column of A^{-1}.
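
As an informal check, the following sketch carries out the same two triangular solves with scipy.linalg.solve_triangular and confirms the observation about A^{-1}:

import numpy as np
from scipy.linalg import solve_triangular

L = np.array([[1.0, 0.0, 0.0],
              [4.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
U = np.array([[2.0, 2.0, 4.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])

c = solve_triangular(L, b, lower=True)  # forward substitution for Lc = b
x = solve_triangular(U, c)              # back substitution for Ux = c
print(x)                                # [ 1. -3.  1.]

# x is the last column of A^{-1}, since b is the last column of I.
assert np.allclose(x, np.linalg.inv(L @ U)[:, 2])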

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 1.12

Review exercise 1.12. State whether the following are true or false. If a statement is true explain why it is true. If a statement is false provide a counter-example.

(a) If A is invertible and B has the same rows as A but in reverse order, then B is invertible as well.

(b) If A and B are both symmetric matrices then their product AB is also a symmetric matrix.

(c) If A and B are both invertible then their product BA is also invertible.

(d) If A is a nonsingular matrix then it can be factored into the product A = LU of a lower triangular and upper triangular matrix.

Answer: (a) True. If B has the same rows as A but in reverse order then we have B = PA where P is the permutation matrix that reverses the order of rows. For example, for the 3 by 3 case we have

P = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix}

If we apply P twice then it restores the order of the rows back to the original order; in other words P^2 = I so that P^{-1} = P.

If A is invertible then A^{-1} exists. Consider the product A^{-1}P. We have

B(A^{-1}P) = (PA)(A^{-1}P) = P(AA^{-1})P = PIP = P^2 = I

so that A^{-1}P is a right inverse for B. We also have

(A^{-1}P)B = (A^{-1}P)(PA) = A^{-1}P^2A = A^{-1}IA = A^{-1}A = I

so that A^{-1}P is a left inverse for B as well. Since A^{-1}P is both a left and right inverse for B we have B^{-1} = A^{-1}P so that B is invertible if A is.

Incidentally, note that while multiplying by P on the left reverses the order of the rows, multiplying by P on the right reverses the order of the columns. For example, in the 3 by 3 case we have

\begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} = \begin{bmatrix} 3&2&1 \\ 6&5&4 \\ 9&8&7 \end{bmatrix}

Thus if A^{-1} exists and B = PA then B^{-1} = A^{-1}P exists and consists of A^{-1} with its columns reversed.

(b) False. The product of two symmetric matrices is not necessarily itself a symmetric matrix, as shown by the following counterexample:

\begin{bmatrix} 2&3 \\ 3&1 \end{bmatrix} \begin{bmatrix} 3&5 \\ 5&1 \end{bmatrix} = \begin{bmatrix} 21&13 \\ 14&16 \end{bmatrix}

(c) True. Suppose that both A and B are invertible; then both A^{-1} and B^{-1} exist. Consider the product matrices BA and A^{-1}B^{-1}. We have

(BA)(A^{-1}B^{-1}) = B(AA^{-1})B^{-1} = BIB^{-1} = BB^{-1} = I

and also

(A^{-1}B^{-1})(BA) = A^{-1}(B^{-1}B)A = A^{-1}IA = A^{-1}A = I

So A^{-1}B^{-1} is both a left and right inverse for BA, and thus (BA)^{-1} = A^{-1}B^{-1}. If both A and B are invertible then their product BA is invertible as well.

(d) False. A matrix A cannot necessarily be factored into the form A = LU because you may need to do row exchanges in order for elimination to succeed. Consider the following counterexample:

A = \begin{bmatrix} 0&1&2 \\ 1&1&1 \\ 1&2&1 \end{bmatrix}

This matrix requires exchanging the first and second rows before elimination can commence. We can do this by multiplying by an appropriate permutation matrix:

PA = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 0&1&2 \\ 1&1&1 \\ 1&2&1 \end{bmatrix} = \begin{bmatrix} 1&1&1 \\ 0&1&2 \\ 1&2&1 \end{bmatrix}

We then multiply the (new) first row by 1 and subtract it from the third row (i.e., the multiplier l_{31} = 1):

\begin{bmatrix} 1&1&1 \\ 0&1&2 \\ 1&2&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&1&1 \\ 0&1&2 \\ 0&1&0 \end{bmatrix}

and then multiply the second row by 1 and subtract it from the third (l_{32} = 1):

\begin{bmatrix} 1&1&1 \\ 0&1&2 \\ 0&1&0 \end{bmatrix} \rightarrow \begin{bmatrix} 1&1&1 \\ 0&1&2 \\ 0&0&-2 \end{bmatrix}

We then have

L = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&1&1 \end{bmatrix} \quad U = \begin{bmatrix} 1&1&1 \\ 0&1&2 \\ 0&0&-2 \end{bmatrix}

and

LU = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&1&1 \end{bmatrix} \begin{bmatrix} 1&1&1 \\ 0&1&2 \\ 0&0&-2 \end{bmatrix} = \begin{bmatrix} 1&1&1 \\ 0&1&2 \\ 1&2&1 \end{bmatrix} = PA \ne A

So a matrix A cannot always be factored into the form A = LU.
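
Here is a minimal NumPy spot-check (informal) of the counterexamples in (b) and (d):

import numpy as np

# (b) The product of two symmetric matrices need not be symmetric.
S = np.array([[2.0, 3.0], [3.0, 1.0]])
T = np.array([[3.0, 5.0], [5.0, 1.0]])
assert not np.allclose(S @ T, (S @ T).T)

# (d) For the counterexample matrix, LU reproduces PA rather than A.
A = np.array([[0.0, 1.0, 2.0],
              [1.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
P = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
L = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
U = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, -2.0]])
assert np.allclose(L @ U, P @ A)
assert not np.allclose(L @ U, A)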

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 1.11

Review exercise 1.11. Suppose E is a 2 by 2 matrix that adds the first equation of a linear system to the second equation. What is E^2? E^8? 8E?

Answer: Since E adds the first equation to the second, we have

E = \begin{bmatrix} 1&0 \\ 1&1 \end{bmatrix}

We then have

E^2 = \begin{bmatrix} 1&0 \\ 1&1 \end{bmatrix} \begin{bmatrix} 1&0 \\ 1&1 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 2&1 \end{bmatrix}

and

E^4 = E^2 E^2 = \begin{bmatrix} 1&0 \\ 2&1 \end{bmatrix} \begin{bmatrix} 1&0 \\ 2&1 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 4&1 \end{bmatrix}

so that

E^8 = E^4 E^4 = \begin{bmatrix} 1&0 \\ 4&1 \end{bmatrix} \begin{bmatrix} 1&0 \\ 4&1 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 8&1 \end{bmatrix}

In general if for some k \ge 1 we have

E^k = \begin{bmatrix} 1&0 \\ k&1 \end{bmatrix}

then for k+1 we have

E^{k+1} = E E^k = \begin{bmatrix} 1&0 \\ 1&1 \end{bmatrix} \begin{bmatrix} 1&0 \\ k&1 \end{bmatrix} = \begin{bmatrix} 1&0 \\ k+1&1 \end{bmatrix}

Since for k = 1 we have

E^1 = E = \begin{bmatrix} 1&0 \\ 1&1 \end{bmatrix}

by induction for all n \ge 1 we have

E^n = \begin{bmatrix} 1&0 \\ n&1 \end{bmatrix}

Finally, we have

8E = 8 \begin{bmatrix} 1&0 \\ 1&1 \end{bmatrix} = \begin{bmatrix} 8&0 \\ 8&8 \end{bmatrix}
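
A quick NumPy spot-check (informal) of the three results:

import numpy as np

E = np.array([[1.0, 0.0],
              [1.0, 1.0]])

print(np.linalg.matrix_power(E, 2))  # [[1. 0.] [2. 1.]]
print(np.linalg.matrix_power(E, 8))  # [[1. 0.] [8. 1.]]
print(8 * E)                         # [[8. 0.] [8. 8.]]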

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 1.10

Review exercise 1.10. Find the inverse of each of the following matrices, or show that the matrix is not invertible.

A = \begin{bmatrix} 1&0&1 \\ 1&1&0 \\ 0&1&1 \end{bmatrix} \quad \rm and \quad A = \begin{bmatrix} 2&1&0 \\ 1&2&1 \\ 0&1&2 \end{bmatrix} \quad \rm and \quad A = \begin{bmatrix} 1&1&-2 \\ 1&-2&1 \\ -2&1&1 \end{bmatrix}

Answer: We use Gauss-Jordan elimination on the first matrix A, starting by multiplying the first row by 1 and subtracting it from the second row:

\begin{bmatrix} 1&0&1&\vline&1&0&0 \\ 1&1&0&\vline&0&1&0 \\ 0&1&1&\vline&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&1&\vline&1&0&0 \\ 0&1&-1&\vline&-1&1&0 \\ 0&1&1&\vline&0&0&1 \end{bmatrix}

We then multiply the second row by 1 and subtract it from the third row:

\begin{bmatrix} 1&0&1&\vline&1&0&0 \\ 0&1&-1&\vline&-1&1&0 \\ 0&1&1&\vline&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&1&\vline&1&0&0 \\ 0&1&-1&\vline&-1&1&0 \\ 0&0&2&\vline&1&-1&1 \end{bmatrix}

This completes forward elimination. We start backward elimination by multiplying the third row by -\frac{1}{2} and subtracting it from the second row, and multiplying the third row by \frac{1}{2} and subtracting it from the first row:

\begin{bmatrix} 1&0&1&\vline&1&0&0 \\ 0&1&-1&\vline&-1&1&0 \\ 0&0&2&\vline&1&-1&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&\vline&\frac{1}{2}&\frac{1}{2}&-\frac{1}{2} \\ 0&1&0&\vline&-\frac{1}{2}&\frac{1}{2}&\frac{1}{2} \\ 0&0&2&\vline&1&-1&1 \end{bmatrix}

Finally we divide the third row by 2:

\begin{bmatrix} 1&0&0&\vline&\frac{1}{2}&\frac{1}{2}&-\frac{1}{2} \\ 0&1&0&\vline&-\frac{1}{2}&\frac{1}{2}&\frac{1}{2} \\ 0&0&2&\vline&1&-1&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&\vline&\frac{1}{2}&\frac{1}{2}&-\frac{1}{2} \\ 0&1&0&\vline&-\frac{1}{2}&\frac{1}{2}&\frac{1}{2} \\ 0&0&1&\vline&\frac{1}{2}&-\frac{1}{2}&\frac{1}{2} \end{bmatrix}

We then have

A = \begin{bmatrix} 1&0&1 \\ 1&1&0 \\ 0&1&1 \end{bmatrix} \quad \rm and \quad A^{-1} = \begin{bmatrix} \frac{1}{2}&\frac{1}{2}&-\frac{1}{2} \\ -\frac{1}{2}&\frac{1}{2}&\frac{1}{2} \\ \frac{1}{2}&-\frac{1}{2}&\frac{1}{2} \end{bmatrix}

We also use Gauss-Jordan elimination on the second matrix, starting by multiplying the first row by \frac{1}{2} and subtracting it from the second row:

\begin{bmatrix} 2&1&0&\vline&1&0&0 \\ 1&2&1&\vline&0&1&0 \\ 0&1&2&\vline&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 2&1&0&\vline&1&0&0 \\ 0&\frac{3}{2}&1&\vline&-\frac{1}{2}&1&0 \\ 0&1&2&\vline&0&0&1 \end{bmatrix}

We next multiply the second row by \frac{2}{3} and subtract it from the third row:

\begin{bmatrix} 2&1&0&\vline&1&0&0 \\ 0&\frac{3}{2}&1&\vline&-\frac{1}{2}&1&0 \\ 0&1&2&\vline&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 2&1&0&\vline&1&0&0 \\ 0&\frac{3}{2}&1&\vline&-\frac{1}{2}&1&0 \\ 0&0&\frac{4}{3}&\vline&\frac{1}{3}&-\frac{2}{3}&1 \end{bmatrix}

This completes forward elimination. We start backward elimination by multiplying the third row by \frac{3}{4} and subtracting it from the second row:

\begin{bmatrix} 2&1&0&\vline&1&0&0 \\ 0&\frac{3}{2}&1&\vline&-\frac{1}{2}&1&0 \\ 0&0&\frac{4}{3}&\vline&\frac{1}{3}&-\frac{2}{3}&1 \end{bmatrix} \rightarrow \begin{bmatrix} 2&1&0&\vline&1&0&0 \\ 0&\frac{3}{2}&0&\vline&-\frac{3}{4}&\frac{3}{2}&-\frac{3}{4} \\ 0&0&\frac{4}{3}&\vline&\frac{1}{3}&-\frac{2}{3}&1 \end{bmatrix}

We next multiply the second row by \frac{2}{3} and subtract it from the first:

\begin{bmatrix} 2&1&0&\vline&1&0&0 \\ 0&\frac{3}{2}&0&\vline&-\frac{3}{4}&\frac{3}{2}&-\frac{3}{4} \\ 0&0&\frac{4}{3}&\vline&\frac{1}{3}&-\frac{2}{3}&1 \end{bmatrix} \rightarrow \begin{bmatrix} 2&0&0&\vline&\frac{3}{2}&-1&\frac{1}{2} \\ 0&\frac{3}{2}&0&\vline&-\frac{3}{4}&\frac{3}{2}&-\frac{3}{4} \\ 0&0&\frac{4}{3}&\vline&\frac{1}{3}&-\frac{2}{3}&1 \end{bmatrix}

This completes backward elimination. We then divide the first row by 2, the second row by \frac{3}{2}, and the third row by \frac{4}{3}:

\begin{bmatrix} 2&0&0&\vline&\frac{3}{2}&-1&\frac{1}{2} \\ 0&\frac{3}{2}&0&\vline&-\frac{3}{4}&\frac{3}{2}&-\frac{3}{4} \\ 0&0&\frac{4}{3}&\vline&\frac{1}{3}&-\frac{2}{3}&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&0&0&\vline&\frac{3}{4}&-\frac{1}{2}&\frac{1}{4} \\ 0&1&0&\vline&-\frac{1}{2}&1&-\frac{1}{2} \\ 0&0&1&\vline&\frac{1}{4}&-\frac{1}{2}&\frac{3}{4} \end{bmatrix}

We then have

A = \begin{bmatrix} 2&1&0 \\ 1&2&1 \\ 0&1&2 \end{bmatrix} \quad A^{-1} = \begin{bmatrix} \frac{3}{4}&-\frac{1}{2}&\frac{1}{4} \\ -\frac{1}{2}&1&-\frac{1}{2} \\ \frac{1}{4}&-\frac{1}{2}&\frac{3}{4} \end{bmatrix}

Finally we use Gauss-Jordan elimination on the third matrix, starting by multiplying the first row by 1 and subtracting it from the second row, and multiplying the first row by -2 and subtracting it from the third:

\begin{bmatrix} 1&1&-2&\vline&1&0&0 \\ 1&-2&1&\vline&0&1&0 \\ -2&1&1&\vline&0&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&1&-2&\vline&1&0&0 \\ 0&-3&3&\vline&-1&1&0 \\ 0&3&-3&\vline&2&0&1 \end{bmatrix}

For the next step we multiply the second row by -1 and subtract it from the third row:

\begin{bmatrix} 1&1&-2&\vline&1&0&0 \\ 0&-3&3&\vline&-1&1&0 \\ 0&3&-3&\vline&2&0&1 \end{bmatrix} \rightarrow \begin{bmatrix} 1&1&-2&\vline&1&0&0 \\ 0&-3&3&\vline&-1&1&0 \\ 0&0&0&\vline&1&1&1 \end{bmatrix}

This leaves us with no pivot for the third row, so elimination fails and the third matrix A is not invertible.
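
As an informal check, NumPy reproduces the two inverses and confirms that the third matrix is singular:

import numpy as np

A1 = np.array([[1.0, 0.0, 1.0],
               [1.0, 1.0, 0.0],
               [0.0, 1.0, 1.0]])
A2 = np.array([[2.0, 1.0, 0.0],
               [1.0, 2.0, 1.0],
               [0.0, 1.0, 2.0]])
A3 = np.array([[1.0, 1.0, -2.0],
               [1.0, -2.0, 1.0],
               [-2.0, 1.0, 1.0]])

print(np.linalg.inv(A1))          # matches the half-integer inverse above
print(np.linalg.inv(A2))          # matches the quarter-integer inverse above
print(np.linalg.matrix_rank(A3))  # 2: only two pivots, so A3 is not invertible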

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 1.9

Review exercise 1.9. Show a 2 by 2 system of equations (i.e., two equations in two unknowns) that has an infinite number of solutions.

Answer: One possibility is the following system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcr}u&+&v&=&0 \\ 2u&+&2v&=&0 \end{array}

corresponding to the matrix equation Ax = 0 where

A = \begin{bmatrix} 1&1 \\ 2&2 \end{bmatrix}

The solutions to this system are exactly the pairs (u, v) with v = -u.
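
A minimal NumPy spot-check (informal): the coefficient matrix is singular, and any pair (u, -u) solves the system:

import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

print(np.linalg.det(A))                   # 0: A is singular
for u in (0.0, 1.5, -3.0):                # any u works, taking v = -u
    assert np.allclose(A @ np.array([u, -u]), 0.0)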

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
