Linear Algebra and Its Applications, exercise 1.4.23

Exercise 1.4.23. Find the results of squaring, cubing, and in general raising to the nth power each of the following matrices:

A = \begin{bmatrix} \frac{1}{2}&\frac{1}{2} \\ \frac{1}{2}&\frac{1}{2} \end{bmatrix} \quad B = \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} \quad C = AB = \begin{bmatrix} \frac{1}{2}&-\frac{1}{2} \\ \frac{1}{2}&-\frac{1}{2} \end{bmatrix}

Answer: For the matrix A we have

A^2 = \begin{bmatrix} \frac{1}{2}&\frac{1}{2} \\ \frac{1}{2}&\frac{1}{2} \end{bmatrix} \begin{bmatrix} \frac{1}{2}&\frac{1}{2} \\ \frac{1}{2}&\frac{1}{2} \end{bmatrix} = \begin{bmatrix} \frac{1}{2}&\frac{1}{2} \\ \frac{1}{2}&\frac{1}{2} \end{bmatrix} = A

A^3 = A^2A = AA = A^2 = A

By induction we have A^n = A for all n \ge 1.

For the matrix B we have

B^2 = \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = I

B^3 = B^2B = IB = B

By induction we have B^n = I if n is even and B^n = B if n is odd.

For the matrix C we have

C^2 = \begin{bmatrix} \frac{1}{2}&-\frac{1}{2} \\ \frac{1}{2}&-\frac{1}{2} \end{bmatrix} \begin{bmatrix} \frac{1}{2}&-\frac{1}{2} \\ \frac{1}{2}&-\frac{1}{2} \end{bmatrix} = \begin{bmatrix} 0&0 \\ 0&0 \end{bmatrix} = 0

C^3 = C^2C = 0 \cdot C = 0

By induction we have C^n = 0 for all n \ge 2.
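As a quick sanity check (not part of the original exercise), here is a short NumPy sketch verifying the closed forms above numerically:

```python
import numpy as np

A = np.array([[0.5, 0.5], [0.5, 0.5]])
B = np.array([[1, 0], [0, -1]])
C = A @ B  # equals [[1/2, -1/2], [1/2, -1/2]]

# A^n = A for all n >= 1
assert np.allclose(np.linalg.matrix_power(A, 5), A)
# B^n = I for even n, B for odd n
assert np.allclose(np.linalg.matrix_power(B, 4), np.eye(2))
assert np.allclose(np.linalg.matrix_power(B, 5), B)
# C^n = 0 for n >= 2
assert np.allclose(np.linalg.matrix_power(C, 2), 0)
```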

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, exercise 1.4.22

Exercise 1.4.22. The x-y plane can be rotated through an angle \theta by the following matrix:

A(\theta) = \begin{bmatrix} \cos \theta&-\sin \theta \\ \sin \theta&\cos \theta \end{bmatrix}

  1. Show that A(\theta_1) A(\theta_2) = A(\theta_1 + \theta_2). Hint: use the identities for \cos(\theta_1 + \theta_2) and \sin(\theta_1 + \theta_2).
  2. Compute A(\theta) A(-\theta).

Answer: (a) We have

A(\theta_1) A(\theta_2) = \begin{bmatrix} \cos \theta_1&-\sin \theta_1 \\ \sin \theta_1&\cos \theta_1 \end{bmatrix} \begin{bmatrix} \cos \theta_2&-\sin \theta_2 \\ \sin \theta_2 &\cos \theta_2 \end{bmatrix}

\quad = \begin{bmatrix} \cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2&-\cos \theta_1 \sin \theta_2 - \sin \theta_1 \cos \theta_2 \\ \sin \theta_1 \cos \theta_2 + \cos \theta_1 \sin \theta_2&-\sin \theta_1 \sin \theta_2 + \cos \theta_1 \cos \theta_2 \end{bmatrix}

We can simplify the final matrix using the following formulas for the sine and cosine:

\sin (x + y) = \sin x \cos y + \cos x \sin y
\sin (x - y) = \sin x \cos y - \cos x \sin y
\cos (x + y) = \cos x \cos y - \sin x \sin y
\cos (x - y) = \cos x \cos y + \sin x \sin y

(Due to lack of space I’ve omitted a proof of the above identities. See Will Garner’s pre-calculus textbook for a relatively simple geometric proof.)

Using the above identities we have

A(\theta_1) A(\theta_2) = \begin{bmatrix} \cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2&-\cos \theta_1 \sin \theta_2 - \sin \theta_1 \cos \theta_2 \\ \sin \theta_1 \cos \theta_2 + \cos \theta_1 \sin \theta_2&-\sin \theta_1 \sin \theta_2 + \cos \theta_1 \cos \theta_2 \end{bmatrix}

\quad = \begin{bmatrix} \cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2&-(\sin \theta_1 \cos \theta_2 + \cos \theta_1 \sin \theta_2) \\ \sin \theta_1 \cos \theta_2 + \cos \theta_1 \sin \theta_2&\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2 \end{bmatrix}

\quad = \begin{bmatrix} \cos (\theta_1 + \theta_2)&-\sin (\theta_1 + \theta_2) \\ \sin (\theta_1 + \theta_2)&\cos (\theta_1 + \theta_2) \end{bmatrix} = A(\theta_1 + \theta_2)

So A(\theta_1)A(\theta_2) = A(\theta_1 + \theta_2) as hypothesized. In other words, rotating the x-y plane through an angle \theta_1 followed by a second rotation through an angle \theta_2 is equivalent to a single rotation through the angle \theta_1 + \theta_2.

(b) Using the formula derived above we have

A(\theta) A(-\theta) = A(\theta - \theta) = A(0) = \begin{bmatrix} \cos 0&-\sin 0 \\ \sin 0&\cos 0 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = I

In other words, rotating the x-y plane through an angle \theta followed by a second rotation through the angle -\theta (i.e., through the angle \theta in the reverse direction) takes us back to the original state corresponding to no rotation at all.
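Both parts can be confirmed numerically; this NumPy sketch (an addition, not part of the exercise) checks the composition rule and the inverse rotation for a couple of sample angles:

```python
import numpy as np

def rotation(theta):
    # The 2x2 rotation matrix A(theta) from the exercise
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

t1, t2 = 0.7, 1.3
# (a) A(t1) A(t2) = A(t1 + t2)
assert np.allclose(rotation(t1) @ rotation(t2), rotation(t1 + t2))
# (b) A(t) A(-t) = A(0) = I
assert np.allclose(rotation(t1) @ rotation(-t1), np.eye(2))
```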


Linear Algebra and Its Applications, exercise 1.4.21

Exercise 1.4.21. An alternative way to compute the matrix product AB is as the sum

c_1r_1 + c_2r_2 + \cdots + c_nr_n

where c_i is the ith column of A, r_i is the ith row of B, and the product c_ir_i is a matrix.

  1. Provide an example showing the procedure above for a 2×2 matrix.
  2. Show that the above procedure gives the correct answer for (AB)_{ij} = \sum_{k=1}^{k=n} a_{ik}b_{kj}

Answer: (a) We choose the following 2×2 example matrices, with product as shown:

\begin{bmatrix} 1&2 \\ 3&4 \end{bmatrix} \begin{bmatrix} 3&5 \\ 7&9 \end{bmatrix} = \begin{bmatrix} 17&23 \\ 37&51 \end{bmatrix}

Using the alternative mechanism above we can also compute the product as

\begin{bmatrix} 1&2 \\ 3&4 \end{bmatrix} \begin{bmatrix} 3&5 \\ 7&9 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix} \begin{bmatrix} 3&5 \end{bmatrix} + \begin{bmatrix} 2 \\ 4 \end{bmatrix} \begin{bmatrix} 7&9 \end{bmatrix} = \begin{bmatrix} 3&5 \\ 9&15 \end{bmatrix} + \begin{bmatrix} 14&18 \\ 28&36 \end{bmatrix} = \begin{bmatrix} 17&23 \\ 37&51 \end{bmatrix}

(b) For an m×n matrix A and n×p matrix B, the kth column of A and the kth row of B are as follows:

c_k = \begin{bmatrix} a_{1k} \\ a_{2k} \\ \vdots \\ a_{mk} \end{bmatrix} \quad r_k = \begin{bmatrix} b_{k1}&b_{k2}&\cdots&b_{kp} \end{bmatrix}

and their matrix product is

c_kr_k = \begin{bmatrix} a_{1k}b_{k1}&a_{1k}b_{k2}&\cdots&a_{1k}b_{kp} \\ a_{2k}b_{k1}&a_{2k}b_{k2}&\cdots&a_{2k}b_{kp} \\ \vdots&\vdots&\ddots&\vdots \\ a_{mk}b_{k1}&a_{mk}b_{k2}&\cdots&a_{mk}b_{kp} \end{bmatrix}

so that we have (c_kr_k)_{ij} = a_{ik}b_{kj}. If we define C = c_1r_1 + c_2r_2 + \cdots + c_nr_n = \sum_{k=1}^{k=n} c_kr_k then we have

C_{ij} = \sum_{k=1}^{k=n} (c_kr_k)_{ij} = \sum_{k=1}^{k=n} a_{ik}b_{kj}

so that C = AB as hypothesized.
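The column-times-row procedure is easy to check in NumPy, where each product c_k r_k is an outer product; this sketch (an addition to the post) reproduces the 2×2 example from part (a):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[3, 5], [7, 9]])

# Sum of outer products c_k r_k, one term per column of A / row of B
C = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
assert np.array_equal(C, A @ B)  # matches the usual product [[17, 23], [37, 51]]
```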


Linear Algebra and Its Applications, exercise 1.4.20

Exercise 1.4.20. If A and B are two n×n matrices with all entries equal to 1, what are the entries (AB)_{ij} of their product AB? Use the following summation formula to find the answer:

(AB)_{ij} = \sum_{k=1}^{k=n} a_{ik}b_{kj}

Also, if C is a third n×n matrix with all entries equal to 2, compute the entries of (AB)C = A(BC), using the corresponding summation formulas for the two sides of that equation:

\sum_{j=1}^{j=n} (\sum_{k=1}^{k=n} a_{ik}b_{kj}) c_{jl} = \sum_{k=1}^{k=n} a_{ik} (\sum_{j=1}^{j=n} b_{kj}c_{jl})

Answer: If a_{ij} = b_{ij} = 1 for all i and j, then we have

(AB)_{ij} = \sum_{k=1}^{k=n} a_{ik}b_{kj} = \sum_{k=1}^{k=n} 1 \cdot 1 = \sum_{k=1}^{k=n} 1 = n

for all i and j.

If c_{ij} = 2 for all i and j, then the (i, l) entry of (AB)C is

\sum_{j=1}^{j=n} (\sum_{k=1}^{k=n} a_{ik}b_{kj}) c_{jl} = \sum_{j=1}^{j=n} (\sum_{k=1}^{k=n} 1 \cdot 1) \cdot 2 = \sum_{j=1}^{j=n} n \cdot 2 = 2n^2

and the (i, l) entry of A(BC) is

\sum_{k=1}^{k=n} a_{ik} (\sum_{j=1}^{j=n} b_{kj}c_{jl}) = \sum_{k=1}^{k=n} 1 \cdot (\sum_{j=1}^{j=n} 1 \cdot 2) = \sum_{k=1}^{k=n} 2n = 2n^2

with (AB)C = A(BC) as expected.
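A brief NumPy check of these entry formulas (an addition to the post), using a small sample value of n:

```python
import numpy as np

n = 4
A = np.ones((n, n))
B = np.ones((n, n))
C = 2 * np.ones((n, n))

assert (A @ B == n).all()                 # every entry of AB is n
assert ((A @ B) @ C == 2 * n**2).all()    # every entry of (AB)C is 2n^2
assert np.array_equal((A @ B) @ C, A @ (B @ C))  # associativity
```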


Linear Algebra and Its Applications, exercise 1.4.19

Exercise 1.4.19. Given matrices A and B, which of the following matrices are equal to (A + B)^2?

  1. (B + A)^2
  2. A^2 + 2AB + B^2
  3. A(A + B) + B(A + B)
  4. (A + B)(B + A)
  5. A^2 + AB + BA + B^2

Answer: First, since matrix addition is commutative we have

A + B = B + A \Rightarrow (A + B)^2 = (B + A)^2

as well as

A + B = B + A \Rightarrow (A + B)^2 = (A + B)(A + B) = (A + B)(B + A)

So the matrices referenced in (a) and (d) above are equal to the original matrix (A + B)^2.

Second, since matrix multiplication is distributive we have

(A + B)^2 = (A + B)(A + B) = A(A + B) + B(A + B)

and

A(A + B) + B(A + B) = A^2 + AB + BA + B^2

So the matrices referenced in (c) and (e) above are equal to the original matrix (A + B)^2.

However in general matrix multiplication is not commutative, so

AB \ne BA \Rightarrow A^2 + AB + BA + B^2 \ne A^2 + 2AB + B^2

and the matrix referenced in item (b) above is not in general equal to (A + B)^2.
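A concrete counterexample makes the failure of (b) vivid; this NumPy sketch (an addition, using a hypothetical noncommuting pair of my choosing) shows (e) holding while (b) fails:

```python
import numpy as np

# A pair of matrices with AB != BA
A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 0], [1, 0]])

lhs = np.linalg.matrix_power(A + B, 2)
# Item (e) always holds:
assert np.array_equal(lhs, A @ A + A @ B + B @ A + B @ B)
# Item (b) fails here, since AB != BA:
assert not np.array_equal(lhs, A @ A + 2 * (A @ B) + B @ B)
```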


Linear Algebra and Its Applications, exercise 1.4.18

Exercise 1.4.18. Given arbitrary n×n matrices A and B, show that the first column of AB is the same as A times the first column of B. Hint: Let x = (1, 0, …, 0) be a column vector with n components, and use the fact that A(Bx) = (AB)x.

Answer: We have

Bx = \begin{bmatrix} b_{11}&b_{12}&\cdots&b_{1n} \\ b_{21}&b_{22}&\cdots&b_{2n} \\ \vdots&\vdots&\ddots&\vdots  \\ b_{n1}&b_{n2}&\cdots&b_{nn}  \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \begin{bmatrix} b_{11} \\ b_{21} \\ \vdots \\ b_{n1} \end{bmatrix}

So Bx is equal to the first column of B. Similarly if C = AB we have

Cx = \begin{bmatrix} c_{11}&c_{12}&\cdots&c_{1n} \\ c_{21}&c_{22}&\cdots&c_{2n} \\ \vdots&\vdots&\ddots&\vdots  \\ c_{n1}&c_{n2}&\cdots&c_{nn}  \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \begin{bmatrix} c_{11} \\ c_{21} \\ \vdots \\ c_{n1} \end{bmatrix}

So again Cx is equal to the first column of C or, to put it another way, (AB)x is equal to the first column of AB.

But we also have A(Bx) = (AB)x, which means that A times the first column of B is equal to the first column of AB.
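The same argument can be checked numerically; in this NumPy sketch (an addition to the post) the sample matrices are random, which makes the point that the identity holds for arbitrary A and B:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (3, 3))
B = rng.integers(-5, 5, (3, 3))

x = np.array([1, 0, 0])  # picks out the first column
assert np.array_equal(B @ x, B[:, 0])                 # Bx is column 1 of B
assert np.array_equal(A @ (B @ x), (A @ B)[:, 0])     # A(Bx) is column 1 of AB
```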


Linear Algebra and Its Applications, exercise 1.4.17

Exercise 1.4.17. Assume A is a 2×2 matrix

A = \begin{bmatrix} a&b \\ c&d \end{bmatrix}

and further assume that AB = BA for any 2×2 matrix B, including the matrices

B_1 = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} \quad B_2 = \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix}

Show that a = d and that b and c are zero.

Answer: We have

AB_1 = \begin{bmatrix} a&b \\ c&d \end{bmatrix} \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} = \begin{bmatrix} a&0 \\ c&0 \end{bmatrix}

and

B_1A = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} \begin{bmatrix} a&b \\ c&d \end{bmatrix} = \begin{bmatrix} a&b \\ 0&0 \end{bmatrix}

Since AB = BA for all B we have

AB_1 = B_1A \Rightarrow \begin{bmatrix} a&0 \\ c&0 \end{bmatrix} = \begin{bmatrix} a&b \\ 0&0 \end{bmatrix} \Rightarrow b = c = 0

Similarly we have

AB_2 = \begin{bmatrix} a&b \\ c&d \end{bmatrix} \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} = \begin{bmatrix} 0&a \\ 0&c \end{bmatrix}

and

B_2A = \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} \begin{bmatrix} a&b \\ c&d \end{bmatrix} = \begin{bmatrix} c&d \\ 0&0 \end{bmatrix}

Since AB = BA for all B we have

AB_2 = B_2A \Rightarrow \begin{bmatrix} 0&a \\ 0&c \end{bmatrix} = \begin{bmatrix} c&d \\ 0&0 \end{bmatrix} \Rightarrow a = d

The result is that we have

A = \begin{bmatrix} a&0 \\ 0&a \end{bmatrix} = a \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = aI

for some a (which could be zero); in other words, A is a multiple of the identity matrix I.
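Conversely, it is easy to confirm that any multiple of the identity does commute with every 2×2 matrix; this NumPy sketch (an addition, with a hypothetical third test matrix of my choosing) checks that against B_1, B_2, and one more example:

```python
import numpy as np

a = 3.0
A = a * np.eye(2)  # the form A = aI forced by the exercise

B1 = np.array([[1, 0], [0, 0]])
B2 = np.array([[0, 1], [0, 0]])
B3 = np.array([[2, -1], [4, 5]])  # an arbitrary extra example

for B in (B1, B2, B3):
    assert np.allclose(A @ B, B @ A)  # aI commutes with every B
```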


Linear Algebra and Its Applications, exercise 1.4.16

Exercise 1.4.16. For the following matrices

E = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 0&0&1 \end{bmatrix} \quad F = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} \quad G = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&1 \end{bmatrix}

verify that (EF)G = E(FG).

Answer: We have

EF = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 1&0&1 \end{bmatrix}

and

(EF)G = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 1&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 1&1&1 \end{bmatrix}

We also have

FG = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&1&1 \end{bmatrix}

and

E(FG) = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&1&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 1&1&1 \end{bmatrix} = (EF)G
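The same verification in NumPy (an addition to the post), checking both groupings against the common result:

```python
import numpy as np

E = np.array([[1, 0, 0], [-2, 1, 0], [0, 0, 1]])
F = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1]])
G = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 1]])

expected = np.array([[1, 0, 0], [-2, 1, 0], [1, 1, 1]])
assert np.array_equal((E @ F) @ G, expected)
assert np.array_equal(E @ (F @ G), expected)
```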


Linear Algebra and Its Applications, exercise 1.4.15

Exercise 1.4.15. Given the matrix E given by

E = \begin{bmatrix} 1&7 \\ 0&1 \end{bmatrix}

and an arbitrary 2×2 matrix A, describe the rows of the product matrix EA and the columns of AE.

Answer: For EA we have

EA = \begin{bmatrix} 1&7 \\ 0&1 \end{bmatrix} \begin{bmatrix} a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix}

The first row of EA is a linear combination of the two rows of A, with coefficients 1 and 7 respectively; in other words, the first row of EA is equal to the first row of A plus 7 times the second row of A. The second row of EA is equal to the second row of A.

For AE we have

AE = \begin{bmatrix} a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix} \begin{bmatrix} 1&7 \\ 0&1 \end{bmatrix}

The first column of AE is equal to the first column of A. The second column of AE is a linear combination of the two columns of A, with coefficients 7 and 1 respectively; in other words, the second column of AE is equal to 7 times the first column of A plus the second column of A.
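These row and column descriptions can be checked directly; in this NumPy sketch (an addition, with an arbitrary sample matrix of my choosing standing in for A):

```python
import numpy as np

E = np.array([[1, 7], [0, 1]])
A = np.array([[2, 3], [5, 8]])  # an arbitrary 2x2 example

EA = E @ A
assert np.array_equal(EA[0], A[0] + 7 * A[1])  # row 1 of A plus 7 times row 2
assert np.array_equal(EA[1], A[1])             # row 2 unchanged

AE = A @ E
assert np.array_equal(AE[:, 0], A[:, 0])                  # column 1 unchanged
assert np.array_equal(AE[:, 1], 7 * A[:, 0] + A[:, 1])    # 7 times column 1 plus column 2
```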


Linear Algebra and Its Applications, exercise 1.4.14

Exercise 1.4.14. Show example 2×2 matrices having the following properties:

  1. A matrix A with real entries such that A^2 = -I
  2. A nonzero matrix B such that B^2 = 0
  3. Two matrices C and D with nonzero product such that CD = -DC
  4. Two matrices E and F with all nonzero entries such that EF = 0

Answer: (a) If A^2 = -I we have

\begin{bmatrix} a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix} \begin{bmatrix} a_{11}&a_{12} \\ a_{21}&a_{22} \end{bmatrix} = \begin{bmatrix} -1&0 \\ 0&-1 \end{bmatrix}

We then have the following:

a_{11}^2 + a_{12}a_{21} = -1
a_{11}a_{12} + a_{12}a_{22} = 0 \Rightarrow a_{12}(a_{11} + a_{22}) = 0 \Rightarrow a_{11} = -a_{22} \: \rm if \: a_{12} \ne 0
a_{21}a_{11} + a_{22}a_{21} = 0 \Rightarrow a_{21}(a_{11} + a_{22}) = 0 \Rightarrow a_{11} = -a_{22} \: \rm if \: a_{21} \ne 0
a_{21}a_{12} + a_{22}^2 = -1

Assume that a_{11} = a_{22} = 0 (which satisfies the condition a_{11} = -a_{22}). From the first and fourth equations above we then have

a_{11}^2 + a_{12}a_{21} = a_{12}a_{21} = -1
a_{21}a_{12} + a_{22}^2 = a_{21}a_{12} = -1

which reduces to the single equation a_{12}a_{21} = -1. Assume that a_{12} = a for some real nonzero a. We then have a_{21} = -1/a.

So a matrix A meeting the above criterion is

A = \begin{bmatrix} 0&a \\ -1/a&0 \end{bmatrix}

where a is nonzero, for which

A^2 = \begin{bmatrix} 0&a \\ -1/a&0 \end{bmatrix} \begin{bmatrix} 0&a \\ -1/a&0 \end{bmatrix} = \begin{bmatrix} a(-1/a)&0 \\ 0&(-1/a)a \end{bmatrix} = \begin{bmatrix} -1&0 \\ 0&-1 \end{bmatrix} = -I

We can obtain a specific example of A by setting a = 1, in which case

A = \begin{bmatrix} 0&1 \\ -1&0 \end{bmatrix}

(b) If B^2 = 0 we have

\begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix} \begin{bmatrix} b_{11}&b_{12} \\ b_{21}&b_{22} \end{bmatrix} = \begin{bmatrix} 0&0 \\ 0&0 \end{bmatrix}

We then have the following:

b_{11}^2 + b_{12}b_{21} = 0 \Rightarrow b_{12}b_{21} = -b_{11}^2
b_{11}b_{12} + b_{12}b_{22} = 0 \Rightarrow b_{12}(b_{11} + b_{22}) = 0 \Rightarrow b_{11} = -b_{22} \: \rm if \: b_{12} \ne 0
b_{21}b_{11} + b_{22}b_{21} = 0 \Rightarrow b_{21}(b_{11} + b_{22}) = 0 \Rightarrow b_{11} = -b_{22} \: \rm if \: b_{21} \ne 0
b_{21}b_{12} + b_{22}^2 = 0 \Rightarrow b_{12}b_{21} = -b_{22}^2 \Rightarrow b_{22}^2 = b_{11}^2

Assume that b_{12} and b_{21} are nonzero, and choose b_{11} = b where b is a nonzero real number. Then from the second and third equations we have b_{22} = -b_{11} = -b. From the first and fourth equations we should have b_{11}^2 = b_{22}^2 and this is indeed the case, since b^2 = (-b)^2.

Substituting into the first equation we then have

b_{11}^2 + b_{12}b_{21} = 0 \Rightarrow b_{12}b_{21} = -b_{11}^2 \Rightarrow b_{12}b_{21} = -b^2

(We could have used the fourth equation just as well for this.)

If we choose b_{12} = -b we then have b_{21} = b. This gives us the following matrix

B = \begin{bmatrix} b&-b \\ b&-b \end{bmatrix}

for which

B^2 = \begin{bmatrix} b&-b \\ b&-b \end{bmatrix} \begin{bmatrix} b&-b \\ b&-b \end{bmatrix} = \begin{bmatrix} b^2 -b^2&-b^2 + b^2 \\ b^2 - b^2&-b^2 + b^2 \end{bmatrix} = \begin{bmatrix} 0&0 \\ 0&0 \end{bmatrix}

If we set b = 1 then we obtain the specific example

B = \begin{bmatrix} 1&-1 \\ 1&-1 \end{bmatrix}

(c) If CD = -DC we have

\begin{bmatrix} c_{11}&c_{12} \\ c_{21}&c_{22} \end{bmatrix} \begin{bmatrix} d_{11}&d_{12} \\ d_{21}&d_{22} \end{bmatrix} = - \begin{bmatrix} d_{11}&d_{12} \\ d_{21}&d_{22} \end{bmatrix} \begin{bmatrix} c_{11}&c_{12} \\ c_{21}&c_{22} \end{bmatrix}

By the rules of matrix multiplication we then have

c_{11}d_{11} + c_{12}d_{21} = -(d_{11}c_{11} + d_{12}c_{21})
c_{11}d_{12} + c_{12}d_{22} = -(d_{11}c_{12} + d_{12}c_{22} )
c_{21}d_{11} + c_{22}d_{21} = -(d_{21}c_{11} + d_{22}c_{21})
c_{21}d_{12} + c_{22}d_{22} = -(d_{21}c_{12} + d_{22}c_{22})

Taking the second and third equations above and rearranging terms we have

c_{12}d_{22} + c_{12}d_{11} = -d_{12}c_{11} - d_{12}c_{22} \Rightarrow c_{12}(d_{11} + d_{22}) = -d_{12}(c_{11} + c_{22})
c_{21}d_{11} + c_{21}d_{22} = -d_{21}c_{11}  - d_{21}c_{22} \Rightarrow c_{21}(d_{11} + d_{22}) = -d_{21}(c_{11}  + c_{22})

The easiest way to satisfy the resulting equations is to set

c_{11} = c_{22} = d_{11} = d_{22} = 0 \Rightarrow c_{11}  + c_{22} = d_{11} + d_{22} = 0

We then have

\begin{bmatrix} 0&c_{12} \\ c_{21}&0 \end{bmatrix} \begin{bmatrix} 0&d_{12} \\ d_{21}&0 \end{bmatrix} = - \begin{bmatrix} 0&d_{12} \\ d_{21}&0 \end{bmatrix} \begin{bmatrix} 0&c_{12} \\ c_{21}&0 \end{bmatrix}

which gives us the following equations:

c_{12}d_{21} = -d_{12}c_{21}
c_{21}d_{12} = -d_{21}c_{12}

which reduce to the single equation c_{12}d_{21} = -c_{21}d_{12}. If we set c_{12} = c_{21} = c where c \ne 0 then we have d_{21} = -d_{12}. If we set d_{12} = d where d \ne 0 then d_{21} = -d. We then have the following matrices C and D:

C = \begin{bmatrix} 0&c \\ c&0 \end{bmatrix} \quad D = \begin{bmatrix} 0&d \\ -d&0 \end{bmatrix}

with

CD = \begin{bmatrix} 0&c \\ c&0 \end{bmatrix} \begin{bmatrix} 0&d \\ -d&0 \end{bmatrix} = \begin{bmatrix} -cd&0 \\ 0&cd \end{bmatrix} \ne 0 \: \rm if \: c \ne 0, d \ne 0

and

DC = \begin{bmatrix} 0&d \\ -d&0 \end{bmatrix} \begin{bmatrix} 0&c \\ c&0 \end{bmatrix} = \begin{bmatrix} cd&0 \\ 0&-cd \end{bmatrix} = - \begin{bmatrix} -cd&0 \\ 0&cd \end{bmatrix} = -CD

One example of C and D can be found by setting c = d = 1:

C = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} \quad D = \begin{bmatrix} 0&1 \\ -1&0 \end{bmatrix}

(d) Using the result of (b) above, if we set

E = F = \begin{bmatrix} 1&-1 \\ 1&-1 \end{bmatrix}

then we will have EF = 0 with both E and F having all nonzero entries.
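All four specific examples derived above can be verified at once; this NumPy sketch (an addition to the post) checks each required property:

```python
import numpy as np

A = np.array([[0, 1], [-1, 0]])   # part (a): A^2 = -I
B = np.array([[1, -1], [1, -1]])  # part (b): nonzero with B^2 = 0
C = np.array([[0, 1], [1, 0]])    # part (c): CD = -DC with CD nonzero
D = np.array([[0, 1], [-1, 0]])
E = F = B                         # part (d): all-nonzero entries, EF = 0

assert np.array_equal(A @ A, -np.eye(2))
assert np.array_equal(B @ B, np.zeros((2, 2)))
assert np.array_equal(C @ D, -(D @ C)) and np.any(C @ D)
assert np.array_equal(E @ F, np.zeros((2, 2)))
```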
