Linear Algebra and Its Applications, Review Exercise 2.5

Review exercise 2.5. Given the matrices

A = \begin{bmatrix} 0&0&1 \\ 0&0&1 \\ 1&1&1 \end{bmatrix} \qquad B = \begin{bmatrix} 0&0&1&2 \\ 0&0&1&2 \\ 1&1&1&0 \end{bmatrix}

find their ranks and nullspaces.

Answer: We can use elimination to reduce A to echelon form. We first exchange the first and third rows:

\begin{bmatrix} 0&0&1 \\ 0&0&1 \\ 1&1&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&1 \\ 0&0&1 \\ 0&0&1 \end{bmatrix}

and then subtract 1 times the second row from the third row:

\begin{bmatrix} 1&1&1 \\ 0&0&1 \\ 0&0&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&1 \\ 0&0&1 \\ 0&0&0 \end{bmatrix}

The resulting echelon matrix has two pivots and thus rank 2; this is also the rank of A.

Solving for the nullspace, we have x_1 and x_3 as basic variables and x_2 as a free variable. Setting x_2 = 1, from the second row of the echelon matrix we have x_3 = 0 and from the first row we have x_1 + x_2 + x_3 = x_1 + 1 + 0 = 0 or x_1 = -1.

So the nullspace \mathcal{N}(A) is the 1-dimensional subspace of \mathbb{R}^3 with basis vector (-1, 1, 0). (In other words, the nullspace of A is the line passing through the origin and the point (-1, 1, 0).)

Similarly we can use elimination to reduce B to echelon form. We first exchange the first and third rows:

\begin{bmatrix} 0&0&1&2 \\ 0&0&1&2 \\ 1&1&1&0 \end{bmatrix} \Rightarrow\begin{bmatrix} 1&1&1&0 \\ 0&0&1&2 \\ 0&0&1&2 \end{bmatrix}

and then subtract 1 times the second row from the third row:

\begin{bmatrix} 1&1&1&0 \\ 0&0&1&2 \\ 0&0&1&2 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&1&0 \\ 0&0&1&2 \\ 0&0&0&0 \end{bmatrix}

The resulting echelon matrix has two pivots and thus rank 2; this is also the rank of B.

Solving for the nullspace, we have x_1 and x_3 as basic variables and x_2 and x_4 as free variables. Setting x_2 = 1 and x_4 = 0, from the second row of the echelon matrix we have x_3 + 2 x_4 = x_3 + 2 \cdot 0 = 0 or x_3 = 0, and from the first row we have x_1 + x_2 + x_3 = x_1 + 1 + 0 = 0 or x_1 = -1.

Setting x_2 = 0 and x_4 = 1, from the second row of the echelon matrix we have x_3 + 2 x_4 = x_3 + 2 \cdot 1 = 0 or x_3 = -2, and from the first row we have x_1 + x_2 + x_3 = x_1 + 0 - 2 = 0 or x_1 = 2.

So the nullspace \mathcal{N}(B) is the 2-dimensional subspace of \mathbb{R}^4 with basis vectors (-1, 1, 0, 0) and (2, 0, -2, 1).
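As a quick sanity check (this uses SymPy, my addition rather than anything from the book), we can have the computer find the ranks and nullspaces directly:

```python
from sympy import Matrix

A = Matrix([[0, 0, 1], [0, 0, 1], [1, 1, 1]])
B = Matrix([[0, 0, 1, 2], [0, 0, 1, 2], [1, 1, 1, 0]])

print(A.rank())       # 2
print(A.nullspace())  # one basis vector: (-1, 1, 0)
print(B.rank())       # 2
print(B.nullspace())  # two basis vectors: (-1, 1, 0, 0) and (2, 0, -2, 1)
```

SymPy's nullspace method returns exactly the special solutions we computed by hand, one per free variable.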

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 2.4

Review exercise 2.4. Given the matrix

A = \begin{bmatrix} 1&2&0&2&1 \\ -1&-2&1&1&0 \\ 1&2&-3&-7&-2\end{bmatrix}

find its echelon form U and the dimensions of the column space, nullspace, row space, and left nullspace of A.

Answer: We perform elimination on A to reduce it to echelon form. In the first step we multiply the first row of A by -1 and subtract it from the second row:

\begin{bmatrix} 1&2&0&2&1 \\ -1&-2&1&1&0 \\ 1&2&-3&-7&-2\end{bmatrix}

\Rightarrow \begin{bmatrix} 1&2&0&2&1 \\ 0&0&1&3&1 \\ 1&2&-3&-7&-2\end{bmatrix}

and then multiply the first row by 1 and subtract it from the third row:

\begin{bmatrix} 1&2&0&2&1 \\ 0&0&1&3&1 \\ 1&2&-3&-7&-2\end{bmatrix}

\Rightarrow \begin{bmatrix} 1&2&0&2&1 \\ 0&0&1&3&1 \\ 0&0&-3&-9&-3\end{bmatrix}

Finally we multiply the second row by 1 and subtract it from the third row:

\begin{bmatrix} 1&2&0&2&1 \\ 0&0&1&3&1 \\ 0&0&-3&-9&-3\end{bmatrix}

\Rightarrow \begin{bmatrix} 1&2&0&2&1 \\ 0&0&1&3&1 \\ 0&0&0&0&0\end{bmatrix}

The resulting matrix

U = \begin{bmatrix} 1&2&0&2&1 \\ 0&0&1&3&1 \\ 0&0&0&0&0\end{bmatrix}

is in echelon form with pivots in columns 1 and 3.

Since U has two pivots, its rank is r = 2. The rank of A is the same as the rank of U, so the rank of A is also r = 2. This is the dimension of the column space of A.

The number of columns of A is n = 5 so the dimension of the nullspace of A is n - r = 5 - 2 = 3.

The dimension of the row space of A is the same as that of the column space of A, so the dimension of the row space of A is also r = 2.

Finally, the number of rows of A is m = 3 so the dimension of the left nullspace of A is m - r = 3 - 2 = 1.
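For readers who want to verify all four dimensions by machine, here is a short SymPy sketch (again my addition, not the book's):

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 2, 1], [-1, -2, 1, 1, 0], [1, 2, -3, -7, -2]])
m, n = A.shape   # m = 3 rows, n = 5 columns
r = A.rank()     # r = 2

print(r, n - r, r, m - r)     # 2 3 2 1: column space, nullspace, row space, left nullspace
print(len(A.nullspace()))     # 3 basis vectors, agreeing with n - r
print(len(A.T.nullspace()))   # 1 basis vector, agreeing with m - r
```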

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 2.3

Review exercise 2.3. State whether each of the following is true or false. If false, provide a counterexample.

i) If a subspace S is spanned by a set of m vectors x_1 through x_m then the dimension of S is m.

ii) If S_1 and S_2 are two subspaces of a vector space V then the intersection of S_1 and S_2 is nonempty.

iii) For any matrix A if Ax = Ay then we have x = y.

iv) For any matrix A if A is reduced to echelon form then the rows of the resulting matrix U form a unique basis for the row space of A.

v) If A is a square matrix and the columns of A are linearly independent then the columns of A^2 are also linearly independent.

Answer: i) The statement is false. The dimension of S is the number of vectors in its basis, and the basis vectors have to be linearly independent. However it is possible that some of the vectors in the spanning set may be linear combinations of other vectors in the set; in that case m would be larger than the number of basis vectors, and thus larger than the dimension of S. For example, the vectors (1, 0), (0, 1), and (1, 1) span \mathbb{R}^2 but the dimension of \mathbb{R}^2 is 2, not 3.

ii) The statement is true. Every vector space V contains the zero vector. Since S_1 and S_2 are subspaces they are also vector spaces in their own right, and therefore both contain the zero vector also. So the intersection of S_1 and S_2 is guaranteed to contain (at least) the zero vector and thus will always be nonempty.

iii) The statement is false. The matrix A could be the zero matrix, in which case we would have Ax = Ay = 0 no matter what values x and y had.

iv) The statement is false. The row space of A is the same as the row space of U (since the rows of U are linear combinations of the rows of A) and the (nonzero) rows of U do form a basis for the row space of A. However this basis is not unique.

For example, suppose that

A = \begin{bmatrix} 1&0 \\ 1&2 \end{bmatrix}

Then A can be reduced to echelon form as

U = \begin{bmatrix} 1&0 \\ 0&2 \end{bmatrix}

The vectors (1, 0) and (0, 2) form a basis for the row space of A but this basis is not unique. For example, the original rows (1, 0) and (1, 2) are also linearly independent and form a basis for the row space of A.

v) The statement is true. If the columns of A are linearly independent then A is nonsingular and has an inverse A^{-1}. (See the discussion on page 98.) We then have

(A^{-1})^2A^2 = A^{-1}(A^{-1}A)A = A^{-1}IA = A^{-1}A = I

and also

A^2(A^{-1})^2 = A(AA^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I

So (A^{-1})^2 is both a left and right inverse for A^2 and we see that A^2 is invertible with (A^2)^{-1} = (A^{-1})^2. But if A^2 is invertible then it is nonsingular and its columns are linearly independent.
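A quick numerical spot-check of this argument (a NumPy sketch of my own, with the matrix from part (iv) standing in for a general nonsingular A):

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 2.0]])   # columns are linearly independent

# (A^2)^{-1} should equal (A^{-1})^2 ...
print(np.allclose(np.linalg.inv(A @ A), np.linalg.inv(A) @ np.linalg.inv(A)))  # True

# ... and the columns of A^2 should be linearly independent (rank 2)
print(np.linalg.matrix_rank(A @ A))      # 2
```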

UPDATE: Corrected a typo in the answer to (iv) (a reference to A should have been a reference to U).

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 2.2

Review exercise 2.2. Find a basis for a two-dimensional subspace of \mathbb{R}^3 that does not contain (1, 0, 0), (0, 1, 0), or (0, 0, 1).

Answer: One approach is to come up with a linear system that has a two-dimensional nullspace that excludes the coordinate vectors. For the nullspace to be two-dimensional there must be two free variables and only one basic variable; we assume that x_2 and x_3 are free variables and x_1 is the basic variable.

One possible system consists of the single equation x_1 + x_2 + x_3 = 0. The coordinate vectors are not solutions to this equation (for each of them the entries sum to 1, not 0) and hence are not part of the nullspace.

If we set x_2 = 1 and x_3 = 0 then we have x_1 = -1. If we set x_2 = 0 and x_3 = 1 then we again have x_1 = -1. The two vectors (-1, 1, 0) and (-1, 0, 1) are thus basis vectors for the nullspace.
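As an aside, SymPy produces the same basis from the equation's coefficient matrix (a sketch, assuming SymPy is available):

```python
from sympy import Matrix

A = Matrix([[1, 1, 1]])   # the single equation x_1 + x_2 + x_3 = 0
print(A.nullspace())      # basis vectors (-1, 1, 0) and (-1, 0, 1)

# None of the coordinate vectors lies in the subspace:
for e in (Matrix([1, 0, 0]), Matrix([0, 1, 0]), Matrix([0, 0, 1])):
    print(A * e)          # Matrix([[1]]) each time, not zero
```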

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 2.1

Review exercise 2.1. For each of the following subspaces of \mathbb{R}^4 find a suitable set of basis vectors:

a) all vectors for which x_1 = 2x_4

b) all vectors for which x_1+x_2+x_3 = 0 and x_3+x_4=0

c) all vectors consisting of linear combinations of the vectors (1, 1, 1, 1), (1, 2, 3, 4), and (2, 3, 4, 5)

Answer: a) Since there are no constraints on x_2, x_3, and x_4 they may assume any value. (In other words, x_2, x_3, and x_4 are all free variables and only x_1 is a basic variable.) If x_2 = x_3 = 0 and x_4 = 1 then the vector (2, 0, 0, 1) is in the subspace and can serve as a basis vector. If x_2 = 1 and x_3 = x_4 = 0 then we obtain a second basis vector (0, 1, 0, 0) and if x_2 = x_4 = 0 and x_3 = 1 then we obtain a third basis vector (0, 0, 1, 0).

The vectors (2, 0, 0, 1), (0, 1, 0, 0), and (0, 0, 1, 0) thus form a basis for the subspace.

b) The two equations form the linear system

\begin{bmatrix} 1&1&1&0 \\ 0&0&1&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = 0

In this system x_1 and x_3 are basic variables and x_2 and x_4 are free variables. If we set x_2 = 1 and x_4 = 0 then we have x_3 = 0 and x_1 = -1. If we set x_2 = 0 and x_4 = 1 then we have x_3 = -1 and x_1 = 1.

The two vectors (-1, 1, 0, 0) and (1, 0, -1, 1) are thus basis vectors for the subspace (which happens to be the nullspace of the matrix above).

c) We have (2, 3, 4, 5) = (1, 1, 1, 1) + (1, 2, 3, 4). So of the three vectors only two are linearly independent, and (1, 1, 1, 1) and (1, 2, 3, 4) can serve as a basis for the subspace spanned by the vectors.
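All three answers are easy to confirm with SymPy (a sketch of my own, not part of the book's exercise):

```python
from sympy import Matrix

# a) the constraint x_1 - 2x_4 = 0
print(Matrix([[1, 0, 0, -2]]).nullspace())   # (0,1,0,0), (0,0,1,0), (2,0,0,1)

# b) the two constraints as a coefficient matrix
print(Matrix([[1, 1, 1, 0], [0, 0, 1, 1]]).nullspace())   # (-1,1,0,0), (1,0,-1,1)

# c) the three spanning vectors have rank 2, so two of them form a basis
print(Matrix([[1, 1, 1, 1], [1, 2, 3, 4], [2, 3, 4, 5]]).rank())   # 2
```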

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.6.21

Exercise 2.6.21. Consider the transformation from \mathbb{R}^3 to \mathbb{R}^3 that takes (x_1, x_2, x_3) into (x_2, x_3, x_1). What is the axis of rotation for the transformation? What is the angle of rotation?

Answer: We can approach this problem in at least two ways. The first way is to look at the effect that the transformation has on the unit vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1) along the x, y, and z axes respectively.

To begin with, the transformation will send (0, 1, 0) to (1, 0, 0), in essence taking all points on the y-axis to the x-axis. One rotation that does this is the rotation through 90 degrees with the z-axis as the axis of rotation; however this rotation would send (0, 0, 1) to (0, 0, 1) (in other words, leave it unchanged), which is incorrect. Another possible rotation is the rotation through 180 degrees with the axis of rotation being the 45-degree line y = x in the xy plane; however this rotation would send (0, 0, 1) to (0, 0, -1), which is also incorrect. We conclude that the axis of rotation must be somewhere between the xy plane and the z-axis.

Similarly, the transformation will send (0, 0, 1) to (0, 1, 0), taking points on the z-axis to the y-axis. Two possible rotations that do this are the rotation through 90 degrees with the x-axis as the axis of rotation and the rotation through 180 degrees with the axis of rotation being the 45-degree line z = y in the yz plane; however these rotations do not correctly transform the other unit vectors (for example, neither sends (1, 0, 0) to (0, 0, 1) as required), and so we conclude that the axis of rotation must be somewhere between the yz plane and the x-axis.

A similar argument based on the transformation sending (1, 0, 0) to (0, 0, 1) leads us to conclude that the axis of rotation must be somewhere between the xz plane and the y-axis.

So the axis of rotation must lie off the coordinate axes and the coordinate planes, and since the transformation is symmetric with respect to the coordinates we conclude that the axis of rotation makes equal angles with each of the coordinate axes, and likewise with each of the coordinate planes. The obvious candidate for the axis of rotation is then the line x = y = z that passes through the origin and the point (1, 1, 1); it makes an angle of about 54.7 degrees with each of the axes (since \cos \theta = 1/\sqrt{3}) and about 35.3 degrees with each of the coordinate planes.

The second and simpler way to determine the axis of rotation is to recall that points on the axis of rotation remain unchanged by a rotation transformation. The transformation in question simply shifts each value to the left one position, so if the three values in a vector are all equal then the vector would remain unchanged by the transformation. We therefore conclude that the axis of rotation is the line for which x = y = z, in agreement with the argument above.

Note that applying the transformation once sends (x_1, x_2, x_3) into (x_2, x_3, x_1), applying it again to the resulting vector sends (x_2, x_3, x_1) into (x_3, x_1, x_2), and applying it a third time sends (x_3, x_1, x_2) into (x_1, x_2, x_3), the original vector. Applying the transformation three times thus corresponds to a rotation of 360 degrees, so applying the transformation once corresponds to a rotation of 120 degrees.
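Both conclusions can be checked numerically. Here is a NumPy sketch (my addition); it uses the standard fact that a 3 by 3 rotation matrix R satisfies \mathrm{trace}(R) = 1 + 2\cos\theta:

```python
import numpy as np

# The transformation taking (x_1, x_2, x_3) into (x_2, x_3, x_1)
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

# The axis of rotation consists of the fixed vectors: eigenvectors for eigenvalue 1
vals, vecs = np.linalg.eig(P)
axis = np.real(vecs[:, np.isclose(vals, 1)]).flatten()
print(axis / axis[0])                     # [1. 1. 1.], the line x = y = z

# The angle of rotation from the trace: trace(P) = 1 + 2 cos(theta)
theta = np.degrees(np.arccos((np.trace(P) - 1) / 2))
print(theta)                              # 120.0
```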

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.6.20

Exercise 2.6.20. A nonlinear transformation f from a vector space V to a vector space W is invertible if a) for any b in W there exists some x in V such that f(x) = b, and b) for any x and y in V, f(x) = f(y) implies that x = y. Describe which of the following transformations from \mathbb{R} to \mathbb{R} are invertible, and explain why or why not:

a) f(x) = x^3

b) f(x) = e^x

c) f(x) = x+11

d) f(x) = \cos x

Answer: a) If b is any real number then x = \sqrt[3]{b} exists and is a unique solution to x^3 = b. The transformation f(x) = x^3 is therefore invertible.

b) We have e^x > 0 for all x in \mathbb{R}, so if b \le 0 then there is no x for which e^x = b. The transformation f(x) = e^x is therefore not invertible.

c) If b is any real number then b-11 exists and is a unique solution to x+11 = b. The transformation f(x) = x+11 is therefore invertible.

d) We have -1 \le \cos x \le 1 for all x in \mathbb{R}, so if b < -1 or b > 1 then there is no x for which \cos x = b. Also note that even for -1 \le b \le 1 solutions to \cos x = b are not unique, since (for example) \cos 0 = \cos 2\pi = 1. The transformation f(x) = \cos x is therefore not invertible.
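These one-variable claims can also be checked in SymPy by solving f(x) = b over the reals for representative values of b (a sketch of my own, assuming SymPy):

```python
from sympy import symbols, solveset, exp, cos, S

x = symbols('x', real=True)

print(solveset(x**3 - 8, x, domain=S.Reals))    # {2}: a unique solution exists
print(solveset(x + 11 - 8, x, domain=S.Reals))  # {-3}: a unique solution exists
print(solveset(exp(x) + 1, x, domain=S.Reals))  # EmptySet: e^x = -1 has no solution
print(solveset(cos(x) - 1, x, domain=S.Reals))  # infinitely many solutions 2*pi*n
```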

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.6.19

Exercise 2.6.19. Let V be the vector space consisting of all cubic polynomials of the form a_0 + a_1x + a_2x^2 + a_3x^3 and let S be the subset of V consisting of only those cubic polynomials for which \int_0^1 p(x) \,dx = 0. Show that S is a subspace of V and find a set of basis vectors for S.

Answer: For S to be a subspace it must be closed under both vector addition and scalar multiplication. First, suppose p is a member of S and consider the vector cp for any scalar c.  By the constant factor rule in integration we then have

\int_0^1 cp \,dx = c \int_0^1 p \,dx = c \cdot 0 = 0

So cp is also a member of S and S is closed under scalar multiplication.

Second, suppose q is also a member of S and consider the vector p+q. By the sum rule in integration we have

\int_0^1 (p+q) \,dx = \int_0^1 p \,dx + \int_0^1 q \,dx = 0+0 = 0

So p+q is also a member of S and S is closed under vector addition.

Since S is closed under both scalar multiplication and vector addition it is a subspace of V.

We now attempt to find a set of basis vectors for S. If p is in S then it is in the form a_0 + a_1x + a_2x^2 + a_3x^3. Taking the indefinite integral of p we have

\int p \,dx = \int (a_0 + a_1x + a_2x^2 + a_3x^3) \,dx

= \int a_0 \,dx + \int a_1x \,dx + \int a_2x^2 \,dx + \int a_3x^3 \,dx

= a_0x + \frac{1}{2}a_1x^2 + \frac{1}{3}a_2x^3 + \frac{1}{4}a_3x^4 + C

where C is a constant.

The definite integral of p over the interval from 0 to 1 is then

(a_0 \cdot 1 + \frac{1}{2}a_1 \cdot 1^2 + \frac{1}{3}a_2 \cdot 1^3 + \frac{1}{4}a_3 \cdot 1^4 + C)

- (a_0 \cdot 0 + \frac{1}{2}a_1 \cdot 0^2 + \frac{1}{3}a_2 \cdot 0^3 + \frac{1}{4}a_3 \cdot 0^4 + C)

= a_0 + \frac{1}{2}a_1 + \frac{1}{3}a_2 + \frac{1}{4}a_3

Since p is a member of S we then have

\int_0^1 p \,dx = a_0 + \frac{1}{2}a_1 + \frac{1}{3}a_2 + \frac{1}{4}a_3 = 0

We can create a set of basis vectors for S by constructing vectors \begin{bmatrix} a_0&a_1&a_2&a_3 \end{bmatrix}^T meeting this criterion. For the first basis vector we arbitrarily set a_0 = 1 and a_2 = a_3 = 0. We then have

a_0 + \frac{1}{2}a_1 + \frac{1}{3}a_2 + \frac{1}{4}a_3 = 1 + \frac{1}{2}a_1 = 0

\rightarrow \frac{1}{2}a_1 = -1 \rightarrow a_1 = -2

So our first basis vector is \begin{bmatrix} 1&-2&0&0 \end{bmatrix}^T corresponding to the polynomial 1 - 2x.

For the second basis vector we arbitrarily set a_1 = 1 and a_0 = a_3 = 0. We then have

a_0 + \frac{1}{2}a_1 + \frac{1}{3}a_2 + \frac{1}{4}a_3 = \frac{1}{2} + \frac{1}{3}a_2 = 0

\rightarrow \frac{1}{3}a_2 = -\frac{1}{2} \rightarrow a_2 = -\frac{3}{2}

So our second basis vector is \begin{bmatrix} 0&1&-\frac{3}{2}&0 \end{bmatrix}^T corresponding to the polynomial x - \frac{3}{2}x^2. Note that this vector is linearly independent of the first basis vector since it includes a term in x^2 that the first vector lacks.

For the third basis vector we arbitrarily set a_2 = 1 and a_0 = a_1 = 0. We then have

a_0 + \frac{1}{2}a_1 + \frac{1}{3}a_2 + \frac{1}{4}a_3 = \frac{1}{3} + \frac{1}{4}a_3 = 0

\rightarrow \frac{1}{4}a_3 = -\frac{1}{3} \rightarrow a_3 = -\frac{4}{3}

So our third basis vector is \begin{bmatrix} 0&0&1&-\frac{4}{3} \end{bmatrix}^T corresponding to the polynomial x^2 - \frac{4}{3}x^3. Note that this vector is linearly independent of the first and second basis vectors since it includes a term in x^3 that those vectors lack.

The subspace S can have at most three basis vectors. (If S had four basis vectors then, since the corresponding coefficient vectors would be linearly independent, they would span \mathbb{R}^4 and we would have S = V. But this is not the case: for example, the constant polynomial 1 is not in S, since \int_0^1 1 \,dx = 1 \neq 0.) The following vectors can thus serve as a basis for S:

\begin{bmatrix} 1 \\ -2 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 1 \\ -\frac{3}{2} \\ 0 \end{bmatrix}\qquad \begin{bmatrix} 0 \\ 0 \\ 1 \\ -\frac{4}{3} \end{bmatrix}

corresponding to the polynomials 1 - 2x, x - \frac{3}{2}x^2, and x^2 - \frac{4}{3}x^3 respectively.
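As a final check, SymPy confirms that each basis polynomial integrates to zero over the interval (a quick sketch, assuming SymPy):

```python
from sympy import symbols, integrate, Rational

x = symbols('x')
basis = [1 - 2*x, x - Rational(3, 2)*x**2, x**2 - Rational(4, 3)*x**3]

for p in basis:
    print(integrate(p, (x, 0, 1)))   # 0 for each of the three polynomials
```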

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.6.18

Exercise 2.6.18. Given a vector \begin{bmatrix} x_1&x_2&x_3 \end{bmatrix}^T in \mathbb{R}^3 find a matrix A that produces a corresponding vector \begin{bmatrix} 0&x_1&x_2&x_3 \end{bmatrix}^T in \mathbb{R}^4 in which all entries are shifted right one place. Find a second matrix B that takes a vector \begin{bmatrix} x_1&x_2&x_3&x_4 \end{bmatrix}^T in \mathbb{R}^4 and produces the vector \begin{bmatrix} x_2&x_3&x_4 \end{bmatrix}^T in \mathbb{R}^3 in which all entries are shifted left one place. What are the product matrices AB and BA and what effects do they have?

Answer: We can construct A by considering its effect on the elementary vectors in \mathbb{R}^3

e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \qquad e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \qquad e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

Applying A to each of e_1 through e_3 respectively will shift each of their entries to the right (or down, depending on your point of view) and produce the following vectors in \mathbb{R}^4:

\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}

As discussed in the answer to exercise 2.6.16, we can do this by having each column of A be the vector into which the corresponding elementary vector should be transformed. We thus have

A = \begin{bmatrix} 0&0&0 \\ 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix}

so that

Ax = \begin{bmatrix} 0&0&0 \\ 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix}

Now consider B. We can construct B by considering its effect on the elementary vectors in \mathbb{R}^4

e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \qquad e_4 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}

Applying B to each of e_1 through e_4 respectively will shift each of their entries to the left (or up) and produce the following vectors in \mathbb{R}^3:

\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

We thus have

B = \begin{bmatrix} 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}

so that

Bx = \begin{bmatrix} 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} x_2 \\ x_3 \\ x_4 \end{bmatrix}

Constructing the product matrices we have

AB = \begin{bmatrix} 0&0&0 \\ 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} = \begin{bmatrix} 0&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}

and

BA = \begin{bmatrix} 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} \begin{bmatrix} 0&0&0 \\ 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix}= I

The product matrix BA corresponds to taking a vector \begin{bmatrix} x_1&x_2&x_3 \end{bmatrix}^T in \mathbb{R}^3, shifting the entries right to produce a corresponding vector \begin{bmatrix} 0&x_1&x_2&x_3 \end{bmatrix}^T in \mathbb{R}^4, and then shifting the entries left to recover the original vector \begin{bmatrix} x_1&x_2&x_3 \end{bmatrix}^T in \mathbb{R}^3. So BA preserves all vectors and is thus equal to the identity matrix on \mathbb{R}^3.

On the other hand, the product matrix AB corresponds to taking a vector \begin{bmatrix} x_1&x_2&x_3&x_4 \end{bmatrix}^T in \mathbb{R}^4, shifting the entries left to produce a corresponding vector \begin{bmatrix} x_2&x_3&x_4 \end{bmatrix}^T in \mathbb{R}^3, and then shifting the entries right to produce the vector \begin{bmatrix} 0&x_2&x_3&x_4 \end{bmatrix}^T in \mathbb{R}^4. So the net effect of AB is to change the first entry of a vector in \mathbb{R}^4 to zero and preserve the second, third, and fourth entries.
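Here is a short NumPy sketch (my addition) confirming the two products and their effects:

```python
import numpy as np

A = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])          # shift right: R^3 -> R^4
B = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])       # shift left: R^4 -> R^3

print(np.array_equal(B @ A, np.eye(3, dtype=int)))   # True: BA = I on R^3

x = np.array([1, 2, 3, 4])
print(A @ B @ x)                   # [0 2 3 4]: AB zeroes the first entry
```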

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.6.17

Exercise 2.6.17. Find a matrix A corresponding to the linear transformation of cyclically permuting vectors in \mathbb{R}^4 such that A applied to \begin{bmatrix} x_1&x_2&x_3&x_4 \end{bmatrix}^T produces \begin{bmatrix} x_2&x_3&x_4&x_1 \end{bmatrix}^T. Determine the effect of A^2 and A^3 and explain why A^3 = A^{-1}.

Answer: We can construct A by considering its effect on the elementary vectors in \mathbb{R}^4

e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \qquad e_4 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}

Applying the given cyclic permutation is equivalent to shifting each entry of a vector up or to the left (depending on how you look at it), so applying A to each of e_1 through e_4 will produce the following vectors respectively:

\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \qquad \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}

In other words we must have Ae_1 = e_4, Ae_2 = e_1, Ae_3 = e_2, and Ae_4 = e_3.

As discussed in the previous exercise, we can do this by having each column of A be the vector into which the corresponding elementary vector should be transformed. In other words, the first column of A should be e_4 (since A transforms e_1 into e_4), the second column should be e_1 (since A transforms e_2 into e_1), and similarly the third and fourth columns should be e_2 and e_3 respectively.

We thus have

A = \begin{bmatrix} 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \\ 1&0&0&0 \end{bmatrix}

Consider A^2. We have

A^2e_1 = A(Ae_1) = Ae_4 = e_3

A^2e_2 = A(Ae_2) = Ae_1 = e_4

A^2e_3 = A(Ae_3) = Ae_2 = e_1

A^2e_4 = A(Ae_4) = Ae_3 = e_2

Constructing the columns of A^2 as we did for A we see that

A^2 = \begin{bmatrix} 0&0&1&0 \\ 0&0&0&1 \\ 1&0&0&0 \\ 0&1&0&0 \end{bmatrix}

and that A^2 has the effect of shifting the entries of a vector up (or to the left) two places:

A^2x = \begin{bmatrix} 0&0&1&0 \\ 0&0&0&1 \\ 1&0&0&0 \\ 0&1&0&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} x_3 \\ x_4 \\ x_1 \\ x_2 \end{bmatrix}

For A^3 we have

A^3e_1 = A^2(Ae_1) = A^2e_4 = e_2

A^3e_2 = A^2(Ae_2) = A^2e_1 = e_3

A^3e_3 = A^2(Ae_3) = A^2e_2 = e_4

A^3e_4 = A^2(Ae_4) = A^2e_3 = e_1

Constructing the columns of A^3 as we did for A and A^2 we see that

A^3 = \begin{bmatrix} 0&0&0&1 \\ 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \end{bmatrix}

and that A^3 has the effect of shifting the entries of a vector up (or to the left) three places:

A^3x = \begin{bmatrix} 0&0&0&1 \\ 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} x_4 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix}

Finally for A^4 we have

A^4e_1 = A^3(Ae_1) = A^3e_4 = e_1

A^4e_2 = A^3(Ae_2) = A^3e_1 = e_2

A^4e_3 = A^3(Ae_3) = A^3e_2 = e_3

A^4e_4 = A^3(Ae_4) = A^3e_3 = e_4

so that

A^4 = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} = I

In other words, A^4 has the effect of shifting the entries of vectors in \mathbb{R}^4 up (or to the left) four places, restoring the original vectors.

We then have A^3A = A^4 = I and AA^3 = A^4 = I. Since A^3 is both a left and right inverse of A we have A^3 = A^{-1}.
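A NumPy sketch (my addition) makes the cycle easy to see:

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])

x = np.array([1, 2, 3, 4])
print(A @ x)                                  # [2 3 4 1]: one cyclic shift
print(np.linalg.matrix_power(A, 4))           # the 4x4 identity matrix
print(np.allclose(np.linalg.matrix_power(A, 3), np.linalg.inv(A)))   # True: A^3 = A^{-1}
```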

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
