Linear Algebra and Its Applications, Exercise 2.1.8

Exercise 2.1.8. Consider the following system of linear equations:

Ax = \begin{bmatrix} 1&1&1 \\ 1&0&2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

Does the set of solutions x form a point, line, or plane? Is it a subspace? Is it the nullspace of A? The column space of A?

Answer: The system Ax = 0 corresponds to

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr}x_1&+&x_2&+&x_3&=&0 \\ x_1&&&+&2x_3&=&0 \end{array}

From the second equation we have x_1 = -2x_3 and we can substitute into the first equation to obtain -2x_3 + x_2 + x_3 = 0 or x_2 - x_3 = 0. We therefore have x_2 = x_3 so that the set of solutions x can be represented as

x = \begin{bmatrix} -2x_3 \\ x_3 \\ x_3 \end{bmatrix} = x_3 \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}

where x_3 is a free variable. The set of solutions x is therefore a line passing through the origin and the point (-2, 1, 1).

Since the solution set is a line passing through the origin it is a subspace, and since it consists of all vectors x for which Ax = 0 it is by definition the nullspace \mathcal{N}(A) of A (see page 68).

The column space of A is the set of all vectors v that are linear combinations of the columns of A:

v = c_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_3 \begin{bmatrix} 1 \\ 2 \end{bmatrix}

and thus contains 2 by 1 vectors. All solution vectors x are 3 by 1 vectors; the set of solutions x is not the same as the column space of A.
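As a quick check, here is a short Python/SymPy sketch (my addition, assuming SymPy is available; not part of the original exercise) that computes the nullspace and column space of A directly:

```python
from sympy import Matrix

A = Matrix([[1, 1, 1],
            [1, 0, 2]])

# SymPy returns a basis for the nullspace N(A); here it is the
# single vector (-2, 1, 1), confirming the solution line above.
print(A.nullspace())    # [Matrix([[-2], [1], [1]])]

# The column space is spanned by 2 by 1 vectors -- a different space entirely.
print(A.columnspace())  # [Matrix([[1], [1]]), Matrix([[1], [0]])]
```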

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.1.7

Exercise 2.1.7. Of the following, which are subspaces of \mathbf{R}^{\infty}?

(a) the set of all sequences that include infinitely many zeros, e.g., (1,0,1,0,\dotsc)

(b) the set of all sequences of the form (x_1,x_2,\dotsc) where x_j = 0 from some point onward

(c) the set of all decreasing sequences, i.e., x_{j+1} \le x_{j} for all j

(d) the set of all sequences such that x_j converges to a limit as j \rightarrow \infty

(e) the set of all arithmetic progressions for which x_{j+1} - x_j is constant for all j

(f) the set of all geometric progressions of the form (x_1, kx_1, k^2x_1, \dotsc) for any choice of x_1 and k

Answer: (a) This set is not a subspace because it is not closed under addition: If we have x = (1,0,1,0,\dotsc) and y = (0,1,0,1,\dotsc) then both x and y are in the set but their sum x+y = (1,1,1,1,\dotsc) is not.

(b) This set is closed under scalar multiplication: Consider x = (x_1, x_2, \dotsc) and suppose for some i we have x_k = 0 for k \ge i. We then have cx = (cx_1, cx_2, \dotsc). Since we have x_k = 0 for k \ge i we also have cx_k = 0 for k \ge i. So c x is also a member of the set.

This set is also closed under vector addition. Consider x from above and y where y = (y_1, y_2, \dotsc) and for some j we have y_k = 0 for k \ge j, and consider the sum x+y = (x_1+y_1, x_2+y_2,\dotsc). Choose l so that l \ge i and l \ge j. Then x_k = 0 for k \ge l and y_k = 0 for k \ge l so that x_k+y_k = 0 for k \ge l. The sum x+y is therefore also a member of the set.

Since the set is closed under both vector addition and scalar multiplication and it is a subset of the vector space of infinite sequences, it is a subspace of that vector space.

(c) This set is not a subspace because it is not closed under scalar multiplication: The sequence x = (0, -1, -2, \dotsc) is a member of the set, but -x = (0, 1, 2, \dotsc) is not.

(d) We first check that the set of converging sequences is closed under scalar multiplication. Let x be a member of this set, so that l(x) = \lim_{j \rightarrow \infty} x_j exists. Then for any \epsilon > 0 there exists n such that \left| l(x) - x_j \right| < \epsilon for j > n. Now consider c x where c is any scalar. If c = 0 then c x = (0, 0, \dotsc) so that it converges to the limit 0.

Suppose that c \ne 0 and choose any \epsilon > 0. Since l(x) exists we can choose n such that \left| l(x) - x_j \right| < \epsilon/\left|c\right| for j > n. Multiplying both sides by \left|c\right| we have \left|c\right| \left| l(x) - x_j \right| < \epsilon. But \left|c\right| \left| l(x) - x_j \right| = \left|c(l(x)-x_j)\right| = \left| c l(x) - c x_j \right|. We therefore see that for any \epsilon > 0 we can choose n such that \left| c l(x) - c x_j \right| < \epsilon.

This means that l(cx) = \lim_{j \rightarrow \infty} c x_j exists and is equal to c l(x) so that for any scalar c \ne 0 and converging sequence x the sequence c x is also in the set of converging sequences. Since c x converges both for c = 0 and c \ne 0 the set is therefore closed under scalar multiplication.

We next check that the set of converging sequences is closed under vector addition. Let y also be a member of this set, so that l(y) = \lim_{j \rightarrow \infty} y_j exists. Then for any \epsilon > 0 there exists n such that \left| l(y) - y_j \right| < \epsilon for j > n.

Now consider x + y = (x_1+y_1, x_2+y_2, \dotsc) and choose any \epsilon > 0. Since l(x) exists we can choose m such that \left| l(x) - x_j \right| < \epsilon/2 for j > m, and since l(y) exists we can choose n such that \left| l(y) - y_j \right| < \epsilon/2 for j > n. Choose p such that p \ge m and p \ge n. Adding both sides of the two inequalities we have \left| l(x) - x_j \right| + \left| l(y) - y_j \right| < \epsilon for j > p.

We have \left| a+b \right| \le \left| a \right| + \left| b \right| for any a and b, and thus for any j > p we have

\left| (l(x)+l(y)) - (x_j+y_j) \right| = \left| (l(x)-x_j) + (l(y)-y_j) \right|

\le \left| l(x)-x_j \right| + \left| l(y)-y_j \right| < \epsilon

So for any \epsilon > 0 we can choose p such that \left| (l(x)+l(y)) - (x_j+y_j) \right| < \epsilon for all j > p. This means that l(x+y) = \lim_{j \rightarrow \infty} (x_j+y_j) exists and is equal to l(x)+l(y), so that for any two converging sequences x and y the sequence x+y is also in the set of converging sequences. The set is therefore closed under vector addition.

Since the set of converging sequences is closed under both vector addition and scalar multiplication and it is a subset of the vector space of infinite sequences, it is a subspace of that vector space.

(e) We first check that the set of arithmetic progressions is closed under scalar multiplication. Let x be a member of this set, so that x_{j+1} - x_j is a constant value a for all j. Then for c x = (c x_1, c x_2, \dotsc) we have c x_{j+1} - c x_j = c (x_{j+1} - x_j) = c a for all j. The sequence c x is therefore also an arithmetic progression, and the set is closed under scalar multiplication.

We next check that the set of arithmetic progressions is closed under vector addition. Let y also be a member of this set, so that y_{j+1} - y_j is a constant value b for all j. Then for x+y = (x_1+y_1, x_2+y_2, \dotsc) we have

(x_{j+1}+y_{j+1}) - (x_j+y_j) = (x_{j+1} - x_j) + (y_{j+1} - y_j) = a+b

for all j. The sequence x+y is therefore also an arithmetic progression, and the set is closed under vector addition.

Since the set of arithmetic progressions is closed under both vector addition and scalar multiplication and it is a subset of the vector space of infinite sequences, it is a subspace of that vector space.

(f) The set of geometric progressions is not a subspace because it is not closed under vector addition: For example, suppose that x = (1, 2, 4, 8, \dotsc) (for which x_1 = 1 and k = 2), and y = (1, 3, 9, 27, \dotsc) (for which x_1 = 1 and k = 3). We then have x + y = (2, 5, 13, 35, \dotsc). For x+y we have x_1 = 2 (from the first element) and k = 5/2 = 2.5 (from the second element). If x + y were a geometric progression then the third element would be k^2x_1 = 2.5^2 \cdot 2 = 6.25 \cdot 2 = 12.5 instead of the actual value of 13. So the sum of two geometric progressions is not in general a geometric progression, and the set is not closed under vector addition.
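We can also verify this counterexample numerically with a short Python sketch: the ratios of consecutive terms of x + y are not constant, so x + y cannot be geometric.

```python
# A minimal check of the counterexample in part (f).
x = [2**j for j in range(5)]        # 1, 2, 4, 8, 16   (x_1 = 1, k = 2)
y = [3**j for j in range(5)]        # 1, 3, 9, 27, 81  (x_1 = 1, k = 3)
s = [a + b for a, b in zip(x, y)]   # 2, 5, 13, 35, 97

# For a geometric progression these ratios would all be equal.
ratios = [s[j + 1] / s[j] for j in range(len(s) - 1)]
print(ratios)   # [2.5, 2.6, 2.692..., 2.771...] -- not constant
```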

UPDATE: Fixed typos in the question (\mathbf{R}^3 should have been \mathbf{R}^{\infty}) and in the answers to (b) (x_k = 0 should have been y_k = 0) and (d) (\epsilon < 0 should have been \epsilon > 0).

UPDATE 2: Fixed typos in the question and answer for (f) (references to x_2 and x_3 should have been to x_1, and a reference to x_1 should have been to -x_1).

UPDATE 3: Fixed the answer for (f); the original answer (claiming that -x was not a geometric progression) was incorrect. Thanks go to Samuel for pointing this out.

Linear Algebra and Its Applications, Exercise 2.1.6

Exercise 2.1.6. Consider the equation x+2y+z=6 that defines a plane P in 3-space. For the parallel plane P_0 through the origin find its equation and explain whether P and P_0 are subspaces of \mathbf{R}^3.

Answer: The origin (0,0,0) must be a solution of the equation for P_0. The equation x+2y+z=0 satisfies this criterion, and since it has the same normal vector (1, 2, 1) as the original equation, its plane P_0 is parallel to the original plane P.

P is not a subspace because it does not contain the origin, and thus if v is in P then 0 \cdot v will not be. P_0 is a subspace of \mathbf{R}^3 since the sum of any two vectors u and v in P_0 is also in P_0 as is the scalar multiple cv for any vector v in P_0 and any scalar c (including c=0).
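A short NumPy sketch makes both claims concrete (the test vectors here are hand-picked points on each plane):

```python
import numpy as np

n = np.array([1.0, 2.0, 1.0])    # common normal vector of P and P_0

# Two points on P_0 (x + 2y + z = 0): sums and multiples stay on P_0.
u = np.array([1.0, 0.0, -1.0])
v = np.array([0.0, 1.0, -2.0])
print(n @ (u + v), n @ (3 * u))  # 0.0 0.0

# Two points on P (x + 2y + z = 6): their sum has left P.
p = np.array([6.0, 0.0, 0.0])
q = np.array([0.0, 3.0, 0.0])
print(n @ (p + q))               # 12.0, not 6
```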

Linear Algebra and Its Applications, Exercise 2.1.5

Exercise 2.1.5. There are eight rules for vector addition and scalar multiplication operations in a vector space:

  1. x+y = y+x
  2. x+(y+z) = (x+y)+z
  3. There is a zero vector (0) such that x+0 = x for all x
  4. For each x there exists one and only one vector -x such that x+(-x)=0
  5. 1x = x for all x
  6. (c_1c_2)x = c_1(c_2x)
  7. c(x+y) = cx+cy
  8. (c_1+c_2)x = c_1x+c_2x

(a) Define addition in \mathbf{R}^2 such that it adds an extra one to each component, e.g., (1, 3) + (4, 5) = (6, 9) instead of (5,8). Assume the rule for scalar multiplication is left unchanged. Which of the above rules are broken by this redefinition of addition?

(b) Define vector addition on the set of positive real numbers such that x+y = xy and define scalar multiplication on the same set so that cx = x^c. Show that the set of positive real numbers is a vector space under the operations thus defined, and describe the zero vector.

Answer: (a) For any two vectors u = (u_1, u_2) and v = (v_1, v_2) rule 1 is satisfied:

u + v = (u_1+v_1+1, u_2+v_2+1)

= (v_1+u_1+1, v_2+u_2+1) = v+u

Rule 2 is satisfied as well:

u+(v+w) = (u_1, u_2) + (v_1+w_1+1,v_2+w_2+1)

= (u_1+(v_1+w_1+1)+1, u_2+(v_2+w_2+1)+1)

= ((u_1+v_1+1)+w_1+1, (u_2+v_2+1)+w_2+1)

= (u_1+v_1+1, u_2+v_2+1) + (w_1,w_2) = (u+v)+w

Rule 3 is satisfied with (-1,-1) as the zero vector:

(u_1,u_2) + (-1,-1) = (u_1-1+1,u_2-1+1) = (u_1,u_2)

Rule 4 is satisfied with -u = (-u_1-2,-u_2-2):

(u_1,u_2) + (-u_1-2,-u_2-2) = (u_1-u_1-2+1,u_2-u_2-2+1) = (-1,-1)

Rule 5 and rule 6 are satisfied since scalar multiplication was not redefined:

1u = 1(u_1,u_2) = (1u_1,1u_2) = (u_1,u_2) = u

(c_1c_2)u = ((c_1c_2)u_1,(c_1c_2)u_2) = c_1(c_2u_1,c_2u_2) = c_1(c_2u)

However rule 7 is not satisfied since for u = v = (1, 1) we have

2(u+v) = 2(1+1+1,1+1+1) = 2(3,3) = (6,6)

while

2u + 2v = 2(1,1) + 2(1,1) = (2,2) + (2,2)

= (2+2+1,2+2+1) = (5,5)

Rule 8 is also not satisfied since for u = (1, 1) we have

(2+2)u = 4(1,1) = (4,4)

while

2u + 2u = 2(1,1) + 2(1,1) = (2,2) + (2,2)

= (2+2+1,2+2+1) = (5,5)

Thus under the redefinition of addition rules 1-6 are satisfied while rules 7 and 8 are not.
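To make the failures concrete, here is a minimal Python sketch of the redefined addition (the helper names add and smul are my own, not from the book):

```python
def add(u, v):
    # The modified addition: an extra 1 in each component
    return (u[0] + v[0] + 1, u[1] + v[1] + 1)

def smul(c, u):
    # Scalar multiplication is the usual one
    return (c * u[0], c * u[1])

u = v = (1, 1)
print(smul(2, add(u, v)))           # (6, 6)
print(add(smul(2, u), smul(2, v)))  # (5, 5) -- rule 7 fails
print(smul(2 + 2, u))               # (4, 4)
print(add(smul(2, u), smul(2, u)))  # (5, 5) -- rule 8 fails
```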

(b) We use the symbols \oplus and \otimes to denote the following definitions of vector addition and scalar multiplication on the set of positive real numbers:

x \oplus y = xy and c \otimes x = x^c

Rules 1 and 2 are satisfied:

x \oplus y = xy = yx = y \oplus x

x \oplus (y \oplus z) = x(yz) = (xy)z = (x \oplus y) \oplus z

Rule 3 is satisfied with 1 as the zero vector:

x \oplus 1 = x \cdot 1 = x

Rule 4 is satisfied with 1/x as -x:

x \oplus 1/x = x (1/x) = 1

Rules 5 through 8 are also satisfied:

1 \otimes x = x^1 = x

(c_1c_2) \otimes x = x^{c_1c_2} = x^{c_2c_1} = (x^{c_2})^{c_1} = c_1 \otimes (c_2 \otimes x)

c \otimes (x \oplus y) = (xy)^c = x^cy^c = (c \otimes x) \oplus (c \otimes y)

(c_1+c_2) \otimes x = x^{c_1+c_2} = x^{c_1}x^{c_2} = (c_1 \otimes x) \oplus (c_2 \otimes x)

Since all eight rules are satisfied the set of positive real numbers is a vector space under the vector addition and scalar multiplication operations defined above, with 1 as the zero vector.
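And a similar sketch for the operations in (b) (again with hypothetical helper names vadd and smul):

```python
def vadd(x, y):    # x "plus" y is defined as xy
    return x * y

def smul(c, x):    # c "times" x is defined as x^c
    return x ** c

x, y, c = 2.0, 5.0, 3.0

# Rule 7: c(x + y) = cx + cy under the new operations
print(smul(c, vadd(x, y)), vadd(smul(c, x), smul(c, y)))  # 1000.0 1000.0

# The "zero vector" is 1, and the "negative" of x is 1/x
print(vadd(x, 1), vadd(x, 1 / x))   # 2.0 1.0
```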

Linear Algebra and Its Applications, Exercise 2.1.4

Exercise 2.1.4. Consider the set of all 3 by 3 symmetric matrices and the set of all 3 by 3 lower triangular matrices, each of which is a subspace of the space of all 3 by 3 matrices. What is the smallest subspace of the space of 3 by 3 matrices that contains both of those subspaces? What is the largest subspace contained in both of those subspaces?

Answer: Any subspace must be closed under vector addition and scalar multiplication. So if a subspace of the set of 3 by 3 matrices contains both the subspace of 3 by 3 symmetric matrices and the subspace of 3 by 3 lower triangular matrices, then it has to also contain all possible linear combinations of those two subspaces. (Otherwise it wouldn’t be a vector space.)

In particular, the subspace we’re looking for has to contain all matrices of the form S+L where S is a 3 by 3 symmetric matrix and L is a 3 by 3 lower triangular matrix. Now, in general symmetric matrices can have nonzero entries above the diagonal, to match corresponding nonzero entries below the diagonal. Therefore in general the sum S+L will produce a matrix that has entries both above and below the diagonal.

The question then becomes, are there any restrictions on what S+L can look like? Or can S+L end up being any possible 3 by 3 matrix? To find out, we take an arbitrary 3 by 3 matrix

A = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ a_{21}&a_{22}&a_{23} \\ a_{31}&a_{32}&a_{33} \end{bmatrix}

and try to represent it as the sum of a 3 by 3 symmetric matrix S and a 3 by 3 lower triangular matrix L. Since L must have zeros above the diagonal (by definition), the values for a_{12}, a_{13}, and a_{23} (the entries of A above the diagonal) have to come from S. We can take the diagonal entries a_{11}, a_{22}, and a_{33} from S as well, since we can just set the diagonal entries of L to be zero. (It will still be lower triangular in this case.)

Since S is a symmetric matrix the entries below the diagonal must be the same as the entries above the diagonal, so we have

S = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ a_{12}&a_{22}&a_{23} \\ a_{13}&a_{23}&a_{33} \end{bmatrix}

What then should L look like? We know the entries above the diagonal are zero (because L is lower triangular) and the diagonal entries of L are zero as well (from the previous paragraph). So we have

L = \begin{bmatrix} 0&0&0 \\ l_{21}&0&0 \\ l_{31}&l_{32}&0 \end{bmatrix}

and we have to find suitable values for l_{21}, l_{31}, and l_{32}.

For A to equal the sum of S and L we must have

\begin{array}{rcrcr} a_{21}&=&a_{12}&+&l_{21} \\ a_{31}&=&a_{13}&+&l_{31} \\ a_{32}&=&a_{23}&+&l_{32} \end{array}

where the first term in each sum above comes from S and the second term from L. Solving for l_{21}, l_{31}, and l_{32} we have

\begin{array}{rcrcr} l_{21}&=&a_{21}&-&a_{12} \\ l_{31}&=&a_{31}&-&a_{13} \\ l_{32}&=&a_{32}&-&a_{23} \end{array}

so that

L = \begin{bmatrix} 0&0&0 \\ a_{21} - a_{12}&0&0 \\ a_{31} - a_{13}&a_{32} - a_{23}&0 \end{bmatrix}

We then have

S+L = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ a_{12}&a_{22}&a_{23} \\ a_{13}&a_{23}&a_{33} \end{bmatrix} + \begin{bmatrix} 0&0&0 \\ a_{21}-a_{12}&0&0 \\ a_{31}-a_{13}&a_{32}-a_{23}&0 \end{bmatrix}

= \begin{bmatrix} a_{11}+0&a_{12}+0&a_{13}+0 \\ a_{12}+a_{21}-a_{12}&a_{22}+0&a_{23}+0 \\ a_{13}+a_{31}-a_{13}&a_{23}+a_{32}-a_{23}&a_{33}+0 \end{bmatrix}

= \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ a_{21}&a_{22}&a_{23} \\ a_{31}&a_{32}&a_{33} \end{bmatrix} = A

So, for any 3 by 3 matrix A we can find a 3 by 3 symmetric matrix S and a 3 by 3 lower triangular matrix L for which S+L = A. Put another way, adding 3 by 3 symmetric matrices and 3 by 3 lower triangular matrices together can produce any possible 3 by 3 matrix. So the smallest possible subspace containing both all 3 by 3 symmetric matrices and all 3 by 3 lower triangular matrices is the space of all 3 by 3 matrices.
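As a concrete check, here is a Python/NumPy sketch of the construction above (my own encoding of it; np.triu and np.tril just pick out upper and lower triangles):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-9, 10, size=(3, 3)).astype(float)

# Symmetric part: take A's upper triangle (including the diagonal)
# and mirror it below the diagonal.
S = np.triu(A) + np.triu(A, k=1).T

# Lower triangular part: whatever is left over, which has zeros
# on and above the diagonal (e.g., l_21 = a_21 - a_12).
L = A - S

print(np.allclose(S, S.T))             # True: S is symmetric
print(np.allclose(L, np.tril(L, -1)))  # True: L is strictly lower triangular
print(np.allclose(S + L, A))           # True: S + L recovers A
```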

What about the largest subspace contained in both the subspace of 3 by 3 symmetric matrices and 3 by 3 lower triangular matrices? If a matrix A is in the subspace of lower triangular matrices then we must have a_{ij} = 0 for j > i. If A is also in the subspace of symmetric matrices then we must also have a_{ij} = a_{ji} for all i and j, and thus a_{ij} = 0 for j < i. So we have a_{ij} = 0 for i \ne j and A is a diagonal matrix.

The sum of two diagonal matrices is itself a diagonal matrix, so the set of all diagonal matrices is closed under vector addition. If D is a diagonal matrix and c is a scalar then cD is also a diagonal matrix (even if c = 0), so the set of all diagonal matrices is closed under scalar multiplication. The set of all 3 by 3 diagonal matrices is therefore a subspace of the space of all 3 by 3 matrices, and it is the largest subspace that is contained in both the subspace of 3 by 3 symmetric matrices and the subspace of 3 by 3 lower triangular matrices.

UPDATE: I completely rewrote the answer to the first part of the problem to provide a more complete explanation, in response to a question from a reader. (I also corrected a typo in the definition of L.)

Linear Algebra and Its Applications, Exercise 2.1.3

Exercise 2.1.3. For each of the following two matrices

A = \begin{bmatrix} 1&-1 \\ 0&0 \end{bmatrix} \qquad B = \begin{bmatrix} 0&0&0 \\ 0&0&0 \end{bmatrix}

describe the matrix’s column space and nullspace.

Answer: The column space for A consists of the linear combinations of its columns:

c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} -1 \\ 0 \end{bmatrix} = \begin{bmatrix} c_1 - c_2 \\ 0 \end{bmatrix}

The column space \mathcal{R}(A) is therefore the x axis, i.e., the set of vectors of the form (x_1, 0).

The nullspace \mathcal{N}(A) is the set of vectors x for which Ax = 0. We then have

Ax = \begin{bmatrix} 1&-1 \\ 0&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} x_1 - x_2 \\ 0 \end{bmatrix} = 0

so that we have x_1 - x_2 = 0 or x_1 = x_2. The nullspace \mathcal{N}(A) is therefore the line x_2 = x_1 passing through the origin at a 45 degree angle to the x axis.

For the matrix B the column space \mathcal{R}(B) contains all vectors of the form

c_1 \begin{bmatrix} 0 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 0 \end{bmatrix} + c_3 \begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

Therefore \mathcal{R}(B) consists only of the point (0, 0).

For any vector x = (x_1, x_2, x_3) we have

Bx = \begin{bmatrix} 0&0&0 \\ 0&0&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

Therefore the nullspace \mathcal{N}(B) = \mathbf{R}^3.
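Both answers can be confirmed with a short Python/SymPy sketch (assuming SymPy is available; not part of the original exercise):

```python
from sympy import Matrix

A = Matrix([[1, -1],
            [0,  0]])
B = Matrix([[0, 0, 0],
            [0, 0, 0]])

print(A.columnspace())  # [Matrix([[1], [0]])] -- the x axis
print(A.nullspace())    # [Matrix([[1], [1]])] -- the line x_2 = x_1
print(B.columnspace())  # []                   -- only the zero vector
print(B.nullspace())    # three basis vectors spanning R^3
```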

UPDATE: Corrected references to \mathcal{R}(A) that should have been to \mathcal{R}(B). Thanks go to nubilaveritas for catching these errors.

UPDATE 2: Corrected the product Bx to be 2 by 1 rather than 3 by 1. Thanks go to James Teow for catching the error.

Linear Algebra and Its Applications, Exercise 2.1.2

Exercise 2.1.2. Of the following subsets of \mathbf{R}^3, which are subspaces and which are not?

(a) the set of vectors (0, b_2, b_3) with the first component b_1 = 0

(b) the set of vectors (1, b_2, b_3) with the first component b_1 = 1

(c) the set of all vectors (b_1, b_2, b_3) for which b_1b_2 = 0, i.e., vectors of the form (0, b_2, b_3) or (b_1, 0, b_3)

(d) the single vector (0, 0, 0)

(e) all linear combinations of the vectors (1, 1, 0) and (2, 0, 1)

(f) all vectors (b_1, b_2, b_3) for which b_3 - b_2 + 3b_1 = 0

Answer: (a) This set is a subspace: It is closed under vector addition, since for two members of the set u = (0, u_2, u_3) and v = (0, v_2, v_3) the sum (0, u_2+v_2, u_3+v_3) is also in the set. It is also closed under scalar multiplication, since for any vector v = (0, v_2, v_3) the product cv = (0, cv_2, cv_3) is also in the set.

(b) This set is not a subspace: It is not closed under scalar multiplication, since for a vector v = (1, v_2, v_3) the product 0 \cdot v = (0, 0, 0) is not in the set. It is also not closed under vector addition, since for the vectors u = (1, u_2, u_3) and v = (1, v_2, v_3) the sum (2, u_2+v_2, u_3+v_3) is not in the set.

(c) This set is not a subspace: It is not closed under vector addition, since the vectors (1, 0, 0) and (0, 1, 0) are in the set but their sum (1, 1, 0) is not.

(d) This set is a subspace: It is closed under vector addition, since the sum of (0, 0, 0) and (0, 0, 0) is (0, 0, 0). It is also closed under scalar multiplication, since for any scalar c the product c \cdot (0, 0, 0) = (0, 0, 0).

(e) This set is a subspace: It is closed under scalar multiplication, since for any vector u = a \cdot (1, 1, 0) + b \cdot (2, 0, 1) the product cu = ca \cdot (1, 1, 0) + cb \cdot (2, 0, 1) is also a linear combination of (1, 1, 0) and (2, 0, 1). It is also closed under vector addition, since for any two vectors u = a \cdot (1, 1, 0) + b \cdot (2, 0, 1) and v = p \cdot (1, 1, 0) + q \cdot (2, 0, 1) their sum u + v = (a+p) \cdot (1, 1, 0) + (b+q) \cdot (2, 0, 1) is also a linear combination of (1, 1, 0) and (2, 0, 1).

(f) This is a subspace: It is closed under scalar multiplication, since given a vector u = (u_1, u_2, u_3) in the set, for the scalar product cu = (cu_1, cu_2, cu_3) we have

cu_3 - cu_2 +3cu_1 = c(u_3 - u_2 + 3u_1) = c \cdot 0 = 0

It is also closed under vector addition, since given two vectors u = (u_1, u_2, u_3) and v = (v_1, v_2, v_3) in the set, for their sum u + v = (u_1 + v_1, u_2 + v_2, u_3 + v_3) we have

(u_3+v_3) - (u_2+v_2) + 3 \cdot (u_1+v_1) = (u_3 - u_2 + 3u_1) + (v_3 - v_2 + 3v_1) = 0 + 0 = 0
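As a quick numerical spot check of (f), here is a short Python/NumPy sketch (the test vectors u and v are hand-picked to satisfy b_3 - b_2 + 3b_1 = 0):

```python
import numpy as np

def in_f(b, tol=1e-12):
    # Membership test for the set in (f): b_3 - b_2 + 3 b_1 = 0
    return abs(b[2] - b[1] + 3 * b[0]) < tol

u = np.array([1.0, 4.0, 1.0])       # 1 - 4 + 3 = 0, so u is in the set
v = np.array([0.0, 2.0, 2.0])       # 2 - 2 + 0 = 0, so v is in the set
print(in_f(u + v), in_f(2.5 * u))   # True True
```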

Linear Algebra and Its Applications, Exercise 2.1.1

Exercise 2.1.1. Construct the following:

(a) a subset of 2-D space closed under vector addition and subtraction but not scalar multiplication

(b) a subset of 2-D space closed under scalar multiplication but not vector addition

Answer: (a) The set of all vectors (i, j) where i and j are integers; for example, (0, 0), (1, 1), (-2, 3), etc. This set is closed under vector addition and subtraction, since the sum or difference of two integers is always an integer. However it is not closed under scalar multiplication, since (for example) multiplying (1, 1) by \frac{1}{2} produces a result (\frac{1}{2}, \frac{1}{2}) not in the set.

(b) The set of all points on the x axis and y axis, that is (x_1, 0) and (0, x_2), is closed under scalar multiplication (a scalar multiple of a point on an axis stays on that axis) but not under vector addition: for example, (1, 0) + (0, 1) = (1, 1) lies on neither axis.
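Both counterexamples can be seen in a few lines of Python (a sketch; the particular vectors are arbitrary choices):

```python
# (a) the integer lattice: closed under addition and subtraction,
# but not under scalar multiplication
u, v = (1, 1), (-2, 3)
print((u[0] + v[0], u[1] + v[1]))   # (-1, 4): components still integers
print((0.5 * u[0], 0.5 * u[1]))     # (0.5, 0.5): has left the lattice

# (b) the union of the two axes: closed under scalar multiplication,
# but adding a point on each axis leaves both axes
p, q = (3, 0), (0, 2)
print((p[0] + q[0], p[1] + q[1]))   # (3, 2): on neither axis
```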

Completing Chapter 1 of Linear Algebra and Its Applications

Yesterday I posted the final worked-out exercise from chapter 1 of Gilbert Strang’s Linear Algebra and Its Applications, Third Edition. My first post was for exercise 1.2.2 almost exactly 15 months ago. The book has eight chapters and two appendices, all with associated exercises, so based on my track record thus far I’ll finish the book in 150 months total, or over 11 years from now. More recently, however, I’ve picked up the pace considerably and have maintained a rate of one exercise per day. At one point I estimated that the book contained 700-800 exercises, so even at that rate it will still take almost three years to complete unless I can post more rapidly.

However I think this project is worth doing, as (re)working through the exercises for the posts has considerably improved my understanding of the material. There were several articles where I had typos in my first attempts, did only partial proofs (for example, showing a matrix was a right inverse while forgetting to show it was a left inverse), or just didn’t understand how to work the exercise.

Most notably, the very first time I tried to work exercise 1.4.24 (on paper) I didn’t understand at all how multiplication of block matrices was supposed to work. Even when I worked it out again and did my post on exercise 1.4.24 I got the right answer without fully understanding why it was right. This lack of understanding also showed up in working the final part of exercise 1.6.10, and motivated me to stop working on further exercises until I proved for myself how multiplication of block matrices worked.

That in turn led me to look at various questions about diagonal block matrices, including how to define them, how to multiply them, and how and when you can find their inverses. In the course of investigating those questions I went some way beyond what I needed to show in order to work the Chapter 1 exercises, but I did get a good basic understanding of block matrices in general and diagonal block matrices in particular, one that I think will be useful in the future. (This also produced some minor refinements to my thoughts about multiplying block matrices and led me to update my original post.)

No rest for the weary: My next post will be tomorrow, for the first exercise of chapter 2.

Linear Algebra and Its Applications, Review Exercise 1.29

Review exercise 1.29. Find 2 by 2 matrices that will

(a) reverse the direction of a vector

(b) project a vector onto the x_2 axis

(c) rotate a vector counter-clockwise through 90 degrees

(d) reflect a vector about the line x_1 = x_2 that is 45 degrees above the x_1 axis

Answer: (a) We want a matrix A such that Ax = -x. We have Ix = x where I is the 2 by 2 identity matrix, and thus -Ix = -x. So we have Ax = -x if A = -I:

A = \begin{bmatrix} -1&0 \\ 0&-1 \end{bmatrix}

(b) We want a matrix A such that Ax = b where b = (0, x_2). By experiment we see that the following matrix will work:

A = \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix}

Ax = \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ x_2 \end{bmatrix}

(c) We want a matrix A such that Ax = b where b is a vector rotated 90 degrees counter-clockwise from x. Such a matrix would send, e.g., the vector (2, 1) to (-1, 2) and in general would send the vector x = (x_1, x_2) to (-x_2, x_1). By experiment we see that the matrix

A = \begin{bmatrix} 0&-1 \\ 1&0 \end{bmatrix}

will do this:

Ax = \begin{bmatrix} 0&-1 \\ 1&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -x_2 \\ x_1 \end{bmatrix}

(d) We want a matrix A such that Ax = b where b is a vector reflected about the line x_2 = x_1. Such a matrix would send, e.g., the vector (2, 1) to (1, 2) and in general would send the vector x = (x_1, x_2) to (x_2, x_1). By experiment we see that the matrix

A = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix}

will do this:

Ax = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ x_1 \end{bmatrix}
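All four matrices can be checked with a short NumPy sketch (the test vector (2, 1) matches the examples above):

```python
import numpy as np

reverse  = np.array([[-1,  0], [ 0, -1]])   # (a) Ax = -x
project  = np.array([[ 0,  0], [ 0,  1]])   # (b) project onto the x_2 axis
rotate90 = np.array([[ 0, -1], [ 1,  0]])   # (c) rotate 90 degrees CCW
reflect  = np.array([[ 0,  1], [ 1,  0]])   # (d) reflect about x_1 = x_2

x = np.array([2, 1])
print(reverse  @ x)   # [-2 -1]
print(project  @ x)   # [0 1]
print(rotate90 @ x)   # [-1  2]
print(reflect  @ x)   # [1 2]
```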
