Linear Algebra and Its Applications, Review Exercise 2.25

Review exercise 2.25. Suppose that T is a linear transformation from \mathbb{R}^3 to itself, and T transforms the point (u, v, w) to the point (u+v+w, u+v, u). What does the inverse transformation T^{-1} do to the point (x, y, z)?

Answer: The effect of T^{-1} is to reverse the effect of T. Since T takes the first entry of a vector and makes it the last entry of the resulting vector, T^{-1} must take the last entry of a vector and make it the first. So applying T^{-1} to the point (x, y, z) results in a point whose first entry is z.

Next, T takes the second entry of a vector, adds to it the first entry, and makes the sum the second entry of the resulting vector. In reversing this T^{-1} must take the second entry and subtract from it the original first entry. So applying T^{-1} to the point (x, y, z) results in a point whose second entry is y-z (since z was the original first entry, as discussed in the previous paragraph).

Finally, T takes the third entry of a vector, adds to it the first and second entries, and makes the sum the first entry of the resulting vector. In reversing this T^{-1} must take the first entry and subtract from it the original first and second entries. So applying T^{-1} to the point (x, y, z) results in a point whose third entry is x - z - (y-z) = x - y. (Recall that z was the original first entry, and y-z the original second entry.)

The inverse transformation T^{-1} thus transforms the point (x, y, z) into the point (z, y-z, x-y). To confirm this, we apply the transformation T to (z, y-z, x-y) resulting in the point

(z + (y-z) + (x-y), z + (y-z), z) = (x, y, z)

The transformation T^{-1} is thus indeed the inverse of the transformation T.

Note that another way to compute T^{-1} is to take the matrix [T] corresponding to the transformation T and compute its inverse [T]^{-1}.

The linear transformation T corresponds to the matrix

[T] = \begin{bmatrix} 1&1&1 \\ 1&1&0 \\ 1&0&0 \end{bmatrix}

so that applying T to (u, v, w) gives

\begin{bmatrix} 1&1&1 \\ 1&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} u+v+w \\ u+v \\ u \end{bmatrix}

We can compute the inverse of [T] using Gauss-Jordan elimination. Start with

\begin{bmatrix} 1&1&1&\vline&1&0&0 \\ 1&1&0&\vline&0&1&0 \\ 1&0&0&\vline&0&0&1 \end{bmatrix}

Subtract 1 times the first row from the second row:

\Rightarrow \begin{bmatrix} 1&1&1&\vline&1&0&0 \\ 0&0&-1&\vline&-1&1&0 \\ 1&0&0&\vline&0&0&1 \end{bmatrix}

Subtract 1 times the first row from the third row:

\Rightarrow \begin{bmatrix} 1&1&1&\vline&1&0&0 \\ 0&0&-1&\vline&-1&1&0 \\ 0&-1&-1&\vline&-1&0&1 \end{bmatrix}

Exchange the second and third rows:

\Rightarrow \begin{bmatrix} 1&1&1&\vline&1&0&0 \\ 0&-1&-1&\vline&-1&0&1 \\ 0&0&-1&\vline&-1&1&0 \end{bmatrix}

Subtract 1 times the third row from the second row:

\Rightarrow \begin{bmatrix} 1&1&1&\vline&1&0&0 \\ 0&-1&0&\vline&0&-1&1 \\ 0&0&-1&\vline&-1&1&0 \end{bmatrix}

Subtract -1 times the third row from the first row:

\Rightarrow \begin{bmatrix} 1&1&0&\vline&0&1&0 \\ 0&-1&0&\vline&0&-1&1 \\ 0&0&-1&\vline&-1&1&0 \end{bmatrix}

Subtract -1 times the second row from the first row:

\Rightarrow \begin{bmatrix} 1&0&0&\vline&0&0&1 \\ 0&-1&0&\vline&0&-1&1 \\ 0&0&-1&\vline&-1&1&0 \end{bmatrix}

Multiply both the second row and the third row by -1:

\Rightarrow \begin{bmatrix} 1&0&0&\vline&0&0&1 \\ 0&1&0&\vline&0&1&-1 \\ 0&0&1&\vline&1&-1&0 \end{bmatrix}

We thus have

[T]^{-1} = \begin{bmatrix} 0&0&1 \\ 0&1&-1 \\ 1&-1&0 \end{bmatrix}

so that

\begin{bmatrix} 0&0&1 \\ 0&1&-1 \\ 1&-1&0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} z \\ y-z \\ x-y \end{bmatrix}

and

[T][T]^{-1} = \begin{bmatrix} 1&1&1 \\ 1&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} 0&0&1 \\ 0&1&-1 \\ 1&-1&0 \end{bmatrix}

= \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} = I

So the linear transformation T^{-1} as defined by [T]^{-1} is indeed the inverse of the original transformation T.
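For readers who want to check this by machine, the following NumPy sketch implements the same augmented-matrix procedure (the helper function gauss_jordan_inverse is my own illustration, not code from the book) and confirms both [T]^{-1} and the formula T^{-1}(x, y, z) = (z, y-z, x-y):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Row exchange: bring the largest available pivot into position.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]               # scale the pivot row to 1
        for row in range(n):
            if row != col:                      # clear the rest of the column
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]

T = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]])
T_inv = gauss_jordan_inverse(T)
print(T_inv)                              # [[ 0.  0.  1.] [ 0.  1. -1.] [ 1. -1.  0.]]
print(T_inv @ np.array([2., 5., 7.]))     # [ 7. -2. -3.], i.e. (z, y-z, x-y)
print(np.allclose(T @ T_inv, np.eye(3)))  # True
```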

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Review Exercise 2.24

Review exercise 2.24. Suppose that A is a 3 by 5 matrix with the elementary vectors e_1, e_2, and e_3 in its column space. Does A have a left inverse? A right inverse?

Answer: Since e_1, e_2, and e_3 are in the column space, and these three vectors are linearly independent, the dimension of the column space must be 3. Thus the rank of A is r = 3 = m, the number of rows of A.

Since the rank of A equals the number of rows, A has a 5 by 3 right inverse C. However, it does not have a left inverse B, since the rank r = 3 is less than the number of columns n = 5.
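Though the exercise asks only yes or no, a short NumPy sketch makes the conclusion concrete. The matrix A below is a hypothetical example of the required kind (its first three columns are e_1, e_2, and e_3), and C = A^T(AA^T)^{-1} is the standard formula for one right inverse when r = m:

```python
import numpy as np

# A hypothetical 3 by 5 matrix containing e1, e2, e3 in its column space.
A = np.array([[1., 0., 0., 4., 7.],
              [0., 1., 0., 5., 8.],
              [0., 0., 1., 6., 9.]])
print(np.linalg.matrix_rank(A))          # 3 = m, so a right inverse exists

C = A.T @ np.linalg.inv(A @ A.T)         # one right inverse (r = m case)
print(C.shape)                           # (5, 3)
print(np.allclose(A @ C, np.eye(3)))     # True: AC = I

# A left inverse B would need BA = I_5, impossible since rank(A) = 3 < 5.
```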

Linear Algebra and Its Applications, Review Exercise 2.23

Review exercise 2.23. Given any three vectors v_1, v_2, and v_3 in \mathbb{R}^3 find a matrix A that transforms the three elementary vectors e_1, e_2, and e_3 respectively into those three vectors.

Answer: When multiplying e_1 by A only the entries in the first column of A are multiplied by one; all other entries are multiplied by zero. So the first column of A should be set to v_1. Similarly when multiplying e_2 by A only the entries in the second column of A are multiplied by one; all other entries are multiplied by zero. The second column of A should therefore be set to v_2. Finally the third column of A should be set to v_3.

If v_1 = (a_1, a_2, a_3), v_2 = (b_1, b_2, b_3), and v_3 = (c_1, c_2, c_3) then we have

A = \begin{bmatrix} a_1&b_1&c_1 \\ a_2&b_2&c_2 \\ a_3&b_3&c_3 \end{bmatrix}

so that (for example)

Ae_2 = \begin{bmatrix} a_1&b_1&c_1 \\ a_2&b_2&c_2 \\ a_3&b_3&c_3 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}

= \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = v_2

Note that A is invertible if and only if the three vectors v_1, v_2, and v_3 are linearly independent.
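A short NumPy sketch illustrates the construction, using arbitrary example values for v_1, v_2, and v_3:

```python
import numpy as np

# Arbitrary example vectors in R^3.
v1 = np.array([1., 2., 3.])
v2 = np.array([4., 5., 6.])
v3 = np.array([7., 8., 10.])
A = np.column_stack([v1, v2, v3])   # the columns of A are v1, v2, v3

e2 = np.array([0., 1., 0.])
print(A @ e2)                       # [4. 5. 6.] = v2

# A is invertible exactly when v1, v2, v3 are linearly independent.
print(np.linalg.matrix_rank(A))     # 3 here, so this particular A is invertible
```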

Linear Algebra and Its Applications, Review Exercise 2.22

Review exercise 2.22. a) Given

A = \begin{bmatrix} 1&2&0&3 \\ 0&0&0&0 \\ 2&4&0&1 \end{bmatrix} \qquad b = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}

what conditions must b satisfy in order for Ax = b to have a solution?

b) Find a basis for the nullspace of A.

c) Find the general solution for Ax = b for those cases when a solution exists.

d) Find a basis for the column space of A.

e) What is the rank of A^T?

Answer: a) Given the system

\begin{bmatrix} 1&2&0&3 \\ 0&0&0&0 \\ 2&4&0&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}

from the second row we have

0 \cdot x_1 + 0 \cdot x_2 + 0 \cdot x_3 + 0 \cdot x_4 = b_2

This can be true only if b_2 = 0.

b) The system Ax = 0 corresponds to

\begin{bmatrix} 1&2&0&3 \\ 0&0&0&0 \\ 2&4&0&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}

Doing Gaussian elimination on A we subtract 2 times the first row from the third:

\begin{bmatrix} 1&2&0&3 \\ 0&0&0&0 \\ 2&4&0&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&0&3 \\ 0&0&0&0 \\ 0&0&0&-5 \end{bmatrix}

Since there are pivots in columns 1 and 4 we have x_1 and x_4 as basic variables and x_2 and x_3 as free variables. (Note that this would be true whether or not we do a row exchange to put the matrix in true echelon form.)

From the third equation we have x_4 = 0. Setting x_2 = 1 and x_3 = 0, from the first equation we have x_1 + 2x_2 + 3x_4 = x_1 + 2 + 0 = 0 or x_1 = -2. Setting x_2 = 0 and x_3 = 1, from the first equation we have x_1 + 2x_2 + 3x_4 = x_1 + 0 + 0 = 0 or x_1 = 0.

The following vectors are thus solutions to Ax = 0 and serve as basis vectors for the nullspace of A:

\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}

c) From (a) above, in order for Ax = b to have a solution we must have b_2 = 0. The system Ax = b then becomes

\begin{bmatrix} 1&2&0&3 \\ 0&0&0&0 \\ 2&4&0&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ 0 \\ b_3 \end{bmatrix}

To find a particular solution we set the free variables x_2 = x_3 = 0. From the first equation we have x_1 + 2x_2 + 3x_4 = x_1 + 3x_4 = b_1 and from the third equation we have 2x_1 + 4x_2 + x_4 = 2x_1 + x_4 = b_3. Subtracting twice the first equation from the third equation we obtain -5x_4 = b_3-2b_1 or x_4 = \frac{1}{5}(2b_1-b_3).

Substituting the value of x_4 into the first equation we have

x_1 + 3x_4 = x_1 + \frac{3}{5}(2b_1-b_3)

= x_1 + \frac{6}{5}b_1 - \frac{3}{5}b_3 = b_1

or

x_1 = b_1 -\frac{6}{5}b_1 + \frac{3}{5}b_3

= -\frac{1}{5}b_1 + \frac{3}{5}b_3 = \frac{1}{5}(-b_1+3b_3).

The particular solution to Ax = b is therefore

\frac{1}{5} \begin{bmatrix} -b_1+3b_3 \\ 0 \\ 0 \\ 2b_1-b_3 \end{bmatrix}

Combining the particular solution with the homogeneous solution from (b) above gives the general solution

x = \frac{1}{5} \begin{bmatrix} -b_1+3b_3 \\ 0 \\ 0 \\ 2b_1-b_3 \end{bmatrix} + c_1 \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}

where c_1 and c_2 are arbitrary scalars.

d) From (b) above we have pivots in columns 1 and 4 of the echelon form of A. Columns 1 and 4 of A therefore serve as a basis for the column space of A:

\begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} \qquad \begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix}

e) The rank of A^T is the same as the rank of A, namely 2.
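As a sanity check (not part of the exercise itself), a few lines of NumPy confirm the rank, the nullspace basis, and the particular solution for an example right-hand side with b_2 = 0:

```python
import numpy as np

A = np.array([[1., 2., 0., 3.],
              [0., 0., 0., 0.],
              [2., 4., 0., 1.]])
print(np.linalg.matrix_rank(A))     # 2, which is also the rank of A^T

# Nullspace basis vectors from part (b):
n1 = np.array([-2., 1., 0., 0.])
n2 = np.array([0., 0., 1., 0.])
print(A @ n1, A @ n2)               # both are the zero vector

# Particular solution from part (c), with example values b1 = 5, b3 = 10:
b1, b3 = 5., 10.
xp = np.array([-b1 + 3*b3, 0., 0., 2*b1 - b3]) / 5
print(A @ xp)                       # [ 5.  0. 10.] = (b1, 0, b3)
```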

Linear Algebra and Its Applications, Review Exercise 2.21

Review exercise 2.21. Consider an n by n matrix with the value 1 for every entry. What is the rank of such a matrix? Consider another n by n matrix equivalent to a checkerboard, with a_{ij} = 0 if i+j is even and a_{ij} = 1 if i+j is odd. What is the rank of this matrix?

Answer: If each element in the matrix has the value 1 then every column of the matrix is equal to every other column of the matrix, and every row equal to every other row. There is only one column (or row) that is linearly independent (e.g., column 1 or row 1) and thus the rank of the matrix is 1.

For the second matrix, the first and second rows are linearly independent, since the first row has the form (0, 1, 0, 1, \ldots) and the second row has the form (1, 0, 1, 0, \ldots). Every row with an odd index is equal to the first row: if i is odd then i+j is even whenever 1+j is even, and i+j is odd whenever 1+j is odd; thus we have a_{ij} = a_{1j} for all odd i and all j.

Similarly, every row with an even index is equal to the second row: if i is even then i+j is even whenever 2+j is even, and i+j is odd whenever 2+j is odd; thus we have a_{ij} = a_{2j} for all even i and all j.

Since the “checkerboard” matrix contains only two linearly independent rows, its rank is 2.
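Both ranks are easy to confirm in NumPy; n = 6 below is an arbitrary example size:

```python
import numpy as np

n = 6                                             # arbitrary example size
print(np.linalg.matrix_rank(np.ones((n, n))))     # 1 for the all-ones matrix

# Checkerboard: a_ij = 1 when i + j is odd.  The parity of i + j is the
# same whether indices start at 0 or (as in the text) at 1.
i, j = np.indices((n, n))
checker = ((i + j) % 2 == 1).astype(float)
print(checker[0])                          # [0. 1. 0. 1. 0. 1.], as in the text
print(np.linalg.matrix_rank(checker))      # 2
```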

Linear Algebra and Its Applications, Review Exercise 2.20

Review exercise 2.20. Consider the set of all 5 by 5 permutation matrices. How many such matrices are there? Are the matrices linearly independent? Do the matrices span the set of all 5 by 5 matrices?

Answer: An example member of this set is

\begin{bmatrix} 1&0&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&0&1 \\ 0&1&0&0&0 \\ 0&0&0&1&0 \end{bmatrix}

Note that for any such permutation matrix each row has the value 1 in one column and the value 0 in all other columns. Also note that if a given row has a 1 in a given column then no other row can have a 1 in that column.

This allows us to calculate the number of permutation matrices as follows: For the first row we have five columns in which to put a 1. Having chosen a 1 column for the first row, we then have four choices for the column to contain a 1 in the second row (because we can’t put it in the same column as in the first row). Having chosen the 1 columns for the first two rows we have three choices for the 1 column in the third row, and having chosen the 1 columns for the first three rows we have two choices for the 1 column in the fourth row. Finally, having chosen the 1 columns for the first four rows there is only one choice for the 1 column in the fifth row.

The number of 5 by 5 permutation matrices is therefore

5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = 120 = 5!

(In general the number of n by n permutation matrices is n!.)

The set of 5 by 5 matrices has dimension 5 \cdot 5 = 25. Since the number of 5 by 5 permutation matrices is greater than 25, the set of permutation matrices cannot be linearly independent: any collection of more than 25 vectors in a 25-dimensional space is linearly dependent.

Does the set of 5 by 5 permutation matrices span the space of all 5 by 5 matrices? One way to address this question is to first look at the set of 3 by 3 permutation matrices, of which there are 3! = 6 matrices. A linear combination of such matrices looks as follows:

c_1 \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} + c_2 \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}

+ c_3 \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} + c_4 \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix}

+ c_5 \begin{bmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} + c_6 \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix}

= \begin{bmatrix} c_1+c_2&c_3+c_4&c_5+c_6 \\ c_3+c_5&c_1+c_6&c_2+c_4 \\ c_4+c_6&c_2+c_5&c_1+c_3 \end{bmatrix}

Notice that for each row in the resulting matrix the sum of the entries in the row is the same, namely c_1+c_2+c_3+c_4+c_5+c_6.

Now consider the set of 5 by 5 permutation matrices P_i and form a linear combination of those matrices; this produces a matrix

A = c_1P_1 + c_2P_2 + \cdots + c_{120}P_{120}

= \sum_{i=1}^{120} c_iP_i

for some arbitrary set of scalars c_i.

Consider an arbitrary row of A. Of the 120 permutation matrices, 24 have the value 1 in column 1 of that row, so the entry for column 1 in that row of A will be the sum of the 24 scalars c_i that multiply those matrices. Similarly, a different set of 24 permutation matrices has a 1 in column 2 of that row, so the entry for column 2 in that row of A will be the sum of those 24 scalars c_i. Continuing in this way for columns 3 through 5, we see that each scalar c_i contributes to the value of one and only one column in that row of A, and that (as in the 3 by 3 case) the sum of the entries for all columns in that row is equal to the sum of all 120 scalars, or \sum_{i = 1}^{120} c_i.

We chose an arbitrary row, so this same argument applies to all rows of A: the sum of the entries for all columns in each row of A is equal to the same value \sum_{i = 1}^{120} c_i.

There are matrices whose row sums are not all equal, for example the matrix

B = \begin{bmatrix} 1&0&0&0&1 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \end{bmatrix}

for which the sum of the entries in the first row is 2 and the sum of the entries in all other rows is 1. The matrix B cannot be expressed as a linear combination of the 5 by 5 permutation matrices (otherwise the sum of the entries in each row of B would be the same), and thus the set of 5 by 5 permutation matrices does not span the space of all 5 by 5 matrices.
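These claims can be verified numerically. The sketch below (my own check, not from the book) generates all 5 by 5 permutation matrices with itertools, flattens each into a vector in \mathbb{R}^{25}, and computes the dimension of their span, which comes out well below 25:

```python
import numpy as np
from itertools import permutations

# Each permutation of (0, 1, 2, 3, 4) reorders the rows of the identity.
perms = [np.eye(5)[list(p)] for p in permutations(range(5))]
print(len(perms))                        # 120 = 5!

# Flatten each matrix to a vector in R^25 and measure the span.
stacked = np.array([P.flatten() for P in perms])
print(np.linalg.matrix_rank(stacked))    # 17, well below 25

# 120 vectors spanning only a 17-dimensional subspace are certainly
# dependent, and a 17-dimensional span cannot be all 5 by 5 matrices.
```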

Linear Algebra and Its Applications, Review Exercise 2.19

Review exercise 2.19. Consider the set of elementary 3 by 3 matrices E_{ij} with ones on the diagonal and at most one nonzero entry below the diagonal. What subspace is spanned by these matrices?

Answer: An example member of this set is

E_{32} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&1 \end{bmatrix}

Since each matrix in the set has only zeros above the diagonal, a linear combination of such matrices will also have only zeros above the diagonal.

Since each matrix in the set has ones on the diagonal, multiplying a given matrix by a scalar produces a matrix which has that scalar value for all diagonal entries. Adding a number of such matrices in turn produces a matrix for which all diagonal entries are equal.

Since the set contains matrices with a nonzero entry at each possible location below the diagonal, a linear combination of such matrices can produce any arbitrary set of values below the diagonal.

The subspace spanned by the set of 3 by 3 elementary matrices E_{ij} is therefore the set of all 3 by 3 lower triangular matrices for which the diagonal values are equal to one another.
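A short NumPy check (my own illustration, taking the below-diagonal entry of each E_{ij} to be 1) confirms that the span has dimension 4: one shared diagonal value plus three free entries below the diagonal.

```python
import numpy as np

def E(i, j):
    """Elementary matrix: ones on the diagonal, a single 1 in row i, column j."""
    M = np.eye(3)
    M[i - 1, j - 1] = 1.0
    return M

# The identity (no below-diagonal entry) plus the three single-entry cases.
mats = [np.eye(3), E(2, 1), E(3, 1), E(3, 2)]
stacked = np.array([M.flatten() for M in mats])
print(np.linalg.matrix_rank(stacked))   # 4: dimension of the spanned subspace
```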

Linear Algebra and Its Applications, Review Exercise 2.18

Review exercise 2.18. Suppose that A is an n by n matrix with rank n. Show that if A^2 = A then A = I.

Answer: Since the rank of A is n we know that the columns of A are linearly independent and that the inverse A^{-1} exists. We then have

I = AA^{-1} = A^2A^{-1} = A(AA^{-1}) = AI = A

So if A^2 = A and the rank r = n then A = I.
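To see why the rank hypothesis matters, here is a small NumPy sketch (an illustration, not from the book) of an idempotent matrix with rank 1 < n that is clearly not the identity:

```python
import numpy as np

# Projection onto the line through a: idempotent, but only rank 1.
a = np.array([[1.], [2.], [2.]])
P = (a @ a.T) / (a.T @ a)           # P = a a^T / (a^T a)
print(np.allclose(P @ P, P))        # True: P^2 = P
print(np.linalg.matrix_rank(P))     # 1, so P is not invertible and P != I

# With full rank, A^{-1} exists and I = AA^{-1} = A^2 A^{-1} = A, as in the text.
```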

Linear Algebra and Its Applications, Review Exercise 2.17

Review exercise 2.17. Suppose that x is a vector in \mathbb{R}^n and that x^Ty = 0 for all y. Show that x must be the zero vector.

Answer: For x = (x_1, x_2, \ldots, x_n) and y = (y_1, y_2, \ldots, y_n) we have

x^Ty = \sum_{j=1}^n x_jy_j

Since x^Ty = 0 for all y we must have x^Te_i = 0 for each of the elementary vectors e_1 through e_n. By the definition of the elementary vectors we have (e_i)_j = 1 if i = j and (e_i)_j = 0 otherwise.

So for all 1 \le i \le n we have

0 = x^Te_i = \sum_{j=1}^n x_j(e_i)_j = x_i(e_i)_i = x_i \cdot 1 = x_i

Since x_i = 0 for all 1 \le i \le n we have x = 0.
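The key identity x^Te_i = x_i is easy to check numerically; the vector x below is an arbitrary example:

```python
import numpy as np

x = np.array([3., -1., 4., 1., -5.])     # an arbitrary example vector
n = len(x)
for i in range(n):
    e_i = np.eye(n)[i]                   # the i-th elementary vector
    assert x @ e_i == x[i]               # x^T e_i picks out the i-th entry

# So if x^T y = 0 for every y, then every x_i = 0 and x is the zero vector.
print("all checks passed")
```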

Linear Algebra and Its Applications, Review Exercise 2.16

Review exercise 2.16. For each of the cases in review exercise 2.15, describe the relationship among the rank r of A, the number of rows m, and the number of columns n.

Answer: i) For the matrix

A = \begin{bmatrix} 1&0 \\ 0&1 \\ 1&1 \end{bmatrix}

we have m = 3, n = 2, and r = 2 (since the two columns are linearly independent). Per box 2Q on page 96, since r = n we know that there is at most one solution to Ax = b. However since r = n < m we are not guaranteed that a solution exists. So Ax = b has either 0 or 1 solutions.

ii) For the matrix

A = \begin{bmatrix} 1&0&1 \\ 0&1&1 \end{bmatrix}

we have m = 2, n = 3, and r = 2 (since the matrix is in echelon form with two pivots). Per box 2Q on page 96, since r = m we know that there is at least one solution to Ax = b and since r = m < n we know that there is more than one solution.

iii) For the matrix

A = \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 1&-1&0 \end{bmatrix}

we have m = 3, n = 3, and r = 2 (since the third row of the matrix is a linear combination of the first two rows). Per box 2Q on page 96, since r < m we know that there may not be a solution to Ax = b, and since r < n we know that if a solution does exist it is not unique.

iv) For the matrix

A = \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&1 \end{bmatrix}

we have m = 3, n = 3, and r = 3 (since the matrix is in echelon form with three pivots). Per box 2Q on page 96, since r = m we know that there is at least one solution to Ax = b and since r = n we know that the solution is unique.
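The four ranks can be confirmed with a few lines of NumPy (a convenience check, not part of the exercise):

```python
import numpy as np

cases = {
    "i":   np.array([[1, 0], [0, 1], [1, 1]]),
    "ii":  np.array([[1, 0, 1], [0, 1, 1]]),
    "iii": np.array([[1, 0, 1], [0, 1, 1], [1, -1, 0]]),
    "iv":  np.array([[1, 0, 1], [0, 1, 1], [0, 0, 1]]),
}
for name, A in cases.items():
    m, n = A.shape
    print(f"({name}) m = {m}, n = {n}, r = {np.linalg.matrix_rank(A)}")
# (i) m = 3, n = 2, r = 2    (ii) m = 2, n = 3, r = 2
# (iii) m = 3, n = 3, r = 2  (iv) m = 3, n = 3, r = 3
```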
