Linear Algebra and Its Applications, Exercise 2.4.12

Exercise 2.4.12. Suppose that for a matrix A the system Ax = 0 has at least one nonzero solution. Show that there exists at least one vector f for which the system A^Ty = f has no solution. Give an example of such a matrix A and vector f.

Answer: Suppose A is an m by n matrix. If there exists a nonzero x = \begin{bmatrix} x_1&x_2&\cdots&x_n \end{bmatrix}^T for which Ax = 0 then the columns of A are linearly dependent. (If they were linearly independent then Ax = 0 would imply that x_i = 0 for all 1 \le i \le n.) The rank r of A is therefore less than n.

Since the dimension of the row space \mathcal R(A^T) is equal to r, and r < n, the dimension of \mathcal R(A^T) is less than n. In other words, the rows of A do not span all of \mathbf R^n.

But if that is the case then there exists at least one vector f in \mathbf R^n that is not in \mathcal R(A^T) and cannot be expressed as a linear combination of the rows of A. Therefore for the vector f there is no solution to the system A^Ty = f. (If there were such a solution then its coefficients y_1, y_2, \dotsc, y_m would define a linear combination of the rows of A equal to f.)

For example, suppose that we have the following 2 by 2 matrix

A = \begin{bmatrix} 1&-1 \\ 0&0 \end{bmatrix}

The system Ax = 0 is equivalent to

\begin{bmatrix} 1&-1 \\ 0&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

This system has rank r = 1 with x_1 as a basic variable and x_2 as a free variable.

From the first equation of the system we have x_1 - x_2 = 0 or x_1 = x_2. If we set the free variable x_2 to 1 we then have x_1 = 1 and x = \begin{bmatrix} 1 \\ 1 \end{bmatrix} as a (nonzero) solution to Ax = 0.

The vector \begin{bmatrix} 1 \\ -1 \end{bmatrix} is a basis for the row space \mathcal R(A^T), which consists of all vectors of the form \begin{bmatrix} c \\ -c \end{bmatrix}. Geometrically this is a line through the origin and the point (1, -1).

Pick a point in the x-y plane not on this line, for example (1, 0), and consider the vector f = \begin{bmatrix} 1 \\ 0 \end{bmatrix} and the system

A^Ty = \begin{bmatrix} 1&0 \\ -1&0 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} = f

The first equation of the system produces y_1 = 1 while the second equation produces -y_1 = 0 or y_1 = 0, a contradiction. There is thus no solution to A^Ty = f for this choice of A and f.
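As a quick numerical check (my addition, not part of the original exercise), the following NumPy snippet verifies both claims for this example: x = (1, 1) solves Ax = 0, and A^Ty = f is inconsistent because appending f to A^T raises the rank.

import numpy as np

# The example matrix A and the unreachable vector f from above.
A = np.array([[1.0, -1.0],
              [0.0,  0.0]])
f = np.array([1.0, 0.0])

# x = (1, 1) is a nonzero solution of Ax = 0.
x = np.array([1.0, 1.0])
print(A @ x)                                             # [0. 0.]

# A^T y = f is inconsistent: appending f as a column raises the rank.
print(np.linalg.matrix_rank(A.T))                        # 1
print(np.linalg.matrix_rank(np.column_stack([A.T, f])))  # 2, so no solution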

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.4.11

Exercise 2.4.11. Suppose that for a matrix A and any b the system Ax = b always has at least one solution. Show that in this case the system A^Ty = 0 has no solution other than y = 0.

Answer: Suppose A is an m by n matrix and b is an m-element vector. If x = \begin{bmatrix} x_1&x_2&\cdots&x_n \end{bmatrix}^T is a solution of Ax = b then b is a linear combination of the n columns of A with the coefficients being x_1, x_2, \dotsc, x_n.

If we can find a solution x for any b in \mathbf R^m then the columns of A span \mathbf R^m since any vector in \mathbf R^m can be expressed as a linear combination of the columns of A. The rank of A is therefore r = m. (The rank cannot be less than m since then there would not be enough linearly independent columns of A to span \mathbf R^m, and cannot be greater than m because it is impossible to have more than m linearly independent vectors in \mathbf R^m.)

Now consider the left nullspace \mathcal N(A^T) consisting of all vectors y such that A^Ty = 0. The dimension of the left nullspace is m - r where r is the rank of A. But from above we have r = m so that the dimension of \mathcal N(A^T) is m - r = m - m = 0. The only element of \mathcal N(A^T) is therefore the zero vector, and there are no solutions to A^Ty = 0 other than y = 0.
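A small NumPy sketch (my addition, using an arbitrary example matrix) illustrates the argument: when r = m every b is reachable, and A^T has full column rank so A^Ty = 0 forces y = 0.

import numpy as np

# An arbitrary 2 by 3 example with rank r = m = 2 (full row rank).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])
m = A.shape[0]
print(np.linalg.matrix_rank(A) == m)     # True

# Every b in R^2 is reachable; least squares finds an exact solution here.
b = np.array([5.0, -7.0])
y, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A @ y, b))             # True

# A^T is 3 by 2 with rank 2, so A^T y = 0 forces y = 0:
# the left nullspace has dimension m - r = 0.
print(np.linalg.matrix_rank(A.T) == m)   # True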


Linear Algebra and Its Applications, Exercise 2.4.10

Exercise 2.4.10. Suppose the nullspace of a matrix is the set of all vectors x = \begin{bmatrix} x_1& x_2&x_3 \end{bmatrix}^T in \mathbf R^3 for which x_1 + 2x_2 + 4x_3 = 0. Find a 1 by 3 matrix A with this nullspace. Find a 3 by 3 matrix A' with the same nullspace.

Answer: If A = \begin{bmatrix} a_1&a_2&a_3 \end{bmatrix} and Ax = 0 then we have a_1x_1 + a_2x_2 + a_3x_3 = 0. We know that x_1 + 2x_2 + 4x_3 = 0 so one way to construct a suitable matrix A is to set a_1 = 1, a_2 = 2, and a_3 = 4 so that A = \begin{bmatrix} 1&2&4 \end{bmatrix}.

For a 3 by 3 matrix A', if A'x = 0 then each row of A' times x is 0, so that a_{i1}x_1 + a_{i2}x_2 + a_{i3}x_3 = 0 for 1 \le i \le 3. Since x_1 + 2x_2 + 4x_3 = 0 we can construct a suitable matrix A' by setting each row of A' to \begin{bmatrix} 1&2&4 \end{bmatrix} or to any multiple of that vector (with at least one row nonzero, so that the nullspace is exactly the given plane and not all of \mathbf R^3). For example, one possible choice of A' is

A' = \begin{bmatrix} 1&2&4 \\ 2&4&8 \\ 4&8&16 \end{bmatrix}
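As a sanity check (my addition), NumPy confirms that both matrices have rank 1 and annihilate a basis of the plane x_1 + 2x_2 + 4x_3 = 0:

import numpy as np

A = np.array([[1.0, 2.0, 4.0]])
A_prime = np.array([[1.0, 2.0, 4.0],
                    [2.0, 4.0, 8.0],
                    [4.0, 8.0, 16.0]])

# Both matrices have rank 1, so both nullspaces have dimension 3 - 1 = 2.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A_prime))   # 1 1

# Two independent solutions of x1 + 2 x2 + 4 x3 = 0 span the plane,
# and both matrices send them to zero.
for x in (np.array([-2.0, 1.0, 0.0]), np.array([-4.0, 0.0, 1.0])):
    print(A @ x, A_prime @ x)            # all zeros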


Linear Algebra and Its Applications, Exercise 2.4.9

Exercise 2.4.9. Let A be a matrix representing a system of m equations in n unknowns, and assume that the only solution to Ax = 0 is 0. What is the rank of A? Explain your answer.

Answer: Ax is a linear combination of the columns of A with the coefficients being the entries of x, namely x_1, x_2, \dotsc, x_n. If Ax = 0 only when x itself is zero (i.e., when the coefficients x_1, \dotsc, x_n are all zero) then the columns of A are linearly independent. Since the rank r of A equals the number of linearly independent columns of A we have r = n.
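A brief numerical illustration (my addition, with an arbitrary example matrix): when the columns are independent the computed rank equals n.

import numpy as np

# An arbitrary 3 by 2 example whose columns are linearly independent,
# so Ax = 0 has only the zero solution.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
n = A.shape[1]
print(np.linalg.matrix_rank(A) == n)     # True: trivial nullspace gives r = n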


Linear Algebra and Its Applications, Exercise 2.4.8

Exercise 2.4.8. Is it possible for the row space and nullspace of a matrix to both contain the vector \begin{bmatrix} 1&1&1 \end{bmatrix}^T? If not, why not?

Answer: Suppose A is an m by 3 matrix and \begin{bmatrix} 1&1&1 \end{bmatrix}^T is in the nullspace \mathcal N(A). Then we must have

A \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ a_{21}&a_{22}&a_{23} \\ \vdots&\vdots&\vdots \\ a_{m1}&a_{m2}&a_{m3} \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = 0

which means that a_{i1} + a_{i2} + a_{i3} = 0 for 1 \le i \le m.

Now suppose that \begin{bmatrix} 1&1&1 \end{bmatrix}^T is also in the row space \mathcal R(A^T). Then \begin{bmatrix} 1&1&1 \end{bmatrix}^T can be expressed as a linear combination of the rows of A so that

c_1 \begin{bmatrix} a_{11} \\ a_{12} \\ a_{13} \end{bmatrix} + c_2 \begin{bmatrix} a_{21} \\ a_{22} \\ a_{23} \end{bmatrix} + \cdots + c_m \begin{bmatrix} a_{m1} \\ a_{m2} \\ a_{m3} \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}

for some set of coefficients c_1, c_2, \dotsc, c_m.

We now add the three scalar equations represented by the vector equation above. For the right side we obtain 1 + 1 + 1 = 3. For the left side we have

(c_1a_{11} + c_2a_{21} + \cdots + c_ma_{m1})

\qquad + (c_1a_{12} + c_2a_{22} + \cdots + c_ma_{m2})

\qquad + (c_1a_{13} + c_2a_{23} + \cdots + c_ma_{m3})

= c_1(a_{11} + a_{12} + a_{13}) + c_2(a_{21} + a_{22} + a_{23})

\qquad + \cdots + c_m(a_{m1} + a_{m2} + a_{m3})

But recall from above that we have a_{i1} + a_{i2} + a_{i3} = 0 for 1 \le i \le m (as a consequence of \begin{bmatrix} 1&1&1 \end{bmatrix}^T being in the nullspace of A) so that the equation above reduces to

c_1 \cdot 0 + c_2 \cdot 0 + \cdots + c_m \cdot 0 = 0

Since the left side sums to 0 while the right side sums to 3 we have a contradiction, and we conclude that the vector \begin{bmatrix} 1&1&1 \end{bmatrix}^T cannot be in the row space of A.

So if the vector \begin{bmatrix} 1&1&1 \end{bmatrix}^T is in the nullspace of A then it cannot also be in the row space of A.
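To see the argument numerically (my addition), take any matrix whose rows each sum to zero, so that (1, 1, 1) lies in its nullspace, and check that A^Tc = (1, 1, 1) has no solution by comparing ranks:

import numpy as np

# An arbitrary matrix whose rows each sum to zero, putting (1, 1, 1)
# in its nullspace.
A = np.array([[1.0, -2.0,  1.0],
              [3.0,  0.0, -3.0]])
ones = np.ones(3)
print(A @ ones)                          # [0. 0.]

# Is (1, 1, 1) in the row space, i.e., is A^T c = (1, 1, 1) solvable?
rank_AT  = np.linalg.matrix_rank(A.T)
rank_aug = np.linalg.matrix_rank(np.column_stack([A.T, ones]))
print(rank_AT, rank_aug)                 # 2 3: the ranks differ, so no solution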


Linear Algebra and Its Applications, Exercise 2.4.7

Exercise 2.4.7. If A is an m by n matrix with rank r answer the following:

a) When is A invertible, with A^{-1} existing such that AA^{-1} = A^{-1}A = I?

b) When does Ax = b have an infinite number of solutions for any b?

Answer: a) Per theorem 2Q on page 96, for A to have a right inverse C such that AC = I we must have r = m and m \le n. Per the same theorem, for A to have a left inverse B such that BA = I we must have r = n and m \ge n. For A to have both a left and right inverse we must therefore have r = m = n, in which case A^{-1} = B = C.

b) Per theorem 2Q on page 96, for Ax = b to have at least one solution for every b we must have r = m. For Ax = b to have an infinite number of solutions there must be at least one free variable, i.e., the rank r must be less than the number of columns n. We therefore must have r = m and m < n.
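A short NumPy sketch (my addition; the example matrix is arbitrary) shows the r = m < n case: one particular solution plus any multiple of a nullspace vector still solves Ax = b, giving infinitely many solutions.

import numpy as np

# An arbitrary example with r = m = 2 and n = 3.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 4.0])

x_p, *_ = np.linalg.lstsq(A, b, rcond=None)   # one particular solution
x_null = np.array([-1.0, -1.0, 1.0])          # a nullspace vector: A @ x_null = 0

# The particular solution plus any multiple of the nullspace vector
# is still a solution, so there are infinitely many.
for t in (0.0, 1.0, -2.5):
    print(np.allclose(A @ (x_p + t * x_null), b))   # True each time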


Linear Algebra and Its Applications, Exercise 2.4.6

Exercise 2.4.6. Given a matrix A with rank r show that the system Ax = b has a solution if and only if the matrix A' also has rank r, where A' is formed by taking the columns of A and adding b as an additional column.

Answer: We first show that if the system Ax = b has at least one solution then the rank of A' is equal to the rank r of A:

If Ax = b for some x = (x_1, x_2, \dotsc, x_n) and v_1, v_2, \dotsc, v_n are the columns of A then we have

x_1v_1 + x_2v_2 + \cdots + x_nv_n = b

In other words, b is a linear combination of the columns of A with x_1, x_2, \dotsc, x_n being the coefficients.

Consider the matrix A' formed by adding the vector b to A as an additional column. The rank r of A is equal to the number of columns of A that are linearly independent. From above we know that b is a linear combination of the columns of A and thus of the other columns of A'. The number of linearly independent columns in A' is therefore the same as the number of linearly independent columns in A so that both matrices have the same rank r.

We next show that if the rank of A' is the same as the rank r of A then the system Ax = b has at least one solution:

The rank r of A is the number of linearly independent columns of A. If the rank of A' is the same as that of A then there are only r linearly independent columns of A' as well. But since every column of A is also in A', the column of A' equal to b must be linearly dependent on the other columns of A' and thus on the columns of A. (If this were not the case, i.e., if b were linearly independent of the other columns of A', then the rank of A' would be r + 1, not r.)

Since b is linearly dependent on the columns of A and thus is a linear combination of those columns, we must have

b = c_1v_1 + c_2v_2 + \cdots + c_nv_n

for some set of coefficients c_1, c_2, \dotsc, c_n. But if that is the case then the vector x = (c_1, c_2, \dotsc, c_n) is a solution to Ax = b.

Summing up, we have shown that if Ax = b has at least one solution then the rank of A' is equal to the rank r of A, and also that if the rank of A' is equal to the rank r of A then Ax = b has at least one solution. We have therefore shown that the system Ax = b has a solution if and only if the matrix A' has the same rank as the original matrix A.
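The rank test translates directly into a few lines of NumPy (my addition; is_solvable is a hypothetical helper, not from the book):

import numpy as np

def is_solvable(A, b):
    # Ax = b has a solution iff adding b as a column leaves the rank unchanged.
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                        # rank 1
print(is_solvable(A, np.array([1.0, 2.0])))       # True: b is in the column space
print(is_solvable(A, np.array([1.0, 0.0])))       # False: the rank jumps to 2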


Linear Algebra and Its Applications, Exercise 2.4.5

Exercise 2.4.5. Suppose that AB = 0 for two matrices A and B. Show that the column space \mathcal R(B) is contained within the nullspace \mathcal N(A) and that the row space \mathcal R(A^T) is contained within the left nullspace \mathcal N(B^T).

Answer: Assume that A is an m by n matrix and B is an n by p matrix, and that v_1, v_2, \dotsc, v_p are the columns of B. Since each row of A multiplies each column of B to produce zero, we must have Av_j = 0 for all 1 \le j \le p. In other words, each v_j is in the nullspace of A.

Let v be any vector in the column space of B. We then have

v = c_1v_1 + c_2v_2 + \cdots + c_pv_p

for some set of coefficients c_1, c_2, \dotsc, c_p and thus

Av = Ac_1v_1 + Ac_2v_2 + \cdots + Ac_pv_p

= c_1Av_1 + c_2Av_2 + \cdots + c_pAv_p

= c_1 \cdot 0 + c_2 \cdot 0 + \cdots + c_p \cdot 0 = 0

So any vector v in the column space of B is also in the nullspace of A, and thus \mathcal R(B) is contained within \mathcal N(A).

Similarly, let w_1, w_2, \dotsc, w_m be the rows of A. Since each row of A multiplies each column of B to produce zero, we must have w_iB = 0 for all 1 \le i \le m. In other words, each w_i is in the left nullspace of B.

Let w be any vector in the row space of A. We then have

w = c_1w_1 + c_2w_2 + \cdots + c_mw_m

for some set of coefficients c_1, c_2, \dotsc, c_m and thus

wB = c_1w_1B + c_2w_2B + \cdots + c_mw_mB

= c_1 \cdot 0 + c_2 \cdot 0 + \cdots + c_m \cdot 0 = 0

So any vector w in the row space of A is also in the left nullspace of B, and thus \mathcal R(A^T) is contained within \mathcal N(B^T).
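A concrete check (my addition, using an arbitrary pair of matrices with AB = 0):

import numpy as np

A = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
B = np.array([[1.0, 2.0],
              [1.0, 2.0],
              [1.0, 2.0]])
print(A @ B)                             # the 2 by 2 zero matrix

# Every column of B is in the nullspace of A ...
print(A @ B[:, 0], A @ B[:, 1])          # zeros
# ... and every row of A is in the left nullspace of B.
print(A[0] @ B, A[1] @ B)                # zeros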


Linear Algebra and Its Applications, Exercise 2.4.4

Exercise 2.4.4. For the matrix

A = \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 0&0&0 \end{bmatrix}

describe each of its four associated subspaces.

Answer: We first consider the column space \mathcal R(A). The matrix A has two pivots (in the second and third columns) and therefore rank r = 2; this is the dimension of \mathcal R(A). The pivot columns

\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}

serve as a basis for \mathcal R(A). The column space is the x-y plane in \mathbf R^3.

The row space \mathcal{R}(A^T) also has dimension r = 2. The two nonzero rows of A, the vectors

\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

form a basis for \mathcal{R}(A^T). The row space is the y-z plane in \mathbf R^3.

We now turn to the nullspace \mathcal N(A) consisting of the solutions to Ax = 0. Since the pivots of A are in the second and third columns we have x_2 and x_3 as basic variables and x_1 as the free variable. For Ax = 0 we must have

\begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 0&0&0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = 0

From the second row of the above system we have x_3 = 0 and from the first row we have x_2 = 0. Setting the free variable x_1 to 1 gives us

\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}

as a solution to Ax = 0 and a basis for the nullspace of A. The nullspace is the x-axis in \mathbf R^3.

Finally we turn to the left nullspace \mathcal N(A^T) consisting of all solutions to A^Ty = 0 or y^TA = 0. In general we can find the left nullspace by looking at the operations on the rows of A needed to produce zero rows in the echelon matrix obtained by Gaussian elimination.

In this case A is already in echelon form, with the third row already zero. In terms of the elimination process this corresponds to taking the third row of A as is, with no contributions from the first and second rows; the coefficients for this operation are thus 0 (for the first row), 0 (for the second row), and 1 (for the third). The vector

\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

is therefore a basis for the left nullspace \mathcal N(A^T) (which has dimension 1). We can test this by multiplying A on the left by the transpose of this vector:

\begin{bmatrix} 0&0&1 \end{bmatrix} \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 0&0&0 \end{bmatrix} = \begin{bmatrix} 0&0&0 \end{bmatrix}

We can also compute the left nullspace \mathcal N(A^T) by solving the system A^Ty = 0 or

\begin{bmatrix} 0&0&0 \\ 1&0&0 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}

Since the pivots of A^T are in the first and second columns we have y_1 and y_2 as basic variables and y_3 as the free variable. From the second row of the above system we have y_1 = 0 and from the third row we have y_2 = 0. Setting the free variable y_3 = 1 then gives us the vector

\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

as a basis for the left nullspace of A. The left nullspace is the z-axis in \mathbf R^3.

Note that the column space \mathcal R(A) (the x-y plane) is perpendicular to the left nullspace \mathcal N(A^T) (the z-axis), while the row space \mathcal R(A^T) (the y-z plane) is perpendicular to the nullspace \mathcal N(A) (the x-axis). This is a foreshadowing of the discussion in section 3.1 on orthogonal subspaces.
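We can verify the subspace descriptions and this orthogonality numerically (my addition):

import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

null_basis  = np.array([1.0, 0.0, 0.0])   # nullspace: the x-axis
lnull_basis = np.array([0.0, 0.0, 1.0])   # left nullspace: the z-axis

print(A @ null_basis)                     # [0. 0. 0.]
print(lnull_basis @ A)                    # [0. 0. 0.]

# The row space (spanned by the nonzero rows of A) is perpendicular to
# the nullspace, and the column space (spanned by the pivot columns) is
# perpendicular to the left nullspace.
for row in A[:2]:
    print(row @ null_basis)               # 0.0
for col in (A[:, 1], A[:, 2]):
    print(col @ lnull_basis)              # 0.0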


Linear Algebra and Its Applications, Exercise 2.4.3

Exercise 2.4.3. For each of the two matrices below give the dimension and find a basis for each of their four subspaces:

A = \begin{bmatrix} 1&2&0&1 \\ 0&1&1&0 \\ 1&2&0&1 \end{bmatrix} \qquad U = \begin{bmatrix} 1&2&0&1 \\ 0&1&1&0 \\ 0&0&0&0 \end{bmatrix}

Answer: We first consider the column spaces \mathcal R(A) and \mathcal R(U). The matrix U has two pivots and therefore rank r = 2; this is the dimension of the column space of U. Since the pivots are in the first and second columns those columns are a basis for \mathcal R(U):

\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}

Note that the third column of U is equal to -2 times the first column plus the second column, and the fourth column is equal to the first column.

Doing Gaussian elimination on the matrix A (i.e., by subtracting the first row from the third) produces U, so the rank of A and the dimension of the column space of A are also 2. Also, since the first and second columns of U (the pivot columns) are a basis for \mathcal R(U) the first and second columns of A are a basis for \mathcal R(A):

\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \qquad \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}

Note that as with U the third column of A is equal to -2 times the first column plus the second column, and the fourth column is equal to the first column.

Turning to the row spaces, since the rows of U are linear combinations of the rows of A and vice versa, the row spaces \mathcal{R}(U^T) and \mathcal{R}(A^T) are the same. Per the discussion on page 91 the nonzero rows of U, the vectors \begin{bmatrix} 1&2&0&1 \end{bmatrix}^T and \begin{bmatrix} 0&1&1&0 \end{bmatrix}^T, form a basis for \mathcal{R}(U^T). Since \mathcal{R}(U^T) = \mathcal{R}(A^T) these vectors also form a basis for \mathcal{R}(A^T). The dimension of each row space is 2.

We now turn to the nullspaces \mathcal N(A) and \mathcal N(U) consisting of the solutions to the equations Ax = 0 and Ux = 0 respectively. As noted above, if we do Gaussian elimination on A (i.e., by subtracting the first row from the third row) then we obtain the matrix U so that any solution to Ax = 0 is a solution to Ux = 0 and vice versa. We therefore have \mathcal N(A) = \mathcal N(U) and just need to calculate one of the two.

In particular for Ux = 0 we must find x = (x_1, x_2, x_3, x_4) such that

\begin{bmatrix} 1&2&0&1 \\ 0&1&1&0 \\ 0&0&0&0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = 0

Since the pivots of U are in the first and second columns we have x_1 and x_2 as basic variables and x_3 and x_4 as free variables.

From the second row of the system above we have x_2 + x_3 = 0 or x_2 = -x_3. From the first row we then have x_1 + 2x_2 + x_4 = x_1 - 2x_3 + x_4 = 0 or x_1 = 2x_3 - x_4. Setting each of the free variables x_3 and x_4 to 1 in turn (with the other free variable set to zero) gives the following vectors as solutions to the homogeneous equation Ux = 0 and a basis for the nullspace of U:

\begin{bmatrix} 2 \\ -1 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \end{bmatrix}

Since \mathcal N(U) = \mathcal N(A) the above vectors also form a basis for the nullspace of A. The dimension of the two nullspaces \mathcal N(U) and \mathcal N(A) is 2 (the number of columns of each matrix minus the rank, or n - r = 4 - 2 = 2).

Finally we turn to finding a basis for each of the left nullspaces \mathcal N(A^T) and \mathcal N(U^T). As discussed on page 95 there are two possible approaches to doing this. One way to find the left nullspace of A is to look at the operations on the rows of A needed to produce zero rows in the resulting echelon matrix U in the process of Gaussian elimination; the coefficients used to carry out those operations make up the basis vectors of the left nullspace \mathcal N(A^T).

In particular, the one and only zero row in U is produced by subtracting the first row of A from the third row of A, with no contribution from the second row; the coefficients for this operation are -1 (for the first row), 0 (for the second row), and 1 (for the third). The vector

\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}

is therefore a basis for the left nullspace \mathcal N(A^T) (which has dimension 1). We can test this by multiplying A on the left by the transpose of this vector:

\begin{bmatrix} -1&0&1 \end{bmatrix} \begin{bmatrix} 1&2&0&1 \\ 0&1&1&0 \\ 1&2&0&1 \end{bmatrix} = \begin{bmatrix} 0&0&0&0 \end{bmatrix}

The left nullspace of U can be found in a similar manner: Since U is already in echelon form with a third row of zeroes, the step of Gaussian elimination to produce that row would be equivalent to multiplying the first row by zero and the second row by zero and then adding them to the third (zero) row; the coefficients for this operation are 0 (for the first row), 0 (for the second row), and 1 (for the third row). The vector

\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

is therefore a basis for the left nullspace \mathcal N(U^T) (which also has dimension 1). As with A we can test this by multiplying U on the left by the transpose of this vector:

\begin{bmatrix} 0&0&1 \end{bmatrix} \begin{bmatrix} 1&2&0&1 \\ 0&1&1&0 \\ 0&0&0&0 \end{bmatrix} = \begin{bmatrix} 0&0&0&0 \end{bmatrix}

An alternate approach to find the left nullspace of U is to explicitly solve U^Ty = 0 or

\begin{bmatrix} 1&0&0 \\ 2&1&0 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}

Gaussian elimination on U^T proceeds as follows: First, subtract two times the first row from the second:

\begin{bmatrix} 1&0&0 \\ 2&1&0 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&0 \\ 1&0&0 \end{bmatrix}

and then subtract the first row from the fourth row:

\begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix}

Finally, subtract the second row from the third row:

\begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix}

We thus have y_1 and y_2 as basic variables (since the pivots are in the first and second columns) and y_3 as a free variable. From the first row of the final matrix we have 1 \cdot y_1 + 0 \cdot y_2 + 0 \cdot y_3 = 0 or y_1 = 0 in the homogeneous case, and from the second row of the final matrix we have 0 \cdot y_1 + 1 \cdot y_2 + 0 \cdot y_3 = 0 or y_2 = 0. Setting the free variable y_3 = 1 then gives us the vector

\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

as a basis for the left nullspace of U. The left nullspace of U has dimension 1 (the number of rows of U minus its rank, or m - r = 3 - 2 = 1).

Similarly we can also find the left nullspace of A by solving the homogeneous system A^Ty = 0 or

\begin{bmatrix} 1&0&1 \\ 2&1&2 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}

Gaussian elimination on A^T proceeds as follows: First, subtract two times the first row from the second:

\begin{bmatrix} 1&0&1 \\ 2&1&2 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&1 \\ 0&1&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix}

and then subtract the first row from the fourth row:

\begin{bmatrix} 1&0&1 \\ 0&1&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&1 \\ 0&1&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix}

Finally, subtract the second row from the third row:

\begin{bmatrix} 1&0&1 \\ 0&1&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&1 \\ 0&1&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix}

We thus have y_1 and y_2 as basic variables (since the pivots are in the first and second columns) and y_3 as a free variable. From the first row of the final matrix we have 1 \cdot y_1 + 0 \cdot y_2 + 1 \cdot y_3 = 0 or y_1 = -y_3 in the homogeneous case, and from the second row of the final matrix we have 0 \cdot y_1 + 1 \cdot y_2 + 0 \cdot y_3 = 0 or y_2 = 0. Setting the free variable y_3 = 1 then gives us the vector

\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}

as a basis for the left nullspace of A. As with U the left nullspace of A has dimension 1 (the number of rows of A minus its rank, or m - r = 3 - 2 = 1).

As with exercise 2.4.2, note that the row space of A is equal to the row space of U because the rows of A are linear combinations of the rows of U and vice versa. Similarly the nullspace of A is equal to the nullspace of U because the elimination steps taking A to U do not change the solutions of Ax = 0.

UPDATE: Corrected two typos involving the equations for the left nullspace; thanks to Lucas for finding the errors.
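As a final numerical check (my addition), NumPy confirms the ranks and the nullspace and left nullspace bases found above for both A and U:

import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
U = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(U))   # 2 2

# The shared nullspace basis found above solves both systems.
for x in (np.array([2.0, -1.0, 1.0, 0.0]), np.array([-1.0, 0.0, 0.0, 1.0])):
    print(A @ x, U @ x)                   # all zeros

# The left nullspace bases found above.
print(np.array([-1.0, 0.0, 1.0]) @ A)     # [0. 0. 0. 0.]
print(np.array([0.0, 0.0, 1.0]) @ U)      # [0. 0. 0. 0.]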
