Linear Algebra and Its Applications, Exercise 2.4.2

Exercise 2.4.2. For each of the two matrices below give the dimension and find a basis for each of their four subspaces:

A = \begin{bmatrix} 0&1&4&0 \\ 0&2&8&0 \end{bmatrix} \qquad U = \begin{bmatrix} 0&1&4&0 \\ 0&0&0&0 \end{bmatrix}

Answer: The echelon matrix U has only a single pivot, in the second column. As discussed on page 93, the second column \begin{bmatrix} 1 \\ 0 \end{bmatrix} is therefore a basis for the column space \mathcal{R}(U). (The third column of U is equal to four times the second column.) The dimension of \mathcal{R}(U) is 1 (same as the rank of U).

The echelon matrix U can be derived from A via Gaussian elimination (i.e., by subtracting two times the first row of A from the second row). Again per the discussion on page 93, since the second column of U is a basis for the column space of U, the corresponding second column of A, the vector \begin{bmatrix} 1 \\ 2 \end{bmatrix}, is a basis for the column space of A. (As with U, the third column of A is equal to four times the second column.) The dimension of \mathcal{R}(A) is 1 (the same as the rank of A, which is the same as the rank of U).
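As a supplemental check (not part of the original exercise), we can confirm these bases with SymPy, whose columnspace() method returns the pivot columns of a matrix:

from sympy import Matrix

A = Matrix([[0, 1, 4, 0], [0, 2, 8, 0]])
U = Matrix([[0, 1, 4, 0], [0, 0, 0, 0]])

# The pivot columns form a basis for each column space
print(A.columnspace())     # [Matrix([[1], [2]])]
print(U.columnspace())     # [Matrix([[1], [0]])]
print(A.rank(), U.rank())  # 1 1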

Turning to the row spaces, the only nonzero row of the echelon matrix U is the first row, so per the discussion on page 91 the vector \begin{bmatrix} 0&1&4&0 \end{bmatrix}^T is a basis for the row space \mathcal{R}(U^T). The dimension of \mathcal{R}(U^T) is 1 (again, the same as the rank of U). Since the matrix U can be derived from A using Gaussian elimination, the row spaces of the two matrices are identical, so the vector \begin{bmatrix} 0&1&4&0 \end{bmatrix}^T is also a basis for the row space \mathcal{R}(A^T). (This basis vector happens to be the first row of A, and the second row of A is equal to two times the first row.) The dimension of \mathcal{R}(A^T) is 1 (same as the rank of A and U).
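Again as a supplemental check, SymPy's rowspace() method returns the nonzero rows of the echelon form, confirming that A and U share the same one-dimensional row space:

from sympy import Matrix

A = Matrix([[0, 1, 4, 0], [0, 2, 8, 0]])
U = Matrix([[0, 1, 4, 0], [0, 0, 0, 0]])

# Both calls return the single basis vector [0, 1, 4, 0]
print(A.rowspace())  # [Matrix([[0, 1, 4, 0]])]
print(U.rowspace())  # [Matrix([[0, 1, 4, 0]])]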

We now turn to the nullspaces \mathcal N(A) and \mathcal N(U), i.e., the solutions to the equations Ax = 0 and Ux = 0. In particular for Ux = 0 we must find x = (x_1, x_2, x_3, x_4) such that

\begin{bmatrix} 0&1&4&0 \\ 0&0&0&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = 0

As noted above, if we do Gaussian elimination on A (multiplying the first row by 2 and subtracting it from the second row) then we obtain the matrix U. Both matrices thus have rank r = 1, with x_2 being a basic variable (since the pivot is in the second column) and x_1, x_3, and x_4 being free variables.

From the equation above we see that we must have x_2 + 4x_3 = 0 or x_2 = -4x_3. Setting each of the free variables x_1, x_3, and x_4 to 1 in turn (with the other free variables set to zero) we have the following set of vectors as solutions to the homogeneous equation Ux = 0 and a basis for the null space of U:

\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ -4 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}

Since U can be obtained from A by Gaussian elimination, any solution to Ax = 0 is also a solution to Ux = 0 and vice versa, so the above vectors also form a basis for the nullspace of A. The dimensions of the two nullspaces \mathcal N(U) and \mathcal N(A) are both 3 (equal to the number of vectors in the basis), and in fact the nullspaces are identical (since they have the exact same basis).
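SymPy's nullspace() method carries out exactly this procedure, setting each free variable to 1 in turn, so it provides another supplemental check (again, not part of the original exercise):

from sympy import Matrix

A = Matrix([[0, 1, 4, 0], [0, 2, 8, 0]])
U = Matrix([[0, 1, 4, 0], [0, 0, 0, 0]])

# Both return the three basis vectors found above (as columns)
print(U.nullspace())
print(A.nullspace() == U.nullspace())  # True: the nullspaces coincide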

Finally we turn to finding a basis for each of the left nullspaces \mathcal N(A^T) and \mathcal N(U^T). As discussed on page 95 there are two possible approaches to doing this. One way to find the left nullspace of A is to look at the operations on the rows of A needed to produce zero rows in the resulting echelon matrix U in the process of Gaussian elimination; the coefficients used to carry out those operations make up the basis vectors of the left nullspace \mathcal N(A^T).

In particular, the one and only zero row in U is produced by multiplying the first row of A by two and subtracting it from the second row of A; the coefficients for this operation are -2 (for the first row) and 1 (for the second). The vector \begin{bmatrix} -2 \\ 1 \end{bmatrix} is therefore a basis for the left nullspace \mathcal N(A^T) (which has dimension 1). We can test this by multiplying A on the left by the transpose of this vector:

\begin{bmatrix} -2&1 \end{bmatrix} \begin{bmatrix} 0&1&4&0 \\ 0&2&8&0 \end{bmatrix} = \begin{bmatrix} 0&0&0&0 \end{bmatrix}

The left nullspace of U can be found in a similar manner: since U is already in echelon form, the elimination step that produces the zero row adds nothing to the second row; in other words, it multiplies the first row by zero and adds it to the second (zero) row. The coefficients for this operation are 0 (for the first row) and 1 (for the second). The vector \begin{bmatrix} 0 \\ 1 \end{bmatrix} is therefore a basis for the left nullspace \mathcal N(U^T) (which also has dimension 1). As with A we can test this by multiplying U on the left by the transpose of this vector:

\begin{bmatrix} 0&1 \end{bmatrix} \begin{bmatrix} 0&1&4&0 \\ 0&0&0&0 \end{bmatrix} = \begin{bmatrix} 0&0&0&0 \end{bmatrix}
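Both of these spot checks are easy to script; here is a SymPy version (supplemental, not from the book) of the two products above:

from sympy import Matrix

A = Matrix([[0, 1, 4, 0], [0, 2, 8, 0]])
U = Matrix([[0, 1, 4, 0], [0, 0, 0, 0]])

# y^T A and y^T U should both be zero rows
print(Matrix([[-2, 1]]) * A)  # Matrix([[0, 0, 0, 0]])
print(Matrix([[0, 1]]) * U)   # Matrix([[0, 0, 0, 0]])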

An alternative approach to finding the left nullspace of U is to explicitly solve U^Ty = 0, or

\begin{bmatrix} 0&0 \\ 1&0 \\ 4&0 \\ 0&0 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}

Gaussian elimination on U^T proceeds as follows, first by exchanging the first row and second row and then by subtracting 4 times the first row from the third:

\begin{bmatrix} 0&0 \\ 1&0 \\ 4&0 \\ 0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0 \\ 0&0 \\ 4&0 \\ 0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0 \\ 0&0 \\ 0&0 \\ 0&0 \end{bmatrix}

We thus have y_1 as a basic variable (since the pivot is in the first column) and y_2 as a free variable. From the first row of the final matrix we have 1 \cdot y_1 + 0 \cdot y_2 = 0 or y_1 = 0 in the homogeneous case. Setting the free variable y_2 = 1 then gives us the vector \begin{bmatrix} 0 \\ 1 \end{bmatrix} as a basis for the left nullspace of U. Since there is only one vector in the basis the left nullspace of U has dimension 1.

Similarly, we can find the left nullspace of A by solving the homogeneous system A^Ty = 0, or

\begin{bmatrix} 0&0 \\ 1&2 \\ 4&8 \\ 0&0 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}

Gaussian elimination on A^T proceeds as follows, first by exchanging the first row and second row and then by subtracting 4 times the first row from the third:

\begin{bmatrix} 0&0 \\ 1&2 \\ 4&8 \\ 0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2 \\ 0&0 \\ 4&8 \\ 0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2 \\ 0&0 \\ 0&0 \\ 0&0 \end{bmatrix}

As with U^Ty = 0 we have y_1 as a basic variable and y_2 as a free variable. From the first row of the final matrix we have 1 \cdot y_1 + 2 \cdot y_2 = 0 or y_1 = -2y_2 in the homogeneous case. Setting the free variable y_2 = 1 then gives us the vector \begin{bmatrix} -2 \\ 1 \end{bmatrix} as a basis for the left nullspace of A. Since there is only one vector in the basis the left nullspace of A has dimension 1.
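This second approach also translates directly into code: asking SymPy for the nullspaces of the transposes reproduces both left-nullspace bases (again a supplemental check):

from sympy import Matrix

A = Matrix([[0, 1, 4, 0], [0, 2, 8, 0]])
U = Matrix([[0, 1, 4, 0], [0, 0, 0, 0]])

print(A.T.nullspace())  # [Matrix([[-2], [1]])]
print(U.T.nullspace())  # [Matrix([[0], [1]])]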

Note that the dimension of the column space of A is the rank r of A, namely 1, while the dimension of the nullspace of A is equal to the number of columns of A minus the rank, or n - r = 4 - 1 = 3. The dimension of the row space of A is also r = 1, while the dimension of the left nullspace of A is equal to the number of rows of A minus the rank, or m - r = 2 - 1 = 1. These results are in accordance with the Fundamental Theorem of Linear Algebra, Part I on page 95. Similar results hold for U.
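The four dimension counts for A can be tallied in a few lines (a supplemental sketch of the bookkeeping above):

from sympy import Matrix

A = Matrix([[0, 1, 4, 0], [0, 2, 8, 0]])
m, n = A.shape  # m = 2 rows, n = 4 columns
r = A.rank()    # r = 1

print(len(A.columnspace()) == r)      # True: dim R(A) = r = 1
print(len(A.nullspace()) == n - r)    # True: dim N(A) = 4 - 1 = 3
print(len(A.rowspace()) == r)         # True: dim R(A^T) = r = 1
print(len(A.T.nullspace()) == m - r)  # True: dim N(A^T) = 2 - 1 = 1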

Also note that the row space of A is equal to the row space of U; this is because the rows of A are linear combinations of the rows of U and vice versa. Similarly, the nullspace of A is equal to the nullspace of U, since elementary row operations do not change the solution set of Ax = 0.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

