## Linear Algebra and Its Applications, Exercise 2.4.3

Exercise 2.4.3. For each of the two matrices below, give the dimension of and find a basis for each of their four subspaces:

$A = \begin{bmatrix} 1&2&0&1 \\ 0&1&1&0 \\ 1&2&0&1 \end{bmatrix} \qquad U = \begin{bmatrix} 1&2&0&1 \\ 0&1&1&0 \\ 0&0&0&0 \end{bmatrix}$

Answer: We first consider the column spaces $\mathcal R(A)$ and $\mathcal R(U)$. The matrix $U$ has two pivots and therefore rank $r = 2$; this is the dimension of the column space of $U$. Since the pivots are in the first and second columns, those columns are a basis for $\mathcal R(U)$:

$\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}$

Note that the third column of $U$ is equal to -2 times the first column plus the second column, and the fourth column is equal to the first column.

Doing Gaussian elimination on the matrix $A$ (i.e., subtracting the first row from the third) produces $U$, so the rank of $A$ and the dimension of the column space of $A$ are also 2. Moreover, since elimination preserves the linear dependencies among columns, the columns of $A$ in the pivot positions (the first and second) are a basis for $\mathcal R(A)$:

$\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \qquad \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}$

Note that as with $U$ the third column of $A$ is equal to -2 times the first column plus the second column, and the fourth column is equal to the first column.
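Although the post works everything out by hand, these facts are easy to confirm with a short script. The following sketch (an addition for illustration, not part of the original solution) uses SymPy's `rref` to check the rank and pivot columns of $A$:

```python
import sympy as sp

# Confirm the rank and pivot columns of A using SymPy's rref.
A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 2, 0, 1]])

R, pivots = A.rref()   # reduced row echelon form and pivot column indices
print(pivots)          # (0, 1): pivots in the first and second columns
print(A.rank())        # 2
```

The pivot column indices `(0, 1)` correspond to the first and second columns, confirming that those columns of $A$ are a basis for $\mathcal R(A)$.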

Turning to the row spaces, since the rows of $U$ are linear combinations of the rows of $A$ and vice versa, the row spaces $\mathcal{R}(U^T)$ and $\mathcal{R}(A^T)$ are the same. Per the discussion on page 91 the nonzero rows of $U$, the vectors $\begin{bmatrix} 1&2&0&1 \end{bmatrix}^T$ and $\begin{bmatrix} 0&1&1&0 \end{bmatrix}^T$, form a basis for $\mathcal{R}(U^T)$. Since $\mathcal{R}(U^T) = \mathcal{R}(A^T)$ these vectors also form a basis for $\mathcal{R}(A^T)$. The dimension of each row space is 2.
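As a quick numerical check (a NumPy sketch added here, not part of the original solution), we can verify that every row of $A$ lies in the span of the two basis rows: stacking the rows of $A$ onto the basis does not increase the rank.

```python
import numpy as np

A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 0],
              [1, 2, 0, 1]])
# the nonzero rows of U, our basis for the row space
basis = np.array([[1, 2, 0, 1],
                  [0, 1, 1, 0]])

# Stacking the rows of A onto the basis leaves the rank at 2,
# so every row of A lies in the span of the basis rows.
print(np.linalg.matrix_rank(basis))                  # 2
print(np.linalg.matrix_rank(np.vstack([basis, A])))  # 2
```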

We now turn to the nullspaces $\mathcal N(A)$ and $\mathcal N(U)$ consisting of the solutions to the equations $Ax = 0$ and $Ux = 0$ respectively. As noted above, if we do Gaussian elimination on $A$ (i.e., by subtracting the first row from the third row) then we obtain the matrix $U$ so that any solution to $Ax = 0$ is a solution to $Ux = 0$ and vice versa. We therefore have $\mathcal N(A) = \mathcal N(U)$ and just need to calculate one of the two.

In particular for $Ux = 0$ we must find $x = (x_1, x_2, x_3, x_4)$ such that

$\begin{bmatrix} 1&2&0&1 \\ 0&1&1&0 \\ 0&0&0&0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = 0$

Since the pivots of $U$ are in the first and second columns we have $x_1$ and $x_2$ as basic variables and $x_3$ and $x_4$ as free variables.

From the second row of the system above we have $x_2 + x_3 = 0$, or $x_2 = -x_3$. From the first row we then have $x_1 + 2x_2 + x_4 = x_1 - 2x_3 + x_4 = 0$, or $x_1 = 2x_3 - x_4$. Setting each of the free variables $x_3$ and $x_4$ to 1 in turn (with the other free variable set to zero) gives the following vectors as solutions to the homogeneous equation $Ux = 0$ and a basis for the nullspace of $U$:

$\begin{bmatrix} 2 \\ -1 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \end{bmatrix}$

Since $\mathcal N(U) = \mathcal N(A)$, the above vectors also form a basis for the nullspace of $A$. The dimension of the two nullspaces $\mathcal N(U)$ and $\mathcal N(A)$ is 2 (the number of columns of each matrix minus the rank, or $n - r = 4 - 2 = 2$).
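We can verify these basis vectors numerically (again a NumPy sketch added for illustration, not part of the original solution): each one should be sent to zero by both $U$ and $A$.

```python
import numpy as np

A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 0],
              [1, 2, 0, 1]])
U = np.array([[1, 2, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 0, 0]])

# special solutions from setting each free variable to 1 in turn
v1 = np.array([2, -1, 1, 0])
v2 = np.array([-1, 0, 0, 1])

for v in (v1, v2):
    assert np.all(U @ v == 0)  # v solves Ux = 0
    assert np.all(A @ v == 0)  # and also Ax = 0, since N(A) = N(U)
```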

Finally we turn to finding a basis for each of the left nullspaces $\mathcal N(A^T)$ and $\mathcal N(U^T)$. As discussed on page 95 there are two possible approaches. One way to find the left nullspace of $A$ is to look at the row operations on $A$ that produce the zero rows of the echelon matrix $U$ during Gaussian elimination; the coefficients used in those operations make up the basis vectors of the left nullspace $\mathcal N(A^T)$.

In particular, the one and only zero row in $U$ is produced by subtracting the first row of $A$ from the third row of $A$, with no contribution from the second row; the coefficients for this operation are -1 (for the first row), 0 (for the second row), and 1 (for the third). The vector

$\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$

is therefore a basis for the left nullspace $\mathcal N(A^T)$ (which has dimension 1). We can test this by multiplying $A$ on the left by the transpose of this vector:

$\begin{bmatrix} -1&0&1 \end{bmatrix} \begin{bmatrix} 1&2&0&1 \\ 0&1&1&0 \\ 1&2&0&1 \end{bmatrix} = \begin{bmatrix} 0&0&0&0 \end{bmatrix}$

The left nullspace of $U$ can be found in a similar manner. Since $U$ is already in echelon form with a third row of zeros, the elimination step that produces that row is trivial: it adds zero times the first row and zero times the second row to the (already zero) third row. The coefficients for this operation are therefore 0 (for the first row), 0 (for the second row), and 1 (for the third row). The vector

$\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$

is therefore a basis for the left nullspace $\mathcal N(U^T)$ (which also has dimension 1). As with $A$ we can test this by multiplying $U$ on the left by the transpose of this vector:

$\begin{bmatrix} 0&0&1 \end{bmatrix} \begin{bmatrix} 1&2&0&1 \\ 0&1&1&0 \\ 0&0&0&0 \end{bmatrix} = \begin{bmatrix} 0&0&0&0 \end{bmatrix}$
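Both of these checks can be carried out in a few lines (a NumPy sketch added for illustration, not part of the original solution):

```python
import numpy as np

A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 0],
              [1, 2, 0, 1]])
U = np.array([[1, 2, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 0, 0]])

y_A = np.array([-1, 0, 1])  # basis vector for N(A^T)
y_U = np.array([0, 0, 1])   # basis vector for N(U^T)

# multiplying on the left by each basis vector gives the zero row vector
print(y_A @ A)  # [0 0 0 0]
print(y_U @ U)  # [0 0 0 0]
```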

An alternate approach to find the left nullspace of $U$ is to explicitly solve $U^Ty = 0$ or

$\begin{bmatrix} 1&0&0 \\ 2&1&0 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$

Gaussian elimination on $U^T$ proceeds as follows: First, subtract two times the first row from the second:

$\begin{bmatrix} 1&0&0 \\ 2&1&0 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&0 \\ 1&0&0 \end{bmatrix}$

and then subtract the first row from the fourth row:

$\begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix}$

Finally, subtract the second row from the third row:

$\begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix}$

We thus have $y_1$ and $y_2$ as basic variables (since the pivots are in the first and second columns) and $y_3$ as a free variable. From the first row of the final matrix we have $1 \cdot y_1 + 0 \cdot y_2 + 0 \cdot y_3 = 0$ or $y_1 = 0$ in the homogeneous case, and from the second row of the final matrix we have $0 \cdot y_1 + 1 \cdot y_2 + 0 \cdot y_3 = 0$ or $y_2 = 0$. Setting the free variable $y_3 = 1$ then gives us the vector

$\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$

as a basis for the left nullspace of $U$. The left nullspace of $U$ has dimension 1 (the number of rows of $U$ minus its rank, or $m - r = 3 - 2 = 1$).
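For comparison, SymPy's `nullspace` method (a check added for illustration, not part of the original solution) solves $U^Ty = 0$ directly and returns the same basis vector:

```python
import sympy as sp

# U^T as a SymPy matrix
Ut = sp.Matrix([[1, 0, 0],
                [2, 1, 0],
                [0, 1, 0],
                [1, 0, 0]])

ns = Ut.nullspace()   # all special solutions of U^T y = 0
print(len(ns))        # 1, matching the dimension m - r = 1
print(ns[0].T)        # the basis vector (0, 0, 1)
```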

Similarly we can also find the left nullspace of $A$ by solving the homogeneous system $A^Ty = 0$ or

$\begin{bmatrix} 1&0&1 \\ 2&1&2 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$

Gaussian elimination on $A^T$ proceeds as follows: First, subtract two times the first row from the second:

$\begin{bmatrix} 1&0&1 \\ 2&1&2 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&1 \\ 0&1&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix}$

and then subtract the first row from the fourth row:

$\begin{bmatrix} 1&0&1 \\ 0&1&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&1 \\ 0&1&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix}$

Finally, subtract the second row from the third row:

$\begin{bmatrix} 1&0&1 \\ 0&1&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&1 \\ 0&1&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix}$

We thus have $y_1$ and $y_2$ as basic variables (since the pivots are in the first and second columns) and $y_3$ as a free variable. From the first row of the final matrix we have $1 \cdot y_1 + 0 \cdot y_2 + 1 \cdot y_3 = 0$ or $y_1 = -y_3$ in the homogeneous case, and from the second row of the final matrix we have $0 \cdot y_1 + 1 \cdot y_2 + 0 \cdot y_3 = 0$ or $y_2 = 0$. Setting the free variable $y_3 = 1$ then gives us the vector

$\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$

as a basis for the left nullspace of $A$. As with $U$ the left nullspace of $A$ has dimension 1 (the number of rows of $A$ minus its rank, or $m - r = 3 - 2 = 1$).
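The same direct computation works for $A$ (again a SymPy sketch added for illustration, not part of the original solution): solving $A^Ty = 0$ recovers the basis vector found above.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 2, 0, 1]])

ns = A.T.nullspace()  # all special solutions of A^T y = 0
print(len(ns))        # 1, matching the dimension m - r = 1
print(ns[0].T)        # the basis vector (-1, 0, 1)
```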

As with exercise 2.4.2, note that the row space of $A$ is equal to the row space of $U$ because the rows of $A$ are linear combinations of the rows of $U$ and vice versa. Similarly the nullspace of $A$ is equal to the nullspace of $U$ for the same reason.

UPDATE: Corrected two typos involving the equations for the left nullspace; thanks to Lucas for finding the errors.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

