Linear Algebra and Its Applications, Exercise 2.5.1

Exercise 2.5.1. Describe the incidence matrix A for the following graph:

The graph has three nodes and three edges, with edge 1 going from node 2 to node 1, edge 2 going from node 3 to node 2, and edge 3 going from node 3 to node 1.

Find a solution to Ax = 0. What vectors are in the nullspace \mathcal N(A)? Also find a solution to A^Ty = 0. What vectors are in the left nullspace \mathcal N(A^T)?

Answer: Since the graph has three edges the incidence matrix A has three rows, and since it has three nodes A has three columns. The incidence matrix is

A = \begin{bmatrix} 1&-1&0 \\ 0&1&-1 \\ 1&0&-1 \end{bmatrix}

The first row represents edge 1 from node 2 to node 1 (i.e., leaving node 2 and entering node 1). The second row represents edge 2 from node 3 to node 2. The third row represents edge 3 from node 3 to node 1.

The sum of the first and second rows equals the third row, so the rank r = 2 and the dimensions of the nullspace and left nullspace are n - r = 3 - 2 = 1 and m - r = 3 - 2 = 1 respectively.

Since the entries in each row sum to zero, the vector x = \begin{bmatrix} 1&1&1 \end{bmatrix}^T is a solution to Ax = 0 and a basis for the (1-dimensional) nullspace \mathcal N(A). The nullspace is then the set of all vectors \begin{bmatrix} c&c&c \end{bmatrix}^T where c is some scalar value.

Since (A^Ty)^T = y^TA, a vector y satisfies A^Ty = 0 exactly when y^TA = 0, so we can find a vector in the left nullspace by looking for a vector y whose transpose multiplies each column of A to produce zero. A little experimentation produces the vector y = \begin{bmatrix} 1&1&-1 \end{bmatrix}^T as a solution to A^Ty = 0 and a basis for the (1-dimensional) left nullspace \mathcal N(A^T); indeed y^TA = 0 simply restates the fact that the first row plus the second row minus the third row of A is zero. The left nullspace is then the set of all vectors \begin{bmatrix} c&c&-c \end{bmatrix}^T where c is some scalar value.
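
As a quick numerical check (a small NumPy sketch added for illustration; the exercise itself is pen-and-paper), we can verify both basis vectors and the rank:

```python
import numpy as np

# Incidence matrix: one row per edge, one column per node
A = np.array([[1, -1,  0],
              [0,  1, -1],
              [1,  0, -1]])

x = np.array([1, 1, 1])    # proposed basis for the nullspace N(A)
y = np.array([1, 1, -1])   # proposed basis for the left nullspace N(A^T)

print(A @ x)                      # [0 0 0], so Ax = 0
print(A.T @ y)                    # [0 0 0], so A^T y = 0
print(np.linalg.matrix_rank(A))   # 2, so both nullspaces are 1-dimensional
```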

UPDATE: Added a paragraph to clarify what the rows of A represent.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.4.21

Exercise 2.4.21. Suppose that for two matrices A and B the associated subspaces (column space, row space, null space, and left nullspace) are the same. Does this imply that A = B?

Answer: The answer is no, as shown by the following counterexample:

A = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} \qquad B = \begin{bmatrix} -1&0 \\ 0&1 \end{bmatrix}

We have the column spaces \mathcal R(A) = \mathcal R(B) = \mathbf R^2. We also have the row spaces \mathcal R(A^T) = \mathcal R(B^T) = \mathbf R^2. Since the matrices are nonsingular the nullspaces \mathcal N(A) and \mathcal N(B) contain only the zero vector and are thus equal. For the same reason the left nullspaces \mathcal N(A^T) and \mathcal N(B^T) also contain only the zero vector and are also equal.

Thus equality of the fundamental subspaces of A and B does not imply that A = B. Also note that we cannot even conclude that A = cB for some scalar c, as shown by the counterexample above.
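
A short NumPy sketch (added for illustration) confirms the counterexample: both matrices have full rank, so all four fundamental subspaces coincide even though the matrices differ:

```python
import numpy as np

A = np.array([[1, 0], [0, 1]])
B = np.array([[-1, 0], [0, 1]])

# Rank 2 means the column and row spaces are all of R^2 and
# both nullspaces contain only the zero vector.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 2 2
print(np.array_equal(A, B))                                # False
```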

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.4.20

Exercise 2.4.20. For each of the following properties find a matrix with that property. If no such matrix exists, explain why that is the case.

a) The column space of the matrix contains the vectors

\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

and the row space contains the vectors

\begin{bmatrix} 1 \\ 1 \end{bmatrix} \qquad \begin{bmatrix} 1 \\ 2 \end{bmatrix}

b) The column space has as a basis the vector

\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}

and the nullspace has as a basis the vector

\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}

c) The column space is all of \mathbf R^4 and the row space is all of \mathbf R^3.

Answer: a) The easiest way to create a matrix A with the specified vectors in the column space \mathcal R(A) is to use the vectors in question as the only columns in the matrix:

A = \begin{bmatrix} 1&0 \\ 0&0 \\ 0&1 \end{bmatrix}

Note that adding the first row of A to the third row of A produces the vector \begin{bmatrix} 1 \\ 1 \end{bmatrix}, so that vector is in the row space \mathcal R(A^T). Similarly, adding the first row of A to twice the third row of A produces the vector \begin{bmatrix} 1 \\ 2 \end{bmatrix}, so that vector is also in the row space \mathcal R(A^T).

Thus the matrix A specified above has the required property.
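
A small NumPy sketch (added for illustration) can confirm these membership claims: a vector lies in the column space exactly when appending it as an extra column leaves the rank unchanged, and similarly for the row space with an extra row:

```python
import numpy as np

A = np.array([[1, 0],
              [0, 0],
              [0, 1]])

def in_column_space(A, v):
    # v is in R(A) iff adjoining it as a column does not raise the rank
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

def in_row_space(A, w):
    return in_column_space(A.T, w)

print(in_column_space(A, np.array([1, 0, 0])))  # True
print(in_column_space(A, np.array([0, 0, 1])))  # True
print(in_row_space(A, np.array([1, 1])))        # True
print(in_row_space(A, np.array([1, 2])))        # True
```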

b) Since the basis vector for the nullspace is in \mathbf R^3 the number of columns of the matrix must be n = 3 (since each vector in the nullspace must be a solution to Ax = 0 and the elements of x multiply the columns of A).

However for an m by n matrix the dimension of the nullspace is n - r, where r is the rank of the matrix. In this case the column space and nullspace each have a single vector as a basis, so the column space and nullspace each have dimension 1. The dimension of the column space implies that the rank r = 1, and then n - r = 1 forces the number of columns to be n = 2 for a matrix with the specified property.

So we must have both n = 3 and n = 2 which is a contradiction. Thus there is no matrix with the required property.

c) If the column space is \mathbf R^4 then its dimension is 4, and if the row space is \mathbf R^3 then its dimension is 3. However the dimensions of the row and column spaces must both be equal to the rank r of the matrix and therefore must be the same. This is a contradiction, so no matrix exists with the required property.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.4.19

Exercise 2.4.19. For the following matrix

\begin{bmatrix} 0&1&2&3&4 \\ 0&1&2&4&6 \\ 0&0&0&1&2 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 1&1&0 \\ 0&1&1 \end{bmatrix} \begin{bmatrix} 0&1&2&3&4 \\ 0&0&0&1&2 \\ 0&0&0&0&0 \end{bmatrix}

find a basis for each of the four associated subspaces.

Answer: Per the above equation the matrix A on the left side can be factored into a lower triangular matrix L with unit diagonal and an upper triangular matrix U.

The matrix U is what is obtained from A through the process of Gaussian elimination (with the matrix L containing the multipliers). The row space of A is therefore the same as the row space of U. The two nonzero rows of U

\begin{bmatrix} 0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \quad \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 2 \end{bmatrix}

are a basis for the row space \mathcal R(A^T) = \mathcal R(U^T).

The matrix U has pivots in the second and fourth columns; those columns are linearly independent and form a basis for the column space \mathcal R(U). The corresponding second and fourth columns of A

\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \quad \begin{bmatrix} 3 \\ 4 \\ 1 \end{bmatrix}

are also linearly independent and form a basis for the column space \mathcal R(A).

The nullspace of A is the same as the nullspace of U, which contains all solutions to

Ux = \begin{bmatrix} 0&1&2&3&4 \\ 0&0&0&1&2 \\ 0&0&0&0&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = 0

Since U has pivots in the second and fourth columns the variables x_2 and x_4 are basic variables with the others being free variables. From the second row above we have x_4 + 2x_5 = 0 or x_4 = -2x_5. Substituting into the first row we have

x_2 + 2x_3 + 3x_4 +4x_5

= x_2 + 2x_3 - 6x_5 +4x_5

= x_2 + 2x_3 - 2x_5 = 0

or x_2 = -2x_3 + 2x_5.

Setting each of the free variables to 1 in turn and the other free variables to zero, we obtain three special solutions, and the general solution to Ux = 0 and thus Ax = 0 is any linear combination

x = c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + c_3 \begin{bmatrix} 0 \\ 2 \\ 0 \\ -2 \\ 1 \end{bmatrix}

The vectors

\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 2 \\ 0 \\ -2 \\ 1 \end{bmatrix}

are thus a basis for the nullspace \mathcal N(A) = \mathcal N(U).

Finally we consider the left nullspace of A. Since U has two pivots the rank of U and therefore A is r = 2. The dimension of \mathcal N(A^T) is therefore m - r = 3 - 2 = 1.

Since A = LU we have L^{-1}A = U. The last row of L^{-1}A is the zero row of U, so the last row of L^{-1} (written as a column vector) is a basis for the (1-dimensional) left nullspace \mathcal N(A^T), which consists of those y such that A^Ty = 0 or equivalently y^TA = 0. However we do not need to compute L^{-1}.

Instead to find a suitable y we can find the coefficients that make the rows of A combine to form the zero row of U. Gaussian elimination on A proceeds by subtracting the first row of A from the second row:

\begin{bmatrix} 0&1&2&3&4 \\ 0&1&2&4&6 \\ 0&0&0&1&2 \end{bmatrix} \Rightarrow \begin{bmatrix} 0&1&2&3&4 \\ 0&0&0&1&2 \\ 0&0&0&1&2 \end{bmatrix}

and then subtracting the second row of the resulting matrix from the third row:

\begin{bmatrix} 0&1&2&3&4 \\ 0&0&0&1&2 \\ 0&0&0&1&2 \end{bmatrix} \Rightarrow \begin{bmatrix} 0&1&2&3&4 \\ 0&0&0&1&2 \\ 0&0&0&0&0 \end{bmatrix} = U

The zero row in U therefore is composed of 1 times the first row of A plus -1 times the second row of A plus 1 times the third row of A. The vector

\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}

is therefore a basis for the left nullspace \mathcal N(A^T).
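
As a numerical check (a NumPy sketch added for illustration), we can confirm that the three special solutions lie in the nullspace and that \begin{bmatrix} 1&-1&1 \end{bmatrix}^T lies in the left nullspace:

```python
import numpy as np

A = np.array([[0, 1, 2, 3, 4],
              [0, 1, 2, 4, 6],
              [0, 0, 0, 1, 2]])

# Columns of N are the three special solutions found above
N = np.array([[1,  0, 0,  0, 0],
              [0, -2, 1,  0, 0],
              [0,  2, 0, -2, 1]]).T

print(A @ N)                       # 3x3 zero matrix: each column solves Ax = 0
print(np.linalg.matrix_rank(A))    # 2, so dim N(A) = 5 - 2 = 3 as claimed
print(A.T @ np.array([1, -1, 1]))  # [0 0 0 0 0]: y is in the left nullspace
```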

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.4.18

Exercise 2.4.18. Given the vectors

\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 1 \\ 5 \\ 0 \end{bmatrix}

and the subspace V that they span, find two matrices A and B such that V = \mathcal R(A) = \mathcal N(B).

Answer: The easiest way to create a matrix whose row space is V is to use the vectors above as the rows of the matrix:

\begin{bmatrix} 1&1&0 \\ 1&2&0 \\ 1&5&0 \end{bmatrix}

However we can simplify things by doing Gaussian elimination on this matrix to obtain another matrix with the same row space V, first subtracting the first row from the second and third rows, and then subtracting four times the second row from the third:

\begin{bmatrix} 1&1&0 \\ 1&2&0 \\ 1&5&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&0 \\ 0&1&0 \\ 0&4&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix}

We can then further simplify by subtracting the second row from the first; again, this does not change the row space:

\begin{bmatrix} 1&1&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix}

Our final matrix is thus

A = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix}

with row space equal to V. Since this A is symmetric its column space is the same as its row space, so \mathcal R(A) = V as required. Note that V is the x-y plane in \mathbf R^3 and the coordinate vectors e_1 = \begin{bmatrix} 1&0&0 \end{bmatrix}^T and e_2 = \begin{bmatrix} 0&1&0 \end{bmatrix}^T are a basis for V.

We now want to find a second matrix B whose nullspace is V. If V = \mathcal N(B) then for any vector v in V we must have Bv = 0. In particular, we must have Be_1 = Be_2 = 0 for the coordinate vectors e_1 and e_2 that form a basis for V. The simplest way to define B is then as the 1 by 3 matrix

B = \begin{bmatrix} 0&0&1 \end{bmatrix}

for which

Be_1 = \begin{bmatrix} 0&0&1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = 0

and

Be_2 = \begin{bmatrix} 0&0&1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = 0

For any solution x = \begin{bmatrix} x_1&x_2&x_3 \end{bmatrix}^T to Bx = 0 we must then have

\begin{bmatrix} 0&0&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = 0

From the above we see that x_3 = 0 and that x_1 and x_2 can take on any value. The nullspace \mathcal N(B) therefore contains all vectors of the form

\begin{bmatrix} c_1 \\ c_2 \\ 0 \end{bmatrix} = c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = c_1e_1 + c_2e_2

But this is equivalent to all linear combinations of the coordinate vectors e_1 and e_2 that form a basis for V. We thus have \mathcal N(B) = V.
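
A short NumPy sketch (added for illustration) confirms that each of the given spanning vectors of V lies in both the row space of A and the nullspace of B:

```python
import numpy as np

A = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
B = np.array([[0, 0, 1]])
V = np.array([[1, 1, 0], [1, 2, 0], [1, 5, 0]])  # given spanning vectors

print(B @ V.T)  # [[0 0 0]]: every spanning vector of V is in N(B)
for v in V:
    # v is in the row space iff stacking it as an extra row keeps the rank at 2
    print(np.linalg.matrix_rank(np.vstack([A, v])) == np.linalg.matrix_rank(A))
```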

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.4.17

Exercise 2.4.17. Point out the error in the following argument: Suppose B is a right-inverse of A so that AB = I. Then we can multiply both sides by A^T to produce A^TAB = A^T or B = (A^TA)^{-1}A^T. But we then have BA = (A^TA)^{-1}A^TA = I so that B is a left-inverse as well.

Answer: The problem is that we have no guarantee that (A^TA)^{-1} actually exists. If A has a right inverse but is not square then r = m < n, and the n by n matrix A^TA then has rank at most m < n; it is therefore singular, and the step from A^TAB = A^T to B = (A^TA)^{-1}A^T is invalid.
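
To see the failure concretely (a NumPy sketch with a hypothetical example): take a 2 by 3 matrix A of rank 2. It has a right inverse, but A^TA is a 3 by 3 matrix of rank at most 2 and hence singular:

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0]])  # rank 2 = m, so a right inverse exists

AtA = A.T @ A
print(np.linalg.matrix_rank(AtA))  # 2 < 3: the 3x3 matrix A^T A is singular
print(np.linalg.det(AtA))          # 0.0, so (A^T A)^{-1} does not exist
```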

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.4.16

Exercise 2.4.16. Given an m by n matrix A the columns of which are linearly independent, fill in the blanks in the following statements: The rank of A is ____. The nullspace is ____. The row space is ____. There is at least one ____-inverse.

Answer: If the columns of A are linearly independent then the rank of A is r = n, the number of columns.

In the system Ax = 0 the product Ax of A and the vector x = \begin{bmatrix} x_1&x_2&\dotsc&x_n \end{bmatrix}^T is a linear combination of the columns of A with x_1, x_2, \dotsc, x_n as coefficients. Since the columns of A are linearly independent, the only way for the linear combination Ax to be zero is for x_1 = x_2 = \cdots = x_n = 0. The nullspace \mathcal N(A) is therefore the set containing only the zero vector.

Since the dimension of the row space \mathcal R(A^T) is the same as the dimension of the column space \mathcal R(A), the dimension of the row space must be n. Each row has n entries and is thus an element of \mathbf R^n; since the row space is an n-dimensional subspace of \mathbf R^n, it must be all of \mathbf R^n.

Since the columns of A are linearly independent and the rank r = n there is at least one left-inverse of A.
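
A NumPy sketch (using a hypothetical 3 by 2 example) illustrates these statements, including the left inverse (A^TA)^{-1}A^T that exists whenever the columns are independent:

```python
import numpy as np

# A hypothetical matrix with linearly independent columns (m = 3, n = 2)
A = np.array([[1, 0],
              [0, 1],
              [1, 1]])

print(np.linalg.matrix_rank(A))       # 2 = n: the rank equals the column count
B = np.linalg.inv(A.T @ A) @ A.T      # A^T A is invertible since r = n
print(np.allclose(B @ A, np.eye(2)))  # True: B is a left inverse of A
```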

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.4.15

Exercise 2.4.15. For each of the following matrices

A = \begin{bmatrix} 1&1&0 \\ 0&1&1 \end{bmatrix} \qquad M = \begin{bmatrix} 1&0 \\ 1&1 \\ 0&1\end{bmatrix} \qquad T = \begin{bmatrix} a&b \\ 0&a \end{bmatrix}

find a left inverse and right inverse if they exist.

Answer: We begin with the 2 by 3 echelon matrix A. Since A has two pivots its rank r = 2. Since A has two rows we also have r = m so that A has a (3 by 2) right inverse. It does not have a left inverse since r \ne n = 3.

We now attempt to find a right inverse C. We must have AC = I or

\begin{bmatrix} 1&1&0 \\ 0&1&1 \end{bmatrix} \begin{bmatrix} c_{11}&c_{12} \\ c_{21}&c_{22} \\ c_{31}&c_{32} \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}

Multiplying the two rows of A by the first column of C we must have c_{11} + c_{21} = 1 and c_{21} + c_{31} = 0. One way to achieve this is to set c_{11} = 1 and c_{21} = c_{31} = 0.

Multiplying the two rows of A by the second column of C we must have c_{12} + c_{22} = 0 and c_{22} + c_{32} = 1. One way to achieve this is to set c_{12} = c_{22} = 0 and c_{32} = 1. We then have

C = \begin{bmatrix} 1&0 \\ 0&0 \\ 0&1 \end{bmatrix}

so that

AC = \begin{bmatrix} 1&1&0 \\ 0&1&1 \end{bmatrix} \begin{bmatrix} 1&0 \\ 0&0 \\ 0&1 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = I

We now consider the matrix M. The first and third rows of M are linearly independent, and the second row of M is equal to the sum of the first and third rows. The rank of M is therefore r = 2. Since M has two columns we also have r = n so that M has a (2 by 3) left inverse. It does not have a right inverse since r \ne m = 3.

We now attempt to find a left inverse B of M such that BM = I. We could proceed as we did above, but there is a shortcut: Note that M = A^T so that we are looking for B such that BA^T = I. Taking transposes, BA^T = I is equivalent to AB^T = I, and we already have a matrix C such that AC = I. We can therefore choose B such that B^T = C, or B = C^T:

B = \begin{bmatrix} 1&0&0 \\ 0&0&1 \end{bmatrix}

We then have

BM = \begin{bmatrix} 1&0&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 1&0 \\ 1&1 \\ 0&1\end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = I

so that B = C^T is a left inverse of M.

Finally we consider the 2 by 2 triangular matrix T. If a \ne 0 then T has two pivots and rank r = 2. In this case since r = m = n both a left inverse B and a right inverse C exist and are the same: T is invertible with T^{-1} = B = C. (If a = 0 then T has rank at most 1 < m = n, so it has neither a left nor a right inverse.)

From the standard formula for the inverse of a 2 by 2 matrix (see page 42) we have

T^{-1} = 1/(a \cdot a - b \cdot 0) \begin{bmatrix} a&-b \\ 0&a \end{bmatrix}

= 1/a^2 \begin{bmatrix} a&-b \\ 0&a \end{bmatrix} = \begin{bmatrix} 1/a&-b/a^2 \\ 0&1/a \end{bmatrix}
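
A quick NumPy check of all three results (added for illustration, with sample values substituted for a and b):

```python
import numpy as np

A = np.array([[1, 1, 0], [0, 1, 1]])
C = np.array([[1, 0], [0, 0], [0, 1]])
print(np.array_equal(A @ C, np.eye(2, dtype=int)))  # True: AC = I

M = A.T
B = C.T
print(np.array_equal(B @ M, np.eye(2, dtype=int)))  # True: BM = I

a, b = 2.0, 3.0                                     # sample values, a != 0
T = np.array([[a, b], [0, a]])
T_inv = np.array([[1/a, -b/a**2], [0, 1/a]])
print(np.allclose(T @ T_inv, np.eye(2)))            # True: T^{-1} as computed
```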

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.4.14

Exercise 2.4.14. Suppose we have the following matrix:

A = \begin{bmatrix} a&b \\ c&d \end{bmatrix}

with a, b, and c given and a \ne 0. For what value of d does A have rank 1? In this case how can A be expressed as the product of a column vector and row vector A = uv^T?

Answer: We can do Gaussian elimination on A by multiplying the first row by c/a (which is permissible since a \ne 0) and subtracting the result from the second row:

\begin{bmatrix} a&b \\ c&d \end{bmatrix} \Rightarrow \begin{bmatrix} a&b \\ 0&d-(c/a)b \end{bmatrix}

Since a \ne 0 there is a pivot in the first column. For the rank of A to be 1 that pivot must be the only one; for this to be true we must have d - (c/a)b = 0 or d = (bc)/a.

If d = (bc)/a then the matrix A can be expressed as the product of a column vector and row vector as follows:

A = \begin{bmatrix} a&b \\ c&(bc)/a \end{bmatrix} = \begin{bmatrix} a \\ c \end{bmatrix} \begin{bmatrix} 1&b/a \end{bmatrix}
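
A quick NumPy check (added for illustration, with hypothetical sample values for a, b, and c):

```python
import numpy as np

a, b, c = 2.0, 3.0, 5.0          # sample values with a != 0
d = b * c / a                    # the value that makes the rank 1
A = np.array([[a, b], [c, d]])

u = np.array([[a], [c]])         # column vector
v = np.array([[1, b / a]])       # row vector
print(np.linalg.matrix_rank(A))  # 1
print(np.allclose(A, u @ v))     # True: A = u v^T
```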

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.

Linear Algebra and Its Applications, Exercise 2.4.13

Exercise 2.4.13. What is the rank of each of the following matrices:

A = \begin{bmatrix} 1&0&0&3 \\ 0&0&0&0 \\ 2&0&0&6 \end{bmatrix} \qquad A = \begin{bmatrix} 2&-2 \\ 2&-2 \end{bmatrix}

Express each matrix as a product of a column vector and row vector, A = uv^T.

Answer: We do Gaussian elimination on the first matrix by subtracting two times the first row from the third:

\begin{bmatrix} 1&0&0&3 \\ 0&0&0&0 \\ 2&0&0&6 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&0&0&3 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix}

The resulting echelon matrix has one pivot, and thus the rank of A is 1. The matrix A can be expressed as the product of a column vector and row vector as follows:

A = \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix}\begin{bmatrix} 1&0&0&3 \end{bmatrix}

We do Gaussian elimination on the second matrix by subtracting the first row from the second:

\begin{bmatrix} 2&-2 \\ 2&-2 \end{bmatrix} \Rightarrow \begin{bmatrix} 2&-2 \\ 0&0 \end{bmatrix}

The resulting echelon matrix has one pivot, and thus the rank of A is 1. The matrix A can be expressed as the product of a column vector and row vector as follows:

A = \begin{bmatrix} 2 \\ 2 \end{bmatrix} \begin{bmatrix} 1&-1 \end{bmatrix}
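
A quick NumPy check of both factorizations (added for illustration):

```python
import numpy as np

A1 = np.array([[1, 0, 0, 3], [0, 0, 0, 0], [2, 0, 0, 6]])
u1, v1 = np.array([[1], [0], [2]]), np.array([[1, 0, 0, 3]])
print(np.linalg.matrix_rank(A1), np.array_equal(A1, u1 @ v1))  # 1 True

A2 = np.array([[2, -2], [2, -2]])
u2, v2 = np.array([[2], [2]]), np.array([[1, -1]])
print(np.linalg.matrix_rank(A2), np.array_equal(A2, u2 @ v2))  # 1 True
```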

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
