Linear Algebra and Its Applications, Review Exercise 2.15

Review exercise 2.15. For each of the following, find a matrix A for which

i) there is either no solution or one solution to Ax = b depending on b

ii) there are an infinite number of solutions to Ax = b for all b

iii) there is either no solution or an infinite number of solutions to Ax = b depending on b

iv) there is exactly one solution for Ax = b for any b

Answer: i) The following system of two equations in two unknowns

\begin{array}{rcrcr} x_1&&&=&b_1 \\ &&x_2&=&b_2 \end{array}

obviously has exactly one solution no matter the value of b. However, if we add a third equation as follows

\begin{array}{rcrcr} x_1&&&=&b_1 \\ &&x_2&=&b_2 \\ x_1&+&x_2&=&b_3 \end{array}

then the resulting system may or may not have a solution depending on the value of b. In particular, in order for the system to have a solution we must have b_3 = b_1 + b_2 in which case the single solution is x = (b_1, b_2).

The system above corresponds to Ax = b where

A = \begin{bmatrix} 1&0 \\ 0&1 \\ 1&1 \end{bmatrix}
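As a quick sanity check, we can test consistency numerically by comparing the rank of A with the rank of the augmented matrix [A b]; the system is solvable exactly when the two ranks agree. (This NumPy sketch is my own addition, not part of the book's solution.)

```python
import numpy as np

# Case (i): Ax = b is solvable exactly when b3 = b1 + b2.
A = np.array([[1, 0],
              [0, 1],
              [1, 1]])

for b in (np.array([1, 2, 3]), np.array([1, 2, 4])):
    # Consistent exactly when appending b does not increase the rank.
    solvable = (np.linalg.matrix_rank(np.column_stack([A, b]))
                == np.linalg.matrix_rank(A))
    print(b, "solvable" if solvable else "no solution")
# [1 2 3] solvable (since 3 = 1 + 2), with the single solution x = (1, 2)
# [1 2 4] no solution
```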

ii) In order for a system to have an infinite number of solutions we can specify more unknowns than there are equations, so that some variables are free variables that can take on any value. For example, if we start again with the system of two equations in two unknowns

\begin{array}{rcrcr} x_1&&&=&b_1 \\ &&x_2&=&b_2 \end{array}

we can add another unknown to obtain the following system of two equations in three unknowns:

\begin{array}{rcrcrcr} x_1&&&+&x_3&=&b_1 \\ &&x_2&+&x_3&=&b_2 \end{array}

This system has the general solution x = (b_1-c, b_2-c, c) for any b, where c is an arbitrary value. Since c can take on any value, the system has an infinite number of solutions for every b.

The system above corresponds to Ax = b where

A = \begin{bmatrix} 1&0&1 \\ 0&1&1 \end{bmatrix}
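We can verify this family of solutions directly. Here is a minimal NumPy sketch (again my own check, assuming NumPy is available) confirming that x = (b_1-c, b_2-c, c) solves Ax = b for several values of c:

```python
import numpy as np

A = np.array([[1, 0, 1],
              [0, 1, 1]])
b = np.array([3.0, 5.0])

# One solution for each value of the free parameter c, so infinitely many in all.
for c in (-1.0, 0.0, 2.5):
    x = np.array([b[0] - c, b[1] - c, c])
    assert np.allclose(A @ x, b)
```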

iii) The system of two equations in three unknowns in (ii) above has an infinite number of solutions. To allow for the possibility of having either an infinite number of solutions or no solution at all, we can follow the example of (i) above and add a third equation that may or may not be satisfiable depending on the value of b (in particular, the value of b_3).

For example, consider the following system of equations:

\begin{array}{rcrcrcr} x_1&&&+&x_3&=&b_1 \\ &&x_2&+&x_3&=&b_2 \\ x_1&-&x_2&&&=&b_3 \end{array}

From (ii) above we know that x = (b_1 - c, b_2 - c, c) (where c is an arbitrary constant) is a solution to the first two equations. For this to also be a solution to the third equation we must have

x_1 - x_2 = (b_1-c) - (b_2-c) = b_1 - b_2 = b_3

So the system has an infinite number of solutions if b_3 = b_1 - b_2 but has no solution otherwise.

The system above corresponds to Ax = b where

A = \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 1&-1&0 \end{bmatrix}

iv) Consider taking the system of three equations in three unknowns from (iii) above and changing the third equation as follows:

\begin{array}{rcrcrcr} x_1&&&+&x_3&=&b_1 \\ &&x_2&+&x_3&=&b_2 \\ &&&&x_3&=&b_3 \end{array}

From the third equation we have x_3 = b_3. From the second equation we have x_2 + x_3 = x_2+b_3 = b_2 or x_2 = b_2 - b_3. From the first equation we have x_1 + x_3 = x_1+b_3 = b_1 or x_1 = b_1 - b_3. So the system has the single solution x = (b_1-b_3, b_2-b_3, b_3).

This corresponds to Ax = b where

A = \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 0&0&1 \end{bmatrix}
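All four cases can also be classified mechanically: Ax = b has no solution when rank([A b]) > rank(A), exactly one solution when the ranks agree and equal the number of unknowns, and infinitely many solutions otherwise. A hedged NumPy sketch (my addition) applying this test to the matrices from (iii) and (iv):

```python
import numpy as np

def classify(A, b):
    # The standard rank test: compare rank(A), rank([A b]), and the
    # number of unknowns n.
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    n = A.shape[1]
    if r_aug > r:
        return "no solution"
    return "one solution" if r == n else "infinitely many solutions"

A3 = np.array([[1, 0, 1], [0, 1, 1], [1, -1, 0]])  # case (iii)
A4 = np.array([[1, 0, 1], [0, 1, 1], [0, 0, 1]])   # case (iv)
print(classify(A3, np.array([1, 2, -1])))  # infinitely many (b3 = b1 - b2)
print(classify(A3, np.array([1, 2, 0])))   # no solution
print(classify(A4, np.array([1, 2, 3])))   # one solution, for any b
```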

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Review Exercise 2.14

Review exercise 2.14. Do the three vectors (1, 1, 3), (2, 3, 6), and (1, 4, 3) form a basis for the vector space \mathbb{R}^3?

Answer: In order for the three vectors to form a basis for \mathbb{R}^3 they must be linearly independent. (Since \mathbb{R}^3 has dimension 3, three linearly independent vectors would automatically span it, so independence is also sufficient.) We can test this by doing elimination on the following matrix with the three vectors as the columns:

\begin{bmatrix} 1&2&1 \\ 1&3&4 \\ 3&6&3 \end{bmatrix}

Subtracting 1 times the first row from the second row we have

\begin{bmatrix} 1&2&1 \\ 1&3&4 \\ 3&6&3 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&1 \\ 0&1&3 \\ 3&6&3 \end{bmatrix}

and subtracting 3 times the first row from the third row we have

\begin{bmatrix} 1&2&1 \\ 0&1&3 \\ 3&6&3 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2&1 \\ 0&1&3 \\ 0&0&0 \end{bmatrix}

Since there are only two pivots, the rank of the matrix is r = 2. This means that the three columns are not linearly independent, and thus the original three vectors cannot form a basis for \mathbb{R}^3.
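This can be confirmed with a one-line rank computation; a minimal NumPy sketch (my own check, not part of the book's solution):

```python
import numpy as np

# The matrix with the three vectors as columns has rank 2, not 3,
# so the vectors cannot form a basis for R^3.
M = np.array([[1, 2, 1],
              [1, 3, 4],
              [3, 6, 3]])
print(np.linalg.matrix_rank(M))  # 2
```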


Linear Algebra and Its Applications, Review Exercise 2.13

Review exercise 2.13. For the matrix

A = \begin{bmatrix} a&a&a&a \\ a&b&b&b \\ a&b&c&c \\ a&b&c&d \end{bmatrix}

find its triangular factors A = LU and describe the conditions under which the columns of A are linearly independent.

Answer: We start elimination by subtracting 1 times the first row from the second row, with l_{21} = 1

\begin{bmatrix} a&a&a&a \\ a&b&b&b \\ a&b&c&c \\ a&b&c&d \end{bmatrix} \Rightarrow \begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ a&b&c&c \\ a&b&c&d \end{bmatrix}

L = \begin{bmatrix} 1&&& \\ 1&1&& \\ ?&?&1& \\ ?&?&?&1 \end{bmatrix}

We next subtract 1 times the first row from the third row, with l_{31} = 1

\begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ a&b&c&c \\ a&b&c&d \end{bmatrix}\Rightarrow \begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ 0&b-a&c-a&c-a \\ a&b&c&d \end{bmatrix}

L = \begin{bmatrix} 1&&& \\ 1&1&& \\ 1&?&1& \\ ?&?&?&1 \end{bmatrix}

and then subtract 1 times the first row from the fourth row, with l_{41} = 1

\begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ 0&b-a&c-a&c-a \\ a&b&c&d \end{bmatrix} \Rightarrow \begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ 0&b-a&c-a&c-a \\ 0&b-a&c-a&d-a \end{bmatrix}

L = \begin{bmatrix} 1&&& \\ 1&1&& \\ 1&?&1& \\ 1&?&?&1 \end{bmatrix}

Turning to the second column, we subtract 1 times the second row from the third row, with l_{32} = 1

\begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ 0&b-a&c-a&c-a \\ 0&b-a&c-a&d-a \end{bmatrix} \Rightarrow \begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ 0&0&c-b&c-b \\ 0&b-a&c-a&d-a \end{bmatrix}

L = \begin{bmatrix} 1&&& \\ 1&1&& \\ 1&1&1& \\ 1&?&?&1 \end{bmatrix}

and then subtract 1 times the second row from the fourth row, with l_{42} = 1

\begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ 0&0&c-b&c-b \\ 0&b-a&c-a&d-a \end{bmatrix} \Rightarrow \begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ 0&0&c-b&c-b \\ 0&0&c-b&d-b \end{bmatrix}

L = \begin{bmatrix} 1&&& \\ 1&1&& \\ 1&1&1& \\ 1&1&?&1 \end{bmatrix}

Finally we subtract 1 times the third row from the fourth row, with l_{43} = 1

\begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ 0&0&c-b&c-b \\ 0&0&c-b&d-b \end{bmatrix} \Rightarrow \begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ 0&0&c-b&c-b \\ 0&0&0&d-c \end{bmatrix}

L = \begin{bmatrix} 1&&& \\ 1&1&& \\ 1&1&1& \\ 1&1&1&1 \end{bmatrix}

We thus have

A = LU = \begin{bmatrix} 1&&& \\ 1&1&& \\ 1&1&1& \\ 1&1&1&1 \end{bmatrix} \begin{bmatrix} a&a&a&a \\ 0&b-a&b-a&b-a \\ 0&0&c-b&c-b \\ 0&0&0&d-c \end{bmatrix}

In order for the columns of U and thus the columns of A to be linearly independent, the values in the four pivot positions must be nonzero, so we must have a \ne 0, b \ne a, c \ne b, and d \ne c.

Note that this does not mean that all four values must be nonzero, or that all four values have to be distinct. For example, the conditions would be satisfied if a = c = 1 and b = d = 0, so that

A = \begin{bmatrix} 1&1&1&1 \\ 1&0&0&0 \\ 1&0&1&1 \\ 1&0&1&0 \end{bmatrix}

= \begin{bmatrix} 1&&& \\ 1&1&& \\ 1&1&1& \\ 1&1&1&1 \end{bmatrix} \begin{bmatrix} 1&1&1&1 \\ 0&-1&-1&-1 \\ 0&0&1&1 \\ 0&0&0&-1 \end{bmatrix} = LU
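We can also check the factorization symbolically. This SymPy sketch (my own verification, not part of the original solution) confirms that LU reproduces A for arbitrary a, b, c, and d:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
L = sp.Matrix([[1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1]])
U = sp.Matrix([[a, a, a, a],
               [0, b - a, b - a, b - a],
               [0, 0, c - b, c - b],
               [0, 0, 0, d - c]])
A = sp.Matrix([[a, a, a, a], [a, b, b, b], [a, b, c, c], [a, b, c, d]])
# The difference simplifies to the zero matrix, so A = LU holds symbolically.
assert sp.simplify(L * U - A) == sp.zeros(4, 4)
```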


Linear Algebra and Its Applications, Review Exercise 2.12

Review exercise 2.12. The matrix A is n by n-1 and has rank n-2. What is the dimension of its nullspace?

Answer: The dimension of the nullspace \mathcal{N}(A) is the number of columns of A minus the rank of A, or (n-1) - (n-2) = 1.
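To illustrate with a concrete case, the following sketch (my own addition, assuming NumPy and SciPy are available, and using an illustrative matrix of my choosing) takes n = 4 and a 4 by 3 matrix of rank n - 2 = 2, and confirms that the nullspace has dimension 1:

```python
import numpy as np
from scipy.linalg import null_space

# A hypothetical 4 by 3 example of rank 2: the third column is the sum
# of the first two, and the last two rows are combinations of the first two.
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 2],
              [2, 1, 3]])
print(np.linalg.matrix_rank(A))   # 2
print(null_space(A).shape[1])     # 1 = columns - rank = 3 - 2
```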


Linear Algebra and Its Applications, Review Exercise 2.11

Review exercise 2.11. a) Given the following matrix

A = LU = \begin{bmatrix} 1&&& \\ 2&1&& \\ 2&1&1& \\ 3&2&4&1 \end{bmatrix} \begin{bmatrix} 1&2&0&1&2&1 \\ 0&0&2&2&0&0 \\ 0&0&0&0&0&1 \\ 0&0&0&0&0&0 \end{bmatrix}

find its rank and a basis for the nullspace.

b) Are the first three rows of U a basis for the row space of A? Are the first, third, and sixth columns of U a basis for the column space of A? Are the four rows of A a basis for the row space of A?

c) Find the largest possible set of linearly independent vectors b for which Ax = b has a solution.

d) In doing elimination on A what value is used to multiply the third row before subtracting it from the fourth row?

Answer: a) The rank of A is the same as the rank of U. Since U has pivots in three columns (1, 3, and 6) the rank of U and thus of A is r = 3.

We now find the nullspace of A. Since U has pivots in columns 1, 3, and 6 the variables x_1, x_3, and x_6 are basic variables and x_2, x_4, and x_5 are free variables.

If we set x_2 = 1 and x_4 = x_5 = 0 then from the third row of U we have x_6 = 0. From the second row of U we have 2x_3 + 2x_4 = 2x_3 + 0 = 0 or x_3 = 0. Finally, from the first row of U we have x_1 + 2x_2 + x_4 + 2x_5 + x_6 = x_1 + 2 + 0 + 0 + 0 = 0 or x_1 = -2. So one solution to Ax = 0 is (-2, 1, 0, 0, 0, 0).

Next we set x_4 = 1 and x_2 = x_5 = 0. From the third row of U we again have x_6 = 0. From the second row of U we have 2x_3 + 2x_4 = 2x_3 + 2 = 0 or x_3 = -1. Finally, from the first row of U we have x_1 + 2x_2 + x_4 + 2x_5 + x_6 = x_1 + 0 + 1 + 0 + 0 = 0 or x_1 = -1. So a second solution to Ax = 0 is (-1, 0, -1, 1, 0, 0).

Finally we set x_5 = 1 and x_2 = x_4 = 0. From the third row of U we again have x_6 = 0. From the second row of U we have 2x_3 + 2x_4 = 2x_3 + 0 = 0 or x_3 = 0. Finally, from the first row of U we have x_1 + 2x_2 + x_4 + 2x_5 + x_6 = x_1 + 0 + 0 + 2 + 0 = 0 or x_1 = -2. So a third solution to Ax = 0 is (-2, 0, 0, 0, 1, 0).

We thus have three solutions to Ax = 0 that together form a basis for the nullspace of A:

\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} -1 \\ 0 \\ -1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} -2 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}
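As a numerical check (my own sketch, not from the book), we can rebuild A = LU and confirm that the rank is 3 and that all three vectors lie in the nullspace:

```python
import numpy as np

L = np.array([[1, 0, 0, 0], [2, 1, 0, 0], [2, 1, 1, 0], [3, 2, 4, 1]])
U = np.array([[1, 2, 0, 1, 2, 1],
              [0, 0, 2, 2, 0, 0],
              [0, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0, 0]])
A = L @ U
print(np.linalg.matrix_rank(A))  # 3, so the nullspace has dimension 6 - 3 = 3
for x in ([-2, 1, 0, 0, 0, 0], [-1, 0, -1, 1, 0, 0], [-2, 0, 0, 0, 1, 0]):
    assert np.allclose(A @ np.array(x), 0)
```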

b) The three questions in (b) above are true or false as follows:

True: The rows of U are created by multiplying rows of A by scalars and subtracting them from other rows, and hence the rows of U are linear combinations of the rows of A. The space spanned by the rows of U (the row space of U) is therefore the same as the space spanned by the rows of A (the row space of A). The first three rows of U are linearly independent and the dimension of the row space of U is 3 (the rank of U). The first three rows of U therefore serve as a basis for the row space of U, and thus for the row space of A as well.

False: Columns 1, 3, and 6 of U are linearly independent and serve as a basis for the column space of U; however the column space of U is not the same as the column space of A and thus the basis for U is not a basis for A. (The corresponding columns 1, 3, and 6 of A do form a basis for the column space of A.)

False: The rank of A is 3; this is the dimension of its column space and its row space. The four rows of A are not linearly independent (otherwise the rank of A would be 4, not 3) and thus cannot be a basis.

c) If Ax = b for some b then b is in the column space of A. From the form of U we know that columns 1, 3, and 6 of A are linearly independent and form a basis for the column space of A.

We have

A = LU = \begin{bmatrix} 1&&& \\ 2&1&& \\ 2&1&1& \\ 3&2&4&1 \end{bmatrix} \begin{bmatrix} 1&2&0&1&2&1 \\ 0&0&2&2&0&0 \\ 0&0&0&0&0&1 \\ 0&0&0&0&0&0 \end{bmatrix}

= \begin{bmatrix} 1&2&0&1&2&1 \\ 2&4&2&4&4&2 \\ 2&4&2&4&4&3 \\ 3&6&4&7&6&7 \end{bmatrix}

So the following are linearly independent vectors b such that there exist solutions to Ax = b:

c_1 \begin{bmatrix} 1 \\ 2 \\ 2 \\ 3 \end{bmatrix} \qquad c_2 \begin{bmatrix} 0 \\ 2 \\ 2 \\ 4 \end{bmatrix} \qquad c_3 \begin{bmatrix} 1 \\ 2 \\ 3 \\ 7 \end{bmatrix}

where c_1, c_2, and c_3 are nonzero. Since the column space of A has dimension r = 3, any set of more than three vectors in it must be linearly dependent, so this is the largest possible set.

d) The multipliers used in elimination to reduce A to the echelon form U are in the matrix L. In particular the value multiplying the third row prior to subtracting from the fourth row is l_{43} = 4.


Linear Algebra and Its Applications, Review Exercise 2.10

Review exercise 2.10. Given the set of all linear transformations from \mathbb{R}^n to \mathbb{R}^n define operations for scalar multiplication and vector addition that will make the set a vector space. What is the dimension of the resulting vector space?

Answer: Let A and B be linear transformations from \mathbb{R}^n to \mathbb{R}^n and let [A] and [B] be the n by n matrices corresponding to A and B respectively. If c is a scalar define the scalar product cA to be the linear transformation represented by the matrix c[A] and define the vector sum A+B to be the linear transformation represented by the matrix [A]+[B].

The set of linear transformations from \mathbb{R}^n to \mathbb{R}^n is a vector space under the operations thus defined; this follows from the fact that the set of n by n matrices is a vector space under those operations. The dimension of the space is n^2.


Linear Algebra and Its Applications, Review Exercise 2.9

Review exercise 2.9. Answer the following questions for the vector space of 2 by 2 matrices:

a) Does the set of 2 by 2 matrices with rank 1 form a subspace?

b) What is the subspace spanned by the 2 by 2 permutation matrices?

c) What is the subspace spanned by the 2 by 2 matrices with all positive entries (a_{ij} > 0 for all i and j)?

d) What is the subspace spanned by the 2 by 2 matrices that are invertible?

Answer: a) Consider the following two matrices of rank 1:

\begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} \qquad \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix}

If we add these two matrices

\begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} + \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}

we obtain a matrix with rank 2. So the set of rank 1 matrices is not closed under addition and is therefore not a subspace.

b) The two 2 by 2 permutation matrices are

P_1 = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} \qquad P_2 = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix}

with P_1 equal to the identity matrix, i.e., leaving the order of rows unchanged.

Linear combinations of P_1 and P_2 are of the form:

c_1P_1 + c_2P_2

= c_1 \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} + c_2 \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix}

= \begin{bmatrix} c_1&c_2 \\ c_2&c_1 \end{bmatrix}

Thus the space spanned by P_1 and P_2 is all 2 by 2 matrices for which a_{11} = a_{22} and a_{12} = a_{21}.

c) To answer this question we begin with the following four matrices:

E_1 = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} \qquad E_2 = \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix}

E_3 = \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix} \qquad E_4 = \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix}

Linear combinations of E_1 through E_4 are of the form:

c_1E_1 + c_2E_2 + c_3E_3 + c_4E_4

= c_1 \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} + c_2 \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix}

+ c_3 \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix} + c_4 \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix}

= \begin{bmatrix} c_1&c_2 \\ c_3&c_4 \end{bmatrix}

Thus the space spanned by E_1 through E_4 is the vector space of all 2 by 2 matrices.

Now clearly E_1 through E_4 are not positive matrices, since they contain zero entries. However, each of E_1 through E_4 can be expressed as a linear combination of positive matrices as follows:

E_1 = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} = \begin{bmatrix} 2&1 \\ 1&1 \end{bmatrix} - \begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix}

E_2 = \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} = \begin{bmatrix} 1&2 \\ 1&1 \end{bmatrix} - \begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix}

E_3 = \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix} = \begin{bmatrix} 1&1 \\ 2&1 \end{bmatrix} - \begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix}

E_4 = \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix} = \begin{bmatrix} 1&1 \\ 1&2 \end{bmatrix} - \begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix}

Since each of E_1 through E_4 can be expressed as a linear combination of positive matrices, and any 2 by 2 matrix can be expressed as a linear combination of E_1 through E_4, we conclude that any 2 by 2 matrix can be expressed as a linear combination of positive matrices. The set of positive matrices therefore spans the space of 2 by 2 matrices.

Two final notes: First, from above it is clear that the following five positive matrices span the space of 2 by 2 matrices:

A_1 = \begin{bmatrix} 2&1 \\ 1&1 \end{bmatrix} \qquad A_2 = \begin{bmatrix} 1&2 \\ 1&1 \end{bmatrix}

A_3 = \begin{bmatrix} 1&1 \\ 2&1 \end{bmatrix} \qquad A_4 = \begin{bmatrix} 1&1 \\ 1&2 \end{bmatrix}

A_5 = \begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix}

However, since the space of 2 by 2 matrices is spanned by the four matrices E_1 through E_4, the dimension of the space is only 4. We therefore conclude that the matrices A_1 through A_5 are linearly dependent and that one of them can be expressed as a linear combination of the others.

If we add A_1 through A_4 we see that

A_1+A_2+A_3+A_4

= \begin{bmatrix} 2&1 \\ 1&1 \end{bmatrix} + \begin{bmatrix} 1&2 \\ 1&1 \end{bmatrix} + \begin{bmatrix} 1&1 \\ 2&1 \end{bmatrix} + \begin{bmatrix} 1&1 \\ 1&2 \end{bmatrix}

= \begin{bmatrix} 5&5 \\ 5&5 \end{bmatrix} = 5A_5

We therefore have A_5 = \frac{1}{5} \left(A_1+A_2+A_3+A_4\right). Note that this implies that A_1 through A_4 by themselves (i.e., without A_5) span the space of 2 by 2 matrices. We also know that A_1 through A_4 are linearly independent: if they were linearly dependent then there would be at most three linearly independent matrices in the set, and three linearly independent matrices would not be sufficient to span the 4-dimensional space of 2 by 2 matrices. The matrices A_1 through A_4 therefore form a basis for the space of 2 by 2 matrices, an alternative basis to E_1 through E_4.
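One way to verify that A_1 through A_4 are linearly independent is to flatten each matrix into a vector in \mathbb{R}^4 and compute the rank of the resulting 4 by 4 matrix; a short NumPy sketch (my addition):

```python
import numpy as np

# Flatten A_1 through A_4 into row vectors; rank 4 means they are
# linearly independent and hence a basis for the 2 by 2 matrices.
mats = [np.array([[2, 1], [1, 1]]), np.array([[1, 2], [1, 1]]),
        np.array([[1, 1], [2, 1]]), np.array([[1, 1], [1, 2]])]
M = np.array([m.flatten() for m in mats])
print(np.linalg.matrix_rank(M))  # 4
```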

Second, note that any 2 by 2 matrix is equivalent to a vector in \mathbb{R}^4; thus, for example the matrix A_1 is equivalent to the vector v_1 = \left(2, 1, 1, 1\right) and similarly for A_2 through A_4. Since the set of positive matrices spans the space of 2 by 2 matrices, we can also conclude that the set of positive vectors spans \mathbb{R}^4.

(We can extend this argument to show that the set of positive vectors in \mathbb{R}^N spans \mathbb{R}^N, and to derive a basis for \mathbb{R}^N consisting of N positive vectors, analogous to A_1 through A_4 above. However we leave that as an exercise for the reader.)

d) Consider the following two invertible matrices:

\begin{bmatrix} 1&1 \\ 1&0 \end{bmatrix} \qquad \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix}

If we subtract the second matrix from the first we have

\begin{bmatrix} 1&1 \\ 1&0 \end{bmatrix} - \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} = E_1

where E_1 is one of the matrices from part (c) above.

We can similarly express each of E_2 through E_4 as the difference between two invertible matrices:

\begin{bmatrix} 1&1 \\ 0&1 \end{bmatrix} - \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} = E_2

\begin{bmatrix} 1&0\\ 1&1 \end{bmatrix} - \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} = \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix} = E_3

\begin{bmatrix} 0&1\\ 1&1 \end{bmatrix} - \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} = \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix} = E_4

So we can express each of the matrices E_1 through E_4 as a linear combination of invertible matrices. But from (c) above we know that E_1 through E_4 span the entire space of 2 by 2 matrices. Therefore the set of invertible 2 by 2 matrices also spans the entire space of 2 by 2 matrices.

UPDATE: I corrected the answer to (c); in my original answer I mistakenly identified E_1 through E_4 as positive matrices. (They are instead merely non-negative.)


Linear Algebra and Its Applications, Review Exercise 2.8

Review exercise 2.8. Do the following:

a) Find a matrix whose nullspace contains the vector x = (1, 1, 2).

b) Find a matrix whose left nullspace contains the vector y = (1, 5).

c) Find a matrix with column space spanned by (1, 1, 2) and row space spanned by (1, 5).

d) Given an arbitrary set of three vectors in \mathbb{R}^6 and a second arbitrary set of three vectors in \mathbb{R}^5 determine whether a 6 by 5 matrix exists for which the first three vectors span the column space and the second three vectors span the row space.

Answer: a) One way to find a matrix satisfying this criterion is simply to try different combinations of the entries of (1, 1, 2). For example, if we add the first two entries of (1, 1, 2) and subtract the third entry we get zero. Similarly, if we multiply the second element of (1, 1, 2) by 2 and subtract the third entry we also get zero. So if we have the matrix

A = \begin{bmatrix} 1&1&-1 \\ 0&2&-1 \end{bmatrix}

then the vector (1, 1, 2) is in the nullspace of A:

\begin{bmatrix} 1&1&-1 \\ 0&2&-1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

b) We are looking for a matrix B for which (1, 5) is in the left nullspace of B. This means that (1, 5) would be in the nullspace of B^T. An easy way to find a matrix B is thus to find a matrix (call it B^T) for which (1, 5) is in the nullspace and then transpose that matrix to obtain B.

Note that if we multiply the first entry in (1, 5) by 5 and then subtract the second entry we get 0. We also get zero if we multiply the first entry by 10 and subtract 2 times the second entry. So if we have

B^T = \begin{bmatrix} 5&-1 \\ 10&-2 \end{bmatrix}

then the vector (1, 5) is in the nullspace of B^T and thus is in the left nullspace of

B = \begin{bmatrix} 5&10 \\ -1&-2 \end{bmatrix}

c) The easiest way to construct the desired matrix is to make the first row of the matrix (1, 5) and the first column of the matrix (1, 1, 2); since the first entry of each vector is 1 this works out:

C = \begin{bmatrix} 1&5 \\ 1&? \\ 2&? \end{bmatrix}

We then fill out the other entries of C so that the second and third rows are multiples of the first row, and the second column is a multiple of the first column:

C = \begin{bmatrix} 1&5 \\ 1&5 \\ 2&10 \end{bmatrix}

This means that the column space and row space of C depend solely on the first column and first row respectively.
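Here is a combined NumPy spot-check of the matrices constructed in parts (a) through (c); the sketch is my own addition and simply verifies the claims above:

```python
import numpy as np

A = np.array([[1, 1, -1], [0, 2, -1]])
assert np.allclose(A @ np.array([1, 1, 2]), 0)   # (a): x is in N(A)

B = np.array([[5, 10], [-1, -2]])
assert np.allclose(np.array([1, 5]) @ B, 0)      # (b): y is in the left nullspace

C = np.array([[1, 5], [1, 5], [2, 10]])
# (c): rank 1, with column space spanned by (1, 1, 2) and row space by (1, 5).
assert np.linalg.matrix_rank(C) == 1
```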

d) There are at least two possible ways to approach this problem: to find a way to construct a 6 by 5 matrix that has the desired property no matter the choice of vectors, or to find a counterexample showing that for at least some sets of vectors we cannot construct such a matrix. We'll take the latter approach.

Note that since the sets of vectors are arbitrary we can choose vectors that are linearly independent or linearly dependent. For example, suppose from \mathbb{R}^6 we choose three linearly independent vectors to span the column space of the matrix. Also suppose that from \mathbb{R}^5 we choose three vectors that are linearly dependent (for example, with the second two vectors multiples of the first vector) and wish to have these vectors span the row space of the matrix.

But this is impossible: If the column space of the matrix is spanned by the first set of three linearly independent vectors then the dimension of the column space will be 3. (Recall that a set of linearly independent vectors that spans a space forms a basis for the space, with the dimension of the space equal to the number of basis vectors.) On the other hand, if the row space of the matrix is spanned by the second set of three linearly dependent vectors then the dimension of the row space will be less than 3. (For example, if the second and third vectors in the set are multiples of the first one then there is only one linearly independent vector in the set. If the set spans the row space then that first vector forms a basis for the space, and the dimension of the space will be 1.)

Since the dimension of the row space of a matrix must equal the dimension of the column space, we have a contradiction.

Here is a concrete example of such a 6 by 5 matrix, with a set of three linearly independent vectors chosen to be columns 1, 2, and 3, and a second set of three linearly dependent vectors chosen to be rows 4, 5, and 6:

\begin{bmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 1&1&1&0&0 \\ 2&2&2&0&0 \\ 3&3&3&0&0 \end{bmatrix}

Note that the three linearly independent columns do indeed span the column space, since the last two columns are zero and thus are linearly dependent on the first three. However the last three rows do not span the row space, since the first three rows cannot be expressed as linear combinations of the last three rows. (Instead rows 4 through 6 can be expressed as linear combinations of rows 1 through 3; rows 1 through 3 are linearly independent and form a basis for the row space. The dimension of the row space is thus 3, the same as the dimension of the column space.)
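A quick rank computation on this matrix (a NumPy sketch, my own addition) makes the mismatch explicit:

```python
import numpy as np

M = np.array([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0],
              [1, 1, 1, 0, 0], [2, 2, 2, 0, 0], [3, 3, 3, 0, 0]])
print(np.linalg.matrix_rank(M))      # 3 = dimension of both row and column space
print(np.linalg.matrix_rank(M[3:]))  # 1, so rows 4-6 cannot span the row space
```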

We conclude that we cannot construct a 6 by 5 matrix whose column space is spanned by an arbitrary set of three vectors from \mathbb{R}^6 and whose row space is spanned by an arbitrary set of three vectors from \mathbb{R}^5.

UPDATE: Rewrote the answer to (d) to make it more clear.


Linear Algebra and Its Applications, Review Exercise 2.7

Review exercise 2.7. Find the most general solution to the following system of linear equations:

\begin{array}{rcrcrcr} u&+&v&+&w&=&1 \\ u&&&-&w&=&2 \end{array}

Answer: This system corresponds to the system Ax = b where

A = \begin{bmatrix} 1&1&1 \\ 1&0&-1 \end{bmatrix} \qquad x = \begin{bmatrix} u \\ v \\ w \end{bmatrix} \qquad b = \begin{bmatrix} 1 \\ 2 \end{bmatrix}

The general solution to this system is the sum of a particular solution to Ax = b and the general solution to the homogeneous system Ax = 0. We first solve the homogeneous system.

We start by multiplying the first row of A by 1 and subtracting it from the second row:

\begin{bmatrix} 1&1&1 \\ 1&0&-1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&1 \\ 0&-1&-2 \end{bmatrix}

The resulting matrix is in echelon form with pivots in columns 1 and 2, u and v as basic variables, and w as a free variable. Setting w to 1 and solving the homogeneous system, from the second row we have -v - 2w = -v - 2 = 0 or v = -2. From the first row we then have u+v+w = u - 2 + 1 = u - 1 = 0 or u=1. So (1, -2, 1) is a solution for the homogeneous system, as is any multiple of that vector.

To find a particular solution we set w = 0 and go back to the original system. From the second equation we have u - w = u - 0 = 2 or u = 2. From the first equation we have u + v + w = 2 + v + 0 = 1 or v = -1. So (2, -1, 0) is a particular solution.

The general solution to the original system is thus all vectors of the form

\begin{bmatrix} 2 \\ -1 \\ 0 \end{bmatrix} + w \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}

where w can take on any value.
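We can confirm the general solution numerically; this NumPy sketch (my own check) verifies that the particular solution plus any multiple of the homogeneous solution satisfies both equations:

```python
import numpy as np

A = np.array([[1, 1, 1], [1, 0, -1]])
b = np.array([1.0, 2.0])
xp = np.array([2.0, -1.0, 0.0])  # particular solution
xn = np.array([1.0, -2.0, 1.0])  # homogeneous solution
for w in (-3.0, 0.0, 1.5):
    assert np.allclose(A @ (xp + w * xn), b)
```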


Linear Algebra and Its Applications, Review Exercise 2.6

Review exercise 2.6. Given the matrices

A = \begin{bmatrix} 1&2 \\ 3&6 \end{bmatrix} \qquad B = \begin{bmatrix} 0&0 \\ 1&2 \end{bmatrix} \qquad C = \begin{bmatrix} 1&1&0&0 \\ 0&1&0&1 \end{bmatrix}

find bases for each of their four fundamental subspaces.

Answer: The second column of A is equal to twice the first column so the rank of A (and the dimension of the column space of A) is 1. The first column (1, 3) is a basis for the column space of A.

The dimension of the row space of A is also 1 (the rank of A), and the first row (1, 2) is a basis for the row space of A.

We can reduce A to echelon form by subtracting 3 times the first row from the second row:

\begin{bmatrix} 1&2 \\ 3&6 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2 \\ 0&0 \end{bmatrix}

The resulting echelon matrix has one pivot, with x_1 a basic variable and x_2 a free variable. Setting x_2 = 1 we have from the first row of the echelon matrix x_1 + 2x_2 = x_1 + 2 \cdot 1 = 0 or x_1 = -2. The vector (-2, 1) is thus a basis for the nullspace of A.

The left nullspace of A is the nullspace of

A^T = \begin{bmatrix} 1&3 \\ 2&6 \end{bmatrix}

We can reduce A^T to echelon form by subtracting 2 times the first row from the second row:

\begin{bmatrix} 1&3 \\ 2&6 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&3 \\ 0&0 \end{bmatrix}

The resulting echelon matrix has one pivot, with x_1 a basic variable and x_2 a free variable. Setting x_2 = 1 we have from the first row of the echelon matrix x_1 + 3x_2 = x_1 + 3 \cdot 1 = 0 or x_1 = -3. The vector (-3, 1) is thus a basis for the nullspace of A^T and for the left nullspace of A.

As with A, the second column of B is equal to twice the first column, so the rank of B (and the dimension of the column space of B) is 1. The first column (0, 1) is a basis for the column space of B.

The dimension of the row space of B (the rank of B) is also 1, and the second row (1, 2) is a basis for the row space of B. Note that this is the same as the basis for the row space of A so that the row spaces of A and B are identical.

We can transform the matrix B to echelon form simply by exchanging the first and second rows:

\begin{bmatrix} 0&0 \\ 1&2 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&2 \\ 0&0 \end{bmatrix}

The resulting echelon matrix has one pivot, with x_1 a basic variable and x_2 a free variable. Setting x_2 = 1 we have from the first row of the echelon matrix x_1 + 2x_2 = x_1 + 2 \cdot 1 = 0 or x_1 = -2. The vector (-2, 1) is thus a basis for the nullspace of B. This is the same as the basis for the nullspace of A so that the nullspaces of A and B are identical, like the row spaces.

The left nullspace of B is the nullspace of

B^T = \begin{bmatrix} 0&1 \\ 0&2 \end{bmatrix}

We can reduce B^T to echelon form by subtracting 2 times the first row from the second row:

\begin{bmatrix} 0&1 \\ 0&2 \end{bmatrix} \Rightarrow \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix}

The resulting echelon matrix has one pivot, with x_2 a basic variable and x_1 a free variable. Setting x_1 = 1 we have from the first row of the echelon matrix x_2 = 0. The vector (1, 0) is thus a basis for the nullspace of B^T and for the left nullspace of B.

The matrix C is already in echelon form, with pivots in the first and second columns. It has rank 2 and the first and second columns (1, 0) and (1, 1) respectively form a basis for the column space. Note that since the second column is equal to the first column plus the fourth column, an alternate basis for the column space consists of the first and fourth columns (1, 0) and (0, 1) respectively.

The dimension of the row space of C (the rank of C) is also 2, and the two rows (1, 1, 0, 0) and (0, 1, 0, 1) of C are a basis for the row space of C.

In the system Cx = 0 we have x_1 and x_2 as basic variables and x_3 and x_4 as free variables. If we set x_3 = 1 and x_4 = 0 then from the second row of C we have x_2 + x_4 = x_2 + 0 = 0 or x_2 = 0. From the first row of C we then have x_1 + x_2 = x_1 + 0 = 0 or x_1 = 0. Thus one solution to Cx = 0 is (0, 0, 1, 0).

If we set x_3 = 0 and x_4 = 1 then from the second row of C we have x_2 + x_4 = x_2 + 1 = 0 or x_2 = -1. From the first row of C we then have x_1 + x_2 = x_1 - 1 = 0 or x_1 = 1. Thus a second solution to Cx = 0 is (1, -1, 0, 1). The two solutions (0, 0, 1, 0) and (1, -1, 0, 1) are a basis for the nullspace of C.

Finally, since C has rank r = 2 and has m = 2 rows, the dimension of the left nullspace is m - r = 2 - 2 = 0. This means that the only vector in the left nullspace is the zero vector (0, 0) in \mathbb{R}^2.
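Finally, here is a NumPy spot-check (my own addition) of the nullspace and left nullspace bases found above:

```python
import numpy as np

A = np.array([[1, 2], [3, 6]])
B = np.array([[0, 0], [1, 2]])
C = np.array([[1, 1, 0, 0], [0, 1, 0, 1]])

assert np.allclose(A @ np.array([-2, 1]), 0)   # basis for N(A)
assert np.allclose(np.array([-3, 1]) @ A, 0)   # basis for the left nullspace of A
assert np.allclose(B @ np.array([-2, 1]), 0)   # basis for N(B)
assert np.allclose(np.array([1, 0]) @ B, 0)    # basis for the left nullspace of B
for x in ([0, 0, 1, 0], [1, -1, 0, 1]):
    assert np.allclose(C @ np.array(x), 0)     # basis for N(C)
print(np.linalg.matrix_rank(C))  # 2 = m, so the left nullspace of C is {0}
```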
