Linear Algebra and Its Applications, Exercise 2.3.5

Exercise 2.3.5. If we test vectors for independence by putting them into the rows of a matrix (rather than the columns), how can we determine whether or not the rows are independent using the elimination process? Determine the independence of the vectors in exercise 2.3.1 using this method.

Answer: My goal in this post is to (as much as possible) answer the question using the material introduced in section 2.3 and prior sections of the book. The material in section 2.4, in particular regarding the relationship of the row space to the column space, can simplify the answer, or at least make it more rigorous.

Assume that we have a set of m vectors, each of which has n elements; the vectors are therefore contained in the vector space \mathbf{R}^n. We create an m by n matrix whose rows are the vectors in question.

The first thing to note is that if m > n then the row vectors must be linearly dependent, since we cannot have more than n linearly independent vectors in \mathbf{R}^n (see page 88).

The next thing to note is that if any of the rows of the matrix is initially zero then again the row vectors must be linearly dependent, since we can multiply that row by a nonzero weight and all other rows by zero to produce a nontrivial linear combination of the rows that is equal to the zero vector.

If the number of rows m \le n and none of the initial rows is zero then we proceed to perform Gaussian elimination on the matrix. We can assume that we do not need to do row exchanges during elimination, since exchanging rows does not affect whether or not the row vectors are linearly independent, and if row exchanges are indeed required then they can be done prior to elimination, i.e., by multiplying by a permutation matrix.

Suppose at some point in the elimination process row i becomes a row of all zero values. Each elimination step replaces row i by row i minus a multiple of another row, so the zero row is a linear combination of the original row vectors in which the weight on the original row i is exactly 1. (The weights on the other rows are built from the multipliers used along the way, but whatever those weights turn out to be, the weight on row i itself never changes from 1.) Again we have a nontrivial linear combination of rows equaling the zero vector, and thus the rows are linearly dependent.

On the other hand, suppose elimination completes and produces an echelon matrix with no zero rows, i.e., every row has a pivot. The row vectors in this matrix are then linearly independent, as we can see by looking at the pivots:

Start with the pivot in the first row of the echelon matrix. Since this value is a pivot there are zeros below it, and since this is the first row the pivot is the only nonzero value in its column in any of the rows. Any linear combination of the row vectors that equals zero must therefore multiply the first row by zero. Similarly, the pivot in the second row has zeros below it, and since the first row was multiplied by zero the second-row pivot is now the only nonzero value in its column that can contribute to the linear combination. For a linear combination of the rows to equal zero we must therefore also multiply the second row by zero.

Since every row of the echelon matrix has a pivot, looking at each row in turn we can see that for a linear combination of the rows to equal zero we must multiply each row of the echelon matrix by zero. This implies that the rows of the echelon matrix are linearly independent.

What about the rows of the original matrix? If the rows of the echelon matrix are independent, are the original rows independent as well? The answer is yes, as can be seen by looking at the spaces spanned by the rows in each case:

The initial set of row vectors spans a vector space; since each row vector has n elements this vector space is a subspace of \mathbf{R}^n or (in some cases) is \mathbf{R}^n itself. If the initial set of row vectors is linearly independent then they are a basis for the space; otherwise (i.e., if the initial set of row vectors is linearly dependent) the basis for the space is a smaller set, with some of the row vectors being linear combinations of the others, and hence not part of the basis.

Performing Gaussian elimination on the matrix takes the initial set of row vectors and at each step replaces one row with a linear combination of that row and another row. More specifically, a given elimination step takes a row vector r_i and replaces it with a new row vector r'_i = r_i - l_{ij}r_j where j < i and l_{ij} is the multiplier used for that step.

Although each elimination step produces a new set of row vectors (since one row is replaced with a new one as described above), that new set spans the exact same space as the original set of rows. To see this, suppose v is a vector in the space spanned by r_1 through r_m so that v can be expressed as a linear combination of the row vectors:

v = c_1r_1 + \cdots + c_jr_j + \cdots + c_ir_i + \cdots + c_mr_m

Now suppose we replace r_i with a new row vector r'_i = r_i - l_{ij}r_j in a given elimination step. We can then form the linear combination

c_1r_1 + \cdots + (c_j + c_il_{ij})r_j + \cdots + c_ir'_i + \cdots + c_mr_m

= c_1r_1 + \cdots + c_jr_j + c_il_{ij}r_j + \cdots + c_i(r_i - l_{ij}r_j) + \cdots + c_mr_m

= c_1r_1 + \cdots + c_jr_j + c_il_{ij}r_j + \cdots + c_ir_i - c_il_{ij}r_j + \cdots + c_mr_m

= c_1r_1 + \cdots + c_jr_j + \cdots + c_ir_i + \cdots + c_mr_m = v

Since v can be expressed as a linear combination of the vectors r_1, \dotsc, r'_i, \dotsc, r_m, any vector in the space spanned by r_1, \dotsc, r_i, \dotsc, r_m is also in the space spanned by r_1, \dotsc, r'_i, \dotsc, r_m.

Now consider a vector v' in the space spanned by r_1, \dotsc, r'_i, \dotsc, r_m, so that we have

v' = c'_1r_1 + \cdots + c'_jr_j + \cdots + c'_ir'_i + \cdots + c'_mr_m

In reversing the elimination step we replace r'_i with r_i = r'_i + l_{ij}r_j. We can then form the linear combination

v' = c'_1r_1 + \cdots + (c'_j - c'_il_{ij})r_j + \cdots + c'_ir_i + \cdots + c'_mr_m

= c'_1r_1 + \cdots + c'_jr_j - c'_il_{ij}r_j + \cdots + c'_i(r'_i + l_{ij}r_j) + \cdots + c'_mr_m

= c'_1r_1 + \cdots + c'_jr_j - c'_il_{ij}r_j + \cdots + c'_ir'_i + c'_il_{ij}r_j + \cdots + c'_mr_m

= c'_1r_1 + \cdots + c'_jr_j + \cdots + c'_ir'_i + \cdots + c'_mr_m = v'

Since v' can be expressed as a linear combination of the vectors r_1, \dotsc, r_i, \dotsc, r_m, any vector in the space spanned by r_1, \dotsc, r'_i, \dotsc, r_m is also in the space spanned by r_1, \dotsc, r_i, \dotsc, r_m.

Combined with the previous result we thus conclude that each elimination step does not change the space spanned by the row vectors. The space spanned by the initial set of row vectors is thus exactly the same as the space spanned by the row vectors of the echelon matrix produced by elimination.

If the m rows of the echelon matrix are linearly independent (as discussed above) then they form a basis for the space they span. But the initial row vectors span the exact same space, and there are m such vectors. If that set of vectors were linearly dependent then some smaller set of p vectors (with p < m) would be a basis for the space. But since any two bases for the same space must contain the same number of vectors, and the set of m rows of the echelon matrix is a basis, we must have p = m, a contradiction.

We conclude that if the set of m rows of the echelon matrix is a basis for the space it spans then the initial set of row vectors is also a basis for the same space and is thus linearly independent.
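
To make the procedure concrete, here is a small sketch in Python (using NumPy, which of course is not part of the book) of the row-based independence test described above; the function name rows_independent and the floating-point tolerance are my own choices, and exact arithmetic would avoid the tolerance entirely.

```python
import numpy as np

def rows_independent(rows, tol=1e-12):
    """Sketch of the test described above: run forward elimination on the
    rows and report dependence as soon as a zero row appears (or if there
    are more rows than columns to begin with)."""
    A = np.array(rows, dtype=float)
    m, n = A.shape
    if m > n:                      # more than n vectors in R^n: dependent
        return False
    pivot_col = 0
    for i in range(m):
        # find a column with a nonzero entry at or below row i,
        # exchanging rows if necessary
        while pivot_col < n:
            nonzero = [r for r in range(i, m) if abs(A[r, pivot_col]) > tol]
            if nonzero:
                A[[i, nonzero[0]]] = A[[nonzero[0], i]]   # row exchange
                break
            pivot_col += 1
        if pivot_col == n:         # rows i and below are all zero
            return False           # a zero row appeared: dependent
        # eliminate the entries below the pivot
        for r in range(i + 1, m):
            A[r] -= (A[r, pivot_col] / A[i, pivot_col]) * A[i]
        pivot_col += 1
    return True                    # every row kept a pivot: independent
```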

We now apply the above discussion to the vectors of exercise 2.3.1:

v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad v_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} \qquad v_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix} \qquad v_4 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \end{bmatrix}

We form a matrix containing as rows the vectors above, and then perform Gaussian elimination on this matrix:

\begin{bmatrix} 1&1&0&0 \\ 1&0&1&0 \\ 0&0&1&1 \\ 0&1&0&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&0&0 \\ 0&-1&1&0 \\ 0&0&1&1 \\ 0&1&0&1 \end{bmatrix}

\Rightarrow \begin{bmatrix} 1&1&0&0 \\ 0&-1&1&0 \\ 0&0&1&1 \\ 0&0&1&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&0&0 \\ 0&-1&1&0 \\ 0&0&1&1 \\ 0&0&0&0 \end{bmatrix}

Since the echelon matrix produced by elimination has a row of zeros its row vectors are linearly dependent, and thus the initial set of row vectors is linearly dependent as well.
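
As a quick numerical confirmation (my own addition, not part of the exercise itself), NumPy's rank function agrees that only three of the four rows are independent:

```python
import numpy as np

# the rows are v1, v2, v3, v4 from exercise 2.3.1
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 0, 1]])

print(np.linalg.matrix_rank(A))   # prints 3, matching the three pivots above
```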

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.3.4

Exercise 2.3.4. Suppose the vectors v_1, v_2, and v_3 are linearly independent. (The book says linearly dependent, but I believe this is a typo.) Are the vectors w_1 = v_1 + v_2, w_2 = v_1 + v_3, and w_3 = v_2 + v_3 also linearly independent?

Answer: Consider the linear combination of w_1, w_2, and w_3 with weights c_1, c_2, and c_3. We have

c_1w_1 + c_2w_2 + c_3w_3 = c_1(v_1 + v_2) + c_2(v_1 + v_3) + c_3(v_2 + v_3)

= (c_1+c_2)v_1 + (c_1+c_3)v_2 + (c_2+c_3)v_3

Since the vectors v_1, v_2, and v_3 are linearly independent the above expression can be zero only if c_1 + c_2 = 0, c_1 + c_3 = 0, and c_2 + c_3 = 0.

We can express this as the following system of equations

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcl}c_1&+&c_2&&&=&0 \\ c_1&&&+&c_3&=&0 \\ &&c_2&+&c_3&=&0 \end{array}

and can solve it via elimination. We first subtract the first equation from the second to obtain the following system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcl}c_1&+&c_2&&&=&0 \\ &&-c_2&+&c_3&=&0 \\ &&c_2&+&c_3&=&0 \end{array}

and then add the second equation to the third to obtain the final system:

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcl}c_1&+&c_2&&&=&0 \\ &&-c_2&+&c_3&=&0 \\ &&&&2c_3&=&0 \end{array}

Solving for the weights we have 2c_3 = 0 or c_3 = 0, -c_2 + 0 = 0 or c_2 = 0, and c_1 + 0 = 0 or c_1 = 0.

Since c_1w_1 + c_2w_2 + c_3w_3 = 0 only if c_1 = c_2 = c_3 = 0 we see that w_1, w_2, and w_3 are linearly independent.
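
As a sanity check (my own addition, using NumPy and a concrete choice of v_1, v_2, v_3, namely the standard basis of \mathbf{R}^3), the matrix whose columns are w_1, w_2, and w_3 has rank 3, so the only solution really is c_1 = c_2 = c_3 = 0:

```python
import numpy as np

v1, v2, v3 = np.eye(3)            # a concrete linearly independent triple
W = np.column_stack([v1 + v2, v1 + v3, v2 + v3])

# rank 3 means c1*w1 + c2*w2 + c3*w3 = 0 forces c1 = c2 = c3 = 0
print(np.linalg.matrix_rank(W))   # prints 3
```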

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.3.3

Exercise 2.3.3. Given the general triangular matrix

T = \begin{bmatrix} a&b&c \\ 0&d&e \\ 0&0&f \end{bmatrix}

show that the rows of T are linearly dependent if any of the diagonal entries a, d, or f is zero.

Answer: We can test for linear independence of the rows by doing elimination on a second matrix A whose columns are the rows of T (taken in reverse order, which is convenient for elimination):

A = \begin{bmatrix} 0&0&a \\ 0&d&b \\ f&e&c \end{bmatrix}

We start by doing a row exchange to exchange the first row with the third:

\begin{bmatrix} 0&0&a \\ 0&d&b \\ f&e&c \end{bmatrix} \Rightarrow \begin{bmatrix} f&e&c \\ 0&d&b \\ 0&0&a \end{bmatrix}

If a = 0 we have the following matrix:

\begin{bmatrix} f&e&c \\ 0&d&b \\ 0&0&0 \end{bmatrix}

This matrix has at most two pivots, and might have only one or even none depending on the values of the other entries, so that the rank r < n. So if a = 0 the columns of A, and thus the rows of T, are linearly dependent.

If d = 0 we have the following matrix:

\begin{bmatrix} f&e&c \\ 0&0&b \\ 0&0&a \end{bmatrix}

Elimination would produce a matrix with two pivots at most, so if d = 0 we again have r < n and the columns of A, and thus the rows of T, are linearly dependent.

Finally, if f = 0 we have the following matrix:

\begin{bmatrix} 0&e&c \\ 0&d&b \\ 0&0&a \end{bmatrix}

Again elimination would produce a matrix with two pivots at most, so if f = 0 we again have r < n and the columns of A, and thus the rows of T, are linearly dependent.

We conclude that if any of a, d, or f are zero then the rows of T are guaranteed to be linearly dependent.
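
A quick numerical illustration (with arbitrary sample values for b, c, and e, chosen only for this check) shows the rank of T dropping below 3 whenever one of the diagonal entries is zero:

```python
import numpy as np

def T(a, d, f, b=2.0, c=3.0, e=5.0):
    """The triangular matrix from the exercise; b, c, e are arbitrary
    sample values for the off-diagonal entries."""
    return np.array([[a, b, c],
                     [0, d, e],
                     [0, 0, f]])

for a, d, f in [(0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    # rank less than 3 means the rows of T are linearly dependent
    print(np.linalg.matrix_rank(T(a, d, f)))   # prints 2 in each of these cases
```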

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.3.2

Exercise 2.3.2. State whether the following sets of vectors are linearly independent or not

(a) (1, 1, 2), (1, 2, 1), and (3, 1, 1)

(b) given any four vectors v_1, v_2, v_3, and v_4, the vectors v_1-v_2, v_2-v_3, v_3-v_4, and v_4-v_1

(c) for any x, y, and z, the vectors (1, 1, 0), (1, 0, 0), (0, 1, 1), and (x, y, z)

Answer: (a) We test for linear independence by doing elimination on the matrix with columns consisting of the three vectors in question:

\begin{bmatrix} 1&1&3 \\ 1&2&1 \\ 2&1&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&3 \\ 0&1&-2 \\ 0&-1&-5 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&3 \\ 0&1&-2 \\ 0&0&-7 \end{bmatrix}

Since the resulting matrix has three pivots and thus rank r = 3 = n the columns (and thus the vectors in question) are linearly independent.

(b) If we simply add the four vectors (which corresponds to a linear combination with c_1 = c_2 = c_3 = c_4 = 1) we have

(v_1-v_2) + (v_2-v_3) + (v_3-v_4) + (v_4-v_1)

= v_1 - v_2 + v_2 - v_3 + v_3 - v_4 + v_4 - v_1 = 0

So the four vectors in question are always linearly dependent no matter whether the original four vectors are linearly independent or not.

(c) From theorem 2G on page 82 we see that any set of four vectors in \mathbf{R}^3 must be linearly dependent, so the four vectors in question are always linearly dependent for any values of x, y, and z.
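
A brief numerical check of parts (a) and (b), which is my own addition rather than part of the book's exercise:

```python
import numpy as np

# (a) the three vectors as columns: rank 3 means they are independent
A = np.array([[1, 1, 3],
              [1, 2, 1],
              [2, 1, 1]])
print(np.linalg.matrix_rank(A))               # prints 3

# (b) for any four vectors the four differences sum to zero, so the
#     weights (1, 1, 1, 1) always give a dependence
rng = np.random.default_rng(0)
v1, v2, v3, v4 = rng.standard_normal((4, 5))  # four arbitrary vectors
print(np.allclose((v1 - v2) + (v2 - v3) + (v3 - v4) + (v4 - v1), 0))  # True
```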

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.3.1

Exercise 2.3.1. State whether the following vectors are linearly independent or not

v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad v_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} \qquad v_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix} \qquad v_4 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \end{bmatrix}

by solving the equation c_1v_1 + c_2v_2 + c_3v_3 + c_4v_4 = 0. Also, solve c_1v_1 + \cdots + c_4v_4 = (0, 0, 0, 1) to determine whether the vectors span \mathbf{R}^4.

Answer: The equation c_1v_1 + c_2v_2 + c_3v_3 + c_4v_4 = 0 is equivalent to

\begin{bmatrix} 1&1&0&0 \\ 1&0&0&1 \\ 0&1&1&0 \\ 0&0&1&1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}

We can use elimination to solve this system

\begin{bmatrix} 1&1&0&0 \\ 1&0&0&1 \\ 0&1&1&0 \\ 0&0&1&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&0&0 \\ 0&-1&0&1 \\ 0&1&1&0 \\ 0&0&1&1 \end{bmatrix}

\Rightarrow \begin{bmatrix} 1&1&0&0 \\ 0&-1&0&1 \\ 0&0&1&1 \\ 0&0&1&1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1&1&0&0 \\ 0&-1&0&1 \\ 0&0&1&1 \\ 0&0&0&0 \end{bmatrix}

producing the following system Ux = c

\begin{bmatrix} 1&1&0&0 \\ 0&-1&0&1 \\ 0&0&1&1 \\ 0&0&0&0 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}

Note that c_4 is a free variable and c_1, c_2, and c_3 are basic. Setting the free variable c_4 = 1, from the third equation we have c_3 + c_4 = 0 or c_3 = -c_4 = -1. From the second equation we have -c_2 + c_4 = 0 or c_2 = c_4 = 1. Finally, from the first equation we have c_1 + c_2 = 0 or c_1 = -c_2 = -1. This means that we have -v_1 + v_2 - v_3 + v_4 = 0 and thus the vectors are not linearly independent. More specifically, we can express v_4 as a linear combination of the other vectors: v_4 = v_1 - v_2 + v_3.

We next attempt to solve c_1v_1 + \cdots + c_4v_4 = (0, 0, 0, 1). As before we can use elimination to obtain the system Ux = c

\begin{bmatrix} 1&1&0&0 \\ 0&-1&0&1 \\ 0&0&1&1 \\ 0&0&0&0 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}

In this case the fourth equation produces the contradictory result 0 = 1. Thus this system has no solution, and we conclude that the vectors in question do not span \mathbf{R}^4.
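
Both conclusions can be verified numerically (a sketch using NumPy, not part of the original solution): the weights found above do produce the zero vector, and a least-squares solve of the second system leaves a nonzero residual, confirming that (0, 0, 0, 1) is not in the span of the four vectors.

```python
import numpy as np

# columns are v1, v2, v3, v4
A = np.array([[1, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)

c = np.array([-1, 1, -1, 1], dtype=float)
print(np.allclose(A @ c, 0))                 # True: -v1 + v2 - v3 + v4 = 0

b = np.array([0, 0, 0, 1], dtype=float)
c_ls, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(rank)                                  # 3: only three independent columns
print(np.linalg.norm(A @ c_ls - b) > 1e-10)  # True: no exact solution exists
```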

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.2.14

Exercise 2.2.14. Create a 2 by 2 system of equations that has many homogeneous solutions but no particular solution.

Answer: A trivial example of such a system is Ax = b where A = 0 and b \ne 0; for example

\begin{bmatrix} 0&0 \\ 0&0 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}

The corresponding homogeneous system

\begin{bmatrix} 0&0 \\ 0&0 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

has any vector in \mathbf{R}^2 as a solution, but the general system has no particular solution since 0x = 0 \ne b for any x.

For a less trivial example we can have both rows of A equal to each other, but have the corresponding elements of b not be equal to each other. For example, consider the system

\begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}

The corresponding homogeneous system

\begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

has infinitely many solutions of the form

x_{homogeneous} = v \begin{bmatrix} -1 \\ 1 \end{bmatrix}

but there is no particular solution to the general system: Elimination produces the following system

Ux = \begin{bmatrix} 1&1 \\ 0&0 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} = c

and the second row produces the contradictory equation 0 = 1.
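
A short numerical check of the second example (again my own sketch):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.0, 1.0])

# every multiple of (-1, 1) solves the homogeneous system A x = 0 ...
x_h = 3.7 * np.array([-1.0, 1.0])
print(np.allclose(A @ x_h, 0))             # True

# ... but even the best least-squares x leaves a nonzero residual for A x = b,
# so the general system has no particular solution
x_ls, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(A @ x_ls - b))        # about 0.707, never 0
```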

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.2.13

Exercise 2.2.13. What is a 3 by 3 system of equations Ax = b that has the following general solution (the same as in exercise 2.2.12)

x = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}

and that has no solution if b_1 + b_2 \ne b_3?

Answer: As in exercise 2.2.12, the general solution above is the sum of a particular solution and a homogeneous solution, where

x_{particular} = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}

and

x_{homogeneous} = w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}

Also as in exercise 2.2.12, since w is the only variable referenced in the homogeneous solution it must be the only free variable, with u and v being basic. Since u is basic we must have a pivot in column 1, and since v is basic we must have a second pivot in column 2. After performing elimination on A the resulting echelon matrix U must therefore have the form

U = \begin{bmatrix} *&*&* \\ 0&*&* \\ 0&0&0 \end{bmatrix}

Unlike in exercise 2.2.12 we cannot simply assume that A = U because we need to account for the particular condition b_1 + b_2 = b_3 that is required for the system Ax = b to have a solution. This condition arises from the particular sequence of elimination steps that takes the system Ax = b and produces the equivalent system Ux = c with c resulting from the elimination operations applied to b. In particular, we assume that the system Ux = c will look as follows:

\begin{bmatrix} *&*&* \\ 0&*&* \\ 0&0&0 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 - b_1 - b_2 \end{bmatrix}

The third row produces the equation 0 = b_3 - b_1 - b_2 or b_1 + b_2 = b_3. This corresponds to the assumption that there is no solution if b_1 + b_2 \ne b_3.

We can’t assume that A = U but we can assume that the first two rows of A are the same as the first two rows of U. That removes the need to do elimination for the second row of A and corresponds to our assumption that the second element of c is the same as the second element of b (namely b_2). We then have to do elimination operations only for the third row, and those operations will produce the third element of c as listed above.

As stated above the general solution to Ax = b for this exercise is the same as the general solution in exercise 2.2.12, so the homogeneous solution is the same as well. Since we are assuming that the first two rows of A are the same as the first two rows of U we have

Ux_{homogeneous} = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ 0&a_{22}&a_{23} \\ 0&0&0 \end{bmatrix} w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = 0

Note that the first two rows produce the same equations as in exercise 2.2.12. The third row is simply equivalent to the equation 0 = 0 and so adds no new information. We can therefore follow the same process as in exercise 2.2.12 to find values for the first two rows of U and A:

U = \begin{bmatrix} 1&-1&1 \\ 0&1&-2 \\ 0&0&0 \end{bmatrix} \qquad A = \begin{bmatrix} 1&-1&1 \\ 0&1&-2 \\ *&*&* \end{bmatrix}

so that we have

Ux_{homogeneous} = \begin{bmatrix} 1&-1&1 \\ 0&1&-2 \\ 0&0&0 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}

We now turn to determining the last row of A:

A = \begin{bmatrix} 1&-1&1 \\ 0&1&-2 \\ a_{31}&a_{32}&a_{33} \end{bmatrix}

In order to produce b_3 - b_1 - b_2 as the last element of c on the right-hand side (i.e., of Ux = c) we must subtract the first row from the third row on the left-hand side to produce a zero as the first element of the third row, and then subsequently subtract the second row from the third row to produce a zero as the second element of the third row. That first operation means we must have a_{31} - 1 = 0 or a_{31} = 1. The second operation means we must have a_{32} - (-1) - 1 = 0 or a_{32} = 0 as well as a_{33} - 1 - (-2) = 0 or a_{33} = -1. The proposed value for A is then

A = \begin{bmatrix} 1&-1&1 \\ 0&1&-2 \\ 1&0&-1 \end{bmatrix}

We next turn to the general system Ax = b. We now have a value for A, and we were given the value of the particular solution. We can multiply the two to calculate the value of b:

b = Ax_{particular} = \begin{bmatrix} 1&-1&1 \\ 0&1&-2 \\ 1&0&-1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}

This gives us the following as an example 3 by 3 system that has the general solution specified above:

\begin{bmatrix} 1&-1&1 \\ 0&1&-2 \\ 1&0&-1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}

or

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcl}u&-&v&+&w&=&0 \\ &&v&-&2w&=&1 \\ u&&&-&w&=&1 \end{array}

Note that we have b_3 = b_1 + b_2. If this were not the case then the system would have no solution.
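
Checking the construction numerically (a sketch with NumPy): every member of the given solution family solves Ax = b, and b itself satisfies b_1 + b_2 = b_3.

```python
import numpy as np

A = np.array([[1, -1,  1],
              [0,  1, -2],
              [1,  0, -1]], dtype=float)
b = np.array([0, 1, 1], dtype=float)

x_part = np.array([1, 1, 0], dtype=float)
x_hom  = np.array([1, 2, 1], dtype=float)

# every member of the general solution x_part + w * x_hom solves A x = b
for w in (-2.0, 0.0, 5.0):
    print(np.allclose(A @ (x_part + w * x_hom), b))   # True each time

print(b[0] + b[1] == b[2])                            # True: b1 + b2 = b3
```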

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.2.12

Exercise 2.2.12. What is a 2 by 3 system of equations Ax = b that has the following general solution?

x = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}

Answer: The general solution above is the sum of a particular solution and a homogeneous solution, where

x_{particular} = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}

and

x_{homogeneous} = w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}

Since w is the only variable referenced in the homogeneous solution it must be the only free variable, with u and v being basic. Since u is basic we must have a pivot in column 1, and since v is basic we must have a second pivot in column 2. After performing elimination on A the resulting echelon matrix U must therefore have the form

U = \begin{bmatrix} *&*&* \\ 0&*&* \end{bmatrix}

To simplify solving the problem we can assume that A also has this form; in other words, we assume that A is already in echelon form and thus we don’t need to carry out elimination. The matrix A then has the form

A = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ 0&a_{22}&a_{23} \end{bmatrix}

where a_{11} and a_{22} are nonzero (because they are pivots).

We then have

Ax_{homogeneous} = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ 0&a_{22}&a_{23} \end{bmatrix} w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = 0

If we assume that w is 1 and express the right-hand side in matrix form this then becomes

\begin{bmatrix} a_{11}&a_{12}&a_{13} \\ 0&a_{22}&a_{23} \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

or (expressed as a system of equations)

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcl}a_{11}&+&2a_{12}&+&a_{13}&=&0 \\ &&2a_{22}&+&a_{23}&=&0 \end{array}

The pivot a_{11} must be nonzero, and we arbitrarily assume that a_{11} = 1. We can then satisfy the first equation by assigning a_{12} = 0 and a_{13} = -1. The pivot a_{22} must also be nonzero, and we arbitrarily assume that a_{22} = 1 as well. We can then satisfy the second equation by assigning a_{23} = -2. Our proposed value of A is then

A = \begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix}

so that we have

Ax_{homogeneous} = \begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

as required.

We next turn to the general system Ax = b. We now have a value for A, and we were given the value of the particular solution. We can multiply the two to calculate the value of b:

b = Ax_{particular} = \begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}

This gives us the following as an example 2 by 3 system that has the general solution specified above:

\begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}

or

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcl}u&&&-&w&=&1 \\ &&v&-&2w&=&1 \end{array}
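
A quick check of the constructed system (my own sketch): the entire family x_{particular} + w x_{homogeneous} satisfies Ax = b.

```python
import numpy as np

A = np.array([[1, 0, -1],
              [0, 1, -2]], dtype=float)
b = np.array([1, 1], dtype=float)

x_part = np.array([1, 1, 0], dtype=float)
x_hom  = np.array([1, 2, 1], dtype=float)

for w in (-1.0, 0.0, 2.5):
    # each member of the general solution solves the 2 by 3 system
    print(np.allclose(A @ (x_part + w * x_hom), b))   # True each time
```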

Finally, note that the solution provided for exercise 2.2.12 at the end of the book is incorrect. The right-hand side must be a 2 by 1 matrix and not a 3 by 1 matrix, so the final value of 0 in the right-hand side should not be present.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.2.11

Exercise 2.2.11. Let Ax = 0 be a system of m equations in n unknowns, and suppose that the only solution to this system is x = 0. In this case what is the rank of A?

Answer: If the only solution to the system Ax = 0 is x = 0 then the system can have no free variables, since a free variable could be set to a nonzero value to produce a nonzero solution. If the system has no free variables then all variables must be basic variables. But the number of basic variables is the same as the number of pivots, and if all variables are basic then there must be a pivot in every column. The number of pivots is then equal to the number of columns n and we have the rank r = n.
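
As a numerical illustration (my own example, not from the book), a matrix with a pivot in every column has rank n, and its nullspace contains only the zero vector:

```python
import numpy as np

# a 4-by-3 example (m = 4 equations, n = 3 unknowns) with full column rank
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 1]], dtype=float)

r = np.linalg.matrix_rank(A)
print(r, r == A.shape[1])     # 3 True: rank r equals n

# the nullspace has dimension n - r = 0, so x = 0 is the only solution
# of A x = 0; all three singular values are nonzero
_, s, _ = np.linalg.svd(A)
print(np.sum(s > 1e-12))      # 3
```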

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.2.10

Exercise 2.2.10. (a) Find all possible solutions to the following system:

Ux = \begin{bmatrix} 1&2&3&4 \\ 0&0&1&2 \\ 0&0&0&0 \end{bmatrix} \begin{bmatrix} x_1 \\x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}

(b) What are the solutions if the right side is replaced with (a, b, 0)?

Answer: (a) Since the pivots of U are in columns 1 and 3, the basic variables are x_1 and x_3 and the free variables are x_2 and x_4.

From the second equation we have x_3 + 2x_4 = 0 or x_3 = -2x_4. Substituting the value of x_3 into the first equation we have x_1 + 2x_2 - 6x_4 + 4x_4 = 0 or x_1 = -2x_2 + 2x_4. The solution to Ux = 0 can then be expressed in terms of the free variables x_2 and x_4 as follows:

x_{homogeneous} = x_2 \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} 2 \\ 0 \\ -2 \\ 1 \end{bmatrix}

(b) Replacing the right side with (a, b, 0) produces the following system:

Ux = \begin{bmatrix} 1&2&3&4 \\ 0&0&1&2 \\ 0&0&0&0 \end{bmatrix} \begin{bmatrix} x_1 \\x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} a \\ b \\ 0 \end{bmatrix}

From the second equation we have x_3 + 2x_4 = b or x_3 = b - 2x_4. Substituting for x_3 into the first equation we have x_1 + 2x_2 + 3(b - 2x_4) + 4x_4 = a or x_1 = a - 3b - 2x_2 + 2x_4. Setting the free variables x_2 and x_4 to zero gives the particular solution

x_{particular} = \begin{bmatrix} a - 3b \\ 0 \\ b \\ 0 \end{bmatrix}

The general solution is then

x = x_{particular} + x_{homogeneous}

= \begin{bmatrix} a-3b \\ 0 \\ b \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} 2 \\ 0 \\ -2 \\ 1 \end{bmatrix}
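
A short numerical check of the general solution for sample values of a, b, and the free variables (the specific numbers are my own choices):

```python
import numpy as np

U = np.array([[1, 2, 3, 4],
              [0, 0, 1, 2],
              [0, 0, 0, 0]], dtype=float)

a, b = 5.0, -1.0                       # sample right-hand side values
rhs = np.array([a, b, 0.0])

x_part = np.array([a - 3*b, 0.0, b, 0.0])
for x2, x4 in [(0.0, 0.0), (1.0, -2.0), (3.0, 0.5)]:
    x = x_part + x2 * np.array([-2, 1, 0, 0]) + x4 * np.array([2, 0, -2, 1])
    print(np.allclose(U @ x, rhs))     # True each time
```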

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
