Linear Algebra and Its Applications, Exercise 2.5.11

Exercise 2.5.11. Given the incidence matrix from exercise 2.5.10 with the final column removed and a diagonal matrix C with diagonal elements 1, 2, 2, and 1, write the equations for the system

\begin{array}{rcrcr} C^{-1}y&+&Ax&=&0 \\ A^Ty&&&=&f \end{array}

Show that eliminating y gives the system A^TCAx = -f and solve the system for f = (1, 1, 6). If f = (1, 1, 6) represents the currents entering nodes 1, 2, and 3 of the graph, calculate the potentials at each node and the currents on each edge.

Answer: Removing the final column from the matrix in exercise 2.5.10 gives us the new matrix

A = \begin{bmatrix} -1&1&0 \\ -1&0&1 \\ 0&1&0 \\ 0&0&-1 \end{bmatrix}

The diagonal matrix

C = \begin{bmatrix} 1&0&0&0 \\ 0&2&0&0 \\ 0&0&2&0 \\ 0&0&0&1 \end{bmatrix}

has as its inverse

C^{-1} = \begin{bmatrix} 1&0&0&0 \\ 0&\frac{1}{2}&0&0 \\ 0&0&\frac{1}{2}&0 \\ 0&0&0&1 \end{bmatrix}

(Recall from Note 4 of section 1.6, “Inverses and Transposes”, that the inverse of a diagonal matrix D with nonzero entries on the diagonal is itself a diagonal matrix, with the entries on the diagonal of D^{-1} equal to the reciprocals of the corresponding entries on the diagonal of D.)

We can thus express the system C^{-1}y + Ax = 0 in matrix form as

\begin{bmatrix} 1&0&0&0 \\ 0&\frac{1}{2}&0&0 \\ 0&0&\frac{1}{2}&0 \\ 0&0&0&1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} + \begin{bmatrix} -1&1&0 \\ -1&0&1 \\ 0&1&0 \\ 0&0&-1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}

= \begin{bmatrix} y_1 \\ \frac{1}{2}y_2 \\ \frac{1}{2}y_3 \\ y_4 \end{bmatrix} + \begin{bmatrix} -x_1+x_2 \\ -x_1+x_3 \\ x_2 \\ -x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}

We can express the system A^Ty = f in matrix form as

\begin{bmatrix} -1&-1&0&0 \\ 1&0&1&0 \\ 0&1&0&-1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix}

= \begin{bmatrix} -y_1-y_2 \\ y_1+y_3 \\ y_2-y_4 \end{bmatrix} = \begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix}

This corresponds to the following system of equations:

\begin{array}{rcrcrcrcrcrcrcr} y_1&&&&&&&-&x_1&+&x_2&&&=&0 \\ &&\frac{1}{2}y_2&&&&&-&x_1&&&+&x_3&=&0 \\ &&&&\frac{1}{2}y_3&&&&&+&x_2&&&=&0 \\ &&&&&&y_4&&&&&-&x_3&=&0 \\ -y_1&-&y_2&&&&&&&&&&&=&f_1 \\ y_1&&&+&y_3&&&&&&&&&=&f_2 \\ &&y_2&&&-&y_4&&&&&&&=&f_3 \end{array}

Going back to the original two equations

\begin{array}{rcrcr} C^{-1}y&+&Ax&=&0 \\ A^Ty&&&=&f \end{array}

we can eliminate y by multiplying the first equation by A^TC to get A^TCC^{-1}y + A^TCAx = 0, or A^Ty + A^TCAx = 0, and then subtracting the second equation from this to get A^TCAx = -f.

We have

A^TCA = \begin{bmatrix} -1&-1&0&0 \\ 1&0&1&0 \\ 0&1&0&-1 \end{bmatrix} \begin{bmatrix} 1&0&0&0 \\ 0&2&0&0 \\ 0&0&2&0 \\ 0&0&0&1 \end{bmatrix} \begin{bmatrix} -1&1&0 \\ -1&0&1 \\ 0&1&0 \\ 0&0&-1 \end{bmatrix}

= \begin{bmatrix} -1&-2&0&0 \\ 1&0&2&0 \\ 0&2&0&-1 \end{bmatrix} \begin{bmatrix} -1&1&0 \\ -1&0&1 \\ 0&1&0 \\ 0&0&-1 \end{bmatrix} = \begin{bmatrix} 3&-1&-2 \\ -1&3&0 \\ -2&0&3 \end{bmatrix}
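As a quick sanity check (my own, not part of the exercise), the product A^TCA can be reproduced in a few lines of plain Python; the helper names matmul and transpose are just ad hoc:

```python
# Verify the product A^T C A for exercise 2.5.11 with plain Python
# (matrices as lists of rows; no external libraries needed).

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def transpose(X):
    return [list(col) for col in zip(*X)]

A = [[-1, 1, 0],
     [-1, 0, 1],
     [ 0, 1, 0],
     [ 0, 0, -1]]
C = [[1, 0, 0, 0],
     [0, 2, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 1]]

AtCA = matmul(matmul(transpose(A), C), A)
print(AtCA)  # [[3, -1, -2], [-1, 3, 0], [-2, 0, 3]]
```

The result agrees with the hand computation above.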

If f = (1, 1, 6) the system A^TCAx = -f can therefore be expressed in matrix form as

\begin{bmatrix} 3&-1&-2 \\ -1&3&0 \\ -2&0&3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \\ -6 \end{bmatrix}

or as the system of equations

\begin{array}{rcrcrcr} 3x_1&-&x_2&-&2x_3&=&-1 \\ -x_1&+&3x_2&&&=&-1 \\ -2x_1&&&+&3x_3&=&-6 \end{array}

We begin elimination by multiplying the first equation by -1/3 and subtracting it from the second equation, and multiplying the first equation by -2/3 and subtracting it from the third equation. The result is the following system:

\begin{array}{rcrcrcr} 3x_1&-&x_2&-&2x_3&=&-1 \\ &&\frac{8}{3}x_2&-&\frac{2}{3}x_3&=&-\frac{4}{3} \\ &&-\frac{2}{3}x_2&+&\frac{5}{3}x_3&=&-\frac{20}{3} \end{array}

We then multiply the second equation by -1/4 and subtract it from the third equation:

\begin{array}{rcrcrcr} 3x_1&-&x_2&-&2x_3&=&-1 \\ &&\frac{8}{3}x_2&-&\frac{2}{3}x_3&=&-\frac{4}{3} \\ &&&&\frac{3}{2}x_3&=&-7 \end{array}

Solving for x_3 we have x_3 = \frac{2}{3}(-7) = -\frac{14}{3}. Substituting the value of x_3 into the second equation we have \frac{8}{3}x_2 - \frac{2}{3}(-\frac{14}{3}) = -\frac{4}{3} or \frac{8}{3}x_2 = -\frac{4}{3} - \frac{28}{9} = -\frac{40}{9} so that x_2 = \frac{3}{8}(-\frac{40}{9}) = -\frac{5}{3}. Finally we substitute the values of x_2 and x_3 into the first equation to obtain 3x_1 -(-\frac{5}{3})-2(-\frac{14}{3}) = -1 or 3x_1 + 11 = -1 so that x_1 = \frac{1}{3}(-12) = -4.

The solution to the system A^TCAx = -f when f = (1, 1, 6) is thus x = (-4, -\frac{5}{3}, -\frac{14}{3}). If f = (1, 1, 6) represents the currents into each of nodes 1, 2, and 3 respectively then x = (-4, -\frac{5}{3}, -\frac{14}{3}) represents the potentials at each of those nodes.
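The elimination above can also be replayed mechanically. The following sketch (my own) runs Gaussian elimination over exact rationals using Python's fractions module, so the fractional pivots come out exactly:

```python
from fractions import Fraction

# Solve A^T C A x = -f exactly, using the matrix and right-hand side
# derived in the text.

M = [[Fraction(v) for v in row] for row in [[3, -1, -2], [-1, 3, 0], [-2, 0, 3]]]
b = [Fraction(-1), Fraction(-1), Fraction(-6)]

n = 3
# Forward elimination (no row exchanges needed here: the pivots are nonzero).
for k in range(n):
    for i in range(k + 1, n):
        m = M[i][k] / M[k][k]
        for j in range(k, n):
            M[i][j] -= m * M[k][j]
        b[i] -= m * b[k]

# Back substitution.
x = [Fraction(0)] * n
for i in reversed(range(n)):
    x[i] = (b[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]

print(x)  # [Fraction(-4, 1), Fraction(-5, 3), Fraction(-14, 3)]
```

The computed x matches the potentials (-4, -5/3, -14/3) found by hand.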

What are the currents along the edges? To determine that we must solve for y. Since C^{-1}y + Ax = 0 we have y = -CAx or

y = \begin{bmatrix} -1&0&0&0 \\ 0&-2&0&0 \\ 0&0&-2&0 \\ 0&0&0&-1 \end{bmatrix} \begin{bmatrix} -1&1&0 \\ -1&0&1 \\ 0&1&0 \\ 0&0&-1 \end{bmatrix} \begin{bmatrix} -4 \\ -\frac{5}{3} \\ -\frac{14}{3} \end{bmatrix}

= \begin{bmatrix} 1&-1&0 \\ 2&0&-2 \\ 0&-2&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} -4 \\ -\frac{5}{3} \\ -\frac{14}{3} \end{bmatrix} = \begin{bmatrix} -\frac{7}{3} \\ \frac{4}{3} \\ \frac{10}{3} \\ -\frac{14}{3} \end{bmatrix}

We can check this answer using the equation A^Ty = f

A^Ty = \begin{bmatrix} -1&-1&0&0 \\ 1&0&1&0 \\ 0&1&0&-1 \end{bmatrix} \begin{bmatrix} -\frac{7}{3} \\ \frac{4}{3} \\ \frac{10}{3} \\ -\frac{14}{3} \end{bmatrix}

= \begin{bmatrix} \frac{3}{3} \\ \frac{3}{3} \\ \frac{18}{3} \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 6 \end{bmatrix} = f

So y = (-\frac{7}{3}, \frac{4}{3}, \frac{10}{3}, -\frac{14}{3}) represents the currents flowing along edges 1, 2, 3, and 4 respectively.
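If you want to double-check the currents, the computation y = -CAx and the test A^Ty = f take only a few lines (again an informal sketch of mine, using exact rationals):

```python
from fractions import Fraction

# Recover the edge currents y = -C A x from the node potentials, then
# confirm that A^T y reproduces the current sources f = (1, 1, 6).

A = [[-1, 1, 0], [-1, 0, 1], [0, 1, 0], [0, 0, -1]]
c = [1, 2, 2, 1]                                   # diagonal of C
x = [Fraction(-4), Fraction(-5, 3), Fraction(-14, 3)]

# y_i = -c_i * (row i of A) . x
y = [-c[i] * sum(A[i][j] * x[j] for j in range(3)) for i in range(4)]
print(y)  # [Fraction(-7, 3), Fraction(4, 3), Fraction(10, 3), Fraction(-14, 3)]

# Check A^T y = f
f = [sum(A[i][j] * y[i] for i in range(4)) for j in range(3)]
print(f)  # [Fraction(1, 1), Fraction(1, 1), Fraction(6, 1)]
```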

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.

If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.


Linear Algebra and Its Applications, Exercise 2.5.10

Exercise 2.5.10. Given the incidence matrix

A = \begin{bmatrix} -1&1&0&0 \\ -1&0&1&0 \\ 0&1&0&-1 \\ 0&0&-1&1 \end{bmatrix}

draw the graph corresponding to the matrix, and state whether or not it is a tree and whether the rows are linearly independent. Demonstrate that removing a row produces a spanning tree, and describe the subspace of which the remaining rows form a basis.

Answer: The incidence matrix has four nodes, corresponding to the columns, and four edges, corresponding to the rows. The nodes can be arranged in the form of a square. Put node 1 in the upper left corner of the square. Edge 1 runs from node 1 to node 2; put node 2 in the lower left corner of the square, so that edge 1 forms the left side of the square. Edge 2 runs from node 1 to node 3; put node 3 in the upper right corner of the square, so that edge 2 forms the top side of the square.

Put the remaining node 4 in the lower right corner of the square. Edge 3 runs from node 4 to node 2, and thus forms the bottom side of the square. Edge 4 runs from node 3 to node 4, and thus forms the right side of the square.

Since the four edges form a loop (in the shape of a square) the graph is not a tree. Also, the rows are not linearly independent, since the first row minus the sum of the second and third rows equals the fourth row:

\begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix} - \left( \begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 0 \\ -1 \end{bmatrix} \right)

= \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix} - \begin{bmatrix} -1 \\ 1 \\ 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -1 \\ 1 \end{bmatrix}

If we remove the fourth row and the corresponding edge (i.e., the right side of the square) then the resulting three edges form a spanning tree, since they touch all four nodes and have no loops. The remaining three rows are linearly independent and form a basis for the row space \mathcal R(A^T).
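Both claims are easy to verify mechanically; the short Python sketch below (mine, not Strang's) checks the row dependency and that edges 1 through 3 touch all four nodes:

```python
# Check the row dependency and the spanning-tree claim for exercise 2.5.10.

A = [[-1, 1, 0, 0],
     [-1, 0, 1, 0],
     [ 0, 1, 0, -1],
     [ 0, 0, -1, 1]]

# Row 1 minus (row 2 + row 3) equals row 4, so the rows are dependent.
dep = [A[0][j] - (A[1][j] + A[2][j]) for j in range(4)]
print(dep == A[3])  # True

# Dropping edge 4 leaves 3 edges that still touch all 4 nodes
# (3 edges on 4 nodes with no loop is exactly a spanning tree).
touched = {j for row in A[:3] for j in range(4) if row[j] != 0}
print(sorted(touched))  # [0, 1, 2, 3]
```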


Linear Algebra and Its Applications, Exercise 2.5.9

Exercise 2.5.9. Given the incidence matrix A from exercise 2.5.6 (for the graph on page 113 with six edges and four nodes) and the diagonal matrix

C = \begin{bmatrix} c_1&0&0&0&0&0 \\ 0&c_2&0&0&0&0 \\ 0&0&c_3&0&0&0 \\ 0&0&0&c_4&0&0 \\ 0&0&0&0&c_5&0 \\ 0&0&0&0&0&c_6 \end{bmatrix}

compute A^TA and A^TCA. Describe how the diagonal and other entries in A^TA can be predicted from looking at the graph to which A corresponds.

Answer: We have

A^TA = \begin{bmatrix} -1&-1&0&0&-1&0 \\ 1&0&-1&-1&0&0 \\ 0&1&1&0&0&-1 \\ 0&0&0&1&1&1 \end{bmatrix} \begin{bmatrix} -1&1&0&0 \\ -1&0&1&0 \\ 0&-1&1&0 \\ 0&-1&0&1 \\ -1&0&0&1 \\ 0&0&-1&1 \end{bmatrix}

= \begin{bmatrix} 3&-1&-1&-1 \\ -1&3&-1&-1 \\ -1&-1&3&-1 \\ -1&-1&-1&3 \end{bmatrix}

and

A^TCA = \begin{bmatrix} -1&-1&0&0&-1&0 \\ 1&0&-1&-1&0&0 \\ 0&1&1&0&0&-1 \\ 0&0&0&1&1&1 \end{bmatrix} \begin{bmatrix} c_1&0&0&0&0&0 \\ 0&c_2&0&0&0&0 \\ 0&0&c_3&0&0&0 \\ 0&0&0&c_4&0&0 \\ 0&0&0&0&c_5&0 \\ 0&0&0&0&0&c_6 \end{bmatrix} \begin{bmatrix} -1&1&0&0 \\ -1&0&1&0 \\ 0&-1&1&0 \\ 0&-1&0&1 \\ -1&0&0&1 \\ 0&0&-1&1 \end{bmatrix}

= \begin{bmatrix} -c_1&-c_2&0&0&-c_5&0 \\ c_1&0&-c_3&-c_4&0&0 \\ 0&c_2&c_3&0&0&-c_6 \\ 0&0&0&c_4&c_5&c_6 \end{bmatrix} \begin{bmatrix} -1&1&0&0 \\ -1&0&1&0 \\ 0&-1&1&0 \\ 0&-1&0&1 \\ -1&0&0&1 \\ 0&0&-1&1 \end{bmatrix}

= \begin{bmatrix} c_1+c_2+c_5&-c_1&-c_2&-c_5 \\ -c_1&c_1+c_3+c_4&-c_3&-c_4 \\ -c_2&-c_3&c_2+c_3+c_6&-c_6 \\ -c_5&-c_4&-c_6&c_4+c_5+c_6 \end{bmatrix}

Note that c_1 through c_6 in C can be associated with edges 1 through 6 respectively. Each row of the product A^TCA then corresponds to a given node, and the entries in the row reflect the edges connected to that node and the nodes connected to the given node by those edges.

For example, the first row of A^TCA corresponds to node 1. The value of c_1+c_2+c_5 for the (1, 1) entry reflects the fact that edges 1, 2, and 5 all have node 1 as an endpoint. The value of -c_1 for the (1, 2) entry reflects the fact that edge 1 connects node 1 to node 2, the value of -c_2 for the (1, 3) entry reflects the fact that edge 2 connects node 1 to node 3, and the value of -c_5 for the (1, 4) entry reflects the fact that edge 5 connects node 1 to node 4.

Similar interpretations apply to rows 2 through 4, which represent the connections for nodes 2 through 4 respectively. The matrix A^TCA is symmetric because if a given edge connects node i to node j and thus produces an (i, j) entry of the matrix then that same edge also connects node j to node i to produce a (j, i) entry of the same value.
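This pattern is easy to test with concrete numbers. In the sketch below the conductances c_1 through c_6 are arbitrary sample values (my choice, not from the exercise):

```python
# Sanity-check the structure of A^T C A with sample conductances.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[-1, 1, 0, 0],
     [-1, 0, 1, 0],
     [ 0, -1, 1, 0],
     [ 0, -1, 0, 1],
     [-1, 0, 0, 1],
     [ 0, 0, -1, 1]]
c = [2, 3, 5, 7, 11, 13]   # arbitrary test values for c_1..c_6

At = [list(col) for col in zip(*A)]
AtC = [[At[i][k] * c[k] for k in range(6)] for i in range(4)]
K = matmul(AtC, A)

# Diagonal entry (1, 1) sums the c's of the edges touching node 1,
# i.e., c_1 + c_2 + c_5.
print(K[0][0] == c[0] + c[1] + c[4])  # True
# Off-diagonal entry (1, 2) is -c_1 (edge 1 joins nodes 1 and 2),
# and the whole matrix is symmetric.
print(K[0][1] == -c[0], K == [list(r) for r in zip(*K)])  # True True
```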


Linear Algebra and Its Applications, Exercise 2.5.8

Exercise 2.5.8. State the dimensions of the four fundamental subspaces of the incidence matrix A from exercise 2.5.6 (for the graph on page 113 with six edges and four nodes) and provide a set of basis vectors for each subspace.

Answer: The matrix in question is

A = \begin{bmatrix} -1&1&0&0 \\ -1&0&1&0 \\ 0&-1&1&0 \\ 0&-1&0&1 \\ -1&0&0&1 \\ 0&0&-1&1 \end{bmatrix}

From exercise 2.5.7 we know that Gaussian elimination on A produces the following echelon matrix U:

U = \begin{bmatrix} -1&1&0&0 \\ 0&-1&1&0 \\ 0&0&0&0 \\ 0&0&-1&1 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix}

This matrix has three pivots, in columns 1, 2, and 3, and its rank r is therefore 3. The rank is the dimension of the column space. Although elimination generally changes the column space (here \mathcal R(A) \ne \mathcal R(U)), it preserves its dimension, so the dimension of \mathcal R(A) is also 3. The first, second, and third columns of A (the pivot columns) form a basis for \mathcal R(A):

\begin{bmatrix} -1 \\ -1 \\ 0 \\ 0 \\ -1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 1 \\ 0 \\ -1 \\ -1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \\ 0 \\ -1 \end{bmatrix}

Since the dimension r of the column space \mathcal R(A) is 3 and the number of columns n is 4, the dimension of the nullspace \mathcal N(A) is n - r = 4 - 3 = 1. Since the sum of the columns of A is zero the vector

\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}

is a solution to Ax = 0 and forms a basis for \mathcal N(A).

The dimension of the row space of A is the same as the dimension of the column space, namely r = 3. The row space of A is also the same as the row space of U (since elimination makes the rows of U linear combinations of the rows of A). The first, second, and fourth rows of U

\begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ -1 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} 0 \\ 0 \\ -1 \\ 1 \end{bmatrix}

form a basis for the row space \mathcal R(U^T) and therefore for the row space \mathcal R(A^T) as well.

Since the dimension r of the row space \mathcal R(A^T) is 3 and the number of rows m is 6, the dimension of the left nullspace \mathcal N(A^T) is m - r = 6 - 3 = 3.

From exercise 2.5.6 we know that the vectors

\begin{bmatrix} 1 \\ -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} -1 \\ 0 \\ 0 \\ -1 \\ 1 \\ 0 \end{bmatrix} \qquad \begin{bmatrix} -1 \\ 1 \\ 0 \\ -1 \\ 0 \\ 1 \end{bmatrix}

are solutions to y^TA = 0 and are therefore in the left nullspace \mathcal N(A^T).

Note also that the vectors are linearly independent: since the first two vectors have zero as their last entry and the third vector has 1, no linear combination of the first two vectors can equal the third. Similarly, no linear combination of the second and third vectors can equal the first, and no linear combination of the first and third vectors can equal the second. (This can be seen by looking at the third and fifth entries of the vectors respectively.)

Since the three linearly independent vectors are in the left nullspace \mathcal N(A^T) and the dimension of \mathcal N(A^T) is 3, the three vectors form a basis for the space.
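The dimension count can be confirmed by computing the rank directly. The rank function below is an informal sketch of mine that row-reduces over exact rationals and counts pivots:

```python
from fractions import Fraction

# Confirm the dimensions of the four subspaces: rank(A) = 3, so
# dim N(A) = 4 - 3 = 1 and dim N(A^T) = 6 - 3 = 3.

def rank(M):
    """Row-reduce a copy of M over exact rationals and count pivots."""
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            m = M[i][c] / M[r][c]
            M[i] = [a - m * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[-1, 1, 0, 0],
     [-1, 0, 1, 0],
     [ 0, -1, 1, 0],
     [ 0, -1, 0, 1],
     [-1, 0, 0, 1],
     [ 0, 0, -1, 1]]

r = rank(A)
print(r, 4 - r, 6 - r)  # 3 1 3
```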


Linear Algebra and Its Applications, Exercise 2.5.7

Exercise 2.5.7. Suppose that the incidence matrix A from exercise 2.5.6 (for the graph on page 113 with six edges and four nodes) represents six games among four teams, and that the score differences for the six games are b_1 through b_6. When can you assign potentials to the teams so that the potentials agree with the values of b_1 through b_6?

(This corresponds to finding conditions on b_1 through b_6 that make the system Ax = b solvable. These can be found via elimination or by using Kirchhoff’s laws.)

Answer: We first attempt to solve Ax = b using Gaussian elimination, starting with the original system of equations:

\begin{bmatrix} -1&1&0&0 \\ -1&0&1&0 \\ 0&-1&1&0 \\ 0&-1&0&1 \\ -1&0&0&1 \\ 0&0&-1&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \\ b_5 \\ b_6 \end{bmatrix}

We start by subtracting 1 times the first row from the second and fifth rows:

\begin{bmatrix} -1&1&0&0 \\ 0&-1&1&0 \\ 0&-1&1&0 \\ 0&-1&0&1 \\ 0&-1&0&1 \\ 0&0&-1&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 - b_1 \\ b_3 \\ b_4 \\ b_5 - b_1 \\ b_6 \end{bmatrix}

We then subtract 1 times the second row from the third, fourth, and fifth rows:

\begin{bmatrix} -1&1&0&0 \\ 0&-1&1&0 \\ 0&0&0&0 \\ 0&0&-1&1 \\ 0&0&-1&1 \\ 0&0&-1&1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}

= \begin{bmatrix} b_1 \\ b_2 - b_1 \\ b_3 - (b_2 - b_1) \\ b_4 - (b_2 - b_1) \\ b_5 - b_1 - (b_2 - b_1) \\ b_6 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 - b_1 \\ b_3 - b_2 + b_1 \\ b_4 - b_2 + b_1 \\ b_5 - b_2 \\ b_6 \end{bmatrix}

Finally we subtract 1 times the fourth row from the fifth and sixth rows:

\begin{bmatrix} -1&1&0&0 \\ 0&-1&1&0 \\ 0&0&0&0 \\ 0&0&-1&1 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}

= \begin{bmatrix} b_1 \\ b_2 - b_1 \\ b_3 - b_2 + b_1 \\ b_4 - b_2 + b_1 \\ b_5 - b_2 - (b_4 - b_2 + b_1) \\ b_6 - (b_4 - b_2 + b_1) \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 - b_1 \\ b_3 - b_2 + b_1 \\ b_4 - b_2 + b_1 \\ b_5 - b_4 - b_1 \\ b_6 - b_4 + b_2 - b_1 \end{bmatrix}

From the third row we see that we must have b_3 - b_2 + b_1 = 0. From the fifth row we must have b_5 - b_4 - b_1 = 0. Finally, from the sixth row we must have b_6 - b_4 + b_2 - b_1 = 0. Rearranging the terms to put them in order and multiplying the second and third equations by -1 gives us the following three conditions:

\begin{array}{l} b_1 - b_2 + b_3 = 0 \\ b_1 + b_4 - b_5 = 0 \\ b_1 - b_2 + b_4 - b_6 = 0 \end{array}
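As a spot check (my own, not part of the exercise), any b of the form b = Ax automatically satisfies all three conditions; the potentials below are arbitrary sample values:

```python
# Any b built from potential differences, b = Ax, must satisfy the
# three solvability conditions derived by elimination.

A = [[-1, 1, 0, 0],
     [-1, 0, 1, 0],
     [ 0, -1, 1, 0],
     [ 0, -1, 0, 1],
     [-1, 0, 0, 1],
     [ 0, 0, -1, 1]]
x = [10, 4, 7, -2]          # sample potentials, one per node

b = [sum(A[i][j] * x[j] for j in range(4)) for i in range(6)]

print(b[0] - b[1] + b[2])            # 0
print(b[0] + b[3] - b[4])            # 0
print(b[0] - b[1] + b[3] - b[5])     # 0
```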

Now we try to find conditions on b_1 through b_6 by using Kirchhoff’s voltage law. Recall from exercise 2.5.6 that we found three independent loops in the graph, the first running around the outer edges, the second around the interior triangle in the upper left of the graph, and the third including edges 1, 2, 4, and 6.

The outer loop of the graph contains edges 1, 2, and 3, with edge 2 running in the opposite direction from the other two. Since b_1, b_2, and b_3 are the potential differences along each edge, and the sum of the potential differences around the loop must be zero, we must have b_1 - b_2 + b_3 = 0. This gives us the first condition listed above.

The loop around the interior triangle in the upper left contains edges 1, 4, and 5, with edge 5 running in the opposite direction from the other two. Since b_1, b_4, and b_5 are the potential differences along each edge, and the sum of the potential differences around the loop must be zero, we must have b_1 + b_4 - b_5 = 0. This gives us the second condition listed above.

The final loop contains edges 1, 2, 4, and 6, with edges 2 and 6 running in the opposite direction from the other two. Since b_1, b_2, b_4, and b_6 are the potential differences along each edge, and the sum of the potential differences around the loop must be zero, we must have b_1 -b_2 + b_4 - b_6 = 0. This gives us the third and final condition listed above.

Note that by choosing a different set of independent loops we produce a different but equivalent set of conditions. For example, if we choose as the independent loops the three interior triangles of the graph (in the upper left, upper right, and bottom of the graph respectively) then the corresponding conditions are as follows:

\begin{array}{l} b_1 + b_4 - b_5 = 0 \\ b_2 - b_5 + b_6 = 0 \\ b_3 - b_4 + b_6 = 0\end{array}

The first condition is the same as the second condition in the original set.

Subtracting the second condition from the first produces the third condition in the original set:

b_1 + b_4 - b_5 - (b_2 - b_5 + b_6) = 0 - 0

\rightarrow b_1 + b_4 - b_5 - b_2 + b_5 - b_6 = 0

\rightarrow b_1 - b_2 + b_4 - b_6 = 0

Finally, subtracting the second condition from the first and then adding the third produces the first condition in the original set:

b_1 + b_4 - b_5 - (b_2 - b_5 + b_6) + (b_3 - b_4 + b_6) = 0 - 0 + 0

\rightarrow b_1 + b_4 - b_5 - b_2 + b_5 - b_6 + b_3 - b_4 + b_6 = 0

\rightarrow b_1 - b_2 + b_3 = 0


Linear Algebra and Its Applications, Exercise 2.5.6

Exercise 2.5.6. Determine the incidence matrix A for the graph on page 113 with six edges and four nodes, as follows: Edge 1 from node 1 to node 2, edge 2 from node 1 to node 3, edge 3 from node 2 to node 3, edge 4 from node 2 to node 4, edge 5 from node 1 to node 4, and edge 6 from node 3 to node 4.

Find three independent vectors y that satisfy A^Ty = 0. How do these vectors relate to the loops in the graph?

Answer: Since there are six edges in the graph the incidence matrix A has six rows, and since there are four nodes A has four columns; A is therefore a 6 by 4 matrix.

The first row of A corresponds to edge 1. Since edge 1 goes from node 1 to node 2 the entry in the (1, 1) position is -1 (representing “flow” out of node 1) and the entry in the (1, 2) position is 1 (representing flow into node 2). The second row corresponds to edge 2; since edge 2 goes from node 1 to node 3 the entry in the (2, 1) position is -1 (representing flow out of node 1) and the entry in the (2, 3) position is 1 (representing flow into node 3). The values of the entries for the other rows (edges) can be similarly determined, and the final incidence matrix is

A = \begin{bmatrix} -1&1&0&0 \\ -1&0&1&0 \\ 0&-1&1&0 \\ 0&-1&0&1 \\ -1&0&0&1 \\ 0&0&-1&1 \end{bmatrix}

We now try to find a solution to A^Ty = 0 or

\begin{bmatrix} -1&-1&0&0&-1&0 \\ 1&0&-1&-1&0&0 \\ 0&1&1&0&0&-1 \\ 0&0&0&1&1&1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}

by doing Gaussian elimination on A^T.

We first subtract -1 times the first row from the second row:

\begin{bmatrix} -1&-1&0&0&-1&0 \\ 1&0&-1&-1&0&0 \\ 0&1&1&0&0&-1 \\ 0&0&0&1&1&1 \end{bmatrix} \Rightarrow \begin{bmatrix} -1&-1&0&0&-1&0 \\ 0&-1&-1&-1&-1&0 \\ 0&1&1&0&0&-1 \\ 0&0&0&1&1&1 \end{bmatrix}

We then subtract -1 times the second row from the third row:

\begin{bmatrix} -1&-1&0&0&-1&0 \\ 0&-1&-1&-1&-1&0 \\ 0&1&1&0&0&-1 \\ 0&0&0&1&1&1 \end{bmatrix} \Rightarrow \begin{bmatrix} -1&-1&0&0&-1&0 \\ 0&-1&-1&-1&-1&0 \\ 0&0&0&-1&-1&-1 \\ 0&0&0&1&1&1 \end{bmatrix}

Finally, we subtract -1 times the third row from the fourth row:

\begin{bmatrix} -1&-1&0&0&-1&0 \\ 0&-1&-1&-1&-1&0 \\ 0&0&0&-1&-1&-1 \\ 0&0&0&1&1&1 \end{bmatrix} \Rightarrow \begin{bmatrix} -1&-1&0&0&-1&0 \\ 0&-1&-1&-1&-1&0 \\ 0&0&0&-1&-1&-1 \\ 0&0&0&0&0&0 \end{bmatrix}

The resulting echelon matrix has pivots in columns one, two, and four, so y_1, y_2, and y_4 are basic variables and y_3, y_5, and y_6 are free variables.

We first set y_3 = 1 and y_5 = y_6 = 0. From the third row of the echelon matrix we have

-y_4 - y_5 - y_6 = 0 \rightarrow -y_4 - 0 - 0 = 0

\rightarrow y_4 = 0

From the second row of the echelon matrix we have

-y_2 - y_3 - y_4 - y_5 = 0 \rightarrow -y_2 - 1 - 0 - 0 = 0

\rightarrow y_2 = -1

and from the first row we then have

-y_1 - y_2 - y_5 = 0 \rightarrow -y_1 - (-1) - 0 = 0

\rightarrow y_1 = 1

So one solution to A^Ty = 0 is \begin{bmatrix} 1&-1&1&0&0&0 \end{bmatrix}^T.

We next set y_5 = 1 and y_3 = y_6 = 0. From the third row of the echelon matrix we have

-y_4 - y_5 - y_6 = 0 \rightarrow -y_4 - 1 - 0 = 0

\rightarrow y_4 = -1

From the second row of the echelon matrix we have

-y_2 - y_3 - y_4 - y_5 = 0 \rightarrow -y_2 - 0 - (-1) - 1 = 0

\rightarrow y_2 = 0

and from the first row we then have

-y_1 - y_2 - y_5 = 0 \rightarrow -y_1 - 0 - 1 = 0

\rightarrow y_1 = -1

So a second solution to A^Ty = 0 is \begin{bmatrix} -1&0&0&-1&1&0 \end{bmatrix}^T.

Finally we set y_6 = 1 and y_3 = y_5 = 0. From the third row of the echelon matrix we have

-y_4 - y_5 - y_6 = 0 \rightarrow -y_4 - 0 - 1 = 0

\rightarrow y_4 = -1

From the second row of the echelon matrix we have

-y_2 - y_3 - y_4 - y_5 = 0 \rightarrow -y_2 - 0 - (-1) - 0 = 0

\rightarrow y_2 = 1

and from the first row we then have

-y_1 - y_2 - y_5 = 0 \rightarrow -y_1 - 1 - 0 = 0

\rightarrow y_1 = -1

So a third solution to A^Ty = 0 is \begin{bmatrix} -1&1&0&-1&0&1 \end{bmatrix}^T.

The three vectors u, v, and w below are thus solutions to A^Ty = 0, and they are linearly independent (as you can verify by inspecting them):

u = \begin{bmatrix} 1 \\ -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad v = \begin{bmatrix} -1 \\ 0 \\ 0 \\ -1 \\ 1 \\ 0 \end{bmatrix} \qquad w = \begin{bmatrix} -1 \\ 1 \\ 0 \\ -1 \\ 0 \\ 1 \end{bmatrix}

Since A^Ty = 0 is equivalent to y^TA = 0 (one equation is the transpose of the other), the vectors are in the left nullspace of A. (In fact they form a basis for the left nullspace.)
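It is easy to confirm mechanically that u, v, and w satisfy A^Ty = 0; the following sketch (my own) computes A^Ty for each:

```python
# Verify that u, v, and w from the text all satisfy A^T y = 0.

A = [[-1, 1, 0, 0],
     [-1, 0, 1, 0],
     [ 0, -1, 1, 0],
     [ 0, -1, 0, 1],
     [-1, 0, 0, 1],
     [ 0, 0, -1, 1]]
u = [1, -1, 1, 0, 0, 0]
v = [-1, 0, 0, -1, 1, 0]
w = [-1, 1, 0, -1, 0, 1]

def AT_times(y):
    """Compute A^T y as a length-4 list."""
    return [sum(A[i][j] * y[i] for i in range(6)) for j in range(4)]

print(AT_times(u), AT_times(v), AT_times(w))
# [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0]
```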

The vectors also correspond to independent loops in the graph used to create the incidence matrix A:

The first vector u is a loop around the outside of the graph, and includes edge 1, edge 2, and edge 3. Note that edge 2 runs in a different direction than the other two, and hence its corresponding entry is reversed in sign from the other two.

The second vector v is a loop around an interior triangle of the graph (in the upper left), and includes edge 1, edge 4, and edge 5, with edge 5 running in a different direction than the other two.

The third vector w includes edges 1, 2, 4, and 6, with edges 2 and 6 running in different directions than the other two.

Note that if we add u and w

u + w = \begin{bmatrix} 1 \\ -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} -1 \\ 1 \\ 0 \\ -1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \\ 0 \\ 1 \end{bmatrix}

we get a vector representing a loop around the bottom interior triangle of the graph, comprising edges 3, 4, and 6.

Similarly, if we subtract v from w

w - v = \begin{bmatrix} -1 \\ 1 \\ 0 \\ -1 \\ 0 \\ 1 \end{bmatrix} - \begin{bmatrix} -1 \\ 0 \\ 0 \\ -1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ -1 \\ 1 \end{bmatrix}

we get a vector representing the loop around the upper right interior triangle, comprising edges 2, 5, and 6.

These two vectors together with v thus represent independent loops associated with the three interior triangles of the graph.


Linear Algebra and Its Applications, Exercise 2.5.5

Exercise 2.5.5. Given the incidence matrix A from exercise 2.5.1 and the diagonal matrix

C = \begin{bmatrix} c_1&0&0 \\ 0&c_2&0 \\ 0&0&c_3 \end{bmatrix}

compute A^TCA and show that the 2 by 2 matrix resulting from removing the third row and third column is invertible.

Answer: From exercise 2.5.1 we have the incidence matrix

A = \begin{bmatrix} 1&-1&0 \\ 0&1&-1 \\ 1&0&-1 \end{bmatrix}

so that

A^TCA = \begin{bmatrix} 1&0&1 \\ -1&1&0 \\ 0&-1&-1 \end{bmatrix} \begin{bmatrix} c_1&0&0 \\ 0&c_2&0 \\ 0&0&c_3 \end{bmatrix} \begin{bmatrix} 1&-1&0 \\ 0&1&-1 \\ 1&0&-1 \end{bmatrix}

= \begin{bmatrix} c_1&0&c_3 \\ -c_1&c_2&0 \\ 0&-c_2&-c_3 \end{bmatrix} \begin{bmatrix} 1&-1&0 \\ 0&1&-1 \\ 1&0&-1 \end{bmatrix}

= \begin{bmatrix} c_1+c_3&-c_1&-c_3 \\ -c_1&c_1+c_2&-c_2 \\ -c_3&-c_2&c_2+c_3 \end{bmatrix}

Note that the third row of this matrix is equal to -1 times the sum of the first and second rows, so the rows are linearly dependent and the matrix is singular.

If we remove the third row and third column of A^TCA we obtain the following matrix:

\begin{bmatrix} c_1+c_3&-c_1 \\ -c_1&c_1+c_2 \end{bmatrix}

This matrix is nonsingular (except for certain values of c_1, c_2 and c_3 as discussed below); its inverse is

1/((c_1+c_3)(c_1+c_2) - (-c_1)(-c_1)) \begin{bmatrix} c_1+c_2&-(-c_1) \\ -(-c_1)&c_1+c_3 \end{bmatrix}

= 1/(c_1^2+c_1c_2+c_3c_1+c_3c_2-c_1^2) \begin{bmatrix} c_1+c_2&c_1 \\ c_1&c_1+c_3 \end{bmatrix}

= 1/(c_1c_2+c_1c_3+c_2c_3) \begin{bmatrix} c_1+c_2&c_1 \\ c_1&c_1+c_3 \end{bmatrix}

Note that if c_1c_2+c_1c_3+c_2c_3 = 0 then the 2 by 2 matrix is singular and has no inverse. This would be true, for example, if c_1 = c_2 = 2 and c_3 = -1 so that the 2 by 2 matrix derived from A^TCA is

\begin{bmatrix} 1&-2 \\ -2&4 \end{bmatrix}
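The singular case is easy to confirm numerically; the sketch below (function names are mine) evaluates the determinant c_1c_2 + c_1c_3 + c_2c_3 for the values above and for an all-positive choice:

```python
# Check that c_1 = c_2 = 2, c_3 = -1 makes the reduced matrix singular.

def reduced(c1, c2, c3):
    """The 2 x 2 matrix left after removing row 3 and column 3 of A^T C A."""
    return [[c1 + c3, -c1], [-c1, c1 + c2]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

M = reduced(2, 2, -1)
print(M)        # [[1, -2], [-2, 4]]
print(det2(M))  # 0  (the matrix is singular)

# With all-positive conductances the determinant is positive:
print(det2(reduced(1, 1, 1)))  # 3
```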


Linear Algebra and Its Applications, Exercise 2.5.4

Exercise 2.5.4. Given the incidence matrix A from exercise 2.5.1 show that A^TA is symmetric and singular, and determine its nullspace. Show that the matrix obtained by removing the last row and column of A^TA is nonsingular.

Answer: From exercise 2.5.1 we have the incidence matrix

A = \begin{bmatrix} 1&-1&0 \\ 0&1&-1 \\ 1&0&-1 \end{bmatrix}

so that

A^TA = \begin{bmatrix} 1&0&1 \\ -1&1&0 \\ 0&-1&-1 \end{bmatrix} \begin{bmatrix} 1&-1&0 \\ 0&1&-1 \\ 1&0&-1 \end{bmatrix}

= \begin{bmatrix} 2&-1&-1 \\ -1&2&-1 \\ -1&-1&2 \end{bmatrix}

and A^TA is symmetric.

To see whether A^TA is singular we perform Gaussian elimination on it. We start by multiplying the first row by -\frac{1}{2} and subtracting it from the second and third rows:

\begin{bmatrix} 2&-1&-1 \\ -1&2&-1 \\ -1&-1&2 \end{bmatrix} \Rightarrow \begin{bmatrix} 2&-1&-1 \\ 0&\frac{3}{2}&-\frac{3}{2} \\ 0&-\frac{3}{2}&\frac{3}{2} \end{bmatrix}

We then multiply the second row by -1 and subtract it from the third row:

\begin{bmatrix} 2&-1&-1 \\ 0&\frac{3}{2}&-\frac{3}{2} \\ 0&-\frac{3}{2}&\frac{3}{2} \end{bmatrix} \Rightarrow \begin{bmatrix} 2&-1&-1 \\ 0&\frac{3}{2}&-\frac{3}{2} \\ 0&0&0 \end{bmatrix}

The matrix is now in echelon form. Since it has three columns and only two pivots the matrix is singular (with rank r = 2).

In solving (A^TA)x = 0 we have x_1 and x_2 as basic variables and x_3 as a free variable. Using the echelon matrix above we have

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} 2x_1&-&x_2&-&x_3&=&0 \\ &&\frac{3}{2}x_2&-&\frac{3}{2}x_3&=&0 \end{array}

From the second equation we have x_2 = x_3 and substituting into the first equation we have x_1 = x_2 = x_3. The nullspace therefore consists of all vectors of the form \begin{bmatrix} c&c&c \end{bmatrix}^T. It has dimension n - r = 3 - 2 = 1.

If we remove the third row and third column of A^TA we obtain the following matrix:

\begin{bmatrix} 2&-1 \\ -1&2 \end{bmatrix}

This matrix is nonsingular; its inverse is

\frac{1}{2 \cdot 2 - (-1)(-1)} \begin{bmatrix} 2&-(-1) \\ -(-1)&2 \end{bmatrix}

= \frac{1}{4-1} \begin{bmatrix} 2&1 \\ 1&2 \end{bmatrix} = \frac{1}{3} \begin{bmatrix} 2&1 \\ 1&2 \end{bmatrix} = \begin{bmatrix} \frac{2}{3}&\frac{1}{3} \\ \frac{1}{3}&\frac{2}{3} \end{bmatrix}
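The 2 by 2 inverse computed by the ad-bc formula can be verified numerically; a sketch assuming NumPy:

```python
import numpy as np

# A^T A with its last row and column removed
M = np.array([[ 2, -1],
              [-1,  2]])

M_inv = np.linalg.inv(M)

# Expected inverse from the 2x2 formula: (1/3) * [[2, 1], [1, 2]]
expected = np.array([[2, 1],
                     [1, 2]]) / 3
```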

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.


Linear Algebra and Its Applications, Exercise 2.5.3

Exercise 2.5.3. Given the incidence matrix A from exercise 2.5.1 and any vector f in the row space of A show that f_1 + f_2 + f_3 = 0. Prove the same result based on the linear system A^Ty = f. What is the implication if f_1, f_2, and f_3 are currents into each node?

Answer: From exercise 2.5.1 we have the incidence matrix

A = \begin{bmatrix} 1&-1&0 \\ 0&1&-1 \\ 1&0&-1 \end{bmatrix}

If f = (f_1, f_2, f_3) is in the row space of A then we have

\begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix} = c_1 \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} + c_3 \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}

for some set of scalar coefficients c_1, c_2, and c_3 so that

\begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix} = \begin{bmatrix} c_1 + c_3 \\ -c_1 + c_2 \\ -c_2 - c_3 \end{bmatrix}

We then have

f_1 + f_2 + f_3 = (c_1 + c_3) + (-c_1 + c_2) + (-c_2 - c_3)

= (c_1 - c_1) + (c_2 - c_2) + (c_3 - c_3) = 0 + 0 + 0 = 0

We therefore have f_1 + f_2 + f_3 = 0 for all vectors f in the row space of A.
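Since the coefficients c_1, c_2, c_3 were arbitrary, this holds for every combination of the rows; a quick numerical spot-check with NumPy (random coefficients are an illustrative assumption):

```python
import numpy as np

A = np.array([[1, -1,  0],
              [0,  1, -1],
              [1,  0, -1]])

# An arbitrary combination of the rows of A: f = c1*row1 + c2*row2 + c3*row3,
# i.e. f = A^T c
rng = np.random.default_rng(0)
c = rng.standard_normal(3)
f = A.T @ c

# The components of f should sum to zero for every choice of c
component_sum = float(f.sum())
```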

Turning to the system A^Ty = f we have

\begin{bmatrix} 1&0&1 \\ -1&1&0 \\ 0&-1&-1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix}

which corresponds to the system of equations

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} y_1&&&+&y_3&=&f_1 \\ -y_1&+&y_2&&&=&f_2 \\ &&-y_2&-&y_3&=&f_3 \end{array}

We then have

f_1 + f_2 + f_3 = (y_1 + y_3) + (-y_1 + y_2) + (-y_2 - y_3)

= (y_1 - y_1) + (y_2 - y_2) + (y_3 - y_3) = 0
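The same cancellation holds for any edge currents y; a small sketch (the particular values of y are an arbitrary example):

```python
import numpy as np

# A^T for the incidence matrix from exercise 2.5.1
At = np.array([[ 1,  0,  1],
               [-1,  1,  0],
               [ 0, -1, -1]])

# Whatever the edge currents y are, the node sources f = A^T y sum to zero
y = np.array([2.0, -1.0, 4.0])  # example edge currents
f = At @ y
total = float(f.sum())
```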

The 3 by 3 incidence matrix A represents a graph with three nodes and three edges. The first row represents edge 1 from node 2 to node 1 (i.e., leaving node 2 and entering node 1). The second row represents edge 2 from node 3 to node 2. The third row represents edge 3 from node 3 to node 1.

Each node of the graph is represented by a column of A and thus by a row of A^T. If the vector f represents current sources at each node (f_1 at node 1, f_2 at node 2, and f_3 at node 3) then the fact that f_1 + f_2 + f_3 = 0 means that the total current entering the network from the sources is zero: current is conserved overall, as required when Kirchhoff's Current Law holds at every node.


Linear Algebra and Its Applications, Exercise 2.5.2

Exercise 2.5.2. Given the incidence matrix A from exercise 2.5.1 and any vector b in the column space of A show that b_1 + b_2 - b_3 = 0. Prove the same result based on the rows of A. What is the implication for the potential differences around a loop?

Answer: From exercise 2.5.1 we have the incidence matrix

A = \begin{bmatrix} 1&-1&0 \\ 0&1&-1 \\ 1&0&-1 \end{bmatrix}

If b = (b_1, b_2, b_3) is in the column space of A then we have

\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = c_1 \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + c_2 \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} + c_3 \begin{bmatrix} 0 \\ -1 \\ -1 \end{bmatrix}

for some set of scalar coefficients c_1, c_2, and c_3 so that

\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = \begin{bmatrix} c_1 - c_2 \\ c_2 - c_3 \\ c_1 - c_3 \end{bmatrix}

We then have

b_1 + b_2 - b_3 = (c_1 - c_2) + (c_2 - c_3) - (c_1 - c_3)

= (c_1 - c_1) + (c_2 - c_2) + (c_3 - c_3) = 0 + 0 + 0 = 0

We therefore have b_1 + b_2 - b_3 = 0 for all vectors b in the column space of A.
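Again, since the coefficients were arbitrary this holds for every vector in the column space; a numerical spot-check with NumPy (the random vector x is an illustrative assumption):

```python
import numpy as np

A = np.array([[1, -1,  0],
              [0,  1, -1],
              [1,  0, -1]])

# An arbitrary vector in the column space of A: b = A x
rng = np.random.default_rng(1)
x = rng.standard_normal(3)
b = A @ x

# The combination b1 + b2 - b3 should vanish for every x
combo = float(b[0] + b[1] - b[2])
```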

Turning to the rows of A if Ax = b we have

\begin{bmatrix} 1&-1&0 \\ 0&1&-1 \\ 1&0&-1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}

which corresponds to the system of equations

\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcr} x_1&-&x_2&&&=&b_1 \\ &&x_2&-&x_3&=&b_2 \\ x_1&&&-&x_3&=&b_3 \end{array}

We then have

b_1 + b_2 - b_3 = (x_1 - x_2) + (x_2 - x_3) - (x_1 - x_3)

= (x_1 - x_1) + (x_2 - x_2) + (x_3 - x_3) = 0 + 0 + 0 = 0

We also have

(x_1 - x_2) + (x_2 - x_3) - (x_1 - x_3)

= (x_1 - x_2) + (x_2 - x_3) + (x_3 - x_1)

so that

(x_1 - x_2) + (x_2 - x_3) + (x_3 - x_1) = 0

The 3 by 3 incidence matrix A represents a graph with three nodes and three edges and hence one loop. Each node of the graph is represented by a column of A and each edge by a row of A. The first row represents edge 1 from node 2 to node 1 (i.e., leaving node 2 and entering node 1). The second row represents edge 2 from node 3 to node 2. The third row represents edge 3 from node 3 to node 1.

If the vector x represents potentials at the nodes (x_1 at node 1, x_2 at node 2, and x_3 at node 3) then x_1 - x_2 is the potential difference along edge 1 (from node 2 to node 1) and x_2 - x_3 is the potential difference along edge 2 (from node 3 to node 2). Edge 3 runs from node 3 to node 1, so x_3 - x_1 is its potential difference traversed in reverse (from node 1 back to node 3), which completes the loop. From the equations above we see that the sum of the potential differences around the loop is zero (Kirchhoff's Voltage Law).
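The telescoping around the loop is easy to see with concrete numbers (the potential values below are a hypothetical example):

```python
# Example potentials at the three nodes (hypothetical values)
x1, x2, x3 = 5.0, 3.0, 1.0

# Potential differences around the loop, matching the derivation above
d1 = x1 - x2  # edge 1 (from node 2 to node 1)
d2 = x2 - x3  # edge 2 (from node 3 to node 2)
d3 = x3 - x1  # edge 3 traversed in reverse (from node 1 to node 3)

# Kirchhoff's Voltage Law: the differences telescope to zero
loop_sum = d1 + d2 + d3
```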
