I recently read the (excellent) online resource *Quantum Computing for the Very Curious* by Andy Matuschak and Michael Nielsen. Upon reading the proof that all length-preserving matrices are unitary and trying it out myself, I came to believe that there is an error in the proof as written, specifically with trying to show that the off-diagonal entries in $U^\dagger U$ are zero if $U$ is length-preserving.

Using the identity $\|v\|^2 = v^\dagger v$, a suitable choice of $v$ with $v = e_j + e_k$, and the fact that $U$ is length-preserving, Nielsen first shows that $(U^\dagger U)_{jk} + (U^\dagger U)_{kj} = 0$ for $j \neq k$.

He then goes on to write “But what if we’d done something slightly different, and instead of using $e_j + e_k$ we’d used $e_j - e_k$? … I won’t explicitly go through the steps – you can do that yourself – but if you do go through them you end up with the equation: $(U^\dagger U)_{jk} - (U^\dagger U)_{kj} = 0$.”

I was an undergraduate physics and math major, but either I never worked with bra-ket notation and Hermitian conjugates or I’ve forgotten whatever I knew about them. In any case, in working through this I could not get the same result as Nielsen; I simply ended up once again proving that $(U^\dagger U)_{jk} + (U^\dagger U)_{kj} = 0$.

After some thought and experimentation I concluded that the key is to choose $v = e_j + ie_k$. Below is my (possibly mistaken!) attempt at a correct proof that all length-preserving matrices are unitary.
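Before the proof itself, here is a small numerical sketch (my own illustration, not from the essay; the matrix $M$ and the value $c$ are arbitrary choices of mine) of why a real test vector like $e_j - e_k$ cannot finish the job. The matrix below has 1s on the diagonal and nonzero *antisymmetric* off-diagonal entries, so it is not the identity, yet both real test vectors are blind to that; only the complex vector $e_j + ie_k$ exposes it:

```python
import numpy as np

# A matrix with 1s on the diagonal and antisymmetric real off-diagonal
# entries: M[0,1] + M[1,0] = 0, but M is NOT the identity.
c = 0.5  # arbitrary nonzero real value, chosen for illustration
M = np.array([[1.0, c],
              [-c, 1.0]], dtype=complex)

e_j = np.array([1.0, 0.0], dtype=complex)
e_k = np.array([0.0, 1.0], dtype=complex)

def quad(v):
    """The quadratic form v† M v."""
    return np.conj(v) @ M @ v

# Both real test vectors give v† M v = 2, exactly as the identity would...
q_plus = quad(e_j + e_k)       # the off-diagonal terms cancel
q_minus = quad(e_j - e_k)      # same equation again - no new information
# ...but the complex test vector picks up 2ic and detects M != I.
q_imag = quad(e_j + 1j * e_k)
```

So adding the $e_j + e_k$ and $e_j - e_k$ equations can never isolate $(U^\dagger U)_{jk}$ on its own; the factor of $i$ is what produces an independent second equation.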

Proof: Let $U$ be a length-preserving matrix such that for any vector $v$ we have $\|Uv\| = \|v\|$. We wish to show that $U$ is unitary, i.e., $U^\dagger U = I$.

We first show that the diagonal elements of $U^\dagger U$, or $(U^\dagger U)_{jj}$, are equal to 1.

To do this we start with the unit vectors $e_j$ and $e_k$, with 1 in positions $j$ and $k$ respectively, and 0 otherwise. The product $Ue_j$ is then the $j$th column of $U$, and $e_j^\dagger U^\dagger U e_j$ is the $jj$th entry of $U^\dagger U$, or $(U^\dagger U)_{jj}$.

From the general identity $\|v\|^2 = v^\dagger v$ we also have $e_j^\dagger U^\dagger U e_j = (Ue_j)^\dagger (Ue_j) = \|Ue_j\|^2$. But since $U$ is length-preserving we have $\|Ue_j\|^2 = \|e_j\|^2 = 1$, since $e_j$ is a unit vector.

We thus have $(U^\dagger U)_{jj} = \|Ue_j\|^2 = 1$. So all diagonal entries of $U^\dagger U$ are 1.
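As a quick numerical sanity check of this step (my own addition, not part of the proof; it uses a unitary matrix as the length-preserving example, since the Q factor of a QR decomposition is unitary): the columns $Qe_j$ are unit vectors, and the diagonal entries of $Q^\dagger Q$ are their squared norms, all equal to 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# A length-preserving test matrix: the Q factor of the QR decomposition
# of a random complex matrix is unitary.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)

# Q @ e_j is the j-th column of Q; its squared norm is (Q† Q)_{jj}.
col_norms = np.linalg.norm(Q, axis=0)    # norms of the columns Q e_j
diag = np.real(np.diag(Q.conj().T @ Q))  # diagonal entries of Q† Q
```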

We next show that the non-diagonal elements of $U^\dagger U$, or $(U^\dagger U)_{jk}$ with $j \neq k$, are equal to zero.

Let $v = e_j + e_k$ with $j \neq k$. Since $U$ is length-preserving we have

$$\|Uv\|^2 = \|v\|^2 = \|e_j + e_k\|^2 = 2$$

We also have $\|Uv\|^2 = v^\dagger U^\dagger U v$ where $v = e_j + e_k$. From the definition of the dagger operation and the fact that the nonzero entries of $e_j$ and $e_k$ have no imaginary parts, we have $v^\dagger = (e_j + e_k)^\dagger = e_j^\dagger + e_k^\dagger$.

We then have

$$v^\dagger U^\dagger U v = e_j^\dagger U^\dagger U e_j + e_j^\dagger U^\dagger U e_k + e_k^\dagger U^\dagger U e_j + e_k^\dagger U^\dagger U e_k = 2 + (U^\dagger U)_{jk} + (U^\dagger U)_{kj}$$

since we previously showed that all diagonal entries of $U^\dagger U$ are 1.

Since $\|Uv\|^2 = 2$ and also $\|Uv\|^2 = 2 + (U^\dagger U)_{jk} + (U^\dagger U)_{kj}$, we thus have $(U^\dagger U)_{jk} + (U^\dagger U)_{kj} = 0$ for $j \neq k$.

Now let $v = e_j + ie_k$ with $j \neq k$. Again we have $\|Uv\|^2 = \|v\|^2$ since $U$ is length-preserving, so that

$$\|Uv\|^2 = \|e_j + ie_k\|^2 = |1|^2 + |i|^2 = 2$$

Since $ie_k$ has an imaginary part for its (single) nonzero entry, in performing the dagger operation and taking complex conjugates we obtain $v^\dagger = (e_j + ie_k)^\dagger = e_j^\dagger - ie_k^\dagger$. We thus have

$$v^\dagger U^\dagger U v = (e_j^\dagger - ie_k^\dagger)\, U^\dagger U \,(e_j + ie_k) = 2 + i(U^\dagger U)_{jk} - i(U^\dagger U)_{kj}$$

We also have

$$\|Uv\|^2 = v^\dagger U^\dagger U v$$

Since $\|Uv\|^2 = 2$ we have $2 + i(U^\dagger U)_{jk} - i(U^\dagger U)_{kj} = 2$, or $i\left((U^\dagger U)_{jk} - (U^\dagger U)_{kj}\right) = 0$, so that $(U^\dagger U)_{jk} - (U^\dagger U)_{kj} = 0$.

But we showed above that $(U^\dagger U)_{jk} + (U^\dagger U)_{kj} = 0$. Adding the two equations, the terms for $(U^\dagger U)_{kj}$ cancel out and we get $2(U^\dagger U)_{jk} = 0$, or $(U^\dagger U)_{jk} = 0$ for $j \neq k$. So all nondiagonal entries of $U^\dagger U$ are equal to zero.

Since all diagonal entries of $U^\dagger U$ are equal to 1 and all nondiagonal entries of $U^\dagger U$ are equal to zero, we have $U^\dagger U = I$, and thus the matrix $U$ is unitary.

Since we assumed only that $U$ was a length-preserving matrix, we have thus shown that all length-preserving matrices are unitary.
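As an end-to-end numerical check of the conclusion (my own sketch; note it uses a unitary matrix to *produce* a length-preserving example, so it is a sanity check rather than independent evidence), we can verify on a random vector that lengths are preserved and that $U^\dagger U$ comes out as the identity:

```python
import numpy as np

rng = np.random.default_rng(42)

# A length-preserving test matrix: the unitary Q factor from a QR
# decomposition of a random complex matrix.
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
U, _ = np.linalg.qr(A)

# Check length preservation on a random complex vector...
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
lengths_match = np.isclose(np.linalg.norm(U @ v), np.linalg.norm(v))

# ...and check the conclusion of the proof: U† U = I.
is_unitary = np.allclose(U.conj().T @ U, np.eye(5))
```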

The notation and the terms in this proof are somewhat different from what we have been using in the linear algebra courses I have been taking (we call the “dagger”, for example, the complex conjugate or the Hermitian form of a matrix). Sorry that I did not go through all of your proof, but it inspired me a great deal in completing my own. Here is the one I came up with: https://github.com/apaksoy/alaff/blob/master/Theorem%202_2_4_4.pdf

I benefited quite a bit from studying Strang’s “Introduction to Linear Algebra, 5E” when I was taking the introductory linear algebra course, in addition to Axler’s “Linear Algebra Done Right, 3E”. I think both are great texts, but in quite different ways. Thx!

Thanks for your comment! Yes, the bra-ket notation can be somewhat difficult to understand if you’re not used to it, but it seems to be the prevalent notation for those working with complex matrices in the context of quantum mechanics. I looked at your proof and noted that the key steps are the same as in mine: you choose $x = e_j + e_k$ for one step in the proof and then $x = e_j + ie_k$ for a second step. That’s the secret to getting two equations that you can add together in order to cancel one of the terms and equate the remaining term to zero.

One of the homework problems in the linear algebra course I am taking implies there is a much shorter proof of this theorem if you are allowed to use the singular value decomposition. See Homework 2.5.1.8 at the following link if you would like to take a look: https://github.com/apaksoy/alaff/blob/master/HW%202_5_1.pdf

Thanks for pointing readers to this!
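For readers curious about the SVD route mentioned above, the idea (my own sketch of the argument, not the homework’s solution) is: if $U = W\Sigma V^\dagger$ is length-preserving, then applying $U$ to each right singular vector shows every singular value must equal 1, so $U^\dagger U = V\Sigma^2 V^\dagger = I$. A quick numerical illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# A length-preserving test matrix (unitary Q from a QR decomposition).
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
U, _ = np.linalg.qr(A)

# U maps each right singular vector to s_i times a unit vector, so if U
# preserves length, every singular value s_i must be 1.
s = np.linalg.svd(U, compute_uv=False)
```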