2D Rotation

I'm hoping with this post to show one way that complex numbers come about and how they are a natural way of representing rotations in 2D. To do this I will introduce a small amount of Clifford algebra and some infinite series expansions. A benefit of understanding rotations in this way will be that it generalizes to arbitrary dimensions, though you probably only care about the two and three dimensional cases.

A friend of mine recently asked me about 2D rotations. He had started with a 2x2 rotation matrix:

\[ \left( \begin{array}{cr} cos(\theta) & -sin(\theta)\\ sin(\theta) & cos(\theta)\\ \end{array} \right) \left( \begin{array}{c} x\\ y\\ \end{array} \right) = \left( \begin{array}{c} cos(\theta) x - sin(\theta) y\\ sin(\theta) x + cos(\theta) y\\ \end{array} \right) \]

After playing with it for a while he concluded that the second row of the matrix was redundant and you could just store the first row, or really any row or column. This reminded him of something I had said a while back about using complex numbers to represent rotations. This prompted me to write a first version of this post as an email in response. Then he wrote again saying that it seemed like the two component thing was the result of \(e^{i\theta}\). So I have added a section on the exponential map and Taylor series.
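His observation is easy to check numerically. Here is a quick sketch in plain Python (the function name is my own, nothing standard): the whole matrix application only ever uses the two numbers \(cos(\theta)\) and \(sin(\theta)\), so storing a single row or column really is enough.

```python
import math

def rotate_via_matrix(theta, x, y):
    """Apply the 2x2 rotation matrix for angle theta (radians) to (x, y)."""
    c, s = math.cos(theta), math.sin(theta)  # the only two values the matrix contains
    # Row-times-column products, exactly as in the equation above.
    return (c * x - s * y, s * x + c * y)

# Rotating (1, 0) by 90 degrees gives (0, 1), up to floating point rounding.
print(rotate_via_matrix(math.pi / 2, 1.0, 0.0))
```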

Complex Numbers

First let's start with a bit of a review of complex numbers. You've probably been introduced to them at some point as a number with two components, a real portion and an imaginary portion, where the imaginary portion is signified by having a lower case \(i\) associated with it.

\[(a + b i)\]

And they were probably being used to fill in a gap when someone tried to take the square root of a negative number. Probably it was right around when you learned the quadratic formula. Some people are introduced to complex numbers when they start playing with fractals; the Mandelbrot set, Julia sets, and others are defined in the complex plane. The complex plane is what you get if you treat the real and imaginary parts of a complex number as the \(x\) and \(y\) components of a 2D vector. A lot of neat results come from that mapping. Later I'll show a different way to associate the components of a complex number with the \(x\) and \(y\) components of a 2D vector, but first, let's remember what happens when you multiply two complex numbers together.

\[(a_1 + b_1 i) (a_2 + b_2 i)\]

You probably remember FOIL (First Outer Inner Last), which tells you how to go about doing this multiplication. First multiply the a's together, then the outer elements \((a_1 \text{ and } b_2 i)\), then the inner elements \((b_1 i \text{ and } a_2)\), and finally the last elements, the b's.

\[(a_1 a_2) + (a_1 b_2 i) + (b_1 i a_2) + (b_1 i b_2 i)\]

With complex numbers we're free to move the i's around inside each product, so we'll move them to the end of each one, and outside of the parentheses.

\[(a_1 a_2) + (a_1 b_2) i + (b_1 a_2) i + (b_1 b_2) i i\]

The \(i i\) that we get from the multiplication of the two imaginary components \((b_1 \text{ and } b_2)\) of the two complex numbers could also be written \(i^2\). The whole reason for inventing \(i\) was to provide an answer to the question of what \(x\) is in the following.

\[x = \sqrt{-1}\]

So, \(i^2 = -1\), and thus the result of multiplying the imaginary components of two complex numbers together is a real number, just one with the opposite sign from the one it would otherwise have. Taking this into account, and merging the two terms in the middle that each have a single \(i\) in them, we can simplify the result of multiplying two complex numbers down to the following.

\[(a_1 a_2) + (a_1 b_2 + b_1 a_2) i - (b_1 b_2)\]

Finally, moving the imaginary portion to the end we get:

\[(a_1 a_2 - b_1 b_2) + (a_1 b_2 + b_1 a_2) i\]

Which is another complex number \((a_3 + b_3 i)\) where \(a_3\) and \(b_3\) are defined as:

\[\begin{align*} a_3 &= a_1 a_2 - b_1 b_2 \\ b_3 &= a_1 b_2 + b_1 a_2 \end{align*}\]
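As a sanity check, here is a minimal sketch in Python (the function name `complex_mul` is just an illustrative choice) comparing the component formulas above against Python's built-in complex type:

```python
def complex_mul(a1, b1, a2, b2):
    """Multiply (a1 + b1 i) by (a2 + b2 i), returning (a3, b3)."""
    return (a1 * a2 - b1 * b2, a1 * b2 + b1 * a2)

# The component formula agrees with Python's built-in complex multiplication.
print(complex_mul(1.0, 2.0, 3.0, 4.0))  # (-5.0, 10.0)
print((1 + 2j) * (3 + 4j))              # (-5+10j)
```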

With that basic review of complex numbers, let's move on to talk about the Clifford algebra.

Clifford Algebra

Above we had said that you could make a 2D plane by associating the real and imaginary components of a complex number with the \(x\) and \(y\) elements of a 2D vector. But we didn't really talk about what that meant or how you could distinguish an \(x\) component from a \(y\) component. There has been an enormous amount written about this subject, and I highly recommend Geometrical Vectors by Gabriel Weinreich. But for now, I'm going to introduce two new things, sort of like \(i\) from the complex numbers, called \(\mathbf{e_1}\) and \(\mathbf{e_2}\), and we will use them to distinguish between the two components of a 2D vector. So our vectors will look like this: \((x \mathbf{e_1} + y \mathbf{e_2})\). These things, which we'll call bases, have two properties: a base times itself equals one, and swapping the position of two bases in a term negates the term.

\[ \mathbf{e_i e_i} = 1\\ \mathbf{e_i e_j} = - \mathbf{e_j e_i} \]
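Those two rules are all you need to compute products in this algebra, so they are easy to encode. Here is a minimal sketch (my own toy encoding, not a library): a general element is a scalar plus an \(\mathbf{e_1}\) part plus an \(\mathbf{e_2}\) part plus an \(\mathbf{e_1 e_2}\) part, and the product below is what falls out of applying the two rules to all sixteen term-by-term products.

```python
from dataclasses import dataclass

@dataclass
class Cl2:
    """An element of the 2D Clifford algebra: s + x e1 + y e2 + b e1e2."""
    s: float = 0.0  # scalar part
    x: float = 0.0  # e1 part
    y: float = 0.0  # e2 part
    b: float = 0.0  # e1e2 part

    def __mul__(self, o):
        # Each coefficient collects the term-by-term products after
        # applying e_i e_i = 1 and e_i e_j = -e_j e_i.
        return Cl2(
            s=self.s * o.s + self.x * o.x + self.y * o.y - self.b * o.b,
            x=self.s * o.x + self.x * o.s - self.y * o.b + self.b * o.y,
            y=self.s * o.y + self.y * o.s + self.x * o.b - self.b * o.x,
            b=self.s * o.b + self.b * o.s + self.x * o.y - self.y * o.x,
        )

e1, e2 = Cl2(x=1.0), Cl2(y=1.0)
print(e1 * e1)  # s=1.0, everything else 0: a base times itself is one
print(e1 * e2)  # b=1.0: the e1e2 object we'll meet below
print(e2 * e1)  # b=-1.0: swapping the two bases negates the term
```

With the rules in hand, let's get back to the vectors themselves.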

These vectors behave just like you would expect a vector to behave. You can add and subtract them.

\[ (x_1 \mathbf{e_1} + y_1 \mathbf{e_2}) + (x_2 \mathbf{e_1} + y_2 \mathbf{e_2}) = ((x_1 + x_2) \mathbf{e_1} + (y_1 + y_2) \mathbf{e_2}) \]

And you can multiply them by scalars.

\[ c (x_1 \mathbf{e_1} + y_1 \mathbf{e_2}) = (c x_1 \mathbf{e_1} + c y_1 \mathbf{e_2}) \]

But they also have another trick: if we multiply two of these vectors together, we get an interesting new object.

\[(x_1 \mathbf{e_1} + y_1 \mathbf{e_2}) (x_2 \mathbf{e_1} + y_2 \mathbf{e_2})\]

Using FOIL you get:

\[(x_1 x_2) \mathbf{e_1 e_1} + (x_1 y_2) \mathbf{e_1 e_2} + (y_1 x_2) \mathbf{e_2 e_1} + (y_1 y_2) \mathbf{e_2 e_2}\]

Since \(\mathbf{e_1 e_1} = 1\) and \(\mathbf{e_2 e_2} = 1\) the first and last terms turn into scalars and you get

\[(x_1 x_2) + (x_1 y_2) \mathbf{e_1 e_2} + (y_1 x_2) \mathbf{e_2 e_1} + (y_1 y_2)\]

Now, the middle two terms can be combined by swapping the \(\mathbf{e_2 e_1}\) in the second term to be \(\mathbf{e_1 e_2}\) and negating the whole term. So the result is

\[(x_1 x_2) + (x_1 y_2 - y_1 x_2) \mathbf{e_1 e_2} + (y_1 y_2)\]

Finally, grouping the first and last terms together you get

\[(x_1 x_2 + y_1 y_2) + (x_1 y_2 - y_1 x_2) \mathbf{e_1 e_2}\]

We see that the first term is just the dot product of the two vectors, and the second term is just the 2D cross product[1] times something weird, this \(\mathbf{e_1 e_2}\) term. So we've got some new odd object here: a scalar plus a scalar times a strange pair of bases. If we replace the dot and cross product values with the letters \(a\) and \(b\) we have \(a + b \mathbf{e_1 e_2}\).

As you may know, the dot product of two vectors is equal to the cosine of the angle between them times the length of each vector. Similarly, the length of the cross product of two vectors is equal to the sine of the angle between them times the length of each vector. These sines and cosines allow us to get back to rotation.
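Here is a quick numerical sketch of that claim (plain Python; the function name is mine). For two unit vectors, the scalar part of the product should come out as the cosine of the angle between them and the \(\mathbf{e_1 e_2}\) part as the sine:

```python
import math

def vector_product(x1, y1, x2, y2):
    """Geometric product of two 2D vectors: (scalar part, e1e2 part)."""
    return (x1 * x2 + y1 * y2, x1 * y2 - y1 * x2)

# Two unit vectors, 0.5 radians apart.
u = (math.cos(0.2), math.sin(0.2))
v = (math.cos(0.7), math.sin(0.7))
dot, wedge = vector_product(*u, *v)
print(dot, math.cos(0.5))    # both ~0.8776
print(wedge, math.sin(0.5))  # both ~0.4794
```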

2D Rotation Revisited

Now what happens if we multiply one of our original vectors times one of these?

\[(x \mathbf{e_1} + y \mathbf{e_2}) (a + b \mathbf{e_1 e_2})\]

We get:

\[(a x) \mathbf{e_1} + (b x) \mathbf{e_1 e_1 e_2} + (a y) \mathbf{e_2} + (b y) \mathbf{e_2 e_1 e_2}\]

Which, after simplifying \(\mathbf{e_1 e_1 e_2}\) to \(\mathbf{e_2}\) and \(\mathbf{e_2 e_1 e_2}\) to \(-\mathbf{e_1}\) (one swap, one negation), is:

\[(a x - b y) \mathbf{e_1} + (b x + a y) \mathbf{e_2}\]

Which looks a lot like the application of a rotation matrix.[2] Especially when you remember that if the two vectors we started with are unit length, then \(a\) and \(b\) are equal to the cosine and sine of the angle between them. In which case we have:

\[a = cos(\theta)\\ b = sin(\theta)\]

and after substitution:

\[ \begin{align*} &(cos(\theta) x - sin(\theta) y) \mathbf{e_1} +\\ &(sin(\theta) x + cos(\theta) y) \mathbf{e_2} \end{align*} \]
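In code, this is the same computation as the matrix application at the top of the post, but the rotation is stored as two numbers instead of four (a sketch, with names of my own choosing):

```python
import math

def rotate(a, b, x, y):
    """Rotate (x, y) by the rotation whose cosine is a and sine is b."""
    return (a * x - b * y, b * x + a * y)

theta = math.pi / 6
print(rotate(math.cos(theta), math.sin(theta), 1.0, 0.0))
# approximately (0.866, 0.5): (1, 0) rotated by 30 degrees
```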

Complex Numbers Revisited

You can probably see where this is going. Let's try multiplying two of these 'scalar plus scalar times base pairs' together.

\[ (a_1 + b_1 \mathbf{e_1 e_2}) (a_2 + b_2 \mathbf{e_1 e_2}) \]

Again, using FOIL we get:

\[ (a_1 a_2) + (a_1 b_2) \mathbf{e_1 e_2} + b_1 \mathbf{e_1 e_2} a_2 + b_1 \mathbf{e_1 e_2} b_2 \mathbf{e_1 e_2} \]

It is fine to move the scalars around, so let's clean this up:

\[ (a_1 a_2) + (a_1 b_2) \mathbf{e_1 e_2} + (a_2 b_1) \mathbf{e_1 e_2} + (b_1 b_2) \mathbf{e_1 e_2 e_1 e_2} \]

We can combine the two \(\mathbf{e_1 e_2}\) terms:

\[ (a_1 a_2) + (a_1 b_2 + a_2 b_1) \mathbf{e_1 e_2} + (b_1 b_2) \mathbf{e_1 e_2 e_1 e_2} \]

Next we deal with the \(\mathbf{e_1 e_2 e_1 e_2}\) term. If we swap the middle \(\mathbf{e_2 e_1}\) into \(\mathbf{e_1 e_2}\) and negate the whole thing we get \(-\mathbf{e_1 e_1 e_2 e_2}\). And since a base times itself is \(1\) we have \(-1 * 1\) or \(-1\). So, the last term can be replaced with \(-(b_1 b_2)\), causing the entire thing to turn into:

\[ \begin{align*} &(a_1 a_2 - b_1 b_2) +\\ &(a_1 b_2 + a_2 b_1) \mathbf{e_1 e_2} \end{align*} \]

And poof, we have another one of these scalar plus scalar times \(\mathbf{e_1 e_2}\) things. The fact that \(\mathbf{e_1 e_2 e_1 e_2} = -1\) should have given it away: \(\mathbf{e_1 e_2}\) could also be called \(i\) and these are complex numbers (or at least they behave like them, which in some sense means that they are them).

If complex numbers really do represent rotations in 2D then we would expect that there would be a way to combine two rotations that are represented by complex numbers, resulting in a single complex number that represents the combined rotation. And you are probably not surprised to learn that there is, and it is just complex multiplication (which we worked out above). To show this let's assume that \((a_1 + b_1 \mathbf{e_1 e_2})\) represents a rotation by \(\theta\) radians and \((a_2 + b_2 \mathbf{e_1 e_2})\) represents a rotation by \(\phi\) radians. So:

\[ a_1 = cos(\theta)\\ b_1 = sin(\theta)\\ a_2 = cos(\phi)\\ b_2 = sin(\phi) \]

If we rewrite the result of multiplying two complex numbers together and substitute these cosines and sines in we get:

\[ \begin{align*} &(cos(\theta) cos(\phi) - sin(\theta) sin(\phi)) +\\ &(cos(\theta) sin(\phi) + cos(\phi) sin(\theta)) \mathbf{e_1 e_2} \end{align*} \]

Now we have to remember (or re-derive, but this post is getting too long as it is) the angle-sum formulas for sine and cosine. They are, probably not surprisingly:

\[ \begin{align*} cos(\theta + \phi) &= cos(\theta) cos(\phi) - sin(\theta) sin(\phi)\\ sin(\theta + \phi) &= cos(\theta) sin(\phi) + cos(\phi) sin(\theta)\\ \end{align*} \]

And thus, we can rewrite the product of two complex numbers that represent rotations as:

\[cos(\theta + \phi) + sin(\theta + \phi) \mathbf{e_1 e_2}\]

And indeed, the product of two complex numbers that represent rotations is itself a rotation, one that is the sum of the two independent rotations.
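One more quick sketch to check this numerically (again, the names here are just illustrative):

```python
import math

def compose(a1, b1, a2, b2):
    """Combine two rotations stored as (cos, sin) pairs by complex multiplication."""
    return (a1 * a2 - b1 * b2, a1 * b2 + b1 * a2)

theta, phi = 0.3, 1.1
a3, b3 = compose(math.cos(theta), math.sin(theta), math.cos(phi), math.sin(phi))
print(a3, math.cos(theta + phi))  # both ~0.1700
print(b3, math.sin(theta + phi))  # both ~0.9854
```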

Exponential Map

The exponential map is a special function that is its own derivative. It is fascinating, and I recommend reading more about it; here I'm just going to quickly show one fantastic result. I'll need to use the words Taylor series and infinity, but I'm not going to justify them much. The exponential map can be thought of as an infinite sum of terms, or as the exponentiation \(e^x\).

\[exp(x) = e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}\]
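Truncating that infinite sum after a handful of terms already gives a good approximation, which makes it easy to play with (a sketch; twenty terms is an arbitrary choice that happens to be plenty for small inputs):

```python
import math

def exp_series(x, terms=20):
    """Approximate e^x by summing the first `terms` terms of the series."""
    total, term = 0.0, 1.0  # term holds x**k / k!
    for k in range(terms):
        total += term
        term *= x / (k + 1)  # turn x**k / k! into x**(k+1) / (k+1)!
    return total

print(exp_series(1.0), math.e)         # both ~2.718281828
print(exp_series(2.5), math.exp(2.5))  # both ~12.182
```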

This function is valid for all complex numbers, but we will limit the input to just the imaginary numbers (the imaginary portion of a complex number). To make the notation a little simpler I'll use \(i\) instead of \(\mathbf{e_1 e_2}\). I'll show that by doing this we effectively map an infinite line to the circle of radius one centered at the origin of the complex plane. Points on the line can be thought of as representing angles in radians. And every time we move \(2 \pi\) along the line we make one full loop around the circle. So, there are many points on the line that map to the same point on the circle. These all represent the same rotation.

\[exp(x i) = e^{x i} = \sum_{k=0}^{\infty} \frac{(x i)^k}{k!}\]

First, let's look at the \((x i)^k\) term. The first few terms in the infinite sum and the generalization for an arbitrary value of \(k\) are:

\[ \begin{align*} (x i)^0 &= x^0 = 1\\ (x i)^1 &= x^1 i = x i\\ (x i)^2 &= x^2 i i = -x^2\\ (x i)^3 &= x^3 i i i = -x^3 i\\ &\vdots\\ (x i)^k &= x^k i^k = \begin{cases} (-1)^{\frac{k}{2}} x^k & k\;\text{is even} \\[1ex] (-1)^{\frac{k-1}{2}} x^k i & k\;\text{is odd} \end{cases}\\ \end{align*} \]
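That case analysis is easy to spot-check against Python's built-in imaginary unit (a throwaway check; \(x = 0.5\) is an arbitrary choice):

```python
x = 0.5
for k in range(6):
    direct = (x * 1j) ** k
    if k % 2 == 0:
        predicted = (-1) ** (k // 2) * x ** k             # a pure real number
    else:
        predicted = (-1) ** ((k - 1) // 2) * x ** k * 1j  # a pure imaginary number
    print(k, direct, predicted)
```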

If we group all of the even and all of the odd terms together we get:

\[ e^{x i} = \sum_{k=0,2,4,\ldots}^{\infty} \frac{(-1)^{\frac{k}{2}} x^k}{k!} + \sum_{k=1,3,5,\ldots}^{\infty} \frac{(-1)^{\frac{k-1}{2}} x^k}{k!} i \]

We can turn the sums back into simple sums from zero to infinity by transforming \(k\) in each of them. In the first one we can replace \(k\) with \(2k\), and in the second we can replace \(k\) with \(2k+1\). The result is:

\[ e^{x i} = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!} + \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!} i \]

It turns out that the first sum is actually the Taylor series expansion for \(cos(x)\) and the second sum is the Taylor series expansion for \(sin(x)\). So this can be rewritten as:

\[ e^{x i} = cos(x) + sin(x) i \]

This beautiful result is known as Euler's formula.
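Python's cmath module will happily confirm this (one last sketch), and multiplying by \(e^{i\theta}\) does exactly the rotation we derived above:

```python
import cmath
import math

theta = 1.2
print(cmath.exp(theta * 1j))                      # (0.3623...+0.9320...j)
print(complex(math.cos(theta), math.sin(theta)))  # the same value

# Multiplying by e^(i theta) rotates a point in the complex plane.
p = complex(1.0, 0.0)
print(p * cmath.exp(1j * math.pi / 2))  # ~1j: (1, 0) rotated 90 degrees
```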

Further Reading

If you want to go deeper on the geometry behind all of this, Geometrical Vectors by Gabriel Weinreich, mentioned above, is the place I'd start.

  1. In 2D the cross product as we normally think of it doesn't exist, but the value above can be thought of as the Z component of a 3D cross product of two vectors that lie in the XY plane. Since the two vectors have zero Z components, the X and Y components of their cross product are zero, so the overall cross product length is just the length of the Z portion.

  2. There is a bit more subtlety to this: the actual group action we care about is \(A T A^*\),[3] where \(A\) is a complex number, \(T\) is a vector, and \(A^*\) is the conjugate of the complex number. That action works even if the complex number isn't unit length, and it generalizes to higher dimensions (\(T A\), what we did here, doesn't).

  3. The construction of a complex number as the product of two vectors using the odd base combination rules gives a cool way to ask for the rotation between two vectors. If you use the full \(A T A^*\) group operation, though, the rotation is twice the angle between the two vectors. This turns out to be important in higher dimensions.
