

of the initial position and velocity. Beyond this, very little is known about the
behaviour of the trajectories for generic V .
Suppose now that the potential function V is rotationally symmetric, i.e. that V
depends only on the distance from the origin and, for the sake of simplicity, let us take
n = 3 as well. This is classically called the case of a central force field in space. If we let
V(x) = ½v(|x|²), then the equations of motion become

ẍ = −v′(|x|²) x.

As conserved quantities, i.e., functions of the position and velocity which stay constant on
any solution of the equation, we still have the energy E = ½(|ẋ|² + v(|x|²)), but it is also
easy to see that the vector-valued function x × ẋ is conserved, since

d/dt (x × ẋ) = ẋ × ẋ − x × v′(|x|²) x = 0.

Call this vector-valued function µ. We can think of E and µ as functions on the phase
space R⁶. For generic values of E₀ and µ₀, the simultaneous level set

Σ_{E₀,µ₀} = { (x, ẋ) | E(x, ẋ) = E₀, µ(x, ẋ) = µ₀ }

of these functions cuts out a surface Σ_{E₀,µ₀} ⊂ R⁶, and any integral of the equations of motion
must lie in one of these surfaces. Since we know a great deal about integrals of ODEs on

surfaces, this problem is very tractable (see Lecture 4 and its exercises for more details
on this).
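These conservation laws are easy to confirm numerically. The sketch below (all names are ad hoc) integrates the central-force equation ẍ = −v′(|x|²)x with the illustrative choice v(s) = s, so the force is simply −x, and checks that E and µ = x × ẋ stay essentially constant along the trajectory.

```python
import math

def accel(x):
    # illustrative potential v(s) = s, i.e. V(x) = |x|^2/2, so v'(s) = 1 and xdd = -x
    return [-xi for xi in x]

def deriv(state):
    x, xd = state[:3], state[3:]
    return xd + accel(x)

def rk4_step(state, h):
    # one classical Runge-Kutta step for the first-order system (x, xdot)
    k1 = deriv(state)
    k2 = deriv([s + 0.5*h*k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5*h*k for s, k in zip(state, k2)])
    k4 = deriv([s + h*k for s, k in zip(state, k3)])
    return [s + h*(a + 2*b + 2*c + d)/6.0
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def energy(state):
    x, xd = state[:3], state[3:]
    return 0.5*sum(v*v for v in xd) + 0.5*sum(v*v for v in x)

def ang_mom(state):
    x, xd = state[:3], state[3:]
    return [x[1]*xd[2] - x[2]*xd[1],
            x[2]*xd[0] - x[0]*xd[2],
            x[0]*xd[1] - x[1]*xd[0]]

state = [1.0, 0.0, 0.0, 0.0, 1.0, 0.5]   # arbitrary initial position and velocity
E0, mu0 = energy(state), ang_mom(state)
for _ in range(1000):
    state = rk4_step(state, 0.01)
E1, mu1 = energy(state), ang_mom(state)
```

Up to the small drift of the integrator, both E and all three components of µ come back unchanged.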
The function µ, known as the angular momentum, is called a first integral of the
second-order ODE for x(t), and somehow seems to correspond to the rotational symmetry
of the original ODE. This vague relationship will be considerably sharpened and made
precise in the upcoming lectures.

The relationship between symmetry and solvability in differential equations is profound
and far reaching. The subjects which are now known as Lie groups and symplectic
geometry got their beginnings from the study of symmetries of systems of ordinary
differential equations and of integration techniques for them.
By the middle of the nineteenth century, Galois theory had clarified the relationship
between the solvability of polynomial equations by radicals and the group of “symmetries”
of the equations. Sophus Lie set out to do the same thing for differential equations and
their symmetries.
Here is a “dictionary” showing the (rough) correspondence which Lie developed be-
tween these two achievements of nineteenth century mathematics.
Galois theory            infinitesimal symmetries
finite groups            continuous groups
polynomial equations     differential equations
solvable by radicals     solvable by quadrature

Although the full explanation of these correspondences must await the later lectures, we
can at least begin the story in the simplest examples as motivation for developing the
general theory. This is what I shall do for the rest of today's lecture.

Classical Integration Techniques. The very simplest ordinary differential equation
that we ever encounter is the equation

(1) ẋ(t) = α(t)

where α is a known function of t. The solution of this differential equation is simply

x(t) = x₀ + ∫₀ᵗ α(τ) dτ.

The process of computing an integral was known as “quadrature” in the classical literature
(a reference to the quadrangles appearing in what we now call Riemann sums), so it was
said that (1) was “solvable by quadrature”. Note that, once one finds a particular solution,
all of the others are obtained by simply translating the particular solution by a constant, in this
case, by x₀. Alternatively, one could say that the equation (1) itself was invariant under
“translation in x”.
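In computational terms, “solving by quadrature” is just numerical integration. A minimal sketch, with our own helper names and the illustrative test case α(t) = cos t:

```python
import math

def solve_by_quadrature(alpha, x0, ts):
    # x(t) = x0 + integral_0^t alpha(tau) dtau, approximated by the trapezoid rule
    xs, acc = [x0], 0.0
    for t0, t1 in zip(ts, ts[1:]):
        acc += 0.5*(alpha(t0) + alpha(t1))*(t1 - t0)
        xs.append(x0 + acc)
    return xs

ts = [i*0.001 for i in range(1001)]
xs = solve_by_quadrature(math.cos, 2.0, ts)
# for alpha(t) = cos t the exact solution is x(t) = 2 + sin t
```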
The next most trivial case is the homogeneous linear equation

(2) ẋ = β(t) x.

This equation is invariant under scale transformations x → rx. Since the mapping
log: R⁺ → R converts scaling to translation, it should not be surprising that the differential
equation (2) is also solvable by a quadrature:

x(t) = x₀ exp(∫₀ᵗ β(τ) dτ).

Note that, again, the symmetries of the equation su¬ce to allow us to deduce the general
solution from the particular.
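The same trapezoid-rule quadrature verifies this formula numerically; below we take the illustrative choice β(t) = t, for which the exponent ∫₀ᵗ τ dτ = t²/2 is known in closed form (the helper names are ours).

```python
import math

def solve_homogeneous(beta, x0, ts):
    # x(t) = x0 * exp(integral_0^t beta(tau) dtau), integral via the trapezoid rule
    xs, acc = [x0], 0.0
    for t0, t1 in zip(ts, ts[1:]):
        acc += 0.5*(beta(t0) + beta(t1))*(t1 - t0)
        xs.append(x0*math.exp(acc))
    return xs

ts = [i*0.001 for i in range(1001)]
xs = solve_homogeneous(lambda t: t, 3.0, ts)
# for beta(t) = t the exponent is t^2/2, so x(1) should be 3 e^(1/2)
```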
Next, consider an equation where the right hand side is an affine function of x,

(3) ẋ = α(t) + β(t) x.

This equation is still solvable in full generality, using two quadratures. For, if we set

x(t) = u(t) exp(∫₀ᵗ β(τ) dτ),

then u satisfies u̇ = α(t) exp(−∫₀ᵗ β(τ) dτ), which can be solved for u by another quadrature.
It is not at all clear why one can somehow “combine” equations (1) and (2) and get an
equation which is still solvable by quadrature, but this will become clear in Lecture 3.
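The two-quadrature recipe can be checked directly. In the sketch below (helper names ours), we take the constant coefficients α = β = 1, for which the closed-form solution is x(t) = (x₀ + 1)eᵗ − 1.

```python
import math

def solve_affine(alpha, beta, x0, ts):
    # first quadrature: B(t) = integral_0^t beta; second quadrature for u' = alpha*exp(-B);
    # then x = u * exp(B); both integrals by the trapezoid rule
    xs, B, u = [x0], 0.0, x0
    for t0, t1 in zip(ts, ts[1:]):
        h = t1 - t0
        B_next = B + 0.5*(beta(t0) + beta(t1))*h
        u += 0.5*(alpha(t0)*math.exp(-B) + alpha(t1)*math.exp(-B_next))*h
        B = B_next
        xs.append(u*math.exp(B))
    return xs

ts = [i*0.001 for i in range(1001)]
xs = solve_affine(lambda t: 1.0, lambda t: 1.0, 0.5, ts)
# for alpha = beta = 1 and x0 = 0.5, the closed form gives x(1) = 1.5 e - 1
```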
Now consider an equation with a quadratic right-hand side, the so-called Riccati equation:

(4) ẋ = α(t) + 2β(t) x + γ(t) x².

It can be shown that there is no method for solving this by quadratures and algebraic
manipulations alone. However, there is a way of obtaining the general solution from a
particular solution. If s(t) is a particular solution of (4), try the ansatz x(t) = s(t) +
1/u(t). The resulting differential equation for u has the form (3) and hence is solvable by
quadratures.
The equation (4), known as the Riccati equation, has an extensive history, and we
will return to it often. Its remarkable property, that given one solution we can obtain the
general solution, should be contrasted with the case of

(5) ẋ = α(t) + β(t) x + γ(t) x² + δ(t) x³.

For equation (5), one solution does not give you the rest of the solutions. There is in fact a
world of difference between this and the Riccati equation, although this is far from evident
looking at them.
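The reduction to (3) is easy to watch numerically. Here is a sketch (setup and names ours) for the particular Riccati equation ẋ = 1 + x², i.e. α = γ = 1 and β = 0, which has the particular solution s(t) = tan t; the substitution x = s + 1/u then gives the affine equation u̇ = −2u tan t − 1, and the resulting x should agree with the known general solution tan(t + c).

```python
import math

def udot(t, u):
    # the affine equation obtained from xdot = 1 + x^2 via x = tan(t) + 1/u
    return -2.0*math.tan(t)*u - 1.0

def rk4(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h*k1/2)
    k3 = f(t + h/2, y + h*k2/2)
    k4 = f(t + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

u, h = 2.0, 1e-3
for i in range(500):            # integrate u from t = 0 to t = 0.5
    u = rk4(udot, i*h, u, h)

x_num = math.tan(0.5) + 1.0/u
# the general solution is tan(t + c), with c fixed by x(0) = 1/u(0) = tan(c)
x_exact = math.tan(0.5 + math.atan(0.5))
```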
Before leaving these simple ODEs, we note the following curious progression: If x₁ and
x₂ are solutions of an equation of type (1), then clearly the difference x₁ − x₂ is constant.
Similarly, if x₁ and x₂ ≠ 0 are solutions of an equation of type (2), then the ratio x₁/x₂
is constant. Furthermore, if x₁, x₂, and x₃ ≠ x₁ are solutions of an equation of type (3),
then the expression (x₁ − x₂)/(x₁ − x₃) is constant. Finally, if x₁, x₂, x₃ ≠ x₁, and x₄ ≠ x₂
are solutions of an equation of type (4), then the cross-ratio

(x₁ − x₂)(x₄ − x₃) / ((x₁ − x₃)(x₄ − x₂))

is constant. There is no such corresponding expression (for any number of particular
solutions) for equations of type (5). The reason for this will be made clear in Lecture 3.
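For the Riccati equation ẋ = 1 + x², whose solutions are x(t) = tan(t + c), the constancy of the cross-ratio is easy to confirm numerically (an illustrative check, with our own helper names):

```python
import math

def riccati_sol(c):
    # the solutions of xdot = 1 + x^2 are x(t) = tan(t + c)
    return lambda t: math.tan(t + c)

def cross_ratio(x1, x2, x3, x4):
    return (x1 - x2)*(x4 - x3) / ((x1 - x3)*(x4 - x2))

xs = [riccati_sol(c) for c in (0.1, 0.4, 0.7, 1.0)]
r0 = cross_ratio(*(x(0.0) for x in xs))   # cross-ratio at t = 0
r1 = cross_ratio(*(x(0.3) for x in xs))   # cross-ratio at t = 0.3
```

The two values agree to machine precision, even though the four solutions themselves change with t.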
For right now, we just want to remark on the fact that the linear fractional transformations
of the real line, a group isomorphic to SL(2, R), are exactly the transformations which
leave fixed the cross-ratio of any four points. As we shall see, the group SL(2, R) is closely
connected with the Riccati equation and it is this connection which accounts for many of
the special features of this equation.

We will conclude this lecture by discussing the group of rigid motions in Euclidean
3-space. These are transformations of the form

T (x) = R x + t,

where R is a rotation in E³ and t ∈ E³ is any vector. It is easy to check that the set of
rigid motions forms a group under composition which is, in fact, isomorphic to the group
of 4-by-4 matrices

⎛ R  t ⎞
⎝ 0  1 ⎠ ,    where RᵗR = I₃ and t ∈ R³.

(Topologically, the group of rigid motions is just the product O(3) × R³.)
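One can spot-check this isomorphism by hand: composing two rigid motions should match multiplying the corresponding 4-by-4 matrices. The sketch below (illustrative data and helper names) does this for a rotation about the z-axis and two translations.

```python
import math

def mat4(R, t):
    # embed the rigid motion x -> R x + t as the 4-by-4 matrix (R t; 0 1)
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def apply_motion(R, t, x):
    return [sum(R[i][j]*x[j] for j in range(3)) + t[i] for i in range(3)]

th = 0.7
Rz = [[math.cos(th), -math.sin(th), 0.0],
      [math.sin(th),  math.cos(th), 0.0],
      [0.0,           0.0,          1.0]]
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t1, t2 = [1.0, 2.0, 3.0], [-0.5, 0.0, 4.0]
x = [0.3, -1.2, 2.5]

# composing T2 after T1 should agree with multiplying the 4-by-4 matrices
lhs = apply_motion(Rz, t2, apply_motion(I3, t1, x))
M = matmul(mat4(Rz, t2), mat4(I3, t1))
rhs = [sum(M[i][j]*([*x, 1.0])[j] for j in range(4)) for i in range(3)]
```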
Now, suppose that we are asked to solve for a curve x: R → R³ with a prescribed
curvature κ(t) and torsion τ(t). If x were such a curve, then we could calculate the
curvature and torsion by defining an oriented orthonormal basis (e₁, e₂, e₃) along the curve,
satisfying ẋ = e₁, ė₁ = κ e₂, ė₂ = −κ e₁ + τ e₃. (Think of the torsion as measuring how ė₂
falls away from the e₁e₂-plane.) Form the 4-by-4 matrix

    X = ⎛ e₁  e₂  e₃  x ⎞
        ⎝ 0   0   0   1 ⎠ ,

(where we always think of vectors in R³ as columns). Then we can express the ODE for
prescribed curvature and torsion as
    Ẋ = X ⎛ 0  −κ   0   1 ⎞
          ⎜ κ   0  −τ   0 ⎟
          ⎜ 0   τ   0   0 ⎟ .
          ⎝ 0   0   0   0 ⎠

We can think of this as a linear system of equations for a curve X(t) in the group of rigid
motions.
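As a concrete check on this formulation, one can integrate the matrix system numerically for constant κ and τ and verify that the frame (e₁, e₂, e₃) read off from the columns of X stays orthonormal, as it must, since the upper-left 3 × 3 block of the coefficient matrix is skew-symmetric. The code below is an illustrative sketch with ad hoc names.

```python
def frenet_A(kappa, tau):
    # the coefficient matrix from the text, for constant kappa and tau
    return [[0.0, -kappa, 0.0, 1.0],
            [kappa, 0.0, -tau, 0.0],
            [0.0, tau, 0.0, 0.0],
            [0.0, 0.0, 0.0, 0.0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(A, B, s=1.0):
    # entrywise A + s*B
    return [[a + s*b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def rk4_step(X, A, h):
    # one Runge-Kutta step for the linear system Xdot = X A
    k1 = matmul(X, A)
    k2 = matmul(madd(X, k1, h/2), A)
    k3 = matmul(madd(X, k2, h/2), A)
    k4 = matmul(madd(X, k3, h), A)
    incr = madd(madd(k1, k4), madd(k2, k3), 2.0)   # k1 + k4 + 2*(k2 + k3)
    return madd(X, incr, h/6)

X = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]  # start at the identity
A = frenet_A(1.0, 0.5)
for _ in range(200):
    X = rk4_step(X, A, 0.01)

e1 = [X[i][0] for i in range(3)]
e2 = [X[i][1] for i in range(3)]
dot = lambda a, b: sum(p*q for p, q in zip(a, b))
```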
It is going to turn out that, just as in the case of the Riccati equation, the prescribed
curvature and torsion equations cannot be solved by algebraic manipulations and quadrature
alone. However, once we know one solution, all other solutions for that particular
(κ(t), τ(t)) can be obtained by rigid motions. In fact, though, we are going to see that one
does not have to know a solution to the full set of equations before finding the rest of the
solutions by quadrature, but only a solution to an equation connected to SO(3) in just the
same way that the Riccati equation is connected to SL(2, R), the group of transformations
of the line which fix the cross-ratio of four points.
In fact, as we are going to see, µ “comes from” the group of rotations in three dimensions,
which are symmetries of the ODE because they preserve V . That is, V(R(x)) = V(x)
whenever R is a linear transformation satisfying RᵗR = I. The equation RᵗR = I describes
a locus in the space of 3 × 3 matrices. Later on we will see this locus is a smooth compact
3-manifold, which is also a group, called O(3). The group of rotations, and generalizations
thereof, will play a central role in subsequent lectures.

Lecture 2:

Lie Groups and Lie Algebras

Lie Groups. In this lecture, I define and develop some of the basic properties of the
central objects of interest in these lectures: Lie groups and Lie algebras.
Definition 1: A Lie group is a pair (G, µ) where G is a smooth manifold and µ: G × G → G
is a smooth mapping which gives G the structure of a group.
When the multiplication µ is clear from context, we usually just say “G is a Lie group.”
Also, for the sake of notational sanity, I will follow the practice of writing µ(a, b) simply as
ab whenever this will not cause confusion. I will usually denote the multiplicative identity
by e ∈ G and the multiplicative inverse of a ∈ G by a⁻¹ ∈ G.
Most of the algebraic constructions in the theory of abstract groups have straightforward
analogues for Lie groups:
Definition 2: A Lie subgroup of a Lie group G is a subgroup H ⊂ G which is also a
submanifold of G. A Lie group homomorphism is a group homomorphism φ: H → G which
is also a smooth mapping of the underlying manifolds.

Here is the prototypical example of a Lie group:
Example: The General Linear Group. The (real) general linear group in dimension
n, denoted GL(n, R), is the set of invertible n-by-n real matrices regarded as an open
submanifold of the n²-dimensional vector space of all n-by-n real matrices with multiplication
map µ given by matrix multiplication: µ(a, b) = ab. Since the matrix product ab is
defined by a formula which is polynomial in the matrix entries of a and b, it is clear that
GL(n, R) is a Lie group.
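Since det is itself a polynomial in the matrix entries and det(ab) = det(a) det(b), the product of two invertible matrices is again invertible, which is the closure property needed above. A throwaway check in dimension 2 (names ours):

```python
def det2(m):
    # the determinant is a polynomial in the entries
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def mul2(a, b):
    # matrix product; each entry is a polynomial in the entries of a and b
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

a = [[1.0, 2.0], [3.0, 4.0]]    # det = -2, so a lies in GL(2, R)
b = [[0.0, 1.0], [-1.0, 2.0]]   # det = 1
```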
Actually, if V is any finite dimensional real vector space, then GL(V), the set of
bijective linear maps φ: V → V, is an open subset of the vector space End(V) = V ⊗ V* and
becomes a Lie group when endowed with the multiplication µ: GL(V) × GL(V) → GL(V)
given by composition of maps: µ(φ₁, φ₂) = φ₁ ∘ φ₂. If dim(V) = n, then GL(V) is
isomorphic (as a Lie group) to GL(n, R), though not canonically.
The advantage of considering abstract vector spaces V rather than just Rn is mainly
conceptual, but, as we shall see, this conceptual advantage is great. In fact, Lie groups of
linear transformations are so fundamental that a special terminology is reserved for them:

Definition 3: A (linear) representation of a Lie group G is a Lie group homomorphism
ρ: G → GL(V) for some vector space V called the representation space. Such a representation
is said to be faithful (resp., almost faithful) if ρ is one-to-one (resp., has 0-dimensional
kernel).
It is a consequence of a theorem of Ado and Iwasawa that every connected Lie group
has an almost faithful, finite-dimensional representation. (In one of the later exercises, we
will construct a connected Lie group which has no faithful, finite-dimensional representation,
so almost faithful is the best we can hope for.)

Example: Vector Spaces. Any vector space over R becomes a Lie group when the
group “multiplication” is taken to be addition.

Example: Matrix Lie Groups. The Lie subgroups of GL(n, R) are called matrix Lie
groups and play an important role in the theory. Not only are they the most frequently
encountered, but, because of the theorem of Ado and Iwasawa, practically anything which
is true for matrix Lie groups has an analog for a general Lie group. In fact, for the first pass

