
γ̃e(t) = γe(−µ/2) γe(t + µ/2)    if t ∈ (−3µ/2, µ/2);
γ̃e(t) = γe(+µ/2) γe(t − µ/2)    if t ∈ (−µ/2, 3µ/2).

By our previous arguments, this extended curve γ̃e is still an integral curve of X, contradicting
the assumption that (−µ, µ) was maximal.
As an example, consider the flow of the left-invariant vector fields on GL(n, R) (or any
matrix Lie group, for that matter): For any v ∈ gl(n, R), the differential equation which
γe satisfies is simply
γe′(t) = γe(t) v.
This is a matrix differential equation and, in elementary ode courses, we learn that the
“fundamental solution” is

γe(t) = e^{tv} = In + Σ_{k=1}^{∞} t^k v^k / k!


and that this series converges uniformly on compact sets in R to a smooth matrix-valued
function of t.
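
For readers who want to see this concretely, here is a small numerical sketch in Python (the names gamma, v, t, h and the tolerances are mine, not notation from these notes); it uses scipy.linalg.expm and checks by a central difference that γe(t) = e^{tv} really solves γe′ = γe v with γe(0) = In.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    v = rng.standard_normal((3, 3))            # an arbitrary element of gl(3, R)

    def gamma(t):
        return expm(t * v)                     # the fundamental solution e^{tv}

    t, h = 0.7, 1e-6
    derivative = (gamma(t + h) - gamma(t - h)) / (2 * h)      # central difference in t
    print(np.allclose(derivative, gamma(t) @ v, atol=1e-5))   # True: gamma' = gamma v
    print(np.allclose(gamma(0.0), np.eye(3)))                 # True: gamma(0) = I_3
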
Matrix Lie groups are by far the most commonly encountered and, for this reason,
we often use the notation exp(tv) or even e^{tv} for the integral curve γe(t) associated to Xv
in a general Lie group G. (Actually, in order for this notation to be unambiguous, it has
to be checked that if tv = uw for t, u ∈ R and v, w ∈ g, then γe (t) = δe (u) where γe is
the integral curve of Xv with initial condition e and δe is the integral curve of Xw with initial
condition e. However, this is an easy exercise in the use of the Chain Rule.)

It is worth remarking explicitly that for any v ∈ g the formula for the flow of the
left-invariant vector field Xv on G is simply

Φ(t, a) = a exp(tv) = a e^{tv}.

(Warning: many beginners make the mistake of thinking that the formula for the flow of
the left-invariant vector field Xv should be Φ(t, a) = exp(tv) a, instead. It is worth pausing
for a moment to think why this is not so.)
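
A quick numerical experiment makes the warning vivid. For a matrix group the left-invariant field is Xv(a) = a v, so a curve c(t) is an integral curve exactly when c′(t) = c(t) v; the correct flow a e^{tv} satisfies this, while the tempting formula e^{tv} a does not (it is instead the flow of the corresponding right-invariant field). The sketch below, with names of my choosing, is only an illustration:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    v = rng.standard_normal((3, 3))
    a = expm(rng.standard_normal((3, 3)))       # a group element (invertible by construction)

    def flow_left(t):
        return a @ expm(t * v)                  # claimed flow of X_v through a

    def flow_wrong(t):
        return expm(t * v) @ a                  # the common mistake

    t, h = 0.3, 1e-6
    d_left  = (flow_left(t + h)  - flow_left(t - h))  / (2 * h)
    d_wrong = (flow_wrong(t + h) - flow_wrong(t - h)) / (2 * h)
    print(np.allclose(d_left,  flow_left(t)  @ v, atol=1e-5))   # True:  c'(t) = c(t) v
    print(np.allclose(d_wrong, flow_wrong(t) @ v, atol=1e-5))   # False (generically)
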

It is now possible to describe all of the homomorphisms from the Lie group (R, +)
into any given Lie group:

Proposition 5: Every Lie group homomorphism φ: R → G is of the form φ(t) = e^{tv}
where v = φ′(0) ∈ g.

Proof: Let v = φ′(0) ∈ g, and let Xv be the associated left-invariant vector field on G.
Since φ(0) = e, by ode uniqueness, it suffices to show that φ is an integral curve of Xv.
However, φ(s + t) = φ(s)φ(t) implies φ′(s) = L′φ(s)(φ′(0)) = Xv(φ(s)), as desired.
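
For a matrix group the content of Proposition 5 is easy to test numerically: t → e^{tv} is a homomorphism from (R, +) because sv and tv commute. A minimal sketch (the variable names are mine):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)
    v = rng.standard_normal((3, 3))
    s, t = 0.4, -1.3
    # phi(s + t) = phi(s) phi(t) for phi(t) = expm(t v), since s*v and t*v commute
    print(np.allclose(expm((s + t) * v), expm(s * v) @ expm(t * v)))   # True
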


The Exponential Map. We are now ready to introduce one of the principal tools
in the study of Lie groups.
Definition 6: For any Lie group G, the exponential mapping of G is the mapping exp: g → G
defined by exp(v) = γe(1) where γe is the integral curve of the vector field Xv with initial
condition e.
It is an exercise for the reader to show that exp: g → G is smooth and that

exp′(0): g → Te G = g

is just the identity mapping.
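
The second claim can also be seen numerically: exp′(0) = id means (exp(εv) − I)/ε → v as ε → 0. A rough check for GL(3, R) (my names, Python/SciPy, purely illustrative):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    v = rng.standard_normal((3, 3))
    for eps in (1e-2, 1e-4, 1e-6):
        error = np.max(np.abs((expm(eps * v) - np.eye(3)) / eps - v))
        print(eps, error)     # the error shrinks proportionally to eps
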

Example: As we have seen, for GL(n, R) (or GL(V ) in general for that matter), the
formula for the exponential mapping is just the usual power series:

e^x = I + x + (1/2)x² + (1/6)x³ + · · · .

This formula works for all matrix Lie groups as well, and can simplify considerably in
certain special cases. For example, for the group N3 defined earlier (usually called the
Heisenberg group), we have
           [ 0  x  z ]
    n3 =   [ 0  0  y ]        x, y, z ∈ R ,
           [ 0  0  0 ]

and v³ = 0 for all v ∈ n3 . Thus

           [ 0  x  z ]   [ 1  x  z + (1/2)xy ]
     exp   [ 0  0  y ] = [ 0  1  y           ] .
           [ 0  0  0 ]   [ 0  0  1           ]
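
Since v³ = 0, the exponential series terminates after the quadratic term, which is exactly what the displayed formula says. A short check in Python (scipy.linalg.expm; the particular numerical values of x, y, z are mine) confirms it:

    import numpy as np
    from scipy.linalg import expm

    x, y, z = 1.5, -2.0, 0.25
    v = np.array([[0., x, z],
                  [0., 0., y],
                  [0., 0., 0.]])
    closed_form = np.array([[1., x, z + 0.5 * x * y],
                            [0., 1., y],
                            [0., 0., 1.]])
    print(np.allclose(v @ v @ v, np.zeros((3, 3))))   # True: v^3 = 0
    print(np.allclose(expm(v), closed_form))          # True: matches the formula above
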


The Lie Bracket. Now, the mapping exp is not generally a homomorphism from
g (with its additive group structure) to G, although, in a certain sense, it comes as close
as possible, since, by construction, it is a homomorphism when restricted to any one-
dimensional linear subspace Rv ⊂ g. We now want to spend a few moments considering
what the multiplication map on G “looks like” when pulled back to g via exp.
Since exp′(0): g → Te G = g is the identity mapping, it follows from the Implicit
Function Theorem that there is a neighborhood U of 0 ∈ g so that exp: U → G is a
diffeomorphism onto its image. Moreover, there must be a smaller open neighborhood
V ⊂ U of 0 so that µ(exp(V ) × exp(V )) ⊂ exp(U). It follows that there is a unique
smooth mapping ν: V × V → U such that

µ(exp(x), exp(y)) = exp(ν(x, y)).

Since exp is a homomorphism restricted to each line through 0 in g, it follows that ν
satisfies
ν(αx, βx) = (α + β)x
for all x ∈ V and α, β ∈ R such that αx, βx ∈ V .
Since ν(0, 0) = 0, the Taylor expansion to second order of ν about (0, 0) is of the form

ν(x, y) = ν1(x, y) + (1/2)ν2(x, y) + R3(x, y)

where νi is a g-valued polynomial of degree i on the vector space g ⊕ g and R3 is a g-valued
function on V × V which vanishes to at least third order at (0, 0).
Since ν(x, 0) = ν(0, x) = x, it easily follows that ν1 (x, y) = x + y and that ν2 (x, 0) =
ν2 (0, y) = 0. Thus, the quadratic polynomial ν2 is linear in each g-variable separately.
Moreover, since ν(x, x) = 2x for all x ∈ V , substituting this into the above expansion
and comparing terms of order 2 yields that ν2 (x, x) ≡ 0. Of course, this implies that ν2 is
actually skew-symmetric since

0 = ν2(x + y, x + y) − ν2(x, x) − ν2(y, y) = ν2(x, y) + ν2(y, x).

Definition 7: The skew-symmetric, bilinear multiplication [ , ]: g × g → g defined by

[x, y] = ν2 (x, y)

is called the Lie bracket in g. The pair (g, [, ]) is called the Lie algebra of G.
With this notation, we have a formula

exp(x) exp(y) = exp(x + y + (1/2)[x, y] + R3(x, y))

valid for all x and y in some fixed open neighborhood of 0 in g.
One might think of the term involving [ , ] as the first deviation of the Lie group
multiplication from being just vector addition. In fact, it is clear from the above formula
that, if the group G is abelian, then [x, y] = 0 for all x, y ∈ g. For this reason, a Lie algebra
in which all brackets vanish is called an abelian Lie algebra. (In fact, (see the Exercises)
g being abelian implies that G°, the identity component of G, is abelian.)

Example: If G = GL(n, R), then it is easy to see that the induced bracket operation on
gl(n, R), the vector space of n-by-n matrices, is just the matrix “commutator”

[x, y] = xy − yx.

In fact, the reader can verify this by examining the following second order expansion:

e^x e^y = (In + x + (1/2)x² + · · ·)(In + y + (1/2)y² + · · ·)
        = (In + x + y + (1/2)(x² + 2xy + y²) + · · ·)
        = (In + (x + y + (1/2)[x, y]) + (1/2)(x + y + (1/2)[x, y])² + · · ·).

Moreover, this same formula is easily seen to hold for any x and y in gl(V ) where V is any
finite dimensional vector space.
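
The reader can also let the computer do this check. The sketch below (my names; scipy.linalg.logm is the matrix logarithm, i.e. exp⁻¹ near the identity) compares ν(x, y) = log(e^x e^y) with x + y + (1/2)(xy − yx) for small random x, y ∈ gl(3, R):

    import numpy as np
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(4)
    x = 1e-2 * rng.standard_normal((3, 3))     # small, so exp(x) exp(y) stays near the identity
    y = 1e-2 * rng.standard_normal((3, 3))
    bracket = x @ y - y @ x                    # the commutator [x, y]

    nu = logm(expm(x) @ expm(y)).real          # nu(x, y): exp(nu) = exp(x) exp(y)
    print(np.max(np.abs(nu - (x + y + 0.5 * bracket))))   # third-order small
    print(np.max(np.abs(nu - (x + y))))                    # much larger: the bracket term is needed
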

Theorem 1: If φ: H → G is a Lie group homomorphism, then ϕ = φ′(e): h → g satisfies

expG(ϕ(x)) = φ(expH(x))

for all x ∈ h. In other words, the diagram

’’
h g
¦ ¦
¦ ¦exp
expH G

φ
’’
H G
commutes. Moreover, for all x and y in h,

ϕ([x, y]H) = [ϕ(x), ϕ(y)]G.



Proof: The first statement is an immediate consequence of Proposition 5 and the Chain
Rule since, for every x ∈ h, the map γ: R → G given by γ(t) = φ(e^{tx}) is clearly a Lie group
homomorphism with initial velocity γ′(0) = ϕ(x) and hence must also satisfy γ(t) = e^{tϕ(x)}.
To get the second statement, let x and y be elements of h which are sufficiently close
to zero. Then we have, using self-explanatory notation:

φ(expH(x) expH(y)) = φ(expH(x)) φ(expH(y)),

so

φ(expH(x + y + (1/2)[x, y]H + R3H(x, y))) = expG(ϕ(x)) expG(ϕ(y)),

and thus

expG(ϕ(x + y + (1/2)[x, y]H + R3H(x, y))) = expG(ϕ(x) + ϕ(y) + (1/2)[ϕ(x), ϕ(y)]G + R3G(ϕ(x), ϕ(y))),

finally giving

ϕ(x + y + (1/2)[x, y]H + R3H(x, y)) = ϕ(x) + ϕ(y) + (1/2)[ϕ(x), ϕ(y)]G + R3G(ϕ(x), ϕ(y)).

Now using the fact that ϕ is linear and comparing second order terms gives the desired
result.
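
As a concrete illustration of the theorem (my choice of example, not one made in these notes), take φ = det: GL(3, R) → GL(1, R) = R*. Its derivative at the identity is ϕ = tr, so the first statement becomes det(e^x) = e^{tr(x)} and the second becomes tr([x, y]) = 0, since the bracket of the abelian group R* vanishes. Both are easy to test:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(5)
    x = rng.standard_normal((3, 3))
    y = rng.standard_normal((3, 3))
    print(np.allclose(np.linalg.det(expm(x)), np.exp(np.trace(x))))   # exp_G(phi'(x)) = phi(exp_H(x))
    print(np.isclose(np.trace(x @ y - y @ x), 0.0))                   # phi'([x, y]) = [phi'(x), phi'(y)] = 0
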
On account of this theorem, it is usually not necessary to distinguish the map exp
or the bracket [, ] according to the group in which it is being applied, so I will follow this
practice also. Henceforth, these symbols will be used without group decorations whenever
confusion seems unlikely.

Theorem 1 has many useful corollaries. Among them is

Proposition 6: If H is a connected Lie group and φ1, φ2: H → G are two Lie group
homomorphisms which satisfy φ1′(e) = φ2′(e), then φ1 = φ2.

Proof: There is an open neighborhood U of e in H so that expH is invertible on this
neighborhood with inverse satisfying expH⁻¹(e) = 0. Then for a ∈ U we have, by Theorem 1,

φi(a) = expG(ϕi(expH⁻¹(a))).

Since ϕ1 = ϕ2, we have φ1 = φ2 on U. By Proposition 2, every element of H can be
written as a finite product of elements of U, so we must have φ1 = φ2 everywhere.
We also have the following fundamental result:

Proposition 7: If Ad: G → GL(g) is the adjoint representation, then ad = Ad′(e): g →
gl(g) is given by the formula ad(x)(y) = [x, y]. In particular, we have the Jacobi identity

ad([x, y]) = [ad(x), ad(y)].


Proof: This is simply a matter of unwinding the definitions. By definition, Ad(a) = Ca′(e)
where Ca: G → G is defined by Ca(b) = aba⁻¹. In order to compute Ca′(e)(y) for y ∈ g,

we may just compute γ′(0) where γ is the curve γ(t) = a exp(ty)a⁻¹. Moreover, since
exp′(0): g → g is the identity, we may as well compute β′(0) where β = exp⁻¹ ◦ γ. Now,
assuming a = exp(x), we compute

β(t) = exp⁻¹(exp(x) exp(ty) exp(−x))
     = exp⁻¹(exp(x + ty + (1/2)[x, ty] + · · ·) exp(−x))
     = exp⁻¹(exp((x + ty + (1/2)[x, ty]) + (−x) + (1/2)[x + ty, −x] + · · ·))
     = ty + t[x, y] + E3(x, ty)

where the omitted terms and the function E3 vanish to order at least 3 at (x, y) = (0, 0).
(Note that I used the identity [y, x] = −[x, y].) It follows that

Ad(exp(x))(y) = β′(0) = y + [x, y] + E3′(x, 0)y

where E3′(x, 0) denotes the derivative of E3 with respect to y evaluated at (x, 0) and is
hence a function of x which vanishes to order at least 2 at x = 0. On the other hand,
by the first part of Theorem 1, we have

Ad(exp(x)) = exp(ad(x)) = I + ad(x) + (1/2)(ad(x))² + · · · .

Comparing the x-linear terms in the last two equations clearly gives the desired result.
The validity of the Jacobi identity now follows by applying the second part of Theorem 1
to Proposition 3.
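
For a matrix group, Ca(b) = aba⁻¹ is the restriction of a linear map, so Ad(a)(y) = a y a⁻¹, and the proposition says that the derivative of a → a y a⁻¹ at a = e in the direction x is [x, y]. A finite-difference check (the names Ad, h and the tolerance are mine, purely for illustration):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(6)
    x = rng.standard_normal((3, 3))
    y = rng.standard_normal((3, 3))

    def Ad(a, y):
        return a @ y @ np.linalg.inv(a)        # Ad(a)(y) = a y a^{-1} for a matrix group

    h = 1e-6
    deriv = (Ad(expm(h * x), y) - Ad(expm(-h * x), y)) / (2 * h)   # d/dt Ad(exp(t x))(y) at t = 0
    print(np.allclose(deriv, x @ y - y @ x, atol=1e-4))            # True: ad(x)(y) = [x, y]
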
The Jacobi identity is often presented differently. The reader can verify that the
equation ad([x, y]) = [ad(x), ad(y)], where ad(x)(y) = [x, y], is equivalent to the condition
that

[[x, y], z] + [[y, z], x] + [[z, x], y] = 0        for all z ∈ g.
This is a form in which the Jacobi identity is often stated. Unfortunately, although this is
a very symmetric form of the identity, it somewhat obscures its importance and meaning.
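
For the commutator bracket on gl(3, R) this cyclic identity is a one-line algebraic computation, but it can also be spot-checked numerically (an illustrative sketch; the helper br and the random matrices are mine):

    import numpy as np

    rng = np.random.default_rng(7)

    def br(a, b):
        return a @ b - b @ a                   # the commutator bracket on gl(3, R)

    x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))
    jacobi = br(br(x, y), z) + br(br(y, z), x) + br(br(z, x), y)
    print(np.allclose(jacobi, np.zeros((3, 3))))   # True: the cyclic sum vanishes
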
The Jacobi identity is so important that the class of algebras in which it holds is given
a name:
Definition 8: A Lie algebra is a pair (g, [ , ]) where g is a vector space and [ , ]: g × g → g is a
