

[ψ ∘ φ]E,B = [ψ]F,B [φ]E,F .

If V = W, then [id]F,E and [id]E,F are inverses of each other and

[φ]F = [id]E,F [φ]E [id]F,E

for arbitrary φ.
Next, we make choices for the expression of tensor products of matrices, vector spaces,
and transformations. Our choices will correspond to the ordering on

{1, . . . , m} × {1, . . . , n} = {ik : i ∈ {1, . . . , m} , k ∈ {1, . . . , n}}

given by i1k1 ≤ i2k2 if i1 < i2, or if i1 = i2 and k1 ≤ k2. Under this ordering, a column vector
v ∈ k^{mn×1} will be written

v = (v11 , . . . , v1n , v21 , . . . , v2n , . . . , vm1 , . . . , vmn )T .

Given matrices A = (aij) ∈ k^{m′×m}, B = (bkl) ∈ k^{n′×n}, we define A ⊗ B = (cik,jl), where
cik,jl = aij bkl for ik ∈ {1, . . . , m′} × {1, . . . , n′}, jl ∈ {1, . . . , m} × {1, . . . , n}. This yields

A \otimes B = (a_{ij} b_{kl}) =
\begin{bmatrix}
a_{11}B & a_{12}B & \cdots & a_{1m}B \\
a_{21}B & a_{22}B & \cdots & a_{2m}B \\
\vdots & \vdots & \ddots & \vdots \\
a_{m'1}B & a_{m'2}B & \cdots & a_{m'm}B
\end{bmatrix}
\in k^{m'n' \times mn}.    (2.1)
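
For example, when m′ = m = n′ = n = 2, the ordering of the index pairs ik and jl in (2.1) gives

A \otimes B =
\begin{bmatrix} a_{11}B & a_{12}B \\ a_{21}B & a_{22}B \end{bmatrix} =
\begin{bmatrix}
a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12} \\
a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22} \\
a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12} \\
a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22}
\end{bmatrix},

so, for instance, the row indexed by ik = 21 and the column indexed by jl = 12 meet in the entry c21,12 = a21 b12.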

For the vector spaces V and W, define the ordered basis E ⊗ F of V ⊗ W by

E ⊗ F = {e1 ⊗ f1, . . . , e1 ⊗ fn, e2 ⊗ f1, . . . , e2 ⊗ fn, . . . , em ⊗ f1, . . . , em ⊗ fn}.    (2.2)

Suppose V′ and W′ are k-vector spaces with bases E′ = {e′1, . . . , e′m′}, F′ = {f′1, . . . , f′n′},
respectively. In the ordered bases E ⊗ F and E′ ⊗ F′ defined according to (2.2), if T ∈
Homk(V, V′) has matrix [T]E,E′ = A = (aij) and U ∈ Homk(W, W′) has matrix [U]F,F′ =
B = (bkl), then T ⊗ U ∈ Homk(V ⊗ W, V′ ⊗ W′) has matrix [T ⊗ U]E⊗F,E′⊗F′ = A ⊗ B,
where A ⊗ B is as given in (2.1). Indeed, if T(ej) = Σi aij e′i and U(fl) = Σk bkl f′k, then


(T ⊗ U)(ej ⊗ fl) = ( Σi aij e′i ) ⊗ ( Σk bkl f′k ) = Σi,k aij bkl e′i ⊗ f′k,

and therefore the coefficient of e′i ⊗ f′k in the expansion of (T ⊗ U)(ej ⊗ fl) is aij bkl, as
desired.
Finally, we recall the following facts about matrices of dual transformations: Given
T : V → W, the dual transformation T∗ : W∗ → V∗ is given by T∗(ψ) = ψ ∘ T ∈ V∗
for ψ ∈ W∗. Given an ordered basis E (resp., F) of V (resp., W), suppose [T]E,F = A ∈
k^{m×n}. Then we may give V∗ (resp., W∗) the ordered basis E∗ (resp., F∗), and we have
[T∗]F∗,E∗ = A^T. In particular, if V = W and T = id, we have T∗ = id and

[id]F∗,E∗ = A^T ,    [id]E∗,F∗ = (A^T)^{−1} .    (2.3)
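
To see why [T∗]F∗,E∗ = A^T, note that [T]E,F = A means T(ej) = Σi aij fi; hence, for a dual basis vector f∗l, we have (T∗(f∗l))(ej) = f∗l(T(ej)) = alj, so T∗(f∗l) = Σj alj e∗j, and the matrix of T∗ with respect to F∗ and E∗ has (j, l) entry alj, which is precisely the (j, l) entry of A^T.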



2.2 Algebraic groups
The following basic facts are taken from [Hum81]. The material on Levi decomposition is
detailed in [Mos56].
We define the following notation: If G is a group and y ∈ G, then Int y is the inner
automorphism of G defined by (Int y)(x) = yxy^{−1}. If H is a normal subgroup of G, then
Int y|H is an automorphism of H. When the context is clear, we will abbreviate Int y|H as
Int y.
Throughout this document, C is an algebraically closed field of characteristic zero. An
algebraic group over C is an affine algebraic set defined over C, equipped with group opera-
tions which are continuous in the Zariski topology. We suppress the phrase “over C” when
the field of definition is clear from context. A morphism in the category of algebraic groups
is a Zariski-continuous map which is also a group homomorphism.
Examples of algebraic groups are as follows:

1. C = (C, +), the additive group. Note that this group has no proper nontrivial algebraic
subgroups.

2. An arbitrary finite-dimensional vector space (e.g., C^n) is generated as an additive
algebraic group by an arbitrary vector-space basis; such a group is called a vector
group. The only closed subgroups of such a group are its vector subspaces.

3. C∗ = (C \ {0}, ·), the multiplicative group.

4. GLn = GLn(C), the group of n × n matrices with nonzero determinant. GLn is an open
subset of affine n^2-space with coordinates sij, 1 ≤ i, j ≤ n.

5. Any closed subgroup of GLn . Examples:

(a) SLn, the group of n × n matrices with determinant 1.

(b) PSLn , the quotient of SLn by its center.

(c) Tn, the group of upper-triangular nonsingular n × n matrices. We have Tn =
{(aij) ∈ GLn : aij = 0 if j < i}.

(d) Un, the group of unipotent upper-triangular n × n matrices. We have Un =
{(aij) ∈ Tn : aii = 1 for all i}.

(e) Dn, the group of diagonal nonsingular n × n matrices.

An arbitrary algebraic group G is called a linear algebraic group if it is isomorphic to
a closed subgroup of GLn(C) for some n.

Given an n-dimensional C-vector space V, let E0 = {e1, e2, . . . , en} be a fixed basis
of V. Then there is a one-to-one correspondence between GL(V) and GLn whereby
φ ∈ GL(V) corresponds with [φ]E0 ∈ GLn. This correspondence induces an algebraic
group structure on GL(V). One checks that this structure is independent of choice of
basis of V, so that GL(V) is given a unique structure of linear algebraic group.

We say that a subgroup of GLn is the expression of the subgroup G ⊆ GL(V) in the
basis F, and we write it as [G]F, if it equals {[φ]F : φ ∈ G}. It follows from the results of
the previous section that two subgroups of GLn are conjugate if and only if there exist
a subgroup G ⊆ GL(V) and a basis B of V such that the two subgroups are [G]E0 and
[G]B, respectively.

Note that the matrix P = [id]E,F centralizes [G]E (i.e., P M P^{−1} = M for all
M ∈ [G]E) if and only if [φ]E = [φ]F for all φ ∈ G.
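
This follows from the change-of-basis formula above: with P = [id]E,F we have [φ]F = P [φ]E P^{−1}, so P M P^{−1} = M for all M ∈ [G]E precisely when [φ]F = [φ]E for all φ ∈ G.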

6. G ⋊ H = G ⋊φ H, the semidirect product of G by H via φ, where G and H are algebraic
groups and φ : G × H → G is the mapping corresponding to an algebraic group action
of H on G (cf. [Hu, Sec. 8.2]) having the property that φ(·, y) is an automorphism of
G for all y ∈ H. As a set, we have G ⋊φ H = G × H. The structure of G ⋊φ H is given
by

(x1, y1)(x2, y2) = (x1 φ(x2, y1), y1 y2)

for (xi, yi) ∈ G × H, i = 1, 2. It is easy to see that G ⋊φ H includes a copy of G as a
normal subgroup and a copy of H as a subgroup and, moreover, that

(x, 1)(1, y) = (x, y),    (Int(1, y))(x, 1) = (φ(x, y), 1)

for all x ∈ G, y ∈ H.
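
For instance, the second identity can be verified directly: (1, y)^{−1} = (1, y^{−1}), so (Int(1, y))(x, 1) = (1, y)(x, 1)(1, y^{−1}) = (φ(x, y), y)(1, y^{−1}) = (φ(x, y) φ(1, y), 1) = (φ(x, y), 1), using φ(1, y) = 1.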

If the algebraic group A has normal subgroup R and subgroup S with R ∩ S =
{1}, RS = A, then one can construct a semidirect product R ⋊ S using inner auto-
morphisms of A. If the map from R ⋊ S to A given by (r, s) → rs is an isomorphism
of algebraic groups, then we say that A = RS is the semidirect product of R by S.

Note that if ψ is an automorphism of H, then G ⋊φ̃ H is isomorphic to G ⋊φ H, where
φ̃(x, y) = φ(x, ψ(y)), via the map (x, y) → (x, ψ^{−1}(y)).

Remark: One can show that if G = C ⋊φ C∗ for some φ, then φ is given by
(Int y)(x) = φ(x, y) = y^d x for some fixed integer d. One can also show that C ⋊φ C∗
is isomorphic to C ⋊φ̃ C∗ if φ(x, y) = y^d x and φ̃(x, y) = y^{−d} x for all x ∈ C, y ∈ C∗.
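
As a concrete illustration, take d = 1, so that φ(x, y) = yx. Identifying (x, y) ∈ C ⋊φ C∗ with the matrix \begin{bmatrix} y & x \\ 0 & 1 \end{bmatrix}, matrix multiplication gives

\begin{bmatrix} y_1 & x_1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} y_2 & x_2 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} y_1 y_2 & x_1 + y_1 x_2 \\ 0 & 1 \end{bmatrix},

which agrees with the product (x1, y1)(x2, y2) = (x1 + y1 x2, y1 y2) prescribed above (G = C being written additively); this C ⋊φ C∗ is the group of affine transformations t → yt + x of the line.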


7. The group closure of a subset S (resp., of an element g) of an algebraic group G is
the smallest closed subgroup of G including S (resp., containing g), and we denote it
closG (S) (resp., closG (g)). We will omit the subscript when the context is clear.
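
For example, if g ∈ GL1 = C∗ is not a root of unity, then clos(g) = C∗: the only proper closed subgroups of C∗ are the finite groups of roots of unity, and none of them contains g. If instead g is a root of unity, clos(g) is the finite cyclic group generated by g.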

8. The centralizer of an element g (resp., an algebraic subset S) of an algebraic group G
is an algebraic subgroup of G, and we denote it CenG (g) (resp., CenG (S)).

9. The normalizer of an algebraic subgroup H in an algebraic group G is an algebraic
subgroup of G, and we denote it NorG (H).

Let G be an algebraic group. The commutator (x, y) of two elements x, y in G is the
element (x, y) = xyx^{−1}y^{−1}. The commutator subgroup of G is the group generated by all
(x, y), x, y ∈ G, and is denoted (G, G). It is a normal algebraic subgroup of G. Define the
derived series of G to be the series of subgroups

G ⊇ D^1 G ⊇ D^2 G ⊇ · · · ,

where D^{i+1} G = (D^i G, D^i G) for i ≥ 0, with D^0 G = G. G is solvable if its derived series terminates in {1}.
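
For example, T2 is solvable: the diagonal entries of a commutator of upper-triangular matrices are all 1, so D^1 T2 ⊆ U2, and U2 ≅ (C, +) is commutative, so D^2 T2 = {1}. A similar argument shows that each Tn is solvable.
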
The Lie-Kolchin theorem states that if G is a connected solvable subgroup of GL(V ) for
some nontrivial ¬nite-dimensional C-vector space V, then G has a common eigenvector in V.

A corollary states that if G has these properties, then G can be embedded in Tn (C), where
n = dim V.
Let V be a nontrivial finite-dimensional vector space over C. A linear transformation
φ ∈ End(V) is nilpotent if φ^d = 0 for some d > 0. We say φ is unipotent if φ = idV + ψ
for some nilpotent linear transformation ψ ∈ End(V). An algebraic subgroup G of GL(V)
is unipotent if all of its elements are unipotent. Kolchin's Theorem, an analogue of Engel's
theorem for Lie algebras (see [Hum81], Theorem 17.5), states that a unipotent subgroup
of GL(V) has a common eigenvector having eigenvalue 1. A corollary states that such a
subgroup can be embedded in U(n, C).
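
For example, Un is a unipotent subgroup of GLn: each element has the form id + ψ with ψ strictly upper-triangular, hence nilpotent. Every element of Un fixes the first standard basis vector e1, which is therefore a common eigenvector with eigenvalue 1, as Kolchin's theorem predicts.
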
The radical (resp., unipotent radical) of an algebraic group G is the maximal connected
solvable normal subgroup (resp., the maximal connected unipotent normal subgroup) of G.
It is denoted R(G) (resp., Ru (G)); it is an algebraic subgroup of G. We say G is semisimple
(resp., reductive) if R(G) (resp., Ru (G)) is trivial.
A linear algebraic group G admits a Levi decomposition G = Ru (G)P (semidirect prod-
uct), where P is a maximal reductive subgroup of G. P is called a Levi subgroup of G.
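
For example, Ru(Tn) = Un, and every element of Tn factors uniquely as an element of Un times an element of Dn, so Tn = Un Dn is a Levi decomposition of Tn with Levi subgroup the torus Dn.
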
An algebraic group G is a torus if it is isomorphic to Dn (C) for some n. A reductive
group is the product of its commutator subgroup (which is semisimple) and a torus.
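
For example, for n ≥ 2 the group GLn is reductive but not semisimple: its radical R(GLn) is the group of nonzero scalar matrices, a torus containing no nontrivial unipotent elements, so Ru(GLn) = {1}; its commutator subgroup is SLn, which is semisimple; and GLn = SLn · {scalar matrices}, illustrating the decomposition just described.
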
A Borel subgroup of an algebraic group G is a maximal closed connected solvable sub-
group. All Borel subgroups of G are conjugate to each other.
Given an algebraic group G and a vector space V, a representation of G on V is a
morphism from G to GL(V ).



2.3 Differential algebra

The development of the following basic facts is based on [Sin96] and [CS99].
In what follows, unless otherwise specified, all rings are commutative, contain a unit
element, and have characteristic zero.
A derivation on a ring R is a map D : R → R such that D(a + b) = D(a) + D(b) and
D(ab) = D(a) b + a D(b) for all a, b ∈ R. We also write a′ or ∂(a) for D(a).
A differential ring is a pair (R, D), where R is a ring and D a derivation on R. A
differential field is a differential ring (k, D) such that k is a field. When the derivation is
clear from context, we will often abbreviate (R, D) (resp., (k, D)) to R (resp., k). We will
often work with the differential field (Q̄(x), d/dx).



A constant in a differential field k is an element c ∈ k such that c′ = 0. One checks that
the set C = Ck ⊆ k of constants forms a subfield of k.
In what follows, we assume that k = (k, D) is a differential field and that C = Ck is
a computable field with factorization algorithm; i.e., we have algorithms to carry
out addition, subtraction, multiplication, division and equality testing in C and polynomial
factorization in C[x]. We moreover assume that char(C) = 0 and C̄ = C. For example, the
algebraic closure of a finitely generated extension of Q has the above properties; see [dW53].
When appropriate, we will assume that C ⊊ k.
The ring of differential operators over k, written D = k[D], is the set of polynomials in
the indeterminate D with coefficients in k, with a noncommutative multiplication operation
∘ determined by the following rule:

D ∘ f = f ∘ D + f′    for all f ∈ k.
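
For instance, in Q̄(x)[D] with D corresponding to d/dx, the rule gives D ∘ x = x ∘ D + 1, reflecting the identity (xy)′ = xy′ + y; more generally, for a, b ∈ k one computes (D + a) ∘ (D + b) = D^2 + (a + b)D + (ab + b′), while (D + b) ∘ (D + a) = D^2 + (a + b)D + (ab + a′), so the two products differ unless a′ = b′.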

Given L1, L2 ∈ D, we will write either L1 ∘ L2 or L1 L2 for their product. The ring D acts
on the field k as follows: Given a typical element

L = an D^n + an−1 D^{n−1} + · · · + a1 D + a0 ∈ D,    ai ∈ k,
