1. Basic Properties of Solutions

for Ã(z) = T^{−1}(z) [A(z) T(z) − T′(z)]. Show that the first μ columns
of Ã(z) vanish identically. Compare this to Exercise 4 on p. 14.

1.3 Systems in General Regions
We now consider a system (1.1) in a general region G. Given a fundamen-
tal solution X(z), defined near some point z0 ∈ G, we can holomorphically
continue X(z) along any path γ in G beginning at z0 and ending, say, at
z1. Clearly, this process of holomorphic continuation produces a solution
of (1.1) near the point z1. Since the path can be split into finitely many
pieces, such that each of them is contained in a simply connected subregion
of G to which the results of the previous section apply, det X(z) cannot
vanish. Thus, X(z) remains fundamental during holomorphic continuation.
According to the monodromy theorem, for a different path from z0 to z1
the resulting fundamental solution near z1 will be the same provided the
two paths are homotopic. In particular, if γ is a Jordan curve whose inte-
rior region belongs to G, so that γ does not wind around exterior points
of G, then holomorphic continuation of X(z) along γ reproduces the same
fundamental solution that we started with. However, if the interior of γ
contains points from the complement of G, then simple examples in the
exercises below show that in general we shall obtain a different one. Hence
Theorem 1 (p. 4) fails for multiply connected G, since holomorphic con-
tinuation may not lead to a fundamental solution that is holomorphic in
G, but rather on a Riemann surface associated with G. We shall not go
into details about this, but will be content with the following result for a
punctured disc R(z0, ρ) = {z : 0 < |z − z0| < ρ}, or slightly more generally,
an arbitrary ring R = {z : ρ1 < |z − z0| < ρ}, 0 ≤ ρ1 < ρ:

Proposition 2  Let a system (1.1), with G = R as above, be given. Let
X(z) denote an arbitrary fundamental solution of (1.1) in a disc D =
D(z1, ρ̃) ⊂ R, ρ̃ > 0. Then there exists a matrix M ∈ C^{ν×ν} such that, for
a fixed but arbitrary choice of the branch of (z − z0)^M = exp[M log(z − z0)]
in D, the matrix

    S(z) = X(z) (z − z0)^{−M}

is single-valued in R.

Proof: Continuation of X(z) along the circle |z − z0| = |z1 − z0| in the
positive sense will produce a fundamental solution, say, X̃(z), of (1.1) in D.
According to Exercise 1 on p. 7, there exists an invertible matrix C ∈ C^{ν×ν}
so that X̃(z) = X(z) C for z ∈ D. Choose M so that C = exp[2πi M], e.g.,
2πi M = log C. Then continuation of (z − z0)^M along the same circle leads
to exp[M (log(z − z0) + 2πi)] = (z − z0)^M C, which completes the proof. □
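Numerically, a monodromy matrix can be produced from a given monodromy factor C by a matrix logarithm. The following sketch, with an arbitrary sample C and assuming scipy is available, checks the defining relation C = exp[2πi M] and also shows that M is far from unique:

```python
import numpy as np
from scipy.linalg import expm, logm

# An arbitrary invertible sample monodromy factor (hypothetical data).
C = np.array([[1.0, 1.0],
              [0.0, 1.0]], dtype=complex)

# One admissible choice of monodromy matrix: M = log(C) / (2*pi*i).
M = logm(C) / (2j * np.pi)

# Check the defining relation C = exp(2*pi*i*M).
assert np.allclose(expm(2j * np.pi * M), C)

# M is not unique: shifting M by an integer multiple of the identity
# multiplies exp(2*pi*i*M) by exp(2*pi*i*k) = 1, giving the same C.
M2 = M + 3 * np.eye(2)
assert np.allclose(expm(2j * np.pi * M2), C)
```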

While the matrix C occurring in the above proof is uniquely defined by
the fundamental solution X(z), the matrix M is not! We call any such M
a monodromy matrix for X(z). The unique matrix C is sometimes called
the monodromy factor for X(z). Observe that M can be any matrix with
exp[2πi M] = C. So, in general, 2πi M may have eigenvalues differing by
nonzero integers, and then 2πi M is not a branch for the matrix function
log C.
For G = R, it is convenient to think of solutions of (1.1) as defined on the
Riemann surface of the natural logarithm of z − z0, as described on p. 226
in the Appendix. This surface can best be visualized as a spiraling staircase
with infinitely many levels in both directions. For simplicity, take z0 = 0;
then traversing a circle about the origin in the positive, i.e., counterclock-
wise, direction will not take us back to the same point, as it would in the
complex plane, but to the one on the next level, directly above the point
where we started. Thus, while complex numbers z_k = r e^{iϕ_k}, r > 0, are the
same once their arguments ϕ_k differ by integer multiples of 2π, the corre-
sponding points on the Riemann surface are different. So strictly speaking,
instead of complex numbers z = r e^{iϕ}, we deal with pairs (r, ϕ). On this
surface, the matrix z^M = exp[M log z] becomes a single-valued holomorphic
function by interpreting log z = log r + iϕ.
The above proposition shows that, once we have a monodromy matrix
M , we completely understand the branching behavior of the corresponding
fundamental solution X(z). It pays to work out the general form of z^M for
M = J in Jordan form, in order to understand the various cases that can
occur for the branching behavior of X(z).
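For a single Jordan block J = λI + N of size n, with N the nilpotent part (N^n = 0), the exponential series for e^{N log z} terminates, and one obtains the following sketch of this computation:

```latex
z^{J} = e^{J \log z} = z^{\lambda}\, e^{N \log z}
      = z^{\lambda} \sum_{k=0}^{n-1} \frac{(\log z)^{k}}{k!}\, N^{k}
      = z^{\lambda}
\begin{pmatrix}
1 & \log z & \cdots & \tfrac{(\log z)^{n-1}}{(n-1)!}\\
  & 1      & \ddots & \vdots\\
  &        & \ddots & \log z\\
  &        &        & 1
\end{pmatrix}.
```

Hence every entry of z^J is z^λ times a polynomial in log z; in particular, z^J is single-valued if and only if λ is an integer and n = 1.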
The computation of monodromy matrices and/or their eigenvalues is
a major task in many applications. In principle, it should be possible to
find them by first computing a fundamental solution X(z) by means of
the recursions (1.9), and then iteratively re-expanding the resulting power
series to obtain the analytic continuation. In reality there is little hope of
effectively doing this. So it will be useful to obtain other representations for
fundamental solutions providing more direct ways for finding monodromy
matrices. For singularities of the first kind, which are discussed in the next
chapter, this can always be done, while for other cases this problem will
prove much more complicated.
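In the simplest case A(z) = M/z (a hypothetical example, not one of the representations announced above), the continuation around the singularity can at least be carried out numerically: integrating X′ = A(z) X once around the unit circle turns the initial value I into the monodromy factor exp[2πi M]. A sketch, assuming scipy:

```python
import numpy as np
from scipy.linalg import expm

# Sample coefficient: x' = A(z) x with A(z) = M/z (hypothetical data).
M = np.array([[0.0, 1.0],
              [0.0, 0.0]], dtype=complex)

def F(t, Y):
    # Right-hand side of X' = A(z) X along the loop z(t) = exp(2*pi*i*t).
    z = np.exp(2j * np.pi * t)
    dz = 2j * np.pi * z                      # dz/dt
    return dz * (M / z) @ Y

# Classical fixed-step RK4 once around |z| = 1, starting from X = I at z = 1.
n = 4000
h = 1.0 / n
X = np.eye(2, dtype=complex)
for j in range(n):
    t = j * h
    k1 = F(t, X)
    k2 = F(t + h / 2, X + h / 2 * k1)
    k3 = F(t + h / 2, X + h / 2 * k2)
    k4 = F(t + h, X + h * k3)
    X = X + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# After one positive loop, I has turned into the monodromy factor.
assert np.allclose(X, expm(2j * np.pi * M), atol=1e-6)
```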

Exercises: Throughout these exercises, let M ∈ C^{ν×ν}.

1. Verify that X(z) = z^M = exp[M log z] is a fundamental solution of
x′ = z^{−1} M x near, e.g., z0 = 1, if we select any branch for the mul-
tivalued function log z, e.g., its principal value, which is real-valued
along the positive real axis.

2. Verify that X(z) = z^M in general cannot be holomorphically contin-
ued (as a single-valued holomorphic function) into all of R(0, ∞).

3. Verify that M is a monodromy matrix for X(z) = z^M.

4. Let M_k ∈ C^{ν×ν}, 1 ≤ k ≤ μ, be such that they all commute with one
another, let A(z) = Σ_{k=1}^{μ} (z − z_k)^{−1} M_k, with all distinct z_k ∈ C,
and G = C \ {z_1, . . . , z_μ}. For each k, 1 ≤ k ≤ μ, and ρ sufficiently
small, show the existence of a fundamental solution of (1.1) in R(z_k, ρ)
having monodromy matrix M_k.

5. For M as above and any matrix-valued S(z), holomorphic and single-
valued with det S(z) ≠ 0 for z ∈ R(0, ρ), for some ρ > 0, find A(z) so
that X(z) = S(z) z^M is a fundamental solution of (1.1).

6. For G = R(0, ρ), ρ > 0, show that monodromy factors for different
fundamental solutions of (1.1) are similar matrices. Show that the
eigenvalues of corresponding monodromy matrices are always con-
gruent modulo one in the following sense: If M_1, M_2 are monodromy
matrices for fundamental solutions X_1(z), X_2(z) of (1.1), then for ev-
ery eigenvalue μ of M_1 there exists k ∈ Z so that k + μ is an eigenvalue
of M_2.
7. Under the assumptions of the previous exercise, let M_1 be a mon-
odromy matrix for some fundamental solution. Show that one can
choose a monodromy matrix M_2 for another fundamental solution
so that both are similar. Verify that, for a given fundamental solu-
tion, one can always choose a unique monodromy matrix that has
eigenvalues with real parts in the half-open interval [0, 1).
8. Under the assumptions of the previous exercises, show the existence
of at least one solution vector of the form x(z) = s(z) z^μ, with μ ∈ C
and s(z) a single-valued vector function in G.

9. Consider the scalar ODE (1.6) for a_k(z) holomorphic in G = R(0, ρ),
ρ > 0. Show that (1.6) has at least one solution of the form y(z) =
s(z) z^μ, with μ ∈ C and a scalar single-valued function s(z), z ∈ G.
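Exercise 1 can also be checked numerically for a sample matrix M (an arbitrary choice made for this sketch, which assumes scipy), by comparing a finite-difference derivative of X(z) = exp[M log z] with z^{−1} M X(z):

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary sample matrix for the check (hypothetical data).
M = np.array([[0.5, 1.0],
              [0.0, -0.25]], dtype=complex)

def X(z):
    # Principal branch of z^M = exp(M log z), single-valued near z0 = 1.
    return expm(M * np.log(z))

z = 1.3 + 0.2j
h = 1e-6
# Central difference approximation of X'(z) (valid in any direction,
# since X is holomorphic).
dX = (X(z + h) - X(z - h)) / (2 * h)

# The system x' = z^{-1} M x requires X'(z) = (M/z) X(z).
assert np.allclose(dX, (M / z) @ X(z), atol=1e-6)
# det X(z) is nonzero, so X(z) is indeed fundamental.
assert abs(np.linalg.det(X(z))) > 0
```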

1.4 Inhomogeneous Systems
We now return to a simply connected region G, but will consider an inho-
mogeneous system

    x′ = A(z) x + b(z),   z ∈ G,                             (1.10)

where b(z) is a vector-valued holomorphic function on G. We then refer to
(1.1) as the corresponding homogeneous system. As in the real variable case,
we can solve (1.10) as soon as a fundamental solution of the corresponding
homogeneous system (1.1) is known:

Theorem 3 (Variation of Constants Formula)  For a simply con-
nected region G, and A(z), b(z) holomorphic in G, all solutions of (1.10)
are holomorphic in G and given by the formula

    x(z) = X(z) [ c + ∫_{z0}^{z} X^{−1}(u) b(u) du ],   z ∈ G,   (1.11)

where z0 ∈ G and c ∈ C^ν can be chosen arbitrarily.

Proof: It is easily checked that (1.11) represents solutions of (1.10). Con-
versely, if x_0(z) is any solution of (1.10), then for c = X^{−1}(z0) x_0(z0) the
solution x(z) given by (1.11) satisfies the same initial value condition at z0
as x_0(z). Their difference satisfies the corresponding homogeneous system,
hence is identically zero, owing to Theorem 1 (p. 4). □
The somewhat strange name for (1.11) results from the following obser-
vation: For constant c ∈ C^ν, the vector X(z) c solves the homogeneous
system (1.1), so we try an “Ansatz” for the inhomogeneous one by re-
placing c by a vector-valued function c(z). Differentiation of X(z) c(z) and
insertion into (1.10) then leads to (1.11).
While (1.11) represents all solutions of (1.10), it requires that we know
a fundamental solution of (1.1), and this usually is not the case. In the
exercises below, we shall obtain at least local representations, in the form
of convergent power series, of solutions of (1.10) without knowing a funda-
mental solution of (1.1).
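As a sketch of (1.11) in the simplest setting, consider the scalar real-variable example x′ = x + t, where X(t) = e^t is a fundamental solution; the data b(t) = t, c = 2, and z0 = 0 are arbitrary sample choices for this check, which assumes scipy:

```python
import numpy as np
from scipy.integrate import quad

# Variation of constants for the scalar sample problem x' = x + t:
# x(t) = X(t) [ c + \int_0^t X^{-1}(u) b(u) du ] with X(t) = e^t, b(u) = u.
def x(t, c=2.0, t0=0.0):
    integral, _ = quad(lambda u: np.exp(-u) * u, t0, t)  # X^{-1}(u) b(u)
    return np.exp(t) * (c + integral)

# Closed form of the same integral: \int_0^t u e^{-u} du = 1 - e^{-t}(t + 1),
# hence x(t) = c e^t + e^t - t - 1 = 3 e^t - t - 1 for c = 2.
for t in (0.5, 1.0, 2.0):
    assert np.isclose(x(t), 3.0 * np.exp(t) - t - 1.0)

# Residual check: x'(t) - x(t) - t should vanish.
h = 1e-6
t = 1.0
assert abs((x(t + h) - x(t - h)) / (2 * h) - x(t) - t) < 1e-4
```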

Exercises: If nothing else is said, let G be a simply connected region in
C and consider an inhomogeneous system (1.10).

1. Expanding A(z) and b(z) into power series about a point z0 ∈ G, find
the recursion formula for the power series coefficients of solutions.

2. In the case of a constant matrix A(z) ≡ A, find a necessary and
sufficient condition on A, so that for every vector polynomial b(z) a
solution of (1.10) exists that is also a polynomial of the same degree
as b(z).

3. For

    B(z) = [ 0      0
             b(z)   A(z) ],

show that x(z) solves (1.10) if and only if x̃(z) = [1, x(z)]^T solves the
homogeneous system x̃′ = B(z) x̃ (of dimension ν + 1). Compare this
to the next section on reduced systems.

4. For G = R(0, ρ), ρ > 0, let X(z) be a fundamental solution of (1.1)
with monodromy matrix M. Show that (1.10) has a single-valued
solution x(z), z ∈ G, if and only if we can choose a constant vector c
such that for some z0 ∈ G

    (e^{2πi M} − I) c = ∫_{z0}^{z0 e^{2πi}} X^{−1}(u) b(u) du,   (1.12)

integrating, say, along a circle centered at the origin. Show that a
sufficient condition for this to be true is that no nontrivial solution
of the homogeneous system (1.1) is single-valued in G.
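For constant coefficients, the reduction of Exercise 3 can be verified directly with matrix exponentials; A, b, and x0 below are arbitrary sample data for this sketch, which assumes scipy:

```python
import numpy as np
from scipy.linalg import expm

# With constant A and b, the augmented matrix B = [[0, 0], [b, A]] turns
# x' = A x + b into the homogeneous system on [1, x]^T: the first
# component of exp(z B) @ [1, x0] stays 1, the rest solves the
# inhomogeneous system with initial value x0.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
b = np.array([1.0, 0.5])
x0 = np.array([0.2, -0.3])

B = np.zeros((3, 3))
B[1:, 0] = b
B[1:, 1:] = A

z = 0.7
xt = expm(z * B) @ np.concatenate(([1.0], x0))
assert np.isclose(xt[0], 1.0)

# Compare with the variation-of-constants solution for constant A:
# x(z) = e^{zA} x0 + A^{-1} (e^{zA} - I) b   (A is invertible here).
ref = expm(z * A) @ x0 + np.linalg.solve(A, (expm(z * A) - np.eye(2)) @ b)
assert np.allclose(xt[1:], ref)
```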

1.5 Reduced Systems
In this section we shall be concerned with a system (1.1) whose coefficient
matrix is triangularly blocked. While corresponding results hold true for
upper triangularly blocked matrices, we choose A(z) to have the following
lower triangular block structure:

    A(z) = [ A_11(z)   0         ...   0
             A_21(z)   A_22(z)   ...   0
             ...       ...             ...
             A_μ1(z)   A_μ2(z)   ...   A_μμ(z) ],          (1.13)

with μ ≥ 2, blocks A_jk(z) that are holomorphic in a (common) simply
connected region G, and such that the diagonal blocks are all square ma-
trices of arbitrary sizes. Such systems will be called reduced. If the diagonal
blocks of (1.13) are of type ν_k × ν_k, we sometimes say that (1.1) is reduced
of type (ν_1, . . . , ν_μ).
Along with the “large” system (1.1), it is natural to consider the smaller
systems

    x_k′ = A_kk(z) x_k,   1 ≤ k ≤ μ.                         (1.14)

We show that the computation of a fundamental solution of (1.1) is, aside
from finitely many integrations, equivalent to computing fundamental so-
lutions for (1.14), for every such k:

Theorem 4  Given a matrix A(z) as in (1.13), the system (1.1) has a
fundamental solution of the form

    X(z) = [ X_11(z)   0         ...   0
             X_21(z)   X_22(z)   ...   0
             ...       ...             ...
             X_μ1(z)   X_μ2(z)   ...   X_μμ(z) ],

with X_kk(z) being fundamental solutions of (1.14), and the off-diagonal
blocks X_jk(z), for 1 ≤ k < j ≤ μ, recursively given by

    X_jk(z) = X_jj(z) [ C_jk + ∫_{z0}^{z} X_jj^{−1}(u) Y_jk(u) du ],   z ∈ G,   (1.15)
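In the constant-coefficient special case, the block structure asserted in Theorem 4 can be seen concretely: X(z) = exp(zA) is then a fundamental solution, and it inherits the lower block-triangular form of A, with diagonal blocks exp(z A_kk). A sketch with arbitrary sample blocks, assuming scipy:

```python
import numpy as np
from scipy.linalg import expm

# Sample lower block-triangular A with blocks of sizes 1 and 2.
A11 = np.array([[2.0]])
A22 = np.array([[0.0, 1.0],
                [0.0, 0.0]])
A21 = np.array([[1.0],
                [3.0]])

A = np.zeros((3, 3))
A[:1, :1] = A11
A[1:, :1] = A21
A[1:, 1:] = A22

z = 0.9
X = expm(z * A)   # fundamental solution of x' = A x at the point z

# The block above the diagonal stays zero ...
assert np.allclose(X[:1, 1:], 0.0)
# ... and the diagonal blocks are fundamental solutions of the small
# systems x_k' = A_kk x_k, as in Theorem 4.
assert np.allclose(X[:1, :1], expm(z * A11))
assert np.allclose(X[1:, 1:], expm(z * A22))
```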

