rational matrix function with poles only at the origin and infinity. Moreover, we shall make the following additional assumptions upon the nature of the singularities at the origin resp. infinity:

1. The origin is supposed to be a regular-singular point of the system, but may not be a singularity of first kind. Moreover, assume that a fundamental solution of the form $X(z) = S(z)\, z^M$, $S(z) = \sum_{n=0}^{\infty} S_n z^n$, has been computed. Note that, owing to the absence of other finite singular points, the power series automatically has an infinite radius of convergence; hence $S(z)$ is an entire function, and $\det S(z) \neq 0$ for every $z \neq 0$.

2. Infinity is supposed to be an essentially irregular singularity with an HLFFS $(\hat F(z), Y(z))$, satisfying the assumptions in Section 9.3. Note that these restrictions are without loss of generality, since some easy normalizing transformations can be used to make them hold. Also, recall from Section 9.3 the definition of the associated functions and their behavior in the cut plane $\mathbb{C}_d$, for every nonsingular direction $d$.

Under these assumptions, let $X_j(z) = F_j(z)\, Y(z)$ be the normal solutions of highest level. Then there exist unique invertible matrices $\Omega_j$, so that $X(z) = X_j(z)\, \Omega_j$, $j \in \mathbb{Z}$. What we are going to show is how the central connection matrices $\Omega_j$ can be computed via an analysis of some functions $\Psi(u; s; k)$, corresponding to the HLNS via Laplace transform.

Let $k \in \mathbb{Z}$, and recall the definition of $j^*(k)$ from p. 147. For $\alpha$ with $d_{j^*(k)-1} - \pi/(2r) < \alpha < d_{j^*(k)} + \pi/(2r)$, consider the integral

$$\Psi(u; s; k) = \frac{r}{2\pi i} \int_0^{\infty(\alpha)} z^{s-1}\, X(z)\, e^{z^r u}\, dz.$$
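As a sanity check on this integral, here is a minimal numerical sketch in Python. The scalar reduction $X(z) = z^{\mu}$, the function names, and all parameter values are our own illustration, not part of the text: we integrate along the ray where $z^r u$ is negative real and compare with the closed form $\frac{1}{2\pi i}\,\Gamma((s+\mu)/r)\,(e^{i\pi}/u)^{(s+\mu)/r}$ that the change of variable $z^r u = e^{i\pi} x$ produces.

```python
import cmath
import math

def psi_quadrature(mu, s, r, u, T=10.0, n=100_000):
    """Midpoint-rule approximation of (r/(2*pi*i)) * int_0^{infty(alpha)}
    z^(s-1) * z^mu * e^(z^r u) dz, truncated at |z| = T; the scalar stand-in
    X(z) = z^mu is our own illustration."""
    alpha = (math.pi - cmath.phase(u)) / r   # ray on which z^r u < 0
    e = cmath.exp(1j * alpha)
    h = T / n
    total = 0j
    for j in range(n):
        t = (j + 0.5) * h                    # midpoint of the j-th cell
        z = t * e
        total += z ** (s - 1) * z ** mu * cmath.exp(z ** r * u) * e * h
    return r / (2j * math.pi) * total

def psi_closed_form(mu, s, r, u):
    """Scalar analogue of the termwise-integrated series (one term)."""
    w = (s + mu) / r
    return math.gamma(w) * (cmath.exp(1j * math.pi) / u) ** w / (2j * math.pi)
```

With, e.g., $\mu = 1/2$, $s = 3/2$, $r = 2$, $u = 1$, the two values agree to quadrature accuracy, which is a direct numerical confirmation of the substitution.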

The assumptions made above imply that $X(z)$ is of moderate growth at the origin, and of exponential growth at most $r$ in arbitrary sectors at infinity. Therefore, the integral converges absolutely and locally uniformly for $\operatorname{Re} s$ sufficiently large and $u$ in a sectorial region (near infinity) of opening $\pi$ and bisecting direction $\pi - r\alpha$. We have $X(z) = S(z)\, z^M$, with an entire function $S(z)$ of exponential growth at most $r$, so that it is justified to integrate the power series expansion of $S(z)$ termwise. Making the change of variable $z^r u = e^{i\pi} x$ in the above integral, we obtain for $s$ and $u$ as above

$$\Psi(u; s; k) = \frac{1}{2\pi i} \sum_{n=0}^{\infty} S_n\, \Gamma\bigl([(n+s)\, I + M]/r\bigr)\, \bigl(e^{i\pi}/u\bigr)^{[(n+s)\, I + M]/r}, \qquad (12.2)$$

where the matrix Gamma function $\Gamma(A)$ here is defined by the integral

$$\Gamma(A) = \int_0^{\infty} x^{A-I}\, e^{-x}\, dx.$$

Observe that this integral converges absolutely for every matrix $A$ whose eigenvalues all have positive real parts. Integrating by parts, we can show $A\, \Gamma(A) = \Gamma(I + A)$. Using this, it is possible to extend the definition of $\Gamma(A)$ to matrices having no eigenvalue equal to a nonpositive integer. Therefore, the expansion (12.2) can serve as holomorphic continuation of $\Psi(u; s; k)$, with respect to $s$, to become a meromorphic function of $s$ with poles at the points

$$s + \mu = -j, \qquad j \in \mathbb{N}_0.$$

This will, however, not be needed here.
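The functional equation $A\,\Gamma(A) = \Gamma(I+A)$ can be checked numerically in a small case. In the sketch below (the triangular $2\times 2$ formula and all helper names are our own illustration), a matrix function of an upper triangular matrix with distinct eigenvalues $a, d$ is computed from the scalar function via $f(A) = \bigl[\begin{smallmatrix} f(a) & b\,(f(a)-f(d))/(a-d) \\ 0 & f(d) \end{smallmatrix}\bigr]$:

```python
import math

def mat_fun(A, f):
    """Apply an analytic scalar function f to an upper triangular 2x2 matrix
    A = [[a, b], [0, d]] with distinct eigenvalues a != d."""
    (a, b), (_, d) = A
    return [[f(a), b * (f(a) - f(d)) / (a - d)], [0.0, f(d)]]

def mat_mul(A, B):
    """Plain 2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3.0, 1.0], [0.0, 2.0]]         # eigenvalues 3 and 2, both positive
gamma_A = mat_fun(A, math.gamma)     # Gamma(A) via the scalar Gamma function
I_plus_A = [[4.0, 1.0], [0.0, 3.0]]

lhs = mat_mul(A, gamma_A)            # A * Gamma(A)
rhs = mat_fun(I_plus_A, math.gamma)  # Gamma(I + A)
```

Both sides come out as $\bigl[\begin{smallmatrix} 6 & 4 \\ 0 & 2 \end{smallmatrix}\bigr]$, illustrating the recursion used above to extend $\Gamma(A)$.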

For the auxiliary functions, defined on p. 147, we may choose $z_0 = 0$, whenever $\operatorname{Re} s$ is large enough. Doing so, we find

$$\Psi(u; s; k) = \sum_{m=1}^{\mu} \Phi_m^*(u; s; k)\, \Omega_m^{(k)},$$

where $\Omega_m^{(k)}$ denotes the $m$th row of blocks of $\Omega_{j^*(k)}$, when this matrix is blocked of type $(s_1, \ldots, s_\mu)$. This shows that $\Psi(u; s; k)$ is holomorphic in $\mathbb{C}_d$. For its singular behavior at the points $u_n$, recall that $\Phi_m^*(u; s; k) = \operatorname{hol}(u - u_n)$ for $m \neq n$, while we have shown in the proof of Theorem 46 (p. 149) that

$$\sum_{\ell=0}^{r-1} \Phi_n^*(u; s; k + \ell) = \Phi_n^*(u; s; k)\, \bigl(I - e^{-2\pi i\, (sI + L_n)}\bigr) + \operatorname{hol}(u - u_n).$$

This shows:

Theorem 62 Under the assumptions made above, we have for every $s$ with $\operatorname{Re} s$ large and so that $\bigl(I - e^{-2\pi i\, (sI + L_n)}\bigr)^{-1}$ exists,

$$\Psi(u; s; k) = \sum_{\ell=0}^{r-1} \Phi_n^*(u; s; k + \ell)\, \bigl(I - e^{-2\pi i\, (sI + L_n)}\bigr)^{-1}\, \Omega_n^{(k)} + \operatorname{hol}(u - u_n),$$

for every $n = 1, \ldots, \mu$.

This identity shows that the central connection problem can theoretically be solved as follows: Compute the matrix $\Psi(u; s; k)$, either by its integral representation or the convergent power series (12.2), for $u$ and $\operatorname{Re} s$ sufficiently large. Then, continue the function with respect to $u$ to the singularities $u_n$, and there use the above identity to compute $\Omega_n^{(k)}$. Doing this for every $n$ allows us to compute the matrix $\Omega_{j^*(k)}$, linking the fundamental solution $X(z)$ to the corresponding normal solution of highest level.

Without going into detail, we mention that given two matrices $\Omega_{j^*(k)}$ and $\Omega_{j^*(k+1)}$, one can compute the Stokes multipliers $V_j$, for $j^*(k+1) + 1 \leq j \leq j^*(k)$, by factoring $\Omega_{j^*(k+1)}\, \Omega_{j^*(k)}^{-1}$ as in the exercises in Section 9.2. Consequently, if we have computed $\Omega_{j^*(k+\ell)}$, for $\ell = 0, \ldots, r - 1$, then all Stokes multipliers of highest level can be found explicitly. However, the knowledge of the Stokes multipliers is not sufficient to find the central connection matrices: Assume that we had chosen $X(z)$ so that $\exp[2\pi i\, M]$ were in Jordan normal form. Moreover, assume all Stokes multipliers of highest level are known. Then the monodromy factor $\exp[2\pi i\, M_j]$ for $X_j(z)$ is given by (9.5). So by continuation of the relation $X(z) = X_j(z)\, \Omega_j$ about infinity we obtain $\Omega_j \exp[2\pi i\, M] = \exp[2\pi i\, M_j]\, \Omega_j$. This then shows that $\Omega_j$ is some matrix which transforms $\exp[2\pi i\, M_j]$ into Jordan form. However, such a matrix is not uniquely determined. In the generic situation where $\exp[2\pi i\, M_j]$ is diagonalizable, $\Omega_j$ is determined up to a right-hand diagonal matrix factor, and this is exactly the degree of freedom we have in choosing a fundamental solution $X(z)$ consisting of Floquet solutions. Therefore, the knowledge of the Stokes multipliers alone does not determine $\Omega_j$.
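The non-uniqueness just described can be seen in a toy $2\times 2$ computation. In the sketch below (the concrete matrices are our own choice), $D$ plays the role of a diagonal $\exp[2\pi i\, M]$ and $E$ that of a diagonalizable $\exp[2\pi i\, M_j]$; whenever $\Omega$ satisfies $\Omega D = E\, \Omega$, so does $\Omega L$ for every invertible diagonal $L$, since diagonal matrices commute:

```python
def mat_mul(A, B):
    """Plain square matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

E = [[2.0, 1.0], [0.0, 3.0]]      # diagonalizable, eigenvalues 2 and 3
D = [[2.0, 0.0], [0.0, 3.0]]      # its diagonal (Jordan) form
Omega = [[1.0, 1.0], [0.0, 1.0]]  # columns are eigenvectors of E
L = [[5.0, 0.0], [0.0, -2.0]]     # arbitrary invertible diagonal factor

Omega2 = mat_mul(Omega, L)        # rescaled eigenvector matrix

# Both Omega and Omega * L intertwine D and E:
ok1 = mat_mul(Omega, D) == mat_mul(E, Omega)
ok2 = mat_mul(Omega2, D) == mat_mul(E, Omega2)
```

So the monodromy relation alone pins down $\Omega$ only up to a right-hand diagonal factor, exactly the degree of freedom noted above.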

13
Applications in Other Areas, and Computer Algebra

In this chapter we shall briefly describe applications of the theory of multisummability to formal power series solutions of equations other than linear ODE. The efforts to explore such applications are far from complete and provide an excellent chance for future research. In a final section we then mention recent results on finding formal solutions for linear ODE with the help of computer algebra.

Suppose that we are given some class of functional equations having

solutions that are analytic functions in one or several variables. Roughly

speaking, we shall then address the following two questions:

• Does such a functional equation admit formal solutions that, aside from elementary functions such as exponentials, logarithms, general powers, etc., involve formal power series in one or several variables? Can one, perhaps, even find a family of formal solutions that is complete in some sense?

• Given such a formal solution, are the formal power series occurring,

if not already convergent, summable in some sense or another, and

if so, are the functions obtained by replacing the formal series with

their sums then solutions of the same functional equation?

Two general comments should be made beforehand: First, it is not at all clear whether one should in all cases consider formal solutions involving formal power series; e.g., one could instead consider formal factorial series. On the one hand, each formal power series can be formally rewritten as a formal factorial series and vice versa, so both approaches may seem equivalent. On the other hand, however, it may be easier to find the coefficients for a factorial series solution directly from the underlying functional equation, and the question of summation for a factorial series may have an easier answer than for the corresponding power series. Since we have only discussed summation of formal power series, we shall here restrict to that case, but mention a paper by Barkatou and Duval [48], concerning summation of formal factorial series. The second comment we wish to make concerns the question of summation of formal power series in several variables: In the situations we are going to discuss we shall always treat all but one variable as (temporarily fixed) parameters, thus leaving us with a series in the remaining variable, with coefficients in some Banach space. It is for this reason that we have developed the theory of multisummability in such a relatively general setting. While this approach is successful in some situations, there are other cases indicating that one should better look for a summation method that treats all variables simultaneously, but so far nobody has found such a method!

13.1 Nonlinear Systems of ODE

Throughout this section, we shall be concerned with nonlinear systems of the following form:

$$z^{-r+1}\, x' = g(z, x), \qquad (13.1)$$

where $r$, the Poincaré rank, is a nonnegative integer, $x = (x_1, \ldots, x_\nu)^T$, $\nu \geq 1$, and $g(z, x) = (g_1(z, x), \ldots, g_\nu(z, x))^T$ is a vector of power series in $x_1, \ldots, x_\nu$. Let $p = (p_1, \ldots, p_\nu)^T$ be a multi-index, i.e., all $p_j$ are nonnegative integers, and define $x^p = x_1^{p_1} \cdot \ldots \cdot x_\nu^{p_\nu}$. Then such a power series can be written as $g_j(z, x) = g_j(z, x_1, \ldots, x_\nu) = \sum_{p \geq 0} g_{j,p}(z)\, x^p$. The coefficients $g_{j,p}(z)$ are assumed to be given by power series in $z^{-1}$, say, $g_{j,p}(z) = \sum_{m=0}^{\infty} g_{j,p,m}\, z^{-m}$. Throughout, it will be assumed that all these series converge for $|z| > \rho$, with some $\rho \geq 0$ independent of $p$, while the series for $g_j(z, x)$, for every such $z$, converges in the ball $\|x\| < \rho$.

As is common for multi-indices, we write $|p| = p_1 + \ldots + p_\nu$. If it so happens that $g_{j,p}(z) \equiv 0$ whenever $|p| \geq 2$, for every $j$, then (13.1) obviously becomes an inhomogeneous linear system of ODE, whose corresponding homogeneous system is as in (3.1) (p. 37). If $g_{j,0}(z) \equiv 0$ for every $j$, then (13.1) obviously has the solution $x(z) \equiv 0$, and we then say that (13.1) is a homogeneous nonlinear system.
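The multi-index notation and the two special cases just described can be made concrete in a few lines of Python. The dictionary layout and helper names below are our own illustration: a truncated $g_j(z, x)$ is stored as a map from multi-indices $p$ to coefficient functions $g_{j,p}(z)$.

```python
def monomial(x, p):
    """x^p = x_1^{p_1} * ... * x_nu^{p_nu} for a multi-index p."""
    out = 1.0
    for xi, pi in zip(x, p):
        out *= xi ** pi
    return out

def evaluate(g, z, x):
    """Evaluate the truncated series g_j(z, x) = sum_p g_{j,p}(z) x^p."""
    return sum(coeff(z) * monomial(x, p) for p, coeff in g.items())

def is_linear(g):
    """(13.1) is linear when g_{j,p} vanishes for every |p| >= 2."""
    return all(sum(p) < 2 for p in g)

def is_homogeneous(g):
    """(13.1) is homogeneous when the |p| = 0 coefficient vanishes."""
    return all(sum(p) > 0 for p in g)

# Example with nu = 2:  g(z, x) = x_1 + (1/z) x_2 + (2/z) x_1 x_2
g = {
    (1, 0): lambda z: 1.0,
    (0, 1): lambda z: 1.0 / z,
    (1, 1): lambda z: 2.0 / z,
}
```

Here the quadratic term $(2/z)\, x_1 x_2$ makes the example nonlinear, while the absence of a $p = (0, 0)$ coefficient makes it homogeneous, so $x \equiv 0$ solves it.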

It is shown in [24] that homogeneous nonlinear systems have a formal solution $\hat x(z)$, sharing many of the properties of FFS in the linear case:

1. The formal solution $\hat x(z)$ is a formal power series $\hat x(z) = \hat x(z, c) = \sum_{|p| \geq 0} \hat x_p(z)\, c^p$ in parameters $c_1, \ldots, c_\nu$. Its coefficients $\hat x_p(z)$ are finite sums of expressions of the form $\hat f(z)\, z^{\lambda} \log^{k} z\, \exp[p(z)]$, with a formal power series $\hat f$ in $z^{-1}$, a complex constant $\lambda$, a nonnegative integer $k$, and a polynomial $p$ in some root of $z$. In case of a linear system, the coefficients $\hat x_p(z)$ are zero for $|p| \geq 2$, so that then $\hat x(z, c)$ is a linear function of $c_1, \ldots, c_\nu$ and corresponds to an FFS.

2. The proof of existence of $\hat x(z)$ follows very much the same steps as in the linear case: One introduces nonlinear analytic, resp. meromorphic, transformations and shows that by means of finitely many such transformations one can, step by step, simplify the system in some clear sense so that in the end one can “solve” it explicitly. For details, see [22, 24].

3. The formal power series occurring in the coefficients $\hat x_p(z)$ are all multisummable, as is shown in [26].

Despite all the analogies to the linear situation, there are two new difficulties for nonlinear systems: For one, it is not clear whether the formal solution $\hat x(z, c)$ is complete in the sense that every other formal expression solving (13.1) can be obtained from $\hat x(z, c)$ by a suitable choice of the parameter vector $c$. Moreover, suppose that all the formal power series in $\hat x(z, c)$ are replaced by their multisums, so that we obtain a formal power series in the parameter vector $c$, with coefficients which are holomorphic functions in some sectorial region $G$ at infinity. It is a well-known fact, called the small denominator phenomenon, that in general this series diverges. Sufficient conditions for convergence are known; see, e.g., the papers of Iwano [139, 140] and the literature quoted there. For an analysis of the nonlinear Stokes phenomenon under a nonresonance condition, compare Costin [83]. In general, however, it is still open how this series is to be interpreted.

A related but simpler problem for nonlinear systems is as follows: Suppose that (13.1) has a solution in the form of a power series in $z^{-1}$; is this series then multisummable? By now, there are three proofs for the answer being positive, using quite different methods: Braaksma [70] investigated the nonlinear integral equations of the various levels which correspond to (13.1) via Borel transform. Ramis and Sibuya [230] used cohomological methods, while in [21] a more direct approach is taken, very much like the proof of Picard-Lindelöf's existence and uniqueness theorem.

13.2 Difference Equations

While multisummability is a very appropriate tool for linear and nonlinear ODE, things are more complicated for difference equations, as we now shall briefly explain. For a more complete presentation of the theory of holomorphic difference equations, see the recent book of van der Put and Singer [224]. In his Ph.D. thesis, Faber [102] considers extensions to differential-difference equations and more general functional equations that we do not wish to include here. For simplicity we restrict to linear systems of difference equations, although much of what we say is known to extend to the nonlinear situation: Let us consider a system of difference equations of the form