While in principle the computation of a formal fundamental solution of a system (3.1) presents no problem, it can in practice become very tiresome, even for relatively small dimensions. Thus it may be good to know that there are computer algebra packages available that perform these computations. Others are being developed right now, aiming at even better performance and/or giving more detailed information on the structure of the formal fundamental solution. One should note, however, that one certainly cannot hope to express all the terms in such a FFS in closed form. However, it is possible to compute explicitly the exponential polynomials and the formal monodromy matrix, and even to set up the recursion equations from which one may then obtain any finite number of power series coefficients. In the same sense, formal fundamental solutions of difference equations may be obtained with the help of a computer, but we shall not discuss this here.
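To give a flavor of such recursion equations (my own illustration, not an example from the text): for the scalar Euler equation x²y′ + y = x, the ansatz ŷ = Σ a_n x^{n+1} yields a_0 = 1 and a_n = −n a_{n−1}, so a_n = (−1)^n n!, a series with radius of convergence zero. In Python:

```python
# Power-series coefficients of the formal solution of the Euler equation
#   x^2 y' + y = x   (a stand-in illustration, not the systems of the text).
# Matching powers of x gives the recursion
#   a_0 = 1,  a_n = -n * a_{n-1}  (n >= 1),
# so a_n = (-1)^n n! and the series diverges for every x != 0.

def euler_coefficients(count):
    """Return the first `count` coefficients a_0, ..., a_{count-1}."""
    coeffs = [1]  # a_0 = 1 from matching the coefficient of x
    for n in range(1, count):
        coeffs.append(-n * coeffs[-1])
    return coeffs

print(euler_coefficients(5))  # -> [1, -1, 2, -6, 24]
```

Any finite number of coefficients is computed exactly, even though the full series converges nowhere.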

As we have seen previously, scalar equations of arbitrary order can be rewritten as systems. Vice versa, systems can, with the help of a cyclic vector,

be transformed into scalar equations (in Exercise 1.8 on p. 5 we have briefly discussed cyclic vectors near a regular point of an ODE, but one can also show the existence of cyclic vectors in a neighborhood of a singular point [84]).

So in principle, a computer algebra package written for systems may also be used for scalar equations, or vice versa, but there may be differences in performance.
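The scalar-to-system direction mentioned above is the classical companion-matrix construction; the converse, via a cyclic vector, is the harder step. A minimal sketch (my own illustration, with a hypothetical function name) for y^(n) + a_{n−1} y^(n−1) + · · · + a_0 y = 0:

```python
def companion_matrix(a):
    """Companion matrix of y^(n) + a[n-1] y^(n-1) + ... + a[0] y = 0.

    With v = (y, y', ..., y^(n-1)), the scalar equation becomes v' = A v:
    each component satisfies v_k' = v_{k+1}, and the last row of A
    restates the scalar equation itself.
    """
    n = len(a)
    A = [[0.0] * n for _ in range(n)]
    for k in range(n - 1):
        A[k][k + 1] = 1.0          # v_k' = v_{k+1}
    for k in range(n):
        A[n - 1][k] = -a[k]        # y^(n) = -a_0 y - ... - a_{n-1} y^(n-1)
    return A

# y'' + 3y' + 2y = 0  ->  v' = [[0, 1], [-2, -3]] v
print(companion_matrix([2.0, 3.0]))
```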

Algorithms for the scalar case are usually based on the Newton polygon construction and Newton's algorithm [44, 123, 263]. Available programs in Maple include the package DIFFOP (see van Hoeij [122–124]), which can be downloaded at

http://klein.math.fsu.edu/~hoeij/maple.html,

the ELISE package of Dietrich [88] at

www.math2.rwth-aachen.de/~dietrich/elise.html,

and versions of the package DESIR, developed at Grenoble, in Maple and Reduce.

Much work has been done at Grenoble on the system case, with emphasis on algorithmically efficient resolution [45–47, 49, 79, 117, 118]. The package ISOLDE, recently developed by Barkatou and Pflügel in Maple, contains several functions computing formal solutions and other applications for systems. It is, together with DESIR, available at

http://www-lmc.imag.fr/CF/logiciel.html.

14

Some Historical Remarks

Since the middle of the last century, mathematicians and physicists alike

have observed that simple ODE may have solutions in terms of power

series whose coefficients grow at such a rate that the series has a radius of

convergence equal to zero. In fact, it became clear later that every linear

meromorphic system, in the vicinity of a so-called irregular singularity,

has a formal fundamental solution of a certain form. This "solution" can be relatively easily computed, in particular nowadays, when one can use

computer algebra packages, but it will, in general, involve power series

diverging everywhere. It was quite an achievement when it became clear

that formal, i.e., everywhere divergent, power series solving even nonlinear

meromorphic systems of ODE can be interpreted as asymptotic expansions

of certain solutions of the same system. This by now classical theory has

been presented in many books on differential equations in the complex

plane or related topics. The presentations I am most familiar with are the

monographs of Wasow [281] and Sibuya [251]. The most important result

in this context is, in Wasow's terminology, the Main Asymptotic Existence

Theorem: It states that to every formal solution of a system of meromorphic

differential equations, and every sector in the complex plane of sufficiently small opening, one can find a solution of the system having the formal

one as its asymptotic expansion. This solution, in general, is not uniquely

determined, and the proofs given for this theorem, in various degrees of

generality, do not provide a truly efficient way to compute such a solution,

say, in terms of the formal solution. In view of these deficiencies of the

Main Asymptotic Existence Theorem, the following two questions have

been investigated in recent years:

1. Among all solutions of the linear system of ODE that are asymptotic

to a given formal solution in some sector, can we characterize one

uniquely by requiring certain additional properties?

2. If so, can we find a representation for this unique solution, in terms of

the formal one, using an integral operator, or applying a summability

method to the formal power series?

Parallel to these developments in the theory of ODE, scientists also founded

a general theory of asymptotic expansions. Here, the analogue to the Main

Asymptotic Existence Theorem is usually called Ritt's theorem, and is much easier to prove: Given any formal power series and any sector of arbitrary, but finite, opening, there exists a function that is holomorphic in this sector and has the formal power series as its asymptotic expansion. This function is never uniquely determined, not even when the power series converges, because it may not converge toward this function! To overcome

this nonuniqueness, Watson [282, 283] and Nevanlinna [201] introduced a

special kind of asymptotic expansions, now commonly called of Gevrey order s > 0. These have the property that the analogue of Ritt's theorem holds for sectors of opening up to sπ, in which case the function again

is not uniquely determined. If the opening is larger than sπ, however, a

function that has a given formal power series as its expansion of Gevrey order s > 0 may not exist, but if it does, then it is uniquely determined. In case of existence, the function can be represented as the Laplace transform of

another function that is holomorphic at the origin, and whose power series

expansion is explicitly given in terms of the formal power series.
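As a concrete illustration of this Laplace representation (my example, not one from the text): for the Euler series Σ (−1)^n n! x^{n+1}, dividing the coefficients by n! gives the geometric series with sum 1/(1+t), holomorphic at the origin, and f(x) = ∫₀^∞ e^{−t/x}/(1+t) dt is the uniquely determined function with this series as its expansion of Gevrey order 1. A numerical sketch comparing the integral with an optimally truncated partial sum:

```python
import math

# Borel-Laplace summation of the Euler series sum (-1)^n n! x^(n+1):
# dividing by n! gives sum (-1)^n t^n = 1/(1+t), and the sum is
#   f(x) = integral_0^inf exp(-t/x) / (1 + t) dt    (x > 0).

def laplace_sum(x, upper=50.0, steps=200_000):
    """Approximate the Laplace integral by the composite trapezoid rule."""
    h = upper / steps
    total = 0.5 * (1.0 + math.exp(-upper / x) / (1.0 + upper))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-t / x) / (1.0 + t)
    return h * total

def truncated_series(x, terms):
    """Partial sum of the divergent series, truncated after `terms` terms."""
    s, a = 0.0, x          # first term: (-1)^0 0! x^1 = x
    for n in range(terms):
        s += a
        a *= -(n + 1) * x  # next term: (-1)^(n+1) (n+1)! x^(n+2)
    return s

# Near the optimal truncation point n ~ 1/x the partial sums agree with
# the Laplace integral up to an exponentially small error.
print(abs(laplace_sum(0.1) - truncated_series(0.1, 10)))
```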

This achievement in the general theory of asymptotic expansions obviously escaped the attention of specialists in differential equations in the complex domain for quite some time: In a series of papers, Horn [125–128] showed the following result for linear systems of ODE: Let the leading term of the coefficient matrix, at a singularity of second kind, have all distinct eigenvalues. Then for sectors with sufficiently large opening one

has uniqueness in the Main Asymptotic Existence Theorem, and the solution can be represented as a Laplace integral or, equivalently, in terms

of (inverse) factorial series. However, he did not relate his observations to

the general results of Watson and Nevanlinna. Later, Trjitzinsky [266] and

Turrittin [268] treated somewhat more general situations, and they also

pointed out the limitation of this approach to special cases.

In 1978/80, Ramis [225, 226] introduced his notion of k-summability of

formal power series, which may best be interpreted as a formalization of the

ideas of Watson and Nevanlinna. In this terminology, we may interpret the

results of Horn, Trjitzinsky, and Turrittin as saying that, under additional

assumptions, formal solutions of linear meromorphic systems of ODE are

k-summable, for a suitable value of k > 0. In the general case, Ramis proved

that every formal fundamental solution of every such system can be factored

into a finite product of matrix power series (times some explicit functions),

so that each factor is k-summable, but with the number k depending upon

the factor. In my treatment of first-level formal solutions [8–10, 12, 40]

I had, more or less by accident, independently obtained the same result.

However, this factorization of formal solutions is not truly effective, and

the factors are not uniquely determined, so that this result does not really

allow us to compute the resulting matrix function from the formal series.

More recently, Ecalle [94–96] presented a way to achieve this computation, introducing his definition of multisummability. In a sense, his method differs from Ramis's definition of k-summability by cleverly enlarging the

class of functions to which the Laplace operator, in some generalized form,

can be applied. Another way of interpreting his method is by realizing that

Ramis's k-summability is in essence a special case of what is called a Moment Summability Method. The rapidly growing coefficients of the formal

power series are divided by a moment sequence to produce a convergent

power series; then, the resulting holomorphic function is inserted into an

integral operator, and the function so obtained is regarded as the sum of

the original power series. Multisummability may be seen as the concept of

iterating certain Moment Summability Methods. Given a power series with

a radius of convergence equal to zero, we divide by a moment sequence

as in the case of k-summability. The resulting series, however, will in general not be convergent, but instead is summable by some other Moment

Method, which then produces a function to which the integral operator can

be applied.

The idea of iteration of summability methods was already familiar to

Hardy [112] and Good [110], but it was Ecalle who realized its potential

with respect to formal solutions of ODE. He stated without proofs a large

number of results concerning properties and applications of multisummability, e.g., to formal solutions of nonlinear systems of ODE. Based upon

the described factorization of formal solutions in the linear case, it was

more or less evident that multisummability applied to all formal solutions

of linear systems. However, for nonlinear ones, the first complete proof of this fact was given by Braaksma [70]. Independent proofs, using different

techniques, are due to Ramis and Sibuya [230] and myself [21].

Before Ecalle founded the theory of multisummability, researchers attacked the first one of the two questions raised above following different approaches: The main reason for wanting to have a unique solution that, say, near the point at infinity, is asymptotic to the formal fundamental solution in some (small) sector lies in the fact that one wanted to have a

clear description of what is called Stokes' phenomenon. This term refers to the fact that in different sectors one finds different solutions with the same asymptotic behavior as the variable tends to infinity. One way out of the dilemma was simply to select a solution for each sector, study their interrelations, i.e., their dependence upon the sector, and then investigate how these relations change when one picks a different set of solutions. This line of approach leads to a description of Stokes' phenomenon in terms

of equivalence classes of matrices; see Sibuya's monograph [251] and the

literature quoted there.

A somewhat more explicit approach was investigated by Jurkat, Lutz,

and myself in [33, 34, 36, 143]: We proved the existence and uniqueness of a

family of normal solutions that in certain sectors are asymptotic to a formal

fundamental solution and that have a Stokes' phenomenon of an especially simple form. Meanwhile, a different characterization of normal solutions as

solutions of certain integral equations has also been obtained [23].

Neither of the approaches mentioned above led to a direct representation formula for the corresponding solutions. The theory of multisummability, however, provides such a representation for the normal solutions [31].

In addition, it also allows a detailed analysis of the Stokes phenomenon,

if not to say the computation of the Stokes matrices, through what Ecalle

has called the analysis of resurgent functions. This general theory is very useful for many problems other than the computation of Stokes' matrices. Aside from Ecalle's work, an introduction to this area, along with some of its applications, can be found in a book by Candelpergher, Nosmas, and

Pham [78].

For further presentations of the recent history of linear systems in the

complex plane, partially taking a more geometrical or algebraical point

of view, see Bertrand [51], Majima [181], Babbitt and Varadarajan [5], or

Varadarajan [274]. Some historical remarks on asymptotic power series and

summability can be found in Ramis's booklet [229].

Appendix A

Matrices and Vector Spaces

We assume the reader to be familiar with the basic theory of matrices and

vectors with complex entries, but we shall outline some notation and state

some additional results on matrix algebra resp. systems of linear equations

used in the book.

Matrices, resp. vectors, will normally be denoted by upper case, resp. lower case, letters, while their entries will be the corresponding lower case letters bearing two indices, resp. one. We expect the reader to know about the concepts of characteristic polynomials, eigenvalues and eigenvectors of square matrices, and triangular resp. diagonal matrices, and we shall write diag [λ1, . . . , λν] for the diagonal matrix with diagonal entries λμ. We denote by I, or Iν, the unit matrix of type ν × ν, and write ek for its kth column, i.e., the kth unit vector. The symbol A^T shall indicate the transpose of a matrix A. The spectrum of A will be the set of all eigenvalues of A.

As usual, we say that two square matrices A and B are similar provided

that an invertible matrix T exists for which A T = T B. A well-known

important result from matrix theory states that every square matrix A is

similar to one in Jordan canonical form, or to a Jordan matrix. Here, Jordan matrices are assumed to be lower triangular matrices. Their diagonal entries are equal to the eigenvalues, with equal ones appearing consecutively. Nonzero entries off the diagonal are always equal to one and occur only in such places in the first subdiagonal where the two corresponding eigenvalues coincide. In other words, a Jordan matrix is the direct sum, in the sense of Section A.2, of so-called Jordan blocks; such a block has the form J = λI + N, λ ∈ C, where N is the nilpotent matrix with ones in all places of the first subdiagonal, and zeros elsewhere. However, observe

that the letters J and N are not only used for such Jordan blocks, but for general Jordan resp. nilpotent matrices as well! We emphasize that every Jordan matrix J can be written as J = Λ + N, with a diagonal matrix Λ and a nilpotent matrix N, such that the two commute. For a nilpotent matrix N, the smallest natural number m with N^m = 0 will be called the order of nilpotency of N. This order equals the size of the largest Jordan block of N.
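As a small check of the last statement (my own illustration; the helper names are made up), one can compute the order of nilpotency for a direct sum of lower-triangular Jordan blocks:

```python
# Order of nilpotency of the nilpotent part N of a Jordan matrix.
# For a direct sum of lower-triangular Jordan blocks of sizes
# s_1, ..., s_r, the smallest m with N^m = 0 is max s_i.

def nilpotent_part(block_sizes):
    """N for the Jordan matrix with the given (lower-triangular) blocks."""
    n = sum(block_sizes)
    N = [[0] * n for _ in range(n)]
    row = 0
    for s in block_sizes:
        for i in range(s - 1):
            N[row + i + 1][row + i] = 1  # ones on the first subdiagonal
        row += s
    return N

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def nilpotency_order(N):
    """Smallest m with N^m = 0."""
    P, m = N, 1
    while any(any(row) for row in P):
        P, m = matmul(P, N), m + 1
    return m

print(nilpotency_order(nilpotent_part([3, 2])))  # -> 3
```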

In estimates we use a fixed, but arbitrary, norm for matrices and vectors, which for both will be denoted by the same symbol ‖ · ‖, and which is assumed to obey the following additional rules:

‖I‖ = 1,   ‖A B‖ ≤ ‖A‖ ‖B‖,   ‖A x‖ ≤ ‖A‖ ‖x‖,

whenever the products make sense. One can take ‖A‖ = sup_k Σ_j |a_kj|, ‖x‖ = sup_k |x_k|, but nowhere in the book will we use any concrete choice.

A.1 Matrix Equations

In dealing with systems of ODE in the complex domain, one frequently is

required to solve matrix equations of the following form:

• Suppose that A ∈ C^{k×k}, B ∈ C^{j×j}, and C ∈ C^{k×j}, for k, j ∈ N, are given. Find some, or the set of all, X ∈ C^{k×j} such that

A X − X B = C.   (A.1)

Note that (A.1) is nothing but a system of linear equations for the entries of X, since one can form a vector x ∈ C^{j·k} containing the entries of X in any order, and rewrite (A.1) in the form Ã x = c, with a square matrix Ã built from A and B, and the vector c containing the entries of C. We shall, however, not do this here, but instead work with (A.1) directly. The theory of such equations is well understood, and we here require two well-known lemmas (see, e.g., [281]):

Lemma 24 Suppose that A ∈ C^{k×k} and B ∈ C^{j×j}, for k, j ∈ N, have disjoint spectra, i.e., do not have an eigenvalue in common. Then for every C ∈ C^{k×j} the matrix equation (A.1) has a unique solution X ∈ C^{k×j}.
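Numerically, Lemma 24 and the columnwise recursion used in its proof below can be checked directly. A minimal pure-Python sketch of mine (function names hypothetical), assuming B is already a lower-triangular Jordan matrix J = diag(λ) + N:

```python
def solve_linear(M, b):
    """Solve M x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [u - f * v for u, v in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_sylvester_jordan(A, lam, delta, C):
    """Solve A X - X J = C for J = diag(lam) + N lower-triangular Jordan,
    where delta[mu] = 1 marks a one in place (mu+1, mu) of N and the
    last entry of delta is 0.  The columns of X are found from
        (A - lam_mu I) x_mu = c_mu + delta_mu x_{mu+1},
    solved backward from the last column, as in the proof of Lemma 24."""
    k, j = len(A), len(lam)
    X = [[0.0] * j for _ in range(k)]
    for mu in range(j - 1, -1, -1):
        rhs = [C[i][mu] + (X[i][mu + 1] if delta[mu] else 0.0)
               for i in range(k)]
        M = [[A[i][l] - (lam[mu] if i == l else 0.0) for l in range(k)]
             for i in range(k)]
        col = solve_linear(M, rhs)
        for i in range(k):
            X[i][mu] = col[i]
    return X

# Example: spectra {2, 3} and {0} are disjoint, so X exists and is unique.
A = [[2.0, 1.0], [0.0, 3.0]]
lam, delta = [0.0, 0.0], [1, 0]      # J = [[0, 0], [1, 0]]
C = [[1.0, 0.0], [0.0, 1.0]]
X = solve_sylvester_jordan(A, lam, delta, C)
```

One can verify that A X − X J reproduces C; for a general B one would first transform to Jordan form, as the proof does.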

Proof: Let T be such that J = T^{−1} B T is in (lower triangular) Jordan form. Then (A.1) holds if and only if A X̃ − X̃ J = C̃, with X̃ = X T, C̃ = C T. Hence we may restrict to the case of B = J = diag [λ1, . . . , λj] + N, with the elements of the nilpotent matrix N equal to zero except for some ones directly below the diagonal. Denoting the columns of X resp. C by xμ resp. cμ, we see that (A.1) is equivalent to

(A − λμ I) xμ = cμ + δμ xμ+1,   1 ≤ μ ≤ j,

with δμ either zero or one, and in particular δj = 0. According to our assumption, A − λμ I is always invertible; hence these equations allow us to