S(\epsilon) = \sum_{j=0}^{N} (-\epsilon)^{j} \int_{0}^{\infty} \exp(-t)\, t^{j}\, dt \; + \; E_{N}(\epsilon) \qquad (7)

where

E_{N}(\epsilon) \equiv \int_{0}^{\infty} \frac{\exp(-t)\,(-\epsilon t)^{N+1}}{1 + \epsilon t}\, dt \qquad (8)

The integrals in (3) are special cases of the integral definition of the Γ-function and so can be performed explicitly to give

S(\epsilon) = \sum_{j=0}^{N} (-1)^{j}\, j!\, \epsilon^{j} + E_{N}(\epsilon) \qquad (9)

ActaApplFINAL_OP92.tex; 21/08/2000; 16:16; no v.; p.15

16 John P. Boyd

Eqs. (5)-(9) are exact. If the integral E_N(ε) is neglected, then the summation is the first (N+1) terms of an asymptotic series. Both Van Dyke's principle and Dyson's argument forecast that this series is divergent.
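The divergence is easy to see numerically. The following sketch (an illustrative check, not from the original text; the value ε = 0.1 is an arbitrary choice) tabulates the magnitudes of the terms (-1)^j j! ε^j of the series in (9): they shrink until j is near 1/ε and then grow factorially, so the series has zero radius of convergence.

```python
import math

eps = 0.1  # illustrative value of the small parameter (not from the text)

def a(j):
    """j-th term of the asymptotic series for S(eps): (-1)^j * j! * eps^j."""
    return (-1) ** j * math.factorial(j) * eps ** j

sizes = [abs(a(j)) for j in range(41)]

# Terms decrease until j is near 1/eps = 10, then grow without bound.
assert sizes[10] < sizes[0]
assert sizes[40] > 1e6
assert min(range(41), key=sizes.__getitem__) in (9, 10)
```

For ε = 0.1 the smallest term occurs near j = 10 = 1/ε, which is exactly the optimal-truncation rule that appears below.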

The exponential exp(-t) varies on a length scale of O(1), where O() is the usual "Landau gauge" or "order-of-magnitude" symbol. In contrast, the denominator depends on t only as εt, that is, varies on a "slow" length scale which is O(1/ε). Dependence on two independent scales, i.e., t and (εt), is van Dyke's "Mark of Divergence".

When ε is negative, the integrand of the Stieltjes function is singular on the integration interval because of the simple pole at t = -1/ε. This strongly (and correctly) suggests that S(ε) is not analytic at ε = 0, as analyzed in detail in [19]. Just as for Dyson's quantum problems, the radius of convergence of the power series must be zero.

A deeper reason for the divergence of the ε-series is that Taylor-expanding 1/(1 + εt) in the integrand of the Stieltjes function is an act of inspired stupidity. The inspiration is that an integral which cannot be evaluated in simple closed form is converted to a power series with explicit, analytic coefficients. The stupidity is that the domain of convergence of the geometric series is

|t| < 1/\epsilon \qquad (10)

because of the simple pole of 1/(1 + εt) at t = -1/ε. Unfortunately, the domain of integration is semi-infinite. It follows that the Taylor expansion is used beyond its interval of validity. The price for this mathematical crime is divergence.
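A short numerical check of (10) (illustrative only; ε = 0.5 is an arbitrary choice): partial sums of the geometric series for 1/(1 + εt) converge rapidly for t inside the disk |t| < 1/ε and blow up outside it.

```python
eps = 0.5  # illustrative; the convergence disk is |t| < 1/eps = 2

def exact(t):
    """The denominator factor 1/(1 + eps*t)."""
    return 1.0 / (1.0 + eps * t)

def geometric_partial(t, K):
    """Partial sum of the geometric series for 1/(1 + eps*t)."""
    return sum((-eps * t) ** k for k in range(K + 1))

# Inside the disk (t = 1 < 2) the series converges rapidly ...
assert abs(geometric_partial(1.0, 50) - exact(1.0)) < 1e-8
# ... outside it (t = 3 > 2) the partial sums run away from the true value.
assert abs(geometric_partial(3.0, 50) - exact(3.0)) > 1e3
```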

The reason that the asymptotic series is useful anyway is that the integrand is exponentially small in the region where the expansion of 1/(1 + εt) is divergent. Split the integral into two parts, one on the interval where the denominator expansion is convergent, the other where it is not, as

S(\epsilon) = S_{con}(\epsilon) + S_{div}(\epsilon) \qquad (11)

where

S_{con}(\epsilon) \equiv \int_{0}^{1/\epsilon} \frac{\exp(-t)}{1+\epsilon t}\, dt, \qquad S_{div}(\epsilon) \equiv \int_{1/\epsilon}^{\infty} \frac{\exp(-t)}{1+\epsilon t}\, dt \qquad (12)

Since exp(-t)/(1 + εt) is bounded from above by exp(-t)/2 for all t ≥ 1/ε, it follows that

S_{div}(\epsilon) \le \frac{\exp(-1/\epsilon)}{2} \qquad (13)


Exponential Asymptotics

Thus, one can approximate the Stieltjes function as

S(\epsilon) \approx S_{con}(\epsilon) + O(\exp(-1/\epsilon)) \qquad (14)

The magnitude of that part of the Stieltjes function which is inaccessible to a convergent expansion of 1/(1 + εt) is proportional to exp(-1/ε). This suggests that the best one can hope to wring from the asymptotic series is an error no smaller than the order-of-magnitude of S_div(ε), that is, O(exp(-1/ε)).
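The bound (13) can be checked numerically. The sketch below is illustrative, not from the original: ε = 0.25 is an arbitrary choice, the semi-infinite integral is truncated where exp(-t) is negligible, and a hand-rolled Simpson rule stands in for a library quadrature.

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

eps = 0.25  # illustrative value
integrand = lambda t: math.exp(-t) / (1 + eps * t)

# S_div: the piece of S(eps) beyond the convergence disk, t >= 1/eps.
s_div = simpson(integrand, 1 / eps, 1 / eps + 40)  # exp(-t) negligible past this
bound = math.exp(-1 / eps) / 2

assert 0 < s_div <= bound  # Eq. (13)
```

The computed S_div comes out within a factor of two of the bound, so the exp(-1/ε) scale is not an artifact of a loose estimate.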

5. Hyperasymptotics for the Stieltjes Function

It is possible to break the superasymptotic constraint to obtain a more accurate "hyperasymptotic" approximation by inspecting the error integrals E_N(ε), which are illustrated in Fig. 2 for a particular value of ε. The crucial point is that the maximum of the integrand shifts to larger and larger t as N increases. When N ≤ 2, the peak (for ε = 1/3) is still within the convergence disk of the geometric series. For larger N, however, the maximum of the integrand occurs for T > 1, that is, for t > 1/ε. (Ignoring the slowly varying denominator 1/(1 + εt), one can show by differentiating exp(-t) t^{N+1} that the maximum occurs at t = N + 1.) When (N + 1) ≥ 1/ε, the geometric series diverges in the very region where the integrand of E_N has most of its amplitude. Continuing the asymptotic expansion to larger N will merely accumulate further error.
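The optimal-truncation rule can be verified directly. In the sketch below (illustrative choices throughout: ε = 0.1, S(ε) computed by truncating the integral at t = 40 and applying a simple Simpson rule), the error of the partial sums is smallest near N = 1/ε - 1 and grows again for larger N.

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

eps = 0.1  # illustrative value
S_true = simpson(lambda t: math.exp(-t) / (1 + eps * t), 0.0, 40.0)

def partial_sum(N):
    """First N+1 terms of the asymptotic series, Eq. (9) without E_N."""
    return sum((-1) ** j * math.factorial(j) * eps ** j for j in range(N + 1))

errors = [abs(partial_sum(N) - S_true) for N in range(25)]
N_best = min(range(25), key=errors.__getitem__)

assert abs(N_best - (1 / eps - 1)) <= 2  # minimum error near N = 1/eps - 1 = 9
assert errors[-1] > errors[N_best]       # pushing N higher only hurts
```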

The key to a hyperasymptotic approximation is to use the information that the error integral is peaked at t = 1/ε. Just as asymptotic series can be derived by several different methods, similarly "hyperasymptotics" is not a single algorithm, but rather a family of siblings. Their common theme is to append a second asymptotic series, based on different scaling assumptions, to the "superasymptotic" approximation.

One strategy is to expand the denominator of the error integral E_{N_{optimum}}(ε) in powers of (t - 1/ε) instead of t. In other words, expand the integrand about the point where it is peaked (when N = N_{optimum}(ε) ≈ 1/ε - 1). The key identity is

\frac{1}{1+\epsilon t} = \frac{1}{2}\, \frac{1}{1 + \frac{1}{2}(\epsilon t - 1)} = \frac{1}{2} \sum_{k=0}^{M} \left(-\frac{1}{2}\right)^{k} (\epsilon t - 1)^{k} + \cdots \qquad (15)




Figure 2. The integrands of the first six error integrals for the Stieltjes function, E_0, E_1, ..., E_5, for ε = 1/3, plotted as functions of the "slow" variable T ≡ εt.

S(\epsilon) = \sum_{j=0}^{N} (-1)^{j}\, j!\, \epsilon^{j} + \frac{1}{2} \sum_{k=0}^{M} \int_{0}^{\infty} \exp(-t)\, (-\epsilon t)^{N+1} \left(\frac{1-\epsilon t}{2}\right)^{k} dt + H_{NM}(\epsilon) \qquad (16)

where the hyperasymptotic error integral is

H_{NM}(\epsilon) \equiv \frac{1}{2} \int_{0}^{\infty} \frac{\exp(-t)}{1+\epsilon t}\, (-\epsilon t)^{N+1} \left(-\frac{1}{2}\right)^{M+1} (\epsilon t - 1)^{M+1}\, dt \qquad (17)

A crucial point is that the integrand of each term in the hyperasymptotic summation is exp(-t) multiplied by a polynomial in t. This means that the (N, M)-th hyperasymptotic expansion is just a weighted sum of the first (N + M + 1) terms of the original divergent series. The change of variable made by switching from (εt) to (εt - 1) is equivalent to the "Euler sum-acceleration" method, an ancient and well-understood



method for improving the convergence of slowly convergent or divergent

series.

Let

a_{j} \equiv (-\epsilon)^{j}\, j! \qquad (18)

S_{N}^{Superasymptotic} \equiv \sum_{j=0}^{[1/\epsilon - 1]} a_{j} \qquad (19)

where [m] denotes the integer nearest m for any quantity m and where the upper limit on the sum is

N_{optimum}(\epsilon) = [1/\epsilon - 1] \qquad (20)

Then the Euler acceleration theory [318, 70] shows

S_{0}^{Hyperasymptotic} \equiv S_{N}^{Superasymptotic} + \frac{1}{2} a_{N+1} \qquad (21)

S_{1}^{Hyperasymptotic} \equiv S_{N}^{Superasymptotic} + \frac{3}{4} a_{N+1} + \frac{1}{4} a_{N+2}

S_{2}^{Hyperasymptotic} \equiv S_{N}^{Superasymptotic} + \frac{7}{8} a_{N+1} + \frac{1}{2} a_{N+2} + \frac{1}{8} a_{N+3}
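These weighted sums are easy to test numerically. The sketch below is illustrative (ε = 0.2 is an arbitrary choice, and S(ε) is evaluated by a simple Simpson rule on a truncated interval): it forms the superasymptotic sum at N = 1/ε - 1 = 4 and then the Euler-weighted corrections of Eq. (21), whose errors fall well below the superasymptotic one.

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

eps = 0.2  # illustrative value
S_true = simpson(lambda t: math.exp(-t) / (1 + eps * t), 0.0, 40.0)

a = lambda j: (-eps) ** j * math.factorial(j)   # Eq. (18)
N = round(1 / eps - 1)                          # optimal truncation, N = 4 here
S_super = sum(a(j) for j in range(N + 1))       # Eq. (19)

# Euler-weighted corrections, Eq. (21)
S_hyper = [
    S_super + a(N + 1) / 2,
    S_super + 3 * a(N + 1) / 4 + a(N + 2) / 4,
    S_super + 7 * a(N + 1) / 8 + a(N + 2) / 2 + a(N + 3) / 8,
]

err_super = abs(S_super - S_true)
errs = [abs(s - S_true) for s in S_hyper]

assert all(e < err_super for e in errs)  # every correction beats S_super ...
assert min(errs) < err_super / 20        # ... and the best beats it handily
```

Note that the successive hyperasymptotic errors need not decrease monotonically term by term; as the text stresses, the Euler-transformed series is still divergent, and only the minimum error improves.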

The lowest order hyperasymptotic approximation estimates the error in the superasymptotic approximation as roughly one-half a_{N+1}, or explicitly

E_{N} \sim \frac{1}{2} (-1)^{N+1} (N+1)!\, \epsilon^{N+1} \qquad [\epsilon \approx 1/(N+1)] \qquad (22)

\sim \sqrt{\frac{\pi}{2\epsilon}}\, \exp\left(-\frac{1}{\epsilon}\right) \qquad [\epsilon = 1/(N+1)]

This confirms the claim, made earlier, that the superasymptotic error is an exponential function of 1/ε.
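The estimate (22) can itself be checked numerically. The sketch below (illustrative; Simpson quadrature on a truncated interval, two arbitrary values of ε) compares the actual superasymptotic error at N = 1/ε - 1 with the prediction √(π/(2ε)) exp(-1/ε).

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

ratios = []
for eps in (0.1, 0.05):
    S_true = simpson(lambda t: math.exp(-t) / (1 + eps * t), 0.0, 60.0)
    N = round(1 / eps) - 1                    # optimal truncation
    S_super = sum((-eps) ** j * math.factorial(j) for j in range(N + 1))
    predicted = math.sqrt(math.pi / (2 * eps)) * math.exp(-1 / eps)
    ratios.append(abs(S_super - S_true) / predicted)

# The measured error tracks sqrt(pi/(2*eps)) * exp(-1/eps) to O(1) accuracy.
assert all(0.3 < r < 3 for r in ratios)
```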

Fig. 3 illustrates the improvement possible by using the Euler transform. A minimum error still exists; Euler acceleration does not eliminate the divergence. However, the minimum error is roughly squared, that is, twice as many digits of accuracy can be achieved for a given ε [273, 274], [249], [77].

However, a hyperasymptotic series can also be generated by a completely different rationale. Fig. 4 shows how the integrand of the error integral E_N changes with ε when N = N_{optimum}(ε): the integrand becomes narrower and narrower. This narrowness can be exploited by Taylor-expanding the denominator of the integrand in powers of 1 - εt, which is equivalent to the Euler acceleration of the regular asymptotic series, as already noted. However, the narrowness of the integrand also implies that one may make approximations in the numerator, too.



[Figure 3: errors, plotted on a logarithmic scale]