hyperasymptotic approximation of even smaller error; more fool we!

Because we are approximating f(ε) (at best!) by the Stieltjes function S(ε), the error is actually the magnitude of the second term, exponentially larger than exp(−1/ε). Quick to defend the honor of hyperasymptotics, a reviewer argued that this is merely a problem of definition. An

ActaApplFINAL_OP92.tex; 21/08/2000; 16:16; no v.; p.58


Exponential Asymptotics

engineer's answer: a wrong answer is never just a matter of definition, but rather a good reason to lie awake at night, and retain a lawyer.

Weakly nonlocal solitary waves are a nontrivial example of phenomena with exponentially small "ghosts": the solitons can be expanded in non-trivial power series in ε, but the amplitude α of the sinusoidal oscillations of the soliton for large |x| is proportional to exp(−μ/ε) for some constant μ. The terms in the power series are, in the simplest cases, powers of sech(εx), and therefore each term individually decays exponentially fast as |x| → ∞. It follows that standard acceleration methods must fail because reweighting the terms of the power series still gives nothing at infinity, and thus misses the far-field oscillations completely. The Dingle terminants method, which is based on the asymptotics of the power series coefficients, has never been successfully applied to this sort of problem either.

Fortunately, the PKKS method, spectral algorithms, the spectral space asymptotics of Akylas and Yang [5, 325, 326] and the hyperasymptotic scheme of Boyd [68] all work well for nonlocal solitary waves. Nevertheless, the failure of some alternative schemes for this class of problems, because the quantity of interest is invisible to the power series, is a vivid reminder of the truism: fear and caution are healthy character traits in an applied mathematician!

Another pitfall is extending the interval of integration to infinity. For example, the Bessel function I0(z) has a representation that is an integral over a finite interval:

\[
I_0(z) \equiv \exp(z)\,\frac{1}{2\pi}\int_{-\pi}^{\pi} \exp\big(z[\cos(t) - 1]\big)\,dt \qquad (103)
\]

Without approximation, we can make the change of variable 2w² = 1 − cos(t) to obtain

\[
I_0(z) = \exp(z)\,\frac{1}{\pi}\int_{-1}^{1} \frac{\exp\left(-2zw^2\right)}{(1 - w^2)^{1/2}}\,dw \qquad (104)
\]

The usual procedure is to expand the 1/√(1 − w²) as a power series, extend the interval of integration to w ∈ [−∞, ∞], and integrate term-by-term to obtain the expansion given in most references:

\[
I_0(z) \sim \exp(z)\left(\frac{1}{2\pi z}\right)^{1/2}\left\{1 + \frac{1}{8}\,z^{-1} + \frac{9}{128}\,z^{-2} + \ldots\right\} \qquad (105)
\]

The sole reason for the divergence of this series is the extension of the interval. If we expand and integrate term-by-term on the original interval w ∈ [−1, 1], the result is a series that converges, albeit rather


John P. Boyd

slowly because the radius of convergence of the power series under the integrand is just equal to the limit of integration, so that the terms of the integrated series decrease only as O(1/j^{3/2}) for the coefficient of 1/z^j in the series. Why, then, is interval-extension so ubiquitous in asymptotics?

The answer is two-fold. First, the terms of the series are greatly simplified at the price of divergence. The one-term approximation is simplified from an error function to a Gaussian, for example, and the higher order terms of the convergent series become more and more complicated; to the same order as above, the convergent series is

\[
\begin{aligned}
I_0(z) = \frac{\exp(z)}{\pi}\bigg\{ \; & \sqrt{\frac{\pi}{2z}}\,\mathrm{erf}\big(\sqrt{2z}\big) \\
& + \frac{\sqrt{\pi/2}\,\mathrm{erf}\big(\sqrt{2z}\big)}{8\,z^{3/2}} - \frac{\exp(-2z)}{4\,z} \\
& + \frac{3\left[\,-12\sqrt{z}\,\exp(-2z) - 16\,z^{3/2}\exp(-2z) + 3\sqrt{2\pi}\,\mathrm{erf}\big(\sqrt{2z}\big)\right]}{256\,z^{5/2}} + \ldots \bigg\}
\end{aligned} \qquad (106)
\]
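As a sanity check on the grouped terms of the convergent expansion (106), one can compare its three-term truncation against direct quadrature of the integral representation (103). A minimal sketch, using only the standard library; the function names are illustrative:

```python
import math

def i0_ref(z, n=400):
    # Reference I_0(z) from the periodic integral (103) via the midpoint rule,
    # which converges beyond all orders for smooth periodic integrands.
    h = 2 * math.pi / n
    s = sum(math.exp(z * (math.cos(-math.pi + (k + 0.5) * h) - 1.0)) for k in range(n))
    return math.exp(z) * s * h / (2.0 * math.pi)

def i0_convergent3(z):
    # First three grouped terms of the convergent expansion (106).
    e = math.erf(math.sqrt(2.0 * z))
    t0 = math.sqrt(math.pi / (2.0 * z)) * e
    t1 = math.sqrt(math.pi / 2.0) * e / (8.0 * z**1.5) - math.exp(-2.0 * z) / (4.0 * z)
    t2 = 3.0 * (-12.0 * math.sqrt(z) * math.exp(-2.0 * z)
                - 16.0 * z**1.5 * math.exp(-2.0 * z)
                + 3.0 * math.sqrt(2.0 * math.pi) * e) / (256.0 * z**2.5)
    return math.exp(z) * (t0 + t1 + t2) / math.pi

rel_err = abs(i0_convergent3(5.0) / i0_ref(5.0) - 1.0)
# rel_err is of the size of the first omitted grouped term, roughly 6e-4 at z = 5
```

The truncation error is set by the first omitted term of the expansion of (1 − w²)^{−1/2}, consistent with the slow algebraic decay of the coefficients noted above.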

The second reason is that because the convergent series is only slowly

convergent, it is far from obvious that the error can be reduced much

below the superasymptotic limit unless one uses a very large number

of terms in the convergent series. It is more practical to restrict z to

such large values that the superasymptotic approximation is acceptably

accurate, and use the ordinary Taylor series for smaller z.
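The trade-off described above is easy to see numerically: partial sums of the divergent expansion (105) reach a superasymptotic error minimum after roughly 2z terms and then blow up, while direct quadrature of (103) supplies a reference value. A minimal sketch, standard library only:

```python
import math

def i0_ref(z, n=400):
    # Reference I_0(z) from the periodic integral (103) via the midpoint rule.
    h = 2 * math.pi / n
    s = sum(math.exp(z * (math.cos(-math.pi + (k + 0.5) * h) - 1.0)) for k in range(n))
    return math.exp(z) * s * h / (2.0 * math.pi)

def i0_divergent(z, nterms):
    # Partial sum of the divergent expansion (105); the coefficients obey
    # c_j = c_{j-1} * (2j - 1)**2 / (8j), giving 1, 1/8, 9/128, ...
    c, s = 1.0, 1.0
    for j in range(1, nterms):
        c *= (2 * j - 1) ** 2 / (8.0 * j)
        s += c / z**j
    return math.exp(z) / math.sqrt(2.0 * math.pi * z) * s

ref = i0_ref(3.0)
errs = [abs(i0_divergent(3.0, n) / ref - 1.0) for n in range(1, 31)]
# The error dips to a superasymptotic minimum (near n = 8 for z = 3),
# then grows without bound as more divergent terms are added.
```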

Another snare is that exponentially small quantities, when paired with non-trivial powers of the small parameter, are often extremely sensitive to small changes in parameters. For example, the solution of the differential equation
\[
u_{xx} + u = \mathrm{sech}(\epsilon x) + d(\epsilon)\,\epsilon^{2n-1}\,(2n-1)!\;\mathrm{sech}^{2n}(\epsilon x) \qquad (107)
\]

asymptotes to a sinusoidal oscillation with an amplitude α(ε), which is an exponential function of 1/ε. One can choose an O(1) function d(ε) such that α(ε) is zero. And yet if ε = 1/10 and n = 3, the second term in the forcing is more than a thousand times smaller than the first!

The moral is that in physical applications, it is not sufficient merely to calculate the exponentially small effects. One must also look at how small perturbations of the idealized problem might drastically change the exponentially small contributions.
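The sensitivity can be quantified with a toy calculation (the numbers below are illustrative, not derived from (107)): a one-percent perturbation of the constant μ in an amplitude α = exp(−μ/ε) is amplified by the factor 1/ε in the exponent, whereas an algebraic factor changes only in proportion to the perturbation:

```python
import math

eps = 0.1
mu0, mu1 = 1.0, 1.01   # a 1% perturbation of the constant mu (illustrative values)

# Exponentially small amplitude alpha = exp(-mu/eps): the 1% change in mu
# shifts the exponent by 0.01/eps = 0.1, i.e. a roughly 10% change in alpha.
exp_change = math.exp(-mu1 / eps) / math.exp(-mu0 / eps) - 1.0

# An algebraic factor such as mu**3 changes by only about 3%.
alg_change = mu1**3 / mu0**3 - 1.0
```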



16. Asymptotics as Hyperasymptotics for Chebyshev, Fourier and Other Spectral Methods

One important but neglected area of asymptotics is numerical analysis, specifically, approximations to the error as a function of the grid spacing h (or other discretization parameters). For the familiar numerical integration scheme known as the trapezoidal rule, for example, which is defined by

\[
I \equiv \int_{0}^{1} g(y)\,dy \sim T \equiv \frac{1}{M}\left\{\frac{1}{2}\,g(0) + \frac{1}{2}\,g(1) + \sum_{j=1}^{M-1} g\!\left(\frac{j}{M}\right)\right\} \qquad (108)
\]

where the grid spacing h is

\[
h \equiv 1/M, \qquad (109)
\]

the Euler-Maclaurin sum formula gives the following asymptotic series

for the error [143]

\[
I - T \sim -\sum_{j=1}^{N} \frac{B_{2j}}{(2j)!}\,h^{2j}\left[g^{(2j-1)}(1) - g^{(2j-1)}(0)\right] + E_N \qquad (110)
\]

where the B_{2j} are the Bernoulli numbers and the error is

\[
E_N \equiv -h^{2N+2}\,\frac{B_{2N+2}}{(2N+2)!}\,g^{(2N+2)}(\xi) \qquad (111)
\]

with ξ a point somewhere on the interval [0, 1].

Elementary numerical analysis courses usually stop with the observation that since the leading term is proportional to h², the trapezoidal rule is a "second order" method. However, the series in powers of h contains hidden depths. First, if the rule is applied twice with different grid spacings h and 2h, one can take a weighted difference of T(h) and T(2h) such that the quadratic terms cancel in (110):

\[
\frac{4\,T(h) - T(2h)}{3} \sim I + O(h^4) \qquad (112)
\]

This improved integration scheme is "Simpson's Rule". This strategy can be iterated to obtain methods of progressively higher accuracy and complexity, not only for quadrature schemes but for finite difference approximations, too. The generalization to arbitrary order is known as "Richardson extrapolation" [270, 271] or, to use his own term, the "deferred approach to the limit".
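The cancellation in (112) is easy to verify numerically. A minimal sketch, standard library only, with exp as a hypothetical smooth, non-periodic test integrand:

```python
import math

def trap(g, M):
    # Composite trapezoidal rule (108) on [0, 1] with grid spacing h = 1/M.
    h = 1.0 / M
    return h * (0.5 * g(0.0) + 0.5 * g(1.0) + sum(g(j * h) for j in range(1, M)))

g = math.exp                  # smooth, non-periodic test integrand
exact = math.e - 1.0          # exact integral of exp over [0, 1]

err_h = abs(trap(g, 64) - exact)                        # O(h^2) trapezoidal error
richardson = (4.0 * trap(g, 64) - trap(g, 32)) / 3.0    # weighted difference (112)
err_r = abs(richardson - exact)                         # O(h^4): h^2 terms cancel
```

Halving h reduces the plain trapezoidal error by a factor of about four, confirming second order, while the extrapolated value is accurate to fourth order.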



Second, if the integrand g(x) is a periodic function, then g^{(2j−1)}(1) = g^{(2j−1)}(0) to all orders j, and the error expansion reduces to the trivial one:

\[
I \sim T + 0\,h^2 + 0\,h^4 + 0\,h^6 + \ldots \qquad (113)
\]

Within the Poincaré asymptotic framework, this suggests the trapezoidal rule is exact for all periodic functions. Wrong!

The integral of g(x) is proportional to the constant in the Fourier expansion of g(x); the usual formula for the error in computing Fourier coefficients through the trapezoidal rule [55] gives

\[
I - T = -\sum_{j=1}^{\infty} a_{jN} \qquad (114)
\]

where the {a_j} are the exact Fourier cosine coefficients of g(x), i.e.,

\[
a_j \equiv 2\int_{0}^{1} g(x)\cos\big(2\pi j[x - 1/2]\big)\,dx \qquad (115)
\]

(Note that in (114), the degree of the Fourier coefficient is the product of the sum variable j with N, so that only every N-th coefficient appears in the error series.) For a periodic function, it is known that

\[
a_j \sim \nu(j)\,\exp(-2\pi\mu j) \qquad (116)
\]

where μ is the absolute value of that singularity in the complex x-plane which is closest to the real axis, and where the prefactor ν is an algebraic rather than exponential function of j which depends on the type of singularity (simple pole, logarithm, etc.). Inserting this into Eq. (114) gives the correct conclusion that for periodic functions, free of singularity for real x, the error in the trapezoidal rule is

\[
I - T \sim \nu\,\exp(-2\pi\mu/h). \qquad (117)
\]

In other words, the error lies beyond all orders in powers of the grid

spacing h.
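The beyond-all-orders error law (117) can be observed directly. For the periodic integrand g(x) = 1/(5/4 − cos(2πx)) (a test function chosen here for illustration), the nearest complex singularities lie where cos(2πx) = 5/4, at distance μ = ln(2)/(2π) from the real axis, so (117) predicts an error of order exp(−ln(2)/h) = 2^{−M}; the exact integral is 1/√((5/4)² − 1) = 4/3. A minimal sketch, standard library only:

```python
import math

def g(x):
    # Periodic integrand with complex poles where cos(2*pi*x) = 5/4;
    # the nearest poles sit at distance mu = ln(2)/(2*pi) from the real axis.
    return 1.0 / (1.25 - math.cos(2.0 * math.pi * x))

def trap_periodic(M):
    # For a periodic integrand, the trapezoidal rule on [0, 1] with M points
    # reduces to the plain M-point average, since g(0) = g(1).
    return sum(g(j / M) for j in range(M)) / M

exact = 4.0 / 3.0   # integral of g over one period: 1/sqrt(1.25**2 - 1)

err8 = abs(trap_periodic(8) - exact)     # about (8/3) * 2**-8, i.e. ~1e-2
err16 = abs(trap_periodic(16) - exact)   # about (8/3) * 2**-16, i.e. ~4e-5
# Halving h roughly squares the error: geometric, not algebraic, convergence.
```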

Presumably such exponential dependence on 1/h lurks in the Euler-Maclaurin series even for non-periodic functions. We can confirm this suspicion by considering a particular case: a function g(x) whose singularity nearest the interval x ∈ [0, 1] is a simple pole of unit residue at x = −σ. In the limit that the order j → ∞, the (2j−1)-st derivative will be more and more dominated by this singularity. (Recall that the convergence of the power series in x about the origin has a radius of convergence σ controlled by this nearest singularity, and that the coefficients of the power series are the derivatives of g(x) at



the origin.) Using the known asymptotics of the Bernoulli numbers, B_{2j} ∼ (−1)^{j−1}(2j)!/(2^{2j−1}π^{2j}), we find

\[
\frac{B_{2j}}{(2j)!}\,h^{2j}\left[g^{(2j-1)}(1) - g^{(2j-1)}(0)\right] \sim 2(-1)^{j}\,\frac{h^{2j}}{2^{2j}\,\pi^{2j}\,\sigma^{2j}}\;j!, \qquad j \to \infty \qquad (118)
\]
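The factorial growth described by (118) is visible numerically. For the hypothetical integrand g(x) = 1/(x + σ) with σ = 1/2 and a coarse grid spacing h = 1/4 (illustrative values), the Euler-Maclaurin terms of (110) shrink to a minimum and then diverge. A minimal sketch, standard library only:

```python
from fractions import Fraction
import math

def bernoulli(n):
    # Exact Bernoulli numbers B_0..B_n from the recurrence
    # sum_{k=0}^{m} C(m+1, k) B_k = 0 for m >= 1, with B_0 = 1.
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(Fraction(math.comb(m + 1, k)) * B[k] for k in range(m)) / (m + 1)
    return B

sigma, h = 0.5, 0.25   # simple pole of g at x = -sigma; coarse grid spacing
B = bernoulli(24)

def g_deriv(m, x):
    # m-th derivative of g(x) = 1/(x + sigma): (-1)^m m! / (x + sigma)^(m+1)
    return (-1) ** m * math.factorial(m) / (x + sigma) ** (m + 1)

# Magnitude of the j-th Euler-Maclaurin term in (110), j = 1..12.
terms = [abs(float(B[2 * j]) / math.factorial(2 * j) * h ** (2 * j)
             * (g_deriv(2 * j - 1, 1.0) - g_deriv(2 * j - 1, 0.0)))
         for j in range(1, 13)]
# The terms decrease to a minimum near j = 6 or 7, then grow factorially.
```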

This implies one can obtain an improved trapezoidal rule by subtracting

the leading term of the hyperasymptotic approximation to the error in

the ordinary trapezoidal rule. This is obtained by optimally-truncating

the Euler-Maclaurin series and approximating the error by a Dingle

terminant:

\[
E_N \sim \Lambda_N\!\left(\frac{h^2}{4\pi^2\sigma^2}\right) \qquad (119)
\]

One important implication of the factorial divergence of the Euler-Maclaurin series is that it shows that Richardson extrapolation will diverge, too, if applied to arbitrarily high order for fixed h. Romberg integration, which is a popular and robust algorithm for numerical integration, does in fact employ Richardson extrapolation of arbitrary