It can be rigorously justified for some classes of asymptotic series
[158, 241, 169, 106, 107, 285].
To replace the lengthy, jaw-breaking phrase "optimally-truncated
asymptotic series", Berry and Howls coined a neologism [35, 30] which
is rapidly gaining popularity: "superasymptotic". A more compelling
reason for new jargon is that the standard definition of asymptoticity
(Def. 1 above) is a statement about powers of ε, but the error in an
optimally-truncated divergent series is usually an exponential function
of the reciprocal of ε.

ActaApplFINAL_OP92.tex; 21/08/2000; 16:16; no v.; p.10
Exponential Asymptotics

Definition 3 (Superasymptotic). An optimally-truncated asymptotic
series is a "superasymptotic" approximation. The error is typically
O(exp(−q/ε)) where q > 0 is a constant and ε is the small parameter
of the asymptotic series. The degree N of the highest term retained in
the optimal truncation is proportional to 1/ε.

Fig. 1 illustrates the errors in the asymptotic series for the Stieltjes
function (defined in the next section) as a function of N for fifteen
different values of ε. For each ε, the error dips to a minimum at N ≈
1/ε as the perturbation order N increases. The minimum error for each
ε is the "superasymptotic" error.
Also shown is the theoretical prediction that the minimum error
for a given ε is (π/(2ε))^{1/2} exp(−1/ε), where N_optimum(ε) ∼ 1/ε − 1.
For this example, both the exponential factor and the proportionality
constant will be derived in Sec. 5.
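The dip-then-growth pattern of Fig. 1 is easy to reproduce numerically. The sketch below (assuming numpy, and using the standard divergent expansion S(ε) ∼ Σ_j (−1)^j j! ε^j of the Stieltjes function, which the article derives later) compares partial sums against a Gauss-Laguerre reference value and locates the minimum error:

```python
import numpy as np
from math import factorial

def stieltjes(eps, n=60):
    # Reference value of S(eps) = ∫_0^∞ exp(-t)/(1 + eps*t) dt:
    # Gauss-Laguerre quadrature builds the weight exp(-t) into the rule.
    t, w = np.polynomial.laguerre.laggauss(n)
    return float(np.sum(w / (1.0 + eps * t)))

def partial_sum(eps, N):
    # Partial sum through order N of the divergent series sum_j (-1)^j j! eps^j.
    return sum((-1) ** j * factorial(j) * eps ** j for j in range(N + 1))

eps = 0.1
exact = stieltjes(eps)
errors = [abs(partial_sum(eps, N) - exact) for N in range(21)]
N_best = int(np.argmin(errors))          # the error dips near N ≈ 1/eps
superasymptotic_error = errors[N_best]
prediction = np.sqrt(np.pi / (2 * eps)) * np.exp(-1.0 / eps)
```

For ε = 1/10 the minimum falls near N ≈ 1/ε, the achieved error is within a modest factor of the prediction (π/(2ε))^{1/2} exp(−1/ε), and pushing N past the optimum makes the error grow again.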
The definition of "superasymptotic" makes a claim about the exponential
dependence of the error which is easily falsified. Merely by
redefining the perturbation parameter, we could, for example, make
the minimum error be proportional to the exponential of 1/ε^γ where γ
is arbitrary. Modulo such trivial rescalings, however, the superasymptotic
error is indeed exponential in 1/ε for a wide range of divergent
series [30, 72].
The emerging art of "exponential asymptotics" or "beyond-all-orders"
perturbation theory has made it possible to improve upon optimal truncation
of an asymptotic series, and calculate quantities "below the radar
screen", so to speak, of the superasymptotic approximation. It will not
do to describe these algorithms as the calculation of exponentially small
quantities since the superasymptotic approximation, too, has an accuracy
which is O(exp(−q/ε)) for some constant q. Consequently, Berry
and Howls coined another term to label schemes that are better than
mere truncation of a power series in ε:

Definition 4. A hyperasymptotic approximation is one that achieves
higher accuracy than a superasymptotic approximation by adding one
or more terms of a second asymptotic series, with different scaling
assumptions, to the optimal truncation of the original asymptotic
expansion. (With another rescaling, this process can be iterated by
adding terms of a third asymptotic series, and so on.)

All of the methods described below are "hyperasymptotic" in this
sense, although in the process of understanding them, we shall acquire
a deeper understanding of the mathematical crimes and genius that
underlie asymptotic expansions and the superasymptotic approximation.

John P. Boyd

[Figure 1: log-linear plot of "Errors" (from 10^0 down to 10^-6) versus N (perturbation order, 0 to 20) for ε = 1, 1/2, 1/3, ..., 1/15; a second horizontal scale shows 1/ε from 1 to 21.]
Figure 1. Solid curves: absolute error in the approximation of the Stieltjes function
up to and including the N-th term. Dashed-and-circles: theoretical error
in the optimally-truncated or "superasymptotic" approximation: E_{N_optimum}(ε) ≈
(π/(2ε))^{1/2} exp(−1/ε) versus 1/ε. The horizontal axis is perturbative order N for
the actual errors and 1/ε for the theoretical error

But when does a series diverge? Since all derivatives of exp(−1/ε)
vanish at the origin, this function has only the trivial and useless power
series expansion whose coefficients are all zeros:

exp(−q/ε) ∼ 0 + 0 ε + 0 ε² + ...   (2)

for any positive constant q. This observation implies the first of our
four heuristics about the non-convergence of an ε-power series.
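A quick numerical check of why (2) is useless: exp(−1/ε) lies "beyond all orders", i.e. it is eventually smaller than any fixed power of ε. A minimal sketch using only the standard library:

```python
import math

# For any fixed power k, the ratio exp(-1/eps) / eps**k tends to zero
# as eps -> 0, so no term of an eps-power series can capture exp(-1/eps).
k = 5
ratios = [math.exp(-1.0 / eps) / eps ** k for eps in (0.1, 0.05, 0.02, 0.01)]
```

The same monotone collapse of the ratio occurs for any fixed k, which is exactly why every Taylor coefficient of exp(−1/ε) at ε = 0 vanishes.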

Proposition 2 (Exponential Reciprocal Rule). If a function f(ε)
contains a term which is an exponential function of the reciprocal of ε,
then a power series in ε will not converge to f(ε).

We must use the phrase "not converge to" rather than the stronger
"diverge" because of the possibility of a function like

h(ε) ≡ √(1 + ε) + exp(−1/ε)   (3)

The power series of h(ε) will converge for all |ε| < 1, but it converges
to a number different from the true value of h(ε) for all ε except ε = 0.
Fortunately, this situation – a convergent series for a function that
contains a term exponentially small in 1/ε, and therefore invisible to
the power series – seems to be rare in applications. (The author would
be interested in learning of exceptions.)
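The function h(ε) of (3) is easy to probe numerically: the Taylor partial sums of √(1+ε) (which constitute the full power series of h) converge, but to a value that misses the exp(−1/ε) term. A sketch using only the standard library:

```python
import math

def h(eps):
    return math.sqrt(1.0 + eps) + math.exp(-1.0 / eps)

def series_partial_sum(eps, N):
    # Taylor partial sum of sqrt(1 + eps) = sum_j binom(1/2, j) eps^j,
    # which coincides with the power series of h about eps = 0.
    s, coeff = 0.0, 1.0                  # coeff = binom(1/2, 0)
    for j in range(N + 1):
        s += coeff * eps ** j
        coeff *= (0.5 - j) / (j + 1)     # binom(1/2, j+1) from binom(1/2, j)
    return s

eps = 0.1
gap = abs(series_partial_sum(eps, 40) - h(eps))
# The series converges -- but to h(eps) - exp(-1/eps), not to h(eps).
```

The residual `gap` equals exp(−1/ε) to machine precision: the convergent series simply cannot see the exponentially small term.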
Milton Van Dyke, a fluid dynamicist, offered another useful heuristic
in his slim book on perturbation methods:

Proposition 3 (Principle of Multiple Scales). Divergence should
be expected when the solution depends on two independent length
scales.

We shall illustrate this rule later.
The physicist Freeman Dyson published a note which has been
widely invoked in both quantum field theory and quantum mechanics
for more than forty years [164, 165, 166], [44, 45, 43]. However, with
appropriate changes of jargon, the argument applies outside the realm
of the quantum, too. Terminological note: a "bound state" is a spatially
localized eigenfunction associated with a discrete, negative eigenvalue
of the stationary Schrödinger equation, and the "coupling constant" ε
is the perturbation parameter which multiplies the potential energy
perturbation.

Proposition 4 (Dyson Change-of-Sign Argument). If there are
no bound states for negative values of the coupling constant ε, then a
perturbation series for the bound states will diverge even for ε > 0.

A simple example is the one-dimensional anharmonic quantum oscillator,
whose bound states are the eigenfunctions of the stationary Schrödinger
equation:

ψ_xx + {E − x² − ε x⁴} ψ = 0   (4)
When ε ≥ 0, Eq. (4) has a countable infinity of bound states with positive
eigenvalues E (the energy); each eigenfunction decays exponentially
with increasing |x|. However, the quartic perturbation will grow
faster with |x| than the unperturbed potential energy term, which is
quadratic in x. It follows that when ε is negative, the perturbation will
reverse the sign of the potential energy at x = ±1/(−ε)^{1/2}. Because
of this, the wave equation has no bound states for ε < 0, that is, no
eigenfunctions which decay exponentially with |x| for all sufficiently
large |x|.
Consequently, the perturbation series cannot converge to a bound
state for negative ε, be it ever so small in magnitude, because there is
no bound state to converge to. If this non-convergence is divergence (as
opposed to convergence to an unphysical answer), then the divergence
must occur for all non-zero positive ε, too, since the domain of convergence
of a power series is always |ε| < ρ for some positive ρ, as reviewed
in elementary calculus texts.
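The positive-ε side of Dyson's example can be checked directly. The sketch below (assuming numpy; the domain half-width L and grid size n are arbitrary choices of this sketch) discretizes (4) as −ψ_xx + (x² + εx⁴)ψ = Eψ by finite differences; for small ε > 0 the ground-state energy should sit near 1 + (3/4)ε, the standard textbook first-order perturbative value for this normalization:

```python
import numpy as np

def ground_state_energy(eps, L=8.0, n=1000):
    # Crude finite-difference discretization of -psi'' + (x^2 + eps*x^4) psi = E psi
    # on [-L, L] with Dirichlet ends (psi = 0 at the boundaries).
    # For eps < 0 the potential turns over at |x| = 1/sqrt(-eps), and no
    # discretization can produce a genuine bound state there.
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    H = (np.diag(2.0 / dx**2 + x**2 + eps * x**4)
         + np.diag(-np.ones(n - 1) / dx**2, 1)
         + np.diag(-np.ones(n - 1) / dx**2, -1))
    return float(np.linalg.eigvalsh(H)[0])

E_harmonic = ground_state_energy(0.0)   # exact ground-state energy is 1
E_quartic = ground_state_energy(0.1)    # near 1 + (3/4)(0.1) to first order
```

The quartic term raises the ground-state energy, as expected for a positive perturbation, and the shift agrees with first-order perturbation theory to within the higher-order corrections.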
This argument is not completely rigorous because the perturbation
series could in principle converge for negative ε to something other
than a bound state. Nevertheless, the Change-of-Sign Argument has
been reliable in quantum mechanics.
Implicit in the very notion of a "small perturbation" is the idea
that the term proportional to ε is indeed small compared to the rest of
the equation. For the anharmonic oscillator, however, this assumption
always breaks down for |x| > 1/|ε|^{1/2}. Similarly, in high Reynolds
number fluid flows, the viscosity is a small perturbation everywhere
except in thin layers next to boundaries, where it brings the velocity
to zero ("no slip" boundary condition) at the wall. This and other
examples suggest our fourth heuristic:
Proposition 5 (Principle of Non-Uniform Smallness). Divergence
should be expected when the perturbation is not small, even for arbitrarily
small ε, in some regions of space.

When the perturbation is not small anywhere, of course, it is impossible
to apply perturbation theory. When the perturbation is small
uniformly in space, the power series usually has a finite radius of convergence.
Asymptotic-but-divergent is the usual spoor of a problem
where the perturbation is small-but-not-everywhere.
We warn that these heuristics are just that, and not theorems. Counterexamples
to some are known, and probably can be constructed for
all. In practice, though, these empirical predictors of divergence are
quite useful.
Pure mathematics is the art of the provable, but applied mathematics
is the description of what happens. These heuristics illustrate the
gulf between these realms. The domain of a theorem is bounded by
extremes, even if unlikely. Heuristics are descriptions of what is probable,
not the full range of what is possible.
For example, the simplex method of linear programming can converge
very slowly because (it can be proven) the algorithm could visit
every one of the millions and millions of vertices that bound the feasible
region for a large problem. The reason that Dantzig's algorithm
has been widely used for half a century is that in practice, the simplex
method finds an acceptable solution after visiting only a tiny fraction
of the vertices.
Similarly, Hotelling proved in 1944 that (in the worst case) the roundoff
error in Gaussian elimination could be 4^N times machine epsilon,
where N is the size of the matrix, implying that a matrix of dimension
larger than 50 is insoluble on a machine with sixteen decimal places
of precision. What happens in practice is that the matrices generated
by applications can usually be solved even when N > 1000. The
exceptions arise mostly because the underlying problem is genuinely
singular, and not because of the perversities of roundoff error.
In a similar spirit, we oп¬Ђer not theorems but experience.

4. Optimal Truncation and Superasymptotics for the
Stieltjes Function

The first illustration is the Stieltjes function, which, with a change of
variable, is the "exponential integral" which is important in radiative
transfer and other branches of science and engineering. This integral-depending-on-a-parameter
is defined by

S(ε) = ∫₀^∞ exp(−t)/(1 + εt) dt   (5)
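Since the weight exp(−t) in (5) is exactly the Gauss-Laguerre weight, S(ε) can be evaluated to high accuracy with a few dozen quadrature nodes. A sketch, assuming numpy:

```python
import numpy as np

def S(eps, n=60):
    # Gauss-Laguerre quadrature: nodes t_i and weights w_i are chosen so that
    # ∫_0^∞ exp(-t) f(t) dt ≈ sum_i w_i f(t_i); here f(t) = 1/(1 + eps*t).
    t, w = np.polynomial.laguerre.laggauss(n)
    return float(np.sum(w / (1.0 + eps * t)))

# At eps = 0 the rule is exact: S(0) = ∫_0^∞ exp(-t) dt = 1.
# For small eps, S(eps) ≈ 1 - eps + 2 eps², the first terms of its
# asymptotic expansion.
```

This quadrature value serves as the "exact" answer against which truncations of the asymptotic series can be measured.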
The geometric series identity, valid for arbitrary integer N,

1/(1 + εt) = Σ_{j=0}^{N} (−εt)^j + (−εt)^{N+1}/(1 + εt)   (6)

allows an exact alternative definition of the Stieltjes function, valid for
any finite N:
S(ε) = Σ_{j=0}^{N} (−1)^j j! ε^j + ∫₀^∞ exp(−t) (−εt)^{N+1}/(1 + εt) dt   (7)