

Exponential Asymptotics

Boyd [54] used rational Chebyshev functions on the semi-infinite
interval to approximate the J0 Bessel function. This asymptotes to a
sinusoidal oscillation rather than decaying, so a naive expansion in basis
functions appropriate to an unbounded interval will fail disastrously.
Nevertheless, he wrote, in a form mimicking the large x asymptotics,

J0(x) ≈ {P(x) cos(x − π/4) + Q(x) sin(x − π/4)} / √(1 + x)    (134)

Using a total of only 17 coefficients for P(x) and Q(x) combined gave a
maximum absolute error of less than 2 × 10⁻⁷ uniformly on x ∈ [0, ∞).
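The flavor of this approximation can be reproduced in a few lines. The sketch below is not Boyd's rational-Chebyshev computation; it is a rough least-squares fit of the same functional form, with an arbitrary map parameter L and fitting interval of my own choosing, using the standard rational Chebyshev change of variable t = (x − L)/(x + L):

```python
import numpy as np
from scipy.special import j0

# Fit J0(x) ~ [P(x) cos(x - pi/4) + Q(x) sin(x - pi/4)] / sqrt(1 + x),
# with P and Q expanded in Chebyshev polynomials of the mapped variable
# t = (x - L)/(x + L), which sends [0, infinity) to [-1, 1).
L = 4.0                                   # map parameter (ad hoc choice)
x = np.linspace(0.0, 60.0, 2000)          # fitting grid (ad hoc choice)
t = (x - L) / (x + L)

deg = 8                                   # 9 coefficients each for P and Q
T = np.polynomial.chebyshev.chebvander(t, deg)
pre = 1.0 / np.sqrt(1.0 + x)
A = np.hstack([T * (pre * np.cos(x - np.pi / 4))[:, None],
               T * (pre * np.sin(x - np.pi / 4))[:, None]])

coeffs, *_ = np.linalg.lstsq(A, j0(x), rcond=None)
max_err = np.max(np.abs(A @ coeffs - j0(x)))
```

Even this crude discrete fit gives uniform accuracy on the fitting interval, because the prefactor and the cos/sin pair absorb the oscillation and the slow algebraic decay, leaving P and Q smooth on t ∈ [−1, 1).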
Numerical methods to replace divergent power series do demand
some technology which is not otherwise widely used. It is encouraging
that now this technology mostly exists and has been tested in applications.
18. Numerical Methods, II: Sequence Acceleration and
Padé and Hermite-Padé Approximants

Sequence acceleration or “summability” methods have a long history
[139, 318, 315, 316]. The Euler sum acceleration is an elderly but
still potent algorithm, as already shown for the Stieltjes function. It is,
however, but one of many schemes in the literature. We must refer to
specialized reviews [139, 318, 315, 316] for an in-depth discussion, but
it is important to note one principle and one algorithm.
The principle is that acceleration methods are slaves to the oscilla-
tions of the j-th term in the series with respect to j. For alternating
series, that is, those for which the sign of the (j + 1)-st term is opposite
the sign of the j-th term, and for nearly alternating series, acceleration
methods are very effective. For monotonic series, that is, expansions
whose terms are all of the same sign, some different but effective accel-
eration schemes are also known [315, 316]. However, when the series
is slowly oscillating in degree j but not strictly monotonic, sequence
acceleration algorithms tend to perform very badly [70].
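As a concrete illustration of why alternation helps, here is a minimal, textbook-style sketch of the Euler transformation (the function name and the ln 2 example are mine, not drawn from the references above). The transformation replaces the alternating terms by scaled forward differences, which shrink rapidly precisely when the terms vary smoothly in j:

```python
import math

def euler_accelerate(a, nterms):
    """Euler transformation of the alternating series sum_j (-1)**j * a[j]:
    the sum is rewritten as sum_k (-1)**k * (Delta^k a)(0) / 2**(k+1),
    where Delta is the forward difference operator."""
    diffs = list(a)
    s = 0.0
    for k in range(nterms):
        s += (-1) ** k * diffs[0] / 2 ** (k + 1)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return s

# ln 2 = 1 - 1/2 + 1/3 - ... : the raw partial sum converges like 1/n,
# while the Euler-transformed sum converges like 2^{-n}.
a = [1.0 / (j + 1) for j in range(40)]
accelerated = euler_accelerate(a, 15)
raw = sum((-1) ** j * a[j] for j in range(15))
```

With 15 terms the raw partial sum is good to only about two digits, while the transformed sum is good to about six, consistent with the geometric 1/2^(k+1) weighting of the differences.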
The [p/q] Padé approximant to a function f(ε) is a polynomial of
degree p divided by a polynomial of degree q which is chosen so that
the leading terms of the power series of the approximant match the first
(p + q + 1) terms of the power series of f(ε). The good news is that the
Padé approximation usually converges even when the power series from
whence it came diverges. For example, it has been rigorously proved
that the [N/N] approximant to the Stieltjes function S(ε) converges
with an error that decreases proportional to exp(−4N^{1/2}/ε^{1/2}), in

ActaApplFINAL_OP92.tex; 21/08/2000; 16:16; no v.; p.73
74 John P. Boyd

other words, exponential but subgeometric convergence, similar to the
Chebyshev series for this function [19].
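This convergence from a divergent parent series is easy to observe numerically. The sketch below (my own illustration, using SciPy's Padé helper; the comparison against the optimally truncated partial sum is mine as well) builds the [8/8] diagonal approximant from the first 17 terms of the Stieltjes series and compares it with the integral itself:

```python
import math
import numpy as np
from scipy.interpolate import pade
from scipy.integrate import quad

eps = 0.1
# "Exact" Stieltjes function: S(eps) = int_0^inf exp(-t)/(1 + eps t) dt
exact, _ = quad(lambda t: math.exp(-t) / (1.0 + eps * t), 0.0, np.inf)

# Divergent power series coefficients: S(eps) ~ sum_j (-1)^j j! eps^j
c = [(-1) ** j * math.factorial(j) for j in range(17)]

p, q = pade(c, 8)                 # [8/8] diagonal Pade approximant
approx = p(eps) / q(eps)

# Best achievable error of the divergent series itself (optimal truncation)
best_partial = min(abs(sum(c[j] * eps ** j for j in range(n)) - exact)
                   for n in range(1, 17))
```

At ε = 0.1 the optimally truncated series stalls at an error of a few times 10⁻⁴ (the superasymptotic limit), while the diagonal Padé approximant built from the same 17 coefficients is several orders of magnitude more accurate.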
Unfortunately, the Padé approximant fails along the branch cut for
the Stieltjes function, which is the negative real ε-axis. Because the
integral that defines the Stieltjes function has a pole on the integration
path when ε is real and negative, the function is not completely specified
until we choose how the pole is treated. The Stieltjes function is real-
valued if the integral is interpreted as a Principal Value integral, but
has an imaginary part which is exponentially small in 1/ε if the path
of integration is indented above or below the pole in the t-plane. That is,
S(−ε) = PV ∫₀^∞ exp(−t)/(1 − εt) dt ± i (π/ε) exp(−1/ε)    (135)

where the sign of the imaginary part depends on whether the contour
is indented above or below the real t-axis. Since the terms in the
Stieltjes power series, S(ε) ∼ Σ_{j=0}^∞ (−1)^j j! ε^j, are all real-valued, one can
prove that the coefficients of the numerator and denominator polyno-
mials in the Padé approximant are real, too. Even if the approximants
converged, they would inevitably miss the imaginary part of S(−|ε|).
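Both pieces of Eq. (135) can be checked numerically. In the sketch below (an illustration of mine, not from the text; it relies on SciPy's Cauchy-weight quadrature, which computes principal values of f(t)/(t − c) over a finite interval containing the pole), the Principal Value agrees with the optimally truncated divergent series, while the contour-indentation contribution is the exponentially small residue term:

```python
import math
from scipy.integrate import quad

eps = 0.1
# PV int_0^inf exp(-t)/(1 - eps t) dt
#   = -(1/eps) * PV int_0^inf exp(-t)/(t - 1/eps) dt.
# quad with weight='cauchy' returns the principal value of f(t)/(t - wvar).
pv, _ = quad(lambda t: math.exp(-t), 0.0, 60.0,
             weight='cauchy', wvar=1.0 / eps)
pv *= -1.0 / eps                 # tail beyond t = 60 is ~ exp(-60), negligible

# Exponentially small imaginary part from indenting around the pole
im = (math.pi / eps) * math.exp(-1.0 / eps)

# Optimally truncated divergent series sum_j j! eps^j (stop near smallest term)
series = sum(math.factorial(j) * eps ** j for j in range(10))
```

The Principal Value and the truncated series agree to roughly the size of the smallest retained term, and the imaginary part is of the same exponentially small order, which is exactly why real-coefficient Padé approximants cannot see it.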
The same difficulty arises in quantum mechanics. For the quartic
oscillator, for example, the eigenvalue E of the stationary Schrödinger
equation is complex-valued when the coupling constant ε is negative;
physically, the imaginary part is the inverse of the lifetime of a metastable
bound state, which eventually radiates away from the local minimum
of the potential energy. The exact ℑ(E) is not analytically known, but
it does decrease exponentially fast with 1/|ε|. Because ℑ(E) gives the
lifetime of the state, and therefore the rate of radiation, it is a very
important quantity even when small. Ordinary Padé approximants fail
when ε is real and negative, however, just as for the Stieltjes function.
(In fact, it has been proved that both the eigenvalue of the quartic oscil-
lator and S(ε) belong to a class of functions called Stieltjes functions,
and thus are close cousins.)
Shafer [282] developed a generalized Padé approximant which has
been used successfully by several groups to calculate exponentially
small imaginary parts of quantum eigenvalues [300, 131, 280, 287, 281].
The approximant f^[K/L/M] is defined to be the solution of the quadrat-
ic equation

P (f^[K/L/M])² + Q f^[K/L/M] + R = 0    (136)

where the polynomials P, Q and R are of degrees K, L and M, respec-
tively. These polynomials are chosen so that the power series expansion


of f^[K/L/M] agrees with that of f through the first N = K + L + M + 1
terms. (The constant terms in P and Q can be set equal to one without loss
of generality since these choices do not alter the root of the equation, so
the total number of degrees of freedom is as indicated.) As for ordinary
Padé approximants, the coefficients of the polynomials can be comput-
ed by solving a matrix equation, and the most accurate approximations
are obtained by choosing the polynomials to be of equal degree, so-
called “diagonal” approximants.
Fig. 17 shows the diagonal approximant of the Stieltjes function for
negative real ε; the polynomials for the [4/4/4] approximation are

P = 1 − (160/3)ε + (3680/3)ε² + (3680/3)ε³ + 7240ε⁴
Q = 1 + (2239/9)ε − (2698/9)ε² + (43658/3)ε³ + 5056ε⁴
R = −2 − (1732/9)ε − (7126/9)ε² − (41504/3)ε³ − (2452/9)ε⁴
The two roots of the quadratic equation give us the result of choosing
to indent the contour either above or below the pole;
discarding the imaginary part gives the Principal Value of the integral.
The [10/10/10] approximant gives a maximum relative error for both
parts of less than 5 × 10⁻⁶.
Shafer's idea can be generalized to polynomials of higher degree
in the approximation. The result is usually called a “Hermite-Padé”
approximant. The quadratic or “Shafer” approximants seem to be quite
successful for most quantum problems [300, 131, 280, 287]. However,
Sergeev and Goodson describe fast algorithms for computing approxi-
mants of higher degree and also solve some problems where such high-
er degree approximants, capable of representing cube roots and higher
branch points, are very useful [281].
Padé approximants have been generalized in several other direc-
tions, too [129]. Reinhardt [269] has developed a procedure using double
Padé approximants which works well even for monotonic, factorially-
diverging series, including the computation of exponentially small ℑ(E),
although it has been largely displaced by Shafer approximants. Anoth-
er generalization is to approximate f by the solution of an ordinary
differential equation ([10] and earlier references cited there) or a partial
differential equation ([97] and earlier articles therein) where again the
coefficients of the differential equation are polynomials in ε chosen so
the approximant has a power series matching f up to some finite order.
Padé methods have limitations; they seem to be quite useless for
calculating the exponentially small radiation coefficient of a weakly
nonlocal solitary wave, for example. Nonetheless, many problems in
quantum mechanics have fallen to Hermite-Padé approximants.


Figure 17. Left panel: the real and imaginary parts of the Stieltjes function on
the negative real ε-axis. Right: the absolute value of the absolute error for the
real part (thin dashed) and imaginary part (thick solid) in the [4/4/4] Shafer
approximant (top pair of curves) and the [8/8/8] approximant (bottom solid and
dashed curves).

19. High Order Hyperasymptotics versus Chebyshev and
Hermite-Padé Approximations

“I wrote an analytical solution to a sixth order differential equation
as a hypergeometric integral, derived asymptotic approximations,
matched the boundary conditions, and finally went to a computer
to make graphs. The machine took about a minute. Then I solved
the whole problem numerically, and the same machine took about
two seconds. That was the last analytical work I ever did!”
– R. E. Dickinson [115]

As illustrated by Dickinson's amusing experience, very complicated
analytical answers are really just another form of numerical solution.
High order asymptotic and hyperasymptotic solutions are usually in
this category because a long string of terms adds little insight to the

lowest term, only greater numerical accuracy. Consequently, a proverb
in perturbation theory is: one term, insight; several terms, numerics.
If many terms, as opposed to three or four terms, are available, it is
possible to deduce some non-numerical information from the series, such
as the approximate location of the convergence-limiting singularities in
the complex plane (another use of Darboux's Principle!). However, this
analytic information is mostly used only to transform the series to
improve the rate of convergence, as described in van Dyke's book [298],
for example. Fluid dynamicists are not too interested in branch points
at complex values of the spatial coordinates!
In the rest of this section, we shall focus on answering the question
suggested by these considerations: how useful are high order hyper-
asymptotics numerically in comparison to other numerical methods?
Although many books and articles on beyond-all-orders methods
offer numerical tables, head-to-head comparisons between hyperasymp-
totics and other numerical algorithms are rare. Most of the work cata-
logued in Table IV has a decidedly pre-computer spirit.
One reason for this lack of efficiency contests is that most software
libraries already have quite efficient routines for computing special func-
tions. Typically, the algorithms for computing Bessel and Airy func-
tions, the Error Integral, and so on employ two expansions for each
function. Through experimentation and theory, a breakpoint ζ is cho-
sen for each function. A convergent power series is used for |z| ≤ ζ
whereas a divergent expansion in the form of an exponential or trigonomet-
ric pre-factor multiplied by a series of inverse powers of z is employed
for |z| > ζ. (The divergent series may actually be different expansions
in different parts of the complex plane because of Stokes' phenomenon.)
Hyperasymptotics might seem like a natural way to add a few decimal
places of additional accuracy.
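The two-expansion strategy is easily sketched. The toy evaluator below (my own illustration; the breakpoint and truncation orders are ad hoc, and the large-x coefficients are the standard leading terms of the Hankel asymptotic expansion of J0) switches between a convergent power series and a truncated divergent series at ζ = 8:

```python
import math
from scipy.special import j0 as j0_ref   # library reference for checking

def j0_two_regime(x, zeta=8.0):
    """Toy two-expansion evaluator for J0: convergent power series for
    x <= zeta, truncated Hankel asymptotic series for x > zeta."""
    if x <= zeta:
        # J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2, summed to roundoff
        term, s = 1.0, 1.0
        for k in range(1, 60):
            term *= -(x * x / 4.0) / (k * k)
            s += term
            if abs(term) < 1e-17:
                break
        return s
    # J0(x) ~ sqrt(2/(pi x)) [P cos(chi) - Q sin(chi)], chi = x - pi/4,
    # keeping two terms of each of the asymptotic series P and Q
    chi = x - math.pi / 4.0
    P = 1.0 - 9.0 / (128.0 * x * x)
    Q = -1.0 / (8.0 * x) + 75.0 / (1024.0 * x ** 3)
    return math.sqrt(2.0 / (math.pi * x)) * (P * math.cos(chi) - Q * math.sin(chi))

worst = max(abs(j0_two_regime(x) - j0_ref(x))
            for x in [0.0, 0.5, 3.0, 7.9, 8.1, 12.0, 30.0])
```

Raising the breakpoint and the truncation orders would buy more digits, which is exactly the cheap alternative to terminant series discussed next.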
Unfortunately, the extension beyond the superasymptotic approxi-
mation is a series of terminants like Eq. (73). Olde Daalhuis [240, 242]
has developed good numerical methods for evaluating terminants, but
the approximations are series of hypergeometric functions which can
only be evaluated by recurrence. This implies that each terminant is
itself as expensive to evaluate as the special function it helps to approx-
imate. Thus, terminant series are numerically inefficient. It is probably
more sensible to simply increase the “break point” where the subrou-
tine switches from the power series to the inverse power series and also
increase the number of terms in each series.
In practice, both series are replaced by the equivalent Chebyshev

