

the straight line rate-of-decrease-of-error that is the spoor of geometric convergence on a
log-linear plot.
This example serves to remind us that theoretical ideas about orders and rates of convergence are almost always asymptotic estimates. In this case, the concept of “subgeometric” convergence gives a misleading picture of how rapidly the error decreases for small and moderate N. Boyd (1978a, Table VIII) offers another amusing illustration.

Figure 17.10: The base-10 logarithm of the error bound as a function of n, the degree of the highest TB_n(y), in the representation of the function f(r) ≡ r K_1(r) on r ∈ [0, ∞]. To resolve the logarithmic singularity at the origin, r is mapped into the infinite interval y ∈ [−∞, ∞] via r = arcsinh(exp[y]) and the result is expanded using the rational Chebyshev functions in the new coordinate y. The n-th error bound is the sum of the absolute values of all coefficients a_m with m > n; this is a slight overestimate of the actual maximum pointwise error on r ∈ [0, ∞].

17.13 Functions Which Oscillate Without Exponential Decay at Infinity
Despite the effectiveness of the mappings and basis sets illustrated above and in the journal articles mentioned earlier, there are still classes of problems where further research is badly needed. When a function f(y) oscillates as y → ∞, there is no difficulty as long as the function also decays exponentially fast with y. It is unnecessary for the basis functions to

Table 17.9: A Bibliography of Spectral Solutions to Oscillate-at-Infinity Problems

References                    Comments
Boyd (1987b)                  Theory & TL_n expansions of amplitude and phase
                              for the J0 Bessel function
Boyd (1990b, 1991a, 1991d)    TB_n/radiation function basis for nonlocal solitons
Boyd (1990d)                  TB_n/radiation function basis for quantum scattering
                              (continuous spectrum)
Yang & Akylas (1996)          Weakly nonlocal solitary waves

resolve thousands and thousands of wavelengths since the function is negligibly small
outside a region that includes only a few alternations in sign.
When f(y) decays algebraically with y, the TB_n(y) and TL_n(y) still converge rapidly as long as either (i) the function does not oscillate or (ii) the frequency of the oscillations dies out as y → ∞. However, if the function decays slowly, but the rate of oscillation does not, then a conventional expansion is in big trouble.
A prototype is the Bessel function

\[
J_0(y) \sim \sqrt{\frac{2}{\pi y}} \left\{ \left[ 1 - \frac{9}{128\, y^{2}} + O\!\left(y^{-4}\right) \right] \cos\!\left(y - \frac{\pi}{4}\right) + \left[ \frac{1}{8 y} + O\!\left(y^{-3}\right) \right] \sin\!\left(y - \frac{\pi}{4}\right) \right\}
\tag{17.69}
\]

for y ≫ 1. It is not possible for a finite number of basis functions to resolve an infinite number of oscillations. Because J0(y) decays in magnitude only as 1/√y, if we fail to resolve the oscillation between y = 200π and y = 202π, for example, we will make an error of O(1/√(200π)) = 0.04. To accurately approximate the first hundred wavelengths, however, would take several hundred TL_n(y). So many basis functions for only 4% accuracy!
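The arithmetic behind that 4% figure is easy to verify; the quick Python check below is ours, not the text's:

```python
import numpy as np

# Order-of-magnitude check of the error estimate quoted in the text:
# failing to resolve one oscillation of J0 near y = 200*pi costs an
# error of about the size of the local envelope, O(1/sqrt(200*pi)).
err_estimate = 1.0 / np.sqrt(200 * np.pi)
print(round(err_estimate, 3))   # → 0.04
```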
Library software avoids this problem by using two approximations: the asymptotic
expansion (17.69) for large y and a power series for small y. It is not possible to extend
the large y expansion to y = 0 since the series in 1/y are divergent. If we wish to obtain a
“global” approximation, that is to say, a single expansion accurate for all of y ∈ [0, ∞], we

Figure 17.11: The functions a(y) and φ(y) in the approximation

\[
\sqrt{1+y}\; J_0(y) \approx a(y) \cos\!\left(y - \frac{\pi}{4} - \phi[y]\right)
\]

which is uniformly accurate for y ∈ [0, ∞].

must mimic the asymptotic expansion (17.69). Boyd (1987b) assumed the double series

\[
\sqrt{1+y}\; J_0(y) \approx \cos\!\left(y - \frac{\pi}{4}\right) \sum_{n=0}^{M} a_n\, TL_n(y) + \sin\!\left(y - \frac{\pi}{4}\right) \sum_{n=0}^{N} b_n\, TL_n(y)
\tag{17.70}
\]

and computed the coefficients by interpolation at (M + N + 2) points, spaced as in (17.63). With M = 15 and N = 5, the maximum absolute error was less than 0.00001 everywhere on the entire semi-infinite interval.
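For readers who want to experiment, the double series (17.70) can be fitted in a few lines. The sketch below replaces Boyd's interpolation at (M + N + 2) specially spaced points with an ordinary least-squares fit on a dense grid; the map parameter L = 4, the fitting interval [0, 120], and the solver are our assumptions, not choices from the text:

```python
import numpy as np
from scipy.special import j0

# Least-squares sketch of the double series (17.70):
#   sqrt(1+y) J0(y) ~ cos(y - pi/4) sum_n a_n TL_n(y)
#                   + sin(y - pi/4) sum_n b_n TL_n(y)
# where TL_n(y) = T_n((y - L)/(y + L)).  The map parameter L, the grid,
# and the use of lstsq (instead of Boyd's interpolation) are assumptions.
L, M, N = 4.0, 15, 5
y = np.linspace(0.0, 120.0, 2000)
x = (y - L) / (y + L)                                  # maps [0, inf) -> [-1, 1)
T = np.cos(np.outer(np.arange(M + 1), np.arccos(x)))   # rows are T_n(x)

cols = [np.cos(y - np.pi / 4) * T[n] for n in range(M + 1)]
cols += [np.sin(y - np.pi / 4) * T[n] for n in range(N + 1)]
A = np.array(cols).T                                   # (2000, M+N+2) design matrix
f = np.sqrt(1.0 + y) * j0(y)
coef, *_ = np.linalg.lstsq(A, f, rcond=None)           # a_n's then b_n's
max_err = np.max(np.abs(A @ coef - f))                 # residual on the fitting grid
print(max_err)
```

The printed residual is the maximum fitting error on the grid; it should be small, though the exact figure depends on the assumed L and grid.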
In some respects, there is nothing mysterious about this: the expansion (17.70) is a faithful mirror of the asymptotic series, and the TL_n(y) converge rapidly for functions with asymptotic series in negative powers of y. However, the success is also rather surprising (and encouraging) because (17.70) is a blind leap off a cliff. There is no convergence theory whatsoever for basis sets composed of mixtures of sines and cosines multiplied by the TL_n(y). Since the cos(y − π/4) and sin(y − π/4) factors are still rapidly oscillating for large y while the rational Chebyshev functions have each smoothly asymptoted to 1, there is no reason to suppose that a distribution of points that is optimum for the TL_n(y) alone will capture the much more complicated (and rapidly varying) behavior of the rational-Chebyshev-multiplied-by-a-cosine basis used to compute the coefficients in (17.70). We can make a strong case, via the WKB approximation whence (17.69) came, that J0(y) has an accurate approximation of the form (17.70). We have no theory, however, to indicate precisely how to compute those coefficients.
In point of fact, the simple-minded interpolation is somewhat ill-conditioned, and it was not possible to obtain more than five decimal places of accuracy when computing in sixteen decimal place arithmetic. A Galerkin-type procedure (although it, too, was somewhat ill-conditioned) achieved 2 × 10⁻⁷ accuracy. Perhaps the ill-conditioning could be removed by using the Euler summation method to more accurately evaluate the Galerkin integrals, which have highly oscillatory integrands. Fig. 17.11 shows the “amplitude” a(y) and “phase” φ(y) when elementary trigonometric identities are used to rewrite (17.70) as

\[
\sqrt{1+y}\; J_0(y) = a(y) \cos\!\left( y - \frac{\pi}{4} - \phi[y] \right)
\tag{17.71}
\]
Similar difficulties arise in quantum scattering (Boyd, 1990d), in which the incoming, transmitted, and reflected waves all propagate even at spatial infinity, and in the theory of weakly nonlocal solitary waves, which decay from a large central core to small oscillations of constant amplitude (Boyd, 1995b, 1998b). An alternative strategy, similar in spirit to the method described above but different in detail, is to use a standard rational Chebyshev basis (or whatever) and augment it with one or two special “radiation functions”, which have the correct oscillations-at-infinity built into them. This is discussed in Chap. 19, Secs. 4 and 5.

17.14 Weideman-Cloot Sinh Mapping

Weideman and Cloot (1990) introduced the mapping

\[
y = \sinh(Lt), \qquad y \in [-\infty, \infty] \;\&\; t \in [-\pi, \pi]
\tag{17.72}
\]

They apply a Fourier pseudospectral method in the new coordinate t. Since the image of t ∈ [−π, π] is y ∈ [−ymax, ymax] where ymax is finite, their scheme combines both mapping and domain truncation in a single algorithm. Weideman and Cloot (1990), Cloot (1991), Cloot and Weideman (1992), and Boyd (1994b) offer both theoretical and empirical evidence that this Fourier-domain-truncation-with-mapping is as effective as the rational Chebyshev basis, or better, in many applications.
The crucial point is that because ymax ≡ sinh(Lπ) grows exponentially fast with L, very small increases in L produce large decreases in the domain truncation error EDT ≡ |u(ymax)|. There is also a series truncation error ES(N) that arises because the Fourier series in t must be truncated. Since the Fourier coefficients decrease geometrically for most functions, it follows that log |ES| is roughly O(N) for fixed L. To keep the domain truncation error as small as the series truncation error, however, we must increase L with N so that log |EDT| is also O(N). However, because the sinh-map increases the size of the computational domain exponentially fast with L, it follows that increasing L logarithmically with N will ensure that the domain truncation error falls off geometrically with N.
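A minimal numerical sketch of the scheme, assuming u(y) = sech(y) as the test function, N = 64 grid points, and the map parameter L = log(N)/π (the optimum estimate derived later in this section); all three choices are ours:

```python
import numpy as np

# Fourier pseudospectral sketch of the Weideman-Cloot scheme for
# u(y) = sech(y) under the map y = sinh(L t), t in [-pi, pi].
# N = 64 and L = log(N)/pi are illustrative choices.
N = 64
L = np.log(N) / np.pi
s = 2 * np.pi * np.arange(N) / N           # standard FFT grid on [0, 2*pi)
t = s - np.pi                              # shifted to [-pi, pi)
u = 1.0 / np.cosh(np.sinh(L * t))          # sech(y(t)) sampled on the grid
c = np.fft.fft(u) / N                      # coefficients of sum_k c_k e^{i k s}

# Evaluate the trigonometric interpolant halfway between grid points
# and compare with the exact mapped function there.
se = s + np.pi / N
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers
interp = np.real(np.exp(1j * np.outer(se, k)) @ c)
exact = 1.0 / np.cosh(np.sinh(L * (se - np.pi)))
err = np.max(np.abs(interp - exact))
print(err)                                 # domain + series error; tiny here
```

With these settings ymax = sinh(Lπ) ≈ 32, so the domain truncation error sech(ymax) is already near machine precision, which is exactly the exponential payoff described above.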
The bad news is that increasing L will move the singularities of u(y[t]) closer to the real t-axis. As explained in Chapter 2, the pole or branch point whose location has the smallest imaginary part controls the rate of convergence of the Fourier series. If the convergence-limiting singularity is at t = ts, then log |ES(N)| ∼ −|ℑ(ts)| N. It follows that a logarithmic increase of L with N will cause a logarithmic decrease in |ℑ(ts)|, so that log |ES| and also the total error (series plus domain truncation) must decrease no faster than

\[
\log |E| \sim O\!\left( \frac{N}{\log(N)} \right)
\tag{17.73}
\]

Boyd (1994b) dubs this “quasi-geometric” convergence because it fails to be truly geometric only because of the logarithm.
Cloot and Weideman (1990) show further that if we optimize L so that the domain truncation and series truncation errors are roughly the same and decrease as rapidly as possible in lockstep as N → ∞, then the optimum L has a remarkably simple form for a large class of functions. For example, suppose that u(y) is such that

\[
E_{DT} \sim \exp\!\left(-A\, y_{max}^{\,r}\right), \qquad y_{max} \gg 1
\tag{17.74}
\]

and has a convergence-limiting singularity (that is, the pole or branch point whose location has the smallest imaginary part of all the singularities of the function) at y = ys. Applying the mapping and then simplifying under the assumption that L ≫ 1 gives

\[
\log(E_{DT}) \sim -A\, 2^{-r} \exp(r \pi L)
\tag{17.75}
\]

\[
\log(E_{S}) \sim -N\, \frac{|\Im(y_s)|}{L}
\tag{17.76}
\]

The sum EDT + ES is minimized as N → ∞ by

\[
L_{opt} \sim \frac{1}{\pi} \log\!\left(N^{1/r}\right) + O(\log(\log(N)))
\tag{17.77}
\]
In contrast, rational Chebyshev series have subgeometric convergence. For a function like sech(Ay), the asymptotic analysis of Boyd (1982b) shows Lopt ∼ 1.07 N^{1/3}/A and log |ES| ∼ −1.47 N^{2/3}, which is subgeometric convergence with exponential index of convergence 2/3. The optimum map parameter depends upon A, the width parameter of u(y), whereas the optimum L for the Cloot-Weideman mapping depends only upon N.
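The contrast is easy to quantify by evaluating the two asymptotic formulas just quoted; the particular values of N, and the choice A = 1, are illustrative:

```python
import numpy as np

# Evaluate the optimum-map-parameter scalings quoted above for
# u(y) = sech(A*y), with A = 1 and a few illustrative N:
#   rational Chebyshev (Boyd 1982b): L_opt ~ 1.07 N^(1/3) / A,
#                                    log|E_S| ~ -1.47 N^(2/3)  (natural log)
#   Weideman-Cloot sinh map:         L_opt ~ log(N) / pi       (A-independent)
A = 1.0
for N in (22, 44, 88):
    L_tb = 1.07 * N ** (1.0 / 3.0) / A
    L_wc = np.log(N) / np.pi
    log10_es = -1.47 * N ** (2.0 / 3.0) / np.log(10.0)   # convert ln -> log10
    print(f"N={N:3d}  L_TB={L_tb:5.2f}  L_WC={L_wc:5.2f}  log10|E_S|~{log10_es:6.1f}")
```

Note how slowly the Weideman-Cloot L grows: doubling N barely changes it, while the rational Chebyshev Lopt grows like N^{1/3} and is rescaled by the width parameter A.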
Fig. 17.12 shows that for sech(y), the Weideman-Cloot sinh-mapped Fourier algorithm gives much smaller errors for the same number of degrees of freedom. However, it also shows a problem. The asymptotic estimate for the best map parameter for the rational Chebyshev basis, L ∼ 3.8, is fairly close to the bottom of the curve of the actual maximum pointwise error versus L, and the error in the TB series is not very sensitive to L anyway.

Figure 17.12: Errors in the rational Chebyshev series (left panel) and the Weideman-Cloot algorithm (right panel) for the function u(y) = sech(y) for various map parameters L (left graph) or domain parameters L (right). Both panels plot errors from 10⁻³ down to 10⁻¹²; the left abscissa is L, the right is L/Lopt(WC). Lopt ≡ (1/π) log(N) is Cloot and Weideman’s own estimate of the best L. Twenty-two rational Chebyshev functions were used for the left panel; the same number of cosine functions for the right graph.
However, L = 1 in the Weideman-Cloot method gives only one-third the number of correct
digits obtainable by using L only 30% larger; the Weideman-Cloot method is much more
sensitive to L.
This sensitivity difference between mapping and domain truncation was noted by Boyd
(1982b). With domain truncation, the total error is the sum of two terms which grow or
decay differently with L: the domain truncation error EDT decreases with L while the
error in the N -term series increases with L. The total error has the sharp pointed shape
of the letter “V” on a graph of error versus L. For entire functions, the rational Chebyshev
basis is better because there is only the series error, and the graph of error versus L is flat at the minimum.
Unfortunately, when u(y) has a singularity, the error for the TB basis is also the sum of two unrelated terms; one is the series truncation error associated with the singularity, the other depends on how rapidly u(y) decays with |y| for large |y|. Consequently, the curve on the left of Fig. 17.12 is V-shaped. However, the penalty for underestimating the optimum L is less drastic for the rational Chebyshev basis than for the sinh-map.
One difficulty in estimating L for the sinh-mapping is that the relative error in the leading-order approximation to Lopt is log(|ℑ(ys)| π r 2^r / A) / log(N), which is rather large unless N is huge. Indeed, although the leading-order estimate is technically independent of A, the width of sech(Ay), it is actually a good idea to make a preliminary linear change-of-coordinate so that A ∼ O(1) in the new coordinate. Otherwise, the simple approximation Lopt ∼ (1/π) log(N) may be useless until N is ridiculously large.
Fig. 17.13 illustrates the spectral coefficients for the two methods. The map parameters were deliberately chosen to be optimal (empirically) for N = 44, but the coefficients are graphed to much larger N. For both methods, the coefficients decrease roughly geometrically until N is slightly larger than that for which these map parameters are optimal, and then both series flatten out. The Fourier series runs into a wall and becomes very flat because the coefficients for N > 50 are those of Gibbs’ Phenomenon, controlled by the


