

points more evenly over the interval y ∈ [−1, 1]. Their choice is

   y = arcsin([1 − β] x) / arcsin(1 − β)        (16.33)

where y is the original "physical" coordinate and x is the computational coordinate. This
is equivalent to replacing Chebyshev polynomials in y as the expansion functions by the
new, non-polynomial basis

   φj(y) ≡ Tj( sin(p y) / (1 − β) )        (16.34)

where p ≡ arcsin(1 − β). When β = 1, this basis reduces to the usual Chebyshev polynomials.
When β << 1, the collocation points, defined as the roots or extrema of the Chebyshev
polynomials in the computational coordinate x, are much more uniformly spaced than
those of a standard Chebyshev grid.
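The effect of the map on grid uniformity is easy to check directly. Below is a minimal numerical sketch (plain NumPy; the function names and parameter choices are mine, not from the text) that builds the mapped grid from the Chebyshev-Lobatto points and compares the spread of grid spacings with and without the map:

```python
import numpy as np

def kte_grid(N, beta):
    """Kosloff/Tal-Ezer grid: Chebyshev-Lobatto points x_j = cos(pi j / N)
    pushed through the map y = arcsin((1-beta) x) / arcsin(1-beta)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    return np.arcsin((1.0 - beta) * x) / np.arcsin(1.0 - beta)

def spacing_ratio(y):
    """Ratio of largest to smallest grid spacing; 1.0 is perfectly uniform."""
    d = np.abs(np.diff(y))
    return d.max() / d.min()

N = 64
cheb = np.cos(np.pi * np.arange(N + 1) / N)   # unmapped Chebyshev-Lobatto grid
kte = kte_grid(N, beta=0.5 / N**2)            # small beta: nearly uniform grid

r_cheb = spacing_ratio(cheb)                  # grows like O(N) for the raw grid
r_kte = spacing_ratio(kte)                    # stays bounded as N grows
```

With these choices the mapped grid's max/min spacing ratio is close to 1, while the raw Chebyshev grid's ratio is an order of magnitude larger.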
Define the difference between the position of one grid point and its nearest neighbor:

   δj ≡ yj+1 − yj        (16.35)
The following theorem was proved by Kosloff and Tal-Ezer by substituting the appro-
priate expression for β into the transformation for y (16.33) and taking the limit N → ∞. If

   β = C/N² + O(1/N³)        (16.36)

where the number of grid points is N + 1, then

   min δj ∼ [ 2π / ( √(π² + 2C) + √(2C) ) ] (1/N)        (16.37)

where C is a constant.
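The asymptotic prediction (16.37) can be tested numerically. The sketch below (my own check, not from the text; the values of C and N are arbitrary choices) builds the mapped grid with β = C/N² and compares N·min δj against the limiting constant:

```python
import numpy as np

def min_spacing(N, C):
    """Smallest spacing of the Kosloff/Tal-Ezer grid with beta = C/N**2."""
    beta = C / N**2
    x = np.cos(np.pi * np.arange(N + 1) / N)    # Chebyshev-Lobatto points
    y = np.arcsin((1.0 - beta) * x) / np.arcsin(1.0 - beta)
    return np.abs(np.diff(y)).min()

C = 2.0
# Limit predicted by Eq. (16.37):
predicted = 2.0 * np.pi / (np.sqrt(np.pi**2 + 2.0*C) + np.sqrt(2.0*C))
# N * (min spacing) should approach the predicted constant as N grows:
measured = [N * min_spacing(N, C) for N in (128, 256, 512)]
rel_err = abs(measured[-1] - predicted) / predicted
```

The relative error shrinks like O(1/N), so even moderate N lands within a few percent of the limit.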

Table 16.4: Theory and Applications of Kosloff/Tal-Ezer Mapping

References                               Comments
Kosloff & Tal-Ezer (1993)                Introduction and numerical experiments
Tal-Ezer (1994)                          Theory; optimization of map parameter
Carcione (1994a)                         Compares standard Chebyshev grid with
                                         Kosloff/Tal-Ezer grid
Renaut & Frohlich (1996)                 2D wave equations, one-way wave equation
                                         at boundary
Carcione (1996)                          Wave problems
Godon (1997b)                            Chebyshev-Fourier polar coordinate model,
                                         stellar accretion disk
Renaut (1997)                            Wave equations with absorbing boundaries
Don & Solomonoff (1997)                  Accuracy enhancement and timestep
                                         improvement, especially for higher derivatives
Renaut & Su (1997)                       3rd order PDE; mapping was not as
                                         efficient as standard grid for N < 16
Don & Gottlieb (1998)                    Shock waves, reactive flow
Mead & Renaut (1999)                     Analysis of Runge-Kutta time-integration
Hesthaven, Dinesen & Lynov (1999)        Diffractive optical elements; chose
                                         β = 1 − cos(1/2) to double timestep
                                         versus standard grid
Abril-Raymundo & Garcia-Archilla (2000)  Theory and experiment for convergence
                                         of the mapping

In words, the theorem states that if β decreases sufficiently fast with N, then the ratio of the
largest to the smallest inter-grid distance can be bounded by a constant independent of N.
Though we shall not reproduce their graphs, Kosloff and Tal-Ezer calculate the eigenvalues
of the differentiation matrices and show that the great reduction in the non-uniformity
of the grid is indeed accompanied by an equally great reduction in the stiffness of the
matrix. This implies that their Chebyshev-with-a-map scheme requires an O(1/N) time
step (for a PDE which is first order in space), no worse than a finite difference scheme on
an equispaced grid except for an N-independent proportionality factor.
Mead and Renaut (1999) have argued that the best choice is

   βoptimum = 1 − cos(1/N)   ⇔   β ≈ 1/(2N²)        (16.38)

This in turn implies that

   min δj ≈ 0.731 (2/N)        (16.39)

which is only slightly smaller than the minimum of 2/N for a uniformly spaced grid. The
tremendous increase in timestep which is thus allowed has created many happy users of
this change-of-coordinates as catalogued in Table 16.4.
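The constant 0.731 in (16.39) is straightforward to reproduce. A short check (mine, with NumPy; the N values are arbitrary) builds the grid with the Mead-Renaut parameter and compares the minimum spacing to the uniform-grid spacing 2/N:

```python
import numpy as np

def min_spacing_mr(N):
    """Minimum grid spacing with the Mead-Renaut choice beta = 1 - cos(1/N)."""
    beta = 1.0 - np.cos(1.0 / N)
    x = np.cos(np.pi * np.arange(N + 1) / N)    # Chebyshev-Lobatto points
    y = np.arcsin((1.0 - beta) * x) / np.arcsin(1.0 - beta)
    return np.abs(np.diff(y)).min()

# Ratio of the mapped minimum spacing to the uniform spacing 2/N;
# Eq. (16.39) predicts this approaches 0.731 as N -> infinity:
ratios = {N: min_spacing_mr(N) / (2.0 / N) for N in (64, 256, 1024)}
```

At N = 1024 the ratio agrees with 0.731 to about three decimal places.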
The flaw of the Kosloff/Tal-Ezer transformation is that if β decreases with N, then the
branch points of the mapping move closer and closer to the expansion interval as N in-
creases. The map-induced branch points near the expansion interval decelerate the
convergence of the spectral series. It is easiest to see this by making the transformation
t = arccos(x) so the Chebyshev polynomials in x become a cosine series in t. The mapping
becomes

   y = arcsin([1 − β] cos(t)) / arcsin(1 − β)        (16.40)

The reason the trigonometric coordinate is convenient is that the convergence-slowing ef-
fects of a singularity are directly proportional to the imaginary part of ts where ts is the
location of the singularity.

Theorem 31 (BRANCH POINTS of KOSLOFF/TAL-EZER MAP) In terms of the trigonomet-
ric coordinate t, the Kosloff/Tal-Ezer mapping has square root branch points at

   ts = mπ ± i arccosh(1/(1 − β))        (16.41)

where m is an arbitrary integer. For small β, the all-important imaginary part of the location of the
branch points is given approximately by

   |Im(ts)| ≈ √(2β) + (5√2/12) β^(3/2) + O(β^(5/2))        (16.42)

In particular, when β is chosen to vary inversely to the square of N, the number of Chebyshev grid
points,

   |Im(ts)| ≈ √(2C) (1/N) + O(N^(−3/2)),    N >> 1,  β ∼ C/N²        (16.43)

Note that µ ≡ |Im(ts)| is the "asymptotic rate of geometric convergence"; the exponential factor in
the usual N → ∞ approximation to the spectral coefficients (Chapter 2) is exp(−µN), which in
this case is therefore

   exp(−|Im(ts)| N) ∼ exp(−√(2C)) + O(1/N)        (16.44)

which does NOT DECREASE WITH N.

PROOF: The mapping is singular where the argument of the arcsin in the numerator is
one. Using the identity cos(tr + i tim) = cos(tr) cosh(tim) − i sin(tr) sinh(tim) and separating
real and imaginary parts, this is equivalent to the pair of equations

   1 = (1 − β) cos(tr) cosh(tim)        (16.45)
&  0 = sin(tr) sinh(tim)

The second equation requires tr = 0, π or any other integral multiple of π. It follows that
without approximation tim = arccosh(1/(1 − β)) as asserted in Eq. (16.41). Expanding the inverse
hyperbolic cosine for small β then gives the remaining two parts of the theorem. Q. E. D.
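Both the exact branch-point height and its small-β expansion are easy to verify numerically. A quick check (my own, using NumPy; β = 0.01 is an arbitrary small value):

```python
import numpy as np

beta = 0.01

# Exact branch-point height from Eq. (16.41):
t_im = np.arccosh(1.0 / (1.0 - beta))

# Small-beta expansion, leading terms of Eq. (16.42):
approx = np.sqrt(2.0 * beta) + (5.0 * np.sqrt(2.0) / 12.0) * beta**1.5

# At t_s = i*t_im the argument of the arcsin in the numerator is exactly one:
# (1-beta) * cos(i*t_im) = (1-beta) * cosh(t_im) = 1.
arg = (1.0 - beta) * np.cos(1j * t_im)
```

At β = 0.01 the two-term expansion agrees with arccosh(1/(1 − β)) to about six digits.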
In the large-degree asymptotic approximation to the coefficients of the Fourier series for
a function whose singularity nearest the real t-axis is at t = ts, the exponential-in-degree
factor is exp(−N |Im(ts)|) for the N-th coefficient. If the imaginary part of the mapping
singularities is inversely proportional to N, then the exponential factor does not decrease
at all and the Kosloff/Tal-Ezer mapping has DESTROYED the SPECTRAL ACCURACY of the
Chebyshev algorithm.
To demonstrate this, Fig. 16.3 is a plot of the N-th Chebyshev or Fourier coefficient for
the linear polynomial f(y) = y, which is simply T1(y) in the absence of the mapping. The
map parameter has been varied with N according to the optimum choice of Mead and
Renaut, β ≈ 1/(2N²).
When β decreases more slowly with N than 1/N², the transformed Chebyshev series
still converges geometrically. Unfortunately, the timestep then decreases more rapidly with
N than when β is O(1/N²).
In spite of this, the Kosloff/Tal-Ezer mapping has many satisfied customers as cata-
logued in the table. However, the claim, made in the title of their paper, that the Kosloff/Tal-
Ezer method gives an O(1/N) time step limit is true only if one is willing to let spectral
accuracy go to the devil.
For this reason, Hesthaven, Dinesen and Lynov (1999), who are unwilling to sacrifice
spectral accuracy, choose

   β = 1 − cos(1/2)        (16.46)

[Figure 16.3 appears here: a log-log plot titled "Kosloff/Tal-Ezer N-th coefficient of y", with N running from 10^0 to 10^2.]
Figure 16.3: The N-th Chebyshev coefficient versus N when the map parameter is varied
with N according to β = 1 − cos(1/N). (Note that the plotted numbers are not the coef-
ficients of a single spectral expansion because the map parameter is different for each N.)
Only odd N are plotted because the even N coefficients are all zero by symmetry. The dot-
ted curve is a graph of 1/N², which is asymptotically parallel to the solid line. Numerically,
the coefficients of f(y) = y asymptote to 0.488/N²: the usual exponential convergence of a
Chebyshev series has been destroyed by the change-of-coordinate.

This "accuracy-conservative" choice nevertheless allows twice the time step of an un-
mapped Chebyshev algorithm with no loss of accuracy or stability.
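A quick way to see where the factor of roughly two comes from: with β = 1 − cos(1/2) held fixed, the mapped grid's smallest (boundary) spacing is stretched by a roughly constant factor relative to the raw Chebyshev grid, and the advective timestep limit scales with that spacing. A rough check (mine; this measures only the spacing stretch, not the full eigenvalue analysis):

```python
import numpy as np

N = 64
beta = 1.0 - np.cos(0.5)                    # fixed map parameter, Eq. (16.46)
x = np.cos(np.pi * np.arange(N + 1) / N)    # Chebyshev-Lobatto points
y = np.arcsin((1.0 - beta) * x) / np.arcsin(1.0 - beta)

# How much the smallest grid spacing grows under the map:
stretch = np.abs(np.diff(y)).min() / np.abs(np.diff(x)).min()
```

The stretch factor comes out around 1.7 for this N, consistent with (though not identical to) the reported doubling of the stable timestep.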
Thus, the controversy is only about the best choice of β. There is no doubt that the
Kosloff/Tal-Ezer mapping is useful and allows a longer time step.
Chapter 17

Methods for Unbounded Intervals

"The infinite! No other question has ever moved so profoundly the spirit of man."
- David Hilbert (1921)

17.1 Introduction
In this chapter, we discuss a variety of special tools for dealing with problems in infinite
or semi-infinite domains. When the problem is posed on y ∈ [−∞, ∞], the alternatives
include: (i) sinc functions, (ii) Hermite functions, (iii) algebraically mapped Chebyshev
polynomials, (iv) exponentially mapped Chebyshev polynomials, or (v) solving the prob-
lem on a large but finite interval, a technique we shall call "domain truncation". On the
semi-infinite interval y ∈ [0, ∞], we can use (i) Laguerre functions, (ii) Chebyshev poly-
nomials with various mappings, or (iii) domain truncation. In the rest of the chapter, we
will describe a number of particular choices of basis set.

The unboundedness of a computational domain exacts two penalties:

1. The rate of convergence is usually SUBGEOMETRIC rather than geometric.

2. In addition to the truncation N, one must always choose some kind of scaling parameter L.
This second numerical parameter may be the size of a truncated computational domain, a map
parameter for a change-of-coordinate that transforms a finite interval to an infinite domain,
or a scaling constant for the argument of the basis functions.

The third complication of infinite intervals is that there are many good choices of spec-
tral basis. Consequently, we shall treat in turn sinc, Hermite, Laguerre, rational Chebyshev,
domain truncation, and Fourier series with a mapping. Which should one choose? The
good news is: It doesn't much matter. All these basis sets work well. There are differences,
but only modest differences, in efficiency, ease of programming and the relationship be-
tween the basis sets and the solutions of particular classes of physical problems. Thus, all
of these basis sets are used widely.
The many options for unbounded domains fall into three broad categories:

1. Domain truncation (approximation of y ∈ [−∞, ∞] by [−L, L] with L >> 1)

2. Basis functions intrinsic to an infinite interval (sinc, Hermite) or a semi-infinite
interval (Laguerre)

3. Change-of-coordinate ("mapping") of the unbounded interval to a finite one, fol-
lowed by application of Chebyshev polynomials or a Fourier series
These three strategies are not completely distinct. One can always combine mapping of the
coordinate with domain truncation or any basis for an unbounded interval. Furthermore,
mapping is equivalent to creating new basis functions whose natural home is the infinite
or semi-infinite interval; these are simply the images of Chebyshev or Fourier functions
under the change-of-coordinate. Nevertheless, it is useful to keep these key strategies in
mind as these are mixed and matched to create good algorithms.
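As a concrete instance of that equivalence: the algebraic map x = y/√(L² + y²) carries y ∈ (−∞, ∞) to x ∈ (−1, 1), and the image of each Chebyshev polynomial Tn(x(y)) is a rational function of y (the "rational Chebyshev" functions treated later in the chapter). A small sketch (the closed form for n = 2 is checked numerically; L is the user-chosen scale parameter):

```python
import numpy as np

L = 2.0                                        # map/scale parameter (user-chosen)
x_of_y = lambda y: y / np.sqrt(L**2 + y**2)    # algebraic map (-inf,inf) -> (-1,1)

def TB2(y):
    """Image of T_2(x) = 2x^2 - 1 under the map: a rational function of y."""
    x = x_of_y(y)
    return 2.0 * x**2 - 1.0

y = np.linspace(-50.0, 50.0, 101)
# The same basis function written explicitly as a rational function of y:
closed_form = (y**2 - L**2) / (y**2 + L**2)
err = np.max(np.abs(TB2(y) - closed_form))
```

The two expressions agree to machine precision, illustrating that "Chebyshev plus map" and "new rational basis on the infinite interval" are the same thing.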

17.2 Domain Truncation
Definition 35 (DOMAIN TRUNCATION) If a solution u(y) decays rapidly in the direction or
directions for which the computational interval is unbounded (or asymptotes to a simple form), then
the exact solution can be calculated by solving the differential equation on a large but finite interval.
This strategy for unbounded domains is "domain truncation".

17.2.1 Domain Truncation for Rapidly-decaying Functions
Domain truncation is the "no-brain" strategy for solving problems on an unbounded in-
terval because it requires no modifications of strategies for a finite interval. If the solution
u(y) decays exponentially with |y| for sufficiently large |y|, then the error in approximating
the infinite interval by the finite interval y ∈ [−L, L] will decrease exponentially with the
domain size L. We can then apply a standard Chebyshev, Legendre or Fourier basis on the
finite domain. The joys of applying existing software for a finite domain to an unbounded
computational interval should not be underestimated; programming time is much more
expensive than machine time.
To estimate the error in domain truncation, the following suffices:
Definition 36 (DOMAIN TRUNCATION ERROR) The "domain truncation error" is

   EDT(L) ≡ |u(L)|   if |u(y)| ≤ |u(L)| for all |y| ≥ L        (17.1)
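For instance, for u(y) = sech(y), which decays like 2e^(−|y|), the definition gives EDT(L) = sech(L), so enlarging the domain from L = 4 to L = 8 shrinks the error by almost exactly e^(−4). A two-line check (the example function is my choice, not from the text):

```python
import numpy as np

u = lambda y: 1.0 / np.cosh(y)       # sech(y): monotonically decaying for y > 0
E = {L: u(L) for L in (4.0, 8.0)}    # domain truncation error E_DT(L) = |u(L)|
decay = E[8.0] / E[4.0]              # should be very close to exp(-4)
```

This exponential-in-L decay is what makes domain truncation viable: doubling L squares the (already small) error factor.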

