
C Evaluation of TN = cos(N*ACOS(X)) and of its first four derivatives
C with respect to X: TNX, TNXX, TNXXX, TNXXXX.
C T is the trigonometric argument (on interval [0, pi]).
      T = ACOS(X)
      TN = COS(FLOAT(N)*T)
C Derivatives of cos(Nt) with respect to t:
      PT = - FLOAT(N) * SIN(FLOAT(N)*T)
      PTT = - FLOAT(N)*FLOAT(N) * TN
      PTTT = - FLOAT(N)*FLOAT(N) * PT
      PTTTT = - FLOAT(N)*FLOAT(N) * PTT
      C = COS(T)
      S = SIN(T)
C Conversion of t-derivatives into x-derivatives
      TNX = - PT / S
      TNXX = ( S*PTT - C * PT) / S**3
      TNXXX = (-S*S*PTTT + 3.*C*S*PTT - (3.*C*C + S*S) * PT )/S**5
      TNXXXX = (S*S*S*PTTTT-6.*C*S*S*PTTT+(15.*C*C*S+4.*S*S*S) * PTT
     1         - (9.*C*S*S+15.*C*C*C) * PT ) / S**7
      RETURN
      END




16.3 The General Theory of One-Dimensional Coordinate Transformations
Let y denote the original coordinate and x the new variable where the two are related by

y = f(x)    (16.10)

Elementary calculus shows

\frac{d}{dy} = \frac{1}{f_x(x)}\,\frac{d}{dx}    (16.11)

where f_x ≡ df/dx. By iterating this formula, we obtain the formulas for higher deriva-
tives listed in Table E-1 in Appendix E. The remainder of that appendix is a collection of
derivative tables, generated via the computer language REDUCE, for several particular
mappings.
When a Fourier cosine series is mapped so as to create a new basis set, the orthogonality
relationship is preserved:

\int_0^{\pi} \cos(mx)\,\cos(nx)\,dx \;\equiv\; \int_{f(0)}^{f(\pi)} \frac{\phi_m(y)\,\phi_n(y)}{f_x(f^{-1}[y])}\,dy    (16.12)

The Gaussian quadrature formula for the new interval with the first-derivative-of-the-
inverse-of-f weight is simply the image of the rectangle rule or trapezoidal rule under the
mapping: the abscissas are unevenly spaced because of the change of coordinates, but the
quadrature weights are all equal.
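Written out for the interior ("roots" or rectangle-rule) grid, the mapped rule is simply (a sketch; on the Lobatto grid the two endpoint values would be weighted by half, as in the trapezoidal rule):

\int_{f(0)}^{f(\pi)} \frac{u(y)}{f_x(f^{-1}[y])}\,dy = \int_0^{\pi} u(f(x))\,dx \approx \frac{\pi}{N}\sum_{i=1}^{N} u(y_i), \qquad y_i = f\!\left(\frac{\pi\,[2i-1]}{2N}\right)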
To have the machinery for transforming derivatives is one thing; to know which map-
pings are useful for which problems is another. In the remainder of this chapter, we offer a
variety of illustrations of the many uses of mappings.


16.4 Infinite and Semi-Infinite Intervals
When the computational interval is unbounded, a variety of options are available, and
only some of them require a change-of-coordinate as discussed in much greater detail in
the next chapter. For example, on the infinite interval y ∈ [−∞, ∞], Hermite functions
or sinc (Whittaker cardinal) functions are good basis sets. On the semi-infinite domain,
y ∈ [0, ∞], the Laguerre functions give exponential convergence for functions that decay
exponentially as |y| → ∞.
A second option is what we shall dub “domain truncation”: solving the problem on
a large but finite interval, y ∈ [−L, L] (infinite domain) or y ∈ [0, L] (semi-infinite), using
Chebyshev polynomials with argument (y/L) or (2y/L − 1), respectively. The rationale is
that if the solution decays exponentially as |y| → ∞, then we make only an exponentially-
small-in-L error by truncating the interval. This method, like the three basis sets mentioned
above, gives exponentially rapid convergence as the number of terms in the Chebyshev
series is increased.
However, there is a complication. If we fix L and then let N → ∞, the error in approxi-
mating u(x) on x ∈ [−L, L] (call this the “series” error, E_S) decreases geometrically fast
with N. Unfortunately, this is not the only source of error. The “domain truncation error”,
E_DT(L), arises because u(±L) has some finite, non-zero value, but we imposed u(±L) = 0
on the numerical solution. It follows that to drive the total error (= E_S + E_DT) to zero, we
must simultaneously increase both L and N. This in turn implies that the rate of decrease of
the total error with N is “subgeometric”. A full discussion is in Boyd (1982a).
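A heuristic scaling argument makes the rate explicit. Suppose, purely for illustration, that E_S ~ exp(−qN/L) (a fixed number of points per unit length) while E_DT ~ exp(−pL), with p, q > 0 constants; these model error laws are assumptions, not formulas quoted from Boyd (1982a). Minimizing the sum over L balances the two exponents:

qN/L \approx pL \;\Longrightarrow\; L_{optimum} \sim \sqrt{qN/p}, \qquad E_{total} \sim \exp\left(-\sqrt{pqN}\right),

so the error decays exponentially in \sqrt{N} rather than in N, i.e., subgeometrically.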
The third option, and the only one that is strictly relevant to the theme of this chapter,
is to use a mapping that will transform the unbounded interval into [-1, 1] so that we
can apply Chebyshev polynomials without the artificiality of truncating the computational
interval to a finite size. A wide variety of mappings are possible; an early paper by Grosch
and Orszag (1977) compared

y = −L log(1 − x),      y ∈ [0, ∞]      “Logarithmic Map”    (16.13)

y = L(1 + x)/(1 − x),   x ∈ [−1, 1]      “Algebraic Map”    (16.14)
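The metric factors needed to apply (16.11) to either map follow at once by differentiation:

dy/dx = L/(1 − x)    [logarithmic map],        dy/dx = 2L/(1 − x)²    [algebraic map]

Both factors blow up as x → 1, which is how a finite interval in x can cover an infinite range in y; the two maps differ only in how strongly they stretch the coordinate near that endpoint.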
The logarithmic map stretches the semi-infinite interval so strongly that for a function
which decays exponentially with |y|, the change-of-coordinate creates a branch point at
x = 1, and hence a Chebyshev series that converges rather slowly. Grosch and Orszag (1977)
and Boyd (1982a) offer compelling theoretical arguments that logarithmic maps will always
be inferior to algebraic maps in the asymptotic limit N → ∞. Paradoxically, however,
some workers like Spalart (1984) and Boyd (unpublished) report good results with an ex-
ponential map. The resolution of the paradox is (apparently) that exponentially-mapped
Chebyshev series approach their asymptotic behavior rather slowly so that for N ∼ O(50)
or less, they may be just as good as algebraic maps.
Nevertheless, we shall discuss only algebraic maps here; more general maps are dis-
cussed in the next chapter. The logarithmic change-of-coordinate is like a time bomb: it
may not blow up your calculation, but theory shows that it is risky.

Like domain truncation and Hermite and Laguerre functions, algebraic mapping will
give exponential-but-subgeometric convergence if the function has no singularities on the
expansion interval except at infinity and decays exponentially fast as y → ∞.
The algebraic mapping (16.14) converts the Chebyshev polynomials in x into rational
functions in y, dubbed the TL_n(y), defined by

TL_n(y) ≡ T_n( [y − L] / [y + L] )    (16.15)

It may seem a little pretentious to introduce a new symbol to represent these “rational
Chebyshev functions on the semi-infinite interval” when they are merely Chebyshev poly-
nomials in disguise. However, the Chebyshev polynomials themselves are merely cosine
functions in masquerade. We can alternatively define the “rational Chebyshev functions”
via

TL_n( L cot²[t/2] ) ≡ cos(nt)    (16.16)

It follows that, just as in Sec. 2, we can evaluate the derivatives of the TL_n(y) in terms of
those of cos(nt) by using the transformation formulas given in Table E-6. The pseudospectral
grid points are simply the images under the map y_i = L cot²(t_i/2) of the evenly spaced
roots in t. (The Gauss-Lobatto or “extrema” grid is inconvenient because the mapping is
singular at the endpoints.) Thus, it is quite trivial to write programs to solve problems on
an unbounded interval.
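As a concrete illustration, here is a minimal FORTRAN sketch of that grid generation. The routine name TLGRID, the single-precision constants, and the choice of the cosine-root points t_i = π(2i − 1)/(2N) are illustrative assumptions, not code from the text.

C     TLGRID: interior collocation points for the rational Chebyshev
C     functions TL_n(y) on the semi-infinite interval, obtained as the
C     images Y(I) = XL*cot(T/2)**2 of the evenly spaced points
C     T = pi*(2I-1)/(2N) on (0, pi).  XL is the map parameter L.
      SUBROUTINE TLGRID(N, XL, Y)
      INTEGER N, I
      REAL XL, Y(N), PI, T
      PI = 3.14159265
      DO 10 I = 1, N
         T = PI * (2.*FLOAT(I) - 1.) / (2.*FLOAT(N))
         Y(I) = XL * ( COS(0.5*T) / SIN(0.5*T) )**2
   10 CONTINUE
      RETURN
      END

Only the interior (roots) grid is generated since, as noted above, the mapping is singular at the endpoints t = 0, π.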
We must not forget, however, that even though we may prefer to write our program to
solve the trigonometric version of the problem via a Fourier cosine series, the end result is
a series for u(y) as a sum of orthogonal rational functions, the TL_n(y). The mapping has
interesting and important consequences discussed in Chap. 17. The key both to writing the
program and to understanding the subtleties of infinite and semi-infinite domains is the
change of coordinates.


16.5 One-Dimensional Maps to Resolve Endpoint & Corner Singularities
The function

g(X) ≡ √(1 − X²)    (16.17)

is bounded everywhere on X ∈ [−1, 1], but because of the branch points, its Chebyshev
series converges with an algebraic convergence index of two:

\sqrt{1 - X^2} \;=\; \frac{2}{\pi}\left\{\, 1 - 2\sum_{n=1}^{\infty} \frac{T_{2n}(X)}{4n^2 - 1} \right\}    (16.18)

Throughout this section, we will assume that the singularities at the endpoints are “weak”
in the sense that the function is bounded at the endpoints.
Stenger (1981) pointed out that a mapping of X ∈ [−1, 1] to y ∈ [−∞, ∞] would heal
such “weak” endpoint singularities if and only if dX/dy decays exponentially fast as |y| → ∞.
The grid spacing in the original coordinate, δX, is related to the grid spacing in y by

δX ≈ (dX/dy) δy    (16.19)

The exponential decay of dX/dy implies a nearest neighbor separation near X = ±1 which
decreases exponentially with N , the number of grid points.
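To make this concrete, anticipate the tanh map of the example below (the choice of map here is only for illustration): dX/dy = sech²(y) ≈ 4e^{−2|y|} for large |y|, so uniformly spaced points y_j = jh are mapped to

1 − |X_j| \approx 2\,e^{-2|j|h},

and the separation between neighboring X_j collapses exponentially as the outermost grid points move outward with increasing N.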
For example, under the mapping

X = tanh(y)    (16.20)

g(X) = √(1 − tanh²[y])    (16.21)

     = sech(y)    (16.22)

using the identity sech²(y) = 1 − tanh²(y). In startling contrast to its preimage, sech(y) is a
remarkably well-behaved function. Stenger (1981) shows that the sinc expansion of sech(y)
on y ∈ [−∞, ∞] has subgeometric but exponential convergence with exponential index of
convergence r = 1/2. Boyd (1986a) shows that a Hermite series also has r = 1/2, but with
even faster convergence. A rational Chebyshev basis, TB_j(y), also gives subgeometric
convergence.
The key is the tanh mapping (16.20); the choice of infinite interval basis is secondary.
Table V of Boyd (1987a) shows that the sum of the first sixteen symmetric TB_n(arctanh[X])
gives an error of no more than 1 part in 300,000 for the approximation of √(1 − X²). By
contrast, the same table shows that the error in the unmapped Chebyshev series (16.18)
to the same order is 1/50. The derivatives of sech(y) are all bounded (in fact, 0!) at the
endpoints whereas all derivatives of √(1 − X²), even the first, are unbounded.
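The 1/50 figure is consistent with a direct bound on the tail of (16.18): since |T_{2n}(X)| ≤ 1 and the sum telescopes,

\left|\sqrt{1 - X^2} - S_N(X)\right| \le \frac{4}{\pi}\sum_{n=N+1}^{\infty}\frac{1}{4n^2 - 1} = \frac{2}{\pi(2N + 1)},

where S_N denotes the partial sum through T_{2N}; for sixteen retained terms (N = 15) this bound is about 1/49.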
The solutions to partial differential equations on domains with corners may have weak
singularities; the classic example is

∇²u = −1    (16.23)

on the square [−1, 1] × [−1, 1] with u ≡ 0 on all four walls. The solution has branch points
of the form

u = r² log(r) sin(2θ) + less singular terms    (16.24)

near all four corners where r and θ are the radial and angular coordinates in a local polar
coordinate system centered on a particular corner. Stenger (1979) obtained five decimal
place accuracy for (16.23) using the mapping (16.20) (for both coordinates) plus a 33×33
grid with sinc functions. Thus, although a corner singularity would seem to cry out for
a two-dimensional mapping (as discussed later in this chapter), a simple tensor product
grid with the one-dimensional tanh-map applied independently to both x and y is usually
sufficient.
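In the same FORTRAN style, a one-dimensional tanh-mapped grid from which such a tensor-product grid can be assembled might look as follows; the routine name TANHGD, the uniform spacing H in the unmapped coordinate, and an odd number of points M per side are illustrative assumptions.

C     TANHGD: M grid points X(I) = tanh(H*(I - (M+1)/2)), clustered
C     exponentially toward X = -1 and X = +1; the two-dimensional
C     grid for the square is the tensor product of two such sets.
      SUBROUTINE TANHGD(M, H, X)
      INTEGER M, I
      REAL H, X(M)
      DO 10 I = 1, M
         X(I) = TANH( H * FLOAT(I - (M+1)/2) )
   10 CONTINUE
      RETURN
      END

The clustering near the walls, and hence near the corners, comes entirely from the one-dimensional map; no two-dimensional machinery is required.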
One caveat is in order: when the singularity is weak, the branch point may be irrelevant
unless one needs many decimal places of accuracy. For Poisson's equation, for example, the
corner branch points still permit an algebraic index of convergence of six, much weaker
than the square root, which gives an index of two. The result is that Boyd (1986c) found
that it was possible to match Stenger's accuracy with only 136 basis functions of the proper
symmetry and no mapping.
Boyd (1988d) has elaborated this point by showing that there is a “cross-over” in N:
for smaller N, the mapping induces variable coefficients in the differential equation, which
slow convergence, but for N > N_cross-over, the mapping reduces error. It is important to
apply asymptotic concepts asymptotically!
Nevertheless, it is gratifying that exponential mapping gives an “infinite order” method
not only for approximating functions with weak endpoint singularities, but also for solving
differential equations, even though these solutions have unbounded derivatives in the
original coordinate. If the singularity is weak, try it first without a mapping. If this fails, or
if the singularity is strong, i.e., produces an algebraic index of convergence of three or less,
apply a change-of-coordinate such that the original coordinate varies exponentially fast
with the new, computational coordinate. Lund and Riley (1984), Lund (1986), and Boyd
(1987a, 1988c) give further examples.


16.6 Two-Dimensional Maps & Singularity-Subtraction for Corner Branch Points
A classic test for corner singularity-resolving algorithms is to compute the eigenvalues of
the so-called “L-shaped domain membrane” problem. The domain is the interior of the
region which is composed of three adjacent squares arranged in the shape of the letter L.
The eigenfunctions are strongly singular (all derivatives are unbounded) at the corner
where all three rectangles touch to form a 270-degree angle as measured from boundary to
boundary on the interior of the domain.
Mason (1967) showed that one could obtain superb accuracy from the Chebyshev pseu-
dospectral method after first applying the two-dimensional mapping, invented by Reid
and Walsh (1965),

u = Re(z^{2/3})  &  v = Im(z^{2/3})    (16.25)

where u and v are new coordinates, x and y are the original coordinates, and z ≡ x + iy.
This conformal mapping heals the singularity by mapping the 270-degree angle into a 180-
degree angle, i.e., a straight line. The solution is everywhere analytic on the boundaries as
a function of the new coordinate. The solution f(x, y) has an r^{2/3} branch point where r is
a local polar coordinate centered on the offending corner. However, f (u, v) is not patho-
logical because f (u[x, y], v[x, y]) has the (2/3)-power singularity built into the mapping
(16.25).
We refer to (16.25) as a “two-dimensional” mapping because it is not the result of in-
dependent, “tensor product” transformations of x and y, but rather strongly couples both
coordinates.
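A minimal sketch of (16.25) in the same FORTRAN style may help fix ideas. The routine name UVMAP and the convention that the polar angle is measured continuously across the interior of the 270-degree corner (so that the branch cut of z^{2/3} falls outside the domain) are assumptions for illustration, not Mason's code.

C     UVMAP: the Reid-Walsh coordinates u = Re(z**(2/3)) and
C     v = Im(z**(2/3)), z = x + i*y, evaluated in polar form.
C     THETA is taken in [0, 2*pi) so that it varies continuously
C     across the reentrant corner (assumed local orientation, with
C     the two walls meeting the corner at THETA = 0 and 3*pi/2).
      SUBROUTINE UVMAP(X, Y, U, V)
      REAL X, Y, U, V, R, THETA, PI, RP
      PI = 3.14159265
      R = SQRT(X*X + Y*Y)
      U = 0.
      V = 0.
C     The corner itself maps to the origin.
      IF (R .EQ. 0.) RETURN
      THETA = ATAN2(Y, X)
      IF (THETA .LT. 0.) THETA = THETA + 2.*PI
      RP = R**(2./3.)
      U = RP * COS(2.*THETA/3.)
      V = RP * SIN(2.*THETA/3.)
      RETURN
      END

With this convention the interior angle θ ∈ (0, 3π/2) is carried to (0, π), i.e., the 270-degree corner is opened out into a straight boundary, as described above.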
The mapping introduces several complications. One is that the original, L-shaped do-
main is mapped into the interior of a polygon with curved sides in the u-v plane. Mason
solves the mapped differential equation on the unit square, [−1, 1] × [−1, 1], which contains
the polygon. This allows him to use a basis whose elements are products of Chebyshev
polynomials in u with Chebyshev polynomials in v. Since parts of the square are mapped
into unphysical points outside the L-shaped domain by the inverse of (16.25), we have no
a priori way of excluding possible poles or branch points which would wreck the conver-
gence of the double Chebyshev series. Mason is able to prove, however, that the solution of
the mapped equation is analytic everywhere within the unit u-v rectangle. For a nonlinear
problem, alas, a similar proof would be difficult or impossible.
The second problem is that the boundary condition f = 0 must be applied on the curved
