
equation for a potential energy V(x) is, letting k² denote the energy,

ψ_xx + [k² − V(x)] ψ = 0        (19.14)

It is implicitly assumed that V(x) → 0 as |x| → ∞ so that the asymptotic solution is a plane
wave of wavenumber k.
The scattering problem is to calculate the transmitted and reflected waves when a plane
wave of unit amplitude is incident from the left. The wavefunction ψ has the asymptotic
behavior

ψ ∼ e^{ikx} + α(k) e^{−ikx},    x → −∞        (19.15a)
ψ ∼ β(k) e^{ikx},               x → ∞         (19.15b)

where the first term in (19.15a) is the incident wave (the forcing), the second term is the
reflected wave, and (19.15b) is the transmitted wave. The complex coefficients α(k) and
β(k) are unknowns; the primary goal is to calculate the reflection coefficient R defined by

R ≡ |α|²        (19.16)

If V(x) decays exponentially fast as |x| → ∞, then ψ(x) tends to its asymptotic forms
(19.15) exponentially fast, too. The rational Chebyshev functions, TB_n(x), are a natural and
exponentially convergent basis for representing the difference between ψ and its asymp-
totic forms.
Unfortunately, the rational Chebyshev functions cannot represent the asymptotic si-
nusoidal waves themselves. Because the sine waves do not decay in amplitude, a small
absolute error on the infinite interval is possible only if a moderate number of rational
functions can faithfully represent an infinite number of crests and troughs. This is
too much to ask of rational Chebyshev functions, sinc functions, Hermite functions, or any
standard basis set.
However, one can retrieve spectral accuracy for the entire problem by supplementing
the rational Chebyshev functions by a pair of special “radiation” functions.
First, define two real functions, C(x) and S(x), by the requirement that each solves
(19.14) and has the boundary behavior

C(x) → cos(kx)   as x → −∞        (19.17a)
S(x) → sin(kx)   as x → −∞        (19.17b)

Since these are linearly independent, the general solution of (19.14) is a linear combination
of these two functions. Comparing (19.17) with (19.15a), and noting that
e^{ikx} + α e^{−ikx} = (1 + α) cos(kx) + i (1 − α) sin(kx), shows that

ψ = (1 + α) C(x) + i (1 − α) S(x)        (19.18)

To compute α and β, we need the asymptotic behavior of C(x) and S(x) as x → ∞. Since
V(x) → 0 in this limit (by assumption), it follows that their most general asymptotic be-
havior must take the form

C(x) ∼ γ₁ cos(kx) + γ₂ sin(kx),   x → ∞        (19.19a)
S(x) ∼ σ₁ cos(kx) + σ₂ sin(kx),   x → ∞        (19.19b)

Once we have calculated the four parameters (γ₁, γ₂, σ₁, σ₂), we can match (19.19) to
(19.15b) to determine the four unknowns, the real and imaginary parts of α and β.
To compute C(x) and S(x), we write

C(x) = cos(kx) + C̃(x)   &   S(x) = sin(kx) + S̃(x)        (19.20)

C̃(x) = Σ_{n=0}^{N−3} a_n TB_n(x) + a_{N−2} H(x) cos(kx) + a_{N−1} H(x) sin(kx)        (19.21)

where

H(x) ≡ (1/2) [1 + tanh(x)]        (19.22)

is a smoothed approximation to a step function. The expansion for S̃(x) is identical in form
with that for C̃(x). The two “radiation basis functions” have the asymptotic behavior

H(x) cos(kx) → 0          as x → −∞
H(x) cos(kx) → cos(kx)    as x → ∞        (19.23)

and similarly for the other function. Because these basis functions are “one-sided”, they
vanish as x → −∞ and are irrelevant to the asymptotic behavior of C(x) in this limit. In
the opposite limit, x → ∞, one finds that as N increases,

a_{N−2} → γ₁ − 1   &   a_{N−1} → γ₂,   N → ∞        (19.24)

Similarly, the two radiation coefficients for S̃(x) converge to σ₁ and σ₂ − 1. Thus, the
coefficients of the radiation functions give the needed constants in the asymptotic approx-
imations for C(x) and S(x).
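The one-sided behavior (19.22)-(19.23) is easy to verify numerically; a minimal sketch (the wavenumber k = 1 is an illustrative choice, not taken from the text):

```python
import math

def H(x):
    """Smoothed step function of (19.22): tends to 0 as x -> -infinity
    and to 1 as x -> +infinity."""
    return 0.5 * (1.0 + math.tanh(x))

def radiation_cos(x, k):
    """Radiation basis function H(x)*cos(kx) of (19.23)."""
    return H(x) * math.cos(k * x)

k = 1.0
# Exponentially small far to the left ...
print(radiation_cos(-15.0, k))
# ... and indistinguishable from cos(kx) far to the right.
print(radiation_cos(15.0, k) - math.cos(15.0 * k))
```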
Substituting (19.20) into (19.14) shows that

C̃_xx + [k² − V(x)] C̃ = V(x) cos(kx)        (19.25)

The differential equation for S̃(x) is identical except that cos(kx) on the right side of (19.25)
must be replaced by sin(kx). To solve (19.25), we apply the pseudospectral method with
the usual N interpolation points for the rational Chebyshev basis:

x_i ≡ L cot( π[2i − 1] / (2N) ),   i = 1, . . . , N        (19.26)

where L is a constant map parameter (=2 in the example below). Because the problems
for C̃(x) and S̃(x) differ only in the inhomogeneous term, it is necessary to compute and
LU-factorize only a single square matrix whose elements are

H_ij ≡ φ_{j,xx}(x_i) + [k² − V(x_i)] φ_j(x_i)        (19.27)

where the {φ_j(x)} are the basis functions defined above: rational Chebyshev functions
plus two special radiation functions. The factorized matrix equation must be solved twice
(Appendix B) with different inhomogeneous terms:

f_i ≡ V(x_i) cos(kx_i),   [C̃]   &   g_i ≡ V(x_i) sin(kx_i),   [S̃]        (19.28)
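The grid (19.26) and the forcing vectors (19.28) are straightforward to set up; a sketch with the map parameter L = 2 used in the text and, purely as an illustration, the sech² well of (19.30) with v = 1 and k = 1. (Assembling the full collocation matrix (19.27) also requires the second derivatives of the basis functions, which are omitted here.)

```python
import numpy as np

def rational_cheb_grid(N, L=2.0):
    """Interpolation points (19.26): x_i = L*cot(pi*(2i - 1)/(2N)), i = 1..N."""
    t = np.pi * (2.0 * np.arange(1, N + 1) - 1.0) / (2.0 * N)
    return L / np.tan(t)

def forcing_vectors(x, k, V):
    """Inhomogeneous terms (19.28) for the C-tilde and S-tilde problems."""
    f = V(x) * np.cos(k * x)   # right-hand side for C-tilde
    g = V(x) * np.sin(k * x)   # right-hand side for S-tilde
    return f, g

N, k = 48, 1.0
V = lambda x: -1.0 / np.cosh(x) ** 2   # the sech^2 well of (19.30) with v = 1
x = rational_cheb_grid(N)
f, g = forcing_vectors(x, k, V)
```

Half of the points fall in |x| ≤ L, so L should be chosen to match the width of the potential.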

Matching (19.19) to (19.15b) shows that α is the solution of the 2 × 2 real system

| (γ₁ + σ₂)   (σ₁ − γ₂) | | Re(α) |   |  (σ₂ − γ₁) |
| (γ₂ − σ₁)   (γ₁ + σ₂) | | Im(α) | = | −(σ₁ + γ₂) |        (19.29)

The rate-determining step is to invert a single N × N square matrix: O([2/3]N³) multiplica-
tions and additions.
Table 19.1 shows the results for the particular case

V(x) = −v sech²(x)        (19.30)

for v = 1 and various wavenumbers k. The exact reflection coefficient for this potential is

R = [1 + cos(2π √(v + 1/4))] / [cosh(2πk) + cos(2π √(v + 1/4))]        (19.31)
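Formula (19.31) is easily evaluated and reproduces the exact column of Table 19.1; a minimal sketch:

```python
import math

def R_exact(k, v=1.0):
    """Exact reflection coefficient (19.31) for the potential -v*sech^2(x)."""
    c = math.cos(2.0 * math.pi * math.sqrt(v + 0.25))
    return (1.0 + c) / (math.cosh(2.0 * math.pi * k) + c)

# R falls roughly like exp(-2*pi*k): about seven orders of magnitude
# between k = 0.3 and k = 3.0, as in Table 19.1.
for k in (0.3, 1.5, 3.0):
    print(k, R_exact(k))
```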

Because of the hyperbolic function in (19.31), the reflection coefficient is exponentially
small in k. This is a general feature of scattering from a potential well, and not something
special to a sech² potential. Because of this, one must solve the Schroedinger equation to
very high accuracy to compute even a crude approximation to R for large k. Otherwise, the
tiny reflection coefficient will be swamped by numerical error.
Thus, spectral methods, with their spectral accuracy, are very valuable for scattering prob-
lems.


19.5 Special Basis Functions, III:
Weakly Nonlocal Solitary Waves
This idea of special “radiation basis functions” can be applied to many other types of wave
problems where the solution asymptotes to a sinusoidal wave rather than to zero. Boyd
(1991a, 1990b, 1991d, 1991e, 1995b, 1995h, 1995j) describes applications to “non-local soli-
tary waves” and other nonlinear waves.


19.6 Root-Finding by Means of Chebyshev Polynomials:
Lanczos Economization and Ioakimidis' Method
The standard numerical strategy for solving f(x) = 0 is to approximate f(x) by something
simpler which can be solved explicitly, usually a local approximation in the vicinity of a
“first guess” for the root. In Newton's method, the approximation is a linear Taylor series


Table 19.1: The exact and numerical reflection coefficients R for the sech² potential as com-
puted using 48 rational Chebyshev functions and two radiation functions.

  k     R_numerical    R_exact        Absolute Error
  0.3   0.423097       0.423097        2.7E−8
  0.6   0.0774332      0.0774332       2.6E−8
  0.9   0.0121005      0.0121005      −3.1E−8
  1.2   0.00184535     0.00184535     −9.5E−10
  1.5   0.000280391    0.000280376     1.5E−8
  1.8   0.000042562    0.000042576    −1.3E−8
  2.1   0.000006471    0.000006465     6.1E−9
  2.4   0.000000978    0.000000982    −3.2E−9
  2.7   0.000000151    0.000000149     1.5E−9
  3.0   0.000000022    0.000000023    −8.6E−10




expansion about x = x_i. In the secant method, the approximation is the linear polynomial
that interpolates f(x) at x = x_i and x_{i−1}. The Cauchy and Muller methods are the same
except that the approximations are the quadratic Taylor series and quadratic interpolat-
ing polynomial. Halley's scheme uses the [1, 1] Padé approximant, which is that ratio of
one linear polynomial over another whose Taylor expansion matches that of f(x) through
quadratic terms. Shafer suggests using higher order Padé approximants: the [2,3] gives a
fifth-order method, but requires only solving a quadratic (the numerator) to find the root.
An obvious generalization is to use Chebyshev polynomials if the root can be localized
in a narrow region. C. Lanczos used a simple argument to localize the root of a cubic
polynomial within a finite interval, converted the polynomial into a Chebyshev series on
the interval, and then solved the N = 2 truncation of the Chebyshev series, which is a
quadratic polynomial.
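A sketch of the Lanczos procedure, using numpy's Chebyshev class in place of hand conversion; the cubic x³ + x − 1, whose single real root lies in [0, 1], is an illustrative example, not one taken from Lanczos:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: x**3 + x - 1.0          # cubic with one real root, inside [0, 1]
fprime = lambda x: 3.0 * x**2 + 1.0

# Express the cubic exactly as a degree-3 Chebyshev series on [0, 1] ...
cheb3 = C.Chebyshev.interpolate(f, 3, domain=[0.0, 1.0])
# ... then keep only the N = 2 truncation, a quadratic.
cheb2 = cheb3.truncate(3)

# Roots of the quadratic approximant; keep the one inside the interval.
approx = [r.real for r in np.atleast_1d(cheb2.roots())
          if abs(r.imag) < 1e-12 and 0.0 <= r.real <= 1.0][0]

# The dropped T_3 coefficient bounds the truncation error; a few Newton
# steps on the original cubic then polish the root to machine precision.
root = approx
for _ in range(6):
    root -= f(root) / fprime(root)
```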
Boyd (1995f) applied this strategy of “Lanczos economization”² to nonlinear eigenvalue
problems. The goal is to find the root of the determinant of a system of linear equations
when the matrix elements depend nonlinearly on the eigenparameter x. Evaluating f(x)
is very expensive if the dimension of the matrix is large: O(10⁶) operations for a 100 × 100
matrix. It therefore is useful to evaluate the determinant on an interval in x and then
compute the roots by finding those of the Chebyshev approximant.
The most reliable methods for finding all roots on an interval unfortunately require fre-
quent evaluations of f(x). One such robust strategy is to compute f(x) on a large number
of points (say one thousand!), look for sign changes, and then refine the roots thus located

² “Lanczos economization” is usually applied in a somewhat narrower sense to the process of converting a
truncated power series to a Chebyshev series of the same order, truncating to the lowest acceptable degree, and
then converting back to an ordinary polynomial to obtain an approximation that is cheaper to evaluate and more
uniform than the higher-degree polynomial from whence it came. The root-finding algorithm described here is
similar in the sense that a function which is expensive to evaluate is replaced by a truncated Chebyshev series
that is much cheaper.

by Newton's method. This would be hideously expensive if we worked directly with f(x).
An N-term Chebyshev approximant can be evaluated in O(N) operations, however, so that
one can graph the expansion at a thousand points for perhaps less cost than evaluating the
original function just once.
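The same scan is cheap when run on the Chebyshev proxy rather than on f itself; a sketch with the illustrative choice f(x) = cos(3x) on [0, 3], whose roots there are π/6, π/2, and 5π/6:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.cos(3.0 * x)

# Replace the (possibly very expensive) function by a modest-degree
# Chebyshev interpolant, which costs only O(N) per evaluation.
p = C.Chebyshev.interpolate(f, 20, domain=[0.0, 3.0])

# Scan the cheap proxy on a fine grid and record sign changes ...
xs = np.linspace(0.0, 3.0, 1000)
vals = p(xs)
brackets = [(xs[i], xs[i + 1]) for i in range(len(xs) - 1)
            if vals[i] * vals[i + 1] < 0.0]

# ... then refine each bracketed root, still using only the proxy.
def bisect(p, a, b, tol=1e-13):
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (a, m) if p(a) * p(m) <= 0.0 else (m, b)
    return 0.5 * (a + b)

roots = [bisect(p, a, b) for a, b in brackets]
```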
Lanczos economization can be applied to find roots of functions of two or even three
unknowns, but unfortunately it is unworkable in higher dimensions unless the degree of
the Chebyshev interpolant is very low. The other Chebyshev-based rootfinding tech-
nique, due to Ioakimidis, is unfortunately limited to a small number of unknowns also.
Nevertheless, it is an illuminating and clever application of Chebyshev ideas.
Suppose that f(x) has only a single root on x ∈ [a, b]. Let ρ denote the (unknown!)
location of the root. The integral

I ≡ ∫_a^b (x − ρ) / [ f(x) √(1 − x²) ] dx        (19.32)

can be integrated with exponential accuracy by a Chebyshev quadrature because the zero
in f(x) cancels that in the numerator. It follows that the results of Gauss-Chebyshev and
Chebyshev-Lobatto quadrature must differ by an amount exponentially small in N:

I_Gauss(N) = I_Lobatto + O(exp(−qN))        (19.33)

for some positive constant q (neglecting powers of N and other algebraic factors of N mul-
tiplying the exponential) where

I_Gauss(N) ≡ Σ_{j=1}^{N} w_j^G (x_j^G − ρ) / f(x_j^G)        (19.34)
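The excerpt breaks off here, but both quadrature sums are linear in ρ, so requiring them to agree yields a single linear equation for the root; the sketch below assumes that this is the intended final step (the test function exp(x) − exp(0.3), with root 0.3, is illustrative):

```python
import numpy as np

def ioakimidis_root(f, N=20):
    """Estimate the single root of f on [-1, 1] by equating the Gauss-Chebyshev
    and Chebyshev-Lobatto quadratures of (19.32), each linear in rho."""
    # Gauss-Chebyshev: nodes cos(pi*(2j - 1)/(2N)), uniform weights pi/N.
    xg = np.cos(np.pi * (2.0 * np.arange(1, N + 1) - 1.0) / (2.0 * N))
    wg = np.full(N, np.pi / N)
    # Chebyshev-Lobatto: nodes cos(pi*j/N), with the endpoint weights halved.
    xl = np.cos(np.pi * np.arange(0, N + 1) / N)
    wl = np.full(N + 1, np.pi / N)
    wl[0] *= 0.5
    wl[-1] *= 0.5
    # Each quadrature sum has the form A - rho*B:
    Ag, Bg = np.sum(wg * xg / f(xg)), np.sum(wg / f(xg))
    Al, Bl = np.sum(wl * xl / f(xl)), np.sum(wl / f(xl))
    # I_Gauss(rho) = I_Lobatto(rho)  =>  Ag - rho*Bg = Al - rho*Bl.
    return (Ag - Al) / (Bg - Bl)

rho = ioakimidis_root(lambda x: np.exp(x) - np.exp(0.3))   # true root: 0.3
```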
