

not specify in advance what values u(y) will take at the ends of the interval. In contrast,
when we solve a non-periodic boundary value problem, we usually must explicitly impose
numerical constraints such as u(−1) = α, u(1) = β. Behavioral boundary conditions, like
what are called "natural" conditions in finite element theory, do not require any modifications
to the basis set. The sines and cosines of the Fourier series, for example, automatically
and individually have the behavior of periodicity. In contrast, numerical boundary conditions
are "essential", to again borrow a label from finite element theory: they must be
imposed as explicit constraints.
Boundary conditions at infinity may usually be treated as either "behavioral" or "numerical".
This flexibility is useful because there are advantages and drawbacks to both
approaches. We will describe each in turn.
Although a general theorem is lacking, it is usually true that the behavioral condition of
boundedness at infinity is sufficient to uniquely determine the solution. The reason is that
the differential equation is usually singular at infinity so that only one of the many linearly
independent solutions is bounded at that point. If we use an unmodified series of TBn(y)
as the basis, the series will automatically converge to that solution which is finite at infinity,
in the same way that a trigonometric series will automatically converge to that solution
which is periodic.
It is rather ironic, but, because we can ignore the boundary conditions in setting up the
pseudospectral or Galerkin's matrix, the singularity of the differential equation at infinity
actually makes the problem easier instead of harder, as one might have expected. The same
thing is usually true of endpoint singularities when the interval is finite. The Chebyshev
polynomials and the Associated Legendre functions, for example, both solve second order
Sturm-Liouville differential equations which are singular at x = ±1. The ODE

√(1 − x²) [√(1 − x²) ux]x + λ u = 0        (17.40)

has solutions which are analytic at ±1 only when (i) λ = n² and (ii) u(x) is Tn(x). If
one blindly applies a Chebyshev basis to (17.40), without explicitly imposing boundary
conditions, the eigenvalues of the pseudospectral or Galerkin's matrix will nonetheless
converge to n² and the corresponding matrix eigenfunctions will have all but one element
equal to zero.
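To make this concrete, here is a minimal collocation sketch (an illustration, not code from the text): since √(1 − x²)[√(1 − x²)ux]x = (1 − x²)uxx − x ux, we can collocate (17.40) at interior Chebyshev points with no boundary conditions at all and watch the eigenvalues come out as n².

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 8
# Interior Chebyshev (Gauss) points; the endpoints x = +-1 are never used
x = np.cos(np.pi * (2 * np.arange(N + 1) + 1) / (2 * (N + 1)))

# Collocation matrices: B[j,k] = T_k(x_j);  A[j,k] = [(1-x^2) T_k'' - x T_k'](x_j)
B = np.zeros((N + 1, N + 1))
A = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    c = np.zeros(N + 1)
    c[k] = 1.0
    B[:, k] = C.chebval(x, c)
    A[:, k] = (1 - x**2) * C.chebval(x, C.chebder(c, 2)) - x * C.chebval(x, C.chebder(c, 1))

# Generalized eigenproblem  A u = -lambda B u, with NO boundary conditions imposed
lam = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(B, -A))))
print(lam)  # -> 0, 1, 4, 9, ..., N**2, the exact eigenvalues n**2
```

Because the Chebyshev polynomials satisfy (17.40) exactly, the computed spectrum is n² to machine precision even for this tiny N.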
Gottlieb and Orszag (1977, pg. 32) point out that it is precisely such singular Sturm-Liouville
problems that furnish the best basis sets: because we need not impose boundary
conditions on the solutions of (17.40) [except the implicit one of being analytic at the
endpoints], the Chebyshev polynomials will give geometric convergence for all functions,
independent of their boundary behavior, so long as f(x) is analytic at the endpoints. When
the eigenfunctions of a Sturm-Liouville problem have to satisfy specific numerical boundary
conditions such as u(±1) = 0, series of these eigenfunctions converge geometrically
only for equally specific and special classes of functions.

Rule-of-Thumb 15 (Behavioral Boundary Conditions at Infinity) If the desired solution of
a differential equation is the only solution which is bounded at infinity, then one may usually obtain
exponential accuracy by using an unmodified series of the rational functions TBn(y).
If this doesn't work, or doesn't work well, one can always explicitly impose the constraint that
u(y) vanishes at infinity, but usually this is an unnecessary waste of programming hours.

The best way to explain this rule-of-thumb is by means of a specific example. The
differential equation (Boyd, 1987b)

y uyy + (y + 1) uy + λ u = 0,        y ∈ [0, ∞]        (17.41)

is defined on the semi-infinite interval [0, ∞], and therefore strictly belongs in Sec. 12, but
it illustrates behavioral boundary conditions very well. The problem is singular both at the
origin and at infinity. Eq. (17.41) has solutions which are analytic at both endpoints only
when (i) λ = n where n is an integer ≥ 0 and (ii) u(y) is the (n−1)-st Laguerre function.
When we expand u(y) as a series of the rational functions TLn(y) without the imposition
of boundary conditions, it works, but just barely. With N = 40, the lowest numerical
eigenfunction agrees with u(y) = exp(−y), the exact answer, only to a couple of decimal
places. The correct eigenvalue (λ = 1) is mirrored by a pair of complex conjugate eigenvalues
with real parts approximately equal to 1 and imaginary parts on the order of 0.01.
For so large an N, this is astonishingly poor accuracy.
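One can at least confirm the quoted eigenpair directly: u(y) = exp(−y) with λ = 1 satisfies (17.41) identically. A quick numerical residual check (a sketch, not from the text):

```python
import numpy as np

# Residual of (17.41):  y*u'' + (y+1)*u' + lam*u,  for u = exp(-y), lam = 1
y = np.linspace(0.1, 20.0, 200)
u, uy, uyy = np.exp(-y), -np.exp(-y), np.exp(-y)
residual = y * uyy + (y + 1) * uy + 1.0 * u
print(np.max(np.abs(residual)))  # round-off level: the pair solves (17.41) exactly
```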
The rub is that the other linearly independent solution of (17.41) when λ is a positive
integer does blow up as |y| → ∞, but only as a power of y rather than exponentially. The
remedy is to make a change-of-unknown to a new variable w(y) defined by

w(y) = exp(y/2) u(y)        (17.42)

which is the solution of

y wyy + wy + (λ − 1/2 − y/4) w = 0        (17.43)

The exact eigenvalues are unchanged by the transformation except for the disappearance
of λ = 0, which corresponds to the trivial eigenfunction u(y) ≡ 1. After the change-of-unknown
(17.42), N = 40 gives 12 accurate eigenvalues, and these are all real. From
(17.42), we see that the offending second solutions of (17.43) blow up exponentially, rather
than algebraically, as |y| → ∞, and the unmodified spectral series has a much easier time
separating the true eigenfunctions from the unbounded solutions.
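As a consistency check on the transformed problem (a sketch, not from the text): the image of u = exp(−y) under (17.42) is w = exp(−y/2), which should satisfy y w'' + w' + (λ − 1/2 − y/4) w = 0 with λ = 1.

```python
import numpy as np

# Residual of the transformed equation (17.43) for w = exp(-y/2), lam = 1
y = np.linspace(0.1, 20.0, 200)
lam = 1.0
w, wy, wyy = np.exp(-y / 2), -0.5 * np.exp(-y / 2), 0.25 * np.exp(-y / 2)
residual = y * wyy + wy + (lam - 0.5 - y / 4) * w
print(np.max(np.abs(residual)))  # round-off level: w solves (17.43) exactly
```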
We see why the rule-of-thumb is not a theorem; it still applies to (17.41) in principle,
but is extremely inefficient for this problem without the change-of-unknown. Eq. (17.41)
combines two types of behavioral boundary conditions in a single example: the solution
must be bounded at both endpoints even though one is finite and the other at ∞.
There seems to be no simple theorem that covers all eventualities. Gottlieb and Orszag
(1977, pp. 152-153) discuss a Sturm-Liouville eigenproblem whose solutions are the J7(y)
Bessel functions. The boundary condition at y = 0 is that u(y) be analytic in spite of the fact
that the differential equation is singular there. Blindly applying Chebyshev polynomials
gives an approximation that converges on the true eigenvalue as N increases; indeed, with
N = 26, the approximation is exact to five decimal places! The only problem is that when
we impose the boundary conditions u(0) = uy(0) = 0 with the same N, we obtain eleven
decimal places.2
The conclusion is that in coping with behavioral boundary conditions, one should be
flexible, prepared to modify the basis set or explicitly impose constraints if need be. "Usually"3,
however, treating boundary conditions at infinity as "behavioral" or "natural" is
successful. Boyd (1987a, b) gives a dozen examples; except for (17.41), using the unconstrained
TBn(y) or TLn(y) gave many decimal places of accuracy with a small number of
basis functions.
Nevertheless, it is possible to treat infinity as a numerical boundary condition by imposing

u(∞) = 0        (17.44)

because the solutions to most problems decay as y → ∞. The programming is more complex
because the basis must be modified. On the other hand, if the solution is zero at one
or both endpoints, we can reduce the size of the basis set without a loss of accuracy by
switching to new basis functions which also vanish at the boundaries.
Indeed, if u(y) decays exponentially fast, then it is true that

d^k u/dy^k (∞) = 0        (17.45)

for any finite k. Thus, we have the theoretical flexibility to impose an arbitrary number of
endpoint zeros on the basis.
In practice, this freedom should be used cautiously. If all the basis functions have high
order zeros at the endpoints, they will all be almost indistinguishable from zero, and from
each other, over a large part of the interval. This may give numerical ill-conditioning. Merilees
(1973a) shows that Robert's suggested basis functions for the sphere, an ordinary cosine
series multiplied by sin^m(θ) to mimic the m-th order zero of the corresponding spherical
harmonic, are disastrously ill-conditioned for large m. Furthermore, imposition of high
order zeros complicates the programming.
Nonetheless, Boyd (1988f) obtained good accuracy for the Flierl-Petviashvili monopole,
discussed in Appendix D, by applying the basis

φ2n(y) ≡ TB2n(y) − 1        (17.46)

which vanishes as 1/y² as |y| → ∞. The analytical one-basis-function approximation was
sufficiently accurate to serve as a good first guess for the Newton-Kantorovich iteration.
The N = 1 Chebyshev approximation, u(y) ≡ a constant, was considerably less accurate!
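The claimed 1/y² decay of this basis is easy to verify numerically, since TB2(y) = cos(2 arccot(y/L)) = (y² − L²)/(y² + L²), so TB2(y) − 1 = −2L²/(y² + L²). A small sketch (not from the text):

```python
import numpy as np

L = 1.0
def TB(n, y):
    # Rational Chebyshev functions: TB_n(y) = cos(n*t) with y = L*cot(t)
    t = np.arctan2(L, y)          # t = arccot(y/L), in (0, pi)
    return np.cos(n * t)

y = np.array([10.0, 100.0, 1000.0])
phi2 = TB(2, y) - 1.0             # basis function (17.46) with n = 1
print(phi2 * y**2)                # approaches -2*L**2 as |y| grows, confirming 1/y^2 decay
```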

17.9 Expansions for Functions Which Decay Algebraically
with y or Asymptote to a Constant
The rational function

f(y) ≡ 1/(1 + y²)        (17.47)
2 The singularity of the differential equation at the origin actually forces the first six derivatives of J7(y) to
vanish at the origin, but it is not necessary to impose all these constraints to obtain very high accuracy.
3 A pure mathematician would loathe the word "usually", but as Briggs et al. (1981) note, "numerical analysts
are the academic world's greatest plumbers."

is rather harmless, but both Hermite series and sinc expansions fail miserably in approximating
it. The trouble with the former is that the Hermite functions all decay exponentially
(like Gaussians) for very large y. One can show that the transition from oscillation to decay
occurs for the n-th Hermite function at the "turning points" given by

yt = ±√(2n + 1)        (17.48)

Now f(100) is still as large as 1/10,000, so it follows that to obtain four decimal place
accuracy for (17.47), we need to have a Hermite basis large enough so that at least some of
the Hermite functions have turning points at |y| = 100 or larger. Eq. (17.48) implies that
we need at least 5,000 Hermite functions to achieve this, a ridiculously large number. As
explained in Sec. 3, the Hermite series of a function which decays algebraically with |y| will
have coefficients an that decay algebraically with n.
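The count of 5,000 follows directly from (17.48): set yt = 100 and solve for n. A one-line check of the arithmetic:

```python
import math

# Smallest n whose turning point y_t = sqrt(2n + 1) reaches |y| = 100
n_min = math.ceil((100.0**2 - 1) / 2)
print(n_min)  # -> 5000
```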
The sinc expansion is similarly in trouble: the coefficients of the Whittaker cardinal series
are f(jh), and for a slowly decaying f(y) and a reasonably small grid spacing h, it
follows that f(jh) will not be small until j is huge; in other words, we must keep a preposterously
large number of sinc functions in the series to obtain even moderate accuracy.
The orthogonal rational basis functions, however, have no such problem. Inspecting
Table 17.5, we see that

f(y) ≡ (1/2) {TB0(y) − TB2(y)}        (17.49)

if L = 1. Even if we choose a different map parameter, the change of variable y = L cot(t)
shows that

f(y[t]) = (1 − cos²(t)) / (1 + (L² − 1) cos²(t))        (17.50)

which has no singularities for any real t and therefore has a geometrically convergent
Fourier cosine series. The TBn(y) expansion of f(y) has the same coefficients as the Fourier
series (in t) of f(y[t]), so it follows that it, too, must converge geometrically for arbitrary
real L.
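For L = 1 this is easy to confirm numerically: under y = cot(t), f becomes sin²(t) = (1 − cos(2t))/2, so with the convention f = a0/2 + Σ an cos(nt) the only nonzero cosine coefficients are a0 = 1 and a2 = −1/2, i.e. f = (TB0 − TB2)/2. A sketch, not from the text:

```python
import numpy as np

# f(y) = 1/(1 + y^2) under y = cot(t) (L = 1) becomes sin(t)**2.
# Compute its cosine coefficients a_n, where f = a0/2 + sum_{n>=1} a_n cos(n t).
M = 64
t = np.pi * (np.arange(M) + 0.5) / M   # interior points in t, avoiding t = 0, pi
g = np.sin(t)**2
a = np.array([(2.0 / M) * np.sum(g * np.cos(n * t)) for n in range(8)])
print(np.round(a, 12))  # -> a0 = 1, a2 = -0.5, all other a_n = 0
```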
The same is true for functions which asymptote to a constant as y → ∞. There are limits;
if f(y) blows up, even linearly with y, as y → ∞, then its image under the mapping y → t will be infinite
at t = 0, and neither the Fourier series nor the TBn(y) expansion will even be defined. As
long as f(y) is finite, however, then an exponentially convergent series of rational Chebyshev
functions is at least a possibility.
We have to be content with the vague phrase "is a possibility" because we need more
specific information about the asymptotic behavior of the function. If we consider the function

g(y) ≡ 1/(1 + y²)^(1/3)        (17.51)

then since

y = L cot(t) ≈ L/t,        t ≪ 1        (17.52)

it follows that the image of (17.51) under the mapping (17.52) will behave like

g(y[t]) ≈ t^(2/3),        t ≪ 1        (17.53)

Table 17.6: Examples of functions which asymptote to a constant or decay algebraically
with y. These illustrate the four classes of functions that have both symmetry with respect
to y = 0 (denoted by S for symmetric and A for antisymmetric in the column labeled
"Code") and also have asymptotic expansions which contain only even or only odd powers
of 1/y (indicated by E or O in the "Code" column). The third and fourth columns give
the mathematical forms of these symmetries. The rightmost column indicates the restricted
basis set that is sufficient to represent all u(y) that fall into this symmetry class.

u(y[t])     u(y)                 Asymptotic Form           Parity with respect to y = 0    Code    Basis Set
cos(t)      y/√(1 + y²)          ∼ y/|y| + O(1/y²)         u(y) = −u(−y)                   A & E   TB2n+1
cos(2t)     (y² − 1)/(y² + 1)    ∼ 1 + O(1/y²)             u(y) = u(−y)                    S & E   TB2n
sin(t)      1/√(1 + y²)          ∼ 1/|y| + O(1/|y|³)       u(y) = u(−y)                    S & O   Odd sines SB2n
sin(2t)     2y/(1 + y²)          ∼ 2/y + O(1/y³)           u(y) = −u(−y)                   A & O   Even sines SB2n+1
and the Fourier and TBn(y) series will converge algebraically (an ∼ O(1/n^(5/3))) because of
the branch point at t = 0.
There are a couple of remedies. One, discussed in Boyd (1987a), is to examine the
general Fourier series in the trigonometric coordinate t. If the asymptotic behavior of f(y)
may be represented as a series of the inverse odd powers of y, then the sine series will
converge exponentially fast. The images of the sines under the inverse mapping define a
new basis set, SBn(y) ≡ sin{(n + 1)[arccot(y/L)]} for n = 0, 1, 2, . . . .4 These functions,
which are rational in y, are never needed except to approximate a function which is decaying
algebraically as |y| → ∞. Even then, they may be unnecessary if f(y) may be represented as a
series of inverse even powers of y for |y| ≫ 1. Table 17.6 summarizes the cases.
The SBn(y; L) can be computed directly by introducing Y ≡ y/L and then applying
the three-term recurrence

SB0 = 1/√(1 + Y²),    SB1 = 2Y/(1 + Y²),    SBn+1 = [2Y/√(1 + Y²)] SBn − SBn−1        (17.54)
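A quick consistency check of this recurrence against the direct definition SBn(y; L) = sin((n + 1) arccot(y/L)) (a sketch, not from the text):

```python
import numpy as np

# SB_n via the three-term recurrence vs. the direct definition
L = 1.0
y = np.linspace(-5.0, 5.0, 11)
Y = y / L
SB = [1 / np.sqrt(1 + Y**2),          # SB_0 = sin(t)
      2 * Y / (1 + Y**2)]             # SB_1 = sin(2t)
for n in range(1, 8):
    # sin((n+2)t) = 2*cos(t)*sin((n+1)t) - sin(n*t), with cos(t) = Y/sqrt(1+Y^2)
    SB.append(2 * Y / np.sqrt(1 + Y**2) * SB[n] - SB[n - 1])

t = np.arctan2(L, y)                  # t = arccot(y/L)
err = max(np.abs(SB[n] - np.sin((n + 1) * t)).max() for n in range(9))
print(err)                            # round-off level
```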
The second option is to apply a mapping. For example, the “bad” function (17.51) has

