

stuck, unless one switches to an algorithm that uses a lot less memory, such as a spectral
method. For this reason, pseudospectral methods have triumphed in meteorology, which is most
emphatically an area where high precision is impossible!
The drawbacks of spectral methods are three-fold. First, they are usually more difficult
to program than finite difference algorithms. Second, they are more costly per degree of
freedom than finite difference procedures. Third, irregular domains inflict heavier losses
of accuracy and efficiency on spectral algorithms than on lower-order alternatives. Over
the past fifteen years, however, numerical modellers have learned the “right” way to
implement pseudospectral methods so as to minimize these drawbacks.

1.5 Parallel Computers
The current generation of massively parallel machines is communications-limited. That is to
say, each processor is a workstation-class chip capable of tens of megaflops or faster, but
the rate of interprocessor transfers is considerably slower.
Spectral elements function very well on massively parallel machines. One can assign
a single large element with a high order polynomial approximation within it to a single
processor. A three-dimensional element of degree p has roughly p³ internal degrees of
freedom, but the number of grid points on its six walls is O(6p²). It is these wall values that
must be shared with other elements (i. e., other processors) so that the numerical solution
is everywhere continuous. As p increases, the ratio of internal grid points to boundary
grid points increases, implying that more and more of the computations are internal to
the element, and the shared boundary values become smaller and smaller compared to
the total number of unknowns. Spectral elements generally require more computation per
unknown than low order methods, but this is irrelevant when the slowness of interprocessor
data transfers, rather than CPU time, is the limiting factor.
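The scaling argument above is easy to check with a few lines of code. The sketch below counts interior versus surface points for a tensor-product element with (p+1)³ nodes; the exact counts depend on the element type, so this is an illustrative model rather than a statement about any particular code.

```python
# Ratio of interior to surface grid points for a 3-D tensor-product
# spectral element of degree p: (p+1)^3 nodes total, of which the
# nodes on the six faces must be communicated to neighbouring elements.
def interior_to_surface_ratio(p):
    total = (p + 1) ** 3
    interior = (p - 1) ** 3        # nodes not on any face
    surface = total - interior     # nodes shared with neighbours
    return interior / surface

for p in (4, 8, 16):
    print(p, round(interior_to_surface_ratio(p), 3))
```

As p grows, the ratio climbs, confirming that communication becomes a smaller and smaller fraction of the work per element.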
To do the same calculation with low order methods, one would need roughly eight
times as many degrees of freedom in three dimensions. That would increase the interprocessor
communication load by at least a factor of four. The processors would likely have
a lot of idle time: after applying low order finite difference formulas quickly throughout
its assigned block of unknowns, each processor is then idled while boundary values from
neighboring elements are communicated to it.
Successful applications of spectral elements to complicated fluid flows on massively
parallel machines have been given by Fischer (1990, 1994a, b, 1997), Iskandarani, Haidvogel
and Boyd (1994), Taylor, Tribbia and Iskandarani (1997) and Curchitser, Iskandarani and
Haidvogel (1998), among others.

1.6 Choice of basis functions
Now that we have compared spectral methods with other algorithms, we can return to
some fundamental issues in understanding spectral methods themselves.
An important question is: what sets of “basis functions” φn (x) will work? It is obvious
that we would like our basis sets to have a number of properties: (i) easy to compute;
(ii) rapid convergence; and (iii) completeness, which means that any solution can be represented
to arbitrarily high accuracy by taking the truncation N to be sufficiently large.

Although we shall discuss many types of basis functions, the best choice for 95% of all
applications is an ordinary Fourier series, or a Fourier series in disguise. By “disguise” we
mean a change of variable which turns the sines and cosines of a Fourier series into different
functions. The most important disguise is the one worn by the Chebyshev polynomials,
which are defined by

Tn (cosθ) ≡ cos(nθ) (1.16)

Although the Tn (x) are polynomials in x, and are therefore usually considered a separate
and distinct species of basis functions, a Chebyshev series is really just a Fourier cosine
expansion with a change of variable. This brings us to the first of our proverbial sayings:
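The identity (1.16) is easily verified numerically. The sketch below evaluates T₅ both as a polynomial in x and as cos(5θ) with x = cos θ; the two agree to rounding error (the grid of θ values is an arbitrary test choice).

```python
import numpy as np

# Verify T_n(cos θ) = cos(nθ) for n = 5 on a grid of θ values.
n = 5
theta = np.linspace(0.1, 3.0, 7)
x = np.cos(theta)

# T_5 as a polynomial in x, via NumPy's Chebyshev class
T5 = np.polynomial.chebyshev.Chebyshev([0, 0, 0, 0, 0, 1])
lhs = T5(x)           # polynomial side
rhs = np.cos(n * theta)   # cosine side

print(np.max(np.abs(lhs - rhs)))   # difference is at rounding-error level
```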


(i) When in doubt, use Chebyshev polynomials unless the solution is spatially periodic, in which
case an ordinary Fourier series is better.
(ii) Unless you're sure another set of basis functions is better, use Chebyshev polynomials.
(iii) Unless you're really, really sure that another set of basis functions is better, use Chebyshev
polynomials.
There are exceptions: on the surface of a sphere, it is more efficient to use spherical
harmonics than Chebyshev polynomials. Similarly, if the domain is infinite or semi-infinite,
it is better to use basis sets tailored to those domains than Chebyshev polynomials, which
in theory and practice are associated with a finite interval.
The general rule is: geometry chooses the basis set. The engineer never has to make a
choice. Table A-1 in Appendix A and Figure 1.6 summarize the main cases. When multiple
basis sets are listed for a single geometry or type of domain, there is little to choose between them.
It must be noted, however, that the non-Chebyshev cases in the table only strengthen
the case for our first Moral Principle. Though not quite as good as spherical harmonics,
Chebyshev polynomials in latitude and longitude work just fine on the sphere (Boyd,
1978b). The rational Chebyshev basis sets are actually just the images of the usual Chebyshev
polynomials under a change of coordinate that stretches the interval [−1, 1] into an
infinite or semi-infinite domain. Chebyshev polynomials are, as it were, almost idiot-proof.
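To make the "image under a change of coordinate" concrete, here is a minimal sketch of a rational Chebyshev function on the semi-infinite interval. The particular algebraic map x = (y − L)/(y + L), which sends [0, ∞) to [−1, 1), and the scale parameter L are assumptions chosen for illustration; other maps and scales are possible.

```python
import numpy as np

# Sketch: a rational Chebyshev function on [0, ∞) as a mapped
# Chebyshev polynomial.  Assumed map: x = (y - L)/(y + L), which
# sends y = 0 to x = -1 and y → ∞ to x → 1.
L = 2.0   # hypothetical map/scale parameter

def TL(n, y):
    x = (y - L) / (y + L)
    return np.cos(n * np.arccos(x))   # T_n(x) via the cosine identity

y = np.array([0.0, 1.0, 5.0, 50.0])
print(TL(3, y))   # T_3 evaluated along the semi-infinite interval
```

Note that TL(3, 0) = T₃(−1) = −1 and TL(3, L) = T₃(0) = 0: the boundary values of the Chebyshev polynomials are simply carried over to the mapped domain.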
Consequently, our analysis will concentrate almost exclusively upon Fourier series and
Chebyshev polynomials. Because these two basis sets are the same except for a change
of variable, the theorems for one are usually trivial generalizations of those for the other.
The formal convergence theory for Legendre polynomials is essentially the same as for
Chebyshev polynomials except for a couple of minor items noted in Chapter 2. Thus,
understanding Fourier series is the key to all spectral methods.

1.7 Boundary conditions
Normally, boundary and initial conditions are not a major complication for spectral methods.
For example, when the boundary conditions require the solution to be spatially periodic,
the sines and cosines of a Fourier series (which are the natural basis functions for
all periodic problems) automatically and individually satisfy the boundary conditions. Consequently,
our only remaining task is to choose the coefficients of the Fourier series to
minimize the residual function.
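Because every sine and cosine is individually 2π-periodic, anything built from them is periodic too, and differentiation can be done exactly (to rounding) in coefficient space. A minimal NumPy sketch, with an arbitrarily chosen band-limited test function:

```python
import numpy as np

# Every Fourier basis function is 2π-periodic, so a truncated Fourier
# series satisfies periodic boundary conditions automatically.  Demo:
# differentiate a band-limited periodic function via the FFT.
N = 32
theta = 2 * np.pi * np.arange(N) / N
u = np.sin(3 * theta) + 0.5 * np.cos(5 * theta)

k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers
du = np.fft.ifft(1j * k * np.fft.fft(u)).real  # spectral derivative

exact = 3 * np.cos(3 * theta) - 2.5 * np.sin(5 * theta)
print(np.max(np.abs(du - exact)))   # exact to rounding error
```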


Figure 1.6: Choice of basis functions. Upper left: on a periodic interval, use sines and
cosines. This case is symbolized by a ring because the dependence on an angular coordinate,
such as longitude, is always periodic. Upper right: a finite interval, which can always
be rescaled and translated to x ∈ [−1, 1]. Chebyshev or Legendre polynomials are optimal.
Lower left: semi-infinite interval x ∈ [0, ∞], symbolized by a one-sided arrow. Rational
Chebyshev functions T Ln (x) are the generic choice, but Laguerre functions are sometimes
more convenient for particular problems. Lower right: x ∈ [−∞, ∞] (double-ended arrow).
Rational Chebyshev functions T Bn (x) are the most general, but sinc and Hermite
functions are widely used, and have similar convergence properties.

For non-periodic problems, Chebyshev polynomials are the natural choice as explained
in the next chapter. They do not satisfy the appropriate boundary conditions, but it is easy
to add explicit constraints such as

N
∑ an φn (1) = α                    (1.17)
n=0

to the algebraic equations obtained from minimizing R(x; a0 , a1 , . . . , aN ) so that u(1) = α
is satisfied by the approximate solution.
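For a Chebyshev basis, the boundary constraint row is especially simple: Tn (1) = 1 for every n, so the condition on u(1) reduces to a plain sum of the coefficients. A minimal sketch (the particular coefficient values are an arbitrary illustration):

```python
import numpy as np

# Boundary row for u(1) = alpha with a Chebyshev basis: since
# T_n(1) = 1 for all n, the constraint is simply sum(a_n) = alpha.
alpha = 2.0
a = np.array([0.5, 1.0, 0.25, 0.25])   # chosen so that a.sum() == alpha

u = np.polynomial.chebyshev.Chebyshev(a)
print(u(1.0))   # equals alpha: the boundary condition is satisfied
```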

Alternatively, one may avoid explicit constraints like (1.17) by writing the solution as

u(x) ≡ v(x) + w(x) (1.18)

where w(x) is a known function chosen to satisfy the inhomogeneous boundary conditions.
The new unknown, v(x), satisfies homogeneous boundary conditions. For (1.17), for example,
w(1) = α and v(1) = 0. The advantage of homogenizing the boundary conditions is
that we may combine functions of the original basis, such as the Chebyshev polynomials,
into new basis functions that individually satisfy the homogeneous boundary conditions.
This is surprisingly easy to do; for example, to satisfy v(−1) = v(1) = 0, we expand v(x) in
terms of the basis functions

φ2n (x) ≡ T2n (x) − 1,        n = 1, 2, . . .
φ2n+1 (x) ≡ T2n+1 (x) − x,    n = 1, 2, . . .        (1.19)

where the Tn (x) are the usual Chebyshev polynomials whose properties (including boundary
values) are listed in Appendix A. This basis is complete for functions which vanish at
the ends of the interval. The reward for the switch of basis set is that it is unnecessary,
when using basis recombination, to waste rows of the discretization matrix on the boundary
conditions: all algebraic equations come from minimizing the residual of the differential
equation.
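That the recombined basis functions of (1.19) individually vanish at both endpoints follows from Tn (1) = 1 and Tn (−1) = (−1)ⁿ, and is easy to confirm numerically:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Recombined basis of Eq. (1.19): even phi_k = T_k - 1, odd phi_k = T_k - x.
# Each function should vanish at x = -1 and x = +1.
def phi(k, x):
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0
    Tk = C.chebval(x, coeffs)          # T_k(x)
    return Tk - 1.0 if k % 2 == 0 else Tk - x

for k in (2, 3, 4, 5):
    print(k, phi(k, -1.0), phi(k, 1.0))   # all zero
```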

1.8 The Two Kingdoms: Non-Interpolating and Pseudospectral Families of Methods
Spectral methods fall into two broad categories. In the same way that all of life was once
divided into the “plant” and “animal” kingdoms2 , most spectral methods may be classed
as either “interpolating” or “non-interpolating”. Of course, the biological classification
may be ambiguous: is a virus a plant or animal? How about a sulfur-eating bacterium? The
mathematical classification may be ambiguous, too, because some algorithms mix ideas
from both the interpolating and non-interpolating kingdoms. Nonetheless, the notion of
two exclusive “kingdoms” is a useful taxonomical starting point for both biology and numerical
analysis.
The “interpolating” or “pseudospectral” methods associate a grid of points with each
basis set. The coefficients of a known function f (x) are found by requiring that the truncated
series agree with f (x) at each point of the grid. Similarly, the coefficients an of a
pseudospectral approximation to the solution of a differential equation are found by requiring
that the residual function interpolate f ≡ 0:

R(xi ; a0 , a1 , . . . , aN ) = 0,    i = 0, 1, . . . , N        (1.20)

In words, the pseudospectral method demands that the differential equation be exactly
satisfied at a set of points known as the “collocation” or “interpolation” points. Presumably,
as R(x; an ) is forced to vanish at an increasingly large number of discrete points, it
will be smaller and smaller in the gaps between the collocation points so that R ≈ 0 everywhere
in the domain, and therefore uN (x) will converge to u(x) as N increases. Methods in
this kingdom of algorithms are also called “orthogonal collocation” or the “method of selected points”.
The “non-interpolating” kingdom of algorithms includes Galerkin's method and the
Lanczos tau-method. There is no grid of interpolation points. Instead, the coefficients of
a known function f (x) are computed by multiplying f (x) by a given basis function and
integrating. It is tempting to describe the difference between the two algorithmic kingdoms
as “integration” versus “interpolation”, but unfortunately this is a little simplistic. Many
older books, such as Fox and Parker (1968), show how one can use the properties of the
basis functions (recurrence relations, trigonometric identities, and such) to calculate coefficients
without explicitly performing any integrations. Even though the end product is
identical with that obtained by integration, it is a little confusing to label a calculation as an
“integration-type” spectral method when there is not an integral sign in sight! Therefore,
we shall use the blander label of “non-interpolating”.

2 Modern classification schemes use three to five kingdoms, but this doesn't change the argument.
Historically, the “non-interpolating” methods were developed first. For this reason,
the label “spectral” is sometimes used in a narrow sense as a collective tag for the “non-interpolating”
methods. In these notes, we shall use “spectral” only as a catch-all for global
expansion methods in general, but the reader should be aware of its other, narrower usage.
(Actually, there are several other uses because “spectral” has other meanings in time series
analysis and functional analysis. Ugh!)
Many spectral models of time-dependent hydrodynamics split the calculation into several
subproblems and apply different techniques to different subproblems. To continue
with our biological metaphor, the computer code becomes a little ecology of interacting
“interpolating” and “non-interpolating” parts. Each algorithm (or algorithmic kingdom)
has ecological niches where it is superior to all competition, so we must master both non-interpolating
and pseudospectral methods.
At first glance, there is no obvious relation between the pseudospectral method and the
alternatives that use weighted integrals of R(x; an ) to choose the {an }. Worse still, we now
have the further burden of choosing the interpolation points, {xi }. Fortunately, there is a
natural choice of interpolation points for each of the common basis sets. These points are
the Gaussian quadrature points for the integrals of Galerkin's method. The pseudospectral
method is therefore equivalent to the spectral method if we evaluate the integrals of the latter by
numerical quadrature with (N + 1) points. This is the reason why the interpolation-based
methods are now commonly called “pseudospectral”.
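This equivalence can be checked directly for a Chebyshev basis: interpolating f (x) at the (N + 1) Chebyshev-Gauss points yields exactly the same coefficients as approximating the Galerkin integrals by (N + 1)-point Gauss-Chebyshev quadrature. A sketch, with f(x) = eˣ as an arbitrary smooth test function:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 8
i = np.arange(N + 1)
x = np.cos((2 * i + 1) * np.pi / (2 * (N + 1)))   # Chebyshev-Gauss points
f = np.exp(x)                                     # test function values

# (a) Pseudospectral: coefficients of the interpolant through the grid values
a_interp = C.chebfit(x, f, N)

# (b) Non-interpolating: Galerkin integrals evaluated by Gauss-Chebyshev
#     quadrature at the same (N+1) points
a_quad = np.empty(N + 1)
for n in range(N + 1):
    Tn = np.cos(n * np.arccos(x))                 # T_n at the quadrature points
    a_quad[n] = (2.0 / (N + 1)) * np.sum(f * Tn)
a_quad[0] /= 2.0                                  # n = 0 normalization

print(np.max(np.abs(a_interp - a_quad)))   # agreement to rounding error
```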
Better yet, we shall show in later chapters that the accuracy of pseudospectral methods
is only a little bit poorer than that of the non-interpolating kingdom, too little to outweigh
the much greater simplicity and computational efficiency of the pseudospectral algorithms.
Consequently, we shall emphasize pseudospectral methods in this book. Nonetheless, the
justification for the pseudospectral kingdom is derived from that for the non-interpolating
methods, and the latter are still superior to interpolation for specialized but important
applications. We cannot understand the high efficiency of either kingdom of spectral algorithms
without first reviewing the theory of Fourier series (Chapter 2).

1.9 Nonlinearity
Nonlinearity is not a major complication for spectral methods per se. For expository simplicity,
we shall usually concentrate on linear algorithms, especially in explaining the basic
ideas. The extension to nonlinear problems usually only requires minor modifications.
To illustrate this, we shall do a very simple nonlinear boundary value problem here.
Such equations normally are solved by Newton's iteration. If we set up the iteration by first
linearizing the differential equation about the current iterate (the “Newton-Kantorovich”
method of Appendix C), then we solve a linear differential equation at each step.
In this example, we shall instead apply the spectral method first. The only difference
from a linear problem is that the system of algebraic equations for the coefficients is nonlinear.
It is usually irrelevant whether the Newton iteration is created before or after applying
the spectral method; take your choice!

Figure 1.7: Comparison of exact and approximate solutions: Nonlinear diffusion equation

The nonlinear boundary value problem is

uxx + α (ux )² + α u uxx = 0        (1.21)

subject to the boundary conditions that

u(0) = 0;    u(1) = 1
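Equation (1.21) can be integrated in closed form, since the left side is d/dx [uₓ + α u uₓ]; with these boundary conditions the solution is u = (√(1 + α(2+α)x) − 1)/α, which makes it a convenient test case. The sketch below applies Chebyshev collocation first and then Newton's iteration to the resulting nonlinear algebraic system, as the text suggests. The grid, degree, and finite-difference Jacobian are one illustrative set of choices, not the only ones.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Pseudospectral solve of  u_xx + alpha*(u_x)^2 + alpha*u*u_xx = 0,
# u(0) = 0, u(1) = 1, by Newton iteration on the Chebyshev coefficients.
alpha = 1.0
N = 12                                                  # polynomial degree
xc = 0.5 * (1 - np.cos(np.pi * np.arange(1, N) / N))    # interior points in (0, 1)

def residuals(a):
    u = Chebyshev(a, domain=[0, 1])
    ux, uxx = u.deriv(1), u.deriv(2)
    r_int = uxx(xc) + alpha * ux(xc) ** 2 + alpha * u(xc) * uxx(xc)
    return np.concatenate([r_int, [u(0.0), u(1.0) - 1.0]])  # + boundary rows

a = np.zeros(N + 1)
a[0], a[1] = 0.5, 0.5          # initial guess u = x (in shifted Chebyshev form)

for _ in range(12):            # Newton with a finite-difference Jacobian
    r = residuals(a)
    J = np.empty((len(r), len(a)))
    for j in range(len(a)):
        da = np.zeros_like(a)
        da[j] = 1e-7
        J[:, j] = (residuals(a + da) - r) / 1e-7
    a -= np.linalg.solve(J, r)

u = Chebyshev(a, domain=[0, 1])
exact = lambda x: (np.sqrt(1 + alpha * (2 + alpha) * x) - 1) / alpha
xs = np.linspace(0, 1, 5)
print(np.max(np.abs(u(xs) - exact(xs))))   # small: spectral accuracy
```

Whether Newton is applied before discretization (Newton-Kantorovich) or after, as here, the converged answer is the same; only the bookkeeping differs.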

