

We will take the approximate solution u2(x) to be a quadratic polynomial. The most
general quadratic polynomial which satisfies the boundary conditions is

u2 = x + a2 (x^2 - x)                                            (1.23)

Since there is only one undetermined coefficient a2, only a single collocation point is needed.
The obvious choice, the midpoint of the interval, is best.
The residual function is

R(x; a2) = a2^2 α [6x^2 - 6x + 1] + 2 a2 [3αx + 1 - α] + α       (1.24)

The condition that R(x = 1/2; a2 ) = 0 then gives the quadratic equation

a2^2 α [-1/2] + 2 a2 [α/2 + 1] + α = 0                           (1.25)

We note an amusing fact: although pseudospectral methods are usually considered only
as numerical techniques, we have in fact obtained an analytical solution to this nonlinear
problem. To see how accurate it is, let us specialize to α = 1, for which the exact solution is

u(x; α = 1) = -1 + (1 + 3x)^{1/2}                                (1.26)

There are two roots to the quadratic, of course, but one gives an unphysical heat flux
towards the boundary source at x = 1, so it can be rejected.3

3 The ambiguity of multiple solutions is a difficulty raised by the nonlinearity of the differential equation, not
by the method used to solve it. All algorithms for solving nonlinear boundary value problems have the drawback
that the algebraic equations that are the discretization of the differential equation have multiple solutions. Most
are unphysical and must be rejected on various grounds, including (i) imaginary parts, (ii) unrealistic behavior
such as the heat flux for this example, or (iii) failure to converge as N is varied.

The other gives the approximate solution

u2(x; α = 1) = x - 0.317 (x^2 - x)                               (1.27)

Fig. 1.7 compares the exact and approximate solutions. The maximum of u(x) is 1.00;
the maximum absolute error of the 1-point pseudospectral solution is only 0.014. The figure
shows that even though the functional forms of (1.26) and (1.27) bear no obvious resemblance,
the two graphs differ so little that it is hard to tell them apart.
In real-life problems, of course, the exact solution is not known, but the accuracy of
an approximate solution can be tested by repeating the calculation with higher N. This
problem is particularly difficult because it is nonlinear, so for all N we will invariably
be left with a nonlinear algebraic equation or set of equations to determine the solution.
However, these can be easily solved by Newton's method since the lowest approximation,
obtained analytically, is sufficiently close to the exact solution to furnish a good first guess
for the iteration. One of the great virtues of the pseudospectral method is the ease with
which it can be applied to nonlinear differential equations.
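The Newton iteration just described is easy to make concrete. The sketch below (our own illustration, not from the text; the starting guess and tolerance are arbitrary choices) solves the collocation condition (1.25) for α = 1 by Newton's method and then measures the error of u2 against the exact solution (1.26):

```python
# Sketch (our own illustration): Newton's method on the collocation
# condition (1.25) for alpha = 1, starting from the crude guess a2 = 0,
# followed by an error check of u2 against the exact solution (1.26).
import math

alpha = 1.0

def F(a2):
    # Left side of (1.25): a2^2*alpha*(-1/2) + 2*a2*(alpha/2 + 1) + alpha
    return -0.5 * alpha * a2**2 + 2.0 * a2 * (alpha / 2.0 + 1.0) + alpha

def dF(a2):
    return -alpha * a2 + 2.0 * (alpha / 2.0 + 1.0)

a2 = 0.0
for _ in range(50):                 # Newton iteration
    step = F(a2) / dF(a2)
    a2 -= step
    if abs(step) < 1e-14:
        break

u2 = lambda x: x + a2 * (x * x - x)                   # Eq. (1.23)
u_exact = lambda x: -1.0 + math.sqrt(1.0 + 3.0 * x)   # Eq. (1.26)
max_err = max(abs(u2(k / 1000.0) - u_exact(k / 1000.0)) for k in range(1001))

print(a2, max_err)   # a2 ~ -0.317, max error ~ 0.014, as quoted above
```

Starting from a2 = 0, the iteration converges to the physical root a2 = 3 - sqrt(11) ≈ -0.317 in a handful of steps; the spurious root lies far away and is never approached.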

1.10 Time-dependent problems
Although it is possible to treat the time coordinate spectrally, and we shall describe some
special cases and special algorithms where this has been done, it is generally most efficient
to apply spectral methods only to the spatial dependence. The reason is that the time
dependence can be marched forward, from one time level to another. Marching is much
cheaper than computing the solution simultaneously over all space-time.
A space-only spectral discretization reduces the original partial differential equation
to a set of ordinary differential equations in time, which can then be integrated by one's
favorite Runge-Kutta or other ODE time-marching scheme. (This approach, of discretizing
one or more coordinates to generate a system of ODEs in the remaining coordinate, is
sometimes called the "method of lines", especially in the Russian literature.)
As an illustration, consider the following generalized diffusion problem:

u_t = u_xx - 2q cos(2x) u                                        (1.28)

with the boundary conditions that the solution must be periodic with a period of 2π. The
exact general solution is
u(x, t) = Σ_{n=0}^∞ a_n(0) exp(-λ_n t) ce_n(x) + Σ_{n=1}^∞ b_n(0) exp(-µ_n t) se_n(x)    (1.29)

where the ce_n(x) and se_n(x) are transcendental functions known as Mathieu functions and
the λ_n and µ_n are the corresponding eigenvalues. The coefficients a_n(0) and b_n(0) are the
values of the coefficients of the Mathieu function series for u(x) at t = 0. As for a Fourier
series, they can be calculated via

a_n(0) = (u[x, t = 0], ce_n) / (ce_n, ce_n)                      (1.30)

b_n(0) = (u[x, t = 0], se_n) / (se_n, se_n)                      (1.31)


(f, g) ≡ ∫_0^{2π} f(x) g(x) dx        ["inner product"]          (1.32)

In the next chapter, we will discuss "inner products"; the ce_n(x) and se_n(x) are computed
using "sideband truncation" in Chapter 19.
As a numerical example, take
u(x, t = 0) ≡ 1 (1.33)
and employ two-point collocation with the basis functions
u2(x) = a0(t) + a2(t) cos(2x)                                    (1.34)
and the collocation or interpolation points
x0 = 0;    x1 = π/3                                              (1.35)
The reasons for omitting cos(x) and any and all sine functions are discussed in the chapter
on parity and symmetry (Chapter 8). The choice of collocation points is standard for a
periodic interval as explained in Chapter 4.
The residual function R(x; a0, a2) is
R(x; a0, a2) = -{ [2q cos(2x) a0 + a0,t] + cos(2x) [ (4 + 2q cos(2x)) a2 + a2,t ] }    (1.36)
The collocation conditions that (i) R(x = 0; a0 , a2 ) = 0 and (ii) R(x = π/3; a0 , a2 ) = 0
give two coupled, ordinary differential equations in time that determine a0 (t) and a2 (t):
a0,t + a2,t + 2q a0 + (4 + 2q) a2 = 0                            (1.37)

a0,t - (1/2) a2,t - q a0 - (1/2)(4 - q) a2 = 0                   (1.38)
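As a quick consistency check (our own sketch, not part of the text), one can evaluate the residual (1.36) at the two collocation points for arbitrary coefficient values and confirm that it reproduces the left-hand sides of the two collocation equations:

```python
# Consistency check (our own sketch): evaluating the residual (1.36) at the
# collocation points x = 0 and x = pi/3 should reproduce the left-hand sides
# of (1.37) and (1.38) for any values of the coefficients and their time
# derivatives (written here as a0t, a2t).
import math

def residual(x, a0, a2, a0t, a2t, q):
    c = math.cos(2.0 * x)
    return -((2.0 * q * c * a0 + a0t) + c * ((4.0 + 2.0 * q * c) * a2 + a2t))

# Arbitrary test values
a0, a2, a0t, a2t, q = 0.7, -1.3, 0.2, 0.5, 1.0

eq137 = a0t + a2t + 2 * q * a0 + (4 + 2 * q) * a2        # left side of (1.37)
eq138 = a0t - 0.5 * a2t - q * a0 - 0.5 * (4 - q) * a2    # left side of (1.38)

print(residual(0.0, a0, a2, a0t, a2t, q) + eq137)          # ~0
print(residual(math.pi / 3, a0, a2, a0t, a2t, q) + eq138)  # ~0
```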
Solving these is straightforward; for the special case q = 1,
u2(x) = {0.95 - 0.37 cos(2x)} exp[0.54t] + {0.05 + 0.37 cos(2x)} exp[-5.54t]    (1.39)
The corresponding exact solution is
u(x, t) = {0.916 - 0.404 cos(2x) + 0.031 cos(4x) - ...} exp[0.454t]
        + {0.091 + 0.339 cos(2x) - 0.030 cos(4x) + ...} exp[-4.370t] + ...      (1.40)
Comparing the two solutions, we see that the low-order collocation approximation is
at least qualitatively correct. It predicts that one mode will grow with time while the rest
decay; the growth rate of the growing mode is about 20% too large. The dominant Fourier
coefficients of the growing mode are fairly close (0.95 versus 0.916, -0.37 versus
-0.404), while the coefficients of higher degree cosines (cos[4x], cos[6x], etc.), which are
completely neglected in this approximation, have amplitudes of 0.03 or less.
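To make the method of lines concrete, the sketch below (our own illustration; the time step, final time, and the explicit form of the system, obtained by eliminating between the two collocation equations for q = 1, are our choices) marches the coefficient ODEs with a classical fourth-order Runge-Kutta step and confirms that one mode grows while the other decays:

```python
# Method-of-lines sketch (our own illustration): for q = 1, eliminating
# between the two collocation equations gives the explicit system
#     a0_t = -a2,    a2_t = -2*a0 - 5*a2,
# which we march with a classical fourth-order Runge-Kutta step.

def rhs(a0, a2):
    return -a2, -2.0 * a0 - 5.0 * a2

def rk4_step(a0, a2, dt):
    k1 = rhs(a0, a2)
    k2 = rhs(a0 + 0.5 * dt * k1[0], a2 + 0.5 * dt * k1[1])
    k3 = rhs(a0 + 0.5 * dt * k2[0], a2 + 0.5 * dt * k2[1])
    k4 = rhs(a0 + dt * k3[0], a2 + dt * k3[1])
    return (a0 + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            a2 + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

a0, a2 = 1.0, 0.0          # initial condition u(x, 0) = 1
dt = 0.01
for _ in range(500):       # march to t = 5
    a0, a2 = rk4_step(a0, a2, dt)

# By t = 5 the decaying mode has died out: a0 has grown and a2 is negative,
# matching the sign pattern {a0 > 0, a2 < 0} of the growing mode in (1.39).
print(a0, a2)
```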
This example is typical of many time-dependent problems we shall solve: the pseudospectral
method is applied to the spatial dependence to reduce the problem to a set of
coupled ordinary differential equations in time. The ODEs in time will often be nonlinear,
however, and it is usually easier to integrate them through finite differences in time even
when a (complicated!) analytic solution is possible.

1.11 FAQ: Frequently Asked Questions
1. Are spectral methods harder to program than finite difference or finite element methods?
Sometimes. However, our first example took just six Maple statements. Spectral
methods are only a little more difficult to program than finite differences.

2. Is the high, many-decimal place accuracy of spectral methods even needed in the real
world of engineering?
Sometimes. I was called in as a consultant by KMS Fusion because they needed to
model the flows around a pellet of frozen deuterium to about five decimal places.
Small imperfections in the spherical shape, on the order of 1%, drastically altered
nuclear fusion when the pellet was hit with high intensity laser beams. A two or
three decimal place solution would not necessarily have revealed anything about the
role of the bumps because the numerical errors of such crude solutions would be
comparable with the size of the bumps themselves.
Long-term hydrodynamic integrations and transition-to-turbulence calculations are often
wrecked by computational instability. Common strategies for preserving stability include
(i) adding lots of dissipation and (ii) energy-conserving difference or finite element
schemes. However, both strategies can greatly distort the solution. A highly accurate
solution should not need strong artificial damping or explicit imposition of energy
conservation. Spectral solutions are often stable even without damping or imposed
energy conservation.

3. Are spectral methods useful only when high accuracy is needed?
No, because spectral methods also are "memory-minimizing". In three dimensions,
one can typically resolve a flow crudely, to within 1% accuracy or so, using only 1/8
as many degrees of freedom as needed by a second or fourth order finite difference
method; halving the resolution in each of three dimensions cuts the number of grid
points by a factor of (1/2)^3 = 1/8.

4. Are spectral methods useful for flows with shock waves or fronts?
Yes. It's true that spectral methods for shock problems do not have the
sophistication of some low order finite difference, finite volume and finite element
codes that have been tailored to shock flows. However, much progress has been
made in adapting these ideas to spectral methods.

1.12 The Chrysalis
In numerical analysis, many computations, even in the most sophisticated models, are still
performed using the same second order differences employed by Lewis Richardson in
1910, and Sir Richard Southwell and his group in the 1930s. There are some good reasons
for this conservatism. When computer modelling attacks new challenges, such as
shocks or the complicated geometry of ocean basins or auto frames, it is only sensible to
begin by applying and refining low order methods first. The challenging task of customizing
old algorithms to new vector and parallel hardware has also (sensibly) begun with
simple differences and elements. Lastly, for weather forecasting and many other species of
models, the physics is so complicated (photochemistry, radiative transfer, cloud physics,
topographic effects, air-sea interaction, and finite resolution of observations) that purely
numerical errors are a low priority.
Nevertheless, high order models displaced second order codes for operational forecasting
in the 1970s, and seem likely to do the same in other engineering and science fields in
the twenty-first century. Even when the physics is complicated, there is no excuse for poor
numerics. A rusted muffler is no excuse for failing to change the oil. Another reason is that
we can, with high order algorithms, explore numerical frontiers previously unreachable.
The Space Shuttle has a much greater reach than a clipper ship. Too much of numerical
modelling is still in the Age of Sail.

A final reason is that low order methods are like the chrysalis of a butterfly. As shown
later, inside every low order program is a high order algorithm waiting to burst free. Given
a second order finite difference or finite element boundary-value solver, one can promote
the code to spectral accuracy merely by appending a single subroutine to spectrally evaluate
the residual, and then calling the boundary value solver repeatedly with the spectral
residual as the forcing function. Similarly, the structure and logic of an initial value solver is
very much the same for both low and high order methods. The central question is simply:
Will one approximate the spatial derivatives badly or well?
Chapter 2

Chebyshev & Fourier Series

"Fourier's Theorem is not only one of the most beautiful results of modern analysis, but
it may be said to furnish an indispensable instrument in the treatment of nearly every
recondite question in modern physics."
-- William Thomson & P. G. Tait (1874)

2.1 Introduction
The total error in solving differential equations is the sum of several contributions which
are defined below. These errors are distinct from the spectral coefficients {a_n}, which in
turn are not the same as the terms in the series, which are coefficients multiplied by a
basis function. Our first major theme is that all these quantities, though distinct, have
the property of decaying to zero with increasing N at the same qualitative rate, usually
exponentially fast in N.
Our second theoretical keystone is Darboux's Principle. This states that the convergence
of a spectral series for u(x) is controlled by the singularities of u(x), where "singularity" is
a catch-all for any point in the complex x-plane where u(x) ceases to be analytic in the
sense of complex variable theory. Square roots, logarithms, simple poles, step function
discontinuities, and infinities or abrupt discontinuities in any of the derivatives of u(x) at a
point are all "singularities".
The reason that Darboux's Principle is a keystone is that it implies that two functions
which have convergence-limiting singularities in the same place, of the same strength and
type, will have spectral series whose coefficients a_n asymptote to the same values as n → ∞.
This justifies the "Method of Model Functions": we can understand a lot about the
success and failure of spectral methods by first understanding the spectral series of simple,
explicit model functions with various types of logarithms, poles, and jumps.
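The Method of Model Functions is easy to try numerically. The sketch below (our own illustration; the two model functions are our choices) compares the Fourier coefficients of the sawtooth f(x) = x, whose jump at x = ±π forces the slow algebraic decay |b_n| = 2/n, with those of the smooth periodic function g(x) = 1/(2 - cos x), whose complex-plane poles give geometric decay proportional to (2 - sqrt(3))^n:

```python
# Method-of-Model-Functions sketch (our own illustration): algebraic versus
# geometric decay of Fourier coefficients, computed by the trapezoidal rule.
import math

N = 4096   # trapezoidal rule is spectrally accurate for smooth periodic integrands

def fourier_coeff(f, n, kind):
    # a_n or b_n = (1/pi) * integral over [-pi, pi] of f(x)*cos(nx) or f(x)*sin(nx)
    trig = math.cos if kind == "a" else math.sin
    h = 2.0 * math.pi / N
    return (h / math.pi) * sum(
        f(-math.pi + k * h) * trig(n * (-math.pi + k * h)) for k in range(N))

saw = lambda x: x                            # jump at x = +/- pi: |b_n| = 2/n
smooth = lambda x: 1.0 / (2.0 - math.cos(x))  # poles off the real axis

print([abs(fourier_coeff(saw, n, "b")) for n in (4, 8)])     # ~[0.5, 0.25]
print([abs(fourier_coeff(smooth, n, "a")) for n in (4, 8)])
```

Doubling n merely halves the sawtooth coefficients, while the coefficients of the smooth function drop by more than two orders of magnitude, the signature of geometric convergence.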
The third keystone is that from Darboux's Principle, and limited knowledge about a
function, such as whether it is or is not pathological on the solution domain, we can predict
rates of convergence for spectral series and spectral approximations to differential equa-
tions. Several qualitatively different rates are possible: algebraic, geometric, subgeometric,
and supergeometric.
The fourth keystone is that from model functions and Darboux™s Principle, we can de-
velop some rules-of-thumb that allow us to qualitatively estimate a priori how many de-
grees of freedom N are needed to resolve a given physical phenomenon. These heuristics


are useful to identify both errors and unexpected physics, and also to answer the question:
Is a given calculation feasible on my machine?
We will return to each of these four key themes in the middle of the chapter, though not
in the same order as above. First, though, a brief review of Fourier series.

2.2 Fourier series
The Fourier series of a general function f (x) is
f(x) = a0 + Σ_{n=1}^∞ a_n cos(nx) + Σ_{n=1}^∞ b_n sin(nx)

where the coef¬cients are
a0 = (1/2π) ∫_{-π}^{π} f(x) dx
a_n = (1/π) ∫_{-π}^{π} f(x) cos(nx) dx
b_n = (1/π) ∫_{-π}^{π} f(x) sin(nx) dx
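The coefficient formulas can be checked numerically (a sketch of our own): build a short trigonometric polynomial whose coefficients are known, then recover them by approximating each integral with the trapezoidal rule, which is exact here because the integrands are trigonometric polynomials of low degree:

```python
# Sketch (our own): recover known Fourier coefficients from the integral
# formulas, with each integral approximated by the trapezoidal rule on
# equispaced points covering one full period.
import math

f = lambda x: 3.0 + 2.0 * math.cos(x) - math.cos(3 * x) + 4.0 * math.sin(2 * x)

N = 512
h = 2.0 * math.pi / N
xs = [-math.pi + k * h for k in range(N)]

a0 = (1.0 / (2.0 * math.pi)) * h * sum(f(x) for x in xs)
a = [(1.0 / math.pi) * h * sum(f(x) * math.cos(n * x) for x in xs) for n in range(1, 5)]
b = [(1.0 / math.pi) * h * sum(f(x) * math.sin(n * x) for x in xs) for n in range(1, 5)]

# Recovers a0 = 3, a_1 = 2, a_3 = -1, b_2 = 4; all other coefficients ~ 0.
print(a0, a, b)
```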

First note: because the sines and cosines are periodic with a period of 2 π, we can also

