

The during-and-after theory is important, too. My thesis advisor, Richard Lindzen,
never had much interest in computation per se, and yet he taught me better than anyone
else the art of good scientific number-crunching. When he was faced with a stiff boundary
value problem, he was not too proud to run up and down the halls, knocking on doors,
until he finally learned of a good algorithm: centered differences combined with the
tridiagonal elimination described in Appendix B. This combination had been known for twenty
years, but was only rarely mentioned in texts because it was hard to prove convergence
theorems.¹ He then badgered the programming staff at the National Center for Atmospheric
Research to help him code the algorithm for the most powerful computer then available,
the CDC 7600, with explicit data swaps to and from the core.
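Appendix B gives the details of that algorithm; as a rough sketch of the idea (in modern Python rather than CDC-era FORTRAN, and not the book's own code), the tridiagonal elimination at its heart, applied to a centered-difference discretization of a simple two-point boundary value problem, looks like this:

```python
import math

def thomas(a, b, c, d):
    # Solve a tridiagonal system by forward elimination + back substitution.
    # a = sub-diagonal (a[0] unused), b = main diagonal, c = super-diagonal
    # (last entry unused), d = right-hand side. No pivoting, so the matrix
    # should be diagonally dominant, as centered-difference matrices are.
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Centered differences for u'' = -pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0,
# whose exact solution is u = sin(pi x). The stencil gives
# u[i-1] - 2 u[i] + u[i+1] = h^2 f(x[i]) at each interior point.
N = 50
h = 1.0 / N
xs = [i * h for i in range(1, N)]
u = thomas([1.0] * (N - 1), [-2.0] * (N - 1), [1.0] * (N - 1),
           [h * h * (-math.pi ** 2) * math.sin(math.pi * x) for x in xs])
err = max(abs(ui - math.sin(math.pi * x)) for ui, x in zip(u, xs))
```

The whole solve costs only O(N) operations, which is what made the combination so attractive on the machines of that era.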
A scientist who is merely good would have stopped there, but Lindzen saw from the
numerical output that equatorial waves in vertical shear satisfied the separation-of-scales
requirement of singular perturbation theory. He then wrote two purely analytical papers
to derive the perturbative approximation, and showed it agreed with his numerical cal-
culations. The analysis was very complicated — a member of the National Academy of
Sciences once described it to me, laughing, as "the most complicated damn thing I've ever
seen" — but the final answer fits on one line.
In sad contrast, I see far too many students who sit at their workstation, month after
month, trying to batter a problem into submission. They never ask for help, though Michi-
gan has one of the finest and broadest collections of arithmurgists on the planet. Nor will
they retreat to perturbation theory, asymptotic estimates, or even a little time alone in the
library.
It is all too easy to equate multiple windows with hard work, and multiple contour
plots with progress. Nevertheless, a scientist by definition is one who listens for the voice
of God. It is part of the fallen state of man that He whispers.
In order that this book may help to amplify those whispers, I have been uninhibited in
expressing my opinions. Some will be wrong; some will be soon outdated.² Nevertheless,
I hope I may be forgiven for choosing to stick my neck out rather than drown the reader in
a sea of uninformative blandness. The worst sin of a thesis advisor or a textbook writer is
to have no opinions.

[Preface to the Second Edition, January, 1999]
In revising this book ten years after, I deleted the old Chapter 11 (case studies of fluid
computations) and Appendix G (least squares) and added four new chapters on eigen-
value problems, aliasing and spectral blocking, the slow manifold and Nonlinear Galerkin
theory, and semi-Lagrangian spectral methods. All of the chapters have been updated and
most have been rewritten. Chapter 18 has several new sections on polar coordinates. Ap-
pendix E contains a new table giving the transformations of first and second derivatives
for a two-dimensional map. Appendix F has new analytical formulas for the Legendre-
Lobatto grid points up to nine-point grids, which is sufficient for most spectral element
applications.
My second book, Weakly Nonlocal Solitary Waves and Beyond-All-Orders Asymptotics (Kluwer,
1998), has two chapters that amplify themes in this volume. Chapter 8 is an expanded
version of Appendices C and D here, describing a much wider range of strategies for non-
linear algebraic equations and for initializing iterations. Chapter 9 explains how a stan-
dard infinite interval basis can be extended to approximate functions that oscillate rather
than decay to zero at infinity.
Other good books on spectral methods have appeared in recent years. These and a
selection of review articles are catalogued in Chapter 23.
¹ Alas, numerical analysis is still more proof-driven than accomplishment-driven even today.
² Surely, too, the book has typographical errors, and the reader is warned to check formulas and tables before
using them.

My original plan was to build a bibliographical database on spectral methods and ap-
plications of spectral algorithms that could be printed in full here. Alas, this dream was
overtaken by events: as the database grew past 2000 items, I was forced to limit the bib-
liography to 1025 references. Even so, this partial bibliography and the Science Citation
Index should provide the reader with ample entry points into any desired topic. The
complete database is available online at the author's homepage, currently at http://www-
personal.engin.umich.edu/~jpboyd. To paraphrase Newton, it is better to stand on the
shoulders of giants than to try to recreate what others have already done better.
Spectral elements have become an increasingly important part of the spectral world in
the last decade. However, the first edition, with but a single chapter on spectral elements,
was almost 800 pages long. (Students irreverently dubbed it the "Encyclopedia Boydica".)
So, I have reluctantly included only the original chapter on domain decomposition in this
edition. A good treatment of spectral elements in the lowbrow spirit of this book will have
to await another volume.
Perhaps it is just as well. The bibliographic explosion is merely a symptom of a field
that is still rapidly evolving. The reader is invited to use this book as a base camp for his
or her own expeditions.
The Heart of Africa has lost its mystery; the planets of Tau Ceti are currently unknown
and unreachable. Nevertheless, the rise of digital computers has given this generation its
galleons and astrolabes. The undiscovered lands exist, in one sense, only as intermittent
electric rivers in dendritic networks of copper and silicon, invisible as the soul. And yet
the mystery of scientific computing is that its new worlds over the water, wrought only of
numbers and video images, are as real as the furrowed brow of the first Cro-Magnon who
was mystified by the stars, and looked for a story.

The author™s work has been supported by the National Science Foundation through the
Physical Oceanography, Meteorology, Computational Engineering and Computational
Mathematics programs via grants OCE7909191, OCE8108530, OCE8305648, OCE8509923, OCE812300,
DMS8716766 and by the Department of Energy. My leave of absence at Harvard in 1980
was supported through grant NASA NGL-22-007-228 and the hospitality of Richard Lindzen.
My sabbatical at Rutgers was supported by the Institute for Marine and Coastal Sciences
and the hospitality of Dale Haidvogel.
I am grateful for the comments and suggestions of William Schultz, George Delic, and
the students of the course on which this book is based, especially Ahmet Selamet, Mark
Storz, Sue Haupt, Mark Schumack, Hong Ma, Beth Wingate, Laila Guessous, Natasha Flyer
and Jeff Hittinger. I thank Andreas Chaniotis for correcting a formula.
I am also appreciative of the following publishers and authors for permission to repro-
duce figures or tables. Fig. 3.3: C. A. Coulson, Valence (1973), Oxford University Press.
Fig. 7.3: H. Weyl, Symmetry (1952) [copyright renewed, 1980], Princeton University Press.
Table 9.1 and Figs. 9.1 and 9.2: D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral
Methods (1977), Society for Industrial and Applied Mathematics. Fig. 12.4: C. Canuto and
A. Quarteroni, Journal of Computational Physics (1985), Academic Press. Tables 12.2 and 12.3:
T. Z. Zang, Y. S. Wong and M. Y. Hussaini, Journal of Computational Physics (1984), Academic
Press. Fig. 13.1 and Table 13.2: J. P. Boyd, Journal of Computational Physics (1985), Academic
Press. Fig. 14.3: E. Merzbacher, Quantum Mechanics (1970), John Wiley and Sons. Figs.
14.4, 14.5, 14.7, 14.8, 14.9, 14.10, and 14.11: J. P. Boyd, Journal of Computational Physics (1987),
Academic Press. Fig. 15.1: W. D'Arcy Thompson, Growth and Form (1917), Cambridge Uni-
versity Press. Fig. D.1 (with changes): J. P. Boyd, Physica D (1986), Elsevier. Fig. D.2: E.
Wasserstrom, SIAM Review (1973), Society for Industrial and Applied Mathematics.
I thank Gene, Dale, Dave and Terry of the Technical Illustration Dept., DRDA [now
disbanded], for turning my rough graphs and schematics into camera-ready drawings.
I also would like to acknowledge a debt to Paul Bamberg of the Harvard Physics
department. His lecturing style strongly influenced mine, especially his heavy reliance on
class notes both as text and transparencies.
I thank Joel Primack, who directed my undergraduate research, for his many lessons.
One is the importance of preceding calculation with estimation. Another is the need to
write quick-and-rough reports, summary sheets and annotations for even the most prelim-
inary results. It is only too true that “otherwise, in six months all your computer output
and all your algebra will seem the work of a stranger.”
I am also thankful for Richard Goody's willingness to humour an undergraduate by
teaching him in a reading course. Our joint venture on tides in the Martian atmosphere
was scooped, but I found my calling.
I am grateful for Richard Lindzen's patient tolerance of my first experiments with
Chebyshev polynomials. His running commentary on science, scientists, and the interplay
of numerics and analysis was a treasured part of my education.


I thank Steven Orszag for accepting this manuscript for the Lecture Notes in Engi-
neering series (Springer-Verlag) where the first edition appeared. The treatment of time-
stepping methods in Chapter 10 is heavily influenced by his MIT lectures of many years
ago, and the whole book is strongly shaped by his many contributions to the field.
I am appreciative of John Grafton and the staff of Dover Press for bringing this book
back into print in an expanded and corrected form.
Lastly, I am grateful for the support of the colleagues and staff of the University of
Michigan, particularly Stan Jacobs for sharing his knowledge of nonlinear waves and per-
turbation theory, Bill Schultz for many fruitful collaborations in applying spectral methods
to mechanical engineering, and Bill Kuhn for allowing me to introduce the course on which
this book is based.
Errata and Extended-Bibliography

These may be found on the author's homepage, currently at


Errata and comments may be sent to the author at the following address:


Thank you!

Chapter 1


"I have no satisfaction in formulas unless I feel their numerical magnitude."
— Sir William Thomson, 1st Lord Kelvin (1824–1907)

"It is the increasingly pronounced tendency of modern analysis to substitute ideas for
calculation; nevertheless, there are certain branches of mathematics where calculation con-
serves its rights."
— P. G. L. Dirichlet (1805–1859)

1.1 Series expansions
Our topic is a family of methods for solving differential and integral equations. The basic
idea is to assume that the unknown u(x) can be approximated by a sum of N + 1 "basis
functions" φn(x):

u(x) ≈ uN(x) = Σ_{n=0}^{N} an φn(x)        (1.1)

When this series is substituted into the equation

Lu = f(x)        (1.2)

where L is the operator of the differential or integral equation, the result is the so-called
"residual function" defined by

R(x; a0, a1, . . . , aN) = LuN − f        (1.3)

Since the residual function R(x; an) is identically equal to zero for the exact solution,
the challenge is to choose the series coefficients {an} so that the residual function is
minimized. The different spectral and pseudospectral methods differ mainly in their
minimization strategies.
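As a concrete illustration of the expansion and residual just defined (a toy problem invented for this sketch, not one from the text), take the operator Lu = u_xx, the right-hand side f(x) = 2, and monomial basis functions φn(x) = xⁿ:

```python
# Toy illustration: L = d^2/dx^2, f(x) = 2, basis phi_n(x) = x**n.
# (Problem and basis chosen only for illustration.)

def uN(x, a):
    # u_N(x) = sum_n a_n * phi_n(x) with phi_n(x) = x**n
    return sum(an * x ** n for n, an in enumerate(a))

def LuN(x, a):
    # L u_N = second derivative of the expansion, differentiated term by term
    return sum(an * n * (n - 1) * x ** (n - 2) for n, an in enumerate(a) if n >= 2)

def residual(x, a, f):
    # R(x; a_0, ..., a_N) = L u_N - f
    return LuN(x, a) - f(x)

f = lambda x: 2.0
exact = [0.0, 0.0, 1.0]   # u(x) = x**2 satisfies u'' = 2, so R vanishes
R_exact = [residual(x, exact, f) for x in (-0.5, 0.0, 0.5)]
R_wrong = [residual(x, [0.0, 0.0, 2.0], f) for x in (-0.5, 0.0, 0.5)]
```

For the exact coefficients the residual is identically zero; for any other coefficients it is not, and each minimization strategy (collocation, Galerkin, least squares) is simply a different rule for driving R toward zero with finitely many coefficients.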


1.2 First Example
These abstract ideas can be made concrete by a simple problem. Although large problems
are usually programmed in FORTRAN and C, it is very educational to use an algebraic
manipulation language like Maple, Mathematica, Macsyma or Reduce. In what follows,
Maple statements are shown in bold face. The machine's answers have been converted
into standard mathematical notation.
The example is the linear, one-dimensional boundary value problem:

u_xx − (x⁶ + 3x²) u = 0        (1.4)

u(−1) = u(1) = 1        (1.5)

The exact solution is (Scraton, 1965)

u(x) = exp([x⁴ − 1]/4)        (1.6)

Polynomial approximations are recommended for most problems, so we shall choose a
spectral solution of this form. In order to satisfy the boundary conditions independently
of the unknown spectral coefficients, it is convenient to write the approximation as u2:=1
+ (1-x*x)*(a0 + a1*x + a2*x*x);

u2 = 1 + (1 − x²)(a0 + a1 x + a2 x²)        (1.7)

where the decision to keep only three degrees of freedom is arbitrary.
The residual for this approximation is Resid:= diff(u2,x,x) - (x**6 + 3*x**2)*u2;

R(x; a0, a1, a2) = u2,xx − (x⁶ + 3x²) u2        (1.8)

R = (2a2 − 2a0) − 6a1 x − (3 + 3a0 + 12a2) x² − 3a1 x³ + 3(a0 − a2) x⁴
    + 3a1 x⁵ + (−1 − a0 + 3a2) x⁶ − a1 x⁷ + (a0 − a2) x⁸ + a1 x⁹ + a2 x¹⁰        (1.9)

As error minimization conditions, we choose to make the residual zero at a set of points
equal in number to the undetermined coefficients in u2(x). This is called the "collocation"
or "pseudospectral" method. If we arbitrarily choose the points xi = (−1/2, 0, 1/2), this
gives the three equations: eq1:=subs(x=-1/2,Resid);
eq2:=subs(x=0,Resid); eq3:=subs(x=1/2,Resid);

eq1 = −(659/256) a0 + (1683/512) a1 − (1171/1024) a2 − 49/64
eq2 = −2 (a0 − a2)
eq3 = −(659/256) a0 − (1683/512) a1 − (1171/1024) a2 − 49/64        (1.10)
The coefficients are then determined by solving eq1 = eq2 = eq3 = 0;
solutionarray:= solve({eq1,eq2,eq3}, {a0,a1,a2}); yields

a0 = −784/3807,   a1 = 0,   a2 = a0        (1.11)
Figure 1.1 shows that this low order approximation is quite accurate.
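The same collocation solution can be reproduced in any language, not just Maple. The following Python sketch (an illustration written for this edition of the notes, not the book's code) rebuilds the three collocation equations in exact rational arithmetic and solves them; the second derivative of the trial solution is worked out by hand rather than symbolically:

```python
import math
from fractions import Fraction as F

def u2(x, a0, a1, a2):
    # trial solution 1 + (1 - x^2)(a0 + a1 x + a2 x^2); satisfies u(+-1) = 1
    return 1 + (1 - x * x) * (a0 + a1 * x + a2 * x * x)

def resid(x, a0, a1, a2):
    # u2'' computed by hand from the trial form above
    u2xx = 2 * (a2 - a0) - 6 * a1 * x - 12 * a2 * x * x
    return u2xx - (x ** 6 + 3 * x ** 2) * u2(x, a0, a1, a2)

pts = [F(-1, 2), F(0), F(1, 2)]   # the collocation points x = -1/2, 0, 1/2

# The residual is affine in (a0, a1, a2): probe it with unit vectors to get
# the 3x3 collocation matrix A and right-hand side b for A a = b.
r0 = [resid(x, 0, 0, 0) for x in pts]
A = [[resid(x, *e) - r for e in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
     for x, r in zip(pts, r0)]
b = [-r for r in r0]

def solve3(A, b):
    # Gauss-Jordan elimination on the augmented matrix, in exact fractions
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = next(r for r in range(i, 3) if M[r][i] != 0)
        M[i], M[p] = M[p], M[i]
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for r in range(3):
            if r != i and M[r][i] != 0:
                fac = M[r][i]
                M[r] = [vr - fac * vi for vr, vi in zip(M[r], M[i])]
    return [M[r][3] for r in range(3)]

a0, a1, a2 = solve3(A, b)   # a0 = a2 = -784/3807, a1 = 0, as in Eq. (1.11)

# Error of the three-coefficient approximation against the exact solution
err = abs(float(u2(F(1, 2), a0, a1, a2)) - math.exp((F(1, 2) ** 4 - 1) / 4))
```

The exact-arithmetic solve recovers the rational coefficients of Eq. (1.11), and the pointwise error at x = 1/2 is a couple of percent, consistent with the accuracy visible in Figure 1.1.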
However, the example raises a whole host of questions including:

