


1. What is an optimum choice of basis functions?
2. Why choose “collocation” as the residual-minimizing condition?
3. What are the optimum collocation points?
4. Why is a1 zero? Could we have anticipated this, and used a trial solution with just
two degrees of freedom for the same answer?
5. How do we solve the algebraic problem for the coefficients when the Maple “solve”
function isn't available?
The answer to the first question is that choosing powers of x as a basis is actually rather
dangerous unless N, the number of degrees of freedom, is small or the calculations are
being done in exact arithmetic, as is true for the Maple solution here. In the next section, we
describe the good choices. In an algebraic manipulation language, different rules apply, as
explained in Chapter 20.
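The danger is easy to see in floating-point arithmetic: the interpolation matrix for the monomial basis 1, x, x², ... (a Vandermonde matrix) becomes catastrophically ill-conditioned as N grows. A minimal numerical illustration (a Python/NumPy sketch, not part of the original text):

```python
import numpy as np

# Condition number of the Vandermonde (monomial-basis) interpolation matrix
# on equispaced points in [-1, 1]: it grows roughly exponentially with N,
# which is why powers of x are a dangerous basis in floating point.
for N in (5, 10, 20):
    x = np.linspace(-1.0, 1.0, N + 1)
    V = np.vander(x, increasing=True)   # columns are 1, x, x^2, ..., x^N
    print(N, np.linalg.cond(V))
```

Even at N = 20 the condition number has grown so large that many digits are lost in solving the interpolation equations, whereas exact arithmetic (as in Maple) is immune.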
The second answer is: Collocation is the simplest choice which is guaranteed to work,
and if done right, nothing else is superior. To understand why, however, we shall have to
understand both the standard theory of Fourier and Chebyshev series and Galerkin meth-
ods (Chapters 2 and 3) and the theory of interpolation and cardinal functions (Chapters 4
and 5).
The third answer is: once the basis set has been chosen, there are only two optimal sets
of interpolation points for each basis (the Gauss-Chebyshev points and the Gauss-Lobatto
points); both are given by elementary formulas in Appendices A and F, and which one is
used is strictly a matter of convenience.
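Both point sets have simple closed forms; as a sketch (the standard formulas for the Chebyshev basis; the appendices give the definitive versions):

```python
import numpy as np

N = 5
# Gauss-Chebyshev ("roots") points: the N zeros of the Chebyshev polynomial T_N(x)
roots = np.cos(np.pi * (2*np.arange(N) + 1) / (2*N))
# Gauss-Lobatto ("extrema-plus-endpoints") points: N+1 points, including x = +1 and x = -1
lobatto = np.cos(np.pi * np.arange(N + 1) / N)
print(roots)
print(lobatto)
```

The Lobatto set includes the endpoints, which is convenient for imposing boundary conditions; otherwise the choice between the two is, as stated above, a matter of convenience.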
The fourth answer is: Yes, the irrelevance of a1 could have been anticipated. Indeed, one
can show that for this problem, all the odd powers of x have zero coefficients. Symmetries
of various kinds are extremely important in practical applications (Chapter 8).

Figure 1.1: Left panel: Exact solution u = exp([x^4 - 1]/4) (solid) is compared with the
three-coefficient numerical approximation (circles). Right panel: the error u - u2.

Table 1.1: Maple program to solve linear boundary-value problem

u2:=1 + (1-x*x)*(a0 + a1*x + a2*x*x);
Resid:= diff(u2,x,x) - (x**6 + 3*x**2)*u2;
eq1:=subs(x=-1/2,Resid); eq2:=subs(x=0,Resid); eq3:=subs(x=1/2,Resid);
solutionarray:=solve({eq1,eq2,eq3},{a0,a1,a2});

The fifth answer is: the algebraic equations can be written (for a linear differential equation) as a matrix equation, which can then be solved by library software in FORTRAN or C.
Many other questions will be asked and answered in later chapters. However, some
things are already clear.
First, the method is not necessarily harder to program than finite difference or finite
element algorithms. In Maple, the complete solution of the ODE/BVP takes just five lines
(Table 1.1)!
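For readers without Maple, the same three-coefficient collocation calculation reduces to a 3x3 linear system, since the residual is linear in a0, a1, a2. A Python sketch consistent with the problem above (an illustration, not the book's own code; the collocation points x = -1/2, 0, 1/2 are those of Table 1.1):

```python
import numpy as np

def q(x):                      # the ODE is u'' = q(x) u  with  q(x) = x^6 + 3 x^2
    return x**6 + 3*x**2

# Trial solution: u2 = 1 + (1 - x^2)(a0 + a1 x + a2 x^2), so
# u2'' = -2 a0 - 6x a1 + (2 - 12 x^2) a2.
# Setting the residual u2'' - q(x) u2 to zero at three points gives A a = b.
pts = np.array([-0.5, 0.0, 0.5])
A = np.empty((3, 3))
for i, x in enumerate(pts):
    A[i, 0] = -2.0 - q(x)*(1 - x**2)                  # coefficient of a0
    A[i, 1] = -6.0*x - q(x)*(x - x**3)                # coefficient of a1
    A[i, 2] = (2.0 - 12.0*x**2) - q(x)*(x**2 - x**4)  # coefficient of a2
b = q(pts)                     # constant part of u2 moved to the right-hand side
a0, a1, a2 = np.linalg.solve(A, b)
print(a0, a1, a2)              # a1 comes out zero (to roundoff), as discussed above
```

Evaluating u2(0) = 1 + a0 and comparing with the exact u(0) = exp(-1/4) reproduces the error of a few hundredths plotted in Figure 1.1.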
Second, spectral methods are not purely numerical. When N is sufficiently small,
Chebyshev and Fourier methods yield an analytic answer.

1.3 Comparison with finite element methods
Finite element methods are similar in philosophy to spectral algorithms; the major difference is that finite elements chop the interval in x into a number of sub-intervals, and choose the φn(x) to be local functions which are polynomials of fixed degree and which are non-zero only over a couple of sub-intervals. In contrast, spectral methods use global basis functions in which φn(x) is a polynomial (or trigonometric polynomial) of high degree which is non-zero, except at isolated points, over the entire computational domain.
When more accuracy is needed, the finite element method has three different strategies.
The first is to subdivide each element so as to improve resolution uniformly over the whole
domain. This is usually called “h-refinement” because h is the common symbol for the size
or average size of a subdomain (Figure 1.2).
The second alternative is to subdivide only in regions of steep gradients where high
resolution is needed. This is “r-refinement”.
The third option is to keep the subdomains fixed while increasing p, the degree of the polynomials in each subdomain. This strategy of “p-refinement” is precisely that employed by spectral methods. Finite element codes which can quickly change p are far from universal, but those that can are sometimes called “p-type” finite elements.
Finite elements have two advantages. First, they convert differential equations into
matrix equations that are sparse because only a handful of basis functions are non-zero in
a given sub-interval. (“Sparse” matrices are discussed in Appendix B; suffice it to say that
“sparse” matrix equations can be solved in a fraction of the cost of problems of similar
size with “full” matrices.) Second, in multi-dimensional problems, the little sub-intervals
become little triangles or tetrahedra which can be fitted to irregularly-shaped bodies like
the shell of an automobile. Their disadvantage is low accuracy (for a given number of
degrees of freedom N) because each basis function is a polynomial of low degree.
Spectral methods generate algebraic equations with full matrices, but in compensation,
the high order of the basis functions gives high accuracy for a given N. When fast iterative
matrix solvers are used, spectral methods can be much more efficient than finite element
or finite difference methods for many classes of problems. However, they are most useful
when the geometry of the problem is fairly smooth and regular.
So-called “spectral element” methods gain the best of both worlds by hybridizing spectral and finite element methods. The domain is subdivided into elements, just as in finite elements, to gain the flexibility and matrix sparsity of finite elements. At the same time, the degree of the polynomial p in each subdomain is sufficiently high to retain the high accuracy and low storage of spectral methods. (Typically, p = 6 to 8, but spectral element codes are almost always written so that p is an arbitrary, user-choosable parameter.)
It turns out that most of the theory for spectral elements is the same as for global spectral methods, that is, algorithms in which a single expansion is used everywhere. Consequently, we shall concentrate on spectral methods in the early going. The final chapter will describe how to match expansions in multiple subdomains.
Low order finite elements can be derived, justified and implemented without knowledge of Fourier or Chebyshev convergence theory. However, as the order is increased, it turns out that ad hoc schemes become increasingly ill-conditioned and ill-behaved. The only practical way to implement “nice” high order finite elements, where “high order” generally means sixth or higher order, is to use the technology of spectral methods.
Similarly, it turns out that the easiest way to match spectral expansions across subdomain walls is to use the variational formalism of finite elements.
Thus, it really doesn't make much sense to ask: Are finite elements or spectral methods
better? For sixth or higher order, they are essentially the same. The big issue is: Does one
need high order, or is second or fourth order sufficient?


Figure 1.2: Schematic of three types of finite elements: smaller h (h-refinement); subdivision only where high resolution is needed (r-refinement); higher degree p (p-refinement)

1.4 Comparisons with Finite Difference Method: Why Spectral Methods are Accurate and Memory-Minimizing
Finite difference methods approximate the unknown u(x) by a sequence of overlapping
polynomials which interpolate u(x) at a set of grid points. The derivative of the local
interpolant is used to approximate the derivative of u(x). The result takes the form of a
weighted sum of the values of u(x) at the interpolation points.

[Three grids: Spectral: one high-order polynomial for the WHOLE domain. Finite Difference: multiple overlapping low-order polynomials. Finite Element/Spectral Element: non-overlapping polynomials, one per subdomain.]

Figure 1.3: Three types of numerical algorithms. The thin, slanting lines illustrate all the
grid points (black circles) that directly affect the estimates of derivatives at the points shown
above the lines by open circles. The thick black vertical lines in the bottom grid are the
subdomain walls.

The most accurate scheme is to center the interpolating polynomial on the grid point
where the derivative is needed. Quadratic, three-point interpolation and quartic, five-point
interpolation give
df/dx ≈ [f(x + h) - f(x - h)]/(2h) + O(h^2)    (1.12)

df/dx ≈ [-f(x + 2h) + 8f(x + h) - 8f(x - h) + f(x - 2h)]/(12h) + O(h^4)    (1.13)
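The orders claimed in (1.12) and (1.13) are easy to check numerically: halving h should reduce the error by roughly a factor of 4 for the three-point formula and roughly 16 for the five-point formula. A Python sketch, using f = sin as a convenient test function (not from the original text):

```python
import numpy as np

f, dfdx = np.sin, np.cos           # test function and its exact derivative
x0 = 0.7                           # an arbitrary evaluation point

def d1_2nd(g, x, h):               # 3-point centered difference, eq. (1.12), O(h^2)
    return (g(x + h) - g(x - h)) / (2*h)

def d1_4th(g, x, h):               # 5-point centered difference, eq. (1.13), O(h^4)
    return (-g(x + 2*h) + 8*g(x + h) - 8*g(x - h) + g(x - 2*h)) / (12*h)

for h in (0.1, 0.05):
    e2 = abs(d1_2nd(f, x0, h) - dfdx(x0))
    e4 = abs(d1_4th(f, x0, h) - dfdx(x0))
    print(h, e2, e4)
```

The printed errors shrink by about 4x and 16x when h is halved, confirming the O(h^2) and O(h^4) labels.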

Figure 1.4: Weights wj in the approximation df/dx|x=x0 ≈ Σj wj f(x0 + jh), where x0 = π
and h = π/5. In each group, the Fourier weights are the open, leftmost bars. Middle, cross-hatched bars (j = ±1, ±2 only): Fourth-order differences. Rightmost, solid bars (j = ±1
only): weights for second order differences.

The function O( ), the “Landau gauge symbol”, denotes that in order-of-magnitude, the
errors are proportional to h^2 and h^4, respectively.
Since the pseudospectral method is based on evaluating the residual function only at
the “selected points”, {xi}, we can take the grid point values of the approximate solution,
the set {uN(xi)}, as the unknowns instead of the series coefficients. Given the value of
a function at (N + 1) points, we can compute the (N + 1) series coefficients {an} through
polynomial or trigonometric interpolation. Indeed, this symbolic equation

series coefficients {an} ⇐⇒ grid point values {uN(xi)}    (1.14)

is one of the most important themes we will develop in this course, though the mechanics
of interpolation will be deferred to Chapters 4 and 5.
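The two-way arrow in (1.14) can be demonstrated with NumPy's Chebyshev utilities (an illustrative round-trip sketch; the mechanics are deferred, as stated, to Chapters 4 and 5):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Round trip for (1.14): values at N+1 Chebyshev (Gauss-Lobatto) points
# determine the N+1 series coefficients {a_n}, and vice versa.
N = 10
x = np.cos(np.pi * np.arange(N + 1) / N)   # Gauss-Lobatto points
u = np.exp((x**4 - 1) / 4)                 # grid values of the example solution
a = C.chebfit(x, u, N)                     # coefficients from grid values
u_back = C.chebval(x, a)                   # grid values recovered from coefficients
print(np.max(np.abs(u - u_back)))          # round-trip error is at roundoff level
```

With (N + 1) points and (N + 1) coefficients the fit is exact interpolation, so either set of numbers carries the same information.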
Similarly, the finite element and spectral element algorithms approximate derivatives
as a weighted sum of grid point values. However, only those points which lie within a
given subdomain contribute directly to the derivative approximations in that subdomain.
(Because the solution in one subdomain is matched to that in the other subdomain, there
is an indirect connection between derivatives at a point and the whole solution, as is true of
finite differences, too.) Figure 1.3 compares the regions of direct dependency in derivative
formulas for the three families of algorithms.
Figs. 1.4 and 1.5 compare the weights of each point in the second and fourth-order finite difference approximations with the N = 10 Fourier pseudospectral weights. Since the basis functions can be differentiated analytically and since each spectral coefficient an is determined by all the grid point values of u(x), it follows that the pseudospectral differentiation rules are not 3-point formulas, like second-order finite differences, or even 5-point formulas, like the fourth-order expressions; rather, the pseudospectral rules are N-point formulas. To equal the accuracy of the pseudospectral procedure for N = 10, one would need a tenth-order finite difference or finite element method with an error of O(h^10).
As N is increased, the pseudospectral method benefits in two ways. First, the interval h

Figure 1.5: Same as previous figure except for the second derivative. Hollow bars: pseudospectral. Cross-hatched bars: Fourth-order differences. Solid bars: Second-order differences.
between grid points becomes smaller; this would cause the error to rapidly decrease even
if the order of the method were fixed. Unlike finite difference and finite element methods,
however, the order is not fixed. When N increases from 10 to 20, the error becomes O(h^20)
in terms of the new, smaller h. Since h is O(1/N), we have
Pseudospectral error ≈ O[(1/N)^N]    (1.15)
The error is decreasing faster than any finite power of N because the power in the error
formula is always increasing, too. This is “infinite order” or “exponential” convergence.1
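The behavior summarized by (1.15) is easy to observe numerically. A sketch (not from the text) using Chebyshev interpolation of the example solution u = exp([x^4 - 1]/4) from Figure 1.1:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

u = lambda x: np.exp((x**4 - 1) / 4)   # example solution from Figure 1.1
xf = np.linspace(-1, 1, 1001)          # fine grid for measuring the error

# Maximum interpolation error for increasing N: each doubling of N
# multiplies the *order* of the method, not just the resolution.
for N in (4, 8, 16):
    xc = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev-Lobatto points
    a = C.chebfit(xc, u(xc), N)
    err = np.max(np.abs(u(xf) - C.chebval(xf, a)))
    print(N, err)
```

The error falls by orders of magnitude with each doubling of N, reaching near machine precision by N = 16, rather than by the fixed factor a finite-order method would give.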
This is the magic of pseudospectral methods. When many decimal places of accuracy are needed, the contest between pseudospectral algorithms and finite difference and finite element methods is not an even battle but a rout: pseudospectral methods win hands-down. This is part of the reason that physicists and quantum chemists, who must judge their calculations against experiments accurate to as many as fourteen decimal places (atomic hydrogen maser), have always preferred spectral methods.
However, even when only a crude accuracy of perhaps 5% is needed, the high order of
pseudospectral methods makes it possible to obtain this modest error with about half as
many degrees of freedom, in each dimension, as needed by a fourth order method. In other
words, spectral methods, because of their high accuracy, are memory-minimizing. Problems
that require high resolution can often be done satisfactorily by spectral methods when a
three-dimensional second order finite difference code would fail because the need for eight
or ten times as many grid points would exceed the core memory of the available computer.
'Tis true that virtual memory gives almost limitless memory capacity in theory. In practice, however, swapping multi-megabyte blocks of data to and from the hard disk is very slow. Thus, in a practical (as opposed to theoretical) sense, virtual storage is not an option when core memory is exhausted. The Nobel Laureate Ken Wilson has observed that because of this, memory is a more severe constraint on computational problem-solving than
1 Chapter 2 shows that the convergence is always exponential for well-behaved functions, but (1.15) is
usually too optimistic. The error in an N-point method is O(M(n) h^n) where M(n) is a proportionality constant;
we ignored the (slow) growth of this constant with n to derive (1.15).

CPU time. It is easy to beg a little more time on a supercomputer, or to continue a job
on your own workstation for another night, but if one runs out of memory, one is simply

