that is to say, the error is bounded by TWICE the sum of the absolute values of all the neglected

coefficients.

Comparing (4.46) and (4.47), we conclude:

the PENALTY for using INTERPOLATION instead of TRUNCATION is at WORST a

FACTOR of TWO.

PROOF:

Similar to that of Theorem 10 (Chap. 2). When we sum the absolute values of the terms in (4.44) that are the errors due to interpolation, we find that each neglected coefficient appears in (4.44) exactly once. Each neglected coefficient also appears in the difference between f(x) and fN(x) exactly once. When we add the absolute values of all the errors, truncation and interpolation, we obtain (4.48).

To a finite difference modeller, the factor of two in Theorem 20 is a big deal. With a

second order scheme, it would be necessary to increase the number of grid points by 40%

to reduce the error by that factor.
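The arithmetic behind that claim can be checked with a minimal sketch (my illustration, assuming the error of a second-order scheme scales as 1/N²):

```python
import math

# Assumed error model for a second-order scheme: err ~ C / N**2.
# Halving the error requires N_new / N = sqrt(2), i.e. roughly 41% more points.
N = 100
err = lambda n: 1.0 / n ** 2
N_new = math.sqrt(2) * N
ratio = err(N_new) / err(N)
print(round(ratio, 6), round(100 * (N_new / N - 1)))  # 0.5 41
```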

A spectral modeller has a rather different perspective. For example, the coefficients of ln(1 + x) for x ∈ [0, 1] are proportional to p^n/n where p = 0.18. This implies that each coefficient is more than five times smaller than its immediate predecessor. It follows that if the first N terms of the Chebyshev series for ln(1 + x) give an error everywhere less than ε, then the Chebyshev interpolating polynomial with (N + 1) terms will have an even smaller error bounded by 0.23 ε. When N is large, adding one or two more terms increases the cost only slightly. The penalty we accept for using the pseudospectral method instead of the Galerkin method is completely insignificant.²
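The geometric decay of these coefficients is easy to check numerically. The sketch below is my own illustration (not code from the text); it uses NumPy's `chebinterpolate` and the map x = (t + 1)/2 from the standard interval t ∈ [−1, 1] to x ∈ [0, 1]:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# ln(1 + x) on x in [0, 1], rewritten as a function of t in [-1, 1]
f = lambda t: np.log(1.0 + 0.5 * (t + 1.0))
c = C.chebinterpolate(f, 24)            # Chebyshev coefficients of the interpolant
ratios = np.abs(c[6:16]) / np.abs(c[5:15])
print(np.round(ratios, 3))              # each coefficient ~5-6x smaller than the last
```

The printed ratios hover near p ≈ 0.15–0.17, consistent with the p^n/n decay quoted above.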

The only reason the Galerkin method has not completely disappeared for numerical

computations is that it sometimes generates sparse matrices which can be inverted much

more easily than the full matrices generated by the pseudospectral method, as discussed in

the chapter on matrix-solving.

For completeness, we shall state the analogous theorems for Chebyshev polynomials.

Theorem 21 (CHEBYSHEV INTERPOLATION & its ERROR BOUND)

Let the “Chebyshev-extrema” (“Gauss-Lobatto”) grid {xk} be given by

xk = − cos(kπ/N),   k = 0, 1, . . . , N    [“Chebyshev-extrema” grid]    (4.48)

Let the polynomial PN(x) which interpolates to f(x) at these grid points be

PN(x) = Σ″_{n=0}^{N} bn Tn(x)    [Extrema Grid]    (4.49)

² The exception is a paper-and-pencil or Maple calculation for very small N. A 4 × 4 determinant is approximately five times harder to evaluate analytically than a 3 × 3 determinant, so the Galerkin method is a good choice for low order analytical work.


where the (″) on the summation means that the first and last terms are to be taken with a factor of (1/2). [Compare this with the constant and cos(Mx) term in the Fourier interpolant, SN(x).] The coefficients of the interpolating polynomial are given by

bn = (2/N) Σ″_{k=0}^{N} f(xk) Tn(xk)    [Extrema Grid]    (4.50)

Let the “Chebyshev-roots” grid be defined by

xk = − cos( (2k + 1)π / [2(N + 1)] ),   k = 0, 1, . . . , N    [“Chebyshev-roots” Grid]    (4.51)

and let QN (x) denote the interpolating polynomial of degree N which interpolates to f (x) on this

alternative grid:

QN(x) = Σ′_{n=0}^{N} cn Tn(x)    [Roots Grid]    (4.52)

where the (′) on the sum means that the first term [c0 T0] is to be multiplied by (1/2). The coefficients are

cn = (2/(N + 1)) Σ_{k=0}^{N} f(xk) Tn(xk)    [Roots Grid]    (4.53)

Let {αn} denote the exact spectral coefficients of f(x), that is, let

f(x) = Σ_{n=0}^{∞} αn Tn(x)    (4.54)

The coef¬cients of the interpolating polynomials are related to those of f (x) via

bn = αn + Σ_{j=1}^{∞} ( α_{n+2jN} + α_{−n+2jN} )    [Extrema Grid]    (4.55)

cn = αn + Σ_{j=1}^{∞} ( α_{n+2j(N+1)} + α_{−n+2j(N+1)} ) (−1)^j    [Roots Grid]    (4.56)

For all N and all real x ∈ [−1, 1], the errors in either of the interpolating polynomials are bounded by TWICE the sum of the absolute values of all the neglected coefficients:

|f(x) − PN(x)| ≤ 2 Σ_{n=N+1}^{∞} |αn|    (4.57)

|f(x) − QN(x)| ≤ 2 Σ_{n=N+1}^{∞} |αn| ,    (4.58)

that is, the penalty for using interpolation instead of truncation is the same for Chebyshev series as

for Fourier series:

a FACTOR of TWO.


PROOF: Fox and Parker (1968).
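As a sanity check, the sketch below (my own illustration built from Eqs. (4.48)–(4.50), not code from the text) constructs the extrema-grid interpolant of f(x) = eˣ and confirms its spectral accuracy:

```python
import numpy as np

def cheb_extrema_interp(f, N, x):
    """Interpolate f on the 'Chebyshev-extrema' grid, Eqs. (4.48)-(4.50)."""
    k = np.arange(N + 1)
    xk = -np.cos(k * np.pi / N)                 # extrema grid, Eq. (4.48)
    w = np.ones(N + 1)
    w[0] = w[-1] = 0.5                          # the (") halving of end terms
    Tk = np.cos(np.outer(np.arccos(xk), k))     # T_n(x_k)
    b = (2.0 / N) * ((w * f(xk)) @ Tk)          # coefficients, Eq. (4.50)
    Tx = np.cos(np.outer(np.arccos(x), k))
    return Tx @ (w * b)                         # series, Eq. (4.49)

f = lambda x: np.exp(x)
x = np.linspace(-1.0, 1.0, 501)
P = cheb_extrema_interp(f, 12, x)
print(np.max(np.abs(f(x) - P)))                 # near machine precision
```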

Canuto et al. (1988) show that for incompressible (or weakly divergent) fluid mechanics, it is often useful to employ both these Chebyshev grids simultaneously. On this pseudospectral “staggered” grid, the pressure is defined on the “roots” grid while the velocities are defined on the “extrema” grid.

Although two alternative optimum Chebyshev grids may seem one too many, both sets

of points are useful in practice.

Chapter 5

Cardinal Functions

“Understanding grows only logarithmically with the number of floating point operations.”

– J. P. Boyd

5.1 Introduction

Mathematics courses have given the average physics or engineering student a rather warped view of global expansion methods: the coefficients are everything, and the values of f(x) at various points are but the poor underclass. Indeed, some massive tomes on Fourier series never even mention the word “interpolation”.

To understand pseudospectral methods, one must take a more broad-minded view: the series coefficients {an} and the values of the function at the interpolation points {f(xi)} are equivalent and co-equal representations of the pseudospectral approximation to f(x). Given the (N + 1) coefficients of the approximating polynomial PN(x), we can certainly evaluate the truncated series at each grid point to calculate the {f(xi)}. It is equally true, however, that given the {f(xi)}, we have all the information needed to calculate the {an} by applying the discrete inner product, i.e. interpolation. This operation, which is simply the multiplication of a column vector by a square matrix,¹ introduces no additional errors because the discrete inner product computes the expansion coefficients of PN(x) exactly.
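A minimal sketch of this equivalence (my illustration, using the “roots” grid of Eqs. (4.51)–(4.53)): the values-to-coefficients matrix and the coefficients-to-values matrix are exact inverses, so the round trip between the two representations costs nothing beyond roundoff:

```python
import numpy as np

N = 8
k = np.arange(N + 1)
xk = -np.cos((2 * k + 1) * np.pi / (2 * (N + 1)))   # "roots" grid, Eq. (4.51)
T = np.cos(np.outer(np.arccos(xk), k))              # T_n(x_k), a square matrix
w = np.ones(N + 1)
w[0] = 0.5                                          # the (') halving of c0 T0
M = (2.0 / (N + 1)) * T.T                           # values -> coefficients, Eq. (4.53)
S = T * w                                           # coefficients -> values, Eq. (4.52)
print(np.allclose(S @ M, np.eye(N + 1)))            # True: the round trip is exact
```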

Because of this, pseudospectral methods may use either the series coefficients or the grid point values as the unknowns. Employing the {u(xi)} is very convenient because a pseudospectral technique then becomes a grid point method – just like a finite difference procedure. The only alteration is that derivatives are computed through an N-point scheme instead of the three- or five-point formulas of conventional finite differences.

There are a couple of different ways of working directly with grid point values, and

this chapter will discuss each in turn. In Chapter 9, we will then show that this freedom to

jump between the {an } and {u(xi )} representations of u(x) is absolutely essential to coping

with nonlinearity.

A major reason for discussing cardinal functions, however, is not computational but

conceptual. Pseudospectral algorithms are simply N-th order finite difference methods in disguise.

¹ For Fourier and Chebyshev series, this transformation can alternatively be performed by using a Fast Fourier Transform (FFT) routine.



5.2 Whittaker Cardinal or “Sinc” Functions

Sir Edmund Whittaker (1915) showed that for an infinite interval, the analogues to the “fundamental polynomials of Lagrangian interpolation” are what he called the “cardinal functions”. The collocation points are evenly spaced:

xj ≡ j h,   j = 0, ±1, ±2, . . .    (5.1)

Whittaker's cardinal functions are

Ck(x; h) ≡ sin{π(x − kh)/h} / {π(x − kh)/h}    [“Whittaker Cardinal Functions”]    (5.2)

and have the property

Ck(xj; h) = δjk    (5.3)

One great simplification is that, through a linear shift and rescaling of the argument, the Whittaker cardinal functions can all be expressed in terms of a single, universal function

sinc(x) ≡ sin(πx) / (πx)    (5.4)

The sine function in the numerator has zeros at all integral values of x, x = 0, ±1, ±2, etc.

However, the root at the origin is cancelled by the denominator factor of (πx). Near x = 0,

Taylor expansion of the sine gives

sinc(x) ≈ [ (πx) − (πx)³/6 + (πx)⁵/120 − . . . ] / (πx)    (5.5)

≈ 1 − (πx)²/6 + . . . ,   |x| ≪ 1    (5.6)

Eq. (5.6) shows that sinc(0) = 1 from which (5.3) follows. The sinc function is illustrated in

the upper left panel of Fig. 5.1.
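The cardinal property (5.3) is easy to check numerically; NumPy's `np.sinc` is exactly the sinc of Eq. (5.4). The function name `whittaker_card` below is my own, a sketch rather than code from the text:

```python
import numpy as np

def whittaker_card(x, k, h):
    """Whittaker cardinal function C_k(x; h) of Eq. (5.2)."""
    return np.sinc((x - k * h) / h)     # np.sinc(t) = sin(pi t)/(pi t)

h = 0.5
j = np.arange(-5, 6)
xj = j * h                              # the evenly spaced grid of Eq. (5.1)
vals = whittaker_card(xj, 3, h)
print(vals)                             # 1 at j = 3, ~0 at every other node
```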

The property that each cardinal function is non-zero at one and only one of the interpolation points implies that for arbitrary f(x), the function defined by

f̃(x) ≡ Σ_{j=−∞}^{∞} f(xj) Cj(x)    (5.7)

interpolates f(x) at every point of the grid. However, this hardly guarantees a good approximation; recall the Runge phenomenon for polynomials. In Chapter 17, Sec. 3, however, we show that, for a sufficiently small grid spacing h and for a sufficiently large truncation of the infinite sum (5.7), the sinc series is an accurate approximation if f(x) decays sufficiently fast as |x| → ∞.
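As an illustration (my sketch, with a Gaussian chosen as a rapidly decaying f(x)), truncating the sum (5.7) at |j| ≤ M and shrinking h shows the rapid improvement:

```python
import numpy as np

def sinc_series(f, h, M, x):
    """Truncated Whittaker cardinal series, Eq. (5.7), summed over |j| <= M."""
    j = np.arange(-M, M + 1)
    return np.sinc((x[:, None] - j * h) / h) @ f(j * h)

f = lambda x: np.exp(-x * x)
x = np.linspace(-3.0, 3.0, 101)
for h in (1.0, 0.5, 0.25):
    err = np.max(np.abs(f(x) - sinc_series(f, h, int(6 / h), x)))
    print(h, err)    # the error falls dramatically as h shrinks
```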

Because of this, Sir Edmund Whittaker described sinc(x) as “a function of royal blood in

the family of entire functions, whose distinguished properties separate it from its bourgeois

brethren.” Strong words indeed, but the sinc(x) function is not only successful, but also a

good deal simpler than a Chebyshev polynomial or a spherical harmonic. In the next two

sections, we shall see that Fourier and Chebyshev grids also have their “royal functions”.

The lack of boundaries on an infinite interval makes the Whittaker cardinal function the simplest case.



Figure 5.1: a. The Whittaker cardinal function, sinc([x − π]/h), with h = π/6.
b. [Upper Right] Trigonometric cardinal function associated with the same interpolation point (x = π) and grid spacing (h = π/6) as in (a).
c. [Lower Left] Difference between the Whittaker cardinal function (graphed upper left) and the trigonometric cardinal function (graphed upper right).
d. Polynomial cardinal function associated with interpolation at x = π and evenly spaced grid points, same as for the three other panels. Because of the large “wings” of this function, the value of f(x) at x = π has a greater effect on the interpolant in the remote region near the endpoints (x = 0, 2π) than in the local neighborhood around x = π.

5.3 Cardinal Functions for Trigonometric Interpolation

Gottlieb, Hussaini, and Orszag (1984) show that for the collocation points