
$$ (\phi_i, \phi_j)_G = \delta_{ij} \qquad (4.29) $$

where the discrete inner product is defined by

$$ (f, g)_G \equiv \sum_{i=0}^{N} w_i\, f(x_i)\, g(x_i) \qquad (4.30) $$

for arbitrary functions f(x) and g(x), that is, where the discrete inner product is the (N + 1)-point
Gaussian quadrature approximation to the integral inner product.

PROOF: The product of two polynomials, each of degree at most N, is of degree at most
2N. Gaussian quadrature with (N + 1) points, however, is exact for polynomials of degree
(2N + 1) or less. Therefore, Gaussian quadrature evaluates without error the integrals that
express the mutual orthogonality of the first (N + 1) basis functions among themselves.
Q.E.D.
4.4. PSEUDOSPECTRAL IS GALERKIN METHOD VIA QUADRATURE 91
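A quick numerical illustration of this discrete orthogonality (a sketch, not from the text; the variable names are mine): with (N + 1)-point Gauss-Legendre quadrature, the discrete inner products of the first (N + 1) Legendre polynomials match the exact integrals, so the discrete Gram matrix is diagonal. Since the Legendre polynomials are orthogonal but not orthonormal, the diagonal entries are the norms 2/(2n + 1) rather than unity.

```python
import numpy as np

# Discrete Gram matrix G_ij = (P_i, P_j)_G of Eq. (4.30) for the Legendre
# polynomials P_i, using (N + 1)-point Gauss-Legendre quadrature.
N = 8
x, w = np.polynomial.legendre.leggauss(N + 1)        # abscissas and weights
# P[i, k] = P_i(x_k): each basis polynomial evaluated on the grid
P = np.array([np.polynomial.legendre.Legendre.basis(i)(x)
              for i in range(N + 1)])
G = (P * w) @ P.T            # G_ij = sum_k w_k P_i(x_k) P_j(x_k)
# Products P_i P_j have degree <= 2N, so the quadrature is exact and G
# reproduces the continuous inner products: diag(2 / (2n + 1)).
exact = np.diag(2.0 / (2 * np.arange(N + 1) + 1))
print(np.allclose(G, exact))   # True
```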

This second theorem implies that the reward for exact evaluation of inner product integrals, instead of approximation by Gaussian quadrature, is modest indeed. If we write

$$ f(x) = \sum_{n=0}^{N} a_n \phi_n(x) + E_T(N; x), \qquad (4.31) $$

then the property of exponential convergence implies that for large N, E_T(N; x) will be
very small. Precisely how small depends upon f(x) and N, of course, but the virtue of
exponential convergence is that it is not a great deal more difficult to obtain 15 decimal
places of accuracy than it is to get 1% accuracy. (For ln(1 + x) on x ∈ [0, 1], for example,
N = 19 gives a relative error of at worst 1 part in 10^15!) Theorem 17 implies that

$$ a_{n,G} \equiv (f, \phi_n)_G \qquad (4.32) $$

must evaluate to

$$ a_{n,G} = a_n + (E_T, \phi_n)_G \qquad (4.33) $$

In words, the discrete inner product (that is, Gaussian integration) correctly and exactly
integrates that part of (f, φ_n) which is due to the truncated series portion of f, and all the
error in the integration comes from the truncation error in the (N + 1)-term approximation.
If E_T ≈ 10^{-15}, it is obvious that the difference between a_{n,G} and the exact spectral
coefficient a_n must be of this same ridiculously small magnitude. (The exceptions are the
coefficients whose degree n is close to the truncation limit N. The absolute error in
approximating these tiny coefficients by quadrature is small, but the relative error may not
be small.) However, Gaussian integration is a well-conditioned numerical procedure, so
the absolute errors in the {a_n} will always be small for all n. It follows that if we blindly
calculate (N + 1) spectral coefficients via (N + 1)-point Gaussian quadrature and then sum
to calculate f(x), we will always get very high accuracy for f(x) if N is sufficiently large.
Thus, to represent a known function f (x) as a spectral series, we can safely dump our
table of integrals and simply multiply the grid point values of f (x) by a square matrix:

a ≈ Mf (4.34)

where a is the column vector of spectral coefficients and where the elements of the other
two matrices are

$$ M_{ij} \equiv \phi_i(x_j)\, w_j \; ; \qquad f_j \equiv f(x_j) \qquad (4.35) $$

where the {x_j} are the (N + 1) Gaussian quadrature abscissas (the roots of φ_{N+1}(x))
and the w_j are the corresponding quadrature weights. Note that if the basis functions are
not orthonormal, we should divide the i-th row of M by (φ_i, φ_i). This transformation from
the grid point values f(x_j) to the corresponding spectral coefficients a_j is discussed further
in Chapter 5, Sec. 5, and Chapter 10, Sec. 4, as the "Matrix Multiplication Transformation"
(MMT).
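Eqs. (4.34) and (4.35) are easy to code. Here is a sketch of the MMT for the Chebyshev case (the helper `chebyshev_mmt` and the choice of test function are mine, not the book's): the Gauss-Chebyshev abscissas are the roots of T_{N+1}(x), all quadrature weights equal π/(N + 1), and, since the T_n are not orthonormal, each row of M is divided by the discrete norm (T_i, T_i)_G as noted above.

```python
import numpy as np

def chebyshev_mmt(N):
    """Build the grid and the MMT matrix of Eq. (4.35) for Chebyshev
    polynomials, with rows pre-divided by the discrete norms."""
    i = np.arange(N + 1)
    theta = (2 * i + 1) * np.pi / (2 * (N + 1))   # T_{N+1}(cos theta) = 0
    x = np.cos(theta)                             # quadrature abscissas x_j
    M = np.cos(np.outer(i, theta)) * (np.pi / (N + 1))   # T_i(x_j) w_j
    norms = np.where(i == 0, np.pi, np.pi / 2)    # (T_0,T_0)_G = pi, else pi/2
    return x, M / norms[:, None]

x, M = chebyshev_mmt(19)
a = M @ np.exp(x)             # Eq. (4.34): coefficients of exp(x)
f_back = np.polynomial.chebyshev.chebval(x, a)   # sum the series on the grid
print(np.allclose(f_back, np.exp(x)))            # True
```

Summing the computed series back on the grid reproduces f(x_j) up to round-off, which is just the interpolation property established later in this section.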
When solving a differential equation, we merely apply the same procedure as in (4.34)
and (4.35) to approximate the residual function R(x; a_0, a_1, . . . , a_N) by interpolation. Again,
if N is large enough so that the coefficients r_n in the residual series are small for n > N, the
error in using Gaussian quadrature instead of exact integration to compute the r_n must be
small.
CHAPTER 4. INTERPOLATION, COLLOCATION & ALL THAT

Although we have referred to "polynomials" throughout this section, Theorems 16 and
17 apply not only to all the standard types of orthogonal polynomials but also to trigonometric
polynomials (that is, truncated Fourier series). For most types of polynomials, it
is difficult to estimate the error in (4.34) except to conclude, as we have already done, that
if N is large, this error will be absurdly small. Numerical experience has confirmed that
the pseudospectral method is a very accurate and robust procedure for Hermite functions,
Jacobi polynomials and so on.
For Fourier series and Chebyshev polynomials, however, it is possible to analyze the
error in approximating coefficient integrals by Gaussian quadrature (4.34) in a simple way.
This "aliasing" analysis turns out to be essential in formulating the famous "Two-Thirds
Rule" for creating "un-aliased" pseudospectral codes for nonlinear fluid mechanics
problems, so it is the topic of the next section.
We began this chapter with a discussion of interpolation and then turned to inner
products and numerical integration. The next theorem shows that these two themes are
intimately connected.
Theorem 18 (INTERPOLATION BY QUADRATURE)
Let P_N(x) denote that polynomial of degree N which interpolates to a function f(x) at the
(N + 1) Gaussian quadrature points associated with a set of orthogonal polynomials {φ_n(x)}:

$$ P_N(x_i) = f(x_i), \qquad i = 0, 1, \ldots, N \qquad (4.36) $$

P_N(x) may be expanded without error as a sum of the first (N + 1) basis functions φ_n(x) because
it is merely a polynomial of degree N. The coefficients {a_n} of this expansion,

$$ P_N(x) = \sum_{n=0}^{N} a_n \phi_n(x), \qquad (4.37) $$

are given WITHOUT APPROXIMATION by the discrete inner product

$$ a_n = \frac{(f, \phi_n)_G}{(\phi_n, \phi_n)_G} \qquad (4.38) $$

that is to say, are precisely the coefficients calculated by (4.34) and (4.35) above, the Matrix
Multiplication Transform (MMT).
PROOF: Since the interpolating polynomial is a polynomial, and since Theorem 17 shows
that the discrete inner product preserves the orthogonality of the first (N + 1) basis functions
among themselves, it is obvious that applying the discrete inner product to the finite
series (4.37) will exactly retrieve the coefficients a_n. What is not obvious is that we will
compute the same coefficients when we use f(x) itself in the inner product, since f(x) is
not a polynomial, but a function that can be represented only by an infinite series.
However, the Gaussian integration uses only the (N + 1) values of f(x) at the interpolation
points, and these are the same as the values of P_N(x) at those points. Thus, when we
use Gaussian quadrature to approximate f(x), we are really expanding the interpolating
polynomial P_N(x) instead. Q. E. D.
Since it is easy to sum a truncated spectral series like (4.37) by recurrence, it is far more
efficient to perform Lagrangian interpolation by calculating the coefficients as in (4.34) and
(4.35) than it is to use the cardinal function representation (4.6), even though the two are
mathematically identical (ignoring round-off error).
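The recurrence alluded to here is, for a Chebyshev series, Clenshaw's algorithm. A minimal sketch (the function name is mine):

```python
import numpy as np

def clenshaw_chebyshev(a, x):
    """Sum S(x) = sum_{n=0}^{N} a_n T_n(x) by the backward recurrence
    b_n = a_n + 2 x b_{n+1} - b_{n+2}; costs O(N) per point and never
    builds the T_n explicitly. Works for scalar or array x."""
    b1 = b2 = 0.0
    for a_n in a[:0:-1]:                  # a_N, a_{N-1}, ..., a_1
        b1, b2 = 2 * x * b1 - b2 + a_n, b1
    return x * b1 - b2 + a[0]             # fold in T_0 = 1, T_1 = x

# Usage: agrees with direct evaluation of the same series
coeffs = np.array([1.0, 0.5, -0.25, 0.125])
x = np.linspace(-1.0, 1.0, 7)
print(np.allclose(clenshaw_chebyshev(coeffs, x),
                  np.polynomial.chebyshev.chebval(x, coeffs)))   # True
```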
One must be a bit careful to understand just what the theorem means. The coefficients
computed by Gaussian integration are not the exact spectral coefficients of f(x), but only
good approximations. The pseudospectral coefficients are the exact expansion coefficients
only of P_N(x), the interpolating polynomial. For large N, however, such subtleties are
academic: P_N(x) is a ridiculously good approximation to f(x), and therefore its coefficients
are exceedingly good approximations to those of f(x).

4.5 Pseudospectral Errors: Trigonometric & Chebyshev Polynomials
Theorem 19 (TRIGONOMETRIC INTERPOLATION)
Let the collocation points {x_k} be defined by

$$ x_k \equiv -\pi + \frac{2\pi k}{N}, \qquad k = 1, 2, \ldots, N \qquad (4.39) $$
Let a function f(x) have the exact, infinite Fourier series representation

$$ f(x) \equiv \frac{1}{2}\alpha_0 + \sum_{n=1}^{\infty} \alpha_n \cos(nx) + \sum_{n=1}^{\infty} \beta_n \sin(nx) \qquad (4.40) $$

Let the trigonometric polynomial which interpolates to f(x) at the N collocation points (4.39) be

$$ S_N(x) = \frac{1}{2} a_0 + \sum_{n=1}^{N/2-1} a_n \cos(nx) + \sum_{n=1}^{N/2-1} b_n \sin(nx) + \frac{1}{2} a_M \cos(Mx) \qquad (4.41) $$

where M ≡ N/2 and where

$$ S_N(x_k) = f(x_k), \qquad k = 1, 2, \ldots, N \qquad (4.42) $$

Then the coefficients of the interpolant can be computed without error by the Trapezoidal Rule:

$$ a_n = \frac{2}{N} \sum_{k=1}^{N} f(x_k) \cos(n x_k) \qquad \text{[Trapezoidal Rule]} \qquad (4.43a) $$

$$ b_n = \frac{2}{N} \sum_{k=1}^{N} f(x_k) \sin(n x_k) \qquad \text{[Trapezoidal Rule]} \qquad (4.43b) $$

and these coefficients of the interpolant are given by infinite series of the exact Fourier coefficients:

$$ a_n = \alpha_n + \sum_{j=1}^{\infty} \left( \alpha_{n+jN} + \alpha_{-n+jN} \right), \qquad n = 0, 1, \ldots, \frac{N}{2} \qquad (4.44a) $$

$$ b_n = \beta_n + \sum_{j=1}^{\infty} \left( \beta_{n+jN} - \beta_{-n+jN} \right), \qquad n = 1, 2, \ldots, \frac{N}{2} - 1 \qquad (4.44b) $$

PROOF: Young and Gregory (1972).
The factor of (1/2) multiplying both a_0 and α_0 is a convention. The reason for it is that
(cos[nx], cos[nx]) = (sin[nx], sin[nx]) = π for any n > 0, but (1, 1) = 2π. There are two
ways of dealing with this factor of 2. One, which is normal in working with a Fourier series,
is simply to insert a factor of (1/2π) in front of the integral that gives the constant
in the Fourier series and a factor of (1/π) in front of all the other coefficient integrals. The
alternative, which is used in the theorem, is to employ a factor of (1/π) in the definition of all
coefficients (which means computing a coefficient a_0 that is a factor of two larger than
the constant in the Fourier series) and then inserting the compensating factor of (1/2)
directly into the Fourier sum (4.40) or (4.41).
This second convention is quite popular with the pseudospectral crowd because it
eliminates factors of 2 from (4.44) as well as from (4.43).

Eq. (4.43) is labelled the "Trapezoidal Rule" because it is equivalent to applying that
simple integration formula to the Fourier coefficient integrals.¹ The Trapezoidal Rule is a
very crude approximation with a relative accuracy of only O(h²) for general, that is to say,
for non-periodic, functions. For periodic f(x), however, the Trapezoidal Rule is equivalent to
Gaussian integration. Unlike the optimum quadrature methods associated with Legendre
or Hermite polynomials, it is not necessary to look up the weights and abscissas in a table.
The weights (with the convention of the factor of (1/2) in (4.40) and (4.41)) are w_k ≡ 2/N for
all k, and the interpolation points are evenly spaced as in (4.39). (Parenthetically, note that
in computing Fourier transforms, the Trapezoidal Rule also gives accuracy that increases
exponentially with N for sufficiently nice functions.)
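The claim that the Trapezoidal Rule achieves spectral accuracy for periodic integrands is easy to check numerically. A sketch (the helper and the test function are illustrative choices of mine): for f(x) = exp(cos x), the exact mean value is the modified Bessel function I_0(1) = 1.266065877752008..., a standard identity, and a_0/2 from (4.43a) reproduces it to near machine precision even for modest N.

```python
import numpy as np

def trapezoidal_coeffs(f, N, nmax):
    """Fourier coefficients of the interpolant via Eqs. (4.43a, b) on the
    evenly spaced grid (4.39)."""
    k = np.arange(1, N + 1)
    xk = -np.pi + 2 * np.pi * k / N               # collocation points (4.39)
    fk = f(xk)
    n = np.arange(nmax + 1)[:, None]
    a = (2.0 / N) * (fk * np.cos(n * xk)).sum(axis=1)   # (4.43a)
    b = (2.0 / N) * (fk * np.sin(n * xk)).sum(axis=1)   # (4.43b)
    return a, b

a, b = trapezoidal_coeffs(lambda x: np.exp(np.cos(x)), N=16, nmax=4)
print(abs(a[0] / 2 - 1.2660658777520084) < 1e-12)   # True: 16 points suffice
print(np.allclose(b, 0.0, atol=1e-12))              # True: f is even
```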
Most of Theorem 19 merely repeats the trigonometric equivalent of the interpolation-
at-the-Gaussian-abscissas ideas of the previous section, but (4.44) is something remarkably
different. Theorems 17 and 18 imply that for any set of orthogonal polynomials,

$$ a_n = \alpha_n + \sum_{j=N+1}^{\infty} e_{n,j}(N)\, \alpha_j \qquad (4.45) $$

where the {α_n} are the exact spectral coefficients and where the {a_n, n = 0, 1, . . . , N} are
the coefficients of the interpolating polynomial. What is surprising about (4.44) is that it
shows that for Fourier series (and, through a change of variable, for Chebyshev polynomials),
only one e_{n,j} out of each group of (N/2) is different from zero.
This simple fact turns out to be profoundly important for coping with "nonlinear aliasing
instability", which is a vice of both finite difference and pseudospectral hydrodynamic
codes. For pseudospectral algorithms and quadratic nonlinearity, aliasing can be cured by
evaluating the nonlinear products using 2N interpolation points [double the number of
terms in the series] so that the expansion of the nonlinear term can be computed exactly.
S. A. Orszag pointed out that this is wasteful for Fourier and Chebyshev methods. The
special form of the Gaussian quadrature error in (4.44) makes it possible to use only about
(3/2)N points instead of 2N. This has become known as the "3/2's Rule" or "Two-Thirds
Rule", where the latter name reflects that the yield of coefficients is only (2/3) the number of
points used for quadrature. Unfortunately, the Two-Thirds Rule applies only for these two
types of basis functions: Fourier and Chebyshev. We will discuss the "Two-Thirds Rule" in
some detail in Chapter 11.
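The grid-level identity that drives (4.44), and hence the Two-Thirds Rule, is that on the N-point grid (4.39) the wavenumber n + jN is indistinguishable from the wavenumber n. A quick illustrative check (the particular values of n and j are arbitrary choices of mine):

```python
import numpy as np

# On the evenly spaced grid (4.39), cos((n + jN) x_k) = cos(n x_k) and
# sin((n + jN) x_k) = sin(n x_k) at every grid point: the high mode
# "aliases" onto the low one, which is why the error series (4.44)
# contains only the coefficients with indices n + jN and -n + jN.
N = 16
k = np.arange(1, N + 1)
xk = -np.pi + 2 * np.pi * k / N
ok = all(
    np.allclose(np.cos((n + j * N) * xk), np.cos(n * xk))
    and np.allclose(np.sin((n + j * N) * xk), np.sin(n * xk))
    for n in (2, 5) for j in (1, 2)
)
print(ok)   # True
```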
Another payoff of (4.44) is a proof of the following.

Theorem 20 (ERROR: FOURIER INTERPOLATION)
Let S_N(x) denote the trigonometric polynomial that interpolates to a function f(x) at N points.
Let f_N(x) denote the corresponding truncation of the exact Fourier series. Let the {α_n} and {β_n}
denote the exact spectral coefficients of the infinite Fourier series. Then, as stated in Theorem 10 of
Chapter 2 (with slightly different notation),

$$ |f(x) - f_N(x)| \le |\beta_{N/2}| + \sum_{n=1+N/2}^{\infty} \left( |\alpha_n| + |\beta_n| \right) \qquad (4.46) $$

that is to say, the error is bounded by the sum of the absolute values of all the neglected coefficients.
The new theorem is that the corresponding bound for trigonometric interpolation is, for all N
and all real x,

$$ |f(x) - S_N(x)| \le 2 \left\{ |\beta_{N/2}| + \sum_{n=1+N/2}^{\infty} \left( |\alpha_n| + |\beta_n| \right) \right\} \qquad (4.47) $$

¹Strictly speaking, the classical Trapezoidal Rule would add the quadrature point x = −π and multiply f(−π)
and f(π) by one-half the weight used for the interior points. Because of the periodicity, however, f(−π) ≡ f(π),
so it suffices to weight all the points the same and to use only f(π).