
Another exception arises in time-marching. Implicit methods require solving a boundary value problem at each step. However, if only some of the terms are treated implicitly so as to give a “semi-implicit” algorithm, the boundary value problem may be linear and constant coefficient. For such problems, as explained in Chapter 15, the Galerkin matrix is banded and Galerkin's method is much faster than the pseudospectral scheme. Galerkin's method is an important component of spherical harmonic weather forecasting and climate models for this reason.
In addition to these exceptions, quantum chemistry is an area where the Galerkin method is still widely used, and for all the right reasons. Complex molecules require so many degrees of freedom that most of the computations must be done by machine. However, one is usually forced to use a small number of basis functions per electron, so that one is in effect carrying out a low order simulation even when N = 400! The extra accuracy of Galerkin's method is therefore very important.
Normally, one should use only the standard, simple basis sets like Fourier series and Chebyshev polynomials. However, quantum chemists are quite sensible in using LCAO (“Linear Combinations of Atomic Orbitals”) as the trial functions; it would be quite impossible to obtain such good results for the H₂⁺ molecular ion with just three degrees of freedom unless the choice of trial solution was guided by physical intuition. For similar reasons, explained in Chapter 20 on symbolic calculations, it is sometimes better to use polynomials other than Chebyshev for certain types of analytical, low-order calculations.
For most high-resolution numerical calculations, however, the best advice is still this:
use pseudospectral methods instead of spectral, and use Fourier series and Chebyshev
polynomials in preference to more exotic functions.
Chapter 4

Interpolation, Collocation & All That

“In the past several years, there has been extensive activity in both the theory and ap-
plication of spectral methods. This activity has been mainly concentrated in the area of
pseudospectral methods.”
– Gottlieb, Hussaini, & Orszag (1984)




4.1 Introduction
The pseudospectral family of algorithms is closely related to the Galerkin method. To show
this and to understand the mechanics of pseudospectral methods, it is necessary to review
some classical numerical analysis: polynomial interpolation, trigonometric interpolation,
and Gaussian integration.


Definition 15 (INTERPOLATION)
An INTERPOLATING approximation to a function f(x) is an expression P_{N-1}(x), usually an ordinary or trigonometric polynomial, whose N degrees of freedom are determined by the requirement that the INTERPOLANT agree with f(x) at each of a set of N INTERPOLATION points:

\[
P_{N-1}(x_i) = f(x_i), \qquad i = 1, 2, \ldots, N \tag{4.1}
\]

In the rest of this chapter, we will discuss the choice of interpolation points and methods
for computing the interpolant.
A note on terminology: we shall use “collocation points” and “interpolation points”
as synonyms. However, “interpolation” has the connotation that f (x), the function which
is being approximated by a polynomial, is already a known function. “Collocation” and
“pseudospectral” are applied to interpolatory methods for solving differential equations
for an unknown function f (x). The label “pseudospectral” is narrower than “collocation”
in that the former is rarely used except when the basis functions are global, such as Chebyshev or Fourier functions. Thus, almost any journal article with “pseudospectral” in the title is relevant to this book, but many “collocation” methods are finite element procedures.





[Figure 4.1: Linear interpolation. The dashed line is that linear polynomial which intersects the function being approximated (solid curve) at the two interpolation points x_0 and x_1.]

4.2 Polynomial interpolation
Before hand-held calculators, tables of mathematical functions were essential survival equipment. If one needed values of the function at points which were not listed in the table, one used interpolation. The simplest variant, linear interpolation, is to draw a straight line between the two points in the table which bracket the desired x. The value of the linear function at x is then taken as the approximation to f(x), i. e.,

\[
f(x) \approx \frac{x - x_1}{x_0 - x_1}\, f(x_0) \;+\; \frac{x - x_0}{x_1 - x_0}\, f(x_1) \qquad \text{[Linear Interpolation]} \tag{4.2}
\]
Fig. 4.1 is a graphic proof of the high school theorem: a straight line is completely determined by specifying any two points upon it; the linear interpolating polynomial is unique. A more abstract definition is that P_1(x) is that unique linear polynomial which satisfies the two interpolation conditions

\[
P_1(x_0) = f(x_0); \qquad P_1(x_1) = f(x_1) \tag{4.3}
\]
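For concreteness, here is a minimal Python sketch of Eq. (4.2), applied to two hypothetical “table” entries of sin(x); the function name linear_interp is ours, not standard.

```python
import math

def linear_interp(x, x0, f0, x1, f1):
    """Evaluate the linear interpolant of Eq. (4.2) at x."""
    return f0 * (x - x1) / (x0 - x1) + f1 * (x - x0) / (x1 - x0)

# Two "table" entries bracketing the desired point x = 0.35:
x0, x1 = 0.3, 0.4
approx = linear_interp(0.35, x0, math.sin(x0), x1, math.sin(x1))
print(approx, math.sin(0.35))  # about 0.34247 vs. 0.34290
```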
Linear interpolation is not very accurate unless the tabulated points are very, very close together, but one can extend this idea to higher order. Fig. 4.2 illustrates quadratic interpolation.



[Figure 4.2: Schematic of quadratic interpolation. The dashed curve is the unique quadratic polynomial (i. e., a parabola) which intersects the curve of the function being approximated (solid) at the three points x_0, x_1, x_2.]

A parabola is uniquely specified by giving any three points upon it. Thus, we can alternatively approximate f(x) by the quadratic polynomial P_2(x) which satisfies the three interpolation conditions

\[
P_2(x_0) = f(x_0); \qquad P_2(x_1) = f(x_1); \qquad P_2(x_2) = f(x_2) \tag{4.4}
\]

\[
P_2(x) \equiv \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}\, f(x_0) + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}\, f(x_1) + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}\, f(x_2) \tag{4.5}
\]

In general, one may fit any N + 1 points by a polynomial of N-th degree via

\[
P_N(x) \equiv \sum_{i=0}^{N} f(x_i)\, C_i(x) \qquad \text{[Lagrange Interpolation Formula]} \tag{4.6}
\]

where the C_i(x), the “cardinal functions”, are polynomials of degree N which satisfy the conditions

\[
C_i(x_j) = \delta_{ij} \tag{4.7}
\]

where \delta_{ij} is the Kronecker δ-function. The cardinal functions are defined by

\[
C_i(x) = \prod_{j=0,\, j \ne i}^{N} \frac{x - x_j}{x_i - x_j} \qquad \text{[Cardinal Function]} \tag{4.8}
\]

The N factors of (x - x_j) insure that C_i(x) vanishes at all the interpolation points except x_i. (Note that we omit the factor j = i so that C_i(x) is a polynomial of degree N, not (N + 1).) The denominator forces C_i(x) to equal 1 at the interpolation point x = x_i; at that point, every factor in the product is (x_i - x_j)/(x_i - x_j) = 1. The cardinal functions are also called the “fundamental polynomials for pointwise interpolation”, the “elements of the cardinal basis”, the “Lagrange basis”, or the “shape functions”.
The cardinal function representation is not efficient for computation, but it does give a proof-by-construction of the theorem that it is possible to fit an interpolating polynomial of any degree to any function. Although the interpolating points are often evenly spaced (surely this is the most obvious possibility), no such restriction is inherent in (4.6); the formula is still valid even if the {x_i} are unevenly spaced or out of numerical order.
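The following Python fragment is a direct, illustrative (not efficient) transcription of Eqs. (4.6)–(4.8); the function names are our own. It works for unevenly spaced nodes, as the text notes.

```python
import numpy as np

def cardinal(i, x, nodes):
    """C_i(x) of Eq. (4.8): equals 1 at nodes[i] and 0 at every other node."""
    C = np.ones_like(x)
    for j, xj in enumerate(nodes):
        if j != i:                        # omit the j = i factor
            C *= (x - xj) / (nodes[i] - xj)
    return C

def lagrange_interp(x, nodes, fvals):
    """P_N(x) of Eq. (4.6): the sum of f(x_i) * C_i(x)."""
    return sum(fvals[i] * cardinal(i, x, nodes) for i in range(len(nodes)))

# By Eq. (4.7), the interpolant reproduces f exactly at the nodes:
nodes = np.array([0.0, 0.5, 1.0, 2.0])    # unevenly spaced is fine
x = np.linspace(0.0, 2.0, 9)              # grid containing the nodes
print(lagrange_interp(x, nodes, np.exp(nodes)))
print(np.exp(x))
```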
It seems plausible that if we distribute the interpolation points evenly over an interval [a, b], then the error in P_N(x) should → 0 as N → ∞ for any smooth function f(x). At the turn of the century, Runge showed that this is not true. His famous example is

\[
f(x) \equiv \frac{1}{1 + x^2}, \qquad x \in [-5, 5] \tag{4.9}
\]
Runge proved that for this function, interpolation with evenly spaced points converges only within the interval |x| ≤ 3.63 and diverges for larger |x| (Fig. 4.3). The 15-th degree polynomial does an excellent job of representing the function for |x| ≤ 3; but as we use more and more points, the error gets worse and worse near the endpoints. The lower right panel of Fig. 4.3, which plots the logarithm (base 10) of the error for the 30-th degree polynomial, makes the same point even more strongly.


[Figure 4.3: four panels, labeled 6-pt, 11-pt, 15-pt, and 31-pt Errors; see caption below.]
Figure 4.3: An example of the Runge phenomenon.
a. Solid curve without symbols: f(x) ≡ 1/(1 + x^2), known as the “Lorentzian” or “witch of Agnesi”. Disks-and-solid curve: fifth-degree polynomial interpolant on x ∈ [-5, 5]. The six evenly spaced interpolation points are the locations where the dashed and the solid curves intersect.
b. Interpolating polynomial [disks] of tenth degree.
c. The interpolating polynomial of degree fifteen is the dashed curve.
d. Same as previous parts except that only the error, log10(|f(x) - P_30(x)|), is shown, where P_30(x) is the polynomial of degree thirty which interpolates f(x) at 31 evenly spaced points on x ∈ [-5, 5]. Because the error varies so wildly, it is plotted on a logarithmic scale with limits of 10^{-3} and 10^2.


In the interior of the interval, the error is only 1/1000. Just inside the endpoints, however, P_30(x) reaches a maximum value of 73.1 even though the maximum of f(x) is only 1! It is not possible to graph f(x) and P_30(x) on the same figure because the curve of f(x) would be almost invisible; one would see only the two enormous spikes near the endpoints where the polynomial approximation goes wild.
Since f(x) = 1/(1 + x^2) has simple poles at x = ±i, its power series converges only for |x| < 1. These same singularities destroy the polynomial interpolation, too; the wonder is not that Lagrangian interpolation fails, but that it succeeds over a much larger interval than the Taylor series.
This in turn hints that the situation is not hopeless if we are willing to consider an uneven grid of interpolation points. The lower panels of Fig. 4.3 show that, as Runge proved, the middle of the interval is not the problem. The big errors are always near the endpoints. This suggests that we should space the grid points relatively far apart near the middle of the interval, where we are getting high accuracy anyway, and increase the density of points as we approach the endpoints.
Unfortunately, as the degree N increases, so does the tendency of a polynomial to oscillate wildly near the endpoints. Consequently, the endpoint squeezing must increase with N. The Chebyshev interpolation procedure, which does indeed make Lagrangian interpolation successful for any function which is analytic on the interval, has the grid points only O(1/N^2) apart near the endpoints, versus the 1/N spacing of our (failed!) effort to approximate Runge's example with an evenly spaced grid.
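A short numerical experiment (our own sketch, reusing the direct cardinal-function evaluation from the previous listing) makes the contrast concrete: on an evenly spaced grid the maximum error of the interpolant of Runge's function grows with N, while on a Chebyshev (Lobatto) grid, x_j = 5 cos(πj/N), it decays.

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + x**2)

def max_error(nodes):
    """Max |f - P_N| on [-5, 5] for the Lagrange interpolant through nodes."""
    x = np.linspace(-5.0, 5.0, 2001)
    P = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        C = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != i:
                C *= (x - xj) / (xi - xj)
        P += runge(xi) * C
    return np.max(np.abs(P - runge(x)))

for N in (10, 20, 30):
    equi = np.linspace(-5.0, 5.0, N + 1)
    cheb = 5.0 * np.cos(np.pi * np.arange(N + 1) / N)  # clustered near ends
    print(N, max_error(equi), max_error(cheb))
# Equispaced errors grow (roughly 72 for N = 30, matching the 73.1 peak
# quoted in the text); Chebyshev-grid errors shrink as N increases.
```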
But what distribution of points is best? The answer is given by a couple of very old
theorems.

Theorem 12 (CAUCHY INTERPOLATION ERROR THEOREM)
Let f(x) have at least (N + 1) derivatives on the interval of interest and let P_N(x) be its Lagrangian interpolant of degree N. Then

\[
f(x) - P_N(x) = \frac{1}{(N+1)!}\, f^{(N+1)}(\xi) \prod_{i=0}^{N} (x - x_i) \tag{4.10}
\]

for some ξ on the interval spanned by x and the interpolation points. The point ξ depends on the function being approximated, upon N, upon x, and upon the location of the interpolation points.

PROOF: Davis (1975).
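To see the theorem in action, here is a hedged numerical check (our own construction, not from Davis): for f(x) = sin(x), every derivative is bounded by 1, so Eq. (4.10) implies |f(x) - P_N(x)| ≤ |∏(x - x_i)| / (N + 1)! pointwise.

```python
import math
import numpy as np

nodes = np.linspace(0.0, np.pi, 5)       # N = 4
x = np.linspace(0.0, np.pi, 401)

# Lagrange interpolant of sin(x) via the cardinal functions of Eq. (4.8).
P = np.zeros_like(x)
for i, xi in enumerate(nodes):
    C = np.ones_like(x)
    for j, xj in enumerate(nodes):
        if j != i:
            C *= (x - xj) / (xi - xj)
    P += np.sin(xi) * C

# The polynomial factor of Eq. (4.10) and the resulting error bound.
omega = np.ones_like(x)
for xi in nodes:
    omega *= (x - xi)
bound = np.abs(omega) / math.factorial(5)

print(np.max(np.abs(np.sin(x) - P)))     # actual maximum error
print(np.max(bound))                     # Cauchy bound; never smaller
assert np.all(np.abs(np.sin(x) - P) <= bound + 1e-12)
```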
If we want to optimize Lagrangian interpolation, there is nothing we can do about the f^{(N+1)}(ξ) factor (in general), since it depends on the specific function being approximated, but the magnitude of the polynomial factor depends upon our choice of grid points. The leading coefficient of the product \prod_{i=0}^{N}(x - x_i) is 1, independent of the grid points, so the question becomes: What choice of grid points gives us a polynomial (with leading coefficient 1) which is as small as possible over the interval spanned by the grid points? By a linear change of variable, we can always rescale and shift the interval [a, b] to [-1, 1], but what then? Ironically, this question was answered half a century before the Runge phenomenon was discovered.

Theorem 13 (CHEBYSHEV MINIMAL AMPLITUDE THEOREM)
Of all polynomials of degree N with leading coefficient [coefficient of x^N] equal to 1, the unique polynomial which has the smallest maximum on [-1, 1] is T_N(x)/2^{N-1}, the N-th Chebyshev polynomial divided by 2^{N-1}. In other words, all polynomials of the same degree and leading coefficient unity satisfy the inequality

\[
\max_{x \in [-1,1]} |P_N(x)| \;\ge\; \max_{x \in [-1,1]} \left| \frac{T_N(x)}{2^{N-1}} \right| = \frac{1}{2^{N-1}} \tag{4.11}
\]

PROOF: Davis (1975, pg. 62).
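A quick numerical check (again our own sketch) is to compare the monic Chebyshev polynomial T_N(x)/2^{N-1}, whose roots are x_k = cos((2k + 1)π/(2N)), against a monic polynomial of the same degree with evenly spaced roots.

```python
import numpy as np

N = 8
x = np.linspace(-1.0, 1.0, 4001)

def monic_max(roots):
    """Max on [-1, 1] of the monic polynomial with the given roots."""
    p = np.ones_like(x)
    for r in roots:
        p *= (x - r)
    return np.max(np.abs(p))

cheb_roots = np.cos(np.pi * (2.0 * np.arange(N) + 1.0) / (2.0 * N))
equi_roots = np.linspace(-1.0, 1.0, N)

print(monic_max(cheb_roots), 1.0 / 2**(N - 1))  # both about 1/128
print(monic_max(equi_roots))                    # noticeably larger
```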
Now any polynomial of degree N can be factored into the product of linear factors of the form (x - x_i), where x_i is one of the roots of the polynomial, so in particular
