
Figure 5.5: Relationship between the evenly spaced grid and the endpoint-concentrated

Legendre-Lobatto grid. The points of both grids are marked by circles. The Legendre-

Lobatto grid is at the top.

are constants. It is often possible to simplify (5.28) so as to eliminate the second derivative

by using the differential equation which the orthogonal polynomials satisfy.

The shape of polynomial cardinal functions is very similar to that of their trigonometric

counterparts. The Chebyshev cardinal functions are in fact identical with the cosine car-

dinal functions except for the change of coordinate. This mapping causes the Chebyshev

cardinal functions to oscillate very rapidly near the endpoints and slowly near the center.

The cardinal functions for Gegenbauer polynomials of higher degree, such as the Legendre

polynomials PN (x), are qualitatively similar except that the interpolation points are a little

closer to the center of the interval, so the oscillations near the endpoints are larger than for

Chebyshev functions.

Our description of polynomial cardinal functions is brief because they can be used ex-

actly like their trigonometric counterparts. In particular, Dirichlet boundary conditions are

very easy to impose if the “extrema-plus-endpoints” (“Gauss-Lobatto”) grid is used, as in

(5.28), which is why this choice of interpolation points has become very popular.

The differentiation matrices for Chebyshev and Legendre cardinal functions, analogous to the δ^(k)_ij of the previous section, are given in Gottlieb, Hussaini, and Orszag (1984) and Appendix F.
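For readers who want to experiment, the first-derivative matrix on the Chebyshev-Lobatto grid can be assembled directly from the standard explicit formulas (the sketch below uses the familiar "negative sum trick" for the diagonal entries; the helper name is ours, not from the text):

```python
import numpy as np

def cheb_diff_matrix(N):
    """First-derivative matrix on the Chebyshev-Lobatto grid x_j = cos(pi*j/N)."""
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)                      # Lobatto points, x_0 = 1, ..., x_N = -1
    c = np.where((j == 0) | (j == N), 2.0, 1.0) * (-1.0) ** j
    X = x[:, None] - x[None, :]                    # x_i - x_j
    D = np.outer(c, 1.0 / c) / (X + np.eye(N + 1)) # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                    # "negative sum trick" for the diagonal
    return D, x

D, x = cheb_diff_matrix(16)
u = np.exp(x)                   # test function whose derivative is itself
err = np.max(np.abs(D @ u - u))
print(err)                      # spectrally small for N = 16
```

The identity shift `np.eye(N + 1)` only protects the diagonal from division by zero; those entries are overwritten by the negative-sum step, which enforces the exact property that the derivative of a constant is zero.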

Fig. 5.4 illustrates the cardinal functions for a low order Legendre discretization on the Lobatto grid, a typical choice for spectral elements and h-p finite elements. The wild oscillations of power-of-x, non-trigonometric polynomials on an evenly spaced grid are suppressed because, as illustrated in Fig. 5.5, the pseudospectral grid is not evenly spaced, but has points concentrated near both endpoints.


5.5 Transformations and Interpolation

It is possible to apply the pseudospectral method using either the grid point values {u(x_i)} or the series coefficients {a_n} as the unknowns. The relationship between the two formalisms is important because efficient algorithms for nonlinear problems must jump back and forth between the grid point representation and the truncated series.

For simplicity, assume that Lu = f is a linear equation. Let the pseudospectral-generated matrix problems take the equivalent forms

L u = f        [grid point/cardinal function form]    (5.29)

H a = f        [series coefficient form]    (5.30)

where the elements of the column vector a are the coefficients in the truncated sum

u(x) ≈ Σ_{n=0}^{N} a_n φ_n(x)    (5.31)

and where the other matrix elements are given by

u_i ≡ u(x_i)    (5.32)

f_i ≡ f(x_i)    (5.33)

L_ij ≡ (L C_j[x]) |_{x=x_i}    (5.34)

H_ij ≡ (L φ_j[x]) |_{x=x_i}    (5.35)

where the {xi } are the interpolation points, the {φj (x)} are the spectral basis functions, and

the {Cj (x)} are the cardinal functions.

The coefficients {a_n} and the grid point values are related through the following equation from Chapter 4, which we repeat:

M u = a    (5.36)

where the elements of M are

M_ij ≡ φ_i(x_j) w_j / (φ_i, φ_i)    [transformation matrix u(x_j) → a_i]    (5.37)

where the {wj } are the Gaussian quadrature weights and where (φi , φi ) denotes the inner

product of the i-th basis function with itself (Appendix A).

Comparing (5.36) with (5.29) and (5.30), we see that

L = H M    (5.38)

H = L M^{-1}    (5.39)

Thus, we can use the grid points as unknowns even if we only have subroutines to compute the basis functions {φ_j(x)}, and don't want the bother of writing codes to evaluate the cardinal functions. We simply set up the matrix H and then multiply it on the right by M to obtain L. Conversely, we can convert from a cardinal function representation to a form that uses the series coefficients as the unknowns by multiplying the matrix L by M^{-1}. It is not necessary to invert M via Gaussian elimination because M^{-1} satisfies the equation

u = M^{-1} a    (5.40)


This shows that the inverse transformation matrix has elements given by

M^{-1}_ij ≡ φ_j(x_i)    [inverse transformation a_j → u(x_i)]    (5.41)

i.e. M^{-1} is simply the matrix that sums the spectral series to obtain grid point values of the solution.

Consequently, we can move freely back and forth from one representation to another

merely by multiplying one square matrix by another. Such transformations-by-matrix are

dubbed, in both directions, the Matrix Multiplication Transform (MMT) and are discussed

further in Chapter 10, Sec. 4. Unfortunately, shift-of-representation is a relatively expensive calculation: O(N^3) to calculate L from H or vice versa, and O(N^2) merely to calculate u from a or the reverse.
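As a concrete sketch of the MMT, the following builds the inverse transform matrix of (5.41) for a Chebyshev basis on the Lobatto grid, and obtains M by direct numerical inversion (a shortcut for illustration only; in practice M would come from the quadrature formula (5.37)):

```python
import numpy as np

N = 8
j = np.arange(N + 1)
x = np.cos(np.pi * j / N)                 # Chebyshev-Lobatto grid

# Inverse transform (5.41): Minv[i, n] = T_n(x_i) = cos(n * arccos(x_i))
Minv = np.cos(np.outer(np.arccos(x), np.arange(N + 1)))

# Forward transform M of (5.36); here obtained by inversion for brevity
M = np.linalg.inv(Minv)

a = np.zeros(N + 1); a[3] = 1.0           # coefficients of T_3(x)
u = Minv @ a                              # grid point values: an O(N^2) MMT
a_back = M @ u                            # recover the coefficients
print(np.max(np.abs(a_back - a)))         # round-trip error near machine epsilon
```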

For the special case of Fourier series and its clone, the Chebyshev expansion, the work

can be reduced by using the Fast Fourier Transform (FFT) discussed in Chapter 10.
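On the Lobatto grid the Chebyshev transform is, up to endpoint factors, a type-I discrete cosine transform, which costs O(N log N). A sketch (using an even extension and a real FFT; the normalization shown is one common convention, and the example function is ours):

```python
import numpy as np

N = 16
j = np.arange(N + 1)
x = np.cos(np.pi * j / N)                   # Chebyshev-Lobatto grid
f = np.exp(x)                               # sample function

# DCT-I via an even extension and a real FFT (O(N log N)):
# y_k = f_0 + (-1)^k f_N + 2 * sum_{j=1}^{N-1} f_j cos(pi j k / N)
g = np.concatenate([f, f[-2:0:-1]])         # even extension, length 2N
y = np.fft.rfft(g).real[: N + 1]

a = y / N                                   # Chebyshev coefficients of the interpolant
a[0] /= 2.0
a[-1] /= 2.0

# Check: summing the series reproduces the grid point values (interpolation property)
u = np.polynomial.chebyshev.chebval(x, a)
print(np.max(np.abs(u - f)))
```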

Chapter 6

Pseudospectral Methods for

Boundary Value Problems

“One must watch the convergence of a numerical code as carefully as a father watching his

four year old play near a busy road.”

- J. P. Boyd

6.1 Introduction

The goal of this chapter is to apply what we have learned in the previous five chapters to solving boundary value problems. Time-dependent equations require additional techniques developed later. There will be some repetition, with amplification, of key ideas already presented.

It is sufficient to restrict ourselves to linear boundary value equations. Nonlinear problems are usually solved through an iteration, the so-called Newton-Kantorovich method, in which a linear differential equation is solved at each step (Appendices C and D).

6.2 Choice of Basis Set

Table A-1 is a flow chart for choosing the basis set. On a sphere, use spherical harmonics for latitude and longitude. If the solution is periodic, use Fourier series. If the domain is finite, but u(x) is not a periodic function of x, Chebyshev polynomials are best. If the interval is unbounded, i.e., one or both endpoints are at |x| = ∞, then use the appropriate species of rational Chebyshev functions (TL_n(x) or TB_n(x)) (Chapter 17).

6.3 Boundary Conditions: Behavioral & Numerical

Boundary conditions may be divided into two broad categories: behavioral and numerical.

Periodicity is a behavioral condition: it demands that the solution u(x) have the property that u(x) = u(x + 2π) for any x, but this behavior does not impose any specific numerical value upon u(x) or its derivatives at any particular point. Another behavioral condition is


that of being bounded and infinitely differentiable at a point where the differential equation is singular. In contrast, conditions such as u(−1) = 3 and u(1) + du/dx(0) = 1.7 are numerical.

The significance of these categories is that it is almost always necessary to explicitly impose numerical boundary conditions. In contrast, behavioral conditions may be satisfied implicitly by choosing basis functions such that each has the required property or behavior.

For example, when the boundary conditions are that u(x) is periodic in x, then the ap-

propriate basis functions are the sines and cosines of a Fourier series. Since each basis

function is itself periodic, their sum will be periodic, too. Consequently, it is unnecessary

to explicitly impose periodicity on the numerical solution. Our choice of basis function has implicitly satisfied the boundary conditions.
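A small illustration of this implicit enforcement: differentiating a periodic function with a Fourier basis imposes no boundary condition anywhere, yet the result is automatically periodic because every basis function e^{ikx} is. (The example below is ours, not from the text.)

```python
import numpy as np

# Spectral derivative of u = exp(sin x) on [0, 2*pi): multiply each Fourier
# coefficient by i*k. No boundary condition is imposed at any point.
N = 32
x = 2 * np.pi * np.arange(N) / N
u = np.exp(np.sin(x))
k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers 0, 1, ..., -1
du = np.fft.ifft(1j * k * np.fft.fft(u)).real
err = np.max(np.abs(du - np.cos(x) * np.exp(np.sin(x))))
print(err)                                  # spectrally small
```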

Similarly, the two-dimensional basis functions known as “spherical harmonics” auto-

matically have the proper behavior on the surface of a sphere. It is never necessary to

impose additional explicit constraints after choosing spherical harmonics to represent the

latitude/longitude dependence of functions in spherical coordinates.

When the problem is posed on an unbounded interval, the coefficients of the equation are usually singular at the infinite endpoint or endpoints. As explained in Chapter 17 on infinite interval problems, this usually implies that only one of the linearly independent solutions of the differential equation is bounded and infinitely differentiable at the endpoints. By choosing a basis of analytic functions, we force the numerical approximation to have the desired behavior at infinity. On a doubly infinite interval, x ∈ [−∞, ∞], the rational Chebyshev functions TB_n(x) are a good basis. If, for example, the exact solution decays exponentially fast as |x| → ∞, the differential equation will then force the numerical solution to have the proper exponential decay even though the individual TB_n(x) do not decay at all!

Behavioral boundary conditions are possible on a bounded interval, too. The differential equation

(1 − x²) u_xx − x u_x + λ u(x) = 0,    x ∈ [−1, 1]    (6.1)

is singular at both endpoints. Nonetheless, u(x) may be nonsingular at x = ±1 if the eigenvalue λ is equal to any one of an infinite number of discrete values. Because the Chebyshev polynomials are individually analytic at the endpoints, their sum satisfies the behavioral boundary condition of analyticity at x = ±1. Consequently, it is possible to solve this singular differential equation, with an exponential rate of convergence, merely by substituting unmodified Chebyshev polynomials into (6.1) without any explicit constraints. In fact, with λ_n = n², the eigenfunctions of (6.1) are the Chebyshev polynomials T_n(x).
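This claim is easy to check numerically. The sketch below collocates (6.1) at the Lobatto points with unmodified Chebyshev polynomials and no boundary rows at all; because the operator maps polynomials of degree n to polynomials of degree n, the discrete eigenvalues come out as n² to rounding error. (The setup is our illustration, not code from the text.)

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 10
i = np.arange(N + 1)
x = np.cos(np.pi * i / N)                    # Lobatto collocation points

# Write (6.1) as A a = -lambda B a with
# A_ij = (1 - x_i^2) T_j''(x_i) - x_i T_j'(x_i)  and  B_ij = T_j(x_i)
A = np.empty((N + 1, N + 1))
B = np.empty((N + 1, N + 1))
for jj in range(N + 1):
    c = np.zeros(jj + 1); c[jj] = 1.0        # coefficients of T_jj
    B[:, jj] = C.chebval(x, c)
    A[:, jj] = ((1 - x**2) * C.chebval(x, C.chebder(c, 2))
                - x * C.chebval(x, C.chebder(c, 1)))

lam = np.sort(np.linalg.eigvals(np.linalg.solve(B, -A)).real)
print(lam)                                   # approximately 0, 1, 4, 9, ..., N^2
```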

Numerical boundary conditions, in contrast, must be explicitly imposed. There are two strategies: (i) reducing the number of collocation conditions on the residual of the differential equation, and using rows of the pseudospectral matrix to explicitly impose the constraint or (ii) modifying the problem (if need be) so that the boundary conditions of the modified problem are homogeneous and then altering the basis set so that the basis functions individually satisfy these conditions. We will refer to these two strategies as “boundary-bordering” and “basis recombination”, respectively, and discuss them at length in the next two sections.

In finite element theory, boundary conditions are classified as either “essential”, which must be explicitly imposed, or “natural” conditions, which are automatically satisfied by the approximate solution. This dual classification is not quite the same as the division between “behavioral” and “numerical” conditions discussed here because in most instances, both “essential” and “natural” conditions are numerical and the distinction is only between whether u(x) or its derivative is set equal to a particular number at the endpoints.


Nevertheless, from a programmer's standpoint, this duality is the same: “behavioral” and “natural” conditions can be ignored in the code whereas “numerical” and “essential” conditions require work. It is for this reason that Boyd (1978b, 1987a, b) borrowed the finite element terminology: for global expansion methods, “numerical” boundary conditions are most definitely “essential”!

6.4 “Boundary-Bordering” for “Numerical” Boundary Conditions

To generate a “boundary-bordered” pseudospectral matrix of dimension N for a second order ordinary differential equation boundary value problem, we demand that the residual of the differential equation should vanish at (N − 2) interpolation points on the interior of the interval, and then allocate the top two rows of the matrix to impose the boundary conditions. The reason for the name “boundary-bordering” is that the “border” of the matrix (in this case, the first and second rows) explicitly enforces the boundary conditions.

For example, suppose the problem is

u_xx + q(x) u = f(x)  &  u_x(−1) = α  and  u(1) = β    (6.2)

The Chebyshev grid is

x_i = cos(π i / (N − 1)),    i = 1, 2, . . . , (N − 2)    (6.3)

The matrix discretization of (6.2) is

L a = F    (6.4)

where the elements of the column vector a are the Chebyshev coefficients of u(x),

u(x) ≈ Σ_{j=1}^{N} a_j T_{j−1}(x)    (6.5)

and where

L_{i+2, j} = T_{j−1, xx}(x_i) + q(x_i) T_{j−1}(x_i),    i = 1, . . . , (N − 2),  j = 1, . . . , N    (6.6)

F_{i+2} = f(x_i),    i = 1, . . . , (N − 2)    (6.7)

where T_{j−1, xx} denotes the second x-derivative of the (j−1)-st Chebyshev polynomial. The

top two rows of the matrix impose the boundary conditions:

L_{1, j} = T_{j−1, x}(−1)  &  F_1 = α    (6.8a)

L_{2, j} = T_{j−1}(1)  &  F_2 = β    (6.8b)

where j = 1, . . . , N and subscript x denotes differentiation with respect to x.
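Putting (6.3)-(6.8) together, a minimal sketch of the boundary-bordered solve (the manufactured test problem, with exact solution u = exp(x), is our own choice for verification):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def solve_bvp(N, q, f, alpha, beta):
    """Boundary-bordered Chebyshev solve of u_xx + q(x) u = f(x),
    u_x(-1) = alpha, u(1) = beta, following (6.3)-(6.8)."""
    i = np.arange(1, N - 1)
    x = np.cos(np.pi * i / (N - 1))           # interior collocation points (6.3)
    L = np.zeros((N, N))
    F = np.zeros(N)
    for jj in range(N):                        # basis function T_{j-1}
        c = np.zeros(jj + 1); c[jj] = 1.0
        L[0, jj] = C.chebval(-1.0, C.chebder(c, 1))   # boundary row (6.8a)
        L[1, jj] = C.chebval(1.0, c)                  # boundary row (6.8b)
        # interior collocation rows (6.6)
        L[2:, jj] = C.chebval(x, C.chebder(c, 2)) + q(x) * C.chebval(x, c)
    F[0], F[1] = alpha, beta
    F[2:] = f(x)                               # right-hand side (6.7)
    return np.linalg.solve(L, F)               # Chebyshev coefficients a

# Manufactured test: u = exp(x) with q = 1 gives f = 2 exp(x)
a = solve_bvp(16, q=lambda x: 1.0, f=lambda x: 2 * np.exp(x),
              alpha=np.exp(-1.0), beta=np.e)
xx = np.linspace(-1, 1, 101)
err = np.max(np.abs(C.chebval(xx, a) - np.exp(xx)))
print(err)                                     # spectrally small
```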

Eqs. (6.6) to (6.8) easily generalize. A fourth order differential equation, for example, would require four boundary rows instead of two. A Dirichlet boundary condition on u(−1) would simply replace the first derivative of the Chebyshev polynomials by the polynomials themselves in (6.8a). The non-boundary elements of the square matrix L, given by
