differential equation at the i-th interpolation point where j is the column index and (i + 2) is the row index. There are some subtleties for partial differential equations which will be discussed later in the chapter.

The “boundary-bordering” method is applicable to even the most bizarre boundary conditions. For a given N, this approach does give a slightly larger matrix (dimension N versus (N − 2) for a second order ODE) in comparison to the basis recombination method, but for large N, this is compensated by the higher cost of calculating the matrix elements in the latter technique.

6.5 “Basis Recombination” and “Homogenization of Boundary Conditions”

If the problem

Lu = f    (6.9)

has inhomogeneous boundary conditions, then it may always be replaced by an equivalent problem with homogeneous boundary conditions, so long as the boundary conditions are linear. The first step is to choose a simple function B(x) which satisfies the inhomogeneous boundary conditions. One may then define a new variable v(x) and new forcing function g(x) via

u(x) ≡ v(x) + B(x)    (6.10)

g(x) ≡ f(x) − LB(x)    (6.11)

so that the modified problem is

Lv = g    (6.12)

where v(x) satisfies homogeneous boundary conditions. For simplicity, (6.11) and (6.12) implicitly assume that L is a linear operator, but (6.10) and this technique for “homogenizing the boundary conditions” are applicable to nonlinear differential equations, too, if the boundary conditions are linear.
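As a concrete sketch of this homogenization step (the Dirichlet conditions, the values of α and β, and the use of NumPy are illustrative assumptions, not taken from the text): for u(−1) = α, u(1) = β, the lowest-order shift function B(x) is simply the straight line through the boundary data.

```python
import numpy as np

# Illustrative homogenization: lift the inhomogeneous Dirichlet data
# u(-1) = alpha, u(1) = beta with the linear shift function B(x).
# The new unknown v(x) = u(x) - B(x) then satisfies v(-1) = v(1) = 0.
alpha, beta = 2.0, -1.0

def B(x):
    # Straight line with B(-1) = alpha and B(1) = beta
    return 0.5 * (beta + alpha) + 0.5 * (beta - alpha) * x

assert np.isclose(B(-1.0), alpha) and np.isclose(B(1.0), beta)
# If L = d^2/dx^2, then L B = 0 and the new forcing is simply g(x) = f(x).
```

For higher-order or mixed conditions the same idea applies; only the degree of the interpolating polynomial grows.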

The shift function B(x) is arbitrary except for the constraint that it must satisfy all the inhomogeneous boundary conditions. However, the simplest choice is the best: polynomial interpolation of the lowest order that works. For example, consider a two-point ODE boundary value problem with the mixed conditions

u_x(−1) = γ  &  (du/dx)(1) − u(1) = δ    (6.13)

Assume

B(x) = Λ + Ω x    (6.14)

and then choose the two undetermined coefficients by demanding that B(x) satisfy the boundary conditions. For example, in case (ii)

(dB/dx)(−1) = γ  ⇔  Ω = γ

(dB/dx)(1) − B(1) = δ  ⇔  Ω − (Λ + Ω) = δ  ⇔  Λ = −δ    (6.15)
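A quick numerical check of (6.15); the values of γ and δ are arbitrary illustrative choices.

```python
# Verify (6.15): with B(x) = Lam + Om*x, the conditions B_x(-1) = gamma
# and B_x(1) - B(1) = delta force Om = gamma and Lam = -delta.
gamma, delta = 3.0, 0.5
Om = gamma          # from B_x(-1) = Om = gamma
Lam = -delta        # from Om - (Lam + Om) = delta

def B(x):
    return Lam + Om * x

dBdx = Om           # B is linear, so dB/dx is the constant Om
assert abs(dBdx - gamma) < 1e-14           # first condition of (6.13)
assert abs(dBdx - B(1.0) - delta) < 1e-14  # second condition of (6.13)
```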


Once B(x) has been found and the problem transformed to the new unknown v(x), the next step is to actually do “basis recombination”: choosing simple linear combinations of the original basis functions so that these combinations, the new basis functions, individually satisfy the homogeneous boundary conditions. For example, suppose that

v(−1) = v(1) = 0    (6.16)

The identity T_n(cos(t)) = cos(nt) for t = 0, π implies

T_2n(±1) = 1  &  T_2n+1(±1) = ±1    (6.17)

so a good choice of basis functions such that φ_n(±1) = 0 for all n is

φ_2n(x) ≡ T_2n(x) − 1,    n = 1, 2, . . .    (6.18a)

φ_2n+1(x) ≡ T_2n+1(x) − x,    n = 1, 2, . . .    (6.18b)

Note the indices in (6.18): because of the two boundary constraints, T_0 (≡ 1) and T_1 (≡ x) are no longer independent basis functions, but rather are determined by the coefficients of all the higher functions.
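These recombined basis functions are easy to verify numerically; the following sketch (assuming NumPy's numpy.polynomial.chebyshev module) checks that the φ_n of (6.18) vanish at both endpoints.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Check that the recombined basis (6.18) vanishes at x = +/- 1:
#   phi_2n = T_2n - 1,   phi_2n+1 = T_2n+1 - x,   n = 1, 2, ...
def phi(n, x):
    Tn = C.Chebyshev.basis(n)(x)
    return Tn - 1.0 if n % 2 == 0 else Tn - x

for n in range(2, 12):
    assert abs(phi(n, 1.0)) < 1e-12 and abs(phi(n, -1.0)) < 1e-12
```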

The collocation points are the same as for boundary bordering: the (N − 2) interior points of an N-point Gauss-Lobatto grid. The residual is not evaluated at the endpoints because the boundary conditions are implicitly used at these points instead.

We write

v(x) ≈ Σ_{n=2}^{N−1} b_n φ_n(x)    (6.19)

If we define a column vector b of dimension (N − 2) whose i-th element is b_{i+1}, then the differential equation becomes the matrix problem

H b = G    (6.20)

where

H_ij ≡ φ_{j+1,xx}(x_i) + q(x_i) φ_{j+1}(x_i),    i, j = 1, . . . , N − 2    (6.21)

where the double subscript x denotes double x-differentiation and

G_i ≡ g(x_i),    i = 1, 2, . . . , (N − 2)    (6.22)
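The assembly and solution of (6.20)–(6.22) can be sketched as follows; the coefficient q(x) = x, the manufactured solution v(x) = sin(πx), the value N = 24, and the error tolerance are all illustrative choices, not from the text.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sketch of (6.20)-(6.22): solve v_xx + q(x) v = g(x), v(+/-1) = 0, by
# collocation in the recombined basis (6.18).  The manufactured solution
# v(x) = sin(pi x) is used to verify the result.
N = 24
q = lambda x: x
v_exact = lambda x: np.sin(np.pi * x)
g = lambda x: -np.pi**2 * v_exact(x) + q(x) * v_exact(x)

# Interior points of the N-point Gauss-Lobatto grid
i = np.arange(2, N)                            # i = 2, ..., N-1
x = np.cos(np.pi * (i - 1) / (N - 1))

def phi_n(n, x):
    """phi_n and its second derivative: phi_n = T_n - (T_0 or T_1)."""
    T = C.Chebyshev.basis(n)
    low = 1.0 if n % 2 == 0 else x             # subtracted T_0 or T_1
    return T(x) - low, T.deriv(2)(x)           # second derivative of T_0, T_1 is 0

H = np.empty((N - 2, N - 2))
for j, n in enumerate(range(2, N)):            # columns <-> phi_2, ..., phi_{N-1}
    pn, pnxx = phi_n(n, x)
    H[:, j] = pnxx + q(x) * pn                 # (6.21)
b = np.linalg.solve(H, g(x))                   # (6.20), with G_i = g(x_i)

# Compare the phi-series with the exact solution on a fine grid
xt = np.linspace(-1.0, 1.0, 101)
v = sum(bj * phi_n(n, xt)[0] for bj, n in zip(b, range(2, N)))
assert np.max(np.abs(v - v_exact(xt))) < 1e-8
```

Because sin(πx) has rapidly decaying Chebyshev coefficients, the error at N = 24 is already far below the tolerance asserted here.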

After solving (6.20) to compute the coefficients of the φ-series for v(x), it is trivial to convert the sum to an ordinary Chebyshev series

v(x) ≡ Σ_{n=0}^{N−1} a_n T_n(x)    (6.23)

by noting

a_n = b_n,    n ≥ 2    (6.24)

a_0 = − Σ_{n=1}^{(2n) ≤ (N−1)} b_{2n}  &  a_1 = − Σ_{n=1}^{(2n+1) ≤ (N−1)} b_{2n+1}    (6.25)
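A short sketch of the conversion (6.24)–(6.25); the random test coefficients and N = 8 are illustrative choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sketch of (6.24)-(6.25): convert the phi-series coefficients b_n into an
# ordinary Chebyshev series a_n, using phi_2n = T_2n - 1, phi_2n+1 = T_2n+1 - x.
N = 8
b = np.zeros(N)
b[2:] = np.random.default_rng(0).standard_normal(N - 2)   # b_2, ..., b_{N-1}

a = b.copy()                    # a_n = b_n for n >= 2       (6.24)
a[0] = -np.sum(b[2::2])         # a_0 = -(b_2 + b_4 + ...)   (6.25)
a[1] = -np.sum(b[3::2])         # a_1 = -(b_3 + b_5 + ...)   (6.25)

# The two series agree everywhere (and hence both vanish at x = +/- 1)
for xv in (-1.0, -0.3, 0.7, 1.0):
    phi_sum = sum(b[n] * (C.Chebyshev.basis(n)(xv) - (1.0 if n % 2 == 0 else xv))
                  for n in range(2, N))
    assert abs(C.chebval(xv, a) - phi_sum) < 1e-12
```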

CHAPTER 6. PSEUDOSPECTRAL METHODS FOR BVPS

114

By using the identity

d^p T_n/dx^p |_{x=±1} = (±1)^{n+p} Π_{k=0}^{p−1} (n² − k²)/(2k + 1),    (6.26)

one can derive similar basis sets for other boundary conditions. As an illustration, we give a set of functions that individually satisfy homogeneous Neumann conditions:

φ_2n(x) ≡ 1,    n = 0
φ_2n(x) ≡ T_2n(x) − [n²/(n + 1)²] T_2n+2(x),    n = 1, 2, . . .

φ_2n+1(x) ≡ T_2n+1(x) − [(2n + 1)/(2n + 3)]² T_2n+3(x),    n = 0, 1, . . .

{φ_n,x(±1) = 0}    (6.27)
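As with the Dirichlet set, the Neumann basis (6.27) can be verified numerically; this sketch (assuming NumPy) checks that each φ_n has zero slope at both endpoints.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Check that the Neumann basis (6.27) satisfies phi_n,x(+/-1) = 0.
def phi_neumann(m):
    if m == 0:
        return C.Chebyshev([1.0])                  # phi_0 = 1
    if m % 2 == 0:                                 # m = 2n, n >= 1
        n = m // 2
        c = (n / (n + 1)) ** 2
    else:                                          # m = 2n + 1, n >= 0
        n = (m - 1) // 2
        c = ((2 * n + 1) / (2 * n + 3)) ** 2
    return C.Chebyshev.basis(m) - c * C.Chebyshev.basis(m + 2)

for m in range(10):
    d = phi_neumann(m).deriv()
    assert abs(d(1.0)) < 1e-12 and abs(d(-1.0)) < 1e-12
```

The coefficients follow directly from (6.26) with p = 1, which gives T_n,x(±1) = (±1)^{n+1} n².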

This method is superior to “boundary-bordering” for eigenvalue problems whose boundary conditions do not involve the eigenvalue because “basis recombination” gives a spectral matrix which contains the eigenvalue in every row, and therefore can be attacked with standard library software. “Boundary-bordering” would give two rows which do not contain the eigenvalue, and this wrecks some library eigenvalue-solvers.

“Basis recombination” also gives a smaller matrix than boundary bordering: the dimension of H is only (N − 2) although the final answer (6.23) is the sum of the first N Chebyshev polynomials. In practice, because the matrix elements for the modified basis functions are more complicated to compute than those involving the unmodified basis functions of the “boundary-bordering” method, this advantage is non-existent except for very small N. (In Chapter 20, we show that “basis recombination” is almost always the method of choice for analytical, paper-and-pencil or algebraic manipulation language calculations.)

In the general case, the great advantage of “basis recombination” is conceptual simplicity. After we have shifted to the modified basis set, we can thenceforth ignore the boundary conditions and concentrate solely on the differential equation. The form of the pseudospectral matrix is the same for numerical boundary conditions as for behavioral boundary conditions: all the rows are derived from setting the residual of the differential equation equal to zero at a given collocation point.

In many situations, the differences between basis recombination and boundary bordering have little practical significance. Karageorghis (1993b) has discussed the relationship between basis recombination and boundary bordering formulations for the pseudospectral method in rectangles.

6.6 Basis Recombination for PDEs: Transfinite Interpolation

For partial differential equations, the principles are quite similar. Suppose, for example, that on the sides of the unit square [−1, 1] × [−1, 1], the boundary conditions are that

u(−1, y) = fW(y),  u(1, y) = fE(y),  u(x, −1) = fS(x),  u(x, 1) = fN(x)    (6.28)

We assume that the boundary conditions are continuous around the wall so that fW(−1) = fS(−1), or in other words that two boundary functions which share a corner give the


same values at that corner. (This condition is necessary to avoid strong singularities in the corners; the solution may be weakly singular at the corners even when the boundary conditions are continuous.) Then the so-called “transfinite” interpolation formula gives a quadratic polynomial in x and y which satisfies the boundary conditions:

B(x, y) ≡ [(1 − x)/2] fW(y) + [(x + 1)/2] fE(y) + [(1 − y)/2] fS(x) + [(y + 1)/2] fN(x)
    − [(1 − x)(1 − y)/4] u(−1, −1) − [(1 − x)(y + 1)/4] u(−1, 1)
    − [(x + 1)(1 − y)/4] u(1, −1) − [(x + 1)(y + 1)/4] u(1, 1)    (6.29)
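A sketch of (6.29) in code; the function u(x, y) below is an arbitrary illustrative choice, used only to generate boundary data that are mutually consistent at the corners.

```python
import numpy as np

# Sketch of the transfinite interpolation formula (6.29) on [-1,1] x [-1,1].
# u(x, y) generates corner-compatible boundary data (fW(-1) = fS(-1), etc.).
u = lambda x, y: np.exp(x) * np.cos(y) + x * y
fW = lambda y: u(-1.0, y)
fE = lambda y: u(1.0, y)
fS = lambda x: u(x, -1.0)
fN = lambda x: u(x, 1.0)

def B(x, y):
    lin = (0.5 * (1 - x) * fW(y) + 0.5 * (x + 1) * fE(y)
           + 0.5 * (1 - y) * fS(x) + 0.5 * (y + 1) * fN(x))
    corr = (0.25 * (1 - x) * (1 - y) * u(-1, -1) + 0.25 * (1 - x) * (y + 1) * u(-1, 1)
            + 0.25 * (x + 1) * (1 - y) * u(1, -1) + 0.25 * (x + 1) * (y + 1) * u(1, 1))
    return lin - corr

# B reproduces the data on all four sides, as in the check (6.30)
for t in np.linspace(-1.0, 1.0, 7):
    assert abs(B(-1.0, t) - fW(t)) < 1e-12
    assert abs(B(1.0, t) - fE(t)) < 1e-12
    assert abs(B(t, -1.0) - fS(t)) < 1e-12
    assert abs(B(t, 1.0) - fN(t)) < 1e-12
```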

The linear terms (first row) are the same as for two independent one-dimensional interpolation problems. The second and third rows of quadratic terms correct for “interference” between the linear terms in the first row. Along the eastern boundary x = 1, for example,

B(1, y) = fE(y) + [(1 − y)/2] fS(1) + [(y + 1)/2] fN(1) − [(1 − y)/2] u(1, −1) − [(y + 1)/2] u(1, 1)
        = fE(y)    (6.30)

as desired because the assumption of continuity around the boundary implies that fS(1) = u(1, −1) and fN(1) = u(1, 1).

Nakamura (1996) gives a good treatment of transfinite interpolation with additional generalizations of the concept.

6.7 The Cardinal Function Basis

A cardinal function basis can always be used in place of the Chebyshev polynomials:

u(x) = Σ_{j=1}^{N} u_j C_j(x)    (6.31)

where the cardinal functions, as in Chapter 5, are defined by the requirement that C_j(x_i) = 1 if i = j and 0 otherwise. The unknowns u_j are now the values of u(x) at the interpolation points, i. e., u_j ≡ u(x_j).

The cardinal basis is very convenient with Dirichlet boundary conditions. With the “extrema-plus-endpoints” grid, one may enforce the boundary conditions v(x_1) = v(x_N) = 0 merely by omitting the two cardinal functions corresponding to the endpoints, simplifying (6.31) for the new unknown v(x) to

v(x) ≈ Σ_{j=2}^{N−1} v_j C_j(x)    (6.32)

If we define a column vector v of dimension (N − 2) whose i-th element is v_{i+1}, then the differential equation becomes the matrix problem

H v = G    (6.33)

where

H_ij ≡ C_{j+1,xx}(x_{i+1}) + q(x_{i+1}) C_{j+1}(x_{i+1}),    i, j = 1, . . . , N − 2    (6.34)


where the double subscript x denotes double x-differentiation and the elements of G are g(x_i) as in (6.22). The Gauss-Lobatto grid is:

x_i = cos(π(i − 1)/(N − 1)),    i = 2, . . . , N − 1    (6.35)

What is most striking, however, is how little is changed by the shift from the polynomial basis to the cardinal function basis. In particular, the unknowns are the coefficients of a spectral series (either polynomial or cardinal function), the matrix elements of H are identical except for the replacement of one set of basis functions by another, and the column vector on the right-hand side is unchanged.

The truth is that if the basis functions individually satisfy the boundary conditions, the abstract formalism, that is, the formulas for the matrix elements, is independent of the basis set. Only the numerical values of the matrix elements change when we replace Chebyshev polynomials by Legendre polynomials, or T_j(x) by the corresponding cardinal function.