
The matrix problem is always

Ha = G   (6.36)

where

Gi = g(xi)   (6.37)

Hij = (L φj)|x=xi   (6.38)

where g(x) is the inhomogeneous term in the differential equation, L is the operator of the
differential equation, and the notation in (6.38) means that the elements of H are the result of
applying the operator L to the j-th basis function and then evaluating the resulting function
at the i-th grid point. The column vector a contains the unknown coefficients of the spectral
series. In the cardinal function basis, these coefficients are also the grid point values of u(x).
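
As a concrete illustration, here is a minimal sketch in Python (the book's own programs are in FORTRAN) of assembling and solving (6.36)-(6.38). The names Lphi, g, and grid are illustrative placeholders for user-supplied ingredients, not part of any standard library.

import numpy as np

def solve_collocation(Lphi, g, grid):
    # Assemble the matrix problem Ha = G of (6.36):
    #   G[i]    = g(x_i)             (6.37)
    #   H[i, j] = (L phi_j)(x_i)     (6.38)
    # Lphi(j, x) must return the operator L applied to the j-th basis
    # function, evaluated at the point x; g(x) is the inhomogeneous term.
    N = len(grid)
    H = np.empty((N, N))
    G = np.empty(N)
    for i, xi in enumerate(grid):
        G[i] = g(xi)
        for j in range(N):
            H[i, j] = Lphi(j, xi)
    return np.linalg.solve(H, G)   # the column vector a of coefficients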


6.8 The Interpolation Grid
As explained in Chapter 4, every standard basis function has one or two optimal grids
associated with it. The Fourier, Chebyshev, and rational Chebyshev grids are given by an-
alytical formulas involving nothing more exotic than trigonometric functions (Appendix
F). When a given basis function has two “good” grids, the choice between them is strictly
one of convenience. For Legendre polynomials, Hermite functions, Laguerre functions
and spherical harmonics, the grid points are not known in analytical closed form except
for small N. (Legendre-Lobatto grids with up to nine points are given by previously unpublished
analytical formulas in Appendix F.) However, numerical tables for these points
are given in mathematical handbooks such as Abramowitz and Stegun (1965) and may be
calculated ab initio using the subroutines given in Canuto et al. (1988), Appendix C.
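
For the analytically known grids, the formulas are one-liners. The following sketch, in Python for concreteness, returns the two standard Chebyshev grids of Appendix F; the function names are ours.

import numpy as np

def chebyshev_roots_grid(N):
    # "Roots" (Gauss) grid: the N interior zeros of T_N(x)
    i = np.arange(N)
    return np.cos(np.pi * (2.0 * i + 1.0) / (2.0 * N))

def chebyshev_lobatto_grid(N):
    # "Extrema-plus-endpoints" (Gauss-Lobatto) grid: N + 1 points including x = +1 and x = -1
    i = np.arange(N + 1)
    return np.cos(np.pi * i / N)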


6.9 Computing the Basis Functions and Their Derivatives
All the standard basis functions without exception (including Fourier) may be computed via
three-term recurrence relations as given in Appendix A. These recurrences may be evalu-
ated as a single DO loop. What could be simpler?
Of course, to solve differential equations we also need the derivatives of the functions.
However, the derivative of the n-th Hermite polynomial is proportional to the (n − 1)-st

Hermite polynomial. The Chebyshev and Legendre polynomials belong to a family of
polynomials known as the “Gegenbauer polynomials”, which may be labelled by a pa-
rameter α. The derivative of any member of this family is another polynomial within the
family: n is reduced by one while α is increased by one. It follows that one may always evalu-
ate the derivatives of Chebyshev and Legendre polynomials to arbitrary order by three-term
recurrence. All the needed formulas are in Appendix A.
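
As a sketch of the recurrence approach (in Python here, although the book's programs are FORTRAN), the following computes T_0, ..., T_{N-1} and their first derivatives at a point, using the identity T_n' = n U_{n-1}, where U_n is the Chebyshev polynomial of the second kind, i.e. the member of the Gegenbauer family with α raised by one.

import numpy as np

def chebyshev_values_and_derivatives(N, x):
    # Three-term recurrences: T_{n+1} = 2x T_n - T_{n-1}, and the same
    # recurrence for U_n; the derivative then follows from T_n' = n U_{n-1}.
    T, U, dT = np.empty(N), np.empty(N), np.empty(N)
    T[0], U[0], dT[0] = 1.0, 1.0, 0.0
    if N > 1:
        T[1], U[1], dT[1] = x, 2.0 * x, 1.0
    for n in range(1, N - 1):
        T[n + 1] = 2.0 * x * T[n] - T[n - 1]
        U[n + 1] = 2.0 * x * U[n] - U[n - 1]
        dT[n + 1] = (n + 1.0) * U[n]
    return T, dT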
Texts tend to discuss these recurrences as if they were the only option because for Leg-
endre, Hermite, Laguerre, and spherical harmonics, they are the only option. For those
basis sets which are a disguised Fourier series, i.e., Chebyshev polynomials and the ra-
tional Chebyshev functions for infinite and semi-infinite intervals, we may alternatively
compute the basis functions and their derivatives using trigonometric functions.
If x is the argument of the Chebyshev polynomials and t the argument of the trigono-
metric functions, then

x = cos(t) ↔ t = arccos(x),   x ∈ [−1, 1] & t ∈ [0, π]   (6.39)

Tn(x) = cos(nt)   (6.40)

dTn/dx = [−1/sin(t)] d[cos(nt)]/dt = n sin(nt)/sin(t)   (6.41)

d²Tn/dx² = [−1/sin(t)] d/dt { [−1/sin(t)] d[cos(nt)]/dt }   (6.42)

         = −n² cos(nt)/sin²(t) + n cos(t) sin(nt)/sin³(t)   (6.43)

and so on, repeatedly applying the elementary identity d/dx ↔ [−1/sin(t)] d/dt when x
and t are connected via (6.39). The strategy for the rational Chebyshev functions is similar;
such change-of-coordinate tricks are so useful that we will devote a whole chapter to the
subject.
The first table of Chapter 16 on coordinate-transformation methods gives a FORTRAN
routine for computing Tn(x) and its first four derivatives. It contains only 11 executable
statements. All the details of the change-of-coordinate are buried in this subroutine; the
calling program is blissfully ignorant.
There is one minor complication: the derivative formulas have numerators and de-
nominators that are singular as t → 0, π, which correspond to x → ±1. Analytically, these
formulas have finite limits that can be derived by l’Hôpital’s rule, which is simply the Tay-
lor expansion of both numerator and denominator about their common zero, but blind
numerical application of (6.41) and (6.43) will give overflow. In practice, this is no diffi-
culty because (6.26) is a simple analytical expression for the exact endpoint values of
the p-th derivative of Tn(x) for arbitrary p. Consequently, we may avoid overflow merely
by adding an IF statement to switch to the boundary derivative formula at the endpoints.
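
A sketch of this strategy in Python, assuming only the standard endpoint limits Tn'(±1) = (±1)^(n+1) n² and Tn''(±1) = (±1)^n n²(n² − 1)/3 (the p = 1, 2 cases of the boundary derivative formula); the interior values follow (6.40)-(6.43):

import numpy as np

def chebyshev_trig(n, x):
    # T_n, T_n', T_n'' at a single point x in [-1, 1] via x = cos(t).
    # At the endpoints the trig quotients are 0/0, so an IF statement
    # switches to the exact l'Hopital limits.
    if x >= 1.0:
        return 1.0, float(n * n), n * n * (n * n - 1) / 3.0
    if x <= -1.0:
        sgn = (-1.0) ** n
        return sgn, -sgn * n * n, sgn * n * n * (n * n - 1) / 3.0
    t = np.arccos(x)
    s = np.sin(t)
    Tn = np.cos(n * t)                                                  # (6.40)
    dTn = n * np.sin(n * t) / s                                         # (6.41)
    d2Tn = -n * n * Tn / s**2 + n * np.cos(t) * np.sin(n * t) / s**3    # (6.43)
    return Tn, dTn, d2Tn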
After using recurrences early in my career (Boyd, 1978a) and trigonometric derivative
formulas in recent years, I personally find the latter are simpler. However, the choice is
mostly a matter of habit and preference. The trigonometric method requires fewer loops,
but demands the evaluation of transcendentals; however, the cost of evaluating the basis
functions is usually a negligible fraction of total running time. The trigonometric formu-
las become less accurate near the boundaries because both numerator and denominator
are tending to 0, especially for the higher derivatives, but the derivative (Gegenbauer) re-
currences are mildly unstable, especially for the higher derivatives. In spite of these mild
deficiencies, however, both approaches are usually accurate to within one or two digits
of machine precision for first and second order derivatives. Schultz, Lee and Boyd (1989)
show that the trigonometric form is a little more accurate than recurrence.

Both because it is my personal preference and also because this method has been largely
ignored in the literature, trigonometric derivative formulas are used in the sample program
at the end of the chapter.
It is most efficient to write a single subroutine that will return arrays containing the
values of all N polynomials and their derivatives at a single point; this simultaneous com-
putation reduces the number of subroutine calls.
To implement basis recombination, it is easy to appropriately modify the basis-computing
subroutine. Spectral methods lend themselves well to “data-encapsulation”: the main pro-
gram never needs to know even the identity of the basis set, only that there is a subrou-
tine BASIS that can be called to return arrays with the values of the basis functions at a
given x.
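
A sketch of such a BASIS routine in Python; the recombined functions φn ≡ T_{n+2} − T_n, which vanish at x = ±1 because T_m(±1) = (±1)^m, are one common choice for homogeneous Dirichlet conditions, not the only possibility.

import numpy as np

def BASIS(x, N, recombine=False):
    # Return an array with the values of the N basis functions at the
    # point x; the calling program never learns what the basis actually is.
    T = np.empty(N + 2)
    T[0], T[1] = 1.0, x
    for n in range(1, N + 1):
        T[n + 1] = 2.0 * x * T[n] - T[n - 1]
    if recombine:
        # basis recombination: each phi_n vanishes at both endpoints
        return T[2:N + 2] - T[0:N]
    return T[0:N]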
In the cardinal basis, the trigonometric formulas are unnecessary. Analytical formulas
for derivatives at the grid points for the standard Fourier, Legendre and Chebyshev cardi-
nal functions are given in Appendix F.


6.10 Special Problems of Higher Dimensions: Indexing
To solve a boundary value problem in two dimensions, it is most efficient to use a tensor
product basis, that is, choose basis functions that are products of one-dimensional basis
functions. Thus, on the unit square [−1, 1] × [−1, 1],

Φmn(x, y) ≡ φm(x) φn(y)   (6.44)
m = 1, . . . , Nx & n = 1, . . . , Ny

As in one dimension, one may apply either “boundary-bordering” or “basis recombina-
tion” to deal with the boundary conditions. Similarly, the interpolation grid is a tensor
product, the Nx Ny points

xij ≡ (xi, yj)   (6.45)
i = 1, . . . , Nx & j = 1, . . . , Ny

One complication is that the grid points, the basis functions, and the coefficients of the
spectral series now require two indices. The rules of matrix algebra, however, allow only
one index for a vector and two for a square matrix, not four (i, j, m, n). The simplest
remedy is to write a short subroutine to perform the preprocessing step of collapsing two
indices into one and vice versa.
Thus, let ix and iy denote one-dimensional indices and let I vary over the range 1, 2, . . . , Nx Ny.
Let x(ix) and y(iy) denote functions that compute the points on the one-dimensional grids.
Then trace the double loop
I = 0
for ix = 1 to Nx
   for iy = 1 to Ny
      I = I + 1
      XA(I) = x(ix),  YA(I) = y(iy)
      MA(I) = ix,     NA(I) = iy
      IA(MA(I), NA(I)) = I
end loops
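
A runnable rendering of this loop in Python (with 0-based indices rather than the 1-based indices above); the one-dimensional grids x(ix) and y(iy) are taken to be Chebyshev-Lobatto points for concreteness.

import numpy as np

Nx, Ny = 10, 10
xg = np.cos(np.pi * np.arange(Nx) / (Nx - 1))   # x(ix)
yg = np.cos(np.pi * np.arange(Ny) / (Ny - 1))   # y(iy)

XA = np.empty(Nx * Ny); YA = np.empty(Nx * Ny)
MA = np.empty(Nx * Ny, dtype=int); NA = np.empty(Nx * Ny, dtype=int)
IA = np.empty((Nx, Ny), dtype=int)

I = 0
for ix in range(Nx):
    for iy in range(Ny):
        # a circular truncation would skip pairs here, e.g.
        # if np.sqrt(ix**2 + iy**2) > Nx: continue
        XA[I], YA[I] = xg[ix], yg[iy]
        MA[I], NA[I] = ix, iy
        IA[ix, iy] = I
        I += 1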
For example, suppose Nx = Ny = 10. The loop will then compute four one-dimensional
arrays which each contain 100 elements. Each value of the “collapsed” index I specifies
a unique point on the two-dimensional grid, and also a unique, two-dimensional basis
function. When we compute the 100 × 100 pseudospectral matrix, the 70th row of the
matrix is determined by evaluating the residual of the differential equation at the 70th
grid point. What are the x and y values of that grid point? XA(70) and YA(70), respectively.

Similarly, the 31st column of the square matrix is the contribution of the 31st basis function
to the differential equation residual at all the points on the grid. MA(31) and NA(31) give
us the subscripts (m, n) in (6.44) so that we can compute this basis function. In symbols,

xI ≡ (XA(I), YA(I))   (6.46)
I = 1, . . . , Nx · Ny
ΦI(x, y) ≡ φMA(I)(x) φNA(I)(y)   (6.47)

Thus, the four one-dimensional arrays associate a single index I with pairs of one-dimensional
indices of smaller range. The two-dimensional array IA is used to go in the opposite direc-
tion: to compute I given (m, n). This is useful in printing a table of the spectral coefficients
as functions of m and n.
As discussed in Chapter 8, it is often possible to greatly reduce the work by exploit-
ing the symmetries of the problem. In higher spatial dimensions, these can become rather
complicated. Boyd (1986c) describes an eigenvalue problem with an eight-fold symmetry.
One may reduce the size of the basis by this same factor of eight by choosing a new basis,
formed from linear combinations of Chebyshev polynomials, which all possess the same
eight-fold symmetry as the solution. However, the symmetry-respecting basis is no longer
a simple tensor product basis.
Another motive for rejecting a tensor product basis is to apply a “circular” rather than
“rectangular” truncation. It is a little inconsistent to retain basis functions such as φ10 (x) φ10 (y)
while omitting functions such as φ11 (x) φ0 (y) because the latter is actually smoother than
the former. If we were using complex Fourier terms as the building blocks of our basis,
then exp(i[10x + 10y]) has a much larger total wavenumber than a deleted term such as
exp(i 11x), where the total wavenumber of exp(imx + iny) is k ≡ √(m² + n²). For this rea-
son, many prefer the “circular” truncation,

√(m² + n²) ≤ Nx   [“circular”]   (6.48)

instead of the simpler “rectangular” truncation, which keeps all basis functions whose one-
dimensional indices satisfy the inequalities

m ≤ Nx & n ≤ Ny   [“rectangular”]   (6.49)

As one might imagine, symmetry-modified basis sets and circular truncation generate a
complicated relationship between the one-dimensional indices (m, n) and the “collapsed”
index I. However, the double loop strategy makes it quite unnecessary to have an analytical
relationship between indices. It is merely necessary to insert the proper IF statements into
the preprocessing double LOOP (to skip the assignment statements if √(ix² + iy²) > Nx,
for example). The rest of the program (everything outside this preprocessing subroutine
and the code that computes the basis functions) is shielded from the complexities of the
modified basis set. The basis function subroutine never performs any index calculations;
given I, it reads the necessary values of (m, n) from the arrays MA(I) and NA(I).
It may seem silly to discuss such a humble matter as indexing in such detail, but the
philosophy of this book is ruthlessly practical. The reason that a multi-dimensional BVP
solver may take weeks or months to develop is not because of the intricacies of Sobolev
spaces, but because of the care and fussing that is needed to correctly use the !*#$%& [ex-
pletive deleted] matrix indices.

6.11 Special Problems of Higher Dimensions: Boundary Conditions, Singular Matrices, and Over-Determined Systems
In two dimensions, a singular pseudospectral matrix seems to be a common experience.
Part of the problem is boundary conditions: the four corner points are each a part of two
different walls. Thus, imposing the condition that u = 0 at each of the Nx grid points
on the top and bottom walls and each of the Ny grid points on the sides does not lead to
(2Nx + 2Ny) independent conditions; the corners are counted twice, so only (2Nx + 2Ny − 4)
of the conditions are independent. A similar problem
exists even for non-interpolating algorithms as shown by Eq. (14.8) of Gottlieb and Orszag
(1977).
Another difficulty, especially when using a symmetry-modified basis or a “circular”
truncation, is that some of the basis functions may lose linear independence on the grid.
(Note that circular truncation requires reducing the number of grid points to match the
reduction in the number of basis functions.) For example, T4 (x) can be exactly interpolated
by basis functions of lower degree on a grid that contains only four points. In one dimen-
sion, it is easy to avoid such absurdities; with complicated basis sets in higher dimensions,
it may be quite difficult (Haupt and Boyd, 1988). (A Galerkin method, with numerical
quadrature for the integrals, will solve these difficulties because one can use a rectangu-
larly truncated quadrature grid even if the basis set itself has a circular truncation.)
Schultz, Lee and Boyd (1989), who obtained a singular pseudospectral matrix in at-
tempting to solve the fourth order, two-dimensional partial differential equation for Stokes
flow in a box, evaded this problem by computing a least squares answer to the over-
determined linear algebra system.
Indeed, experience has shown that large pseudospectral matrices are often mildly ill-
conditioned even if non-singular. Fourteen decimal place precision or better is recom-
mended, but occasionally this is not enough. (Increasing N and increasing the order of the
derivatives both amplify roundoff problems.) The Householder method, alias “QR” fac-
torization, has lower round-off error than Gaussian elimination, and thus may be a helpful
option for solving high order differential equations even if the pseudospectral matrix is
not over-determined. Numerical checks described in Lee, Schultz and Boyd (1989b) show
that the solution to the over-determined system has all the desirable properties of spectral
solutions: very rapid convergence and extremely high accuracy for moderate cost.
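
A minimal sketch of the least-squares option in Python; the 120 × 100 random system is only a stand-in for an over-determined pseudospectral matrix. NumPy's qr routine applies Householder reflections, while lstsq solves the same problem through an SVD-based LAPACK driver rather than QR.

import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((120, 100))   # stand-in: more collocation conditions...
G = rng.standard_normal(120)          # ...than unknown coefficients

# Householder ("QR") least squares: H = QR with Q orthogonal, so the
# least-squares solution satisfies the triangular system R a = Q^T G
Q, R = np.linalg.qr(H)
a = np.linalg.solve(R, Q.T @ G)

# the same least-squares answer from the library driver
a2, *rest = np.linalg.lstsq(H, G, rcond=None)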
Thus, redundant boundary conditions, loss of linear independence on the grid, and
the weak ill-conditioning of very large pseudospectral systems are all difficulties that are
almost trivially removed. It is essential, however, that the reader be prepared for these
problems; the alternative is despair and cursing when a Gaussian elimination subroutine
bombs out with the error message: SINGULAR MATRIX.


6.12 Special Problems in Higher Dimensions: Corner Singularities
When the walls of a computational domain meet at a sharp angle, as at the corners of a rectan-
gle or a triangle (in contrast to the ever-smooth walls of a circle), the solution to a partial
differential equation will almost invariably be singular at the corner. The classic example
is Poisson’s equation ∆u = −1 on the unit square with the boundary conditions that u = 0
on all four walls. Even though the boundary conditions are continuous, the coefficients of
the differential equation and the forcing are constant, and the equation is linear, the solu-
tion is weakly singular. If we define a local polar coordinate system centered on one of the

corners, then the most strongly singular term is of the form

r² log(r) sin(2θ)   (6.50)

This geometrical difficulty occurs only at sharp corners and has no one-dimensional coun-
terpart. If we “round” the corners even a little, then u(x, y) is smooth everywhere within
the domain including the boundaries. In the absence of rounding, however, the corner
singularities will dominate the asymptotic behavior of the Chebyshev coefficients for suffi-
ciently large (m, n), and the convergence will be algebraic rather than exponential.
