
the full matrix $\Lambda$, which is $O(N^2)$ without special tricks and $O(8N \log_2 N)$ even with FFT methods. The harder question is: Do the eigenvalues of $H$ approximate those of $\Lambda$ sufficiently well so that (15.16) is satisfied?
For the smallest eigenvalue, the answer is always and obviously: Yes. The reason is that $\lambda_{\min}$ corresponds to the lowest mode of the original differential equation. Unless $N$ is very small, both the finite difference and pseudospectral approximations will do a resoundingly good job of resolving this mode, and therefore

$$ \alpha_{\min} \approx 1 \qquad \text{[Any finite difference preconditioning]} \tag{15.17} $$

The situation, alas, is quite different for the higher modes because they will not be
resolved well by either the spectral or finite difference methods. Our only hope is that the
erroneous large eigenvalues of the spectral and finite difference matrices will be of the same
magnitude. It turns out that this is sometimes true and sometimes false, so it is necessary to
specialize to particular cases.
EXAMPLE: Consider the ordinary differential equation

$$ a_2(x)\, u_{xx} + a_1(x)\, u_x + a_0(x)\, u = f(x) \tag{15.18} $$

with the periodic boundary conditions $u(x + 2\pi) = u(x)$.
¹This was suggested by D'yakonov (1961), who used a similar preconditioning to solve a non-separable PDE through an iteration that required solving a separable PDE at each step.

The unknowns are then $u(x_j)$ where $x_j = 2\pi j/N$, $j = 0, 1, \ldots, N-1$. Let $u$ denote the column vector whose elements are the grid point values of $u(x)$. The finite difference approximation is

$$ (H u)_j = a_2(x_j)\, \frac{u_{j+1} - 2 u_j + u_{j-1}}{(\Delta x)^2} + a_1(x_j)\, \frac{u_{j+1} - u_{j-1}}{2\, \Delta x} + a_0(x_j)\, u_j \tag{15.19} $$

where $\Delta x \equiv 2\pi/N$.
The eigenfunctions $u$ and eigenvalues $\alpha_k$ of the matrix $A = H^{-1}\Lambda$ satisfy the equation

$$ \Lambda u = \alpha H u \tag{15.20} $$

Now if $u(x)$ is a smooth function, a low eigenmode of the differential operator $L$, then both the finite difference and pseudospectral approximations will do a good job of approximating it, and $\alpha \approx 1$ as already noted.
If $N$ is sufficiently large, then the $N$-th eigenmode will oscillate very rapidly in comparison to $a_2(x)$. This justifies the WKB approximation (Bender & Orszag, 1978), which shows that the eigenmodes of large $\lambda$ must locally resemble those of a constant coefficient equation, namely $\exp[i k x]$, for both the spectral and finite difference approximations. Furthermore, if $u(x)$ is highly oscillatory, a large $\Lambda$ mode, then

$$ |u_{xx}| \gg |u_x| \gg |u| \tag{15.21} $$

which implies that the first derivative and undifferentiated terms in the differential equation are irrelevant to the eigenvalues $\alpha_k$ for large $k$, except when the coefficient of the second derivative is extremely small ($O[1/N]$ in comparison to the first derivative, or $O(1/N^2)$ relative to the undifferentiated term).
Substituting

$$ u_j = \exp(i k x_j) \tag{15.22} $$

into

$$ a_2\, (-k^2)\, u_j = \alpha_k\, a_2\, \frac{u_{j+1} - 2 u_j + u_{j-1}}{(\Delta x)^2} \tag{15.23} $$

gives

$$ \alpha_k = \frac{k^2\, (\Delta x)^2}{4 \sin^2[(1/2)\, k\, \Delta x]} \tag{15.24} $$
Noting that $k \in [0, \pi/\Delta x]$, the corresponding range of $\alpha$ is then

$$ 1 \le \alpha \le \frac{\pi^2}{4} \approx 2.5 \qquad \text{[Second derivative]} \tag{15.25} $$
for all modes, independent of the number of grid points N .
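This range is easy to confirm numerically. The following quick sketch (mine, not from the text; Python with NumPy) tabulates (15.24) over the resolvable wavenumbers:

```python
import numpy as np

# Preconditioned eigenvalues (15.24) for the second derivative:
# alpha_k = k^2 (dx)^2 / (4 sin^2(k dx / 2)), with k dx in (0, pi].
N = 64
dx = 2 * np.pi / N
k = np.arange(1, N // 2 + 1)
alpha = (k * dx) ** 2 / (4.0 * np.sin(0.5 * k * dx) ** 2)
print(alpha.min(), alpha.max())   # ~1.0 and pi^2/4 = 2.467..., as in (15.25)
```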
Success! The optimum choice of $\tau$ for Richardson's method is

$$ \tau = \frac{4}{7} \tag{15.26} $$

The error decreases by a factor of $7/3 \approx 2.33$ with each Richardson iteration. Since Chebyshev series are but Fourier cosine series in disguise, (15.25) and (15.26) also apply when (15.18) is solved via Chebyshev polynomials.
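As an illustration, here is a minimal sketch (not from the text) of the preconditioned Richardson iteration for (15.18), assuming the constant-coefficient case $a_2 = 1$, $a_1 = 0$, $a_0 = -1$ so that the periodic problem is nonsingular; $\Lambda u$ is applied pseudospectrally via the FFT, and $H$ is the circulant version of (15.19):

```python
import numpy as np

N = 64
dx = 2 * np.pi / N
x = dx * np.arange(N)
k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers 0, 1, ..., -1

f = np.exp(np.sin(x))                          # an arbitrary smooth right-hand side

def apply_spectral(u):
    """Pseudospectral Lambda u = u_xx - u, evaluated via the FFT."""
    return np.fft.ifft(-(k ** 2) * np.fft.fft(u)).real - u

# Finite difference preconditioner H: periodic second difference minus identity.
H = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
H[0, -1] = H[-1, 0] = 1.0
H = H / dx ** 2 - np.eye(N)

tau = 4.0 / 7.0                                # optimum parameter (15.26)
u = np.zeros(N)
for it in range(30):
    r = f - apply_spectral(u)
    u += tau * np.linalg.solve(H, r)           # in practice, reuse one LU factorization
    print(it, np.abs(r).max())                 # residual shrinks by ~3/7 per sweep
```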

For fourth order derivatives, one can derive

$$ 1 \le \alpha \le \frac{\pi^4}{16} \approx 6 \tag{15.27} $$

which is good but not great. Orszag shows that if we write the problem

$$ \nabla^4 u = f \tag{15.28} $$

as the equivalent system

$$ \begin{pmatrix} \nabla^2 & -1 \\ 0 & \nabla^2 \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 \\ f \end{pmatrix} \tag{15.29} $$

the pre-conditioned matrix equivalent to the operator $K$, defined by the square matrix in (15.29), has eigenvalues that satisfy

$$ 1 \le \alpha \le 2.5 \tag{15.30} $$

When the highest derivative is odd, however, the preconditioned eigenvalues can become arbitrarily large. (The preconditioned eigenvalues will also become enormous when the highest derivative is even but has an extremely small coefficient, such as high Reynolds number flow.) The operator $\partial/\partial x$ has the eigenvalues $\lambda_k = i k$ for periodic boundary conditions while the corresponding centered finite difference approximation has the eigenvalues $i \sin[k\, \Delta x]/\Delta x$, so that

$$ \alpha_k \sim O\left( \frac{k\, \Delta x}{\sin(k\, \Delta x)} \right) \tag{15.31} $$

Unfortunately, this is unbounded. If we filter the high wavenumber components and thus impose the restriction

$$ |k\, \Delta x| < 2\pi/3 \qquad \text{[2/3's rule]} \tag{15.32} $$

versus the maximum resolvable wavenumber $k_{\max} = \pi/\Delta x$, then

$$ 1 \le \alpha \le 2.4 \qquad \text{[First derivative; 2/3's Rule Filter]} \tag{15.33} $$

which is almost identical to that for the (unfiltered) second derivative. The remedy is familiar because the filter is the same as the Orszag Two-Thirds Rule (Chapter 11, Sec. 5) for dealiasing a quadratically nonlinear flow: Filter out the highest one-third of the modes by setting the large $k$ components equal to zero immediately after taking any Fast Fourier Transform from grid point values to spectral coefficients.
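The unboundedness of (15.31) and the effect of the filter (15.32) can be checked in the same way; a short sketch (mine, not from the text):

```python
import numpy as np

N = 96
dx = 2 * np.pi / N
k = np.arange(1, N // 2)                     # k dx in (0, pi)
alpha = (k * dx) / np.sin(k * dx)            # preconditioned eigenvalues (15.31)
print(alpha.max())                           # ~O(N): grows without bound as N increases

kept = k[k * dx < 2 * np.pi / 3]             # Two-Thirds Rule filter (15.32)
alpha_kept = (kept * dx) / np.sin(kept * dx)
print(alpha_kept.min(), alpha_kept.max())    # ~1.0 and < 2.4, consistent with (15.33)
```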
Canuto et al. (1988, pp. 142-143) describe another alternative: a staggered grid. First order derivatives (both finite difference and pseudospectral) are evaluated on a set of points which is shifted from the grid on which $u(x)$ itself is evaluated. The new grid points lie halfway between the points of the original grid. All the necessary manipulations may be performed by the FFT. The pre-conditioned eigenvalues are confined to the very narrow range $[1, \pi/2] \approx [1, 1.5708]$. The staggered grid is often useful, independent of preconditioning considerations, to cope with the "pressure problem" in wall-bounded viscous flows.
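The $\pi/2$ bound follows from comparing the staggered difference to the exact derivative: assuming the standard half-cell centered difference, whose eigenvalue is $2 i \sin(k \Delta x/2)/\Delta x$ versus the spectral $i k$, one gets $\alpha_k = (k \Delta x/2)/\sin(k \Delta x/2)$. A minimal check (my sketch under that assumption):

```python
import numpy as np

N = 64
dx = 2 * np.pi / N
k = np.arange(1, N // 2 + 1)
alpha = (0.5 * k * dx) / np.sin(0.5 * k * dx)   # monotone increasing on (0, pi]
print(alpha.min(), alpha.max())                  # ~1.0 and pi/2 = 1.5707...
```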
Funaro (1994, 1996) and Funaro and Russo (1993) have described an interesting variation for convection-diffusion problems where the second derivative coefficients are very small compared to the first derivatives. By using a second collocation grid with an "upwind" shift, one can obtain a good preconditioner.

In one dimension, finite difference (and finite element) preconditioning is cheap. However, in two or more dimensions, the LU factorization of $H$ is expensive. Fortunately, it is often possible to use a different preconditioning which is much simpler than an "honest" finite difference matrix and yet still approximates $\Lambda$ in some sense. This strategy, trading a few extra iterations for a much lower cost per iteration, is the theme of Sec. 15.5.
Another difficulty is that some eigenvalues of $A$ may be complex-valued (systems of differential equations in complicated, high Reynolds number flows, for example). This motivates the staggered pseudospectral grid of Hussaini & Zang (1984). One can almost always find a filter or a grid which will give a narrow range for the span of the eigenvalues of the pre-conditioned matrix $A$.
One technical complication for non-Fourier applications is that the finite difference problem must be solved using the same unevenly spaced grid as employed by the pseudospectral method. Fornberg (1988, 1996) describes a simple recursion for computing the finite difference weights to all orders for an uneven grid. Alternatively, one can compute and differentiate the Lagrange polynomial in Maple or Mathematica.
For example, if the grid points are $x_j$, $j = 1, 2, \ldots$, define

$$ h_j \equiv x_{j+1} - x_j, \qquad j = 1, 2, \ldots \tag{15.34} $$

The centered (unstaggered) finite difference approximations are

$$ \frac{du}{dx}(x_j) = -\frac{h_j}{h_{j-1}(h_{j-1} + h_j)}\, u_{j-1} + \left( \frac{1}{h_{j-1}} - \frac{1}{h_j} \right) u_j + \frac{h_{j-1}}{h_j (h_{j-1} + h_j)}\, u_{j+1} \tag{15.35} $$

$$ \frac{d^2 u}{dx^2}(x_j) = \frac{2}{h_{j-1}(h_{j-1} + h_j)}\, u_{j-1} - \frac{2}{h_{j-1}\, h_j}\, u_j + \frac{2}{h_j (h_{j-1} + h_j)}\, u_{j+1} \tag{15.36} $$
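As a check on (15.35)-(15.36), here is a hedged sketch (function names and test values are my own) implementing the three-point weights and confirming that both stencils are exact for quadratics:

```python
import numpy as np

def d1_weights(hm, hp):
    """First derivative weights (15.35): hm = x_j - x_{j-1}, hp = x_{j+1} - x_j."""
    return np.array([-hp / (hm * (hm + hp)),
                     1.0 / hm - 1.0 / hp,
                     hm / (hp * (hm + hp))])

def d2_weights(hm, hp):
    """Second derivative weights (15.36) on the same uneven three-point stencil."""
    return np.array([2.0 / (hm * (hm + hp)),
                     -2.0 / (hm * hp),
                     2.0 / (hp * (hm + hp))])

# Both stencils are exact for quadratics: try u = x^2 at x_j = 0.
hm, hp = 0.3, 0.7
u = np.array([-hm, 0.0, hp]) ** 2
print(d1_weights(hm, hp) @ u)   # ~0.0  (du/dx of x^2 at x = 0)
print(d2_weights(hm, hp) @ u)   # ~2.0  (d^2u/dx^2 of x^2)
```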

Canuto et al. (1988) amplify on some of the points made here. They also briefly discuss finite element preconditioning.


15.4 Computing the Iterates:
FFT versus Matrix Multiplication
The rate-determining step in the Richardson iteration is the matrix product $\Lambda u$ where

$$ \Lambda_{ij} \equiv a_2(x_i)\, C_{j,xx}(x_i) + a_1(x_i)\, C_{j,x}(x_i) + a_0(x_i)\, C_j(x_i) \tag{15.37} $$

for a second order ODE, where the $C_j(x)$ are the appropriate cardinal functions such that $C_j(x_i) = \delta_{ij}$. There are two options for evaluating the column vector which is the product of $\Lambda$ with $u$. The first is a matrix-vector multiply at a cost of about $2N^2$ operations per iteration. (We neglect the set-up cost of computing the elements of $\Lambda$ because this is done only once.)
The alternative uses the Fast Fourier Transform. Note that

$$ (\Lambda u)_i \equiv \left. a_2 u_{xx} + a_1 u_x + a_0 u \right|_{x = x_i} \tag{15.38} $$




Figure 15.2: Two methods for evaluating the matrix product $\Lambda u$


The FFT of $u$ gives the series coefficients $\{a_n\}$. By differentiating the individual terms in the spectral series, we obtain (in $O(N)$ operations) the series coefficients for the first and second derivatives.
The differentiation is trivial for a Fourier basis. For a Chebyshev series, one employs a recurrence. If $a_n^{(q)}$ denotes the Chebyshev coefficients of the $q$-th derivative of $u(x)$, then

$$ c_{n-1}\, a_{n-1}^{(q)} - a_{n+1}^{(q)} = 2\, n\, a_n^{(q-1)}, \qquad n \ge 1 \tag{15.39} $$

where $c_n = 2$ if $n = 0$ and $c_n = 1$ for $n > 0$. Given the $N$ coefficients $\{a_n^{(q-1)}\}$, we initialize by setting $a_N^{(q)} = a_{N-1}^{(q)} = 0$ and then apply (15.39) in descending order to compute all the coefficients of the derivative.
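A sketch of this descending sweep (the function name is my own; `a` holds the Chebyshev coefficients of $u$):

```python
import numpy as np

def chebyshev_derivative_coeffs(a):
    """Chebyshev coefficients of du/dx from those of u, via the recurrence (15.39):
    c_{n-1} b_{n-1} - b_{n+1} = 2 n a_n, swept in descending order with the two
    highest coefficients of the derivative initialized to zero."""
    N = len(a)
    b = np.zeros(N)
    for n in range(N - 1, 0, -1):
        higher = b[n + 1] if n + 1 < N else 0.0
        c = 2.0 if n == 1 else 1.0          # c_{n-1}: c_0 = 2, c_n = 1 otherwise
        b[n - 1] = (higher + 2.0 * n * a[n]) / c
    return b

# Check: u = T_3(x) has du/dx = 3 T_0 + 6 T_2.
print(chebyshev_derivative_coeffs(np.array([0.0, 0.0, 0.0, 1.0])))   # [3. 0. 6. 0.]
```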
Two FFTs give $u_x(x_i)$ and $u_{xx}(x_i)$. Three multiplications and additions per grid point complete the calculation of the vector $(\Lambda u)$. The rate-determining step is the cost of the three FFTs, which is only $O(7.5\, N \log_2 N)$. The two options are shown schematically in Fig. 15.2.
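For the Fourier case the whole procedure fits in a few lines; a sketch assuming NumPy's FFT conventions, with the coefficients $a_2, a_1, a_0$ supplied as values on the grid:

```python
import numpy as np

def apply_Lambda_fft(u, a2, a1, a0):
    """Evaluate (Lambda u)_i = a2 u_xx + a1 u_x + a0 u on a periodic [0, 2 pi)
    grid using three FFTs in place of an O(N^2) matrix-vector multiply."""
    k = np.fft.fftfreq(len(u), d=1.0 / len(u))   # integer wavenumbers
    uhat = np.fft.fft(u)
    ux = np.fft.ifft(1j * k * uhat).real         # first derivative
    uxx = np.fft.ifft(-(k ** 2) * uhat).real     # second derivative
    return a2 * uxx + a1 * ux + a0 * u           # three multiply-adds per point

# Check: u = sin(x) with a2 = 1, a1 = a0 = 0 should give -sin(x).
x = 2 * np.pi * np.arange(32) / 32
print(np.abs(apply_Lambda_fft(np.sin(x), 1.0, 0.0, 0.0) + np.sin(x)).max())
```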
As explained in Chapter 10, for low to moderate resolution, matrix multiplication is as good as or better than the FFT. For large $N$, the FFT is best.
With the FFT and an efficient preconditioning, we can solve a boundary value problem in any number of dimensions at an expense which is $O(\text{total number of grid points})$ multiplied by $\log_2 N$, where $N$ is the number of grid intervals in one coordinate.

15.5 Alternative Pre-Conditioners for Partial Differential Equations
In two or more dimensions, Orszag's (1980) finite difference preconditioning, here dubbed $H_{FD}$, is much more expensive than in one coordinate. A sparse matrix direct method² such as block tridiagonal elimination costs $O(N^4)$ operations and $O(N^3)$ storage even on a two-dimensional $N \times N$ grid.
Zang, Wong, and Hussaini (1984) therefore experimented with alternatives. Their $H_{LU}$ and $H_{RS}$ are approximate LU factorizations of $H_{FD}$. As in an exact factorization, $L$ is lower triangular and $U$ is upper triangular with diagonal elements all equal to unity.

In their first preconditioning, $L_{LU}$ is the lower triangular part of $H_{FD}$. (They do not factor $H_{FD}$, merely identify its diagonal and lower triangular elements as the corresponding elements of $L_{LU}$.) The elements of $U_{LU}$ are chosen so that the two superdiagonals [elements with column numbers greater than the row numbers] of the product $L_{LU} U_{LU} \equiv H_{LU}$ agree with those of $H_{FD}$. Their second approximate factorization is identical except that the diagonal elements of $L$ are altered so that the row sums of $H_{RS}$ equal the row sums of $H_{FD}$.
Both approximate factorizations can be computed recursively. For example, let the finite difference approximation in two dimensions be of the form

$$ B_{jl}\, u_{j,l-1} + D_{jl}\, u_{j-1,l} + E_{jl}\, u_{jl} + F_{jl}\, u_{j+1,l} + H_{jl}\, u_{j,l+1} = f_{jl} \tag{15.40} $$
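For concreteness, here is a hedged sketch (index conventions and boundary handling are my own simplifications) of applying the five-point operator (15.40) to a 2-D grid:

```python
import numpy as np

def apply_HFD(u, B, D, E, F, H):
    """Apply the five-point stencil (15.40):
    (H_FD u)_{j,l} = B u_{j,l-1} + D u_{j-1,l} + E u_{j,l} + F u_{j+1,l} + H u_{j,l+1}.
    All six arguments are N x N arrays; neighbors outside the grid are treated as zero."""
    r = E * u
    r[:, 1:] += B[:, 1:] * u[:, :-1]    # B_{jl} u_{j,l-1}
    r[1:, :] += D[1:, :] * u[:-1, :]    # D_{jl} u_{j-1,l}
    r[:-1, :] += F[:-1, :] * u[1:, :]   # F_{jl} u_{j+1,l}
    r[:, :-1] += H[:, :-1] * u[:, 1:]   # H_{jl} u_{j,l+1}
    return r
```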
