to obtain −Λv = g, which is just what we want. (Recall that Λ is the Laplacian operator.)
However, the “consistency” of the splitting-up scheme with the heat equation is not absolute because for sufficiently large τ, the Λ1 Λ2 Λ3 term cannot be neglected, and we have in fact calculated the steady-state solution to the wrong differential equation:

−∇²v + (τ²/4) v_xxyyzz = g        (13.20)

The highest Fourier coefficients will be contaminated by the extra term even if τ ∼ O(h²).

To show this, denote the Fourier coefficients of exp(ikx + imy + inz) for g and v by g_kmn and v_kmn. Direct substitution into (13.20) implies

v_kmn = g_kmn / [k² + m² + n² − (τ²/4) k² m² n²]        (13.21)
For waves near the aliasing limit, and assuming an equal grid spacing h in all directions,
kmax = mmax = nmax = π/h and
v_kmn = h² g_kmn / [3π² − (τ²/h⁴)(π⁶/4)],        k = kmax, m = mmax, n = nmax        (13.22)

which confirms that the wavenumbers near the aliasing limit are seriously corrupted even
for τ = h², a timestep sufficient for the stability of explicit time-marching.
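A quick numerical check makes the corruption in (13.22) concrete. The sketch below evaluates the denominator of v_kmn at the aliasing limit k = m = n = π/h for τ = h²; the particular value of h is an illustrative choice (the ratio is in fact independent of h).

```python
import math

# Size of the spurious splitting term in the denominator of v_kmn at the
# aliasing limit k = m = n = pi/h, for tau = h^2 (h is illustrative).
h = 0.1
tau = h ** 2

k = m = n = math.pi / h
true_denom = k**2 + m**2 + n**2               # 3 pi^2 / h^2: what we want
extra = (tau**2 / 4.0) * (k * m * n) ** 2     # the (tau^2/4) k^2 m^2 n^2 term
corrupt_denom = true_denom - extra

ratio = true_denom / corrupt_denom   # corrupted v_kmn relative to the exact one
print(corrupt_denom < 0, ratio)
# The denominator even changes sign: the highest Fourier coefficient comes out
# with the wrong sign and roughly one-seventh of the correct magnitude.
```

This is a stronger statement than mere loss of accuracy: the corrupted coefficient is not just inaccurate but has the wrong sign.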
Fortunately, it is fairly straightforward to modify splitting to obtain a similar fractional
step method that is absolutely consistent (Yanenko, 1971). Note, however, that the offending term is the product of three operators; the splitting-up scheme is both absolutely stable and absolutely consistent in two dimensions.
The spectral and pseudospectral schemes we will discuss later will always be consistent. Nonetheless, it is a complication: every fractional step algorithm must be checked for both stability and consistency.

13.5 Operator Theory of Time-Stepping
Now that we have seen splitting-up in a specific example, it is time to look at an abstract
formalism for general integration methods. For simplicity, consider the system of ordinary
differential equations in time given by

u_t = Λ u        (13.23)

where u is a column vector and Λ is a square matrix whose elements are independent of
time. As noted in the previous chapter, any method of discretizing a parabolic or hyperbolic PDE that is first order in time will give a system like (13.23) so long as the equation is linear and the coefficients are independent of time.
The system (13.23) has the exact solution (assuming that Λ has a full set of eigenvectors)

u = exp(Λ t) u0        (13.24)

where u0 is the column vector containing the initial conditions. We can interpret the matrix
exponential (and other functions-of-a-matrix) by expanding in a series of the eigenvectors
of Λ:

Λ ej = λj ej        (13.25)

(Note that we omit the minus sign included earlier in the chapter.)

u = Σj uj ej ;        u0 = Σj u0,j ej        (13.26)

exp(Λ t) u0 = Σj exp(λj t) u0,j ej        (13.27)
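The eigenvector route of (13.25)–(13.27) is easy to verify numerically. The sketch below applies it to an arbitrary small symmetric test matrix (the matrix and the time t are assumptions, chosen only for illustration), and cross-checks against a well-converged Taylor series for the matrix exponential.

```python
import numpy as np

# exp(Lambda t) u0 via the eigenvector expansion (13.25)-(13.27),
# for an arbitrary symmetric negative semi-definite 4x4 test matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Lam = -(A @ A.T)                  # symmetric, eigenvalues <= 0: decaying modes
u0 = rng.standard_normal(4)
t = 0.3

lam, E = np.linalg.eigh(Lam)      # Lambda e_j = lambda_j e_j     (13.25)
u0j = np.linalg.solve(E, u0)      # expansion coefficients u0,j   (13.26)
u = E @ (np.exp(lam * t) * u0j)   # sum_j exp(lambda_j t) u0j e_j (13.27)

# Cross-check against a truncated Taylor series for exp(Lambda t):
M = np.eye(4)
T = np.eye(4)
for j in range(1, 40):
    T = T @ (Lam * t) / j
    M = M + T
assert np.allclose(u, M @ u0)
```

For a symmetric Λ the eigenvector matrix is orthogonal, so the coefficient solve is well conditioned; for general matrices with a full eigenvector set the same idea works but the solve can be ill-conditioned.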

From this viewpoint, the problem of devising good time-stepping schemes is equivalent
to finding good ways of approximating the exponential of a matrix on the small interval

t ∈ [0, τ]        (13.28)

The Euler method is the one-term Taylor series approximation:

exp(Λ τ) ≈ I + τ Λ + O(τ²)        [Euler]        (13.29)

We could build higher approximations simply by taking more terms in the Taylor series.
However, if we can put upper and lower bounds on the eigenvalues of the matrix Λ, the
truncated Chebyshev series of the exponential is better. This has rarely been done when the
transient is interesting because one can usually obtain better time-stepping accuracy for the
same amount of work simply by applying (13.29) on a smaller interval. However, when the
prime goal is the asymptotic solution, the Chebyshev-of-the-exponential has been widely
used under the name of the Chebyshev iteration (Chap. 15).
The Crank-Nicholson (and higher order implicit methods) are examples of a third strategy: so-called “Padé approximants”. These are rational functions which are chosen so that
their power series expansion matches that of the exponential function through however
many terms are needed to determine all the coefficients in the Padé approximant. For example, the [1, 1] approximant [linear polynomial divided by a linear polynomial] gives the
Crank-Nicholson scheme:

exp(Λ τ) ≈ [I − (τ/2)Λ]⁻¹ [I + (τ/2)Λ]        [Crank-Nicholson]        (13.30)

where I is the identity matrix. Taylor expanding gives

[I − (τ/2)Λ]⁻¹ [I + (τ/2)Λ] ≈ I + τ Λ + (τ²/2) Λ² + O(τ³ Λ³)        (13.31)

which shows that the expansion in τ agrees with that of the exponential through the first
three terms, thus demonstrating both that the Crank-Nicholson method is indeed the “[1, 1] Padé approximant” of the exponential and also that it is more accurate than Euler's.
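The accuracy gap between (13.29) and (13.30) is easy to see for the scalar model problem u_t = λu (the value λ = −1 below is an illustrative assumption). One Euler step has an O(τ²) error; one Crank-Nicholson step has an O(τ³) error, so halving τ should shrink their errors by about 4 and 8 respectively:

```python
import math

# One-step errors of Euler (13.29) and Crank-Nicholson (13.30) for u_t = lam*u.
def one_step_errors(tau, lam=-1.0):
    exact = math.exp(lam * tau)
    euler = 1.0 + tau * lam                                   # Taylor, [1,0]
    cn = (1.0 + 0.5 * tau * lam) / (1.0 - 0.5 * tau * lam)    # Pade,  [1,1]
    return abs(euler - exact), abs(cn - exact)

e1, c1 = one_step_errors(0.1)
e2, c2 = one_step_errors(0.05)
print(e1 / e2, c1 / c2)   # Euler error drops ~4x, CN error ~8x when tau halves
```

The same one-step calculation, applied componentwise to the eigenvalues of Λ, governs the matrix case.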
We can obviously generalize both the Euler and Crank-Nicholson methods by taking
more terms. For example, the “fourth order” Crank-Nicholson scheme is

exp(Λ τ) ≈ [I − (τ/2)Λ + (τ²/12)Λ²]⁻¹ [I + (τ/2)Λ + (τ²/12)Λ²] + O(τ⁵ Λ⁵)        (13.32)

(Calahan, 1967, and Blue and Gummel, 1970).
The obvious advantage of Padé approximants is that they remain bounded as τ → ∞;
Eq. (13.32) generates a time-stepping algorithm which, like the Crank-Nicholson scheme,
is always stable as long as the eigenvalues of Λ are all negative. There is a second, less
obvious virtue: Padé approximants are almost always more accurate, sometimes very
much more so, than the Taylor series from which they are generated (Baker, 1975, Bender
and Orszag, 1978).
In more than one dimension, splitting enormously reduces costs since the denominator
in the rational approximations requires inverting a matrix which approximates a two- or
three-dimensional differential operator. Let us write

Λ = Λ1 + Λ2        (13.33)

where Λ1 and Λ2 are the matrix representations of one-dimensional operators that are easily inverted.
The Alternating-Direction Implicit scheme (ADI) is the fractional steps algorithm

(u^{n+1/2} − u^n)/τ = (1/2) [Λ1 u^{n+1/2} + Λ2 u^n]        (13.34)

(u^{n+1} − u^{n+1/2})/τ = (1/2) [Λ1 u^{n+1/2} + Λ2 u^{n+1}]        (13.35)
The reason for the name “Alternating Direction” is that we treat one direction implicitly on
the first half-step while treating the y-derivatives explicitly and then interchange the roles
of the two coordinates on the second half-step. If we introduce the auxiliary operators

Bi(τ) ≡ I + (τ/2) Λi,        i = 1, 2        (13.36)

where I is the identity matrix, then we can write the ADI scheme in whole steps as

u^{n+1} = B2(−τ)⁻¹ B1(τ) B1(−τ)⁻¹ B2(τ) u^n        (13.37)

Although this approximation to the exponential is the product of four factors, what
matters is that it still agrees with the Taylor expansion of the exponential. By using the
well-known series 1/(1 + x) = 1 − x + x² − . . . , which is legitimate even when x is a matrix,
we find that (13.37) is accurate to O(τ²) just like the Crank-Nicholson scheme.
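The O(τ²) accuracy of the four-factor product (13.37) can be confirmed numerically; notably, it holds even when Λ1 and Λ2 do not commute. In the sketch below the matrices are arbitrary small test operators (an assumption, not a discretization of anything specific), and a second-order-accurate step should show a one-step error that shrinks by about 8 when τ is halved:

```python
import numpy as np

# One-step error of the ADI product (13.37) versus exp(Lambda*tau),
# with noncommuting test operators Lambda1, Lambda2.
rng = np.random.default_rng(1)
n = 5
L1 = -np.diag(rng.random(n)) + 0.1 * rng.standard_normal((n, n))
L2 = -np.diag(rng.random(n)) + 0.1 * rng.standard_normal((n, n))
I = np.eye(n)

def B(L, tau):        # B_i(tau) = I + (tau/2) Lambda_i, Eq. (13.36)
    return I + 0.5 * tau * L

def adi(tau):         # Eq. (13.37)
    return (np.linalg.inv(B(L2, -tau)) @ B(L1, tau)
            @ np.linalg.inv(B(L1, -tau)) @ B(L2, tau))

def expm_taylor(M, terms=40):   # Taylor-series stand-in for the matrix exponential
    S = np.eye(n)
    T = np.eye(n)
    for j in range(1, terms):
        T = T @ M / j
        S = S + T
    return S

e1 = np.linalg.norm(adi(0.1) - expm_taylor(0.1 * (L1 + L2)))
e2 = np.linalg.norm(adi(0.05) - expm_taylor(0.05 * (L1 + L2)))
print(e1 / e2)   # close to 8: one-step error is O(tau^3), scheme is 2nd order
```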
If the operators Λ1 and Λ2 commute, then we can swap B1(τ) and B2(τ) to obtain the
same scheme in a different form:

u^{n+1} = B2(−τ)⁻¹ B2(τ) B1(−τ)⁻¹ B1(τ) u^n        [Splitting]        (13.38)

This is the “splitting-up” form: u^{n+1/2} = B1(−τ)⁻¹ B1(τ) u^n, and we have

(u^{n+1/2} − u^n)/τ = (1/2) Λ1 [u^{n+1/2} + u^n]        (13.39)

(u^{n+1} − u^{n+1/2})/τ = (1/2) Λ2 [u^{n+1} + u^{n+1/2}]        (13.40)

When the operators Λ1 and Λ2 do not commute, the splitting and ADI schemes are not
identical and splitting is only first-order accurate in time.

13.6 Higher Order Splitting When the Split Operators Do
not Commute: Flip-Flop and Richardson Extrapolation
There are two ways to recover second order accuracy when the operators do not commute.
The first is to reverse the sequence of fractional steps on adjacent whole steps (“flip-flop”).
In other words, we solve the BVP in x first and then that in y whenever n is even and then
solve the boundary value problem in y first on every odd time step. The method is then
second order even with noncommuting Λ1 and Λ2, but with an effective time step of 2τ
(rather than τ).
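The flip-flop idea can be sketched numerically. Below, plain splitting (13.38) and its flip-flop variant are run to t = 1 for a pair of arbitrary noncommuting test matrices (an assumption for illustration); halving τ should halve the plain-splitting error (first order) but quarter the flip-flop error (second order):

```python
import numpy as np

# Plain splitting (13.38) versus flip-flop for noncommuting Lambda1, Lambda2.
rng = np.random.default_rng(2)
n = 4
L1 = 0.3 * rng.standard_normal((n, n))
L2 = 0.3 * rng.standard_normal((n, n))
I = np.eye(n)
u0 = rng.standard_normal(n)

def half(L, tau):   # B(-tau)^{-1} B(tau): Crank-Nicholson substep, one operator
    return np.linalg.solve(I - 0.5 * tau * L, I + 0.5 * tau * L)

def expm_taylor(M, terms=40):
    S = np.eye(n)
    T = np.eye(n)
    for j in range(1, terms):
        T = T @ M / j
        S = S + T
    return S

def integrate(tau, nsteps, flip_flop):
    u = u0.copy()
    for k in range(nsteps):
        if flip_flop and k % 2 == 1:
            u = half(L1, tau) @ (half(L2, tau) @ u)   # odd steps: y-solve first
        else:
            u = half(L2, tau) @ (half(L1, tau) @ u)   # x-solve first
    return u

exact = expm_taylor(L1 + L2) @ u0   # exact solution at t = 1
r_plain = (np.linalg.norm(integrate(0.01, 100, False) - exact)
           / np.linalg.norm(integrate(0.005, 200, False) - exact))
r_flip = (np.linalg.norm(integrate(0.01, 100, True) - exact)
          / np.linalg.norm(integrate(0.005, 200, True) - exact))
print(r_plain, r_flip)   # ~2 (first order) vs ~4 (second order)
```

Over a pair of steps the flip-flop product is palindromic in the substeps, which is why the odd-order error terms cancel.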
The other alternative is Richardson extrapolation (not to be confused with Richardson's
iteration of Chapter 15). This is a very general strategy for improving the accuracy of low
order methods. The basic idea is that if the error in a given algorithm is O(h^r) for some
r and sufficiently small h, then two computations with different h can be combined with
appropriate weights so that the leading error terms cancel to give an answer of higher
order.
Let u_low(2τ) and u_high(2τ) denote the results of integrating from t = 0 to t = 2τ with
a single time step of length 2τ and with two steps of length τ, respectively. For a first
order method, the errors are

u_low(2τ) = u_exact + 4 p τ² + O(τ³)        (13.41)

u_high(2τ) = u_exact + 2 p τ² + O(τ³)        (13.42)

The numerical value of p is unknown, but the existence of an error which is quadratic in τ
is simply a definition of what it means for a scheme to be “first order in time”.¹ It follows that

u_Richardson(2τ) ≡ 2 u_high(2τ) − u_low(2τ) = u_exact + O(τ³)        (13.43)
u_Richardson is second order because the weighted sum cancels the leading error terms in the
first order approximations, u_high and u_low. This extrapolation can be performed on every
sub-interval of length 2τ to refine the solution to second order accuracy as we march in time.
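The cancellation in (13.41)–(13.43) is easy to demonstrate with the Euler method on the scalar problem u_t = −u, u(0) = 1 (the problem and step size are illustrative assumptions):

```python
import math

# Richardson extrapolation (13.43) of forward Euler over one interval 2*tau.
def euler(u, lam, tau, nsteps):
    for _ in range(nsteps):
        u = u + tau * lam * u
    return u

lam, tau, u0 = -1.0, 0.1, 1.0
exact = math.exp(lam * 2 * tau)
u_low = euler(u0, lam, 2 * tau, 1)    # one step of length 2*tau    (13.41)
u_high = euler(u0, lam, tau, 2)       # two steps of length tau     (13.42)
u_rich = 2 * u_high - u_low           # leading O(tau^2) error cancels (13.43)
print(abs(u_low - exact), abs(u_high - exact), abs(u_rich - exact))
```

The extrapolated error is several times smaller than either Euler result, even at this modest step size, and the gap widens as τ shrinks.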
Consequently, when the operators of different sub-problems do not commute, we can
always retrieve second order accuracy at the expense of additional programming. Often,
Richardson extrapolation is omitted because hydrodynamic calculations are usually limited in accuracy by spatial rather than temporal resolution. At the other extreme, Strain
(1995) has used repeated Richardson extrapolation of the modified Backwards Euler method
to create an implicit time-marching code in which the order is easily and adaptively varied.

13.7 Splitting and Fluid Mechanics
Our example is the shallow water wave equations:

Du/Dt − f v + φ_x = μ∇²u        (13.44)

Dv/Dt + f u + φ_y = μ∇²v        (13.45)

Dφ/Dt + u_x + v_y = 0        (13.46)

where f is the Coriolis parameter and D/Dt denotes the total derivative:

D/Dt ≡ ∂/∂t + u ∂/∂x + v ∂/∂y        (13.47)

and ε, “Lamb's parameter”, is a nondimensional combination of depth, planetary radius, rotation rate and the gravitational constant.
The physical splitting is (Fig. 13.4)

Dv^{n+1/3}/Dt        [Advective Step]        (13.48b)
¹ Over many time steps, the errors accumulate so that the total error on a fixed interval in t is a linear function
of τ even though the error on a single step is quadratic in τ.

