the solution at previous time levels. Implicit schemes are always more costly per time step
than their explicit counterparts, but implicit schemes allow a long time step, which often
makes them more efficient. Semi-implicit schemes, which treat some terms in the PDE ex-
plicitly so as to simplify the boundary value problem, have become the norm in operational
weather forecasting and climate prediction.
Unfortunately, there are some schemes which defy such simple classifications. Semi-
Lagrangian algorithms, which have very good front-resolving and stability properties, are
explicit, but one has to solve an auxiliary problem, as in implicit schemes. We will describe
such algorithms in Chapter 14. The Regularized Long Wave equation (water waves) and
the quasi-geostrophic equation (meteorology, oceanography, and plasma physics where it
is the “Hasegawa-Mima” equation) have a differential operator acting on the time deriva-
tive, and thus rather confusingly require the solution of a boundary value problem at every
time step even when the time-marching algorithm is explicit. (See Sec. 6 below.)
In this chapter, we focus on explicit time-marching schemes. We shall assume that
the spatial dependence has already been discretized by the pseudospectral or Galerkin
algorithm so that the remaining problem is to integrate a system of ordinary differential
equations in time of the form

u_t = F(u, x, t),   (9.1)

where u and F are vectors, either of grid point values or spectral coefficients.
Explicit time-marching methods are inevitably limited by an unphysical, purely com-
putational instability first identified by Courant, Friedrichs and Lewy in 1928.

Definition 22 (Courant-Friedrichs-Lewy (CFL) Instability)
When the timestep τ for an EXPLICIT time-marching method exceeds a limit τ_max that depends
on the time-marching algorithm, the physics of the underlying partial differential equation, and the
spatial resolution, the error grows exponentially with time. This exponential error growth is the
CFL Instability. All explicit methods have a finite τ_max, but many implicit methods (discussed in
later chapters) are stable for arbitrarily large timesteps.

To obtain a precise value for „max , one must analyze each individual combination of
spatial and temporal discretizations for each specific problem. However, it is possible to
offer an estimate.

The maximum timestep is the same order of magnitude as the time scale for advection or dif-
fusion or wave propagation or whatever across the SMALLEST distance h between two grid points.
Thus, for wave propagation and for diffusion, the limits are

τ_max = d h / c_max     [Waves]
τ_max = d′ h² / ν       [Diffusion]   (9.2)

where d, d′ are O(1) constants, c_max is the speed of the FASTEST-MOVING waves and ν is the
diffusion or viscosity coefficient, and h is the SMALLEST distance between two adjacent grid points,

h ≡ min_j |x_j − x_{j+1}|   (9.3)
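As an illustration, the two limits in (9.2) can be estimated in a few lines on a uniform periodic grid. The wave speed, diffusion coefficient, and O(1) constants below are illustrative assumptions, not values tied to any particular time-marching scheme:

```python
import numpy as np

# Hypothetical uniform Fourier grid on [0, 2*pi)
N = 64
x = 2 * np.pi * np.arange(N) / N
h = np.min(np.abs(np.diff(x)))   # smallest grid spacing (uniform here)

c_max = 2.0    # speed of the fastest wave (assumed)
nu = 0.01      # diffusion coefficient (assumed)
d = 1.0        # O(1) constants; exact values depend on the scheme
d_prime = 1.0

tau_wave = d * h / c_max          # advective (CFL) limit
tau_diff = d_prime * h**2 / nu    # diffusive limit

# The usable timestep is the smaller of the two limits.
tau_max = min(tau_wave, tau_diff)
```

Note how the diffusive limit shrinks like h², so refining the grid tightens it much faster than the advective limit.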

The two most popular time-integration methods (with spectral algorithms) are the fourth-
order Runge-Kutta (“RK4”) scheme, which is given in all elementary numerical analysis
texts, and the third-order Adams-Bashforth method, which is
u^{n+1} = u^n + τ [ (23/12) F(u^n, x, t_n) − (4/3) F(u^{n−1}, x, t_{n−1}) + (5/12) F(u^{n−2}, x, t_{n−2}) ]   (9.4)

where τ is the time step and where t_n = n τ.
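A minimal sketch of the AB3 march of Eq. (9.4) for a generic system u_t = F(u, t) follows. The scalar test problem u_t = −u and the use of exact values for the three starting levels are purely illustrative; in practice the starting levels would come from a self-starting scheme such as RK4:

```python
import numpy as np

def ab3_march(F, u0, u1, u2, tau, nsteps, t0=0.0):
    """Third-order Adams-Bashforth march, Eq. (9.4).
    u0, u1, u2 are the solution at the first three time levels."""
    u = [u0, u1, u2]
    for n in range(2, nsteps):
        tn = t0 + n * tau
        u.append(u[n] + tau * ((23.0 / 12.0) * F(u[n], tn)
                               - (4.0 / 3.0) * F(u[n - 1], tn - tau)
                               + (5.0 / 12.0) * F(u[n - 2], tn - 2 * tau)))
    return u

# Test problem: u_t = -u, exact solution exp(-t); exact starting values.
tau = 0.01
u = ab3_march(lambda v, t: -v, 1.0, np.exp(-tau), np.exp(-2 * tau),
              tau, nsteps=100)
# u[-1] approximates exp(-1) with O(tau^3) global error.
```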
The Runge-Kutta algorithm is rather expensive because the fourth order version re-
quires evaluating the vector F four times per time step. Since F must be computed by
calculating spatial derivatives and nonlinear products, usually with extensive use of Fast
Fourier Transforms, the evaluation of F is the most costly part of any time-marching calcu-
lation. However, Runge-Kutta schemes have considerable virtues, too. First, RK4 is fourth
order, that is, has an error decreasing as O(τ⁴): one does not wish to sacrifice the high ac-
curacy with which spatial derivatives are computed in a spectral method by combining it
with a lousy time-marching routine.1 Second, it is stable with a rather long time step com-
pared to other explicit methods for both wave propagation and diffusion. Third, Runge-
Kutta methods are “self-starting” and do not require a different time-marching scheme to
1 There is a tendency for Runge-Kutta schemes to have relatively large errors near the boundaries when numer-
ical (as opposed to behavior) boundary conditions are imposed. This is not a serious problem and never arises
with a Fourier basis; some remedies are known (Fornberg, 1996).

compute the first couple of time steps, as is true of Adams-Bashforth. Fourth, one may
freely vary the length of the time step as the flow evolves because the algorithm is self-
starting. Fifth, for only a little extra expense, one can compute Runge-Kutta methods of
adjacent orders simultaneously to obtain error estimates that can be used to dynamically
and adaptively vary the time step. Most ODE libraries offer a routine in which RK4 and
RK5 are combined (“RK45”) to vary the time step on the fly to keep the error within a
user-specified tolerance. In contrast to most other algorithms, it is not necessary for the
user to specify a time step; the adaptive RK code will automatically choose one so that the
calculation is stable.
There are a couple of caveats. If one does know the time step in advance, then it is much
faster to use RK4 with that τ and no adaptation. Also, although the library adaptive codes
are quite reliable and robust, the RK4/RK5 code occasionally blows up anyway if the time
scale shortens abruptly. Even so, RK4 and adaptive RK4/RK5 are good time-marching
methods when the cost of evaluating F (u, x, t) is not excessive.
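A fixed-step RK4 march, with its four evaluations of F per step, can be sketched as follows; the test problem u_t = −u is only an illustration:

```python
import numpy as np

def rk4_step(F, u, t, tau):
    """One classical fourth-order Runge-Kutta step: four evaluations of F."""
    k1 = F(u, t)
    k2 = F(u + 0.5 * tau * k1, t + 0.5 * tau)
    k3 = F(u + 0.5 * tau * k2, t + 0.5 * tau)
    k4 = F(u + tau * k3, t + tau)
    return u + (tau / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# March u_t = -u from u(0) = 1 to t = 1 with a fixed timestep.
tau = 0.01
u, t = 1.0, 0.0
for _ in range(100):
    u = rk4_step(lambda v, s: -v, u, t, tau)
    t += tau
# u is now close to exp(-1), with O(tau^4) global error.
```

The same `rk4_step` routine can also supply the starting levels that Adams-Bashforth needs.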
The third order Adams-Bashforth scheme requires an initialization phase in which u1
and u2 are computed from the initial condition u0 by some other procedure, such as RK4 or
a first or second order scheme with several short time steps. One must also specify the time
step τ, since allowing the code to adaptively choose its own time step would require restarting
the march with a different time step. However, AB3 is stable, robust
and less costly per time step than RK4. Consequently, it, too, is widely used. Durran (1991)
praises AB3 as the most efficient of the many algorithms he compares in his very readable
review of dissipation and phase errors in time-marching schemes.
Many alternatives have also been tried. In ye olde days, leapfrog was very popular
because of its simplicity:

u^{n+1} = u^{n−1} + 2 τ F(u^n, x, t_n)   [Leapfrog]   (9.5)

It has the disadvantage that the solution at odd time steps tends to drift farther and farther
from the solution for even time steps, so it is common to stop the integration every twenty
time steps or so and reinitialize with the first order forward Euler method, or else average
u^n and u^{n−1} together. The method is only second order even without the averaging or
Euler steps. It is also unstable for diffusion for all τ, so it was common in old forecasting
models to “lag” the diffusion by evaluating these terms at time level (n − 1), effectively
time-marching diffusion by a first order, forward Euler scheme instead of leapfrog. With
Chebyshev polynomials, leapfrog is also unstable even for the simple wave equation

u_t + u_x = 0,   (9.6)

which may be taken as a prototype for advection (Gottlieb, Hussaini and Orszag, 1984).
With Fourier series, leapfrog is okay, but its popularity has fallen like leaves in the first
winter snow.
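Leapfrog's behavior on oscillatory problems can be seen on the single-mode model u_t = iωu, a stand-in for one Fourier wave component; the frequency and the exact-start initialization below are illustrative assumptions:

```python
import numpy as np

# Leapfrog for u_t = i*omega*u: neutrally stable while |omega|*tau < 1.
omega = 1.0
tau = 0.1
nsteps = 200

u = np.empty(nsteps + 1, dtype=complex)
u[0] = 1.0
u[1] = np.exp(1j * omega * tau)   # exact value used to start the march
for n in range(1, nsteps):
    u[n + 1] = u[n - 1] + 2 * tau * (1j * omega * u[n])

# |u| stays near 1: both leapfrog modes have unit modulus here, so the
# amplitude does not grow, though the spurious computational mode is
# present and would be excited by a less careful start.
```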
The second order Adams-Bashforth scheme
u^{n+1} = u^n + τ [ (3/2) F(u^n, x, t_n) − (1/2) F(u^{n−1}, x, t_{n−1}) ]   (9.7)

has also been used with spectral methods, but has fallen from grace because it has a very
weak computational instability for all τ. The growth rate is so slow that AB2 is quite satis-
factory for short-time integrations, but for long runs, AB2 is slow poison.
Canuto et al. (1988) and Durran (1991) give good reviews of other explicit schemes.
Often, however, the major worry is adequate spatial resolution, especially in computa-
tional fluid dynamics. A second or even first order time marching scheme may then be
adequate, and the time step τ will be chosen to be the largest step that is stable rather than
the largest that achieves a certain time-marching error.
In any event, the big worry is not choosing a time integration scheme, but rather finding
a cheap way to evaluate spatial derivatives. In the next section, we discuss the problem and
its cure.

9.2 Differential Equations with Variable Coefficients
Trouble appears even for a simple problem like

u_t + c(x) u_x = 0   (9.8)

with periodic boundary conditions and the initial condition

u(x, t = 0) = q(x)   (9.9)

If c(x) = c, a constant, then the solution of (9.8) is the trivial one

u = q(x − c t)   (9.10)

(This illustrates a general principle: constant coefficient differential equations are best solved
by special algorithms. We shall illustrate this point for Chebyshev solutions of non-periodic
problems when we discuss matrix-solving methods in Chapter 15, Sec. 10.)
When c(x) is variable, the obvious collocation algorithm is very expensive. With a
cardinal function basis {C_j(x)}, leapfrog is

u_j^{n+1} = u_j^{n−1} − 2 τ c(x_j) Σ_{k=0}^{N−1} u_k^n C_{k,x}(x_j),   j = 0, . . . , N − 1   (9.11)

where the second subscript denotes differentiation with respect to x. This is identical with
a finite difference scheme except that the derivative is evaluated using N points instead
of just three. This N-point formula is much more accurate than three-point differences,
but also much more expensive. Since the sum in (9.11) has N terms and we must evaluate
such a sum at each of N grid points at each and every time step, the leapfrog/cardinal
method has an operation count which is O(N²/time step). Ouch! An explicit-in-time/
finite-difference-in-space code has a cost of only O(N)/time step.
Galerkin's method is no improvement unless c(x) is of special form. If c(x) is a trigono-
metric polynomial (or an ordinary polynomial if solving a non-periodic problem using
Chebyshev polynomials), then Galerkin's method yields a sparse matrix, i.e., the sum
analogous to (9.11) has only a small, N-independent number of terms. To show this, let

u(x) = Σ_{m=0}^{N−1} a_m(t) φ_m(x)   (9.12)

If the basis functions φ_n(x) are orthogonal, then the spectral form of (9.8) is

a_m^{n+1} = a_m^{n−1} − 2 τ Σ_{k=0}^{N−1} a_k^n (φ_m, c(x) φ_{k,x})   (9.13)

If c(x) is a polynomial, then the matrix

M_{mk} ≡ (φ_m, c(x) φ_{k,x})   (9.14)

Figure 9.1: Schematic for the leapfrog/Fourier pseudospectral method. The arrows show
how the algorithm processes grid point values of u(x) at time level n [lower double circle,
left side] to generate {u(x_j)} at time level (n + 1) [upper double circle]. The algorithm pro-
ceeds in a counterclockwise direction. (Inspired by a diagram of Vichnevetsky & Bowles.)
will be sparse and banded. (An explanation for the banded structure will be given in later
chapters.) Unless c(x) has such a special form, however, the matrix M is a full matrix, and
the leapfrog/Galerkin method will also cost O(N²) per time step.
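The sparsity claim can be checked numerically. In the sketch below (an illustration, with the assumed choices φ_k = e^{ikx} and c(x) = cos(x)), the matrix M_mk = (φ_m, c(x) φ_{k,x}) of (9.14) is computed by equispaced quadrature and turns out to be nonzero only on the first sub- and super-diagonals:

```python
import numpy as np

# Check that M_mk = (phi_m, c(x) phi_k,x) is banded for a trigonometric
# polynomial c(x).  Basis: phi_k = e^{ikx}; c(x) = cos(x).  The mean over
# an equispaced grid reproduces (1/2pi) * integral exactly for these
# low-order trigonometric integrands.
Nq = 256                          # quadrature points
x = 2 * np.pi * np.arange(Nq) / Nq
c = np.cos(x)
modes = np.arange(-4, 5)          # a small set of Fourier modes

M = np.zeros((len(modes), len(modes)), dtype=complex)
for i, m in enumerate(modes):
    for j, k in enumerate(modes):
        integrand = np.exp(-1j * m * x) * c * (1j * k) * np.exp(1j * k * x)
        M[i, j] = integrand.mean()

# Entries vanish except where |m - k| = 1: a tridiagonal (banded) matrix.
```

Replacing c(x) = cos(x) by a non-polynomial coefficient fills in the whole matrix, which is exactly the O(N²) trap described above.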
This is poor in comparison to the O(N) operation count for a spatial finite difference
algorithm. We seem to be trapped between Scylla and Charybdis2, but the pseudospectral
and Galerkin methods are costly for different reasons. The cardinal function representation
(9.11) is not bothered in the least by the fact that c(x) is a function of x instead of constant;
the value of c(x) is merely multiplied into the value of u_x at the j-th grid point. What wrecks
the cardinal function method is that evaluating the derivative requires a sum over the values
of u at all the grid points.
In contrast, Galerkin's method is untroubled by differentiation because the derivative
of a trigonometric function is always a trigonometric function. Consequently, if c(x) were
constant, the matrix M defined by (9.14) would have only a single non-zero element in each
row: the Fourier-Galerkin representation of differentiation (to any order) is a diagonal ma-
trix. What ruins the Galerkin method is the need to multiply the derivative by a complicated
function and then reexpand the product as a series of basis functions.
The remedy is a hybrid algorithm: part cardinal function, part Galerkin. This combi-
nation is usually dubbed the “pseudospectral” method (even though it contains Galerkin
ideas) because this hybrid algorithm is the only practical way to solve time-dependent
problems with non-constant coef¬cients and large N .
The basic idea is to modify (9.11), the cardinal function representation, in one crucial
2 In Greek mythology, two monsters that guarded either side of the [narrow!] Straits of Messina between Sicily
and Italy.

respect. Instead of evaluating ∂u/∂x by summing up all the derivatives of the cardinal
functions, use

u_x = Σ_k i k a_k e^{ikx}   (9.15)

The coefficients of the derivative can be evaluated from those of u(x) in only N operations.
We can then evaluate the derivative series (9.15) at each of the N grid points to obtain u_x(x_j).
We may now extrapolate in time with a final step that needs only O(N) operations:

u_j^{n+1} = u_j^{n−1} − 2 τ c(x_j) u_x^n(x_j),   j = 0, . . . , N − 1   (9.16)

This is more like it!
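The whole cycle (transform, multiply by ik in spectral space as in (9.15), transform back, multiply by c(x_j), and march by (9.16)) is only a few lines with the FFT. Everything below, including N, the variable coefficient, and the starting values, is an illustrative assumption:

```python
import numpy as np

# One leapfrog/pseudospectral step for u_t + c(x) u_x = 0 on a periodic
# grid: the derivative costs O(N log N) via the FFT instead of the
# O(N^2) cardinal-function sum.
N = 64
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers
c = 1.0 + 0.5 * np.sin(x)                  # an assumed variable coefficient
tau = 1e-3

def ux(u):
    """Spectral derivative: FFT, multiply by ik, inverse FFT."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

u_old = np.sin(x)                          # u at level n-1
u_now = np.sin(x - c * tau)                # crude starting data for level n
u_new = u_old - 2 * tau * c * ux(u_now)    # Eq. (9.16)
```

In a real code the starting level would come from a self-starting scheme, and the step above would sit inside a time loop.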
For Chebyshev algorithms, differentiation is also an O(N) operation in spectral space.
Write

d^q u/dx^q = Σ_k a_k^{(q)} T_k(x)

so that the superscript “q” denotes the coefficients of the q-th derivative. These may be
computed from the Chebyshev coefficients of the (q − 1)-st derivative by the recurrence
relation (in descending order)

a_N^{(q)} = a_{N−1}^{(q)} = 0
a_{k−1}^{(q)} = 2 k a_k^{(q−1)} + a_{k+1}^{(q)},   k = N − 1, N − 2, N − 3, . . . , 1
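A sketch of this recurrence for q = 1 follows. One detail worth flagging: under the common unweighted-coefficient convention (c_0 = 2), the k = 0 coefficient carries an extra factor of 1/2, which the sketch applies at the end. As a check, dT_3/dx = 12x² − 3 = 3 T_0 + 6 T_2, so the input coefficients [0, 0, 0, 1] should map to [3, 0, 6, 0]:

```python
def cheb_deriv(a):
    """Chebyshev coefficients of du/dx from those a_0..a_{N-1} of u,
    via the descending recurrence; O(N) operations."""
    N = len(a)                     # series runs over T_0 .. T_{N-1}
    b = [0.0] * (N + 1)            # b[N] = b[N-1] = 0, as in the text
    for k in range(N - 1, 0, -1):  # descending recurrence
        b[k - 1] = b[k + 1] + 2 * k * a[k]
    b[0] *= 0.5                    # c_0 = 2 normalization of the T_0 term
    return b[:N]

# dT_3/dx = 3*T_0 + 6*T_2 and dT_2/dx = 4*T_1.
```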

