
CHAPTER 12. IMPLICIT SCHEMES & THE SLOW MANIFOLD

u_t + c u_x = 0        (12.7)

the usual Fourier basis (exp(ikx)) shows that the equation for each wavenumber is uncoupled, with

λ = -ikc,    b_k(t) = b_k(0) exp(-ikct)        (12.8)
Other constant coefficient wave equations similarly generate pure imaginary eigenvalues. Advection does, too; note that the one-dimensional advection equation, also known as the inviscid Burgers' equation, is identical with (12.7) except that c → u.
In contrast, the diffusion equation

u_t = u_xx        (12.9)

generates negative real eigenvalues; in the Fourier basis, which is the eigenexpansion for the diffusion equation,

λ = -k^2;    b_k = b_k(0) exp(-k^2 t)        (12.10)
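As a quick sanity check of (12.10), the sketch below verifies numerically that a single Fourier mode decaying at the rate exp(-k^2 t) satisfies the diffusion equation; the wavenumber and sample point are arbitrary choices made for illustration.

```python
import cmath

# Check, for one Fourier mode, that b_k(t) = b_k(0) exp(-k^2 t) solves the
# diffusion equation: u(x, t) = exp(ikx) exp(-k^2 t) should satisfy u_t = u_xx.
k = 3.0          # wavenumber (arbitrary choice for the check)
x, t = 0.7, 0.2  # arbitrary sample point
eps = 1e-5       # small step for the finite-difference probes

def u(x, t):
    return cmath.exp(1j * k * x) * cmath.exp(-k * k * t)

u_t  = (u(x, t + eps) - u(x, t - eps)) / (2 * eps)
u_xx = (u(x + eps, t) - 2 * u(x, t) + u(x - eps, t)) / eps**2

print(abs(u_t - u_xx))  # ≈ 0 up to finite-difference error: the mode decays at exactly exp(-k^2 t)
```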
In general, numerical algorithms fail to propagate and/or diffuse at the exact rate, creating computational dispersion and/or diffusion that is superimposed on whatever is in the exact solution. One major theme of this chapter is that the choice of time-stepping algorithm and time step are both strongly constrained by the question: How much error is tolerable for a given component of the solution?
Before we turn to time-marching errors, however, it is useful to first look briefly at errors due to spatial discretization.


12.2 Dispersion and Amplitude Errors Due to Spatial Discretization
It is easy to discuss the computational dispersion and dissipation of a Fourier basis: there isn't any! Advection is corrupted only by aliasing errors; if the Two-Thirds Rule is applied, then advection is exact for all the unfiltered wavenumbers.
If no filtering is used, then advection is not exact even in a Fourier basis because of aliasing errors; high wavenumbers near the truncation limit will not be correctly advected by high wavenumber components of the velocity. However, high wavenumbers will be advected exactly by low wavenumbers including the zero wavenumber component, which is the mean flow. Even in an aliased calculation, the total error in advection is small, which is one reason why Fourier methods have become so popular for modelling turbulence in a periodic box.
Unfortunately, the same is not so for finite difference methods. One simplifying principle, much exploited by J. von Neumann, is that exp(ikx) is an eigenfunction of a finite difference approximation just as it is for differential operators. Thus, the backwards and centered difference approximations to the first derivative,

du/dx ≈ [u(x) - u(x - h)] / h         [BACKWARDS]
du/dx ≈ [u(x + h) - u(x - h)] / (2h)  [CENTERED]        (12.11)
give the approximations, when u = exp(ikx),

d exp(ikx)/dx ≈ exp(ikx) [1 - exp(-ikh)] / h   [BACKWARDS]
d exp(ikx)/dx ≈ exp(ikx) i sin(kh) / h         [CENTERED]        (12.12)

The corresponding approximations to the true eigenvalue of the first derivative operator can be written as

λ_k ≈ ik + (h/2) k^2 + O(h^2 k^3)      [BACKWARDS]
λ_k ≈ ik - i (h^2/6) k^3 + O(h^4 k^5)  [CENTERED]        (12.13)
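The expansions in (12.13) are easy to confirm numerically. The sketch below evaluates the backwards and centered difference eigenvalues from (12.12) for one illustrative (h, k) pair and compares them with the exact eigenvalue ik; the specific values of h and k are arbitrary choices.

```python
import cmath, math

# Numerical check of (12.13): compare the backwards and centered difference
# eigenvalues from (12.12) with the exact eigenvalue ik of d/dx.
h, k = 0.01, 5.0
exact    = 1j * k
backward = (1 - cmath.exp(-1j * k * h)) / h   # (12.12), BACKWARDS
centered = 1j * math.sin(k * h) / h           # (12.12), CENTERED

# The backwards difference picks up a real part ~ (h/2) k^2: artificial diffusion.
print(backward.real, 0.5 * h * k**2)          # both ~0.125
# The centered difference stays purely imaginary but errs in phase by ~ (h^2/6) k^3.
print(k - centered.imag, (h**2 / 6) * k**3)   # both ~2.083e-3
```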

The leading terms in (12.13) correctly reproduce the exact eigenvalue of the first derivative operator, which is ik. However, the backwards difference approximation has a second term, the dominant error, which is proportional to the eigenvalue of the second derivative operator, -k^2. Indeed, it is explained in elementary numerical analysis texts that the errors for the one-sided and centered differences are proportional to the second and third derivatives of the function being differentiated, respectively. It follows that one can give two interpretations to the finite difference approximations to the wave equation u_t + c u_x = 0. The first is that replacing the space derivative by two-point backwards and centered differences approximates that equation with errors of O(h) and O(h^2), respectively. The unconventional interpretation (equally correct) is that the difference formulas give second and third order approximations to the modified wave equations:

u_t + c u_x = (h/2) c u_xx        [BACKWARDS]        (12.14)

u_t + c u_x = -(h^2/6) c u_xxx    [CENTERED]         (12.15)

Whether one prefers to conceptualize using the "modified" equations (12.14, 12.15) or the power series for the eigenvalues (12.13), the point is that the backwards difference errs by providing an artificial computational diffusion. Diffusion is scale-selective, damping the high wavenumbers much faster than low wavenumbers. The backwards difference (which is the "upwind" difference for c > 0) is popular in hydrodynamics because the inherent diffusivity reduces spectral blocking, that is, the build-up of noise at high wavenumbers near the truncation limit. In the presence of marginally resolved fronts, the upwind difference approximation acts as a smoother. In contrast, the forward ("downwind") difference acts as an "antidiffusion", generating spectral blocking and blow-up. (Note that the waves near the truncation limit are most strongly amplified by an "antidiffusion".)
The centered difference is not dissipative to lowest order. However, the exact solution to the original wave equation is nondispersive: all wavenumbers propagate with phase speed c so that the initial disturbance translates without change in shape. The centered difference introduces a spurious computational dispersion. A crest in the initial condition will disperse into many small ripples over time because of the O(h^2) dispersion. When there is a frontal zone, the lack of damping and the dispersion cause lots of small oscillations to develop. Consequently, upwind differences are more popular than centered differences when fronts are anticipated even though the centered difference is technically of higher order.
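The scale-selectivity of the upwind scheme's inherent diffusion can be seen directly in its per-step amplification factor for a single Fourier mode. The sketch below assumes a fully discrete first-order upwind scheme with Courant number ν = cτ/h = 1/2 (an arbitrary stable choice, not a value taken from the text).

```python
import cmath

# Per-step amplification factor of the first-order upwind scheme
# u_j^{n+1} = u_j^n - nu*(u_j^n - u_{j-1}^n) applied to the mode exp(i k x_j):
# g(theta) = 1 - nu*(1 - exp(-i*theta)), where theta = k*h and nu = c*tau/h.
nu = 0.5   # Courant number: an arbitrary stable choice for illustration
for theta in (0.1, 1.0, 2.0, 3.14159):   # from low k up to the truncation limit kh = pi
    g = 1 - nu * (1 - cmath.exp(-1j * theta))
    print(f"kh = {theta:5.2f}   |g| = {abs(g):.4f}")
# Low wavenumbers are nearly undamped (|g| ~ 1); the 2h wave (kh = pi) is
# annihilated almost completely in a single step -- the "inherent diffusivity"
# that smooths marginally resolved fronts.
```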
With a Fourier basis, as already noted, we don't have to worry about either computational dissipation or dispersion. Unfortunately, with the Chebyshev basis only the lowest (2/π)N eigenvalues of an N-term Chebyshev approximation are good approximations to the corresponding eigenvalues of the derivative operator. However, the error in the lower eigenvalues is exponentially small in N, so the computational dissipation and dispersion of a Chebyshev algorithm is very small compared to a finite difference algorithm.
Figure 12.1: Numerical/exact eigenvalue ratio, μ/λ, for the forward Euler solution to du/dt = λu, plotted as a function of the scaled time step τλ. The horizontal dividing line at μ/λ = 1 is the exact decay rate.


12.3 Time-Marching Errors and the CFL Stability Limit for Explicit Schemes
Even when there is no error due to spatial discretization, as is true with a Fourier basis, there will still be computational dispersion and/or dissipation because of time-marching errors. The ODE which results from switching to a basis of the eigenvectors of the spatial discretization operators is (12.6), which we repeat for clarity:

du/dt = λ u + f(t)        (12.16)
The simplest explicit scheme is the "forward Euler" (FE) method:

u^{n+1} = (1 + τλ) u^n + τ f(t_n)        (12.17)

where the superscripts denote the time level (not powers) and τ is the time step.
If λ is negative, as for diffusion, then for sufficiently small τ, the FE solution damps out, too. If f is a constant, then the finite difference solution will damp out to the steady state

u_∞ = -f/λ        (12.18)

which is exact.
The bad news is that the FE solution does not damp out at the correct rate. The exact solution decays as exp(λt) whereas the finite difference solution decays towards the steady state as exp(μt) where

μ = λ [log(1 + τλ)/(τλ)] ≈ λ (1 - τλ/2 + ...)        (12.19)
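A few evaluations of the ratio μ/λ = log(1 + τλ)/(τλ) make the shape of Fig. 12.1 concrete; the sample values of τλ below are illustrative choices.

```python
import math

# Numerical/exact decay-rate ratio mu/lambda = log(1 + tau*lambda)/(tau*lambda)
# from (12.19), evaluated at a few scaled time steps tau*lambda.
def ratio(tl):
    return math.log(1 + tl) / tl

print(ratio(1.0))    # 0.693...: growth captured at only ~70% of the true rate
print(ratio(-0.5))   # 1.386...: decay ~40% too fast, well inside the stable range
print(ratio(-0.99))  # 4.65...: near tau*lambda = -1 the damping rate diverges
```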

Fig. 12.1 shows that the numerical solution damps out too rapidly; when τλ = -1, the numerical damping rate is infinitely large! When λ > 0, the finite difference solution grows (as appropriate for "anti-diffusion"), but grows too slowly: at only 70% of the true growth rate when τλ = 1.
When τλ < -1 (not shown on the graph), something worse happens: the numerical solution grows with time even though the exact solution decays. This sort of computational instability was first discovered by Courant, Friedrichs and Lewy (CFL) in 1928. In this case, the "CFL Stability Criterion" is that τ < |1/λ| for computational stability.
Stability is usually presented as an "all-or-nothing" concept, but this is misleading. First, nothing bad happens when τ > |1/λ| and λ > 0: the numerical solution blows up, but the exact solution blows up, too. The only difficulty with τ > |1/λ| is that the numerical solution is growing at only two-thirds or less of the true rate of growth, which is usually an unacceptably large error.
Second, even when λ < 0, Fig. 12.1 shows that the error becomes large even before the CFL criterion is violated. Blow-up seems to the modeller a disaster that appears discontinuously when the CFL criterion is exceeded, but really the instability is just the outlands of a larger region where the time-stepping is inaccurate.
There is one good defense for marching with a time step which is close to the stability limit: when the only goal is to approximate the steady state, large errors in the decay rate are irrelevant. As noted earlier, the Forward Euler method always gives the steady state exactly. Consequently, it is perfectly sensible to solve the Poisson equation

u_xx = f(x)        (12.20)

by applying the FE scheme to the diffusion equation

u_t = u_xx - f(x)        (12.21)

with a time step which is only a little below the stability limit. The steady-state limit of the diffusion equation is the solution to the Poisson equation.
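A minimal pure-Python sketch of this steady-state trick is given below, using second-order finite differences in space as a stand-in for a spectral discretization; the grid size, forcing function, step count, and safety factor on τ are all illustrative choices, not values from the text.

```python
import math

# Sketch: solve u_xx = f on [0,1] with u(0) = u(1) = 0 by marching the
# diffusion equation u_t = u_xx - f to steady state with forward Euler.
# f = -pi^2 sin(pi x), so the exact answer is u = sin(pi x).
N = 20
h = 1.0 / N
tau = 0.25 * h * h                     # a little below the stability limit
x = [i * h for i in range(N + 1)]
f = [-math.pi**2 * math.sin(math.pi * xi) for xi in x]
u = [0.0] * (N + 1)                    # initial guess; boundary values stay zero

for _ in range(20000):                 # march until the transients have died out
    unew = u[:]
    for i in range(1, N):
        unew[i] = u[i] + tau * ((u[i+1] - 2*u[i] + u[i-1]) / h**2 - f[i])
    u = unew

err = max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(N + 1))
print(err)   # ~2e-3: only the O(h^2) spatial error remains; the iteration has converged
```

The large time-marching errors in the decay rate never show up in `err`, exactly as the text argues: only the steady state matters here.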
If, however, we wish to follow the decay of the transients to the steady state, then the errors in the decay rate matter, and one should use a time step which is much shorter than the stability limit.
The diffusion equation with a steady forcing has two time scales: the short time scale on which transients decay towards the steady solution and the infinitely long scale of the steady solution and forcing. Inaccurate resolution of the "fast" time scale [decay] is okay provided that one is only interested in the "slow" time scale. This line of reasoning leads to the concept of the "slow manifold", which will dominate the second half of this chapter.
Other explicit time-marching schemes have stability limits which are similar to that of the Forward Euler method:

τ < q/|λ|        (12.22)

where the constant q depends on the algorithm, but is always O(1). (For the Fourth-Order Runge-Kutta (RK4) method, for example, q varies with arg(λ), but is no smaller than about 2.8.) Unfortunately, these time-stepping constraints can be rather severe.
For the diffusion equation, for example, the eigenvalues in a Fourier basis {exp(ikx)} are λ_k = -k^2. If we truncate to k ∈ [-N/2, N/2], then it follows that the FE will be stable for all wavenumbers only if

τ < 4/N^2        (12.23)

Thus, doubling N requires four times as many time steps to reach a given time. It follows that applying an explicit time-marching scheme to the diffusion equation is not a very fast way to solve Poisson equations: we need O(N^2) time steps to get a good approximation to the steady state. Much faster iterations will be described in Chapter 15.
Chebyshev polynomials have an even more deplorable characteristic: their largest numerical eigenvalues tend to be O(N^{2j}) where j is the order of the highest derivative. Thus, an explicit time-marching scheme for the diffusion equation with Chebyshev spatial discretization is stable only when τ is O(1/N^4) or smaller. Ouch! For this reason, implicit and semi-implicit time-marching algorithms, which allow long time steps, are especially important in the spectral world.
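The O(N^4) eigenvalue growth can be observed directly. The sketch below builds the standard Chebyshev collocation differentiation matrix (the construction popularized by Trefethen's `cheb` function, which is an outside assumption here, not something defined in this text), squares it for the diffusion operator with Dirichlet boundaries, and tabulates the largest eigenvalue.

```python
import numpy as np

# Sketch: largest eigenvalue of the Chebyshev collocation second-derivative
# operator with Dirichlet boundaries grows like O(N^4), forcing tau = O(1/N^4)
# for explicit time-marching of the diffusion equation.
def cheb(N):
    """First-derivative collocation matrix on N+1 Chebyshev points."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal entries from row sums
    return D

results = {}
for N in (8, 16, 32):
    D2 = (cheb(N) @ cheb(N))[1:-1, 1:-1]  # strip boundary rows/columns (Dirichlet)
    results[N] = max(abs(np.linalg.eigvals(D2)))
    print(N, results[N], results[N] / N**4)   # last column is roughly constant
```

Doubling N multiplies the largest eigenvalue by roughly sixteen, which is the practical meaning of "stable only when τ is O(1/N^4)".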


12.4 Implicit Time-Marching Algorithms
Implicit time-marching algorithms for du/dt = F(u, t) are formulas that require us to solve a boundary value problem (BVP) at every time step to advance the solution to the next time level. The reward for this costly BVP solution is that implicit methods are stable for much longer time steps than explicit schemes.
There is a very wide range of implicit algorithms. Lie (1993) has successfully used implicit Runge-Kutta schemes with spectral methods, for example, but implicit RK time-stepping is rather expensive. The most popular implicit schemes include the Backward-Differentiation Formulas (BDF) of various orders and the "Crank-Nicholson" (CN) method. The first order BDF scheme is more commonly known as the "Backwards Euler" method and also as the Adams-Moulton scheme of order one. The Crank-Nicholson scheme is also known as the "Trapezoidal" method or the second order Adams-Moulton algorithm.
For du/dt = F(u, t) where both F and u may be vectors, these algorithms are

[u^{n+1} - u^n] / τ = F(u^{n+1}, t_{n+1})        [Backwards Euler ("BE")]        (12.24)

[(3/2) u^{n+1} - 2 u^n + (1/2) u^{n-1}] / τ = F(u^{n+1}, t_{n+1})        [BDF2]        (12.25)

[(11/6) u^{n+1} - 3 u^n + (3/2) u^{n-1} - (1/3) u^{n-2}] / τ = F(u^{n+1}, t_{n+1})        [BDF3]        (12.26)
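The stability payoff of these formulas is easy to demonstrate on the scalar test problem du/dt = λu. The sketch below applies BE and BDF2, each solved for u^{n+1} with F = λu, at a scaled step τλ = -10, far beyond any explicit limit; forward Euler is included for contrast. The values of λ, τ, and the step count are arbitrary illustrative choices.

```python
# Sketch: stability of implicit schemes on du/dt = lambda*u with a step far
# beyond the explicit limit.  BE and BDF2 follow from (12.24)-(12.25) with
# F = lambda*u solved for u^{n+1}; forward Euler is shown for contrast.
lam, tau = -100.0, 0.1
tl = tau * lam                           # = -10: explicit schemes are unstable here

fe = be = 1.0
b_old, b_new = 1.0, 1.0 / (1.0 - tl)     # BDF2 started with one BE step
for _ in range(50):
    fe *= 1.0 + tl                       # forward Euler: factor |1 + tl| = 9, blows up
    be /= 1.0 - tl                       # backwards Euler: factor 1/11, decays
    b_old, b_new = b_new, (2.0 * b_new - 0.5 * b_old) / (1.5 - tl)   # BDF2 update

print(abs(fe) > 1e40, abs(be) < 1e-40, abs(b_new) < 1e-30)  # True True True
```

Both implicit solutions decay, as the exact solution exp(λt) does, even though the damping rate at so large a step is badly wrong; the explicit solution is useless.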
