[Plot for Figure 12.8: error curves for the Crank-Nicholson and BDF2 schemes, plotted against ετ from 0.1 to 1 on a logarithmic error axis, with horizontal lines for the Nonlinear Galerkin schemes.]

Figure 12.8: Maximum pointwise errors in approximating the slow manifold of du/dt +
iωu = exp(iεt), plotted versus the product of ε with τ. The ratio ε/ω is fixed at 1/10;
the errors for the Nonlinear Galerkin schemes, which are independent of the time step and
therefore appear as horizontal lines, are proportional to (ε/ω)^m for NG(m).

the slow modes. Thus, it is always necessary to choose ετ < 1 in practice, and the implicit
schemes always triumph over NG(1).

It is also true that if we compare implicit schemes with NG(m) in the limit m → ∞ for
fixed time step and ratio of time scales ε, the NG scheme will be superior. This is illustrated

by Fig. 12.8, which compares NG schemes of various orders with implicit methods for a
particular value of ε. However, even though ε = 1/10, the Crank-Nicholson scheme is
better than NG(2) for all τ < 1/ε, that is, whenever there are more than six points per time
period of the slow manifold. Furthermore, because of the complexity of high order NG

methods, most Nonlinear Galerkin studies beginning with Daley (1980) have used NG(1)

only.
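For the forced-oscillator model of Fig. 12.8, the (ε/ω)^m law can be checked in a few lines. Below is a Python sketch with ω = 1 and ε = 1/10; the iterated-balance construction is one standard way to generate the NG hierarchy for a linear problem and is an assumption of this sketch, not a quotation of any particular NG code:

```python
eps = 0.1                    # epsilon/omega = 1/10 (omega = 1), as in Fig. 12.8
A_exact = -1j / (1 + eps)    # exact slow-manifold amplitude: u = A*exp(i*eps*t)

# Iterated balance relation for u_t + i*u = exp(i*eps*t): neglect u_t entirely
# at leading order, then feed the time derivative of the previous iterate back in:
#   i*u_1 = f,   i*u_m = f - du_{m-1}/dt.
# Writing u_m = A_m * exp(i*eps*t) gives the amplitude recurrence A_m = -i - eps*A_{m-1}.
A = -1j                      # NG(1): pure balance, du/dt neglected
errs = []
for m in range(1, 5):
    errs.append(abs(A - A_exact))
    A = -1j - eps * A        # next NG iterate

print(errs)  # each error is exactly eps times the previous: the (eps/omega)^m law
```

Since A_exact is the fixed point of the recurrence, each iteration shrinks the error by exactly a factor of ε, reproducing the horizontal NG(m) lines of the figure.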

García-Archilla and de Frutos (1995) have compared time-marching algorithms for some

simple nonlinear equations. Advection does not change the conclusion: “The results show

that for these problems, the non-linear Galerkin method is not competitive with either pure

spectral Galerkin or pseudospectral discretizations.” Haugen and Machenhauer (1993)

found that NG(1) was not competitive with the semi-implicit method for a time step of

an hour when advection was handled by a semi-Lagrangian scheme.

All this negative evidence raises the question: Why has so much work been done with

Nonlinear Galerkin methods? Part of the answer is that many of the articles in Table 12.2

report considerable cost-savings over conventional algorithms. (However, most studies

CHAPTER 12. IMPLICIT SCHEMES & THE SLOW MANIFOLD


were restricted to explicit schemes versus NG schemes only.) A second answer is that

numerical analysts are as opinionated as soccer fans.

A third answer is that NG schemes are self-initializing. The Crank-Nicholson scheme

is a good method to track the slow manifold, yes, but it is powerless to follow the slow

manifold unless some external agency adjusts the initial conditions first.

A fourth answer is that the Nonlinear Galerkin scheme is a very thoughtful algorithm.

In a problem with multiple time scales that evolves only on the slow time scale, the high

frequency modes are forced only by the slow modes. This, to lowest order, is what really

happens. The insight underlying NG schemes has led to interesting connections between

them and multigrid, hierarchical finite element bases, and a wide variety of other algorithms

as outlined in the many papers of Témam and his collaborators. Perhaps in time, at least
in special cases, the inefficiencies so evident in Figs. 12.7 and 12.8 will be remedied.

12.14 Following the Slow Manifold with

Implicit Time-Marching:

Forced Linear Oscillator

To examine implicit algorithms experimentally, it is useful to consider the same model as in

the previous section:

u_t + iu = exp(iεt)        (12.66)

which, writing u(t) = x(t) + i y(t), is equivalent to the pair of real-valued equations

x_t − y = cos(εt)   &   y_t + x = sin(εt)        (12.67)

The specialization to unit frequency is no restriction because if the frequency is some differ-
ent number ω, we can define a new time variable s by ωt = s, and then the homogeneous
solutions of Eq.(12.64) will oscillate with unit frequency in s. Note that this requires re-
defining ε: the effective perturbation parameter for general ω is ε/ω, that is, the ratio of the
frequency of the forcing to the frequency of the homogeneous solution.

The specialization to a forcing function that is harmonic in time is a little restrictive, but

only a little, because very general forcing functions can be constructed by using Fourier

series or integrals to superpose components of the form exp(iεt) for various ε. However,

the exponential forcing is very convenient because it allows us to explicitly compute the

slow manifold not only for the differential equation but also for its approximation by NG

and various implicit time-marching schemes.
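Because the model is linear, the slow manifold is available in closed form: substituting u = A exp(iεt) into Eq.(12.66) gives A = −i/(1 + ε). A quick numerical check (Python sketch; the value ε = 1/10 and the finite-difference residual test are choices of this sketch):

```python
import numpy as np

eps = 0.1  # ratio of forcing frequency to oscillator frequency (1/10, as in the text)

def u_slow(t):
    # Particular (slow) solution of u_t + i*u = exp(i*eps*t):
    # u = A*exp(i*eps*t) requires A*(i*eps + i) = 1, so A = -i/(1 + eps).
    return -1j * np.exp(1j * eps * t) / (1.0 + eps)

# Check the ODE residual u_t + i*u - exp(i*eps*t) with a finite-difference derivative
t = np.linspace(0.0, 60.0, 2001)
h = t[1] - t[0]
u = u_slow(t)
ut = np.gradient(u, h)
residual = np.max(np.abs(ut + 1j * u - np.exp(1j * eps * t)))
print(residual)  # small: limited only by the finite-difference approximation
```

Note that u_slow(0) = −i/(1 + ε), i.e. x(0) = 0, y(0) = −1/(1 + ε), which is exactly the slow-manifold initial condition used later in Fig. 12.10.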

Fig. 12.9 shows the exact solution, the slow manifold, and the Crank-Nicholson approxi-
mation for an initial condition which excites both fast and slow motion. The timestep is
τ = 1/ε, which gives 2π points per slow temporal period.

The fast component oscillates through ten periods on an interval where the Crank-
Nicholson scheme has only six points, so it is absurd to suppose the time-marching method
can accurately track the fast motion. Still, it fails in an interesting way. The CN scheme does
oscillate around the slow manifold, just like the exact solution. Its cardinal sin is that, as
asserted earlier in Rule-of-Thumb 6, it tremendously slows down the oscillation, approxi-
mating an oscillation of period 2π by an oscillation of period 2τ.
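The slowed oscillation can be read off the Crank-Nicholson amplification factor for the homogeneous fast mode. A Python sketch (the helper name cn_period is, of course, an invention of this sketch):

```python
import cmath
import math

def cn_period(tau):
    # Crank-Nicholson step for u_t = -i*u: u^{n+1} = G*u^n with
    # G = (1 - i*tau/2)/(1 + i*tau/2).  |G| = 1 (no damping), but the phase
    # advance per step saturates near pi instead of growing like tau,
    # so the apparent period approaches two time steps.
    G = (1 - 0.5j * tau) / (1 + 0.5j * tau)
    return 2 * math.pi / abs(cmath.phase(G)) * tau  # apparent period in time units

print(cn_period(0.1))   # fast mode well resolved: close to the true period 2*pi
print(cn_period(10.0))  # coarse step tau = 1/eps: period stretched to roughly 2*tau
```

For small τ the computed period is essentially 2π; for τ = 10 it is stretched to roughly 2τ, which is the behavior visible in Fig. 12.9.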

As asserted earlier, without a separate slow manifold initialization procedure, the Crank-
Nicholson scheme is useless for integrating motion that is supposed to be purely slow

motion. In contrast, the Backwards Euler scheme, which damps high frequencies, will

converge to the slow manifold eventually.
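The contrast between the two schemes is visible in the modulus of their amplification factors for the fast mode u_t = −iu; a minimal sketch (the function names are ad hoc):

```python
def amp_cn(tau):
    # Crank-Nicholson: G = (1 - i*tau/2)/(1 + i*tau/2), so |G| is exactly 1.
    # An off-manifold fast oscillation is never damped, only slowed.
    return abs((1 - 0.5j * tau) / (1 + 0.5j * tau))

def amp_be(tau):
    # Backward Euler: u^{n+1}*(1 + i*tau) = u^n, so |G| = 1/sqrt(1 + tau^2) < 1.
    # The fast transient decays, and the solution relaxes onto the slow manifold.
    return abs(1 / (1 + 1j * tau))

for tau in (0.1, 1.0, 10.0):
    print(tau, amp_cn(tau), amp_be(tau))
```

For large τ, Backwards Euler kills the fast transient in a handful of steps, which is precisely why it can find the slow manifold without help from an initialization procedure.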


When the initial conditions are on the slow manifold, the Crank-Nicholson scheme is

very good even with a coarse grid as illustrated in Fig. 12.10. The maximum error with

roughly six points per period is only 0.012.
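The experiment of Fig. 12.10 is easy to reproduce; a Python sketch (the 60-step integration interval is a choice of this sketch, not of the figure):

```python
import numpy as np

eps, tau = 0.1, 10.0   # eps = 1/10 and tau = 1/eps: about six points per slow period
nsteps = 60

t = tau * np.arange(nsteps + 1)
f = np.exp(1j * eps * t)                 # forcing exp(i*eps*t) at the grid points
u = np.empty(nsteps + 1, dtype=complex)
u[0] = -1j / (1.0 + eps)                 # initial condition ON the slow manifold

# Crank-Nicholson for u_t + i*u = f:
# (u^{n+1} - u^n)/tau + i*(u^{n+1} + u^n)/2 = (f^{n+1} + f^n)/2
for n in range(nsteps):
    rhs = u[n] * (1.0 / tau - 0.5j) + 0.5 * (f[n] + f[n + 1])
    u[n + 1] = rhs / (1.0 / tau + 0.5j)

slow = -1j * np.exp(1j * eps * t) / (1.0 + eps)  # exact slow manifold
max_err = np.max(np.abs(u - slow))
print(max_err)  # of order 0.01, the same size as the 0.012 quoted in the text
```

Despite the absurdly coarse grid relative to the fast time scale, the error stays around one percent, because only the slow motion needs to be resolved.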

12.15 Three Parts to Multiple Time Scale Algorithms: Identification, Initialization, and Time-Marching on the Slow Manifold

Computing slow solutions for a dynamical system that also allows fast motions requires

three key steps:

1. Identification of the Slow Manifold

2. Initialization onto the Slow Manifold

3. Slow Manifold-Following Time-Stepping Algorithm.

Identification of a slow manifold is both a problem in physics or engineering and a

problem in numerical analysis. The same system may or may not have a slow manifold,

depending on the situation and the goals of the modeller. The atmosphere, for example, has

a slow manifold for climate modelling and large-scale weather forecasting. Gravity waves,

convective plumes and small-scale vortices, which can be neglected in climate studies, are

essential to mesoscale meteorology (that is, small time-and-space scale studies). Therefore,

mesometeorology lacks a slow manifold. A forced diffusion problem has a slow manifold
(the steady state) if one is only interested in the long-time solution, but it has no slow

manifold if one wants to track the high frequency transients, too.


Figure 12.9: Imaginary part y(t) for the forced linear oscillator with ε = 1/10 and the initial
conditions x(0) = 0, y(0) = 0, which excite a mixture of fast and slow motion. Solid,
unlabelled curve: slow manifold. x's: exact solution. Solid disks: the Crank-Nicholson
values.



Figure 12.10: Left panel: Imaginary part y(t) for the forced linear oscillator with ε = 1/10
and the initial conditions x(0) = 0, y(0) = −1/(1 + ε), which excite only slow motion.
Solid, unlabelled curve: slow manifold (also the exact solution). Solid disks: Crank-
Nicholson values. Right panel: errors in the Crank-Nicholson approximation, multiplied
by 1000.

Numerical errors can also make fast-evolving components of the discretized model very

uninteresting. Even if there is no physical slow manifold, there may be a numerics-induced

slow manifold where the slow components are those modes which can be accurately
resolved by the Chebyshev discretization.

No kindergartner would be dumb enough to pursue a project without clear goals;

even at ¬ve, my son Ian is an enthusiastic planner. The availability of fast workstations

sometimes has a bad effect on inquiring minds of supposedly much superior maturity.

A common disease of adult numerical analysts is: Compute First, Think Later. But it is

very hard to get good results without first asking: What do I really want from the model?

And what will the model allow me? It is from such metaphysical thinking that the slow

manifold can be identified.

Initialization onto the slow manifold is irrelevant in climate modelling and many other

problems where the dumb and mindless forces of dissipation are sufficient to move the

problem onto the slow manifold soon enough for the modeller. For weather forecasting

and lots of other problems, however, damping works too slowly; the initial data can be
placed on the slow manifold only by some sort of initialization method.

The third step is to choose a numerical algorithm which is good at ignoring the fast
dynamics and accurately evolving the flow along the slow manifold. The standard im-
plicit time-marching schemes usually meet this standard; explicit schemes don't. How-
ever, implicit methods can sometimes fail because of problems of “consistency”, boundary-
splitting errors, and a few subtleties discussed in later chapters.

Slow manifold concepts have always been implicit in implicit time-marching schemes.

Curiously, however, these ideas have been glossed over in numerical analysis texts. In

meteorology, for example, gravity wave-filtering approximations have been used since the


late 1940s. However, the concept of the slow manifold was not formally defined until
Leith (1980). And yet the notion of filtering the gravity waves to leave only the slow flow,
as embodied in Charney's 1947 quasi-geostrophic equations of motion, is as goofy as a
Bugs Bunny cartoon unless the large-scale flow in the atmosphere does lie on or very near
a subspace of purely slow motion in a phase space that allows both fast and slow dynamics.

In regard to time-marching algorithms, it is better to be explicit about the reason for

implicit. Otherwise, implicit algorithms become a kind of vague intuition, as unreliable as

a child's belief that hiding under a blanket, because it keeps out the light, will also keep

out the thunder and lightning. Implicit algorithms are neither voodoo nor a lucky charm,

but an efficient way to integrate along the slow manifold.

Chapter 13

“Fractional Steps” Time

Integration: Splitting and Its

Cousins

“Divide et impera” (“Divide and conquer”)

--Louis XI

“Bender's Law: Different but reasonable schemes for the same problem require about the
same work.”

--Carl M. Bender

13.1 Introduction

Implicit time-marching algorithms are usually quite expensive per time step because of the

need to solve a boundary value problem at each time step. The exceptions are:

1. One (spatial) dimension.

2. Semi-implicit methods with a Fourier or spherical harmonics basis which require

solving only constant coefficient boundary value problems; these are trivial in a

Galerkin formalism.

3. Problems which are periodic in all but one spatial dimension.
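The second exception can be seen in a few lines: in a Fourier basis, a constant coefficient implicit solve is a pointwise division of spectral coefficients. A sketch, where the operator (1 − τ d²/dx²) and the right-hand side are illustrative choices:

```python
import numpy as np

# With a Fourier basis, the implicit solve (1 - tau*d^2/dx^2) u = r is diagonal:
# each spectral coefficient is simply divided by (1 + tau*k^2).  No linear algebra.
n, tau = 64, 0.01
x = 2 * np.pi * np.arange(n) / n
r = np.exp(np.cos(x))                 # any smooth periodic right-hand side
k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers
u = np.fft.ifft(np.fft.fft(r) / (1 + tau * k**2)).real

# Residual check: u - tau*u_xx should reproduce r to roundoff
u_xx = np.fft.ifft(-(k**2) * np.fft.fft(u)).real
resid = np.max(np.abs(u - tau * u_xx - r))
print(resid)  # machine-precision small
```

The same division-by-a-scalar structure holds for spherical harmonics acted on by the Laplacian, which is why semi-implicit spectral models are so cheap per step.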

For spherical harmonic climate models, for example, it is only necessary to solve one-
dimensional boundary value problems (in height), one BVP for the vertical dependence
of each spherical harmonic coefficient. But what is one to do for the vast number of prob-
lems which require solving multi-dimensional BVPs by means of Chebyshev polynomials
every timestep?
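The chapter's answer, previewed as a sketch: replace the single multi-dimensional solve by a sequence of one-dimensional solves. In the Python sketch below, a second-order finite difference matrix stands in for a (dense, but still one-dimensional) Chebyshev differentiation matrix; the grid size, τ, and the pure-diffusion model are all assumptions of the sketch:

```python
import numpy as np

# Backward Euler for u_t = u_xx + u_yy on an m x m grid couples all m^2 unknowns;
# a split step replaces the 2-D solve with a sequence of 1-D solves.
m, tau = 16, 0.01
h = 1.0 / (m + 1)
D2 = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
      + np.diag(np.ones(m - 1), -1)) / h**2     # 1-D second derivative (Dirichlet)
I = np.eye(m)

rng = np.random.default_rng(0)
u0 = rng.random((m, m))

# Unsplit Backward Euler: one big m^2 x m^2 system (I - tau*(L_x + L_y)) u = u0
L = np.kron(D2, I) + np.kron(I, D2)
u_unsplit = np.linalg.solve(np.eye(m * m) - tau * L, u0.ravel()).reshape(m, m)

# Split step: x-diffusion then y-diffusion, each only m x m solves
Ax = I - tau * D2
u_half = np.linalg.solve(Ax, u0)           # (I - tau*D2)^{-1} along the x-direction
u_split = np.linalg.solve(Ax, u_half.T).T  # same 1-D operator along the y-direction

diff = np.max(np.abs(u_split - u_unsplit))
print(diff)  # the splitting error: the two answers agree up to O(tau^2) per step
             # on well-resolved modes
```

The payoff is cost: the split step solves 2m tridiagonal-sized systems instead of one system of dimension m², and the gap widens rapidly in three dimensions.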


[Figure: schematic contrasting an unsplit step, which advances the full problem simultaneously from t = 0 to t = τ, with a split step, which applies the diffusion pieces sequentially (ending with y-diffusion) over the same interval.]