
Figure 11.11: Solid: spectrum of energy-containing eddies (“synoptic eddies” in Phillips' simulation). Circles: Fourier coefficients of the unresolved fronts. The wavenumber axis runs from k = 0 to k = π/h.

will spontaneously develop fronts too thin to be numerically resolved (if the viscosity is very weak) and (ii) the energy spectrum will fall off as O(k^{-3}) for wavenumbers smaller than k_diss, the beginning of the dissipation range, where k is the total wavenumber. For the Fourier coefficients (as opposed to the energy, whose spectrum is proportional to the absolute value squared of the Fourier coefficients), this implies |a_k| ∼ O(1/k^{3/2}). The crucial point is that the Fourier coefficients of a jump discontinuity (like a front or the “sawtooth” function) decrease more slowly, as O(1/k). The implication is that the fronts have rather small energy; otherwise, the energy spectrum would decay as O(1/k^2).
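These decay rates are easy to check numerically. The sketch below (an illustration of my own, not from the text) samples a sawtooth, whose periodic extension has a jump discontinuity, and verifies that its Fourier coefficients fall off as O(1/k), i.e. that k |a_k| is roughly constant:

```python
import numpy as np

# Sawtooth f(x) = x on [-pi, pi): its periodic extension jumps at
# x = +/- pi, so the Fourier coefficients should decay only as O(1/k).
N = 1024
x = 2 * np.pi * np.arange(N) / N - np.pi
a = np.fft.fft(x) / N              # discrete Fourier coefficients

k = np.arange(1, 9)
print(np.abs(a[k]) * k)            # k * |a_k|: roughly constant (close to 1)
```

The same experiment with a smooth periodic function shows the coefficients plunging far faster, which is why a weak front barely dents the total energy even though it dominates the high-wavenumber tail.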

Thus, it is possible for the underresolved small features to contain little energy compared to large, well-resolved features. Spectral blocking in Phillips' model could therefore be described (conjecturally) by Fig. 11.11.

On the other hand, Brown and Minion (1995), Minion and Brown (1997) and the studies reviewed by Aoyagi (1995) have identified some specific computational instabilities. These seem to occur only for marginally-resolved or poorly-resolved flow, thank goodness. When sideband resonance is the villain, the bland statement “too little resolution” seems insufficient. Perhaps with a little tinkering, the algorithm might be stabilized against sideband resonance (or whatever) so that the integration could be extended without a shift to a finer spatial grid.

The theory of aliasing instability and blow-up is, on the whole, in a pretty sorry state. It matters because many important flows can only be marginally resolved, forcing the use of energy-conserving algorithms, dealiasing and so on. We could fight the instability much better if we only understood what we were fighting!

11.9 Summary

Spectral blocking, 2h-waves and blow-up are as much a problem today as in the time of Phillips' first GCM experiment. The bad news is that despite Phillips' own demonstration

[Figure 11.12: left column shows log a_k versus t evolving from a smooth spectrum to Spectral Blocking to Blow-Up; the right arrows list the remedies: Smaller h / Adaptive Scheme, Filtering / Artificial Viscosity, Skew-symmetric advection, Energy-conserving, Front-tracking (PPM, FCT, etc.), Dealiasing / 2/3 Rule.]

Figure 11.12: Schematic of the time evolution of a model from a smooth initial condition (top left) to a flow that has lost some accuracy to spectral blocking (middle left) before blowing up (bottom left), and six types of remedies for spectral blocking and blow-up (right arrows).

of the wrong-way energy cascade created by aliasing, the causes of blow-up are still not

well understood. The good news is that we have a quiver full of arrows to shoot at the

problem (Fig. 11.12).

The first arrow is to increase the resolution. Phillips found that this didn't help very much, buying only a few hours of additional time. However, this may only reflect the fact that frontogenesis happens very rapidly once the eddies have grown to a reasonable size. It is dangerous to always blame blow-up on a numerical villain, a computational Bogeyman named Aliasing Instability. It may just be that the flow is physically generating features whose width is 1/1000 of the size of the computational domain, and a model with N = 100 fails simply because it cannot resolve them.

Adaptive schemes are very useful in this context because often a flow can be resolved with rather small N over much of its lifetime before the front develops. With variable resolution, that is, a fine grid around the front only, the narrow features may be resolved with only moderate N. Cloot, Herbst and Weideman (1990), Bayliss, Gottlieb, Matkowsky and Minkoff (1989), Bayliss and Turkel (1992), Bayliss, Garbey and Matkowsky (1995) and


Augenbaum (1989, 1990) are illustrations.

The second arrow is filtering or the use of an artificial, computational viscosity (Chapter 18, Sec. 20).

The third arrow is skew-symmetric advection. This forces the numerical discretization of advection to provide pure translation without adding a little bit of spurious numerical growth or decay. It seems to be helpful with marginally-resolved flows.
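To make the idea concrete, here is a minimal sketch of my own (not from the text), using a Fourier pseudospectral derivative and the energy-conserving split of the Burgers nonlinearity, u u_x = (1/3)[u u_x + (u²)_x], often called its skew-symmetric form. Because the spectral derivative operator is antisymmetric, the skew form contributes exactly zero (to roundoff) to the discrete energy tendency, while the plain advective form does not on an underresolved field:

```python
import numpy as np

N = 64
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)      # ik differentiation multipliers

def ddx(u):
    """Fourier pseudospectral derivative on a periodic grid."""
    return np.real(np.fft.ifft(ik * np.fft.fft(u)))

rng = np.random.default_rng(0)
u = rng.standard_normal(N)                  # rough field: energy at all wavenumbers

rhs_advective = u * ddx(u)                  # u u_x
rhs_skew = (u * ddx(u) + ddx(u * u)) / 3.0  # (1/3)[u u_x + (u^2)_x]

# Discrete energy tendency sum_j u_j * N(u)_j:
print(abs(np.dot(u, rhs_advective)))        # contaminated by aliasing: not small
print(abs(np.dot(u, rhs_skew)))             # zero to machine roundoff
```

For the incompressible advection operator u·∇φ, the analogous energy-conserving split is (1/2)[u·∇φ + ∇·(uφ)] when u is divergence-free.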

The fourth arrow is an energy-conserving algorithm. Canuto et al. (1988, Sec. 4.6) give an example of a magnetohydrodynamic flow where this is helpful. Energy-conserving schemes are likely to be useful for climate modelling, too, where the flow is integrated over a very long period of time. Symplectic integration schemes for Hamiltonian systems go further, preserving the Hamiltonian structure exactly and keeping the energy nearly conserved over very long integrations.
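A toy illustration of my own of why structure preservation matters over long integrations: for the harmonic oscillator q' = p, p' = -q, explicit Euler pumps a little energy into the system at every step, while the symplectic Euler variant (which merely evaluates the two updates in sequence) keeps the energy bounded forever:

```python
# Harmonic oscillator H = (p^2 + q^2)/2; exact energy is 0.5 for (q, p) = (1, 0).
def euler_step(q, p, h):
    return q + h * p, p - h * q            # both updates use the old values

def symplectic_euler_step(q, p, h):
    p = p - h * q                          # "kick" with the old q ...
    return q + h * p, p                    # ... then "drift" with the new p

h, nsteps = 0.1, 10_000
qe, pe = 1.0, 0.0
qs, ps = 1.0, 0.0
for _ in range(nsteps):
    qe, pe = euler_step(qe, pe, h)
    qs, ps = symplectic_euler_step(qs, ps, h)

E_euler = 0.5 * (qe**2 + pe**2)            # grows like (1 + h^2)^n: enormous
E_sympl = 0.5 * (qs**2 + ps**2)            # oscillates within O(h) of 0.5
print(E_euler, E_sympl)
```

Symplectic Euler does not conserve H exactly; it exactly conserves a nearby quadratic invariant, which is why its energy error stays O(h) no matter how long the run.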

The fifth arrow is to use a front-resolving scheme like PPM or MUSCL or Flux-Corrected Transport (FCT) or the like. Spectral generalizations of most of these algorithms are now available.
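The following sketch of mine conveys the flavor of these schemes; it is not PPM or FCT itself, but a minimal minmod-limited (MUSCL-type) step for linear advection. The limiter prevents the Gibbs overshoots that an unlimited high-order scheme would produce at a front:

```python
import numpy as np

def minmod(a, b):
    """Pick the smaller slope when signs agree, zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, cfl):
    """One forward-Euler step of limited upwind advection (c > 0, periodic)."""
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slope
    ul = u + 0.5 * (1.0 - cfl) * du                      # face state at i+1/2
    flux = cfl * ul                                      # upwind flux * dt/dx
    return u - (flux - np.roll(flux, 1))

N = 100
u = np.where(np.abs(np.arange(N) - 50) < 10, 1.0, 0.0)   # square pulse: a "front"
v = u.copy()
for _ in range(200):
    v = muscl_step(v, 0.5)
print(v.min(), v.max())    # stays within [0, 1]: no spurious oscillations
```

The step is conservative (the fluxes telescope), so the pulse is smeared slightly but never develops new extrema, which is exactly the property one wants near a front.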

The sixth arrow is the Two-Thirds Rule. This is really just a special case of filtering, but we list it separately because it is often discussed in the literature as something separate with a different motive from conventional filters: to dealias a quadratically nonlinear computation. However, Orszag's procedure discards 1/3 of the numerical degrees of freedom in one dimension, 5/9 in two dimensions and 19/27 in three dimensions. One pays a very heavy price just to eliminate aliasing, so this is probably a last resort.
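In one dimension the rule is simple to implement (a minimal sketch, assuming a periodic Fourier pseudospectral grid): truncate both factors to |k| ≤ N/3 before multiplying on the grid, then truncate the product again; every aliased wavenumber then lands in the discarded band, so the retained modes are alias-free.

```python
import numpy as np

def dealiased_product(u, v):
    """Pointwise product with 2/3-rule truncation: modes |k| > N/3 are
    zeroed in the factors and in the product, so no kept mode is aliased."""
    N = len(u)
    k = np.abs(np.fft.fftfreq(N, d=1.0 / N))
    mask = k <= N // 3
    u_t = np.real(np.fft.ifft(np.fft.fft(u) * mask))
    v_t = np.real(np.fft.ifft(np.fft.fft(v) * mask))
    return np.real(np.fft.ifft(np.fft.fft(u_t * v_t) * mask))

N = 32
x = 2 * np.pi * np.arange(N) / N
u = np.cos(9 * x)                  # a kept mode: 9 <= N/3

# cos(9x)^2 = 1/2 + (1/2) cos(18x); mode 18 aliases to mode 14 on this grid.
plain = u * u                      # contains a spurious cos(14x) component
clean = dealiased_product(u, u)    # only the alias-free mean 1/2 survives
print(np.abs(np.fft.fft(plain))[14] / N, np.max(np.abs(clean - 0.5)))
```

In practice the same dealiasing is often implemented by zero-padding the transforms to length 3N/2 (the “3/2 rule”), which gives identical retained coefficients at somewhat different cost.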

Note that the Two-Thirds Rule is an “all-or-nothing” filter: the coefficients are unmodified or completely discarded. To resolve fronts and other narrow features, filters which vary smoothly and continuously with wavenumber have become quite popular and successful. Experience and some theory suggest that a smooth filter is better than a step function (in wavenumber) like the Two-Thirds Rule. Since the higher wavenumbers are clobbered by the filter anyway, one is likely to gain most of the benefits of dealiasing.
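A popular smooth choice is an exponential filter of order 2p; the sketch below is my own illustration with representative parameters, not a prescription from the text. It leaves the low wavenumbers essentially untouched while damping the highest retained mode down to machine epsilon:

```python
import numpy as np

def exp_filter(u, p=8):
    """Exponential spectral filter sigma(k) = exp(-alpha (|k|/K)^(2p)),
    with alpha chosen so that sigma = machine epsilon at the Nyquist K."""
    N = len(u)
    k = np.abs(np.fft.fftfreq(N, d=1.0 / N))
    K = N / 2
    alpha = -np.log(np.finfo(float).eps)    # about 36.04
    sigma = np.exp(-alpha * (k / K) ** (2 * p))
    return np.real(np.fft.ifft(sigma * np.fft.fft(u)))

N = 64
x = 2 * np.pi * np.arange(N) / N
smooth = np.cos(2 * x)            # low mode: passes through essentially unchanged
rough = np.cos((N // 2) * x)      # Nyquist mode: damped to roundoff level
print(np.max(np.abs(exp_filter(smooth) - smooth)),
      np.max(np.abs(exp_filter(rough))))
```

Raising p sharpens the transition toward a step (Two-Thirds-like) filter; lowering it smears the damping over more of the spectrum.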

There are two important things to remember before applying these remedies. The first is that for a well-resolved flow, none of them are needed! A spectral code will do a very good job of preserving the energy over a fairly long time simply because of its inherent exponential accuracy. It is only when the flow develops fronts, shocks and other pathologies that one ever needs to think about tinkering with the algorithm to enforce conservation of the discrete energy. Otherwise, one is incurring extra cost, both your time and the computer's, for no particular gain.

The second point is that when these remedies succeed, they succeed dangerously. That is to say, these interpolation-with-physics-constraints schemes force the computed flow to be smooth and the energy to stay constant even when the numerical errors are large. If the eddies have a characteristic scale of 1000 km, it is clear that a very coarse grid with h = 5000 km cannot possibly furnish an accurate solution. However, with an Arakawa-type energy-conserving scheme, the numerical solution won't blow up. Indeed, the dreadfully-underresolved solution may look quite smooth, even though it's nonsense.

For his thesis, a colleague ported the energy-conserving Arakawa-Mintz model from the atmosphere to the ocean. Unfortunately, because ocean eddies have a diameter of O(100 km), which is very small compared to the width of an ocean basin, it was not possible to explicitly resolve the eddies in 2 B. C. [Before Cray]². However, it was possible to make a short run at higher resolution. The horror! The horror! With smaller h, everything changed!

Nevertheless, the production run was published. Although it certainly had major errors, my friend was unable to convince his advisor of this because the Arakawa scheme (and viscosity) made the computed solutions look smooth even though the flows were actually rubbish.

² The Cray-1, the first supercomputer, was introduced in 1976.


In contrast, a code which is resolving a flow ought to approximately conserve energy for a long time, even if this property is not explicitly built into the discretization. If the code isn't resolving the flow, well, perhaps it ought to blow up.

Smoothness and energy conservation are like the shining glitter of gold. Perhaps it really is gold, or perhaps it is only iron pyrites, “fool's gold”, which gleams as brightly but is worth almost nothing. The number-cruncher must be as suspicious of glitter as the mineralogist.

Chapter 12

Implicit Time-Marching, the Slow Manifold and Nonlinear Galerkin Methods

“In the terminology which you graciously ascribe to me, we might say that the atmosphere is a musical instrument on which one can play many tunes. High notes are sound waves, low notes are long inertial [Rossby] waves, and nature is a musician more of the Beethoven than of the Chopin type. He much prefers the low notes and only occasionally plays arpeggios in the treble and then only with a light hand. The oceans and the continents are the elephants in Saint-Saëns' animal suite, marching in a slow cumbrous rhythm, one step every day or so. Of course there are overtones: sound waves, billow clouds (gravity waves), inertial oscillations, etc., but these are unimportant and are heard only at N. Y. U. and M. I. T.”

Jule Charney (1917-1982) (from a 1947 letter to Philip Thompson)

12.1 Introduction

When the partial differential equation is linear and the “method of lines” is applied to

discretize the spatial dependence, the system of ordinary differential equations in time is

linear, too, and can be written

du/dt = Λu + f(t)        (12.1)

where Λ is a large square matrix of dimension N, where N is the number of elements in the column vector u, and where f(t) is the discretization of the inhomogeneous terms in the differential equation. (We assume that the boundary conditions have been dealt with by basis recombination or whatever; the vector f may contain terms proportional to inhomogeneous boundary conditions in some or all of its elements.)


The “configuration space” of the model is just the N-dimensional space of the vector u, the spectral coefficients or grid point values. The instantaneous state of the model is a point in configuration space; its evolution through time is a one-dimensional curve in configuration space, parameterized by the single coordinate t.

Eq. (12.1) is completely general and applies whether the spatial discretization is spectral, finite difference or finite element, and also whether the unknowns are grid point values or Chebyshev polynomial coefficients. In general, the elements of the matrix Λ are functions of time. In most of this chapter, however, we shall make the further assumption that Λ is independent of time.

In a world which is highly nonlinear and time-dependent, these assumptions would seem drastic. Indeed, such approximations cannot capture the mysteries of solitary waves, chaos, aliasing instability and a wide variety of other phenomena. Nevertheless, such a simplified model as (12.1) can teach us much about time-marching algorithms. The reason is that accurate time-stepping requires a time step which is short compared to the time scales on which the coefficients of the differential equation and u itself are changing. This implies that the matrix Λ will not change much between the time t = t0 and the time t = t0 + τ, where τ is the timestep; otherwise, we should shorten the timestep.

The conceptual advantage of the linear-with-time-independent-coefficient approximation is that the linear, constant coefficient ODE system (12.1) has the general solution

u(t) = exp(Λt) u0 + exp(Λt) ∫_0^t exp(−Λs) f(s) ds        (12.2)

where u0 ≡ u(t = 0). The matrix exponential is most easily interpreted by expanding u in

terms of the eigenvectors of the square matrix Λ. If

Λ e_j = λ_j e_j        (12.3)

and

u = Σ_{j=1}^{N} b_j e_j        (12.4)

then in terms of the eigenvector basis, Λ becomes a diagonal matrix with its eigenvalues λ_j as the diagonal elements, and the ODE system collapses to the uncoupled set of ordinary differential equations

db_j/dt = λ_j b_j + g_j(t)        (12.5)

where g_j(t) is the j-th coefficient of the eigenexpansion of the forcing function f(t).
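For the homogeneous case f = 0, this decoupling is easy to verify numerically (a sketch of my own): evolve each eigencoefficient as b_j(t) = exp(λ_j t) b_j(0), reassemble u, and compare against the matrix exponential computed by an independent route (Taylor series with scaling and squaring):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
Lam = rng.standard_normal((N, N))          # a generic (diagonalizable) matrix
u0 = rng.standard_normal(N)
t = 0.3

# Eigenvector route: u(t) = E diag(exp(lambda_j t)) E^{-1} u0.
lam, E = np.linalg.eig(Lam)
b0 = np.linalg.solve(E, u0.astype(complex))
u_eig = np.real(E @ (np.exp(lam * t) * b0))

# Independent reference: exp(Lam t) via Taylor series with scaling and squaring,
# exp(A) = exp(A / 64)^64 with a 24-term series for the small argument.
A = Lam * (t / 64)
M = np.eye(N)
term = np.eye(N)
for n in range(1, 25):
    term = term @ A / n
    M = M + term
for _ in range(6):                         # square 6 times: 2^6 = 64
    M = M @ M
print(np.max(np.abs(u_eig - M @ u0)))      # the two routes agree closely
```

This is precisely why the scalar model problem (12.6) below is representative: each eigendirection of Λ evolves independently with its own λ_j.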

It follows that we can learn almost everything about time-marching schemes, to the extent that neglecting nonlinearity and other time-dependence of the coefficients of the PDE is legitimate, by understanding how different algorithms integrate

du/dt = λu + f(t)        (12.6)

where all the unknowns are now scalars, not vectors. For a simple wave equation like