Λ_(k+1) = Λ_k + 1/µ_(k+1),        u^(k+1) ← u^(k+1) / || u^(k+1) ||        (7.55)

The disadvantage of the inverse power method is that the cheap matrix-vector multiply is replaced by solving a matrix equation, which is O(N) more expensive (by direct methods). However, as explained in Chapter 15, one can often solve the matrix equation implicitly (and cheaply) through a preconditioned Richardson iteration instead.

The inverse power method is very powerful because it can be applied to compute any

eigenmode. Indeed, the routines in many software libraries compute the eigenfunctions

by the inverse power method after the eigenvalues have been accurately computed by a

different algorithm.
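As a concrete illustration, here is a minimal dense-matrix sketch of the inverse power iteration in Python. The function name and the 3 × 3 test matrix are our own, not from the text, and a direct solve stands in for the preconditioned Richardson iteration of Chapter 15:

```python
import numpy as np

def inverse_power(A, shift, maxit=100, tol=1e-10):
    """Inverse power iteration with a shift that is updated each step.

    Each pass replaces the cheap matrix-vector multiply of the ordinary
    power method by one linear solve with the shifted matrix."""
    n = A.shape[0]
    lam = complex(shift)
    u = np.ones(n, dtype=complex)
    u /= np.linalg.norm(u)
    for _ in range(maxit):
        v = np.linalg.solve(A - lam * np.eye(n), u)
        mu = np.vdot(u, v)         # v is roughly u/(lambda - lam), so mu ~ 1/(lambda - lam)
        lam = lam + 1.0 / mu       # eigenvalue update as in Eq. (7.55)
        u = v / np.linalg.norm(v)  # normalized eigenvector iterate
        if abs(1.0 / mu) < tol:    # correction negligible: converged
            break
    return lam, u

# test matrix with known eigenvalues 1, 2 and 5 (a rotated diagonal matrix)
th = 0.3
Q = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
A = Q @ np.diag([1.0, 2.0, 5.0]) @ Q.T

lam, u = inverse_power(A, shift=2.1)   # shift near the interior eigenvalue 2
```

Starting the shift near any eigenvalue, even an interior one, steers the iteration to that mode, which is exactly why the method can be applied to compute any eigenmode once a rough eigenvalue estimate is in hand.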

The power and inverse power algorithms are but two members of a wide family of iterative eigenvalue solvers. The Arnoldi method, which does not require storing the full matrix A, is particularly useful for very large problems. Although the algorithm is too complicated to describe here, Navarra (1987) used Arnoldi's method to solve geophysical problems with as many as 13,000 spectral coefficients.

7.10 Mapping the Parameter Space: Combining Global and Local Methods

In stability calculations, one often wants to compute growth rates as a function of two or more parameters as shown schematically in Fig. 7.11. The so-called “neutral curve”, which is the boundary between stability and instability in parameter space, is usually an important goal. Unfortunately, for problems such as the barotropic vorticity equation, the neutral curve is precisely where the linearized equation is singular on the interior of the domain, similar to a Sturm-Liouville eigenproblem of the Fourth Kind.

CHAPTER 7. LINEAR EIGENVALUE PROBLEMS

[Figure 7.11 here: a coarse grid of Q's in the plane spanned by Parameter One (horizontal axis) and Parameter Two (vertical axis).]

Figure 7.11: Schematic of a stability calculation in a two-dimensional parameter space. The unstable region is shaded; it is bounded by the “neutral curve”. To map the parameter space, it is efficient to use the QR/QZ algorithm to solve the matrix eigenproblem on a coarse grid (large Q's) and fill in the gaps with an inexpensive local iteration, such as the power or inverse power method, applied to the pseudospectral matrix (solid disks). This dual strategy is much cheaper than using the QR/QZ algorithm everywhere: QR/QZ has an O(10N³) cost versus O(N²) for either the power method or the inverse power method (with finite difference preconditioned iteration for the latter). At the same time, it is much safer than using a one-mode-at-a-time eigensolver everywhere because this may easily miss important eigenmodes.

In the next section, we describe a good strategy for computing the eigenvalues even when the differential equation is singular. However, there is a more serious problem: to map a two-dimensional parameter space on a relatively coarse 32 × 32 grid requires solving more than a thousand cases. If the QR/QZ algorithm is used for each, this may be prohibitively expensive, especially in multiple space dimensions, even though a good workstation never sleeps.

However, it is easy to miss modes with a local iterative method. Many an arithmurgist has traced one branch of unstable modes with loving care, only to find, years later, that another undiscovered mode had faster growth rates in at least part of the parameter space. A safer strategy is to apply the expensive global algorithm (QR/QZ) on a coarse grid and then “connect-the-dots” by using an iterative scheme as illustrated in Fig. 7.11.


7.11 Detouring into the Complex Plane: Eigenproblems with Singularities on the Interior of the Computational Domain

Hydrodynamic stability and wave problems often have solutions with branch points on or near the computational domain. These singularities, usually called “critical latitudes” or “critical points”, create severe numerical difficulties. However, a good remedy (Boyd, 1985a) is to make a transformation from the original variable y such that the problem is solved on a curve in the complex y-plane rather than on the original interval on the real y-axis. With the proper choice of map parameter, one can loop the curve of integration away from the singularity so that it does not degrade numerical accuracy.

To illustrate, consider

u_yy + { 1/y − λ } u = 0    with    u(a) = u(b) = 0,    a < 0 < b,        (7.56)

where λ is the eigenvalue. If a and b were of the same sign so that the singularity was not on the interior of the interval, y ∈ [a, b], then (7.56) would be a normal, self-adjoint Sturm-Liouville problem. As it is, not only the differential equation but also the solution is singular on the interior of the interval.

After we have made a simple linear stretching to shift the interval from [a, b] to [−1, 1], an effective transformation for (7.56) is

y = x + iΔ(x² − 1)        (7.57)

where Δ is a (possibly complex) mapping parameter. We solve the problem using a Chebyshev series in x ∈ [−1, 1] in the standard way. Because of the change of variable, however, the real interval in x is an arc in the complex y-plane which detours away from the singularity. Since u(y) has a branch point at y = 0, the choice of looping the contour above or below the real y-axis is an implicit choice of branch cut. The correct choice can be made only by a careful physical analysis; for geophysical problems, Boyd (1982c) explains that the proper choice is to go below the real axis by choosing Δ > 0 in (7.57); this implicitly forces the branch cut to lie in the upper half of the y-plane.
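The geometry of the detour is easy to verify numerically. This short sketch (our own) traces the contour of Eq. (7.57) with the map parameter Δ = 1/2 used in Table 7.2, and confirms that the endpoints stay fixed while the interior of the path dips below the real axis, a fixed distance away from the branch point at y = 0:

```python
import numpy as np

Delta = 0.5                          # map parameter, as in Table 7.2
x = np.linspace(-1.0, 1.0, 401)      # the Chebyshev variable stays real
y = x + 1j * Delta * (x**2 - 1.0)    # Eq. (7.57): the contour in the y-plane

# |y|^2 = 0.25 + 0.5 x^2 + 0.25 x^4 for Delta = 1/2, so the contour never
# comes closer than 1/2 to the singularity at y = 0
closest = np.abs(y).min()
```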

Fig. 7.12a shows the contour of integration (dashed) while Table 7.2 illustrates the results for (7.56). The basis functions are sums of Chebyshev polynomials which vanish at the endpoints so that each φn(x) individually satisfies the boundary conditions at x = ±1. Despite the singularity, only 6 basis functions (a 6 × 6 matrix eigenproblem) are sufficient to yield both the real and imaginary parts of the lowest eigenvalue to within an error of less than 1.4%.
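The calculation behind Table 7.2 can be sketched with a standard Chebyshev collocation (differentiation-matrix) discretization. This is our own reconstruction, not the book's code: the cheb matrix is the classic construction found in Trefethen's Spectral Methods in MATLAB, and deleting boundary rows and columns stands in for the text's recombined basis functions that vanish at the endpoints.

```python
import numpy as np

def cheb(N):
    """Chebyshev points x_j = cos(pi j / N) and the collocation
    differentiation matrix (classic construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)                       # 1 ... -1
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N, Delta = 40, 0.5
D, x = cheb(N)
D2 = D @ D

# stretch [-1,1] -> [-6,6], then detour below the axis via Eq. (7.57)
g = 6.0 * (x + 1j * Delta * (x**2 - 1.0))     # y(x) on the complex contour
gp = 6.0 * (1.0 + 2j * Delta * x)             # dy/dx (never zero here)
gpp = 12j * Delta * np.ones(N + 1)            # d2y/dx2

# chain rule: u_yy = u_xx/(y')^2 - u_x y''/(y')^3, so on the grid the
# operator u_yy + u/y becomes
L = np.diag(1.0 / gp**2) @ D2 - np.diag(gpp / gp**3) @ D + np.diag(1.0 / g)

# delete boundary rows/columns to impose u(+-6) = 0, then L u = lambda u
eigs = np.linalg.eigvals(L[1:-1, 1:-1])
```

If the discretization above is faithful, the computed spectrum should contain eigenvalues matching the converged N = 40 entries of Table 7.2, such as 0.125054 + 0.284986i for the lowest mode; as in the table, the matrix also produces spurious eigenvalues that must be filtered out.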

The rightmost column is also a personal embarrassment. Boyd (1981b) solved this same problem using an artificial viscosity combined with an iterative finite difference method, and missed two eigenvalues with very small imaginary parts (Modes 3 and 7 in the table). The mapped Chebyshev procedure does not require an artificial viscosity or any first guesses; the QR algorithm will find all the eigenvalues of the matrix eigenvalue problem automatically.

The method does have a weakness: the Chebyshev series for u(y) converges most rapidly for real x, but this is an arc of complex y. How is one to interpret an imaginary latitude? The series converges more and more slowly as we move away from the arc that corresponds to real x, and it must diverge at the singularity at y = 0. Therefore, the detour into the complex plane is directly useful only for computing the eigenvalues.

Once we have λ, of course, we can use a variety of methods to compute the corresponding eigenfunction. Power series expansions about the branch point at y = 0 in combination


Table 7.2: Eigenvalues of a Singular Sturm-Liouville Problem: The First 7 Eigenvalues of u_yy + {1/y − λ}u = 0 Subject to u(±6) = 0 as Computed Using the Complex Parabolic Mapping y = x + iΔ(x² − 1) with Δ = 1/2.

Note: N is the number of Chebyshev polynomials retained in the truncation. The errors for N < 40 are the differences from the results for N = 40 for the modes shown, listed as real part / imaginary part. Modes that are wildly in error or violate the theorem that the imaginary part of the eigenvalue is always positive have been omitted; thus only one eigenvalue is listed for N = 6 although the Chebyshev-discretized matrix eigenproblem had five other eigenvalues.

n   λ(N=6)               Error (N=6)     λ(N=20)                   Error (N=20)         λ(N=40)
1   0.1268 + i 0.2852    1.4% / 0.09%    0.125054 + i 0.284986     Negligible           0.125054 + i 0.284986
2                                        −0.296730 + i 0.520320    Negligible           −0.296730 + i 0.520320
3                                        −0.638802 + i 0.000262    Negligible           −0.638802 + i 0.000262
4                                        −1.21749 + i 0.563350     0.0008% / 0.0004%    −1.21750 + i 0.563349
5                                        −1.60228 + i 0.007844     0.024% / 0.24%       −1.60190 + i 0.007825
6                                                                                       −2.72598 + i 0.490012
7                                                                                       −3.10114 + i 0.070176

with shooting for larger y should work quite well. Since λ is known, there is no need for iteration; the Runge-Kutta method, initialized near y = 0 via the power series, will automatically give u(a) = u(b) = 0. Still, it is unpleasant to be forced to compute the eigenfunctions in a second, separate step.

For hydrodynamic instabilities which are strongly unstable, the critical points are at complex y. As shown through a barotropic instability calculation in Boyd (1985b), the detour into the complex y-plane may be unnecessary near the points of maximum growth. However, most stability studies map the “neutral curve”, which is defined to be the boundary of the unstable region in parameter space. On the neutral curve, the critical points are on the real axis just as for (7.56). Thus, the mapping trick, despite its inability to compute the eigenfunctions, is very useful for stability calculations.

In more complicated cases, multiple critical points may require maps that loop both above and below the real y-axis as shown in Fig. 7.12b. In addition, when the interval is unbounded, it may be necessary to combine the complex detour with the stretching map that transforms an infinite interval into a finite interval so that we can apply Chebyshev polynomials, as shown in Fig. 7.12d. Boyd (1985b) gives a thorough discussion of these variants.

Gill and Sneddon (1995, 1996) have given some useful extensions of Boyd's paper. Their first article gives an analytic formula for optimizing the quadratic map: If there is only one critical latitude and its location is y = yc, then the mapping y = x + iΔ(1 − x²) is optimized by

Δ = −i { yc ± (yc² − 1)^(1/2) } / 2        (7.58)
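As a sanity check, the formula can be evaluated at a critical latitude on the real axis; the printed equation is garbled in this copy, so the exact form Δ = −i { yc ± (yc² − 1)^(1/2) } / 2 used below is our reconstruction and should be treated as an assumption. Reassuringly, at yc = 0 it gives Δ = ±1/2, the value used in Table 7.2:

```python
import numpy as np

def optimal_delta(yc):
    """Both roots of Delta = -i*(yc +- sqrt(yc**2 - 1))/2, our reading
    of the garbled printed formula (7.58)."""
    r = np.sqrt(complex(yc) ** 2 - 1.0)      # principal branch
    return -1j * (yc + r) / 2.0, -1j * (yc - r) / 2.0

d_plus, d_minus = optimal_delta(0.0)         # critical latitude at the origin
```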

When more than one critical latitude is sufficiently close to the domain to be troublesome, more complicated transformations are useful. Gill and Sneddon show that the cubic mapping

y = x − (α + i)(β0 + β1 x)(x² − 1)        (7.59)

is free of cusps and loops, that is, self-intersections, for all real values of the three parameters α, β0, β1.

When the singularity is very close to an endpoint, Gill and Sneddon (1996a) show that the quadratic map is still effective: Even with yc = 0.99, the pseudospectral error is proportional to O(1.44^(−N)). This can be improved still further by iterating the quadratic map using an analytical formula for optimizing the composite transformation. However, each iteration of the composition raises the order of the poles of the transformed solution, so the composite map is useful only for large N and very high accuracy. In multiple precision, they give good illustrations of “cross-over”: the composite mapping, which is always superior to the standard quadratic mapping in the asymptotic limit N → ∞, is worse for small N.

Gill and Sneddon (unpublished preprint) extend the analysis to semi-infinite intervals, weaving together ideas from Boyd (1987b) as well as Boyd (1985a). The good news is that they obtained quite useful analytical formulas for optimizing two different families of mappings. The theoretical estimates of the total error were not very accurate for reasonable N, but the convergence rates were gratifyingly high for their test problems.

Singular problems, and other difficult eigenproblems like the Orr-Sommerfeld equation, require much higher N than our first example. (Although not singular for real y, the Orr-Sommerfeld eigenfunctions have thin internal layers and boundary layers.) Nevertheless, with N ≥ 40 and a complex-plane mapping when needed, even nasty eigenproblems can be solved with high accuracy.


[Figure 7.12 here: four panels (a)-(d), each drawn in the complex y-plane with Re(y) and Im(y) axes.]

Figure 7.12: Four representative mappings for coping with singularities on or near the interior of the integration interval. For each case, the real and imaginary y-axes are shown as solid lines, the transformed path of integration as a dashed line, the branch cut proceeding away from the singularity as a cross-hatched line, and the singularities are x's.
(a) The solution has a branch point on the real axis. (The original and transformed integration paths intersect at the boundaries, y = a, b.)
(b) Two singularities on y ∈ [a, b] with branch cuts that must be taken in opposite directions (typical for a symmetric jet).
(c) Identical with (a) except for the presence of additional singularities near (but not on) the interval which force the transformed path to hug the real axis.
(d) Identical with (a) except that the integration interval is infinite. The mappings for unbounded intervals (Chapter 17) may be freely combined with detours in the complex y-plane.


7.12 Common Errors: Credulity, Negligence, and Coding

“Credulity” is a shorthand for believing that a calculation is accurate when it in fact is

nonsense. It is easier to be fooled than one might imagine.

In graduate school, I solved an eigenproblem with Chebyshev polynomials and beamed

happily at the smooth graphs of the eigenmodes. To be sure, the eigenvalues did seem to

jump around a bit when the resolution was varied, but the graphs of even the higher modes