IF (MOD(N,2).EQ.0) THEN

PHI(I) = TN - 1.

PHIX(I) = TNX

ELSE

PHI(I) = TN - X

PHIX(I)= TNX - 1.


ENDIF

PHIXX(I) = TNXX

100 CONTINUE

[Alternative formulas when X = ±1]

ELSE

DO 200 I=1,NBASIS

PHI(I) = 0.

N = I+1

IF (MOD(N,2).EQ.0) THEN

PHIX(I)= SIGN(1.,X)*N*N

ELSE

PHIX(I)= N*N - 1.

ENDIF

PHIXX(I) = (SIGN(1.,X))**N * N*N * (N*N-1.)/3.

200 CONTINUE

ENDIF

RETURN

END

[ADDITIONAL USER-SUPPLIED ROUTINES: D0(X), D1(X), D2(X), F(X) PLUS THE

LINEAR ALGEBRA ROUTINE LINSOLV.]
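The endpoint formulas in the X = ±1 branch above rest on the Chebyshev identities T_n'(±1) = (±1)^(n+1) n² and T_n''(±1) = (±1)^n n² (n²−1)/3. They are easy to spot-check numerically; the sketch below is illustrative only (Python rather than the listing's FORTRAN, with finite differences standing in for exact derivatives, and function names of my own choosing):

```python
# Illustrative check (not part of the original program) of the Chebyshev
# endpoint-derivative identities used in the X = +-1 branch above:
#   T_n'(+-1)  = (+-1)**(n+1) * n**2
#   T_n''(+-1) = (+-1)**n * n**2 * (n**2 - 1) / 3
def cheb(n, x):
    """Evaluate the Chebyshev polynomial T_n(x) by the three-term recurrence."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

def d1(f, x, h=1e-5):
    """Centered first difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2(f, x, h=1e-4):
    """Centered second difference."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

n = 6
for x in (1.0, -1.0):
    s = 1.0 if x > 0 else -1.0
    exact_d1 = s ** (n + 1) * n**2
    exact_d2 = s**n * n**2 * (n**2 - 1) / 3.0
    assert abs(d1(lambda t: cheb(n, t), x) - exact_d1) < 1e-3
    assert abs(d2(lambda t: cheb(n, t), x) - exact_d2) < 1e-3
print("endpoint identities verified for n =", n)
```

Since T_n is a polynomial, the centered differences may be evaluated just outside [−1, 1] without trouble.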

Chapter 7

Linear Eigenvalue Problems

“It is not the process of linearization that limits insight. It is the nature of the state that we

choose to linearize about.”

– Erik Eady

7.1 Introduction: the No-Brain Method

The default, almost-no-thinking-required method for solving linear eigenvalue problems is

summarized in Table 7.1. There are two differences from a linear boundary value problem:

(i) the matrix equation is solved by a different linear algebra library routine (eigensolve

instead of matrixsolve) and (ii) comparisons for two different N must be mode-by-mode.

There would seem to be no need for a separate chapter on this topic.

Indeed, often eigenproblems are very simple. Unfortunately, there are a variety of com-

plications that can arise. This chapter might be more appropriately titled: “Damage Con-

trol for Eigenproblems”. No need to read it when solving easy, standard Sturm-Liouville

eigenproblems, but as valuable as a life vest and a waterproof radio after your code has

been torpedoed by one of the difficulties explained below.

Some of these traps, mines and torpedoes are the following:

1. The QR/QZ algorithm, which costs O(10N³) for an N × N matrix, may be too expensive

for multidimensional problems.

Table 7.1: Default Method for Eigenproblems

Step One: Apply a spectral method to convert the differential or integral equation

to a matrix problem, just as for a BVP.

Step Two: Call a matrix eigensolver from a linear algebra library.

(The QR/QZ algorithm is a robust black box that finds all eigenvalues

and eigenfunctions without user input except the matrix.)

Step Three: Repeat for different N. Trust only those eigenvalues and eigenfunctions

which are the same for both resolutions.


2. Instead of a discrete, countable infinity of eigenmodes, there may be a continuous

spectrum, or a few discrete modes plus a continuous spectrum.

3. It may be tedious to compare eigenvalues for different N to reject those which are

numerically inaccurate.

4. Generalized eigenproblems often have a couple of very large eigenvalues which are

physically spurious in the sense of resulting from inconsistent application of boundary

conditions.

5. In hydrodynamic stability theory, weakly unstable or neutral modes may be singular

or very nearly singular on the interior of the computational domain.

6. The eigenparameter may occur as a polynomial in an otherwise linear problem.

7. High order differential equations seem to be quite common in eigenproblems and

may have poor numerical conditioning, i.e., large roundoff errors.

In the rest of this chapter, we explain how to survive these shipwrecks. First, though, we

begin with some basic definitions and comments on the “no-brain” strategy of Table 7.1.

7.2 Definitions and the Strengths and Limitations of the

QR/QZ Algorithm

Definition 17 (Eigenvalue Problem) A LINEAR EIGENVALUE problem is an equation of the form

Lu = λ M u    (7.1)

where L and M are linear differential or integral operators or are square matrices and λ is a number

called the EIGENVALUE. The boundary conditions, if any, are homogeneous so that Eq. (7.1) has

solutions only when λ is equal to one of a set of discrete values or lies on a continuous range. When M is

the identity operator so that the right side of (7.1) simplifies to λu, the problem is said to be a regular

eigenvalue problem; otherwise, it is a “generalized” eigenproblem.
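To make the regular/generalized distinction concrete, a tiny matrix case helps. The sketch below (illustrative only; the 2 × 2 matrices are made up for this example) finds the eigenvalues of Lu = λMu directly from the characteristic equation det(L − λM) = 0:

```python
# Illustrative 2x2 generalized eigenproblem L u = lambda M u (matrices made up
# for this example). The eigenvalues are the roots of det(L - lambda*M) = 0.
import math

L = [[2.0, 1.0],
     [1.0, 2.0]]
M = [[1.0, 0.0],
     [0.0, 2.0]]   # M is not the identity, so this is a *generalized* problem

# det(L - lam*M) = (2 - lam)*(2 - 2*lam) - 1*1 = 2*lam**2 - 6*lam + 3
a, b, c = 2.0, -6.0, 3.0
disc = math.sqrt(b * b - 4.0 * a * c)
lams = sorted([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])
print(lams)   # the two generalized eigenvalues, (3 -+ sqrt(3))/2
```

Had M been the identity, the same quadratic would have been the ordinary characteristic polynomial of L.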

Definition 18 (Eigenvectors/Eigenfunctions) The solutions u to a linear eigenvalue problem

are called “EIGENVECTORS” (if the operators L and M are matrices) or “EIGENFUNCTIONS”

(if L and M are differential or integral operators). The eigenvectors are labelled by an index “j”,

the “MODE NUMBER”, which is usually chosen so that the corresponding discrete eigenvalues

are ordered from smallest to largest with the lowest j (usually either 0 or 1) corresponding to the

smallest eigenvalue. When the eigenfunctions exist for a continuous interval in λ (not possible

for matrix problems), the eigenfunctions are labelled by λ and are said to be “CONTINUOUS”

eigenfunctions.

A simple illustration of a differential eigenproblem, later dubbed “Example One”, is

u_xx + λ u = 0,    u(−1) = u(1) = 0    (7.2)

which has the exact eigenmodes

u_j(x) = cos(j π x / 2),  j = positive odd integer

u_j(x) = sin(j π x / 2),  j = positive even integer    (7.3)

with eigenvalues

λ_j = j² π² / 4,    j = 1, 2, 3, . . .    (7.4)
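These formulas are easy to spot-check numerically. The following sketch (illustrative only, not from the text) verifies that the lowest mode u_1(x) = cos(πx/2) satisfies both the boundary conditions and the differential equation with λ_1 = π²/4:

```python
# Spot-check (illustrative) of Example One: u_1(x) = cos(pi*x/2) should satisfy
# u_xx + lambda_1 * u = 0 with lambda_1 = pi**2/4 and u(-1) = u(1) = 0.
import math

lam1 = math.pi**2 / 4.0
u = lambda x: math.cos(math.pi * x / 2.0)

# boundary conditions hold to rounding error
assert abs(u(-1.0)) < 1e-12 and abs(u(1.0)) < 1e-12

# a centered second difference approximates u_xx at an interior point
x, h = 0.3, 1e-4
uxx = (u(x - h) - 2.0 * u(x) + u(x + h)) / h**2
residual = uxx + lam1 * u(x)
print(abs(residual))   # tiny (well below 1e-6): the ODE holds to discretization error
```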

When L and M are matrices, there is good and widely available software for solving

eigenproblems. Indeed, such commands are built into languages such as Matlab, Maple

and Mathematica. The Matlab command to find both eigenvectors and eigenvalues is

simply: [Eigenvectors,Eigenvalues]=eig(L,M). FORTRAN subroutines to do the same

may be found in the public domain EISPACK and LINPACK libraries as well as in proprietary

may be found in the public domain EISPACK and LINPACK libraries as well in proprietary

libraries like NAG and IMSL. These codes are very robust, and rarely fail except by being

too expensive.

Library software for the algebraic eigenproblem usually employs an algorithm called

the QZ method (for the generalized eigenproblem) or its cousin the QR scheme (for the

regular eigenproblem). The good news is that (i) these algorithms require as input nothing

but the square matrices that define the eigenproblem and (ii) they reliably compute all the

matrix eigenvalues and eigenfunctions. The bad news is that QR/QZ is slow: for an N × N

matrix, the cost is O(10N³) operations. This is an order of magnitude slower than Gaussian

elimination (LU factorization) of a dense matrix of the same size. Unlike almost all other

matrix algorithms, QR/QZ schemes cannot exploit matrix sparsity because zero elements

are all filled in as the algorithm proceeds.

Gary and Helgason (1970) pointed out that this implies: Hurrah for high order dis-

cretizations! The reason is that high order methods make it possible to resolve a given

number of eigenmodes with much smaller N than with lower order methods. Replacing

a 100-point second order computation by an equally accurate tenth order matrix which is

only 40 × 40 reduces the cost of the QR algorithm by a factor of 15!
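The arithmetic behind that factor is simply the cube of the size ratio under the O(10N³) cost model; a one-line check (illustrative):

```python
# The QR cost model O(10*N**3) from the text implies that shrinking the matrix
# from 100x100 to 40x40 speeds up the eigensolve by (100/40)**3:
cost = lambda N: 10 * N**3
ratio = cost(100) / cost(40)
print(ratio)   # 15.625, i.e. the "factor of 15" quoted in the text
```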

Orszag (1971b) noted that Chebyshev pseudospectral methods are the ultimate in high

order, and reduce N still further from the values needed by the eighth and tenth order

¬nite difference methods of Gary and Helgason.

It is ironic: the QR/QZ algorithm has no direct connection with spectral methods. Nev-

ertheless, when this is the chosen matrix eigensolver, the QR algorithm cries out for a spec-

tral discretization.

On modern workstations, almost all one-dimensional and some two-dimensional and

three-dimensional eigenproblems can be solved efficiently by the pseudospectral/QZ

combination. Unfortunately, when one needs to resolve multidimensional modes with lots of

fine structure, or worse still, needs to compute the eigenvalues throughout a multidimensional

parameter space, the QR algorithm may be unaffordable.

In later sections, we therefore describe two representative alternatives: the power and

inverse power methods. Both are “local” methods in the sense that they compute only one

eigenvalue at a time. In contrast, the QR/QZ method is “global” because it computes all N

matrix eigenvalues without requiring any input from the user except the matrices themselves.

With local methods, it is easy to miss eigenvalues; we offer some cautionary tales

below. However, the cost per point in parameter space is usually an order of magnitude

less than the QR scheme.
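As a preview of what a “local” method looks like, here is an illustrative pure-Python sketch (not the book's code) of inverse power iteration: repeatedly solving A y = x and normalizing drives x toward the eigenvector of the smallest-magnitude eigenvalue. The matrix used is the second-order finite difference approximation to Example One's operator on three interior grid points, so its smallest eigenvalue crudely approximates λ_1 = π²/4.

```python
# Illustrative inverse power iteration (pure Python; not the book's code).
# A is the 3-point second-order finite difference matrix for -u_xx on [-1,1]
# with u(+-1) = 0, whose smallest eigenvalue approximates lambda_1 = pi**2/4.
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(M[k][c] * x[c] for c in range(k + 1, n))
        x[k] = (M[k][n] - s) / M[k][k]
    return x

h = 0.5                                   # grid spacing: 3 interior points
A = [[8.0, -4.0, 0.0],
     [-4.0, 8.0, -4.0],
     [0.0, -4.0, 8.0]]                    # (1/h**2) * tridiag(-1, 2, -1)

x = [1.0, 1.0, 1.0]
for _ in range(50):
    y = solve(A, x)                       # one inverse-power step: y = A^-1 x
    nrm = math.sqrt(sum(v * v for v in y))
    x = [v / nrm for v in y]              # normalize to prevent overflow

# Rayleigh quotient of the converged vector gives the smallest eigenvalue
Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
lam = sum(Ax[i] * x[i] for i in range(3))
print(lam)   # 8 - 4*sqrt(2) ~ 2.343; the exact ODE eigenvalue is pi**2/4 ~ 2.467
```

Shifting the matrix, i.e. solving with A − σI, targets the eigenvalue closest to a guess σ; that is the shifted variant discussed in the later sections.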

7.3 Numerical Examples and the Eigenvalue Rule-of-Thumb

Fig. 7.1 illustrates the errors in computing eigenvalues of Example One ( Eq. 7.2). The

error tolerance is of course both user-dependent and problem-dependent. For the sake of

discussion, we shall arbitrarily define a “good” eigenvalue as one whose absolute error is

less than or equal to 0.01, as marked by the horizontal dividing line in the graph.

By this criterion, the 16-point discretization returns seven “good” eigenvalues and nine

“bad” eigenvalues. Fig. 7.2 compares a “good” (top) and “bad” eigenfunction as computed


[Figure 7.1 appears here: a semilog plot of the absolute value of the error in the j-th eigenmode versus mode number j (0 to 15), with the 0.01 error level marked by a horizontal line.]

Figure 7.1: Example One: Absolute errors in the eigenvalues as given by a 16-point Cheby-

shev pseudospectral discretization. The horizontal dividing line separates the “good”

eigenvalues from the “bad” where the “good” eigenvalues are defined (arbitrarily) to be

those whose absolute error is 0.01 or less.

[Figure 7.2 appears here: two panels for N=16, plotted versus x on [−1, 1]; upper panel: 3rd mode; lower panel: 15th mode. Circles: exact; solid: numerical.]

Figure 7.2: Example One: 16-point Chebyshev pseudospectral discretization. Exact (cir-

cles) and numerical (solid) approximations to the eigenmodes. Upper panel: Third Mode.

Lower panel: Fifteenth mode.
