
usual Chebyshev-Lobatto grid:
π(x) ≡ ∏_{i=2}^{N−1} (x − x_i)    (7.37)

After the usual spectral discretization using these basis functions to give the generalized
matrix eigenproblem A u = λ B u, Huang and Sloan multiply both matrices by the diagonal
matrix whose elements are

D_{jj} = (1 − x_j^2)^2,  j = 2, ..., N − 1    (7.38)

Like Heinrichs' improvement for the non-cardinal basis, Huang and Sloan's method
reduces the condition number to its square root. For a differential equation of order p, the
condition number is reduced from O(N^{2p}) to O(N^p). Thus, it is possible to solve differential
equations of quite high order with large N.

Although we have chosen to discuss condition number in the midst of a chapter on
eigenproblems because condition number is measured as a ratio of eigenvalues, the Heinrichs
and Huang and Sloan basis sets can be equally well employed for high order BOUNDARY
VALUE problems.
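To make the recipe concrete, here is a minimal sketch of the row scaling in Python/NumPy. The function name, the grid convention x_j = cos(πj/N), and the assumption that A and B have already been assembled on the interior grid points are our illustrative choices, not part of Huang and Sloan's paper:

```python
import numpy as np

def huang_sloan_scale(A, B, N):
    """Row-scale the generalized eigenproblem A u = lambda B u by the
    diagonal matrix D_jj = (1 - x_j^2)^2 evaluated at the interior
    Chebyshev-Lobatto points x_j = cos(pi*j/N), j = 1, ..., N-1.
    Multiplying both matrices by the same nonsingular diagonal matrix
    leaves the eigenvalues unchanged while improving the conditioning."""
    x = np.cos(np.pi*np.arange(1, N)/N)   # interior grid points
    d = (1.0 - x**2)**2                   # diagonal of D
    return d[:, None]*A, d[:, None]*B     # cheap row scaling, no matmul needed

# The scaled pair can then be passed to any generalized eigensolver,
# e.g. scipy.linalg.eig(*huang_sloan_scale(A, B, N)).
```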



7.8 Alternatives for Costly Problems, I: The Power Method
When the QR/QZ algorithm is too expensive, one needs iteration schemes which compute
only a single eigenvalue at a time. The simplest is the “power method”, which can be
applied both to the matrix discretization of an eigenproblem and also to the original time-
dependent equations from which the eigenproblem was derived. A couple of examples
will make the idea clearer.
First, suppose the goal is to calculate hydrodynamic instabilities. The flow is split into
a user-specified "basic state" plus a perturbation, which is assumed to be initially small
compared to the basic state but is otherwise unrestricted. The flow is unstable if the
perturbation amplifies in time.
A useful illustration is the barotropic vorticity equation in a "beta-plane" approximation
on a rotating planet:

ψ_{xxt} + ψ_{yyt} + ψ_x (ψ_{xxy} + ψ_{yyy}) − ψ_y (ψ_{xxx} + ψ_{xyy}) + β ψ_x = 0    (7.39)

where ψ(x, y, t) is the streamfunction for two-dimensional flow and β is the latitudinal
derivative of the Coriolis parameter. For simplicity, we shall consider only a basic state
which is independent of x, that is, denoting the basic state by upper case letters,

Ψ(y) = −∫^y U(z) dz    (7.40)

where U(y) is the "mean flow", a jet moving parallel to the x-axis.
The time-dependent variant of the power method is to integrate the equations of motion
from an arbitrary initial condition until the fastest-growing mode completely dominates the
solution, at which point the solution is the desired eigenmode and its growth rate and
spatial translation rate are the imaginary and real parts of the complex-valued phase speed
c, which is the eigenvalue.
There are two variants. The first is to linearize the equations of motion about the basic
state:

ψ = ψ′ + Ψ(y) = ψ′ − ∫^y U(z) dz,  ψ′ ∼ O(ε),  ε ≪ 1    (7.41)

Since the perturbation is assumed to be very small, we can always expand the equations
of motion in a power series in the amplitude ε of the perturbation; the linearized equation
of motion is simply the O(ε) term in this expansion. The linearized barotropic vorticity
equation, for example, is

ψ′_{xxt} + ψ′_{yyt} + (β − U_{yy}) ψ′_x + U(y)(ψ′_{xxx} + ψ′_{xyy}) = 0    (7.42)

where the prime on ψ denotes the perturbative part of the streamfunction.
In the linearized equation, feedback from the perturbation to the basic state, as happens
for the fully nonlinear equations of motion, is suppressed. This is a detriment because it
restricts the model to the early stages of the instability when the perturbation is very small.
However, it is also a virtue: the linearized stability problem is completely determined by the
specification of the basic state alone.

The second variant of the time-dependent power method is to solve the full, nonlinear
equations of motion by starting from a perturbation of amplitude ε and integrating until
the amplitude is O(a) where a ≫ ε, but a is still small compared to the basic state. The
result will be the same as that of solving the linearized problem (on the same time interval)
to within an absolute error of O(a^2) [relative error O(a)] because this is the leading order of
the terms neglected in the linearization. One virtue of this method is that it is unnecessary
to create a second computer program, with slightly different (i. e., linearized) equations of
motion, to explore linearized stability. Another is that one can easily extend the integration
further in time to explore how nonlinear feedback alters the growth and structure of the
amplifying disturbances. The disadvantage is that if there are unstable modes of similar
growth rates, one may be forced to stop the integration (to keep a ≪ 1) before the fastest
growing mode has become large compared to all other modes.
In either variant, the growth rate and amplitude are estimated by fitting the approximation

ψ(x, y, t_2) − ψ(x, y, t_1) ≈ Ξ(y) exp(ikx) {exp(−ikct_2) − exp(−ikct_1)} + complex conjugate    (7.43)
where t_1 and t_2 are two large but otherwise arbitrary times and Ξ(y) gives the complex-valued
latitudinal structure of the instability while c is the complex phase speed and k the
x-wavenumber, which must also be determined from the perturbation for large time.
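In practice the fit collapses to a ratio of complex Fourier amplitudes at the dominant wavenumber. The sketch below is our own illustration (not code from the text); it assumes the two snapshots are sampled on a uniform periodic x grid along a single line y = y_0:

```python
import numpy as np

def estimate_phase_speed(psi1, psi2, t1, t2, dx):
    """Estimate the complex phase speed c from two late-time snapshots
    psi1 = psi(x, y0, t1) and psi2 = psi(x, y0, t2), assuming a single
    mode ~ exp(ik(x - ct)) already dominates and that |k Re(c)|(t2 - t1)
    is small enough to avoid the branch ambiguity of the complex log."""
    spec2 = np.fft.rfft(psi2)
    j = np.argmax(np.abs(spec2[1:])) + 1       # dominant wavenumber bin (skip the mean)
    k = 2.0*np.pi*j/(len(psi2)*dx)             # corresponding x-wavenumber
    a1, a2 = np.fft.rfft(psi1)[j], spec2[j]    # complex amplitudes ~ exp(-ikct)
    # a2/a1 = exp(-ikc(t2 - t1))  =>  c = i*log(a2/a1)/(k*(t2 - t1))
    return 1j*np.log(a2/a1)/(k*(t2 - t1))
```

The growth rate is then k Im(c) and the translation rate Re(c), matching the identification of the eigenvalue above.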
Because the fitting is a little tricky, and it sometimes requires a huge number of time
steps before the fastest-growing mode dominates its slower-growing brethren, it is often
more convenient to convert the time-dependent equations into a matrix eigenproblem. The
justification for this step is that as long as the linearized equations have coefficients which
are independent of time, all discretizations of the spatial dependence will collapse the
linearized PDE into a system of ordinary differential equations in time of the form du/dt = Au,
where A is a square matrix whose elements are independent of time. The usual theory of
constant coefficient ODEs, found in most undergraduate differential equation texts, then
asserts that the general solution is a superposition of the eigenmodes of A, each oscillating
as an exponential of time with a complex frequency which is the corresponding eigenvalue
of A, assuming these are all distinct. The matrix formulation of linearized stability theory
is simply the computation of these eigenvectors and eigenvalues of A.
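As a quick numerical check of this superposition theorem (a made-up 5 × 5 matrix standing in for a spatial discretization, distinct eigenvalues assumed), one can verify that the eigenmode expansion reproduces the matrix exponential:

```python
import numpy as np
from scipy.linalg import eig, expm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))       # stand-in for a discretized operator
lam, E = eig(A)                       # columns of E are the eigenvectors e_j
u0 = rng.standard_normal(5)
a = np.linalg.solve(E, u0)            # eigencoefficients a_j of the initial condition
t = 0.7
u_modes = E @ (a*np.exp(lam*t))       # superposition: sum_j a_j exp(lambda_j t) e_j
u_exact = expm(A*t) @ u0              # direct solution of du/dt = A u at time t
assert np.allclose(u_modes, u_exact)  # the two answers agree
```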
When the coefficients of the linearized PDE do not vary with all the spatial coordinates,
then one can separate variables still further. For the barotropic vorticity equation, one can
expand the solution in a Fourier series in x; if linearized with respect to a mean current
which varies only with y, the x-wavenumbers are uncoupled. A large two-dimensional
eigenvalue problem therefore collapses into a set of differential eigenproblems in y only,
one for each wavenumber in x. Denoting the wavenumber by k, the eigenproblem is
ψ_{yy} + [ (β − U_{yy}) / (U − c) − k^2 ] ψ = 0    (7.44)
(In a minor abuse of notation that is almost ubiquitous in the literature, ψ(y) is used for
the eigenfunction even though the same symbol has already been employed for the total
time-dependent streamfunction.)
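For readers who want to experiment, multiplying (7.44) through by (U − c) and collecting the terms proportional to c gives a generalized matrix problem A ψ = c B ψ with B = D_2 − k^2 I, where D_2 discretizes the second derivative. Everything in the sketch below (the Bickley jet profile, the parameter values, and second-order finite differences standing in for a spectral discretization) is our illustrative choice:

```python
import numpy as np
from scipy.linalg import eig

N, k, beta = 200, 1.0, 0.2                   # hypothetical parameters
y = np.linspace(-1.0, 1.0, N + 2)[1:-1]      # interior points; psi = 0 at y = +-1
h = y[1] - y[0]
D2 = (np.diag(np.ones(N - 1), -1) - 2.0*np.eye(N)
      + np.diag(np.ones(N - 1), 1))/h**2     # second-derivative matrix
U = 1.0/np.cosh(3.0*y)**2                    # Bickley jet U = sech^2(3y)
Uyy = 18.0*U*(2.0 - 3.0*U)                   # analytic U'' for this profile
B = D2 - k**2*np.eye(N)                      # discretizes psi_yy - k^2 psi
A = np.diag(U) @ B + np.diag(beta - Uyy)     # U(psi_yy - k^2 psi) + (beta - U_yy) psi
c, modes = eig(A, B)                         # generalized eigenvalues = phase speeds c
print("largest growth rate k*Im(c):", k*np.max(c.imag))
```

Unstable modes are those with Im(c) > 0; the QZ algorithm hidden inside eig(A, B) returns all of them at once, which is exactly the expensive step the power method is designed to avoid.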
The power method can be applied directly to the matrix form of an eigenvalue problem
by first discretizing the differential operator and then multiplying through by the appropriate
inverse matrix to convert the problem into a standard (as opposed to a generalized)
matrix eigenproblem:

A u = λ u    (7.45)

Note that if we assume that A has a complete set of eigenvectors e_j and expand a vector
u_0:

u_0 = Σ_{j=1}^{N} a_j e_j    (7.46)


then the result of multiplying u_0 by the square matrix A will be a vector whose eigencoefficients
will be the same except that a_j is multiplied by λ_j, the eigenvalue associated with
e_j:

A u_0 = Σ_{j=1}^{N} λ_j a_j e_j    (7.47)


If we iterate by repeatedly multiplying by powers of A, it follows that as the iteration
number k increases, the eigencoefficients will be multiplied by λ_j^k and therefore the
eigenfunction associated with the eigenvalue of largest magnitude will more and more
dominate A^k u_0. As it does so, the ratio ||Au||/||u|| will tend more and more towards
the magnitude of the largest eigenvalue, where || · || denotes any reasonable matrix/vector
norm.
This suggests the following algorithm for computing the eigenfunction and eigenvalue
of the mode whose eigenvalue is largest in absolute value. First, choose an arbitrary starting
vector u_0 and then divide it by its norm so as to rescale it to unit norm. (The algorithm will
fail if u_0 is orthogonal to the target eigenmode, but if u_0 is a column of numbers from a
random number generator, the probability of such failure is negligibly small.) Then, repeat
the following loop over the iteration number k until the eigenfunction and eigenvalue have
converged to within the user-chosen tolerance:

u^{k+1} = A u^k,  k = 0, 1, ...
λ_{k+1} = ||u^{k+1}||
u^{k+1} = u^{k+1} / ||u^{k+1}||    (7.48)

This is simply the matrix form of the power method: iterate until the fastest-growing mode
dominates. The third line of the loop is meant in the sense of a computational assignment
statement rather than mathematical equality: u^{k+1} is replaced by the same vector divided
by its norm to rescale it to unit norm. This renormalization avoids overflow problems.
When λ is complex-valued, the second line must be replaced by the average of the element-by-element
division of the new eigenvector by the old:
λ_{k+1} = (1/N) Σ_{j=1}^{N} u_j^{k+1} / u_j^k    (7.49)
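A compact realization of (7.48)-(7.49) might look like the following sketch; the function name, the convergence test, and the random starting vector are our own choices:

```python
import numpy as np

def power_method(A, tol=1e-10, maxit=5000, seed=0):
    """Power iteration, Eqs. (7.48)-(7.49): repeatedly multiply by A and
    renormalize.  The eigenvalue estimate is the mean of the elementwise
    ratios, which remains correct when the dominant eigenvalue is complex."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(A.shape[0]) + 0j   # random start: negligible chance of
    u /= np.linalg.norm(u)                     # being orthogonal to the target mode
    lam_old = np.inf
    for _ in range(maxit):
        v = A @ u                              # u^{k+1} = A u^k
        lam = np.mean(v/u)                     # Eq. (7.49)
        v /= np.linalg.norm(v)                 # rescale to unit norm: avoids overflow
        if abs(lam - lam_old) < tol*abs(lam):  # converged to the requested tolerance
            break
        u, lam_old = v, lam
    return lam, v

# e.g. power_method(np.array([[2.0, 1.0], [1.0, 3.0]]))[0]  ->  about 3.618
```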

The matrix power method can be applied to any matrix, whether connected with stability
problems or differential eigenproblems or not. It is very fast and cheap per iteration
because the most costly step is merely a matrix-vector multiply, which is a highly vectorizable
operation. However, the power method has the disadvantage that it can find only the
mode whose eigenvalue is largest in absolute value, which may not be the only mode of interest.
In stability problems, it is quite common to have two unstable modes whose growth rates
switch dominance at some point in parameter space. In such transition regions where two
modes have the same or almost the same growth rates, the power method will fail.
In the next section, we therefore describe another algorithm which fixes many of the
defects of the power method, albeit at a much higher cost per iteration. First, though, we


[Figure 7.10 appears here: two panels plotted against x on [−30, 30]; the upper panel shows
the first guess, the lower the solution at t = 20 with the dispersing transient and the soliton
labeled.]
Figure 7.10: Illustration of the power method applied to the KdV equation. The initialization
("first guess") is u = 4 exp(−x^2/2). At t = 20, this has split into a dispersing transient
(which moves left) and a single soliton (which moves rightward).


must note that the power method is not limited only to instability problems or only to
linear eigenvalue problems.
For example, the power method is a good way to compute solitary waves of the Korteweg-deVries
(KdV) equation:

u_t + u u_x + u_{xxx} = 0    (7.50)

The solitary waves are the solutions to the nonlinear eigenvalue problem

(u − c) u_X + u_{XXX} = 0    (7.51)

where X ≡ x − ct is the coordinate in a frame of reference which is travelling with the
wave and c is the phase speed. By the power method, the ODE nonlinear eigenproblem
can be bypassed and solitons computed directly from the time-dependent equation: integrate
forward in time from an arbitrary initial condition until the dominant eigenmode
has separated itself from the transients.
In stability problems, the separation is by amplitude: the dominant mode
is fastest-growing and therefore eventually becomes larger than the other modes every-
where in space. For the KdV equation, the separation is spatial: the dispersing transients
travel leftward (because the group velocity for small amplitude waves of all wavenumbers
is negative) whereas the velocities of all solitary waves are positive. Because the amplitude
of a solitary wave decreases exponentially fast with distance from the center of the soliton,
the overlap between a given solitary wave and the rest of the solution decreases exponen-
tially with time. Thus, even though there is no instability and the eigenvalue problem is
nonlinear, the power method converges geometrically for the KdV equation.
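A bare-bones realization of this time-dependent power method for the experiment of Fig. 7.10 is sketched below. The grid size, domain length, and the use of an off-the-shelf adaptive integrator are our choices; the stiff u_{xxx} term makes an integrating-factor or exponential time-differencing scheme far more efficient in production code:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, L = 256, 60.0                          # hypothetical periodic grid, x in [-30, 30)
x = L*np.arange(N)/N - L/2
k = 2.0*np.pi*np.fft.fftfreq(N, d=L/N)    # Fourier wavenumbers

def kdv_rhs(t, u):                        # u_t = -u*u_x - u_xxx, pseudospectral in x
    uh = np.fft.fft(u)
    ux = np.fft.ifft(1j*k*uh).real
    uxxx = np.fft.ifft(-1j*k**3*uh).real
    return -u*ux - uxxx

u0 = 4.0*np.exp(-x**2/2)                  # the "first guess" of Fig. 7.10
sol = solve_ivp(kdv_rhs, (0.0, 20.0), u0, rtol=1e-8, atol=1e-10)
u_final = sol.y[:, -1]                    # by t = 20 the soliton has outrun the transients
c_est = u_final.max()/3.0                 # soliton of (7.50): u = 3c sech^2(sqrt(c)*X/2)
```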

7.9 Alternatives for Costly Problems, II: Inverse Power Method
The matrix variant of the power method has the disadvantage that it can find only the mode
of largest eigenvalue. The inverse power method, also sometimes called simply "inverse
iteration", lacks this limitation. The algorithm is to apply the ordinary power method to
the matrix

M ≡ (A − ΛI)^{−1}    (7.52)

where I is the identity matrix and Λ is a user-specified constant. The eigenvalues of M are

µ_j = 1 / (λ_j − Λ)    (7.53)

where the λ_j are the eigenvalues of the original matrix A, so the largest eigenvalue of the
shifted-and-inverted matrix M is 1/(λ_j − Λ) for whichever eigenvalue λ_j of A is closest to
the shift Λ. It follows that the inverse power method will converge geometrically to any
eigenvalue of the matrix A, provided we have a sufficiently good first guess (and set Λ
equal to it).
As usual, the inverse is not explicitly computed. Instead, one iterates

(A − Λ_k I) u^{k+1} = u^k,  k = 0, 1, ...    (7.54)
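A sketch of this iteration (our own, with hypothetical names) is given below. The key structural point is that the matrix is factored once and the factorization reused, so each iteration costs only a pair of triangular solves; the eigenvalue of A is recovered from the converged µ by inverting (7.53):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power(A, shift, tol=1e-12, maxit=200, seed=0):
    """Inverse iteration, Eq. (7.54): solve (A - shift*I) u^{k+1} = u^k
    repeatedly.  The LU factorization is computed once and reused; if the
    shift were updated every iteration (as the subscript k in (7.54)
    allows), the matrix would have to be refactored after each update."""
    n = A.shape[0]
    lu, piv = lu_factor((A - shift*np.eye(n)).astype(complex))
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n).astype(complex)
    u /= np.linalg.norm(u)
    mu_old = np.inf
    for _ in range(maxit):
        v = lu_solve((lu, piv), u)   # one power-method step on M = (A - shift*I)^{-1}
        mu = np.mean(v/u)            # dominant eigenvalue of M, as in Eq. (7.49)
        u = v/np.linalg.norm(v)      # renormalize to avoid overflow
        if abs(mu - mu_old) < tol*abs(mu):
            break
        mu_old = mu
    return shift + 1.0/mu, u         # invert Eq. (7.53): lambda = shift + 1/mu

# e.g. inverse_power(np.array([[2.0, 1.0], [1.0, 3.0]]), 1.5) -> (about 1.382, ...)
```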
