
the number of parameters by two.
For some differential equations, solutions exist only for discrete values of the eigenparameter. An example is van der Pol's equation:

u_tt + u = ε u_t (1 − u²)    [Van der Pol's Eq.]    (C.36)
If we replaced the term in parentheses by a constant, then (C.36) would describe a harmonic
oscillator which was damped or excited depending on the sign of the constant. Because this
“constant” in (C.36) is actually a function of u(x), we see that the solution of the nonlinear
equation is self-excited when u is small and self-damping when u is large. When (C.36) is given an arbitrary initial condition, the solution eventually settles down to a nonlinear limit
cycle in which the damping and self-excitation balance out over the course of one period of
the cycle to create a permanent, self-sustaining oscillation.
We can solve for the period and structure of the limit cycle as a nonlinear eigenvalue
problem by adopting the following device. Let ω denote the (unknown) frequency of the
limit cycle. If we make the change of variable
x ≡ ωt, (C.37)

then the limit cycle will automatically be periodic with a period of 2π and we can compute
a solution by assuming a Fourier series in x.¹ The problem is then

ω² u_xx + u = ε ω u_x (1 − u²)    (C.38)
In contrast to the undamped Korteweg-deVries equation, it is not possible to pick an
arbitrary value of ω and find a solution. The reason is that the tug-of-war between the damping and the self-excitation normally has only a single solution for a given value of the parameter ε. It is not possible to have arbitrarily large amplitudes for u, for example, because the equation would then be damped over almost the whole cycle. If we started the calculation with a very large value of u(t = 0), we would find that u(t) would quickly
decay into a limit cycle of moderate amplitude and a unique frequency ω. Consequently,
just as in a linear eigenvalue problem, we must vary both u(x) and the eigenparameter ω
until we have obtained a unique solution.
Van der Pol's equation happens to also have the property of translational invariance. In this respect, it is like the Korteweg-deVries equation: its solution is not unique, but rather is a continuous family, and the linearized differential equation of the Newton-Kantorovich method has an eigenfunction with zero eigenvalue. Thus, the matrix which is the pseudospectral discretization of the linearized ODE will be almost singular [exactly singular in the limit N → ∞] and the iteration will diverge unless we add additional constraints or conditions
to restrict the solution to a unique member of the continuous family of solutions.
Translational invariance may be destroyed by imposing a phase condition such as
u_x(x = 0) = 0    [Phase Condition]    (C.39)
which demands that the origin be either a maximum or a minimum of u(x). For the KdV
solutions, it is possible to impose (C.39) implicitly by restricting the basis set to functions
which are symmetric about x = 0.
However, because of the presence of both even and odd order derivatives in van der Pol's equation, its solution is not symmetric about any point and we cannot impose a phase
condition by restricting the basis set; we have to use both sines and cosines. To obtain a
unique solution, we must therefore impose the phase condition (C.39) explicitly.
Since a solution does not exist for arbitrary ω, but only for one particular frequency, we must retain ω as an additional unknown. Thus, if we apply the pseudospectral method with N coefficients and N interpolation points, we must solve an (N + 1) × (N + 1) matrix equation at each iteration. The (N + 1)-st row of the matrix is the spectral form of (C.39); the (N + 1)-st unknown is δω, the correction to the frequency at the i-th iteration.
The Newton-Kantorovich iteration is a modest generalization of that used for earlier
examples. We make the simultaneous replacements
’’ (C.40a)
u(x) u(x) + (x)
’’ (C.40b)
ω ω + (δ ω)
in (C.38), collect powers of , and then the iteration, as for the other examples, consists of
equating the ¬rst order terms (divided by ) to the negative of the zeroth order terms. In
particular, if we write (C.38) in the symbolic form
u_xx = F(x, u, u_x, ω)    (C.41)

then the linearized differential equation is

△_xx − F_{u_x} △_x − F_u △ − F_ω (δω) = −[ u_xx^{(i)} − F ]    (C.42)

¹ Note that time-periodic solutions, usually with the frequency as an eigenparameter, are an exception to the usual rule that spectral methods are applied only to spatial coordinates.

where F and all its derivatives are evaluated with u = u^{(i)} and ω = ω^{(i)}. The only extra term is the derivative with respect to ω. We expand △(x) as a Fourier series with a total of N terms, demand that (C.42) is satisfied at N interpolation points, add the spectral form of the phase condition, and then solve the linear matrix equation for the column vector whose first N elements are the N spectral coefficients of △(x) and whose (N + 1)-st element is (δω). We then add the corrections to u^{(i)} and ω^{(i)} and repeat the iteration until it has converged.
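As a concrete illustration, the following is a minimal Python sketch (not from the text) of this (N + 1)-unknown Newton iteration for the van der Pol limit cycle. It collocates (C.38) at N Fourier grid points, working with grid values of u rather than with the Fourier coefficients (the two representations are equivalent), appends the phase condition (C.39) as the (N + 1)-st equation, and treats ω as the (N + 1)-st unknown. Purely for brevity the Jacobian is approximated by finite differences rather than assembled from the analytic linearization (C.42); the values ε = 1 and N = 64 are illustrative assumptions.

```python
import numpy as np

eps = 1.0                          # van der Pol parameter epsilon in (C.36); illustrative value
N = 64                             # number of Fourier collocation points (assumed even)
x = 2 * np.pi * np.arange(N) / N   # collocation grid on [0, 2*pi)
k = np.fft.rfftfreq(N, d=1.0 / N)  # integer wavenumbers 0, 1, ..., N/2

def deriv(u, order=1):
    """Spectral derivative of a real, 2*pi-periodic grid function."""
    return np.fft.irfft((1j * k) ** order * np.fft.rfft(u), n=N)

def residual(v):
    """N collocation residuals of (C.38) plus the phase condition (C.39)."""
    u, w = v[:N], v[N]
    ux, uxx = deriv(u, 1), deriv(u, 2)
    r = np.empty(N + 1)
    r[:N] = w**2 * uxx + u - eps * w * ux * (1.0 - u**2)
    r[N] = ux[0]                   # u_x(x = 0) = 0 removes the translational degeneracy
    return r

# First guess: the weakly nonlinear limit cycle u ~ 2 cos(x) with frequency omega ~ 1
v = np.concatenate([2.0 * np.cos(x), [1.0]])

for it in range(40):
    r = residual(v)
    # (N+1) x (N+1) Jacobian by one-sided finite differences (the analytic form is (C.42))
    J = np.empty((N + 1, N + 1))
    h = 1.0e-7
    for j in range(N + 1):
        vp = v.copy()
        vp[j] += h
        J[:, j] = (residual(vp) - r) / h
    dv = np.linalg.solve(J, -r)    # first N entries play the role of triangle(x); the last is delta-omega
    v += dv
    if np.linalg.norm(dv) < 1.0e-10:
        break

u, omega = v[:N], v[N]
print(f"converged after {it + 1} iterations; omega = {omega:.6f}")
```

Replacing the finite-difference Jacobian by the analytic linearization (C.42), or solving for the Fourier coefficients of △ directly, changes only how the matrix is assembled, not the structure of the (N + 1) × (N + 1) system.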
It should be clear, as stressed at the end of the preceding chapter, that the Newton-Kantorovich method is always applicable and the iteration can always be derived by very elementary methods. ("Elementary" is a euphemism for freshman calculus! Functional analysis is not needed.) However, the differences between the Korteweg-deVries and van der Pol cases, the need to impose phase conditions in some problems but not others, and the possibility that the eigenparameter may assume a continuous range of values or just one all show that the subject of nonlinear differential equations is very complicated. It is important to thoroughly understand the particular problem you are trying to solve because the range of possibilities (no solution, a solution only for a discrete value of a parameter, a solution for a continuous range of values of the parameter) is rather large.
The only sure statement is this: given a sufficiently good first guess, the Newton-Kantorovich method will always compute a solution of the indicated form if it exists. If the solution does not exist, or you have left out phase conditions so that the solution of the numerical problem is not unique, the iteration will (probably!) diverge, or the pseudospectral matrix for the iteration differential equation will be a singular matrix.


C.4 Summary
SUCCESS: Given a sufficiently good first guess, the Newton-Kantorovich method will converge. Furthermore, the asymptotic convergence will be quadratic in the sense that the magnitude of the correction △^{(i+1)} will be roughly the square of the magnitude of △^{(i)} for sufficiently large i.

Notes: (i) The quadratic convergence does not occur until the iteration is close to the final solution. (ii) The corrections will not decrease indefinitely, but will "stall out" at some very small magnitude which is determined by the roundoff of the computer, typically O(10⁻¹² to 10⁻¹⁴). (iii) The iteration is self-correcting, so it is possible to have small errors in solving the linearized ODE while still obtaining the correct answer. The signature of such errors is geometric rather than quadratic convergence. (iv) The iteration does not converge to the exact solution to the differential equation, but only to the exact solution of the N-basis function discretization of the differential equation. Remember to vary N and compare!
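The difference between quadratic convergence and the merely geometric convergence mentioned in Note (iii) is easy to see in a toy scalar problem. The sketch below is an illustration, not from the text; the equation u − cos u = 0 and the starting guess are arbitrary choices. It compares an honest Newton iteration with a "frozen-Jacobian" iteration that reuses the derivative evaluated at the first guess: the Newton corrections roughly square at each step until they stall at roundoff, while the frozen-Jacobian corrections shrink only by a roughly constant factor ρ per iteration.

```python
import numpy as np

# Toy scalar problem: f(u) = u - cos(u) = 0, with a single real root near u = 0.739
f = lambda u: u - np.cos(u)
fp = lambda u: 1.0 + np.sin(u)     # exact derivative (the 1x1 "Jacobian")

u_newton, u_frozen = 1.0, 1.0
J_frozen = fp(1.0)                 # Jacobian evaluated once at the first guess and reused

print(" it   |Newton correction|   |frozen-Jacobian correction|")
for it in range(1, 10):
    d_newton = -f(u_newton) / fp(u_newton)   # honest Newton: quadratic convergence
    d_frozen = -f(u_frozen) / J_frozen       # frozen Jacobian: geometric convergence
    u_newton += d_newton
    u_frozen += d_frozen
    print(f"{it:3d}   {abs(d_newton):12.3e}        {abs(d_frozen):12.3e}")
```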

FAILURE: Newton's method may diverge because of the following:
(i) Insufficiently accurate first guess.
(ii) No solution exists for the nonlinear problem.
(iii) A family of solutions, parameterized by a continuous variable, exists, rather than a single, unique solution.
(iv) Programming error.

Notes: (i) Close to a "limit" or "bifurcation" point as discussed in Appendix D, a very accurate first guess is needed. (ii) Small perturbations may sometimes drastically modify nonlinear solutions; perturbing the KdV equation (C.28) so as to make the coefficients vary with x will destroy the translational invariance, and reduce an infinite number of solutions to either one or none, depending upon whether one can find a soliton such that the first order perturbation is orthogonal to the eigenfunctions. (iii) Too many solutions is as bad as too few: one must add "side" conditions like (C.39) until a unique solution is obtained. (iv) Because Newton's iteration is self-correcting, a code may pass tests for some parameters and then fail in a different parameter range. Geometric convergence, that is, ‖△^{(i+1)}‖ / ‖△^{(i)}‖ ∼ ρ for some constant ρ < 1, indicates that the Jacobian matrix is inconsistent with the N equations which are being solved. (This is true if the iteration is an "honest" Newton's method with recomputation of the linearized differential equation at each step; if the same linearized ODE is used at every iteration to save money, then geometric convergence is the expected price for this modification of Newton's method.)

Table C.1: Nonlinear Boundary Value Problems Solved by Spectral Methods: A Select Bibliography

References                                      Comments
Boyd (1986c)                                    FKdV bicnoidal; Fourier pseudospectral
Yang & Keller (1986)                            Fourier/finite difference algorithm for pipe flow; good use of pseudoarclength continuation to follow multiple branches
Van Dooren & Janssen (1989)                     period-doubling for forced Duffing equation; Fourier Galerkin
Haupt & Boyd (1988, 1991)                       FKdV cnoidal waves; KdV double cnoidal wave
Boyd (1989b)                                    review; many examples
Boyd & Haupt (1991)                             Fourier pseudospectral
Heinrichs (1992d)                               quasi-linear elliptic problems
Boyd (1995h)                                    multiple precision; finite difference preconditioned nonlinear Richardson's iteration; nonlocal solitons
Mamun & Tuckerman (1995)                        good discussion of how a time-marching code can be easily modified to a nonlinear Richardson's iteration; nonlinear steady flow between concentric spheres
Yang & Akylas (1996)                            weakly nonlocal capillary-gravity solitons
Boyd (1997b)                                    Camassa-Holm "peakons"; TB pseudospectral
Boyd (1998c)                                    monograph on solitons; 3 chapters on spectral methods
Nicholls (1998)                                 water waves; Fourier in 2 horizontal coords.
Schultz, Vanden-Broeck, Jiang & Perlin (1998)   spectral boundary integral; Newton continuation; water waves with surface tension
Appendix D

The Continuation Method

"Creeps in this petty pace from day to day"
-- W. Shakespeare, Macbeth




D.1 Introduction
Iterative methods have the vice that they must be initialized with an approximate solution.
The "continuation" method is a strategy for systematically constructing a first guess and
tracing a branch of solutions.
The basic idea is very simple. Suppose that a problem depends upon a parameter α. Suppose further that we know the solution u_0 for α = 0. We can then compute u(α = δ) by applying the Newton or Newton-Kantorovich iteration with u^{(0)}(δ) = u_0, provided that δ is sufficiently small. Similarly, we can set u^{(0)}(2δ) = u(δ), and so on: at each step, we use the solution for u(nδ) as the first guess for calculating u([n + 1]δ).
The better our guess, the faster the iteration will converge. The basic scheme can be refined by using the linear extrapolation:

u^{(0)}([n + 1]δ) = u(nδ) + {u(nδ) − u([n − 1]δ)}    (D.1)

In summary,

[Analytical solution] −−→ u(α = 0)    (D.2)

u(0) −−→ u^{(0)}(δ) −−[Newton iteration]−→ u(δ)    (D.3)

{u([n − 1]δ), u(nδ)} −−[Linear Extrapolation]−→ u^{(0)}([n + 1]δ) −−[Newton's iter.]−→ u([n + 1]δ)    (D.4)
In words, we have the following.




Definition 50 (CONTINUATION METHOD) This procedure solves the equation

N(u; α) = 0    (D.5)

for u by marching in small steps in α. N is the nonlinear operator and α is the parameter. The march begins with a known solution, typically a linear solution which can be found analytically or numerically, and applies an iteration to compute u at each value of α, USING PREVIOUSLY COMPUTED VALUES OF u FOR SMALLER α TO EXTRAPOLATE A FIRST GUESS FOR THE SOLUTION AT THE CURRENT α.
N may include derivatives or integrations, and may represent a system of nonlinear equations.
The unknown u may be the solution to either an algebraic equation or system of equations, or the
solution to a differential equation or system of such equations.
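A minimal Python sketch of the procedure in Definition 50, under illustrative assumptions (the scalar model problem N(u; α) = u³ + u − α = 0, which has exactly one real root for every α; Newton's method as the inner iteration; a fixed step δ = 0.1):

```python
import numpy as np

# Hypothetical model problem (not the text's example): N(u; alpha) = u**3 + u - alpha = 0,
# with the known starting solution u = 0 at alpha = 0.
N_op = lambda u, a: u**3 + u - a
N_u = lambda u, a: 3.0 * u**2 + 1.0          # dN/du, the 1x1 "Jacobian"

def newton(u, a, tol=1e-12, maxit=20):
    """Inner Newton iteration at fixed alpha, started from the continuation first guess."""
    for _ in range(maxit):
        du = -N_op(u, a) / N_u(u, a)
        u += du
        if abs(du) < tol:
            break
    return u

delta = 0.1                                  # step in the parameter alpha
branch = [0.0]                               # u(alpha = 0) = 0, the known starting solution
alphas = delta * np.arange(1, 31)            # march out to alpha = 3

for n, a in enumerate(alphas, start=1):
    if n == 1:
        guess = branch[-1]                   # first step: reuse u(0)
    else:
        guess = 2.0 * branch[-1] - branch[-2]   # linear extrapolation, as in (D.1)
    branch.append(newton(guess, a))

print(f"u(alpha = {alphas[-1]:.1f}) = {branch[-1]:.10f}")
```

Because the extrapolated first guess is already accurate to O(δ²) on a smooth branch, each inner Newton solve typically needs only one or two iterations.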


Strictly speaking, "continuation method" should be "continuation methods" in Definition 50 because the basic idea of parameter-marching to obtain the first guess can be applied with a wide variety of iteration and extrapolation schemes. We can always replace Newton's method by the secant method, for example; since the latter has a smaller radius of convergence than Newton's, the price we must pay is a smaller maximum step in α.
Similarly, to define better extrapolation formulas, we can use either ODE integration schemes or higher-order Lagrangian interpolation in α, though very carefully, since interpolation may diverge as the degree of the interpolating polynomial becomes large, especially when extrapolating.
The ODE methods are based on differentiating N(u[α], α) with respect to α to obtain

N_u(u, α) u_α + N_α = 0    (D.6)

For the special case that N(u, α) is a single algebraic equation, (D.6) is the first order ordinary differential equation

du/dα = − [∂N(u, α)/∂α] / [∂N(u, α)/∂u]    ["Davidenko equation"]    (D.7)

Given a single initial condition such as u(α = 0) = u_0, we can apply our favorite Runge-Kutta or predictor-corrector method in α to trace the solution branch.
This differential-equation-in-the-parameter is named after Davidenko, who first observed that nonlinear algebraic equations could be solved by integrating an ODE or system of ODEs in the parameter. When extended to systems of M nonlinear equations in M unknowns, u and N become M-dimensional vectors and ∂N/∂u becomes the Jacobian matrix whose elements are given by ∂N_i/∂u_j. Integrating (D.7) is then relatively expensive because one must compute du/dα by Gaussian elimination.
In principle, Newton's method can be completely replaced by the integration of the Davidenko equation. However, the errors in a Runge-Kutta scheme will gradually accumulate as we march, so it is a good idea to periodically apply Newton's method so as to reinitialize the march with a u(α) that is accurate to full machine precision.
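A sketch of the same model problem traced instead by integrating the Davidenko equation (D.7) with a classical fourth-order Runge-Kutta step in α, with Newton's method applied every few steps to scrub out the accumulated integration error. The model equation, step size, and polishing interval are illustrative assumptions, not taken from the text.

```python
# Hypothetical model problem, as in the continuation sketch above: N(u; alpha) = u**3 + u - alpha
N_op = lambda u, a: u**3 + u - a
dN_du = lambda u, a: 3.0 * u**2 + 1.0
dN_da = lambda u, a: -1.0

def rhs(u, a):
    """Davidenko equation (D.7): du/dalpha = -(dN/dalpha)/(dN/du)."""
    return -dN_da(u, a) / dN_du(u, a)

def newton_polish(u, a, tol=1e-13):
    """A few Newton steps to restore full accuracy after several RK4 steps."""
    for _ in range(10):
        du = -N_op(u, a) / dN_du(u, a)
        u += du
        if abs(du) < tol:
            break
    return u

u, a, h = 0.0, 0.0, 0.05                  # start from the known solution u(alpha = 0) = 0
for step in range(1, 61):                 # march out to alpha = 3
    k1 = rhs(u, a)
    k2 = rhs(u + 0.5 * h * k1, a + 0.5 * h)
    k3 = rhs(u + 0.5 * h * k2, a + 0.5 * h)
    k4 = rhs(u + h * k3, a + h)
    u += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    a += h
    if step % 10 == 0:                    # periodic Newton reinitialization
        u = newton_polish(u, a)

print(f"u(alpha = {a:.2f}) = {u:.10f},  residual = {N_op(u, a):.2e}")
```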


D.2 Examples
Let the goal be to solve the polynomial equation
