

P(x; α) = 0

where α is a parameter. Let x₀ be a root which is known for α = 0. We can trace this
solution branch x(α) emanating from the point (x₀, 0) by solving the ODE initial-value
problem

dx/dα = − [∂P(x; α)/∂α] / [∂P(x; α)/∂x]        (D.9a)

x(0) = x₀        (D.9b)

For example,

P(x; α) ≡ αx² − 2x + 1        (D.10)

∂P/∂α = x²        (D.11)

∂P/∂x = 2αx − 2        (D.12)

As α → 0, this collapses into the linear equation (−2x + 1) = 0, which has the unique
solution x = 1/2. We can trace x(α) away from α = 0 by solving

dx/dα = − x² / [2(αx − 1)]        (D.13a)

x(0) = 1/2        (D.13b)
Polynomials have multiple solutions, but we can apply the same method to trace all the
roots. For small α, one can easily show that

x₂(α) ≈ 2/α − 1/2        (D.14)

which can be used to initialize the same differential equation, (D.13a), for small α to trace
the second solution for all α for which this root is real.
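To make this concrete, (D.13a) can be handed to a library ODE solver and the traced roots compared against the quadratic formula. The sketch below uses SciPy's solve_ivp; the marching interval [0, 0.9] and the tolerances are arbitrary choices, and the second root is started at α = 10⁻³ using the approximation (D.14).

```python
import numpy as np
from scipy.integrate import solve_ivp

def davidenko(alpha, x):
    # dx/dalpha = -(dP/dalpha)/(dP/dx) = -x**2 / (2*(alpha*x - 1))
    return -x**2 / (2.0*(alpha*x - 1.0))

# first root: start from the known alpha = 0 solution x = 1/2
x1 = solve_ivp(davidenko, (0.0, 0.9), [0.5],
               rtol=1e-10, atol=1e-12).y[0, -1]

# second root: initialize near alpha = 0 with the approximation (D.14)
a_start = 1e-3
x2 = solve_ivp(davidenko, (a_start, 0.9), [2.0/a_start - 0.5],
               rtol=1e-10, atol=1e-12).y[0, -1]

# the quadratic formula gives the roots (1 -+ sqrt(1 - alpha))/alpha
exact1 = (1.0 - np.sqrt(0.1))/0.9
exact2 = (1.0 + np.sqrt(0.1))/0.9
```

Both roots are traced by the same differential equation; only the initial condition changes.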
Two difficulties arise. First, what happens when two roots of the polynomial coincide?
The answer is that the simple marching procedure will probably fail. There are good strate-
gies for dealing with “limit points” and “bifurcation points”, explained in later sections.
The second problem is to obtain the first guess for continuation, and we turn to this next.

D.3 Initialization Strategies
To apply continuation, we need a solution for α = 0 or, equivalently, a first guess which
is sufficiently close so that the Newton-Kantorovich method will convert the guess into a
solution. Strategies for obtaining this needed solution or first guess include the following:
(i) a linear or perturbative solution
(ii) invention of an artificial marching parameter and
(iii) low-order collocation.
The first strategy may be illustrated by cnoidal waves of the so-called FKdV equation.
The ODE and boundary conditions are

−u_xxxxx + (u − c) u_x = 0        (D.15)

u(x + 2π) = u(x)        (D.16)

where c is the phase speed. For small amplitude, the nonlinear term can be ignored and
solving the linearized equation gives

u(x) ≈ a cos(x) ;    c ≈ −1,    a ≪ 1        (D.17)

which can be used to initialize the march. The most obvious marching parameter is the
phase speed c, but we can also define the coefficient of cos(x) in the Fourier expansion of
u(x) to be the amplitude a, and march in a. Fig. D.1 shows c(a).
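A rough illustration of this march, under several assumptions: (D.15) is integrated once in x to give −u_xxxx + u²/2 − cu = A for an integration constant A, u is expanded in N = 8 cosine modes with the coefficient of cos(x) held fixed at the amplitude a, and a library nonlinear solver stands in for a hand-coded Newton iteration. The unknowns at each amplitude are the higher coefficients, the phase speed c, and A.

```python
import numpy as np
from scipy.optimize import fsolve

N = 8                                         # highest retained cosine mode (assumed)
n = np.arange(1, N + 1)
xj = np.pi*(np.arange(N + 1) + 0.5)/(N + 1)   # collocation points on (0, pi)
C = np.cos(np.outer(xj, n))                   # basis matrix cos(n * x_j)

def residual(z, a1):
    # unknowns z: the coefficients a_2..a_N, the phase speed c, and the
    # integration constant A; the amplitude a_1 is held fixed while marching
    a = np.concatenate(([a1], z[:N - 1]))
    c, A = z[N - 1], z[N]
    u = C @ a                                 # u at the collocation points
    u4 = C @ (n**4 * a)                       # u_xxxx at the collocation points
    return -u4 + 0.5*u**2 - c*u - A           # once-integrated FKdV equation

z = np.zeros(N + 1)
z[N - 1] = -1.0        # linear limit: a_n = 0 for n >= 2, c = -1, A = 0
for a1 in np.linspace(0.01, 0.5, 50):         # march in the amplitude a
    z = fsolve(residual, z, args=(a1,))

c = z[N - 1]           # phase speed of the finite-amplitude cnoidal wave
```

Each converged solution seeds the solver at the next amplitude, exactly as in the march described above.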
Since dc/da varies much more rapidly for small amplitude than for large, it is efficient to
employ a variable step-size in the marching parameter a. One could, for example, monitor
the number of Newton’s iterations that are required at each value of a. When it falls below
a cutoff, one could automatically double the step-size. When the number of Newton’s
iterations is large, one may automatically halve the step-size. Library ODE-solvers, such
as might be applied to the Davidenko equation, use similar strategies.
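A minimal sketch of this iteration-count heuristic, applied to the quadratic example (D.10); the cutoff of four iterations for doubling, and the initial step-size, are arbitrary choices.

```python
import numpy as np

def newton(f, df, x, tol=1e-12, maxit=20):
    """Newton's iteration; returns (root, iteration count) or raises."""
    for k in range(1, maxit + 1):
        dx = f(x)/df(x)
        x -= dx
        if abs(dx) < tol:
            return x, k
    raise RuntimeError("Newton's iteration failed to converge")

# march in alpha for P(x; alpha) = alpha*x**2 - 2*x + 1, adapting the step h
alpha, x, h = 0.0, 0.5, 0.1
while alpha < 0.9 - 1e-14:
    h = min(h, 0.9 - alpha)
    try:
        x, its = newton(lambda t: (alpha + h)*t**2 - 2*t + 1,
                        lambda t: 2*(alpha + h)*t - 2, x)
        alpha += h
        if its <= 4:          # converging quickly: double the step-size
            h *= 2.0
    except RuntimeError:
        h *= 0.5              # struggling: halve the step-size and retry
```

On this smooth branch the step-size doubles repeatedly, so the march to α = 0.9 takes only a handful of Newton solves.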
Unfortunately, in the vicinity of a limit point, du/dα → ∞, so no variable step-size
strategy will work. However, there are other remedies, as explained in the next section.
The second strategy of inventing an artificial continuation parameter may be applied to
solve a polynomial equation

R(x) = 0        (D.18)

for all its N roots. There is an infinite variety of choices, but Wasserstrom (1973) defines

P(x; α) ≡ α R(x) + (1 − α)[x^N − 1]        (D.19)

When α = 0, the roots of P(x; α) are the N roots of unity. When α = 1, P(x; 1) = R(x).
When α is a physical parameter, we usually graph x(α). When α is an artificial marching
parameter, however, the intermediate values of x(α) are simply thrown away. Fig. D.2

Figure D.1: Phase speed c versus amplitude a for the cnoidal wave of the fifth-degree
KdV equation, taken from Boyd (1986b). For small amplitude, the differential equation is
linear and c ≈ −1 while u(x) ≈ a cos(x). This linear solution can be used to initialize the
continuation method, which can then march to arbitrarily large amplitude in small steps.

Figure D.2: An illustration of the continuation method for a polynomial of degree 25.
(a) Trajectories of the coefficients of the polynomial as functions of the marching parameter.
(b) Trajectories of the roots of the polynomial.
This example and graphs are taken from Wasserstrom (1973).

shows how the roots vary with α for a 25-th degree polynomial taken from Wasserstrom (1973).
Is this a good way to solve polynomial equations? If there is a single solution of physical
interest and if α is a physical parameter, then continuation is very efficient. When roots
x(α) collide as α varies, the continuation is defeated. However, for complex trajectories,
this is very unlikely. The technical jargon is that (D.19) is a “probability-one homotopy”
(Li, 1987, and Morgan, 1987). What this means is that if one picks the coefficients of R(x)
as random complex-valued numbers, the probability of root-collision is infinitesimal.
Unfortunately, the situation is different when the trajectories are real. If R(x) has any
complex roots, then continuation from a polynomial P(x; α) which has only real roots
will inevitably fail because pairs of roots must merge before separating as a complex con-
jugate pair. Thus, one needs a complex-valued embedding strategy like Wasserstrom’s.
Furthermore, near-collisions will cause problems if the roots pass sufficiently close, and the
probability of such near-collisions is not zero, although it is very small.
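A sketch of Wasserstrom's embedding (D.19) for a random complex monic polynomial of degree 8; the 200 equal steps in α and the fixed ten Newton iterations per step are crude stand-ins for the adaptive strategies discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# a random complex monic polynomial R(x), highest coefficient first
R = np.concatenate(([1.0 + 0j], rng.normal(size=N) + 1j*rng.normal(size=N)))
dR = np.polyder(R)
Q = np.zeros(N + 1, dtype=complex)            # x**N - 1, whose roots are known
Q[0], Q[-1] = 1.0, -1.0
dQ = np.polyder(Q)

x = np.exp(2j*np.pi*np.arange(N)/N)           # the N roots of unity (alpha = 0)
for alpha in np.linspace(0.0, 1.0, 201)[1:]:  # march alpha from 0 to 1
    for _ in range(10):                       # Newton's iteration on P(x; alpha)
        P = alpha*np.polyval(R, x) + (1 - alpha)*np.polyval(Q, x)
        dP = alpha*np.polyval(dR, x) + (1 - alpha)*np.polyval(dQ, x)
        x -= P/dP
# x now approximates all N roots of R(x)
```

Because the coefficients are random and complex, root-collisions along the way are vanishingly unlikely, which is exactly the "probability-one" property described above.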
For a single polynomial equation, the simplest strategy is the Traub-Jenkins package
which is available in most software libraries. For systems of polynomial equations, how-
ever, little library code exists and the continuation strategy is quite attractive as described
in the book by Morgan (1987).
For a differential equation such as

W_rr + (1/r) W_r − W − W² = 0 ;    W → 0 as |r| → ∞        (D.20)

which is known as the “Petviashvili monopole vortex” problem in geophysical fluid dy-
namics, linearization fails because the linearized problem has only the solutions K₀(r),
which is singular at the origin, and I₀(r), which is unbounded for large r. This problem
is “essentially nonlinear” in the sense that the nonlinear term is necessary to remove the
singularity at the origin.
The physical background of this problem shows that the solution qualitatively resem-
bles exp(−r). One way to exploit this is to define an artificial forcing function to be the
residual of the trial solution exp(−r), that is,

f(r) ≡ [∂²/∂r² + (1/r) ∂/∂r − 1 − e^(−r)] e^(−r)        (D.21)

If we solve the perturbed problem

u_rr + (1/r) u_r − u − u² = (1 − α) f(r)        (D.22)

then Newton’s iteration will converge for α = 0 because exp(−r) is, by construction, the ex-
act solution. As we march in small steps in α, u(r; α) is slowly deformed from exp(−r) (α =
0) to the Petviashvili monopole, W(r) (α = 1).
This method of the “artificial forcing function” has a broad range of usefulness. It helps
to choose a trial solution that qualitatively resembles the true solution, because u(r; α)
then varies less between α = 0 and α = 1, but this is by no means mandatory.
A variant is to choose the forcing function for simplicity, such as a constant δ:

u_rr + (1/r) u_r − u − u² = δ        (D.23)
For small δ, the perturbed differential equation has two solutions: (i) the monopole (large
amplitude) and (ii) the artificial (small amplitude) solution, which may be calculated by
ignoring the nonlinear term, solving the resulting linear BVP, and then using the linear so-
lution to initialize Newton’s iteration. If we are lucky, these two solution branches, u₁(r; δ)
and u₂(r; δ), will join at a limit point, δ = δ_limit. Because of the limit point, marching in
δ will never “turn the corner” around the limit point onto the upper branch; this tactic of
a constant forcing function is practical only when combined with “pseudoarclength con-
tinuation”, which is briefly described below. Glowinski, Keller, and Reinhart (1985) have
successfully applied this method to a problem very closely related to (D.20).
The third numerical strategy for obtaining a first guess is to compute a crude solution
using a very low order approximation which can be obtained in closed form. For example,
if one can invent a two-parameter guess, one can then make a contour plot of the residual
when this trial solution is substituted into the differential equation, and pick the parame-
ters that minimize the residual. A more direct tactic in this same spirit is to apply low-order
collocation.
For example, the solution of (D.20) is (a) symmetric¹ and (b) vanishes as |r| → ∞. It
follows that if we solve this using the orthogonal rational Chebyshev functions TB_n(r),
discussed in Chapter 17, then we need only the symmetric (even n) basis functions. Fur-
ther, we can create new basis functions which individually satisfy the boundary conditions
at infinity by taking linear combinations of the TB_n(r). In particular, choosing φ₀(r) ≡
TB₂(r) − TB₀(r) and truncating the basis to just this single function gives the approxima-
tion

W(r) ≈ a₀ / (1 + [r/L]²)        (D.24)
where L is the user-choosable map parameter associated with the TB_n(r). Choosing L = 2,
which is known to be reasonable from calculating the expansion of functions that decay
as exp(−r), and applying the one-point pseudospectral algorithm gives a linear equation
whose solution is a₀ = −1. The resulting approximation has a maximum error of about
16% of the maximum of the monopole, but it is sufficiently close so that Newton’s iteration
with many basis functions will converge. Boyd (1986c) and Finlayson (1973) give other
examples.
It goes without saying that this approach is more risky than the slow, systematic march
of continuation in an artificial parameter. It is also much cheaper in machine time in the
sense that, if it works, low-order collocation may obviate the need for continuation, at least
in non-physical parameters. There is also the satisfaction of “making friends with the func-
tion”, to use Tai-Tsun Wu’s poetical phrase: brute force numerical computation is often no
better at giving insight into the physics or the structure of the solution than television is at
giving insight into real human behavior.
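The one-point calculation can be scripted in a few lines. This sketch parameterizes the trial solution directly as W(r) = a₀/(1 + (r/L)²) and collocates at r₀ = L; the collocation point is an assumption, and the value obtained for the coefficient depends both on it and on how the basis function is normalized, so it need not reproduce the a₀ quoted above. Dividing the collocation residual by one factor of a₀ leaves an equation that is linear in a₀, as claimed.

```python
L = 2.0            # map parameter for the TB_n(r) basis (the book's choice)
r0 = 2.0           # collocation point: an assumption for this sketch

def phi(r):
    # trial shape 1/(1 + (r/L)**2), the closed form of TB2 - TB0 up to a constant
    return 1.0/(1.0 + (r/L)**2)

def dphi(r):
    # phi'(r) = -(2 r / L**2) * phi**2, worked out by hand
    return -(2.0*r/L**2)*phi(r)**2

def d2phi(r):
    # phi''(r) = -(2/L**2)*phi**2 + (8 r**2/L**4)*phi**3
    return -(2.0/L**2)*phi(r)**2 + (8.0*r**2/L**4)*phi(r)**3

# residual of W = a0*phi in W'' + W'/r - W - W**2 = 0 at r0, divided by a0,
# is linear in a0:  phi'' + phi'/r - phi = a0 * phi**2
a0 = (d2phi(r0) + dphi(r0)/r0 - phi(r0)) / phi(r0)**2
```

A different collocation point or basis normalization shifts the number, but the structure of the calculation, one linear equation for one coefficient, is the same.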

D.4 Limit Points
Definition 51 (LIMIT POINT) If N(u; α) = 0 is a single nonlinear equation with a solution
branch u(α), then a LIMIT POINT occurs wherever

N_u(u[α]; α) = 0 ;    N_α(u[α]; α) ≠ 0        (D.25)

The Davidenko equation implies that an equivalent definition is

du/dα → ∞    as    α → α_limit        (D.26)

¹ In cylindrical coordinates, the r-dependent coefficients of the terms in the Fourier series in polar angle θ are
symmetric about r = 0 for even wavenumber (including the axisymmetric term), and the coefficients of odd
wavenumber are antisymmetric in r, as explained in Chapter 18.

Two distinct solution branches u₁(α) and u₂(α) meet at a limit point, and exist only on one side
of the limit point. That is, if u₁(α) and u₂(α) exist for α ≤ α_limit, then they do not exist (as real
solutions) for α > α_limit.
Newton’s iteration must fail at and near a limit point because the iteration is attracted equally
to both branches, and responds by converging to neither.
When N(u; α) is a system of equations, (D.26) still applies. The interpretation of (D.25) is
that (i) the determinant of the Jacobian matrix N_u is zero and (ii) the column vector N_α does not lie
in the range of N_u, so that the matrix equation N_u du/dα = −N_α has no bounded solution.

As an example, the simple algebraic equation

(u − 1/2)² + α² = 1        (D.27)

has limit points at α = ±1 where u = 1/2. This is merely the equation of a circle in the
u-α plane as shown in Fig. D.3, but nonetheless it is impossible to trace the entire curve in
a single pass by marching in α. By using a variable step-size, we can come as close to the
limit point as we want, but we can never turn the corner and march from the lower branch
onto the upper branch using small steps in either α or u.
However, it is trivial to write the equation of the circle (D.27) in parametric form as

u = cos(t) + 1/2 ;    α = sin(t)        (D.28)

Both u and α are continuous, single-valued functions of t, so that we can trace both branches
of the solution in one continuous path if we march in t. The new continuation parameter t
is the arc-length along the solution curve.
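A numerical sketch of the same idea, phrased as pseudoarclength continuation: both u and α are treated as unknowns, Newton's iteration is applied to (D.27) augmented by a step-length constraint along the current tangent, and the limit points at α = ±1 cause no difficulty. The step-size and step count are arbitrary choices.

```python
import numpy as np

def F(u, a):
    # the circle (D.27): (u - 1/2)**2 + alpha**2 - 1 = 0
    return (u - 0.5)**2 + a**2 - 1.0

u, a = -0.5, 0.0         # a point on the lower branch at alpha = 0
tu, ta = 0.0, 1.0        # initial tangent direction (du/ds, dalpha/ds)
ds = 0.05                # arclength step
path = [(u, a)]
for _ in range(140):
    u0, a0 = u, a
    u, a = u0 + ds*tu, a0 + ds*ta             # tangent predictor
    for _ in range(20):                       # Newton corrector on the 2x2 system
        G = np.array([F(u, a),
                      tu*(u - u0) + ta*(a - a0) - ds])
        J = np.array([[2.0*(u - 0.5), 2.0*a],
                      [tu, ta]])
        du, da = np.linalg.solve(J, -G)
        u, a = u + du, a + da
    tn = np.array([-2.0*a, 2.0*(u - 0.5)])    # new tangent: null vector of dF
    tn /= np.linalg.norm(tn)
    if tn[0]*tu + tn[1]*ta < 0.0:             # keep a consistent orientation
        tn = -tn
    tu, ta = tn
    path.append((u, a))
```

The path passes smoothly through both limit points and traces the whole circle, which no amount of step-size adjustment in α alone can accomplish.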
It is usually impossible to define the arclength analytically as in (D.28), but H. B. Keller
and various collaborators (Keller, 1977, Decker and Keller, 1980, Chan, 1984, and Glowinski,

Figure D.3: Solution curve u(α) for a quadratic equation. The two limit points are marked
by ×’s. There, as indicated on the right, the solution curve is tangent to a vertical line
(dashed) and du/dα = ∞. For −1 < α < 1, there are two solutions for a given α. At the
limit points, these upper and lower branches meet. Unable to decide whether to converge
to the upper or lower branch, Newton’s iteration diverges at the limit points.

