
1989a); the equivalent Maple program is Table 20.2.
The solution is

a0 = 3(43243200ε⁶ + 2162160ε⁴ + 27720ε² + 31)/D(ε)
a1 = 33(327600ε⁴ + 41)/D(ε)
a2 = 429(840ε² − 13)/D(ε)                                            (20.8)
a3 = 6435/D(ε)

where

D(ε) = 128(2027025ε⁸ + 945945ε⁶ + 51975ε⁴ + 630ε² + 1)              (20.9)

Observe that the residual function R4, and therefore the elements of the matrix equa-
tion, are quadratic in ε. The coefficients of the spectral series aj are rational functions of
ε with numerator and denominator polynomials of degree no higher than the degree of the
elements of H multiplied by N , that is, 2 · 4 = 8. This rational dependence upon a parameter
is not special to this equation, but is a consequence of the following theorem.

Theorem 39 (MATRICES WHOSE ELEMENTS DEPEND ON A PARAMETER) Let the
elements of a matrix H of dimension N be polynomials in a parameter ε of degree at most k and let
A denote the column vector which solves HA = F where F is a column vector whose elements are
polynomials in ε of degree at most m. Then (i) det(H) is a polynomial in ε of degree at most N k.
(ii) The elements of A are rational functions of ε. The degree of the numerator is at most
k(N − 1) + m; the degree of the denominator is at most kN .

The proof is a trivial consequence of Cramer's Rule for solving matrix equations (Vino-
grade, 1967, pg. 60) and the definition of a determinant (op. cit., pg. 169). Q. E. D.
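Theorem 39 is easy to check mechanically. The short Python sketch below is our illustrative addition, not part of the text: the matrix H and forcing vector F are filled with random polynomial entries (the seed and sizes are arbitrary), and the degree bounds are verified via Cramer's Rule using exact rational arithmetic.

```python
# Illustrative check of Theorem 39 (random data; not from the text):
# build an N x N matrix H of polynomials in eps of degree <= k and a
# forcing vector F of degree <= m, then verify the Cramer's-Rule bounds.
import itertools, random
from fractions import Fraction

def pmul(a, b):                       # multiply two coefficient lists
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(a, b):                       # add two coefficient lists
    out = [Fraction(0)] * max(len(a), len(b))
    for i, ai in enumerate(a): out[i] += ai
    for i, bi in enumerate(b): out[i] += bi
    return out

def deg(p):                           # true degree (ignores zero leading terms)
    d = -1
    for i, c in enumerate(p):
        if c != 0: d = i
    return d

def det(M):                           # Leibniz determinant of a polynomial matrix
    n = len(M)
    total = [Fraction(0)]
    for perm in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]: sign = -sign
        term = [Fraction(sign)]
        for i in range(n):
            term = pmul(term, M[i][perm[i]])
        total = padd(total, term)
    return total

random.seed(1)
N, k, m = 3, 2, 1
H = [[[Fraction(random.randint(-9, 9)) for _ in range(k + 1)]
      for _ in range(N)] for _ in range(N)]
F = [[Fraction(random.randint(-9, 9)) for _ in range(m + 1)] for _ in range(N)]

denom = det(H)                        # common denominator of the elements of A
assert deg(denom) <= N * k            # part (i): deg det(H) <= N*k
for i in range(N):                    # part (ii): numerator degree bounds
    Hi = [[F[r] if c == i else H[r][c] for c in range(N)] for r in range(N)]
    assert deg(det(Hi)) <= k * (N - 1) + m
```

The same permutation-expansion determinant is reused in the later examples of this section, where the parameter is the eigenvalue or the amplitude.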

The boundary layer is so narrow for ε = 1/10 that the half-width of the layer is only
0.07; that is, 50% of the drop of u(x) from its maximum to zero occurs between x = 0.93 and
x = 1. Nevertheless, the maximum pointwise error of the four-term Legendre-Galerkin
approximation is only 0.017. Table 20.3 shows that the error is no worse than 8% even for
ε as small as 1/20.
20.3. EXAMPLES 467

Like all spectral approximations, u4(x; ε) is highly uniform in the coordinate x, but this
does not imply uniformity in the parameters. In this example, the error is highly non-
uniform in the parameter ε.

Example Two: The Quartic Oscillator of Quantum Mechanics
The special challenge of this eigenvalue problem is that it is posed on an infinite inter-
val:

uyy + (E − y⁴)u = 0,    y ∈ [−∞, ∞]                                  (20.10)

with the boundary condition

|u| → 0 as |y| → ∞                                                   (20.11)

where E is the eigenvalue, the energy of the quantum bound state. The rational Chebyshev
functions described in Boyd (1987a) and in Chapter 17 are a good basis set for the infinite
interval. To "polynomialize" these rational functions and simplify the integrals, make the
change of variable

y = Lx/√(1 − x²)                                                     (20.12)

where L is a user-choosable constant (the "map parameter"). By using the chain rule or Table
E-5, this change of coordinate transforms the problem into

(1 − x²)⁴ [(1 − x²)uxx − 3x ux]/L² + [(1 − x²)²E − L⁴x⁴] u = 0       (20.13)

u(±1) = 0,    x ∈ [−1, 1]                                            (20.14)
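As a quick numerical sanity check (our addition, not the book's), note that the left-hand side of (20.13) is just (1 − x²)² times the original equation (20.10) evaluated at y(x). The sketch below verifies this with finite differences; the trial function exp(−y²/2) and the sample point are arbitrary choices for illustration.

```python
# Finite-difference check (illustrative, not from the text) that eq. (20.13)
# equals (1 - x^2)^2 times eq. (20.10) under the map y = L x / sqrt(1 - x^2).
# Trial function f(y) = exp(-y^2/2) has exact f'' = (y^2 - 1) f.
import math

L, E = 2.0, 1.0                        # map parameter and an arbitrary energy
f = lambda y: math.exp(-0.5 * y * y)
y_of_x = lambda x: L * x / math.sqrt(1.0 - x * x)
u = lambda x: f(y_of_x(x))

x, h = 0.5, 1e-4
Q = 1.0 - x * x
ux  = (u(x + h) - u(x - h)) / (2 * h)              # central differences
uxx = (u(x + h) - 2 * u(x) + u(x - h)) / (h * h)

# Left side of the transformed equation (20.13):
lhs = Q**4 * (Q * uxx - 3 * x * ux) / L**2 + (Q**2 * E - L**4 * x**4) * u(x)

# (1 - x^2)^2 times the original residual u_yy + (E - y^4) u at y(x):
y = y_of_x(x)
rhs = Q**2 * ((y * y - 1.0) * f(y) + (E - y**4) * f(y))

assert abs(lhs - rhs) < 1e-5
```

The check uses only the chain rule, so it is independent of the choice of basis functions.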

The rational Chebyshev functions are converted into ordinary Chebyshev polynomials by
this transformation, so both the infinite interval and the rational basis functions are swept
under the rug by the mapping (20.12).
The solution is not very sensitive to the map parameter L as long as it roughly matches
the scale of the basis functions to the width of u(x); we somewhat arbitrarily set L = 2.
One can prove that the eigenfunctions of (20.10) are all symmetric (for even mode num-
ber) or antisymmetric (for odd mode number). To compute the lowest few symmetric
modes, we set

u = (1 − x²){a1 + a2 x² + a3 x⁴ + a4 x⁶ + a5 x⁸}                     (20.15)



Table 20.4: Maple program for Example Two: Quartic Oscillator

Q:=1-x*x; L:=2; # L=map parameter;
u:=(1-x*x)*(a1+a2*x*x+a3*x**4+a4*x**6+a5*x**8);
R:=Q**4 *(Q*diff(u,x,x)-3*x*diff(u,x) )/(L*L) + (lamb*Q*Q - L**4 * x**4 )*u;
# E is a Maple reserved symbol, so we use "lamb" for the eigenvalue
# To use Gegenbauer polynomials of first order instead of Legendre,
# multiply the integrand in the loop by (1-x*x);
for ii from 1 by 1 to 5 do eq.ii:=integrate(x**(2*ii-2)*R,x=-1..1); od:
with(linalg); J:=matrix(5,5);
for ii from 1 by 1 to 5 do for j from 1 by 1 to 5 do
J[ii,j]:=coeff(eq.ii,a.j,1); od; od;
secdet:=simplify(det(J)); # secdet=D(E)
Eigenvalue:=fsolve(secdet,lamb);
CHAPTER 20. SYMBOLIC CALCULATIONS
468


Table 20.5: Errors for Example Two: Quartic Oscillator

                 w(x) = 1: Legendre         w(x) = (1 − x²): Gegenbauer
n    E_exact     E_numerical     Error      E_numerical     Error
0    1.060       1.065           0.005      1.061            0.001
2    7.456       7.655           0.199      7.423           -0.033
4    16.26       17.38           1.12       17.17            0.91
6    26.528      64.32           Huge       42.53            Huge
8    37.92       715.66          Huge       330.7            Huge




We converted the differential equation into a 5-dimensional matrix problem by de-
manding that the integrals of the residual against the test functions {1, x², x⁴, x⁶, x⁸} should
vanish. Since the matrix equation is homogeneous, it has a nontrivial solution only if its
determinant D(E) = 0. The vanishing of this "secular determinant", to borrow the usual
terminology of quantum mechanics, determines the eigenvalue E.
Table 20.4 is a short Maple program which produces

D(E) = 1 − 1.143E + 0.203E² − 0.0102E³ + 0.000124E⁴ − 0.000000153E⁵  (20.16)

after the coefficients have been converted to floating point numbers. The exact determi-
nant, given in Boyd (1993), illustrates "integer swell": the coefficients are ratios of integers
of as many as 29 digits to integers of as many as 35 digits. Table 20.5 lists the roots, the
exact eigenvalues (from Bender and Orszag, 1978, pg. 523), and the absolute errors. As is
usual in numerical solutions to eigenvalue problems, the lowest numerical eigenvalue is
very accurate, the eigenvalues up to roughly N/2 are of decreasing accuracy, and the upper
few eigenvalues of the N = 5 matrix bear no useful resemblance to those of the differential
equation. The accuracy of the lowest root is quite good for such a low order approximation.
For comparison, we redid the problem with the inner product weight (1 − x²) instead of 1.
As suggested in the previous section, there is a striking improvement in accuracy, at least
for the lowest two modes.
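The computation of Table 20.4 and the determinant (20.16) can be cross-checked without Maple. The Python sketch below is our addition, not the book's program: it rebuilds the 5x5 Galerkin matrix in exact rational arithmetic, expands det(A + E B) by permutations, and bisects for the lowest eigenvalue. The variable names are ours.

```python
# Independent reconstruction (illustrative) of the secular determinant (20.16).
# Polynomials in x are lists of Fractions; each Galerkin entry is linear in E.
import itertools
from fractions import Fraction as Fr

def pmul(a, b):
    out = [Fr(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(a, b):
    out = [Fr(0)] * max(len(a), len(b))
    for i, c in enumerate(a): out[i] += c
    for i, c in enumerate(b): out[i] += c
    return out

def pdiff(p):
    return [i * p[i] for i in range(1, len(p))]

def integral(p):                      # integral of p(x) over [-1, 1]
    return sum(Fr(2, i + 1) * c for i, c in enumerate(p) if i % 2 == 0)

L2, L4 = Fr(4), Fr(16)                # map parameter L = 2
Q = [Fr(1), Fr(0), Fr(-1)]            # 1 - x^2
Q2 = pmul(Q, Q)
Q4 = pmul(Q2, Q2)
N = 5
A = [[Fr(0)] * N for _ in range(N)]   # E-independent part of the matrix
B = [[Fr(0)] * N for _ in range(N)]   # coefficient of E
for j in range(N):
    phi = pmul(Q, [Fr(0)] * (2 * j) + [Fr(1)])    # (1-x^2) x^(2j), as in (20.15)
    d1 = pdiff(phi)
    d2 = pdiff(d1)
    stiff = pmul(Q4, padd(pmul(Q, d2), pmul([Fr(0), Fr(-3)], d1)))
    stiff = [c / L2 for c in stiff]               # Q^4 (Q phi'' - 3x phi') / L^2
    P = padd(stiff, pmul([Fr(0)] * 4 + [-L4], phi))   # ... - L^4 x^4 phi
    S = pmul(Q2, phi)                             # Q^2 phi multiplies E
    for i in range(N):
        w = [Fr(0)] * (2 * i) + [Fr(1)]           # test function x^(2i)
        A[i][j] = integral(pmul(w, P))
        B[i][j] = integral(pmul(w, S))

D = [Fr(0)]                           # det(A + E B) as a polynomial in E
for perm in itertools.permutations(range(N)):
    sign = 1
    for i in range(N):
        for j in range(i + 1, N):
            if perm[i] > perm[j]: sign = -sign
    term = [Fr(sign)]
    for i in range(N):
        term = pmul(term, [A[i][perm[i]], B[i][perm[i]]])
    D = padd(D, term)
D = [c / D[0] for c in D]             # normalize so D(0) = 1, as in (20.16)

f = lambda E: sum(float(c) * E ** k for k, c in enumerate(D))
lo, hi = 0.0, 2.0                     # bracket containing only the lowest root
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0: hi = mid
    else: lo = mid
E0 = 0.5 * (lo + hi)
assert abs(float(D[1]) + 1.143) < 2e-3   # matches (20.16)
assert abs(E0 - 1.065) < 5e-3            # lowest eigenvalue of Table 20.5
```

Because the arithmetic is exact until the final normalization, this also exhibits the "integer swell" mentioned above: print the unnormalized coefficients to see the huge integers.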

Example Three: Branching (Bifurcation) for the Fifth-Order KdV Equation
The twist in this problem (Boyd, 1986a) is that the equation depends nonlinearly on the
eigenparameter a. This is a serious complication for conventional software, but computer
algebra problems are indifferent as to whether the matrix is linear or nonlinear in a.
The problem is to compute a solution to the homogeneous, linear equation

−∆xxxxx − c∆x + {U (x; a)∆}x = 0                                     (20.17)


∆(x) = ∆(x + 2π)    ∀x                                               (20.18)

where U (x; a) is the “cnoidal wave” of the Fifth-Order Korteweg-deVries equation (FKdV),
that is, a spatially periodic solution to the nonlinear eigenvalue problem

−Uxxxxx − cUx + U Ux = 0                                             (20.19)


U (x) = U (x + π)    for all x                                       (20.20)

where c(a) is the eigenvalue and the one-parameter family of cnoidal waves is parame-
terized by a, the amplitude of cos(2x) in the Fourier series for U (x; a). The physical back-
ground is rather complicated (Boyd, 1986a), but the salient facts are these. The cnoidal

waves which solve (20.19) have a period of π, are symmetric with respect to x = 0, and
exist for all real values of a. For sufficiently large amplitudes, (20.19) also has "bicnoidal"
solutions which have a period of 2π, double that of the cnoidal waves, and two peaks
flanking the origin. For some value of a, the bicnoidal waves are a subharmonic bifurcation
from the cnoidal wave. Away from the bifurcation, the linearized equation for small per-
turbations to the cnoidal wave, (20.17), has no solution except the trivial one. At the bifur-
cation point, however, (20.17) has a nontrivial eigensolution because the solution to (20.19)
is no longer unique. The cnoidal and bicnoidal solutions coexist at the bifurcation; near
but not actually at the bifurcation point, the bicnoidal wave is approximately described by
ubicnoidal(x) ≈ U (x; a) + g∆(x) for some constant g, where ∆(x) is the eigensolution (with
zero eigenvalue) of (20.17).
To determine the bifurcation point, we must first compute U (x; a) and then substitute
the result into (20.17). When the linearized equation is discretized by a spectral method,
the resulting matrix equation will have a nontrivial solution if and only if the determinant
∆(a) = 0. Thus, the bifurcation point a = abifurcation can be found by computing the roots
of the determinant ∆(a).
To compute the cnoidal wave, we could apply the spectral method to (20.19) followed
by one or two Newton iterations. It turns out, however, that the bicnoidal wave bifurcates
at such a small amplitude that the cnoidal wave is adequately approximated at the bifur-
cation point by the first two terms of its perturbation series ("Stokes' expansion") (Boyd,
1986a):

U (x; a) ≈ a cos(2x) + [a²/960] cos(4x)                              (20.21)


c(a) ≈ −16(1 − a²/30720)                                             (20.22)
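One can confirm (our addition, not in the text) that the two-term expansion (20.21)-(20.22) leaves a residual in (20.19) that is O(a³). The sketch below manipulates truncated Fourier series whose coefficients are polynomials in the amplitude a; the data structures are ours.

```python
# Check (illustrative) that the Stokes expansion (20.21)-(20.22) satisfies the
# FKdV cnoidal-wave equation (20.19) through O(a^2).  A cosine series is a
# dict {harmonic: coeffs}, where coeffs is a list in powers of a.
from fractions import Fraction as Fr

def pad(p, n): return p + [Fr(0)] * (n - len(p))
def padd(a, b):
    n = max(len(a), len(b)); a, b = pad(a, n), pad(b, n)
    return [x + y for x, y in zip(a, b)]
def pmul(a, b):
    out = [Fr(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

U = {2: [Fr(0), Fr(1)],                      # a cos(2x)
     4: [Fr(0), Fr(0), Fr(1, 960)]}          # (a^2/960) cos(4x)
c = [Fr(-16), Fr(0), Fr(16, 30720)]          # c = -16 (1 - a^2/30720)

R = {}                                       # residual of (20.19), a sine series
for k, u in U.items():                       # -U_xxxxx -> +k^5 u_k sin(kx)
    R[k] = padd(R.get(k, []), [k ** 5 * s for s in u])
for k, u in U.items():                       # -c U_x  -> +c k u_k sin(kx)
    R[k] = padd(R.get(k, []), pmul(c, [k * s for s in u]))
for j, uj in U.items():                      # U U_x = (U^2)_x / 2, via
    for k, uk in U.items():                  # cos(jx)cos(kx) product formula
        prod = pmul(uj, uk)
        for m in (j + k, abs(j - k)):
            if m:                            # each pair adds -(m/4) u_j u_k sin(mx)
                R[m] = padd(R.get(m, []), [Fr(-m, 4) * s for s in prod])

for k, coeffs in R.items():                  # orders a^0, a^1, a^2 all vanish
    for order in (0, 1, 2):
        if order < len(coeffs):
            assert coeffs[order] == 0, (k, order, coeffs)
```

At O(a) the sin(2x) coefficient fixes c(0) = −16, and at O(a²) the sin(4x) coefficient fixes the 1/960 in (20.21), which is exactly how a Stokes expansion is constructed.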

Exploiting the known symmetry of ∆(x), we assume a Fourier series and choose N = 4:

∆(x) ≈ a1 cos(x) + a2 cos(2x) + a3 cos(3x) + a4 cos(4x) (20.23)

Because the equation contains only odd derivatives of ∆, all the cosines are turned into
sines. Therefore, we demand that the residual should be orthogonal to the first four sine
functions, which converts (20.17) into the 4 x 4 matrix system

| −28800 − 960a + a²        0         −960a − a²          0        | | a1 |   | 0 |
|         0                 0              0            −1920a     | | a2 | = | 0 |   (20.24)
|    −960a − a²             0        124800 + a²          0        | | a3 |   | 0 |
|         0              −960a            0          460800 + a²   | | a4 |   | 0 |

In a standard eigenvalue problem, the matrix elements are all linear functions of the
eigenparameter a, but here the elements are quadratic in a. Nevertheless, taking the deter-
minant of (20.24) gives a polynomial in a which can be solved for the bifurcation point:

∆(a) = a²(3a³ + 860a² + 124800a + 3744000)                           (20.25)

The only nonzero root is abifurcation = −39.10; the exact value is −39.37, an error of only
0.7%.
The other real root, a = 0, is the limit point of the cnoidal wave, legitimate but irrelevant
and trivial.
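The determinant (20.25) and the root −39.10 can be reproduced directly from the matrix (20.24). The Python sketch below (our addition) expands the 4x4 determinant with integer polynomial arithmetic and bisects for the bifurcation amplitude.

```python
# Illustrative reproduction of the secular determinant (20.25) from the
# Galerkin matrix (20.24).  Entries are integer coefficient lists in powers of a.
import itertools

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            out[i + j] += x * y
    return out

def padd(p, q):
    out = [0] * max(len(p), len(q))
    for i, x in enumerate(p): out[i] += x
    for i, x in enumerate(q): out[i] += x
    return out

Z = [0]
M = [[[-28800, -960, 1], Z, [0, -960, -1], Z],       # rows of (20.24)
     [Z, Z, Z, [0, -1920]],
     [[0, -960, -1], Z, [124800, 0, 1], Z],
     [Z, [0, -960], Z, [460800, 0, 1]]]

D = [0]                                  # det as a polynomial in a
for perm in itertools.permutations(range(4)):
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if perm[i] > perm[j]: sign = -sign
    term = [sign]
    for i in range(4):
        term = pmul(term, M[i][perm[i]])
    D = padd(D, term)
while len(D) > 1 and D[-1] == 0:         # strip cancelled leading terms
    D.pop()

assert D[0] == 0 and D[1] == 0 and D[2] != 0      # a^2 factors out, as in (20.25)
cubic = D[2:]                            # proportional to 3a^3 + 860a^2 + ...
assert cubic[-1] % 3 == 0
r = cubic[-1] // 3                       # overall scale factor of the determinant
assert cubic == [3744000 * r, 124800 * r, 860 * r, 3 * r]

f = lambda a: sum(c * a ** k for k, c in enumerate(cubic))
lo, hi = -40.0, -39.0                    # bracket for the bifurcation root
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0: hi = mid
    else: lo = mid
assert abs(0.5 * (lo + hi) + 39.10) < 0.01
```

The double root a = 0 appears automatically as the a² prefactor, and the only real root of the cubic factor is the bifurcation amplitude quoted in the text.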
The checkerboard of zeros in the Galerkin matrix implies that even and odd terms in the
cosine series are uncoupled; we could have obtained the same answer by keeping only the
cos(x) and cos(3x) terms in (20.23) as done in Boyd (1986a). Overlooking a symmetry will

never give an incorrect answer, but it will give an unduly complicated and inaccurate one
for a given N . We would have obtained a similar checkerboard of zeros for Example Two
if we had not assumed a symmetric basis set to compute symmetric eigenmodes only.

Example Four: Reduction of a Partial Differential Equation to a Set of Ordinary
Differential Equations: An Unorthodox Derivation of the Korteweg-deVries Equation

Spectral methods can be applied to reduce partial differential equations directly to an
algebraic system for the spectral coef¬cients. Sometimes, as illustrated by this example, it is
more useful to apply the spectral method only in one coordinate, reducing the problem to
a system of ordinary differential equations in the remaining dimension. The ODE system
can often be solved analytically, sometimes even when it is nonlinear as here.
To illustrate the possibilities, we analyze the travelling-wave solutions of the so-called
"Ageostrophic Equatorial Wave" or "AEW" equation (Boyd, 1991b):
uxx + uyy − y²u − u/c − yu²/c = 0,    x, y ∈ [−∞, ∞]                 (20.26)

|u(x, y)| → 0 as |y| → ∞                                             (20.27)
and various boundary conditions in x. The phase speed c is the eigenvalue. In the limit of
infinitesimal amplitude, this nonlinear eigenvalue problem has solutions of the form

u(x, y) = cos(kx) ψn(y),    c = −1/(2n + 1 + k²)                     (20.28)
where ψn (y) is the n-th Hermite function. This suggests that the Hermite functions, which
individually satisfy the latitudinal boundary condition of decay, would be a good spectral
basis set.
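A pointwise spot check of (20.28) is easy (our addition, not from the text). With n = 1 the Hermite function ψ1(y) = 2y exp(−y²/2) satisfies ψ1'' = (y² − 3)ψ1, so u = cos(kx)ψ1(y) annihilates the linearized equation (nonlinear term dropped) exactly when c = −1/(3 + k²); the sample points below are arbitrary.

```python
# Spot check (illustrative) of the linear dispersion relation (20.28), n = 1,
# for the linearized AEW equation u_xx + u_yy - y^2 u - u/c = 0.
import math

n, k = 1, 0.5
c = -1.0 / (2 * n + 1 + k * k)                  # eq. (20.28)
psi = lambda y: 2 * y * math.exp(-0.5 * y * y)  # Hermite function, n = 1
u = lambda x, y: math.cos(k * x) * psi(y)

h = 1e-4
for (x, y) in [(0.3, 0.7), (1.1, -1.4), (2.0, 0.2)]:
    uxx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / (h * h)
    uyy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / (h * h)
    resid = uxx + uyy - y * y * u(x, y) - u(x, y) / c
    assert abs(resid) < 1e-5                    # linearized residual vanishes
```

The same cancellation, uyy − y²u = −(2n + 1)u, is what makes the Hermite functions diagonalize the linear part of (20.26).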
The Hermite functions have definite parity with respect to y = 0, the equator, and one
can show that the nonlinear term does not disrupt this symmetry. Therefore, to look for
general, nonlinear solutions which are antisymmetric with respect to y = 0, it is sufficient
to assume an expansion in the Hermite functions of odd n only:
u ≈ A1(x) 2y exp(−0.5y²) + A3(x) (8y³ − 12y) exp(−0.5y²)             (20.29)



Table 20.6: Maple Listing for Example Four: Reduction of a Partial Differential Equation

u:=exp(-y*y/2)*(a1(x)*2*y + a3(x)*(8*y*y*y-12*y));
R:=diff(u,x,x)+diff(u,y,y)-y*y*u-u/c-y*u*u/c;
# Next substitution is necessary because the presence of x-operators;
# will defeat the y-integration routine. This is a workaround.;
R:=subs(diff(a1(x),x,x)=A1xx,diff(a3(x),x,x)=A3xx,a1(x)=A1,a3(x)=A3,R);
integrand1:=R*2*y*exp(-y*y/2);
integrand3:=R*(8*y*y*y-12*y)*exp(-y*y/2);
# More workarounds. First, we apply Maple simplification &;
# expansion operators. Then, we use a special Maple command to do;
# the integrations term by term;
integrand1:=expand(simplify(integrand1)); integrand3:=expand(simplify(integrand3));
eq1:=0: eq3:=0:
for piece in integrand1 do eq1:=eq1+integrate(piece,y=0..infinity) od:
for piece in integrand3 do eq3:=eq3+integrate(piece,y=0..infinity) od:
eq1:=expand(eq1); eq3:=expand(eq3);
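As a check on the Galerkin integrals in Table 20.6 (our addition, not part of the listing), note that the two basis functions in (20.29) are orthogonal under the Gaussian weight that appears when the two exponentials are combined; the exact moments ∫ y^(2m) exp(−y²) dy = √π (2m−1)!!/2^m over the whole line suffice.

```python
# Orthogonality check (illustrative) for the Hermite basis in (20.29):
# H1 = 2y and H3 = 8y^3 - 12y, integrated against the weight exp(-y^2).
from fractions import Fraction as Fr

def moment(p):                        # integral of p(y) exp(-y^2) over the line,
    total = Fr(0)                     # returned in units of sqrt(pi)
    for i, c in enumerate(p):
        if i % 2 == 0 and c:
            dfac = 1
            for t in range(1, i, 2):  # (i-1)!! double factorial
                dfac *= t
            total += c * Fr(dfac, 2 ** (i // 2))
    return total

def pmul(a, b):
    out = [Fr(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

H1 = [Fr(0), Fr(2)]                   # 2y
H3 = [Fr(0), Fr(-12), Fr(0), Fr(8)]   # 8y^3 - 12y
assert moment(pmul(H1, H3)) == 0      # the basis functions are orthogonal
assert moment(pmul(H1, H1)) == 2      # |H1|^2 = 2 sqrt(pi)
assert moment(pmul(H3, H3)) == 48     # |H3|^2 = 48 sqrt(pi)
```

Orthogonality is why the linear terms of eq1 and eq3 decouple; only the quadratic term of (20.26) couples A1(x) and A3(x).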
