
    ( ...    ...  ) (a1)   (  ...  )
    (  0    -9/2  ) (a2) = ( -9/2  )

which gives a = (1, 1)^T [the superscript "T" denotes the "transpose" of the row vector into
a column vector] and the solution


u_2(x) = cos(x) + cos(2x)                                                     (3.45)

For this carefully contrived example, the approximation is exact.

EXAMPLE TWO:


u_xx + {cos(x) + cos^2(x)} u = exp[-1 + cos(x)]                               (3.46)

subject to periodic boundary conditions. We assert without proof that a Fourier cosine
series is sufficient to solve (3.46). [A rigorous justification can be given through the
discussion of "parity" in Chapter 8.] The R. H. S. of (3.46) is also the exact solution,
which is the special case Δ = 1 of the expansion

exp{Δ[-1 + cos(x)]} = exp(-Δ) { I_0(Δ) + 2 Σ_{n=1}^∞ I_n(Δ) cos(nx) }         (3.47)
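This expansion is easy to confirm numerically. Below is a minimal Python sketch (an
illustration, assuming NumPy and SciPy are available); scipy.special.iv(n, z) evaluates
the modified Bessel function I_n(z), and the infinite series is truncated at n = 20,
which is ample for Δ = 1.

    # Numerical check of the expansion (3.47) at Delta = 1
    import numpy as np
    from scipy.special import iv   # modified Bessel function I_n

    delta = 1.0
    x = np.linspace(-np.pi, np.pi, 201)
    lhs = np.exp(delta * (-1.0 + np.cos(x)))
    rhs = np.exp(-delta) * (iv(0, delta)
          + 2.0 * sum(iv(n, delta) * np.cos(n * x) for n in range(1, 21)))
    print(np.max(np.abs(lhs - rhs)))   # ~1e-16, i.e., machine precision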

Let

u_3(x) = a_0 + a_1 cos(x) + a_2 cos(2x) + a_3 cos(3x)                         (3.48)


H ≡ ∂_xx + cos(x) + cos^2(x);      f(x) = exp[-1 + cos(x)]                    (3.49)

The Galerkin method gives the following 4 × 4 matrix problem:

    | 1/2    1/2  :   1/4      0    |  |a_0|     |f_0|
    |  1    -1/4  :   1/2     1/4   |  |a_1|     |f_1|
    |.............:.................|  |---|  =  |---|                        (3.50)
    | 1/2    1/2  :  -7/2     1/2   |  |a_2|     |f_2|
    |  0     1/4  :   1/2   -17/2   |  |a_3|     |f_3|

As noted earlier, the acid test of the accuracy of any numerical calculation is to repeat it
with different truncations. The two-term Galerkin approximation to (3.46) can be obtained
by simply truncating (3.50) to the upper halves of the column vectors a and f (the parts
above the dashes) and truncating the square matrix H to its upper left 2 × 2 block, which
is the block partitioned from the rest of the matrix by the dotted lines.
The matrix elements in (3.50) are given symbolically by

    H_jk ≡ (cos(jx), {-k^2 + cos(x) + cos^2(x)} cos(kx))                      (3.51)

    f_j ≡ (cos(jx), exp[-1 + cos(x)]) =  exp(-1) I_0(1)     [j = 0]
                                         2 exp(-1) I_j(1)   [j ≠ 0]





Figure 3.1: Example Two: a comparison between the exact solution (solid) and the two-term
approximation (dashed with circles).






Figure 3.2: Example Two: the errors for the three-term approximation (dashed with circles)
and four-term Galerkin solution (solid with x's).


Table 3.1: Comparison of exact and approximate spectral coefficients for EXAMPLE TWO
for various truncations, and also the errors in these coefficients.

     n    Exact     N=1 (two terms)      N=2 (three terms)    N=3 (four terms)
                    a_n       Error      a_n       Error      a_n       Error
     0    0.4658    0.5190   -0.0532     0.4702   -0.0044     0.4658    0.0
     1    0.4158    0.4126    0.0032     0.4126    0.0032     0.4159   -0.0001
     2    0.0998      --        --       0.0976    0.0022     0.0998    0.0
     3    0.0163      --        --         --        --       0.0162    0.0001
     4    0.0020      --        --         --        --         --        --



Table 3.2: Comparison of upper bounds on the discretization error E_D and truncation error
E_T with the maximum pointwise error [error in the L∞ norm].

                   N=1       N=2       N=3
     E_D (bound)   0.0564    0.0098    0.0002
     E_T (bound)   0.1181    0.0183    0.0020
     L∞ error      0.153     0.021     0.0023

     upper bound on E_D = Σ_{n=0}^{N} |a_n^{exact} - a_n^{approx}|
     upper bound on E_T = Σ_{n=N+1}^{∞} |a_n|
     L∞ error = max_{x in [-π, π]} |u(x) - u_N(x)|



where j, k = 0, . . . , N. Numerical evaluation of these integrals then gives (3.50).
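As an illustrative sketch of that numerical evaluation (assuming NumPy; the quadrature
choice is ours, not the text's), the inner products below are computed with the trapezoidal
rule on a uniform periodic grid, which is spectrally accurate for periodic integrands, and
each row is normalized by (cos(jx), cos(jx)):

    # Assemble and solve the Galerkin system (3.50)-(3.51) numerically
    import numpy as np

    N = 3
    x = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
    dx = x[1] - x[0]

    def inner(u, v):                  # (u, v) over one period, trapezoidal rule
        return np.sum(u * v) * dx

    H = np.zeros((N + 1, N + 1))
    f = np.zeros(N + 1)
    for j in range(N + 1):
        phi_j = np.cos(j * x)
        norm = inner(phi_j, phi_j)    # 2*pi for j = 0, pi for j >= 1
        f[j] = inner(phi_j, np.exp(-1.0 + np.cos(x))) / norm
        for k in range(N + 1):
            # H cos(kx) = {-k^2 + cos(x) + cos^2(x)} cos(kx)
            H[j, k] = inner(phi_j, (-k**2 + np.cos(x) + np.cos(x)**2)
                            * np.cos(k * x)) / norm

    print(np.linalg.solve(H, f))      # [0.4658, 0.4159, 0.0998, 0.0162]

Solving instead the truncated system with H[:2, :2] and f[:2] reproduces the two-term
(N = 1) column of Table 3.1.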
The coefficients as calculated with two, three, and four terms are listed in Table 3.1 along
with the differences between these coefficients and those of the exact solution. Table 3.2
shows upper bounds on the discretization error and the truncation error that were
calculated by taking the absolute value of each error (or neglected term) and summing.
Fig. 3.1 compares the exact solution with the two-term approximation. The errors in the
higher approximations are so small that it is difficult to see the difference between the
curves, so Fig. 3.2 graphs only the absolute errors for u_2(x) and u_3(x). The tables and
figures support many previous assertions.
First, it was claimed that the discretization error E_D and the truncation error E_T are of
the same order-of-magnitude, so that it is legitimate to use the latter as an estimate of the
total error. Table 3.2 shows that this is true; the total error (sum of E_D plus E_T) is only
half again as large as E_T for all three truncations.
Second, Table 3.2 shows that the maximum pointwise error is only a little smaller than
the sum of the absolute values of the discretization and truncation errors. Put another way,
the Fourier Truncation Theorems give fairly tight bounds on the pointwise error rather
than wild overestimates.
Third, we asserted that the last retained coefficient a_N is a loose but useful estimate of
the total error. Tables 3.1 and 3.2 show that this is true, although the estimate is rather
conservative. For N = 3, for instance, the last calculated coefficient is 0.0162, but the
maximum error in u_3(x) is about eight times smaller.
Fourth, it was claimed that spectral methods are much more accurate than finite differ-
ence methods. Fig. 3.1 shows that even the N = 1 (two-term) approximation is qualita-
tively similar to u(x); Fig. 3.2 shows that using just four terms gives a maximum error of
only 1 part in 500 relative to the maximum of u(x). It would be quite impossible to obtain

anything like this accuracy with finite difference methods. The Quasi-Sinusoidal Rule-of-
Thumb suggests that one would need about 20 grid points with a second-order method to
achieve the same accuracy as the N = 3 spectral approximation.
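That estimate can be tested directly. The following minimal sketch (an illustration,
assuming NumPy) solves (3.46) with centered second-order differences on an M-point
periodic grid; since the right-hand side of (3.46) is also the exact solution, the maximum
pointwise error is available for free, and it shrinks only as O(1/M^2):

    # Second-order periodic finite differences applied to (3.46)
    import numpy as np

    def fd_error(M):
        x = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
        h = x[1] - x[0]
        # periodic second-difference matrix approximating d^2/dx^2
        D2 = (np.diag(np.full(M, -2.0)) + np.diag(np.ones(M - 1), 1)
              + np.diag(np.ones(M - 1), -1))
        D2[0, -1] = D2[-1, 0] = 1.0
        A = D2 / h**2 + np.diag(np.cos(x) + np.cos(x)**2)
        u = np.linalg.solve(A, np.exp(-1.0 + np.cos(x)))
        return np.max(np.abs(u - np.exp(-1.0 + np.cos(x))))   # exact u = RHS

    for M in (10, 20, 40, 80):
        print(M, fd_error(M))   # error shrinks roughly fourfold per doubling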
Fifth, Fig. 3.2 shows that the error is highly uniform over the interval. The error oscillates
with several changes of sign, and different peaks of error differ in size by no more than a
factor of two for N = 2 and are almost identical in size for N = 3. These are very general
properties of spectral approximations: Basis function methods do not favor one part of the
domain over another.

EXAMPLE THREE:
The eigenvalue problem

(1 - x^2) u_xx - λ x^2 u = 0                                                  (3.52)

where λ is the eigenvalue and where the homogeneous boundary conditions are that u(x)
should be analytic at x = ±1 in spite of the singularities of the differential equation at
those points. This problem is a special case of Laplace's Tidal Equation for zero zonal
wavenumber (Leovy, 1964, and Longuet-Higgins, 1968).
We will use the Chebyshev approximation

u_2 = a_0 T_0(x) + a_2 T_2(x) + a_4 T_4(x)                                    (3.53)

(The reason for using only the symmetric polynomials is explained in Chapter 8: all the
eigenfunctions have either even or odd parity.) We do not need to impose explicit condi-
tions on the approximation because every term in (3.53) is analytic at the endpoints. This
situation, holomorphic eigensolutions at endpoints where the differential equation is sin-
gular, is the only case where explicit boundary conditions are unnecessary in a problem
which is (a) nonperiodic and (b) defined on a finite interval. (The boundary conditions are
discussed further below.)
The algebraic equivalent of the differential equation is

    ( (T_0, HT_0)   (T_0, HT_2)   (T_0, HT_4) ) (a_0)   (0)
    ( (T_2, HT_0)   (T_2, HT_2)   (T_2, HT_4) ) (a_2) = (0)
    ( (T_4, HT_0)   (T_4, HT_2)   (T_4, HT_4) ) (a_4)   (0)

where the operator H is

H ≡ (1 - x^2)(d^2/dx^2) - λ x^2                                               (3.54)

The integrals can be evaluated by making the change of variable x = cos(θ) and then ap-
plying a table of trigonometric integrals, or by numerical integration. The result, after
dividing each row by its normalization integral (T_j, T_j), is


    ( -λ/2       2 - λ/4       4         ) (a_0)   (0)
    ( -λ/2      -2 - λ/2       8 - λ/4   ) (a_2) = (0)
    (  0         -λ/4        -12 - λ/2   ) (a_4)   (0)
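As a sketch of the numerical-integration route (an illustration, assuming NumPy), note
that each element splits as A_jk - λ B_jk, with A_jk = (T_j, (1 - x^2) T_k'') and
B_jk = (T_j, x^2 T_k), and that Gauss-Chebyshev quadrature handles the weight
1/sqrt(1 - x^2) built into the inner product:

    # Matrix elements of (3.54) by Gauss-Chebyshev quadrature
    import numpy as np
    from numpy.polynomial import chebyshev as C

    degrees = (0, 2, 4)               # the symmetric basis T_0, T_2, T_4
    M = 64                            # quadrature nodes (ample here)
    x = np.cos((2.0 * np.arange(1, M + 1) - 1.0) * np.pi / (2.0 * M))
    w = np.pi / M                     # Gauss-Chebyshev weights are all pi/M

    A = np.zeros((3, 3))
    B = np.zeros((3, 3))
    for j, dj in enumerate(degrees):
        Tj = C.chebval(x, [0] * dj + [1])
        for k, dk in enumerate(degrees):
            ck = [0] * dk + [1]
            A[j, k] = w * np.sum(Tj * (1.0 - x**2) * C.chebval(x, C.chebder(ck, 2)))
            B[j, k] = w * np.sum(Tj * x**2 * C.chebval(x, ck))

    norms = np.array([np.pi, np.pi / 2.0, np.pi / 2.0])   # (T_j, T_j)
    print(A / norms[:, None])         # lambda-independent parts of the matrix
    print(-B / norms[:, None])        # coefficients of lambda

The two printouts reproduce the λ-independent entries and the coefficients of λ in the
matrix above.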

Since this algebraic system of equations is homogeneous, it has a non-trivial solution if
and only if the determinant of the matrix is 0, that is, if

λ (-λ^2 - 96λ - 768) = 0                                                      (3.55)

which has the three roots

λ = 0, -8.808, -87.192                                                        (3.56)
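The same numbers can be obtained without expanding the determinant: the homogeneous
system has the form (A - λB) a = 0, a generalized eigenvalue problem. A minimal sketch
(an illustration, assuming SciPy), with A and B read off from the matrix above:

    # The roots of (3.55) as a generalized eigenvalue problem
    import numpy as np
    from scipy.linalg import eig

    A = np.array([[0.0,  2.0,   4.0],      # lambda-independent part
                  [0.0, -2.0,   8.0],
                  [0.0,  0.0, -12.0]])
    B = np.array([[0.5, 0.25, 0.0],        # matrix multiplying lambda
                  [0.5, 0.5,  0.25],
                  [0.0, 0.25, 0.5]])

    lam = eig(A, B, right=False)           # solves A a = lambda B a
    print(np.sort(lam.real))               # [-87.192, -8.808, 0.]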

One difficulty which is peculiar to eigenvalue problems is to determine which eigen-
values are accurate and which are hopelessly inaccurate for a given truncation N. A full
discussion is postponed to Chapter 7, but we anticipate one rule-of-thumb: for a "nice"
eigenvalue problem, typically about N/2 eigenvalues are accurate.
If N is large, the error in the lowest mode may be only 1 part in 10^10, but will increase
rapidly with the mode number until there is no accuracy at all for (roughly!) the largest
N/2 modes. Repeating the calculation with different N is a good idea for any class of
problem to check the answer; linear eigenvalue problems are the only class where one has
to rerun the code simply to know what the answer is. (More precisely, to know how many
modes one has actually succeeded in computing.)
For this problem, the lowest mode is exact but useless: λ = 0 corresponds to u(x) =
constant, which is a mathematically legitimate solution with no physical significance. The
first non-trivial mode has the exact eigenvalue λ_1 = -8.127, so that the three-term spectral
solution has an error of only 7.7%. The second root, -87.192, is an absolutely atrocious
approximation to the second mode, λ_3 ≈ -12. However, we can calculate λ_3 as accurately
as desired by using a sufficiently large truncation.
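A sketch of that truncation study (an illustration, assuming NumPy and SciPy): reassemble
the same Galerkin matrices with the first N symmetric Chebyshev polynomials and watch
the low eigenvalues settle while roughly the largest N/2 remain garbage.

    # Tidal-equation eigenvalues for increasing truncation N
    import numpy as np
    from numpy.polynomial import chebyshev as C
    from scipy.linalg import eig

    def tidal_eigs(N, M=256):
        # Galerkin basis T_0, T_2, ..., T_{2(N-1)}; Gauss-Chebyshev quadrature
        x = np.cos((2.0 * np.arange(1, M + 1) - 1.0) * np.pi / (2.0 * M))
        w = np.pi / M
        A = np.zeros((N, N))
        B = np.zeros((N, N))
        for j in range(N):
            Tj = C.chebval(x, [0] * (2 * j) + [1])
            for k in range(N):
                ck = [0] * (2 * k) + [1]
                A[j, k] = w * np.sum(Tj * (1.0 - x**2) * C.chebval(x, C.chebder(ck, 2)))
                B[j, k] = w * np.sum(Tj * x**2 * C.chebval(x, ck))
        lam = eig(A, B, right=False).real   # small imaginary parts discarded
        return np.sort(lam)[::-1]           # least negative eigenvalues first

    for N in (3, 6, 12):
        print(N, tidal_eigs(N)[:4])   # lambda_1 approaches -8.127 as N grows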
Besides the eigenvalue rule-of-thumb, this example is useful in illustrating two addi-
tional general principles.
First, this is our first example of a "behavioral" (as opposed to "numerical") boundary
condition for a non-periodic problem. "Numerical" boundary conditions are those that we
must explicitly impose on the numerical approximation to force it or its derivatives to equal
certain numbers at the boundaries. The Dirichlet conditions u(0) = 1, u(1) = -1/2 are
"numerical" conditions. For Laplace's Tidal Equation, however, the differential equation
itself forces the solution to behave properly at the boundaries without the need for any
intervention on our part. The only requirement is that the numerical solution must be
