Figure 2.17: Left: absolute values of the coefficients of f(x; p) = 1/(1 − 2px + p²)^{1/2}.
Legendre coefficients: upper curve. Chebyshev coefficients: bottom curve. Right: the
solid curve is the ratio of the absolute value of the n-th Legendre coefficient to the n-th
Chebyshev coefficient. The dashed line is n^{1/2}.

with a so-called "weak" formulation. One crucial point is that the weak formulation is
greatly simplified by using a basis of Legendre polynomials instead of Chebyshev.
The good news is that the convergence theory for Legendre polynomials is virtually
identical with that of Chebyshev polynomials; indeed, it is identical if one ignores
algebraic factors of n, as done so often above. The bad news is that for a given arbitrary
function f(x), the maximum pointwise error of a Legendre series (or interpolant), truncated
after N terms, is worse than that of the corresponding Chebyshev series by a factor of the
square root of N.

Theorem 10 (LEGENDRE POLYNOMIAL RATE of CONVERGENCE)
• The domain of convergence for Legendre series is the interior of the largest ellipse, with foci
at ±1, which is free of singularities of the function being expanded. This domain is identical
with the domain of convergence of the Chebyshev series for the same function.
• The asymptotic rate of convergence µ of a Legendre series is identical with that of the corre-
sponding Chebyshev series, that is, it is the elliptical coordinate of the boundary of the domain
of convergence in the complex plane.
• In the asymptotic limit N → ∞, the maximum pointwise error of a Legendre series is larger
than that of the Chebyshev series of the same order, also truncated after N terms, by O(N^{1/2}).

PROOF: The first part is proved in Davis (1975). The convergence domain implicitly
specifies the asymptotic rate of convergence. Fox and Parker (1968, pg. 17) prove an
asymptotic relation that suggests Legendre series are worse than Chebyshev by a factor
of √(πN/2).
An example is instructive. The function

    1/(1 − 2px + p²)^{1/2} = Σ_{n=0}^{∞} p^n P_n(x)                    (2.100)


Figure 2.18: Graph of P100(x), the Legendre polynomial of degree one hundred. It is also a
schematic illustrating the form of all Legendre polynomials: oscillations over most of the
interval whose amplitude at a fixed x is O(1/n^{1/2}), plus narrow boundary layers near the
endpoints where the polynomial rises to ±1.

has reciprocal-of-square-root branch points in the complex plane which limit convergence.
The coefficients of the Chebyshev series for this function are not known in closed form, but
it is known that they are asymptotically of the form

    a_n(Chebyshev) ∼ constant · p^n/√n,    n → ∞                     (2.101)
Fig. 2.17 compares the Chebyshev and Legendre coefficients for this function. The left panel
shows that both sets asymptote to parallel straight lines, but the Chebyshev coefficients are
smaller. (Since Pn(1) = 1 and |Pn(x)| ≤ 1 for all n, just as for the Chebyshev polynomials,
these differences in coefficients translate directly into differences in errors between the two
basis sets.) The right panel shows that the ratio of the coefficients rapidly asymptotes to
√(πN/2).
Another way of stating the same thing is to inspect the Legendre polynomials them-
selves (Fig. 2.18). In contrast to the Chebyshev polynomials, which oscillate uniformly over
the interval x ∈ [−1, 1] (as is obvious from the relation Tn(cos(θ)) ≡ cos(nθ)), the Legendre
polynomials are nonuniform with small amplitude over most of the interval except in ex-
tremely narrow boundary layers where the polynomial rises to one or falls to minus one.
The even polynomials have local maxima at both x = 1 and x = 0; the ratio of these is

    ρ_n ≡ |P_n(0)/P_n(1)| ∼ √(2/(πn)),    n even,    n → ∞          (2.102)
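The asymptotic relation above is easy to verify directly. The sketch below uses SciPy's Legendre evaluator to compare |P_n(0)| with the estimate √(2/(πn)); since P_n(1) = 1 for every n, this ratio is also ρ_n itself.

```python
import numpy as np
from scipy.special import eval_legendre

# Compare |P_n(0)| with the asymptotic estimate sqrt(2/(pi*n)) of Eq. (2.102).
# P_n(1) = 1 for all n, so exact/estimate below is also rho_n / sqrt(2/(pi*n)).
for n in (10, 100, 1000):
    exact = abs(eval_legendre(n, 0.0))
    estimate = np.sqrt(2.0 / (np.pi * n))
    print(n, exact / estimate)               # approaches 1 as n grows
```

Even at n = 10 the estimate is already within a few percent, which is why the √N error factor of Theorem 10 is visible at quite modest truncations.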
There are two reasons why this nonuniformity and the poorer error of Legendre poly-
nomials are not particularly upsetting to practitioners of spectral elements. First, the de-
gree of the polynomials on each element is rather small, rarely greater than N = 8. Sec-
ond, Fig. 2.18 shows that the boundary layers where the Legendre polynomial is large are
very thin. For arbitrary functions, this is still an embarrassment. However, the solutions
to boundary value problems are constrained by the boundary conditions. For spectral ele-
ments, some constraint exists even on interior subdomain walls because the expansions in

each adjoining element must have common values on the common walls. Thus, spectral
element expansions in Legendre polynomials are constrained by matching and boundary
conditions to be close to the exact solution in precisely those regions where the errors in a
Legendre series would otherwise be large.
The Chebyshev-Legendre theory also applies to the larger family of Gegenbauer poly-
nomials. All members of the Gegenbauer family converge in the same ellipse-bounded
domain at the same asymptotic rate of convergence. However, for Gegenbauer polynomi-
als C_n^m(x), which are orthogonal with respect to the weight factor w(x) = (1 − x²)^{m−1/2},
the maximum pointwise errors are worse than Chebyshev by a factor of O(n^m) for suffi-
ciently large n. Legendre polynomials are the special case m = 1/2.
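The O(n^m) factor can be seen concretely by comparing the endpoint and interior amplitudes of Gegenbauer polynomials. The sketch below is an assumption-laden check rather than anything from the text: the index m = 2, the interior window [−0.9, 0.9], and the recurrence-based evaluator are all our illustrative choices.

```python
import numpy as np

def gegenbauer(n, alpha, x):
    """C_n^{(alpha)}(x) via the standard three-term recurrence:
    (k+1) C_{k+1} = 2(k+alpha) x C_k - (k+2*alpha-1) C_{k-1}."""
    c0 = np.ones_like(x)
    if n == 0:
        return c0
    c1 = 2.0 * alpha * x
    for k in range(1, n):
        c0, c1 = c1, (2.0 * (k + alpha) * x * c1
                      - (k + 2.0 * alpha - 1.0) * c0) / (k + 1.0)
    return c1

m = 2.0                            # illustrative index; m = 1/2 gives Legendre
x = np.linspace(-0.9, 0.9, 2001)   # interior, away from the boundary layers
ratios = {}
for n in (50, 100, 200, 400):
    interior = np.max(np.abs(gegenbauer(n, m, x)))
    endpoint = float(gegenbauer(n, m, np.array(1.0)))
    ratios[n] = endpoint / interior
# endpoint/interior grows roughly like n**m = n**2: doubling n ~quadruples it
```

For m = 0 (Chebyshev) the same experiment gives a ratio that stays O(1), which is exactly why Chebyshev errors carry no algebraic penalty.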

2.14 Quasi-Sinusoidal Rule of Thumb
It is easy to determine the cost of an accurate computation after the fact. (One can repeat
with different N to be sure the calculation is accurate.) In planning a computation, how-
ever, it is important to estimate in advance the approximate cost because this may determine
whether the computation is feasible.

Rule-of-Thumb 3 (TELLER'S LAW) A state-of-the-art calculation requires 100 hours of CPU
time on the state-of-the-art computer, independent of the decade.

The point of this maxim is that people push the current technology to the limit. This
will be as true in 2100 as it was in the time of Charles Babbage. However, there have always
been bureaucrats to put some limits on the enthusiasts (for which moderate users should
be grateful). Because of these constraints, one can get in enormous trouble if a proposed 20
hour calculation in fact requires 200.
There are several strategies for estimating costs a priori. One is library research: looking
up the costs and resolution of previously published calculations for similar problems.
A second strategy is the "Method of Model Functions". One chooses a simple analytic
function which has the same behavior as is expected of the unknown solution: boundary
layers, fronts, oscillations, whatever. One can then expand this as a series of basis functions
by performing a matrix multiplication or FFT. If the model has the same scales of variation
as the true solution, then one can obtain a good estimate of how many basis functions will
be needed. The rate at which the model's {a_n} decrease will be roughly the same as that of
the coefficients of the unknown solution even though the individual numerical values will
be different.
A third strategy is to use rules-of-thumb.
The "quasi-sinusoidal" rule-of-thumb is based on the observation that a wide variety
of phenomena have oscillations. If we can estimate or guess the approximate wavelength
(strictly speaking, we should say local wavelength, since the scale of oscillation may vary
from place to place), then we ought to be able to estimate how many Chebyshev polyno-
mials are needed to resolve it. Gottlieb and Orszag's method of estimation is quite simple:
solve the partial differential equation

    u_t + u_x = 0,    x ∈ [−1, 1]                                     (2.103)

with initial and boundary conditions such that the exact solution is a steadily translating
sine wave, and then compare different numerical methods.
Table 2.1 shows the results. The solution is

    u(x, t) = sin[Mπ(x − t − 1)]                                      (2.104)

Table 2.1: RMS (L2) errors in the solution of the one-dimensional wave equation. The exact
solution is u(x) = sin(Mπ[x − t − 1]). There are a total of M wavelengths on the inter-
val [−1, 1]. N is the total number of grid points or Chebyshev polynomials. To achieve a
1% error, the second order method requires N/M > 40, the fourth order method needs
N/M > 15, while the spectral method needs N/M > 3.5. The Chebyshev-tau algorithm
was used here, but another graph in Gottlieb & Orszag (1977) shows that the error is approx-
imately the same for Galerkin's method and the pseudospectral method. The striking dif-
ferences are between the spectral methods and the finite difference methods, not between
different variants within the family of spectral algorithms.

SECOND ORDER          FOURTH ORDER          CHEBYSHEV
N    M   ERROR        N    M   ERROR        N    M   ERROR
40   2   0.1          20   2   0.04         16   4   0.08
80   2   0.03         30   2   0.008        20   4   0.001
160  2   0.008        40   2   0.002        28   8   0.2
40   4   1.           40   4   0.07         32   8   0.008
80   4   0.2          80   4   0.005        42   12  0.2
160  4   0.06         160  4   0.0003       46   12  0.02

so that M = total number of wavelengths on the interval [−1, 1]. For all the numerical
methods, N is the total number of degrees of freedom.
As might be expected, the accuracy does not depend on either N or M alone, but only
on their ratio

    N/M ≡ (number of grid points)/(number of wavelengths)             (2.105)

We find
Rule-of-Thumb 4 (QUASI-SINUSOIDAL RESOLUTION REQUIREMENT)
To achieve 5% accuracy for a wave-like solution, one needs roughly

    SECOND ORDER FINITE DIFFERENCE:  20 grid pts./wavelength  [40 pts/wave for 1% error]
    FOURTH ORDER FINITE DIFFERENCE:  10 grid pts./wavelength  [15 pts/wave for 1% error]
    CHEBYSHEV PSEUDOSPECTRAL:        3.5 polys./wavelength    [3.5 polys./wave for 1% error]

To estimate the number of wavelengths M, let ∆x be the smallest scale that one must resolve on
x ∈ [−1, 1]. Then

    M ≈ π/∆x


Figure 2.19: The errors in the approximation of f(x) ≡ 2^{−1/2}(cos(Mπx) + sin(Mπx)),
graphed versus N/M, that is, the number of Chebyshev polynomials retained in the trun-
cation divided by M, the number of wavelengths of f(x) on x ∈ [−1, 1]. Curves are shown
for M = 1, 5, 20, and 100. The dashed vertical line is the asymptotic limit M → ∞, which
is at N/M = π.

This is but a rule-of-thumb (an unkind wit might define "rule-of-thumb" as a syn-
onym for "educated guess"), but it is representative of the extraordinary efficiency of
pseudospectral methods versus finite difference methods for large N. The rule-of-thumb
lists the needed resolution for 1% error to emphasize that the advantage of pseudospectral
over finite difference methods increases very rapidly as the error decreases, even for rather
moderate accuracy.
Fig. 2.19 shows the errors in the Chebyshev expansion of sin(Mπx). There is almost no
accuracy at all until N/M > π, and then the error just falls off the table.
One must be a little careful in applying the rule when M and N are small. The reason is
that when we increase N by 1 or 2, we typically decrease the error (for a sine function) by
an order-of-magnitude whether N is 10 or 10,000. This increase of N (for the sake of more
accuracy) has a negligible effect on the ratio N/M when M is huge, but N/M must be
significantly greater than 3 to obtain high accuracy when M is small. A more conservative
rule-of-thumb is to use

    N = 6 + 4(M − 1)                                                  (2.106)

coefficients, allocating 6 polynomials to resolve the first wavelength and 4 for each addi-
tional wavelength.
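The conservative rule translates into a one-line helper; the function name below is ours, not the text's.

```python
def chebyshev_coefficients_needed(M):
    """Conservative resolution estimate of Eq. (2.106): 6 polynomials for
    the first wavelength, plus 4 for each additional wavelength."""
    return 6 + 4 * (M - 1)

print(chebyshev_coefficients_needed(1))    # 6
print(chebyshev_coefficients_needed(100))  # 402
```

For large M this approaches 4 polynomials per wavelength, slightly above the asymptotic 3.5 of the rule-of-thumb, with the extra margin concentrated in the first wavelength.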

2.15 Witch of Agnesi Rule-of-Thumb
Another caution is that sin(Mπx) is an entire function, free of singularities, whereas most
real-life solutions have poles or branch-points which determine the asymptotic rate of con-
vergence. For this reason, we will derive a second rule-of-thumb, which we call poeti-