

for all f(z), all N, and all z ∈ [−1, 1].

PROOF: The Chebyshev polynomials are bounded by one, that is, | Tn(z) | ≤ 1 for all z ∈
[−1, 1] and for all n. This implies that the n-th term is bounded by | an |. Subtracting the
truncated series from the infinite series, bounding each term in the difference, and sum-
ming the bounds gives the theorem. Q. E. D.
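The bound is easy to check numerically. The sketch below uses an illustrative test function of our own choosing, f(x) = exp(x), and approximates the exact series coefficients by Chebyshev interpolation coefficients:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative test function (our choice, not from the text): f(x) = exp(x).
f = np.exp
M = 30                                               # high-degree reference expansion
x = np.cos(np.pi*(np.arange(M + 1) + 0.5)/(M + 1))   # Chebyshev roots
a = C.chebfit(x, f(x), M)        # interpolation coefficients ~ series coefficients

N = 8                            # truncate the series after T_8
xs = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(f(xs) - C.chebval(xs, a[:N + 1])))   # max | f - f_N |
bound = np.sum(np.abs(a[N + 1:]))                        # sum of neglected |a_n|
print(err, bound)                # err stays below the bound
```

Because exp(x) is entire, both the error and the bound are tiny even for N = 8.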
The mapping alters the shape of the convergence region as described by the following.
1 If f(z) does have symmetry, the basis can be further reduced. If f(z) = f(−z), then only the even cosines,
1, cos(2θ), cos(4θ), . . . , are needed. Similarly, only the odd cosines (or odd degree Chebyshev polynomials in z)
are needed if f(z) = −f(−z).

Theorem 7 (CONVERGENCE ELLIPSE: CHEBYSHEV SERIES) Let z = x + iy. Introduc-
ing the elliptical coordinates (µ, η) via the equations

x = cosh(µ) cos(η),     µ ∈ [0, ∞],
y = − sinh(µ) sin(η),   η ∈ [0, 2π],     (2.79)

the surfaces of constant µ define ellipses in the complex z-plane with foci at z = −1 and z = 1. (See
Fig. 2.16.) For large µ, elliptical coordinates tend to polar coordinates with µ proportional to log(r)
and η → −θ.
Suppose f(z) has poles, branch points or other singularities at the points (in elliptical coordi-
nates) (µj, ηj), j = 1, 2, . . . in the complex z-plane. Define

µ0 ≡ min_j | µj |     (2.80)

Then the Chebyshev series

f(z) = Σ_{n=0}^{∞} an Tn(z)     (2.81)

converges within the region bounded by the ellipse µ = µ0 ,

µ < µ0 , for all ·, [convergence] (2.82)

and diverges for all (x, y) outside this ellipse.
The coefficients are related to µ0, the quasi-radial coordinate of the ellipse, by the equation

lim sup_{n→∞} log(| an / an+1 |) = µ0,     [if the limit exists]     (2.83)

Note: the “ellipse” µ = 0 is the real interval [-1, 1]. If the coefficients an decrease algebraically
with n or if the convergence is exponential but subgeometric, then the Chebyshev series converges
only for µ = 0, i.e., only on the real interval z ∈ [−1, 1].

PROOF: Let the complex variable θ = η + iµ. Now a trigonometric identity shows that
for any (µ, η),

cos(η + iµ) = cosh(µ) cos(η) − i sinh(µ) sin(η)     (2.84)

(Abramowitz and Stegun, 1965). Thus, the transformation z = cos(θ) relates (x, y) to (µ, η)
just as defined in (2.79). The straight lines which are the boundaries of the strip of Fourier
convergence in the complex θ-plane must be mapped into curves of constant µ in the z-plane.
To show that µ = µ0 is indeed the equation of an ellipse, substitute Eq.(2.79) into

x²/a² + y²/b² = 1     [central equation of ellipse]     (2.85)

where the axes of the ellipse are a = cosh(µ0) and b = sinh(µ0). Cancelling the hyperbolic
functions reduces (2.85) to the identity cos²(η) + sin²(η) = 1. Q. E. D.
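The mapping in the proof is easy to verify numerically. A minimal sketch (the value µ0 = 0.7 is an arbitrary illustrative choice) checks that z = cos(η + iµ0) reproduces (2.79) and traces the ellipse with semi-axes cosh(µ0) and sinh(µ0):

```python
import numpy as np

mu0 = 0.7                        # arbitrary illustrative value of the quasi-radius
eta = np.linspace(0.0, 2*np.pi, 400)
z = np.cos(eta + 1j*mu0)         # the transformation z = cos(theta), theta = eta + i*mu
x, y = z.real, z.imag

# Components match Eq. (2.79): x = cosh(mu) cos(eta), y = -sinh(mu) sin(eta)
assert np.allclose(x, np.cosh(mu0)*np.cos(eta))
assert np.allclose(y, -np.sinh(mu0)*np.sin(eta))

# Points lie on the ellipse x^2/a^2 + y^2/b^2 = 1 with a = cosh(mu0), b = sinh(mu0)
lhs = x**2/np.cosh(mu0)**2 + y**2/np.sinh(mu0)**2
print(np.max(np.abs(lhs - 1.0)))    # ~ machine epsilon
```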
It is unusual to worry about imaginary values of z in applications, but since complex
singularities will limit the convergence of the series, we need to understand that a singu-
larity at one point on an ellipse will produce the same (asymptotic!) rate of convergence
as a pole or branch point on any other part of the same ellipse. This has two important
consequences.
First, Fig. 2.16 shows that when µ is small, the ellipses of constant µ are long and narrow,
pass very close to z = ±1 and then cross the imaginary axis at much larger distances.


Figure 2.16: Elliptical coordinates (µ, η) in the complex z-plane; z = x + iy via
x = cosh(µ) cos(η) and y = − sinh(µ) sin(η), which implies Tn(x + iy) = cos(n[η + iµ]) =
cosh(nµ) cos(nη) − i sinh(nµ) sin(nη). The surfaces of constant µ are ellipses; the
contours of constant “angle” η are hyperbolas. If a function f(z) has a pole or other singu-
larity anywhere on the ellipse µ = µ0 and no singularities on ellipses of smaller µ, then
the Chebyshev series of f(z) diverges everywhere outside the ellipse µ = µ0 and converges
everywhere inside this ellipse.

This implies that singularities very close to z = ±1 have no worse effect on convergence
than singularities on the imaginary axis that are much farther from the interval [-1, 1].
Chebyshev polynomials have a kind of inherent coordinate-stretching that makes them
much better at resolving singularities or boundary layers near the endpoints of [-1, 1] than
near the middle.
Second, Fig. 2.16 shows that singularities for real z will not destroy the property of
exponential convergence for the Chebyshev series as long as they lie outside the interval
[-1, 1]. Conversely, convergence on the “canonical interval”, even geometric convergence,
does not imply convergence for all real z; the series converges no farther than | z0 |, where z0
is the location of the nearest real singularity, and perhaps not even that far if there are also
poles or branch points of f(z) at complex z.
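This endpoint/midpoint asymmetry can be seen numerically. In the hedged sketch below (the two test functions are our own illustrative choices), a pole only 0.05 beyond z = 1 and poles on the imaginary axis about 0.32 away from the interval lie on the same ellipse µ = µ0, and both series decay at essentially the same geometric rate:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_coeffs(f, M=60):
    """Chebyshev interpolation coefficients (close to the exact series coefficients)."""
    x = np.cos(np.pi*(np.arange(M + 1) + 0.5)/(M + 1))  # Chebyshev roots
    return C.chebfit(x, f(x), M)

# Ellipse through the real pole z = 1.05, just 0.05 beyond the endpoint:
mu0 = np.log(1.05 + np.sqrt(1.05**2 - 1))
y0 = np.sinh(mu0)          # imaginary-axis crossing of the SAME ellipse (~0.32)

a1 = cheb_coeffs(lambda z: 1.0/(z - 1.05))       # pole near the endpoint
a2 = cheb_coeffs(lambda z: 1.0/(z**2 + y0**2))   # poles at +/- i*y0, much farther away

# Geometric decay rate: slope of log|a_n| vs n (even n only for the even function).
n1 = np.arange(10, 50)
n2 = np.arange(10, 50, 2)
rate1 = -np.polyfit(n1, np.log(np.abs(a1[n1])), 1)[0]
rate2 = -np.polyfit(n2, np.log(np.abs(a2[n2])), 1)[0]
print(mu0, rate1, rate2)   # all three nearly equal: same ellipse, same rate
```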
If we know (or can estimate) the location of the convergence-limiting singularity, then
the following is very useful.

Theorem 8 (CHEBYSHEV RATE OF CONVERGENCE) The asymptotic rate of convergence
of a Chebyshev series for z ∈ [−1, 1] is equal to µ, the quasi-radial coordinate of the ellipse of
convergence. This is related to the location of the convergence-limiting singularity at (x0, y0) in the
complex plane via

µ = Im{arccos[x0 + iy0]}
  = log | z0 ± √(z0² − 1) |
  = log [ α + √(α² − 1) ]     (2.88)

where the sign in the second line is chosen to make the argument of the logarithm > 1 so that µ > 0
and where

α ≡ ½ √((x0 + 1)² + y0²) + ½ √((x0 − 1)² + y0²)     (2.89)
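Equations (2.88)–(2.89) are straightforward to implement. A sketch (the function name and the sample singularity location are our own illustrative choices) that cross-checks the α formula against the arccos form:

```python
import numpy as np

def cheb_mu(x0, y0):
    """Quasi-radial coordinate mu of the ellipse through z0 = x0 + i*y0 (Eqs. 2.88-2.89)."""
    # alpha = half the sum of the distances from z0 to the foci -1 and +1
    alpha = 0.5*np.hypot(x0 + 1.0, y0) + 0.5*np.hypot(x0 - 1.0, y0)
    return np.log(alpha + np.sqrt(alpha**2 - 1.0))

# Cross-check against mu = Im{arccos(z0)} for an illustrative singularity location.
z0 = 0.3 + 0.4j
mu_a = cheb_mu(z0.real, z0.imag)
mu_b = abs(np.arccos(z0).imag)    # numpy's complex-valued arccos
print(mu_a, mu_b)                 # the two forms agree
```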

2.12 An Upper Bound on the Truncation Error: Last Coefficient Rule-of-Thumb

Let f(x) have the Fourier series

f(x) = a0 + Σ_{n=1}^{∞} an cos(nx) + Σ_{n=1}^{∞} bn sin(nx)     (2.90)

and let fN(x) be the approximation to f(x) which is obtained by truncating each of the series in
(2.90) after N terms:

fN(x) ≡ a0 + Σ_{n=1}^{N} an cos(nx) + Σ_{n=1}^{N} bn sin(nx)     (2.91)

Then the truncation error defined by

ET(N) ≡ | f(x) − fN(x) |,     [“truncation error”]     (2.92)

is bounded from above by the sum of the absolute values of all the neglected coefficients:

ET(N) ≤ Σ_{n=N+1}^{∞} ( | an | + | bn | )     (2.93)

PROOF: Similar to the analogous Chebyshev result, Theorem 6.
Other basis functions satisfy similar theorems with slightly different numerical factors;
for Hermite functions, for example, we multiply the R. H. S. of (2.93) by 0.785.
This theorem is very powerful because it tells us that the coefficients of the spectral
series are also (roughly) the errors, or at least an upper bound on the errors. It is also seem-
ingly quite useless because we normally do not know what all the neglected coefficients
are, but let us look at a representative case.
EXAMPLE: For the “Symmetric, Imbricated-Lorentzian” function of the previous sec-
tion, the expansion is

f(x) = 1 + 2 Σ_{n=1}^{∞} p^n cos(nx)     (2.94)

The Truncation Error Bound Theorem then implies

| f(x) − fN(x) | ≤ 2 Σ_{n=N+1}^{∞} p^n = 2 p^{N+1}/(1 − p)     (2.95)

since the geometric series can be explicitly summed. Because the terms of (2.94) are all
positive at x = 0, Eq. (2.95) is a tight upper bound.
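A numerical check of (2.95), using the standard closed form of the series, f(x) = (1 − p²)/(1 − 2p cos x + p²); the values of p and N below are illustrative choices:

```python
import numpy as np

p, N = 0.5, 10
x = np.linspace(0.0, 2*np.pi, 2001)          # grid includes x = 0, where the bound is tight
f = (1 - p**2) / (1 - 2*p*np.cos(x) + p**2)  # closed form of the imbricated Lorentzian
n = np.arange(1, N + 1)
fN = 1 + 2*np.sum(p**n[:, None] * np.cos(n[:, None]*x), axis=0)  # truncated series

err = np.max(np.abs(f - fN))      # max truncation error, attained at x = 0
bound = 2*p**(N + 1)/(1 - p)      # the bound (2.95)
print(err, bound)                 # equal, since every term is positive at x = 0
```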

When computing the hitherto-unknown solution of a differential equation, we do not
know the exact form of the neglected coefficients. However, we have already seen that
almost all Fourier and Chebyshev series for smooth, infinitely differentiable functions con-
verge asymptotically like geometric series. In other words, it is the rule rather than the
exception that

an ∼ [ ] p^n,     n ≫ 1     (2.96)

where [ ] is a slowly varying (algebraic) function of n or a constant. This implies that while
we cannot get a numerical bound on the truncation error for unknown functions, we can
predict that they will behave qualitatively like the geometrically-converging Fourier series
of the example above.
The pay-off is the following assertion:

Rule-of-Thumb 2 (LAST COEFFICIENT ERROR ESTIMATE) The truncation error is the
same order-of-magnitude as the last COEFFICIENT RETAINED in the truncation for series with
geometric convergence. Since the truncation error is a quantity we can only estimate anyway (in
the absence of a known, exact solution), we can loosely speak of the last retained coefficient as being
the truncation error, that is:

ET (N ) ∼ O(| aN |) (2.97)

Extension: if the series has algebraic convergence index k, i.e., if an ∼ O(1/n^k) for large n, then

ET(N) ∼ O( N | aN | )     (2.98)

JUSTIFICATION: If we accept that a large class of Fourier and Chebyshev series con-
verge geometrically, then (2.97) follows immediately from (2.95). Note that the last re-
tained coefficient in the example is aN = 2 p^N, so that (2.97) is true with the proportionality
constant p/(1 − p) ∼ O(1).
For algebraically converging series, note that

Σ_{n=N+1}^{∞} 1/n^k ∼ 1/[ (k − 1) N^{k−1} ],     N ≫ 1     (2.99)

(Bender and Orszag, 1978, pg. 379.) Thus, the bound in the error truncation theorem is
O(1/N^{k−1}) while the last retained coefficient is O(1/N^k), and this implies (2.98).
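The tail estimate (2.99) is easy to check directly; the values of k and N below are illustrative:

```python
import numpy as np

k, N = 4, 100
n = np.arange(N + 1, 10**6, dtype=float)
tail = np.sum(n**(-k))                   # sum_{n=N+1}^inf n^{-k}, truncated far out
estimate = 1.0/((k - 1)*N**(k - 1))      # the asymptotic formula (2.99)
print(tail, estimate)                    # agree to within a few percent for N = 100
```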
The ultimate test of a numerical solution is to repeat the calculation with different N
and compare results. The rule-of-thumb is not intended to substitute for that. Rather, it
has two uses. First, it provides a quick-and-dirty way of estimating the error from a single
calculation; if the last retained coef¬cient is not small in comparison to the desired error,
then we need larger N. If it is small, and the lower coefficients decrease smoothly towards
aN, the calculation is probably okay. Second, it provides order-of-magnitude guidelines for
estimating the feasibility of a calculation. Without such guidelines, one would waste lots
of time by attempting problems which far exceed the available computing power.

2.13 Convergence Theory for Legendre Polynomials
When the computational domain is split into a large number of subdomains with a separate
spectral series on each subdomain, a popular strategy is to mimic the finite element method


