
Figure 2.11: Left panel: Graphs of Dn(x; p) (solid) and λ(x; p) (dashed). Right panel: Fourier coefficients. Top (solid): coefficients of λ. Bottom (circles): coefficients of Dn.

The elliptic function Dn(x; p), defined below, has an infinite number of poles on the imaginary axis, instead of just two.

\[
\mathrm{Dn}(x; p) \equiv \frac{1}{2} + 2 \sum_{n=1}^{\infty} \frac{p^{n}}{1 + p^{2n}} \cos(n x) \tag{2.26}
\]
\[
= B \sum_{m=-\infty}^{\infty} \operatorname{sech}\!\left[ B \left( x - 2\pi m \right) \right] \tag{2.27}
\]
where
\[
B(p) \equiv \frac{\pi}{2 \log(1/p)} \tag{2.28}
\]

Dn has simple poles at
\[
x = 2\pi m + i\, a\, (2j + 1), \qquad \text{all integers } m, j \tag{2.29}
\]

Either by matching poles and residues or by Taylor expanding the 1/(1 + p^{2n}) factor, one can show
\[
\mathrm{Dn}(x; p) = \frac{1}{2} + \sum_{j=0}^{\infty} (-1)^{j} \left\{ \lambda\!\left(x;\, p^{2j+1}\right) - 1 \right\} \tag{2.30}
\]

This is a partial fraction expansion, modified so that all poles at a given distance from the real axis are combined into a single term, and it converges rather slowly. Graphed as a function of x, this elliptic function Dn(x; p) resembles λ(x; p) only in a general way (Fig. 2.11).
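Because the three representations (2.26), (2.27)-(2.28), and (2.30) look so different, a quick numerical cross-check is reassuring. The sketch below is added here and is not from the original text; the function names and the sample values p = 0.5, x = 1.3 are arbitrary choices of ours, and it assumes the standard form λ(x; p) = (1 − p²)/((1 + p²) − 2p cos x) of the imbricated Lorentzian (the same expression as the f(x) of Fig. 2.12).

import numpy as np

def dn_cosine(x, p, nterms=200):
    # Dn(x; p) from the cosine series (2.26).
    n = np.arange(1, nterms + 1)
    return 0.5 + 2.0 * np.sum(p**n / (1.0 + p**(2 * n)) * np.cos(n * x))

def dn_imbricated(x, p, mmax=50):
    # Dn(x; p) from the imbricated-sech form (2.27), with B(p) as in (2.28).
    B = np.pi / (2.0 * np.log(1.0 / p))
    m = np.arange(-mmax, mmax + 1)
    return B * np.sum(1.0 / np.cosh(B * (x - 2.0 * np.pi * m)))

def lam(x, p):
    # The "imbricated Lorentzian" lambda(x; p) = (1 - p^2) / ((1 + p^2) - 2 p cos x).
    return (1.0 - p**2) / ((1.0 + p**2) - 2.0 * p * np.cos(x))

def dn_lambda_expansion(x, p, jmax=200):
    # Dn(x; p) from the slowly converging expansion (2.30).
    total = 0.5
    for j in range(jmax + 1):
        total += (-1)**j * (lam(x, p**(2 * j + 1)) - 1.0)
    return total

p, x = 0.5, 1.3
print(dn_cosine(x, p), dn_imbricated(x, p), dn_lambda_expansion(x, p))
# If the formulas have been transcribed correctly, the three printed values
# agree to near machine precision.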
However, the Fourier coefficients of the elliptic function are more and more dominated by those of λ(x; p) as the degree of the coefficient increases because

\[
\frac{p^{n}}{1 + p^{2n}} \approx p^{n} \left\{ 1 + O\!\left(p^{2n}\right) \right\}, \qquad p \ll 1 \tag{2.31}
\]

If p is only slightly smaller than 1 (a slowly-converging series), then the lowest few elliptic coefficients will be roughly half those of the “imbricated-Lorentzian” λ. As the degree of the coefficient n increases, however, the approximation (2.31) will eventually become accurate. If we truncate the series at n = N and choose N sufficiently large so that we obtain an accuracy of 10^{-t}, then the relative error in the last retained coefficient, a_N, is O(10^{-2t}).
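To spell out the bookkeeping behind that last estimate (a short check added here; it is implicit in the original argument): since a_N is O(p^N) up to an O(1) factor, an accuracy of 10^{-t} requires p^N ~ 10^{-t}, and then
\[
\frac{\bigl|\,p^{N}/(1 + p^{2N}) - p^{N}\bigr|}{p^{N}} = \frac{p^{2N}}{1 + p^{2N}} = O\!\left(p^{2N}\right) = O\!\left(10^{-2t}\right).
\]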
The reason that Eq. (2.31) is so accurate is that the difference,
\[
\delta(x; p) \equiv \mathrm{Dn}(x; p) - \lambda(x; p), \tag{2.32}
\]
is a function which is no longer singular at x = ± i a; its Fourier series instead converges for all |ℑ(x)| < 3a. It is a general truth that whenever the difference between the “model” and “target” functions has a larger domain of convergence than f(x), the difference between the Fourier (or Chebyshev or any kind of spectral) coefficients will decrease exponentially fast with n.
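The same conclusion can be read off from the coefficients themselves. The following check is added here and assumes the standard geometric-series expansion λ(x; p) = 1 + 2 Σ_{n≥1} p^{n} cos(nx) of the imbricated Lorentzian: the n-th cosine coefficient of δ is
\[
2\left[ \frac{p^{n}}{1 + p^{2n}} - p^{n} \right] = -\frac{2\, p^{3n}}{1 + p^{2n}} = O\!\left(e^{-3 n a}\right), \qquad a = \log(1/p),
\]
which is exactly the geometric decay rate associated with convergence in the wider strip |ℑ(x)| < 3a.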
Of course, the situation is not always so favorable. For example, if a function is approximated by
\[
f(x) \approx \log(x - i a)\left\{ 1 + b_{1} (x - i a) + b_{2} (x - i a)^{2} + \ldots \right\} \tag{2.33}
\]
in the vicinity of its convergence-limiting singularities, it is easy to match the gravest branch point, log(x − i a). Unfortunately, the difference between f(x) and the model will still be singular at x = i a with the weaker branch point, (x − i a) log(x − i a). The difference f(x) − log(x − i a) has a Fourier series that converges more rapidly than that of f(x) by a factor of n, so the error in approximating the Fourier coefficients of f(x) by those of the logarithmically singular model will decrease as O(1/n).
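The factor of n can be read off from a standard pair of model functions (a check added here, not part of the original text):
\[
\log\!\left(1 - e^{-a - i x}\right) = -\sum_{n=1}^{\infty} \frac{e^{-n a}}{n}\, e^{-i n x},
\qquad
\mathrm{Li}_2\!\left(e^{-a - i x}\right) = \sum_{n=1}^{\infty} \frac{e^{-n a}}{n^{2}}\, e^{-i n x}.
\]
The first has a pure logarithmic branch point at x = i a; the second, whose singular part there is proportional to (x − i a) log(x − i a), has coefficients smaller by exactly one power of n. Matching only the log(x − i a) part therefore leaves a relative error of O(1/n) in the n-th coefficient, as claimed.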

Even so, it is clear that each function is representative of a whole class of functions: The
class of all functions that have singularities of that type and at that convergence-limiting
location. In a weaker sense (that is, if algebraic factors of n are ignored), each function
is representative of all functions that have convergence-limiting singularities at a given
point, regardless of the type of singularity (pole, square root, logarithm or whatever). It
follows that to understand a few representative functions is to understand spectral series
in general.


2.7 Convergence Domains: Why Taylor Series Don't Work
For all the standard basis functions, the spectral series converges in a domain of known shape in the complex plane. Since the singularities of f(x) control the asymptotic form of the spectral coefficients (Darboux's Principle), it follows that the size of the convergence domain and the rate of convergence at a given point within the domain are both controlled by the singularities of f(x), too.


Theorem 2 (CONVERGENCE DOMAIN in COMPLEX PLANE)
Barring a few exceptional cases, a spectral series converges in the largest domain in the x-plane, of a certain basis-dependent shape, which is free of singularities. For power series, the domain shape is a disk. For Fourier and Hermite series, it is an infinite strip, symmetric around the real x-axis. For Chebyshev, Legendre, and Gegenbauer polynomials, the convergence domain is the interior of an ellipse with foci at x = ±1.
The exceptional cases are entire functions which grow very rapidly in certain parts of the complex plane, as described in Chapter 17 and Boyd (2001).

PROOF: Given, for various basis sets, in classical treatises on Fourier series and orthogonal polynomials.
Appendix A graphs the convergence domain for several basis sets, but we usually have little direct interest in summing a spectral series for complex x! These convergence domains are significant only in the indirect sense that the larger the convergence region, the faster the spectral series converges for that interval of real x that we really care about. Later in the chapter, we shall show that knowledge of the complex singularities of a function u(x) allows us to explicitly calculate the asymptotic Fourier or Chebyshev coefficients of the function.
A power series' disk-shaped convergence domain is a fatal weakness for many applications, such as approximating the solution to a boundary value problem. Figure 2.12 compares the domains of convergence for the power series and the Fourier series of the function f(x) ≡ (1 − p²)/[(1 + p²) − 2p cos(x)]. This function is smooth, well-behaved and infinitely differentiable for all real x. Nevertheless, both expansions have finite domains of convergence because f(x) has simple poles, where it blows up as 1/(x − x_0), along the imaginary x-axis.
Because the convergence domain for a Taylor series is a disk, the power series only converges for a finite interval along the real x-axis. If we want to solve a boundary value problem on an interval x ∈ [−π, π] whose solution is f(x), or a function similar to it, a power series will fail. In contrast, the strip of convergence for the Fourier series embraces all real x.
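For the f(x) of Fig. 2.12 this is easy to quantify (a short calculation added here): the denominator vanishes where cos(x) = (1 + p²)/(2p), that is, at
\[
x = 2\pi m \pm i \log(1/p), \qquad m = 0, \pm 1, \pm 2, \ldots
\]
For p = 0.35 the singularities nearest the origin lie at x ≈ ±1.05 i, so the Taylor series about x = 0 converges only in the disk |x| < 1.05, which covers barely a third of [−π, π], whereas the Fourier strip |ℑ(x)| < 1.05 contains the entire real axis.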
Similarly, Chebyshev and Legendre polynomials are normally applied to solve problems on the canonical interval [-1, 1]. (If the domain is a different interval y ∈ [a, b], one can always make the trivial change of variable x ≡ (2y − (b + a))/(b − a) to map the problem into x ∈ [−1, 1].) Since the foci of an ellipse always lie inside it, the convergence domain for





Figure 2.12: A comparison of the regions of Fourier and Taylor series convergence in the complex z-plane for the function f(z) ≡ (1 − p²)/[(1 + p²) − 2p cos(z)] for p = 0.35. The Fourier series converges within the strip bounded by the two horizontal dashed lines. The power series converges only within the disk bounded by the dotted circle.


these orthogonal polynomials, which is bounded by an ellipse with foci at x = ±1, always
includes the whole interval [-1, 1]. Thus, singularities of f (x) outside this interval (whether
at real or complex locations) can slow the convergence of a Legendre or Chebyshev series,
but can never destroy it.
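The change of variable mentioned above is trivial to code. This is a minimal sketch added here; the function names are ours, not the book's.

def to_canonical(y, a, b):
    # Map y in [a, b] to the canonical interval x in [-1, 1] via x = (2y - (b + a)) / (b - a).
    return (2.0 * y - (b + a)) / (b - a)

def from_canonical(x, a, b):
    # Inverse map: x in [-1, 1] back to y in [a, b].
    return 0.5 * ((b - a) * x + (b + a))

By construction, to_canonical(a, a, b) = -1 and to_canonical(b, a, b) = +1.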
It is for this reason that our series, instead of being the more familiar Taylor expansions, will be Fourier or Chebyshev or Legendre series. If its disk of convergence is too small to include the boundaries, the power series will give nonsense. In contrast, the success of spectral methods is guaranteed as long as the target interval, x ∈ [−1, 1], is free of singularities.


2.8 Stalking the Wild Singularity or Where the Branch Points
Are
It is sometimes possible to identify the type and nature of the singularities of the solution
to a differential equation a priori without knowing the solution.


Theorem 3 (SINGULARITIES of the SOLUTION to a LINEAR ODE)
The solution to a linear ordinary differential equation is singular only where the coefficients of the differential equation or the inhomogeneous forcing are singular, and it may be analytic even at these points.


PROOF: Given in undergraduate mathematics textbooks.
An equivalent way of stating the theorem is to say that at points where all the coefficients of the differential equation and the inhomogeneous term are analytic functions, one may expand the solution as a Taylor series (Frobenius method) about that point with a non-zero radius of convergence.
Thus, the solution of
\[
u_{xx} + \frac{1}{1 + x^{2}}\, u = 0 \tag{2.34}
\]

on x ∈ [−1, 1], is singular only at the poles of the coefficient of the undifferentiated term at x = ±i and at infinity. The Chebyshev and Legendre series of u(x) are, independent of the boundary conditions, guaranteed to converge inside the ellipse in the complex x-plane with foci at ±1 which intersects the locations of the poles of 1/(1 + x²). By using the methods described in a later section, we can calculate the rate of geometric convergence to show that the series coefficients |a_n| must decrease as O([ ] (0.17)^{n/2}). The empty bracket represents an algebraic factor of n; this can be deduced by performing a local analysis around x = ±i to determine the type of the singularity of u(x) [this type need not match that of the coefficients of the differential equation] and applying Table 2.2. All without actually knowing u(x) itself or even specifying boundary conditions!
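That rate can be checked explicitly (a short calculation added here, using the standard result that a singularity at x_0 limits Chebyshev coefficients to |a_n| ∼ [ ] ρ^{-n}, where ρ = |x_0 + √(x_0² − 1)| is the elliptical coordinate of x_0). For x_0 = ±i,
\[
\rho = \left| i + i\sqrt{2} \right| = 1 + \sqrt{2} \approx 2.414,
\qquad
|a_n| \sim [\ ]\,\rho^{-n} = \left( \sqrt{2} - 1 \right)^{n} \approx (0.414)^{n} \approx (0.17)^{n/2},
\]
in agreement with the rate quoted above.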
Unfortunately, the theorem does not extend to partial differential equations or to nonlinear equations even in one dimension.

EXAMPLE: First Painlevé Transcendent
The differential equation
\[
u_{xx} - u^{2} = x \tag{2.35}
\]
\[
u(0) = u_{x}(0) = 0 \tag{2.36}
\]
has coefficients and an inhomogeneous term which have no singularities. Nonetheless, numerical integration and asymptotic analysis show that u(x) has poles at x = 3.742, 8.376, 12.426, etc.: an infinite number of poles on the real axis (Bender and Orszag, 1978, pg. 162). These poles are actually “movable” singularities, that is, their location depends on the initial conditions, and not merely upon the form of the differential equation.
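The location of the first pole is easy to reproduce numerically. The sketch below is added here and is not from the book; it integrates (2.35)-(2.36) with SciPy and stops once |u| becomes huge.

import numpy as np
from scipy.integrate import solve_ivp

def painleve_rhs(x, y):
    # The first Painleve transcendent (2.35) as a first-order system, y = (u, u_x).
    u, ux = y
    return [ux, u**2 + x]

def blowup(x, y):
    # Event: |u| crosses a large threshold, signalling the approach to a pole.
    return abs(y[0]) - 1.0e6
blowup.terminal = True

sol = solve_ivp(painleve_rhs, [0.0, 10.0], [0.0, 0.0],
                events=blowup, rtol=1e-10, atol=1e-12)
print("blow-up detected near x =", sol.t[-1])  # expect a value close to the first pole, x ~ 3.74

Since u behaves like 6/(x − x_0)² near a pole (a standard local analysis of u_xx ≈ u²), the threshold crossing lands within a few thousandths of the pole itself, consistent with the quoted x ≈ 3.742.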
Movable singularities are generic properties of solutions to nonlinear differential equations. The reason can be understood by examining an ODE closely related to the Painlevé transcendent:
\[
u_{xx} - U(x)\, u = x \tag{2.37}
\]
where U(x) is arbitrary. Because this is linear, Theorem 3 tells us that u(x) is singular only where U(x) is singular. If we choose U(x) to be a solution of the Painlevé equation, however, then the linear ODE simply becomes (2.35). Thus, for a nonlinear ODE, the solution u(x) itself furnishes the spatially-variable coefficients. Theorem 3 actually still applies; unfortunately, it is useless because we cannot apply it until we already know the singularities of u(x), which is of course the very information we want from the theorem.
For many nonlinear equations, a problem-specific analysis will deduce some a priori information about the location of singularities, but no general methods are known.
On a more positive note, often the physics of a problem shows that the solution must be “nice” on the problem domain. This implies, if the interval is rescaled to x ∈ [−1, 1] and a Chebyshev or Legendre expansion applied, that the spectral series is guaranteed to converge exponentially fast. The only catch is that the asymptotic rate of convergence µ, where the spectral coefficients are asymptotically of the form a_n ∼ [ ] exp(−nµ), could be very large or very small or anywhere in between; without advance knowledge of the singularities of u(x) outside the problem domain, we cannot be more specific than to assert “exponential convergence”. (However, some rules-of-thumb will be offered later.)

2.8.1 Corner Singularities & Compatibility Conditions
Unfortunately, for partial differential equations, it is usual for the solution to even a linear, constant coefficient equation to be weakly singular in the corners of the domain, if the boundary has sharp corners.

EXAMPLE: Poisson equation on a rectangle. If
\[
\nabla^{2} u = -1; \qquad u = 0 \ \text{on the sides of the rectangle}, \tag{2.38}
\]
then the solution is weakly singular in the four corners. In terms of a polar coordinate system (r, θ) centered on one of the corners, the singularity is of the form
\[
u = (\text{constant})\; r^{2} \log(r) \sin(2\theta) \; + \; \text{other terms} \tag{2.39}
\]

(Birkhoff and Lynch, 1984). The singularity is “weak” in the sense that u(x, y) and its first two derivatives are bounded; it is only the third derivative that is infinite in the corners.
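As a small consistency check (added here, not part of the original text), the angular factor in (2.39) vanishes on both walls that meet at the corner, taken as θ = 0 and θ = π/2:
\[
\sin(2 \cdot 0) = 0, \qquad \sin\!\left(2 \cdot \tfrac{\pi}{2}\right) = \sin \pi = 0,
\]
so the singular corner term is compatible with the homogeneous boundary condition u = 0 on the sides.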
Constant coefficient, constant forcing, singular solution? It seems a contradiction. However, the boundary curve of a square or any other domain with a corner cannot be represented by a smooth, infinitely differentiable curve. At a right-angled corner, for example, the boundary curve must abruptly shift from vertical to horizontal: the curve is continuous, but its slope has a jump discontinuity.
This argument suggests, correctly, that corner singularities can be eliminated by slightly rounding the corners so that both the boundary curve and the values of u upon it can be parameterized by smooth, infinitely differentiable curves. This is true physically as well as mathematically.
In solid mechanics, corners are regions of very high stress, and the corner singularities are merely a mathematical reflection of this. In a house, cracks in paint or drywall often radiate from the corners of windows and door openings. The first commercial jet aircraft, the British Comet, was grounded in the early 1950s after three catastrophic, no-survivor crashes. One of the surviving airframes was tested to destruction in a water tank that was repeatedly pressurized. After six weeks, the airframe failed abruptly. A crack began at a corner of one of the airliner's square windows and then propagated backwards until it was ten meters long!
Modern airliners all have windows with rounded corners. Unfortunately, in other contexts, it is often necessary to solve problems with unrounded corners.
There are two pieces of good news. First, corner singularities are often so weak as to be effectively ignorable even for high accuracy Chebyshev solutions (Boyd, 1986c; Lee, Schultz and Boyd, 1989b). Second, there are good methods for dealing with singularities, including mapping and singularity subtraction, as will be described in Chapter 16.

EXAMPLE: One-Dimensional Diffusion Equation


\[
u_t = u_{xx}, \qquad u(0) = u(\pi) = 0 \tag{2.40}
\]
