The second consequence of the convergence of the meridians is that differential equations

are singular at r = 0. For example, the Laplace operator is

\nabla^2 \equiv \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial \theta^2} \qquad (18.2)

Thus, the coefficients of the first radial derivative and second polar derivative are both

singular at r = 0. And yet, when the Laplacian is written in Cartesian coordinates, it is

constant coefficient:

\nabla^2 \equiv \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \qquad (18.3)

18.4. POLAR COORDINATES: PARITY THEOREM 383

A simple function like f(x, y) = x^2 + y^2 has a Laplacian which is a constant. What could

be less singular!

The resolution of the paradox is that the singularities of the Laplace operator, when

written in polar coordinates, are only “apparent”. The solution to a differential equation is

usually smooth and infinitely differentiable at the pole.
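This cancellation can be checked symbolically. The sketch below (the use of sympy is our own choice, not part of the text) applies the polar Laplacian of Eq. (18.2) to f(x, y) = x^2 + y^2, which is simply r^2 in polar coordinates:

```python
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)

# f(x, y) = x^2 + y^2 becomes simply r^2 in polar coordinates
f = r**2

# Polar Laplacian, Eq. (18.2): f_rr + (1/r) f_r + (1/r^2) f_theta,theta
lap = sp.simplify(sp.diff(f, r, 2) + sp.diff(f, r) / r + sp.diff(f, theta, 2) / r**2)

print(lap)  # the apparently singular 1/r terms cancel, leaving the constant 4
```

The 1/r and 1/r^2 coefficients blow up at the origin, yet the Laplacian of this smooth function is the constant 4 everywhere, including r = 0.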

To avoid evaluating differential equation coefficients where they are infinite, the spectral grid usually excludes the origin. This is only a trivial complication, however; the interesting issue is: How does one impose boundary conditions at the origin?

For finite difference methods, one must normally impose explicit, numerical boundary

conditions, based on careful analysis of the behavior of the solution near the origin, at the

grid point nearest r = 0. For spectral methods, however, the singularity actually simplifies

the algorithm. To a spectral method, the boundary condition at the origin is behavioral

rather than numerical: the correct behavior is that the solution should be analytic at the

origin even though the coefficients of the differential equation are not. Since the terms of

a spectral series are individually analytic, it follows that a Chebyshev or similar series in

radius automatically satisfies the boundary condition at r = 0, and no additional explicit

conditions are necessary. If the differential equation were not singular at r = 0, it would

be necessary (for a second order equation) to impose an explicit numerical boundary condition

at the origin. Thus, the singularity of the coordinate has ironically simplified the use

of a spectral method.

For best results, one must respect the parity symmetry at the origin which is explained

in the next two sections. However, one only needs to constrain the spectral series through

a numerical boundary condition at the outer boundary of the disk. If one applies polar

coordinates to an infinite domain, then both boundary conditions are behavioral, and no

conditions need to be explicitly imposed on the spectral series.

18.4 Polar Coordinates: Parity Theorem

Theorem 35 (Polar Coordinates: Parity in Radius) In polar coordinates where r is radius and

θ is angle, expand an arbitrary function as a Fourier series in θ:

f(r, \theta) = \sum_{m=0}^{\infty} \left[ f_m(r) \cos(m\theta) + g_m(r) \sin(m\theta) \right] \qquad (18.4)

Suppose that f (r, θ) is a SCALAR or the z-velocity in cylindrical coordinates or is the product of

r with the radial or tangential components of a vector in polar coordinates. Then if the function is

analytic at r = 0, continuity of the function and its derivatives demands the following:

(i) f_m(r) and g_m(r) have m-th order zeros at r = 0.

(ii) If m is EVEN, then f_m(r) and g_m(r) are both SYMMETRIC about r = 0 and their power series

contain only EVEN powers of r.

(iii) If m is ODD, then f_m(r) and g_m(r) are both ANTISYMMETRIC about r = 0 and their power

series contain only ODD powers of r.

Proof: A geometric argument for spherical coordinates is given in Sec. 18.8 below. This

visual argument applies also to polar coordinates because latitude and longitude become a

local polar coordinate system in a small neighborhood of the north and south poles. Here,

we present a different line of proof, couched as an informal, heuristic argument. Because


there seems to have been much confusion about this theorem in the literature and multiple

rederivations of its conclusions, we give this argument in some detail, but the rest of this

section can be skipped without loss of continuity. A careful, rigorous proof is given in

Eisen, Heinrichs & Witsch (1991).

First, note that any function f(x, y) which is analytic at and near x = y = 0 can be

locally approximated, to any desired accuracy, by a bivariate polynomial of sufficiently

high order. (Indeed, a double Taylor series will suffice.) Substituting x = r cos(θ) and

y = r sin(θ), it follows that a bivariate polynomial in r cos(θ) and r sin(θ) will suffice as a

proxy for f(r, θ) to any desired accuracy. The monomials that appear in this polynomial

approximation have properties described by the following two lemmas.

Lemma 1 (Polar Wavenumbers of Cartesian Powers) Let “polar wavenumber m” denote the

Fourier terms cos(mθ) or sin(mθ). When a monomial

M_{jk} \equiv x^j y^k \qquad (18.5)

in powers of the Cartesian coordinates x and y is converted into polar coordinates and expanded as

a Fourier series in the polar angle θ, polar wavenumber m will only appear in the expansion of the

monomial if the total degree of the monomial is greater than or equal to m, that is, if

d \equiv j + k \geq m \qquad (18.6)

Proof: A monomial of total degree d can be written, expressing the sine and cosine as

complex exponentials, in the form

x^j y^k \equiv r^{j+k} \, \frac{(-i)^k}{2^d} \prod_{n=1}^{d} \bigl( \exp(i\theta) \pm \exp(-i\theta) \bigr) \qquad (18.7)

The crucial fact is that each exponential in the product is proportional to polar wavenumber

one, that is, to exp(±iθ). To obtain a term of wavenumber 7, for example, it is

necessary to multiply such wavenumber-one terms together at least 7 times. This requires

that the number of factors in the product is at least 7. Since the number of factors is the

total degree d, it follows that the total degree of the monomial must be at least 7. The same

reasoning applies to arbitrary wavenumber m.

The factor of r^d in front of the product in Eq. (18.7) then immediately implies the following.

Lemma 2 (Radial Degree of Cartesian Powers) A Cartesian monomial x^j y^k is proportional to

r^d where d ≡ j + k is the total degree of the monomial.

Because of Lemma 1, it follows that a Cartesian monomial will contain terms like cos(mθ) or

sin(mθ) only if it is proportional to r^d where d ≥ m.

Together, these lemmas show that the terms proportional to either cos(mθ) or sin(mθ)

must also be proportional to r^m, and thus the coefficients of these terms in the Fourier

series of f(r, θ) must have an m-th order zero at r = 0.

The parity of the Fourier coefficients with respect to r = 0 follows from a third lemma.

Lemma 3 (Monomial Parity) A monomial x^j y^k, when expanded as a Fourier series in polar

angle θ, is a trigonometric polynomial of degree j + k which contains only those polar

wavenumbers of the same parity as the degree d = j + k. That is, if d is odd, then the Fourier series of

x^j y^k contains only {cos(dθ), sin(dθ), cos([d−2]θ), sin([d−2]θ), cos([d−4]θ), sin([d−4]θ), . . .}.

Odd polar wavenumbers m are thus multiplied only by odd powers of r, and even wavenumbers

only by even powers.


Proof: The product

x^j y^k \equiv r^{j+k} \, \frac{(-i)^k}{2^d} \prod_{n=1}^{d} \bigl( \exp(i\theta) \pm \exp(-i\theta) \bigr), \qquad (18.8)

can be expanded into terms of the form exp(inθ) where n depends on how many factors

of exp(iθ) and how many factors of its complex conjugate are combined in a given term.

The term of highest polar wavenumber is exp(idθ), obtained when all d exponentials exp(iθ) are

combined. The crucial point is that each time exp(iθ) is replaced by its complex conjugate

exp(−iθ), which is the other term inside each factor in the product, the wavenumber n is lowered

by two. Thus, all wavenumbers n in the expansion of the product will be of the same parity,

either even or odd, as the total degree d.

The radial dependence of the monomial x^j y^k is r^d where d is the sum of j and k. If

d is even, then r^d is symmetric with respect to r = 0. (We have already seen that if d

is even, then all the Fourier coefficients of the monomial are of even polar wavenumber,

too.) Similarly, if d is odd, then its radial factor r^d is antisymmetric (i.e., odd parity) with

respect to r = 0. Thus odd powers of r are paired only with cos(mθ) and sin(mθ) where m

is odd, and even powers of r are paired only with even m.
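Lemma 3 is easy to confirm numerically. The sketch below (the helper name and the FFT-based approach are our own, not from the text) samples a monomial on the unit circle and lists the polar wavenumbers whose Fourier coefficients are nonzero:

```python
import numpy as np

def polar_wavenumbers(j, k, tol=1e-10):
    """Polar wavenumbers m present in the monomial x^j y^k, found by
    sampling cos^j(theta) sin^k(theta) on the circle r = 1 and taking an FFT."""
    d = j + k
    n = 4 * (d + 1)                      # enough samples to resolve degree d
    theta = 2 * np.pi * np.arange(n) / n
    coeffs = np.fft.rfft(np.cos(theta)**j * np.sin(theta)**k) / n
    return [m for m, c in enumerate(coeffs) if abs(c) > tol]

# x^2 y^3 has total degree d = 5 (odd): only odd wavenumbers <= 5 appear
print(polar_wavenumbers(2, 3))   # [1, 3, 5]
# x^2 y^2 has d = 4 (even): only even wavenumbers <= 4 appear
print(polar_wavenumbers(2, 2))   # [0, 4]
```

In each case the wavenumbers present share the parity of the total degree d and never exceed d, exactly as Lemmas 1 and 3 require.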

The parity theorem has major implications for the choice of grids and basis functions as

explained in the next section.

18.5 Radial Basis Sets and Radial Grids

On a finite domain in radius, which can always be normalized to r ∈ [0, 1] by a linear

change-of-coordinate, there are several options (where m denotes the angular wavenumber):

1. Bessel Functions: J_m(j_{m,j} r) cos(mθ), sin(mθ)

2. Polar Robert Functions: r^m T_j(r), j = 0, 1, 2, . . .

3. Shifted Chebyshev Polynomials of Linear Argument:

T_j(2r − 1), j = 0, 1, 2, . . .

4. Shifted Chebyshev Polynomials of Quadratic Argument:

T_j(2r^2 − 1) (m even), r T_j(2r^2 − 1) (m odd), j = 0, 1, 2, . . .

5. Unshifted Chebyshev Polynomials of Appropriate Parity:

T_{2j}(r) cos(2mθ), T_{2j−1}(r) cos([2m − 1]θ), j = 0, 1, 2, . . .

6. One-Sided Jacobi Polynomials:

r^m P_j^{0,|m|}(2r^2 − 1) cos(mθ), sin(mθ)

The “cylindrical harmonics” are

J_{mn}(r, \theta) \equiv J_m(j_{m,n} r) \left\{ \begin{array}{c} \cos(m\theta) \\ \sin(m\theta) \end{array} \right\} \qquad (18.9)

where there are two basis functions, one proportional to cos(mθ) and the other to sin(mθ),

for all pairs of integers (m, n) with m > 0 and where jm,n denotes the n-th zero of the m-th

Bessel function Jm (r). The cylindrical harmonics are popular in engineering because these

are the eigenfunctions of the Laplace operator in polar or cylindrical coordinates, which

made it easy in the old pre-computer days to apply a Galerkin method. However, as noted


by Gottlieb and Orszag (1977), the Bessel series usually converges only at an algebraic rate.

(An analysis of the precise rate of convergence is given in Boyd and Flyer (1999).)
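The Galerkin convenience mentioned above comes from the weighted orthogonality of the Bessel functions on the unit disk. A quick numerical check of ∫₀¹ r J_m(j_{m,p} r) J_m(j_{m,q} r) dr (scipy is an assumed dependency; the integration scheme is our own sketch):

```python
import numpy as np
from scipy.special import jv, jn_zeros

m = 2
zeros = jn_zeros(m, 3)            # first three zeros j_{m,n} of J_m
r = np.linspace(0.0, 1.0, 20001)
dr = r[1] - r[0]

def inner(p, q):
    """Riemann-sum approximation to the weighted inner product
    integral_0^1 r J_m(j_{m,p} r) J_m(j_{m,q} r) dr."""
    return np.sum(r * jv(m, zeros[p] * r) * jv(m, zeros[q] * r)) * dr

off_diag = inner(0, 1)            # nearly zero: distinct harmonics are orthogonal
diag = inner(0, 0)                # positive: each harmonic has a nonzero norm
print(off_diag, diag)
```

The off-diagonal inner product vanishes to quadrature accuracy, which is why Galerkin matrices for the Laplace operator in this basis are diagonal.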

Bouaoudia and Marcus (1991) developed a fast transform for the polar “Robert” functions,

which mimic the spherical basis invented by Robert (1966):

\phi_{m,n}(r, \theta) = r^m T_n(r) \left\{ \begin{array}{c} \cos(m\theta) \\ \sin(m\theta) \end{array} \right\} \qquad (18.10)

where n is restricted to have the same parity (i.e., even or odd) as m. Unfortunately,

Merilees (1973a) showed that the analogous spherical basis is horribly ill-conditioned and

therefore useless except at very low resolution. The problem is that for large m and small

n, the basis functions are zero over almost all the interval because of the r^m factor; note that

r^m = exp(m log(r)) ≈ exp(−m(1 − r)) for r ≈ 1, and thus decays exponentially fast away

from the boundary at r = 1. In the narrow region around r = 1 where the small-n functions

are non-zero, T_0, T_2 and other low degree basis functions vary so slowly that φ_{m,0} ≈ φ_{m,2},

destroying the linear independence of the functions in the basis. Marcus abandoned the

polar Robert functions in his later work, and instead went to One-Sided Jacobi polynomials

as explained below.
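This near-dependence is easy to exhibit numerically. The sketch below (our own construction, using numpy's Chebyshev module) measures the cosine of the angle between the radial parts r^m T_0(r) and r^m T_2(r) for a large angular wavenumber:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def robert_radial(m, n, r):
    """Radial part r^m T_n(r) of the polar Robert function, Eq. (18.10)."""
    return r**m * C.chebval(r, [0] * n + [1])

r = np.linspace(0.0, 1.0, 2001)
m = 100                                   # a large angular wavenumber

f0 = robert_radial(m, 0, r)
f2 = robert_radial(m, 2, r)

# cosine of the angle between the two basis functions: a value near 1
# means they are almost linearly dependent
cosine = np.dot(f0, f2) / (np.linalg.norm(f0) * np.linalg.norm(f2))
print(cosine)
```

The cosine comes out extremely close to one: both functions are confined to the same thin layer near r = 1, where T_0 and T_2 are nearly indistinguishable, so the basis is badly ill-conditioned.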

The third bad option is to expand fm (r) as a series of Shifted-Chebyshev polynomials

with a linear argument:

T_j^*(r) \equiv T_j(2r - 1), \qquad r \in [0, 1] \qquad (18.11)

where the asterisk is the usual notation for the Shifted-Chebyshev polynomials. The reason

that this option is bad is that the Shifted-Chebyshev grid has points clustered near both

r = 0 and r = 1. However, the disk bounded by r = ρ has an area which is only the

fraction ρ^2 of the area of the unit disk. Near the origin, points are separated by O(1/N^2).

It follows that the high density of points near the origin is giving high resolution of only a

tiny, O(1/N^4)-in-area portion of the disk. Unless the physics imposes near-singularities or

high gradients near the origin, clustering grid points at r = 0 is a bad idea. And even in this

special situation of small-scale features near the origin, it is conceptually better to regard

the use of Shifted-Chebyshev polynomials as “mapping to resolve large gradients near the

origin” rather than “Shifted-Chebyshev polynomials are a good general representation for

radial dependence”.
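The difference in clustering near the origin can be seen directly by building the two grids (a sketch using the Lobatto angles; the variable names are ours):

```python
import numpy as np

N = 64
t = np.pi * np.arange(N + 1) / N          # Lobatto angles on [0, pi]

# Shifted-Chebyshev grid for T_j(2r - 1): clusters at BOTH r = 0 and r = 1
r_shifted = 0.5 * (1.0 + np.cos(t))
# Parity / quadratic-argument grid: clusters only at r = 1
r_parity = np.cos(t / 2.0)

min_shifted = min(x for x in r_shifted if x > 1e-14)   # O(1/N^2) from the origin
min_parity = min(x for x in r_parity if x > 1e-14)     # O(1/N) from the origin
print(min_shifted, min_parity)
```

For N = 64 the shifted grid's first interior point sits at roughly 6e-4 while the parity grid's sits near 0.025, i.e. the shifted grid wastes many points resolving a vanishingly small area around r = 0.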

Fig. 18.2 compares the Shifted-Chebyshev series (dashed) with two other options. The

shallow slope of the coefficients of the Shifted-Chebyshev series shows that one needs

roughly twice as many terms to obtain a given degree of accuracy with the Shifted-Chebyshev

polynomials as with the alternatives.

The Shifted-Chebyshev polynomials of quadratic argument and the Parity-Restricted

Chebyshev polynomials, which are also shown in the figure, both work well, but generate

only a single curve on the graph. The reason is that these two options, although seemingly

very different, are in fact the same because of the identity

T_j(2r^2 - 1) \equiv T_{2j}(r), \qquad \forall j, r \qquad (18.12)
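The identity can be verified numerically in a few lines (numpy's Chebyshev module is an assumed convenience; the helper `T` is ours):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def T(n, x):
    """Chebyshev polynomial T_n(x)."""
    return C.chebval(x, [0] * n + [1])

r = np.linspace(0.0, 1.0, 101)
ok = all(np.allclose(T(j, 2 * r**2 - 1), T(2 * j, r)) for j in range(8))
print(ok)   # True: T_j(2r^2 - 1) and T_{2j}(r) agree for j = 0..7
```

The identity follows from T_j(cos(2t)) = cos(2jt) = T_{2j}(cos(t)) with r = cos(t).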

The grid with parity uses r_j^{parity} = cos(t_j/2), whereas the shifted grid with quadratic argument

has r_j = {(1 + cos(t_j))/2}^{1/2}, where the t_j are an evenly spaced grid on the interval

t ∈ [0, π] for Lobatto, Chebyshev-roots, or Radau grids. (The Radau grid includes r = 1 but

not r = 0.) The trigonometric identity

\cos\left(\frac{t}{2}\right) = \sqrt{\frac{1 + \cos(t)}{2}} \qquad (18.13)


shows that

r_j^{parity} = r_j, \qquad \forall j \qquad (18.14)

Thus, both the basis functions and the interpolation grid are identical for these two methods.

The distance between adjacent points close to the origin is O(1/N).

Nevertheless, when there is a large number M of points in the polar angle θ and N

points in radius, the distance between adjacent gridpoints on the circle of radius r1 , the

smallest radial gridpoint, will be O(1/(M N )). Since the explicit time-stepping limit is

roughly the amount of time required for advection or diffusion from one grid point to

the next, it follows that the Parity-Restricted Chebyshev polynomials will give a time step

limit which is O(1/(M N)) for advective stability and O(M^{-2} N^{-2}) for diffusion. Ouch! On

the sphere, this motivates the choice of spherical harmonics as the basis, which eliminates

this “pole problem” of a very short time-step. A similarly-motivated basis for the disk is

described in the next section.
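The O(1/(M N)) estimate can be made concrete (a sketch; the function name is ours): the innermost circle of the parity grid has radius r_1 ≈ π/(2N), and adjacent θ-gridpoints on it are separated by an arc of length r_1 · 2π/M:

```python
import numpy as np

def min_azimuthal_spacing(M, N):
    """Arc length between adjacent theta-gridpoints on the innermost circle
    r = r_1 of the parity-restricted Chebyshev grid (Lobatto angles)."""
    t = np.pi * np.arange(N + 1) / N
    r1 = min(x for x in np.cos(t / 2.0) if x > 1e-14)   # r_1 = sin(pi/(2N))
    return r1 * (2.0 * np.pi / M)

# spacing * M * N is approximately pi^2, confirming the O(1/(M N)) scaling
print(min_azimuthal_spacing(64, 32) * 64 * 32)
```

Doubling either resolution halves the smallest grid spacing, and an explicit advective time step shrinks in proportion.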

18.5.1 One-Sided Jacobi Basis for the Radial Coordinate

Matsushima and Marcus (1995) and Verkley (1997a, 1997b) independently proposed One-Sided

Jacobi polynomials, multiplied by r^m: