instead of the “interior” grid where the wj are the quadrature weights. Neglecting the

exponential error terms, we can rearrange the terms in the equation IGauss = ILobatto to

obtain

\rho \left[ \sum_{j=1}^{N} \frac{w_j^G}{f(x_j^G)} - \sum_{j=1}^{N} \frac{w_j^L}{f(x_j^L)} \right] = \sum_{j=1}^{N} \frac{w_j^G \, x_j^G}{f(x_j^G)} - \sum_{j=1}^{N} \frac{w_j^L \, x_j^L}{f(x_j^L)}    (19.35)

Dividing by the sum multiplying ρ on the left gives an explicit formula for the root ρ as a ratio of weighted

values of f (x). For Chebyshev quadrature with the usual Chebyshev weight function in

the integral Eq.(19.32), the weights are all identical except for the two endpoints, and the

quadrature points for both grids combined are the images of an evenly spaced grid under

the cosine mapping. This greatly simpli¬es the ¬nal approximation to the root to

\rho = \sum_{j=0}^{2N}{}'' \, \frac{(-1)^j \, x_j}{f(x_j)} \Bigg/ \sum_{j=0}^{2N}{}'' \, \frac{(-1)^j}{f(x_j)}    (19.36)

where the double prime on the sums denotes that the first and last terms in both the numerator and denominator should be multiplied by (1/2) and

x_j = \frac{1}{2} \left[ (a+b) - (a-b) \cos\left( \frac{j\pi}{2N} \right) \right], \qquad j = 0, 1, \ldots, 2N    (19.37)

Fig. 19.3 shows that for a typical example, the convergence of the approximation with

N is indeed geometric. Ioakimidis (unpublished preprint) has shown that the idea can be

generalized to find the roots of a pair of equations in two unknowns.
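Eqs.(19.36) and (19.37) translate into just a few lines of code. The sketch below (the function name `chebroot` is mine, not from the text) assumes f has a single simple root in [a, b] and no poles there:

```python
import numpy as np

def chebroot(f, N, a=-1.0, b=1.0):
    """Ioakimidis' non-iterative Chebyshev root-finder, Eqs.(19.36)-(19.37):
    rho = sum'' (-1)^j x_j/f(x_j) / sum'' (-1)^j/f(x_j), where the double
    prime means the first and last terms are halved."""
    j = np.arange(2 * N + 1)
    x = 0.5 * ((a + b) - (a - b) * np.cos(j * np.pi / (2 * N)))  # Eq.(19.37)
    terms = (-1.0) ** j / f(x)
    terms[0] *= 0.5          # double-prime convention
    terms[-1] *= 0.5
    return np.sum(terms * x) / np.sum(terms)

# The function plotted in Fig. 19.3; its root nearest the origin is pi/4
rho = chebroot(lambda x: np.sin(x - np.pi / 4) / np.sqrt(1 + 10 * x**2), 20)
```

The cost is 2N + 1 evaluations of f, in agreement with the caption of Fig. 19.3; for a linear f the formula is exact even at N = 1.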


[Figure 19.3 appears here: a semilog plot titled f=sin(x-pi/4)/sqrt(1+10 x**2), showing errors (vertical axis, 10^0 down to 10^{-12}) versus N (horizontal axis, 0 to 30).]

Figure 19.3: Absolute value of the error in approximating the root nearest the origin of the function sin(x − π/4)/(1 + 10 x²)^{1/2}, using the quadrature interval [−1, 1] in Ioakimidis' non-iterative Chebyshev root-finding algorithm. The number of evaluations of f(x) is 2N + 1, where N is the order of the method.

19.7 Spectrally-Accurate Algorithms for the Hilbert Transform

The Hilbert transform of a function f(x) is defined as the Principal Value (PV) integral

H\{f\}(y) \equiv \frac{1}{\pi} \, PV \int_{-\infty}^{\infty} \frac{f(x)}{x - y} \, dx    (19.38)

If f (x) ∈ L2 (−∞, ∞), then the Hilbert transform is in the function space L2 also. The

Hilbert transform is important in signal processing, optics, and water waves. For example,

the famous Benjamin-Ono equation, which has solitary wave solutions, is

ut + uux + H{uxx } = 0 (19.39)

For functions that decay exponentially fast in |x|, there are at least three Hilbert transform algorithms whose accuracy increases exponentially but subgeometrically with N , the

number of degrees of freedom:

1. sinc quadrature (Stenger, 1981; first proposed by Kress and Martensen):

H\{f\}(y) \approx \frac{h}{\pi} \sum_{j=-N}^{N} f(jh) \, \frac{\cos(\pi[y - jh]/h) - 1}{y - jh}    (19.40)

where h is the grid spacing.
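Eq.(19.40) is easy to test numerically. The sketch below (helper name `hilbert_sinc` is mine) uses f(x) = 1/(1 + x²), whose Hilbert transform under the sign convention of Eq.(19.38) is −y/(1 + y²); note that this f decays only algebraically, so the truncated tails rather than the quadrature limit the accuracy here:

```python
import numpy as np

def hilbert_sinc(f, y, h, N):
    """Sinc quadrature for the Hilbert transform, Eq.(19.40):
    H{f}(y) ~ (h/pi) sum_{j=-N}^{N} f(jh) [cos(pi(y - jh)/h) - 1]/(y - jh).
    Assumes y is not itself a grid point jh (there the summand's limit is 0)."""
    jh = h * np.arange(-N, N + 1)
    return (h / np.pi) * np.sum(f(jh) * (np.cos(np.pi * (y - jh) / h) - 1.0)
                                / (y - jh))

f = lambda x: 1.0 / (1.0 + x**2)
approx = hilbert_sinc(f, y=0.37, h=0.1, N=500)   # exact value: -y/(1+y^2)
```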

2. Fourier method (Weideman and James, 1992):

H\{f\}(y) = F^{-1}\{ i \, \mathrm{sgn}(k) \, F\{f\}(k) \}    (19.41)

where F denotes the Fourier transform and F^{-1} is the inverse transform.


3. Rational Chebyshev series (Weideman, 1995b; Weideman and James, 1992):

H\{f\}(y) = \sum_{j=-\infty}^{\infty} i \, \mathrm{sgn}(j) \, a_j \rho_j(y)    (19.42)

where sgn(j) is +1 for j ≥ 0 and −1 for j < 0, the a_j are the coefficients of the expansion

f(y) = \sum_{j=-\infty}^{\infty} a_j \rho_j(y)    (19.43)

and the basis functions are

\rho_j(x) \equiv \frac{(1 + ix)^j}{(1 - ix)^{j+1}}    (19.44)
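The series (19.42)-(19.44) can be checked on f(x) = 1/(1 + x²), which has the two-term expansion a₀ = a₋₁ = 1/2 since ρ₀ + ρ₋₁ = 2/(1 + x²). The sketch below (names mine) confirms that Eq.(19.42) then reproduces H{f}(y) = −y/(1 + y²), and in particular that the imaginary parts cancel as they must for a real f:

```python
import numpy as np

def rho(j, x):
    """Rational Chebyshev basis function of Eq.(19.44)."""
    return (1 + 1j * x)**j / (1 - 1j * x)**(j + 1)

sgn = lambda j: 1.0 if j >= 0 else -1.0   # +1 for j >= 0, -1 for j < 0

y = np.linspace(-3.0, 3.0, 61)
# f = 1/(1+x^2) = (rho_0 + rho_{-1})/2, so a_0 = a_{-1} = 1/2; apply Eq.(19.42)
H = 1j * (sgn(0) * 0.5 * rho(0, y) + sgn(-1) * 0.5 * rho(-1, y))
# expected: H.real = -y/(1+y^2), H.imag = 0
```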

The Fourier algorithm is the easiest to program. In words, the procedure is to take the Fast Fourier Transform of f(x), multiply the transform at wavenumber k by i sgn(k), and then take the inverse Fast Fourier Transform of the result.
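A minimal sketch of this recipe on a 2π-periodic grid (the real-line version of Weideman and James requires more apparatus; the function name is mine):

```python
import numpy as np

def hilbert_fft(f):
    """Eq.(19.41) on a periodic grid: FFT, multiply the coefficient of
    wavenumber k by i*sgn(k), inverse FFT, and take the real part."""
    n = len(f)
    k = np.fft.fftfreq(n, 1.0 / n)      # integer wavenumbers, Nyquist negative
    return np.real(np.fft.ifft(1j * np.sign(k) * np.fft.fft(f)))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
out = hilbert_fft(np.cos(x))   # with this sign convention, H{cos} = -sin
```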

19.8 Spectrally-Accurate Quadrature Methods

19.8.1 Introduction: Gaussian and Clenshaw-Curtis Quadrature

The “quadrature problem” is to evaluate an integral such as

I \equiv \int_{a}^{b} f(x) \, w(x) \, dx    (19.45)

by an approximation of the form

I \approx \sum_{j=1}^{N} w_j f(x_j)    (19.46)

Here, w(x) ≥ 0 is a user-chosen weight function and f (x) is a smooth but otherwise arbi-

trary function. The “quadrature weights” wj are invariably different from the values of w(x)

evaluated at the “quadrature abscissas” xj . However, both quadrature weights and abscissas are independent of f (x), and depend only on the interval [a, b] and the weight function

w(x).

To obtain spectral accuracy, the underlying strategy of all algorithms discussed here is

to expand f (x) as a truncated spectral series and then integrate term-by-term, analytically.

For each weight function w(x) and choice of interval, x ∈ [a, b], there is a unique choice

of basis set such that the spectral integration yields a “Gaussian quadrature” or “Gauss-

Jacobi integration” scheme. As has already been discussed in Chapter 4, Sec. 3, when

w ≡ 1 and x ∈ [−1, 1], for example, the ordinary Legendre polynomials are the basis which

gives a Gaussian quadrature.

The special advantage of Gaussian quadrature is that the N -point formula is exact when f (x) is a polynomial of degree (2N − 1) or less. That is, Gaussian quadrature is exact (ignoring roundoff) when the degree of the polynomial is almost twice the number of quadrature points. Non-Gaussian quadratures are also possible, such as that obtained by using the

points. Non-Gaussian quadratures are also possible, such as that obtained by using the

Chebyshev roots as abscissas for a weight function w(x) = 1. The disadvantage of a non-

Gaussian scheme is that such a quadrature is exact only for polynomials of degree (N − 1),

that is, only about half the degree for which Gaussian quadrature is error-free.


Clenshaw and Curtis(1960) pointed out, however, that non-Gaussian schemes could

have some advantages over Gaussian quadrature. First, the Gaussian abscissas are the

roots of transcendental polynomials, which must be computed or looked up in tables. In

contrast, the roots and quadrature weights for Chebyshev polynomials are given by simple

analytical formulas. Second, for all Gaussian quadratures, when the integral is computed

twice using different N to check for accuracy, the evaluations of f (x) for the first computation are completely useless for the second computation. The reason is that for Gaussian

integration formulas, the abscissas for a given N never equal the abscissas of a different

N (except for the endpoints). This is unfortunate when the integrand f (x) is expensive to

evaluate. In contrast, for certain Chebyshev polynomial schemes, previous evaluations of

f (x) can be recycled when N is doubled. This permits inexpensive adaptive quadrature schemes which double N until two successive approximations to the integral agree to within a user-specified tolerance, or satisfy other criteria for accuracy explained below.

Thus, “Clenshaw-Curtis” quadratures have the twin virtues of simplicity and easy and

inexpensive adaptivity.

19.8.2 Clenshaw-Curtis Adaptivity

Clenshaw and Curtis(1960) noted that the Chebyshev-Lobatto grid of (N + 2) points

x_j = \cos\left( \frac{\pi j}{N + 1} \right), \qquad j = 0, 1, \ldots, (N + 1)    (19.47)

has the property that when N + 1 is doubled, all points on the old grid are also part of the

new, higher resolution grid. This allows inexpensive computation of the same integral for

multiple resolutions because it is only necessary to evaluate f (x) on the finest grid, and

this provides all the evaluations of f needed for the coarser grids, too.
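The nesting is easy to see numerically (a small sketch; the helper name is mine):

```python
import numpy as np

def lobatto_grid(n_intervals):
    """Chebyshev-Lobatto points of Eq.(19.47): cos(pi*j/(N+1)), j = 0..N+1,
    written here with n_intervals = N+1."""
    j = np.arange(n_intervals + 1)
    return np.cos(np.pi * j / n_intervals)

coarse = lobatto_grid(8)     # N + 1 = 8
fine = lobatto_grid(16)      # N + 1 doubled
# every coarse-grid point reappears as every second fine-grid point
print(np.allclose(coarse, fine[::2]))   # True
```

Thus f evaluated on the 17-point grid supplies, for free, all 9 values needed on the coarse grid.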

The Chebyshev polynomials are the images of cosine functions under the mapping

x = cos(t). Thus, the Clenshaw-Curtis grid is an evenly spaced grid in the trigonometric

coordinate t. Although also evenly spaced in t, the usual Chebyshev “interior” or “roots”

grid does not have the property that the points at small N are contained in points of higher

N , so Clenshaw and Curtis rejected it in favor of the Lobatto, endpoints-and-extrema grid.

Automatic adaptivity also requires an estimate for the error EN . Several possibilities

have been described in the literature, such as Clenshaw and Curtis' own estimate: the maximum of the absolute value of the three highest degree coefficients of the Chebyshev series of the indefinite integral of f . However, the simplest and most conservative is

E_N^{\mathrm{estimate}} \equiv | I_N - I_{N/2} |    (19.48)

where I_N and I_{N/2} denote the (N + 2)-point and ((N + 1)/2 + 1)-point approximations to

the integral I. In words, when N + 1 is doubled, the error is almost certainly less than the

difference between two successive estimates of the integral, provided that this difference is

small. Gentleman(1972a), who discusses and compares several estimates, says, "Considerable experience with the subroutine CQUAD (Gentleman, 1972c), however, indicates that this simple estimate [Eq.(19.48)] appears unusually realistic here [for quadrature]. ... The naive estimate also has the advantage that when the integral is being computed to essentially full-word accuracy, a reasonable indication of the representational rounding error in the answer is frequently given. The alternative error estimates ... ignore this, often to one's embarrassment."

We agree whole-heartedly. Computers have improved greatly in the 26 years since his

article was published, and there is little excuse for trading reliability for a slightly smaller

maximum value of N .


His comment on rounding error is a reminder that spectral methods are so accurate that

machine precision will limit accuracy rather than theoretical rates of convergence. The difference between IN and IN/2 cannot decrease below ten or one hundred times the machine epsilon, that is, the minimum error of a single floating point operation on a given species of computer (usually about 10^{−16} on most contemporary machines). In contrast, a theoretical

estimate can be smaller than machine-epsilon, which is meaningless unless one calculates

using multiple precision arithmetic.

The strategy for adaptive quadrature is simple: pick a starting value of N , compute an

approximation, double N + 1, compute another approximation, compare the difference to

the desired error tolerance, and then double N + 1 again and again until two successive

approximations differ by a satisfactorily tiny amount.

19.8.3 Mechanics

Boyd(1987c) has given a general treatment for Clenshaw-Curtis quadrature that embraces the infinite and semi-infinite intervals as well as x ∈ [−1, 1]. The first step is to transform

the interval in x to the trigonometric coordinate t. The second step is to approximate the

integral by an evenly spaced quadrature formula on the interval t ∈ [0, π]. The quadrature

weights are simply the integrals of the trigonometric cardinal functions with the “metric”

factor that results from the change of coordinates. The results may be summarized as

follows:

1. x ∈ [−1, 1]: The transformation of the integral is

x = \cos(t)    (19.49)

I = \int_{-1}^{1} f(x) \, dx = \int_{0}^{\pi} f(\cos(t)) \sin(t) \, dt    (19.50)

The quadrature approximation is

I_N \equiv \sum_{j=1}^{N} w_j f(\cos(t_j))    (19.51)

t_j \equiv \pi j / (N + 1), \qquad j = 1, 2, \ldots, N    (19.52)

w_j \equiv \frac{2}{N + 1} \sin(t_j) \sum_{m=1}^{N} \sin(m t_j) \, \frac{1 - \cos(m\pi)}{m}    (19.53)