
$$
B_0^{(j)} = B(y_j), \quad j = 0, 1, \ldots,
$$
$$
B_n^{(j)} = \frac{B_{n-1}^{(j+1)} - \bar c_n B_{n-1}^{(j)}}{1 - \bar c_n}, \quad j = 0, 1, \ldots,\ n = 1, 2, \ldots, \tag{1.9}
$$

where we have now defined

$$
\bar c_{2k-1} = \bar c_{2k} = c_k, \quad k = 1, 2, \ldots, \tag{1.10}
$$

with the $c_n$ as defined in (1.6).
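As a small illustration of the recursion (1.9) with the choice (1.10), the following Python sketch builds the triangular array of the $B_n^{(j)}$ from the samples $B(y_l)$; the function and variable names here are ours, and the inputs (the samples and the $c_k$ of (1.6)) are assumed to be supplied.

```python
def b_table(b_samples, c):
    """Build the array of B_n^(j) via the recursion (1.9).

    b_samples : [B(y_0), B(y_1), ..., B(y_N)]
    c         : [c_1, c_2, ...] as in (1.6); by (1.10),
                cbar_{2k-1} = cbar_{2k} = c_k.
    """
    N = len(b_samples) - 1
    cbar = [None] + [c[(n - 1) // 2] for n in range(1, N + 1)]  # cbar[1..N]
    B = [list(b_samples)]                  # B[n][j] holds B_n^(j)
    for n in range(1, N + 1):
        B.append([(B[n - 1][j + 1] - cbar[n] * B[n - 1][j]) / (1 - cbar[n])
                  for j in range(N - n + 1)])
    return B
```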
Before going on, we would like to mention that the problem we have described above arises naturally in the numerical evaluation of integrals of the form $B = \int_0^1 (\log x)\, x^s g(x)\, dx$, where $\Re s > -1$ and $g \in C^\infty[0,1]$. It is easy to see that $B = (d/ds)A$, where $A = \int_0^1 x^s g(x)\, dx$. Furthermore, the trapezoidal rule approximation $B(h)$ to $B$ with stepsize $h$ has an Euler–Maclaurin (E–M) expansion that is obtained by differentiating with respect to $s$ the E–M expansion of the trapezoidal rule approximation $A(h)$ to $A$. With this knowledge available, $B$ can be approximated by applying a generalized Richardson extrapolation process to $B(h)$. Traditionally, this approach has been adopted in multidimensional integration of singular functions as well. For a detailed discussion see [3,9].
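As a minimal worked instance of the relation $B = (d/ds)A$ (our own check, with $g(x) \equiv 1$, not an example taken from the paper), differentiation under the integral sign gives

$$
A = \int_0^1 x^s\, dx = \frac{1}{s+1}, \qquad B = \int_0^1 (\log x)\, x^s\, dx = \frac{d}{ds}\,\frac{1}{s+1} = -\frac{1}{(s+1)^2},
$$

so that, for example, $\int_0^1 \log x\, dx = -1$ at $s = 0$.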
If we arrange the $A_n^{(j)}$ and $B_n^{(j)}$ in two-dimensional arrays of the form

$$
\begin{array}{ccccc}
Q_0^{(0)} & & & & \\
Q_0^{(1)} & Q_1^{(0)} & & & \\
Q_0^{(2)} & Q_1^{(1)} & Q_2^{(0)} & & \\
Q_0^{(3)} & Q_1^{(2)} & Q_2^{(1)} & Q_3^{(0)} & \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{array} \tag{1.11}
$$

then the diagonal sequences $\{Q_n^{(j)}\}_{n=0}^\infty$ with fixed $j$ have much better convergence properties than the column sequences $\{Q_n^{(j)}\}_{j=0}^\infty$ with fixed $n$. In particular, the following convergence results are known:
1. The column sequences satisfy
$$
A_n^{(j)} - A = O(|c_{n+1}|^j) \quad \text{as } j \to \infty,
$$
$$
B_{2m+s}^{(j)} - B = O(j^{1-s}|c_{m+1}|^j) \quad \text{as } j \to \infty,\ s = 0, 1. \tag{1.12}
$$

2. Under the additional condition that
$$
\Re\sigma_{k+1} - \Re\sigma_k \geq d > 0, \quad k = 1, 2, \ldots, \ \text{for some fixed } d, \tag{1.13}
$$
and assuming that $\alpha_k$, $\dot\alpha_k$, and $\alpha_k\dot\sigma_k$ grow with $k$ at most like $\exp(\beta k^\eta)$ for some $\beta > 0$ and $\eta < 2$, the diagonal sequences satisfy, for all practical purposes,
$$
A_n^{(j)} - A = O\Biggl(\prod_{i=1}^n |c_i|\Biggr) \quad \text{as } n \to \infty,
$$
$$
B_n^{(j)} - B = O\Biggl(\prod_{i=1}^n |\bar c_i|\Biggr) \quad \text{as } n \to \infty. \tag{1.14}
$$
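Concretely (under our reading of (1.6), namely $c_k = \omega^{\sigma_k}$ with $0 < \omega < 1$, which is not reproduced in this excerpt), condition (1.13) gives $\Re\sigma_i \geq \Re\sigma_1 + (i-1)d$, so the diagonal bound in (1.14) decays superexponentially in $n$,
$$
\prod_{i=1}^n |c_i| = \omega^{\sum_{i=1}^n \Re\sigma_i} \leq \omega^{\,n\Re\sigma_1 + dn(n-1)/2},
$$
whereas the column bound in (1.12) decays only geometrically in $j$.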


The results pertaining to $A_n^{(j)}$ in (1.12) and (1.14), with real $\sigma_k$, are due to Bulirsch and Stoer [2]. The case of complex $\sigma_k$ is contained in [12], and so are the results on $B_n^{(j)}$. Actually, [12] gives a complete treatment of the general case in which
$$
A(y) \sim A + \sum_{k=1}^\infty \Biggl(\sum_{i=0}^{q_k} \alpha_{ki} (\log y)^i\Biggr) y^{\sigma_k} \quad \text{as } y \to 0+, \tag{1.15}
$$
where the $q_k$ are known arbitrary nonnegative integers, the $\alpha_{ki}$ are constants independent of $y$, and the $\sigma_k$ satisfy the condition
$$
\sigma_k \neq 0, \quad k = 1, 2, \ldots; \qquad \Re\sigma_1 \leq \Re\sigma_2 \leq \cdots; \qquad \lim_{k\to\infty} \Re\sigma_k = +\infty \tag{1.16}
$$
that is much weaker than that in (1.2). Thus, the asymptotic expansions in (1.1) and (1.7) are special cases of that in (1.15) with $q_k = 0$, $k = 1, 2, \ldots$, and $q_k = 1$, $k = 1, 2, \ldots$, respectively.
Comparison of the diagonal sequences $\{A_n^{(j)}\}_{n=0}^\infty$ and $\{B_n^{(j)}\}_{n=0}^\infty$ (with $j$ fixed) with the help of (1.14) reveals that the latter has inferior convergence properties, even though the computational costs of $A_n^{(j)}$ and $B_n^{(j)}$ are almost identical. (They involve the computation of $A(y_l)$, $j \leq l \leq j+n$, and $B(y_l)$, $j \leq l \leq j+n$, respectively.) As a matter of fact, from (1.6), (1.10), and (1.13) it follows that the bound on $|A_{2m}^{(j)} - A|$ is smaller than that on $|B_{2m}^{(j)} - B|$ by a factor of $O(\prod_{i=1}^m |c_{m+i}/c_i|) = O(\omega^{dm^2})$ as $m \to \infty$. This theoretical observation is also supported by numerical experiments. Judging from (1.14) again, we see that, when $\Re\sigma_{k+1} - \Re\sigma_k = d$ for all $k$ in (1.13), $B_{2n}^{(j)}$ will have an accuracy comparable to that of $A_n^{(j)}$. This, however, increases the cost of the extrapolation substantially, as the cost of computing $A(y_l)$ and $B(y_l)$ increases drastically with increasing $l$ in most cases of interest. This quantitative discussion makes it clear that the inferiority of $B_n^{(j)}$ relative to $A_n^{(j)}$ is actually mathematical and has nothing to do with numerics.
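With the same reading of (1.6) as in the note following (1.14), the comparison factor can be checked in one line: (1.13) gives $\Re\sigma_{m+i} - \Re\sigma_i \geq md$ for each $i$, hence
$$
\prod_{i=1}^m \left|\frac{c_{m+i}}{c_i}\right| = \prod_{i=1}^m \omega^{\Re\sigma_{m+i} - \Re\sigma_i} \leq \omega^{dm^2}.
$$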


From what we have so far it is easy to identify the Richardson extrapolation of (1.3) as method $E_0$ and the generalized Richardson extrapolation of (1.8) as method $E_1$. We now turn to the new procedure "$(d/d\xi)E_0$".
Let us now approximate $\dot A$ by $\dot A_n^{(j)} \equiv (d/d\xi)A_n^{(j)}$. This can be achieved computationally by differentiating the recursion relation in (1.5), the result being the following recursive algorithm:
$$
A_0^{(j)} = A(y_j) \quad \text{and} \quad \dot A_0^{(j)} = \dot A(y_j), \quad j = 0, 1, \ldots,
$$
$$
A_n^{(j)} = \frac{A_{n-1}^{(j+1)} - c_n A_{n-1}^{(j)}}{1 - c_n} \quad \text{and}
$$
$$
\dot A_n^{(j)} = \frac{\dot A_{n-1}^{(j+1)} - c_n \dot A_{n-1}^{(j)}}{1 - c_n} + \frac{\dot c_n}{1 - c_n}\bigl(A_n^{(j)} - A_{n-1}^{(j)}\bigr), \quad j = 0, 1, \ldots,\ n = 1, 2, \ldots. \tag{1.17}
$$
Here $\dot c_n \equiv (d/d\xi)c_n$, $n = 1, 2, \ldots$. This shows that we need two tables of the form given in (1.11), one for the $A_n^{(j)}$ and another for the $\dot A_n^{(j)}$. We also see that the computation of the $\dot A_n^{(j)}$ involves both $A(y)$ and $\dot A(y)$.
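The following Python sketch implements the two-table recursion (1.17); it is only an illustration of the formulas above (the names are ours), and it assumes that the samples $A(y_j)$, $\dot A(y_j)$ and the quantities $c_n$, $\dot c_n$ are supplied by the user.

```python
def richardson_with_derivative(a_samples, adot_samples, c, cdot):
    """Build the tables A_n^(j) and Adot_n^(j) of (1.17).

    a_samples, adot_samples : [A(y_0), ..., A(y_N)], [Adot(y_0), ..., Adot(y_N)]
    c, cdot                 : [c_1, ..., c_N] and [cdot_1, ..., cdot_N]
    Returns two lists of columns: A[n][j] = A_n^(j), Ad[n][j] = Adot_n^(j).
    """
    N = len(a_samples) - 1
    A, Ad = [list(a_samples)], [list(adot_samples)]
    for n in range(1, N + 1):
        cn, cdn = c[n - 1], cdot[n - 1]
        colA, colAd = [], []
        for j in range(N - n + 1):
            a_new = (A[n - 1][j + 1] - cn * A[n - 1][j]) / (1 - cn)
            ad_new = ((Ad[n - 1][j + 1] - cn * Ad[n - 1][j]) / (1 - cn)
                      + cdn / (1 - cn) * (a_new - A[n - 1][j]))
            colA.append(a_new)
            colAd.append(ad_new)
        A.append(colA)
        Ad.append(colAd)
    return A, Ad
```

Note that the $\dot A$-table cannot be built on its own: the update for $\dot A_n^{(j)}$ uses the entry $A_n^{(j)}$ of the first table, which is precisely why both tables are needed.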
The column sequences $\{\dot A_n^{(j)}\}_{j=0}^\infty$ converge to $\dot A$ in almost the same way the corresponding sequences $\{A_n^{(j)}\}_{j=0}^\infty$ converge to $A$, cf. (1.12). We have
$$
\dot A_n^{(j)} - \dot A = O(j|c_{n+1}|^j) \quad \text{as } j \to \infty. \tag{1.18}
$$
The diagonal sequences $\{\dot A_n^{(j)}\}_{n=0}^\infty$ converge to $\dot A$ in practically the same way the corresponding $\{A_n^{(j)}\}_{n=0}^\infty$ converge to $A$, subject to the mild conditions that $\sum_{i=1}^\infty |\dot c_i| < \infty$ and $\sum_{i=1}^n |\dot c_i/c_i| = O(n^a)$ as $n \to \infty$ for some $a \geq 0$, in addition to (1.13). We have, for all practical purposes, cf. (1.14),
$$
\dot A_n^{(j)} - \dot A = O\Biggl(\prod_{i=1}^n |c_i|\Biggr) \quad \text{as } n \to \infty. \tag{1.19}
$$

The stability properties of the column and diagonal sequences of the $\dot A_n^{(j)}$ are likewise analyzed in [14] and are shown to be very similar to those of the $A_n^{(j)}$. We refer the reader to [14] for details.
This completes our review of the motivation and results of [14]. In the next section we present the extension of the procedure of [14] to GREP$^{(1)}$. We derive the recursive algorithm for computing the approximations and for assessing their numerical stability. In Section 3 we discuss the stability and convergence properties of the new procedure subject to a set of appropriate sufficient conditions that are met in many cases of interest. The main results of this section are Theorem 3.3 on stability and Theorem 3.4 on convergence, and both are asymptotically optimal. In Section 4 we show how the method and theory of Sections 2 and 3 apply to the summation of some infinite series of logarithmic type via the $d^{(1)}$-transformation. Finally, in Section 5 we give two numerical examples that illustrate the theory and show the superiority of the new approach to derivatives of limits over the direct one. In the first example we apply the new approach to the computation of the derivative of the Riemann zeta function. In the second example we compute $(d/da)F(a, \frac{1}{2}; \frac{3}{2}; 1)$, where $F(a, b; c; z)$ is the Gauss hypergeometric function. This example shows clearly that our approach is very effective for computing derivatives of special functions, such as the hypergeometric functions, with respect to their parameters.


2. GREP$^{(1)}$ and its derivative

2.1. General preliminaries on GREP$^{(1)}$

As GREP$^{(1)}$ applies to functions $A(y)$ that are in the class $F^{(1)}$, we start by describing $F^{(1)}$.

Definition 2.1. We shall say that a function $A(y)$, defined for $0 < y \leq b$, for some $b > 0$, where $y$ can be a discrete or continuous variable, belongs to the set $F^{(1)}$, if there exist functions $\phi(y)$ and $\beta(y)$ and a constant $A$, such that
$$
A(y) = A + \phi(y)\beta(y), \tag{2.1}
$$
where $\beta(x)$, as a function of the continuous variable $x$ and for some $\eta \leq b$, is continuous for $0 \leq x \leq \eta$, and, for some constant $r > 0$, has a Poincaré-type asymptotic expansion of the form
$$
\beta(x) \sim \sum_{i=0}^\infty \beta_i x^{ir} \quad \text{as } x \to 0+. \tag{2.2}
$$
If, in addition, the function $B(t) \equiv \beta(t^{1/r})$, as a function of the continuous variable $t$, is infinitely differentiable for $0 \leq t \leq \eta^r$, we shall say that $A(y)$ belongs to the set $F^{(1)}_\infty$. Note that $F^{(1)}_\infty \subset F^{(1)}$.


Remark. $A = \lim_{y\to 0+} A(y)$ whenever this limit exists. If $\lim_{y\to 0+} A(y)$ does not exist, then $A$ is said to be the antilimit of $A(y)$. In this case $\lim_{y\to 0+} \phi(y)$ does not exist, as is obvious from (2.1) and (2.2).
It is assumed that the functions $A(y)$ and $\phi(y)$ are computable for $0 < y \leq b$ (keeping in mind that $y$ may be discrete or continuous depending on the situation) and that the constant $r$ is known. The constants $A$ and $\beta_i$ are not assumed to be known. The problem is to find (or approximate) $A$, whether it is the limit or the antilimit of $A(y)$ as $y \to 0+$, and GREP$^{(1)}$, the extrapolation procedure that corresponds to $F^{(1)}$, is designed to tackle precisely this problem.
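To make Definition 2.1 concrete, here is one standard kind of example (ours, not taken from the paper): for the partial sums $A_m = \sum_{k=1}^m k^{-s}$ with $\Re s > 1$, viewed as $A(y)$ with the discrete variable $y = 1/m$, the Euler–Maclaurin formula gives
$$
A(y) = \zeta(s) + y^{\,s-1}\Bigl(\frac{1}{1-s} + \frac{1}{2}\,y + \cdots\Bigr) \quad \text{as } y \to 0+,
$$
so that $A(y) \in F^{(1)}$ with $A = \zeta(s)$, $\phi(y) = y^{\,s-1}$, $r = 1$, and $\beta(y)$ the series in parentheses. Partial sums of this kind are closely related to the infinite series of logarithmic type treated via the $d^{(1)}$-transformation in Sections 4 and 5.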

Definition 2.2. Let $A(y) \in F^{(1)}$, with $\phi(y)$, $\beta(y)$, $A$, and $r$ being exactly as in Definition 2.1. Pick $y_l \in (0, b]$, $l = 0, 1, 2, \ldots$, such that $y_0 > y_1 > y_2 > \cdots$ and $\lim_{l\to\infty} y_l = 0$. Then $A_n^{(j)}$, the approximation to $A$, and the parameters $\bar\beta_i$, $i = 0, 1, \ldots, n-1$, are defined to be the solution of the system of $n+1$ linear equations
$$
A_n^{(j)} = A(y_l) + \phi(y_l) \sum_{i=0}^{n-1} \bar\beta_i\, y_l^{ir}, \quad j \leq l \leq j+n, \tag{2.3}
$$
provided the matrix of this system is nonsingular. It is this process that generates the approximations $A_n^{(j)}$ that we call GREP$^{(1)}$.
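For illustration only (this direct solve is not the recursive algorithm developed in the paper), the defining system (2.3) can be set up and solved with a generic linear solver; the sketch below assumes NumPy, and all names are ours.

```python
import numpy as np

def grep1_direct(j, n, y, A_of_y, phi, r):
    """Solve the (n+1)x(n+1) system (2.3) and return A_n^(j).

    y      : sequence y_0 > y_1 > ... (at least j+n+1 points)
    A_of_y : callable with A_of_y(y_l) = A(y_l);  phi : callable with phi(y_l)
    """
    yl = np.asarray(y, dtype=float)[j:j + n + 1]
    phis = np.array([phi(v) for v in yl])
    # Unknowns: [A_n^(j), betabar_0, ..., betabar_{n-1}];
    # the phi-term of (2.3) is moved to the left-hand side.
    M = np.ones((n + 1, n + 1))
    for i in range(n):
        M[:, i + 1] = -phis * yl**(i * r)
    rhs = np.array([A_of_y(v) for v in yl])
    return np.linalg.solve(M, rhs)[0]
```

Solving (2.3) directly in this way costs $O(n^3)$ operations per table entry; the recursive algorithm derived in this section is the practical alternative.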


As is seen, GREP$^{(1)}$ produces a two-dimensional table of approximations of the form given in (1.11).
Before going on, we let $t = y^r$ and $t_l = y_l^r$, $l = 0, 1, \ldots$, and define $a(t) \equiv A(y)$ and $\varphi(t) \equiv \phi(y)$. Then the equations in (2.3) take on the more convenient form