[6] G.A. Baker Jr., P.R. Graves-Morris, Padé Approximants, Encyclopedia of Mathematics and its Applications, 2nd Edition, Cambridge University Press, Cambridge.
[7] P. Borwein, On the irrationality of Σ 1/(q^n + r^n), J. Number Theory 37 (1991) 253–259.

[8] P. Borwein, On the irrationality of certain series, Math. Proc. Cambridge Philos. Soc. 112 (1992) 141–146.

[9] F. Brafman, On Touchard polynomials, Canad. J. Math. 9 (1957) 191–192.

[10] C. Brezinski, Padé-type Approximation and General Orthogonal Polynomials, ISNM, Vol. 50, Birkhäuser Verlag, Basel, 1980.

[11] C. Brezinski, J. Van Iseghem, Padé approximations, in: P.G. Ciarlet, J.L. Lions (Eds.), Handbook of Numerical Analysis, Vol. III, North-Holland, Amsterdam, 1994.

[12] L. Carlitz, Some polynomials of Touchard connected with the Bernoulli numbers, Canad. J. Math. 9 (1957) 188–190.
[13] L. Carlitz, Bernoulli and Euler numbers and orthogonal polynomials, Duke Math. J. 26 (1959) 1–16.

250 M. Prévost / Journal of Computational and Applied Mathematics 122 (2000) 231–250

[14] H. Cohen, Démonstration de l'irrationalité de ζ(3), d'après Apéry, Séminaire de Théorie des Nombres de Bordeaux, 5 octobre 1978.

[15] A. Draux, Polynômes Orthogonaux Formels — Applications, Lecture Notes in Mathematics, Vol. 974, Springer, Berlin, 1983.

[16] K.A. Driver, D.S. Lubinsky, Convergence of Padé approximants for a q-hypergeometric series (Wynn's Power Series III), Aequationes Math. 45 (1993) 1–23.

[17] D. Duverney, Approximation diophantienne et irrationalité de la somme de certaines séries de nombres rationnels, Thèse, Université de Lille I, 1993.

[18] D. Duverney, Approximants de Padé et U-dérivation, Bull. Soc. Math. France 122 (1994) 553–570.
[19] D. Duverney, Sur l'irrationalité de Σ r^n/(q^n − r), C.R. Acad. Sci. Paris, Sér. I 320 (1995) 1–4.

[20] M. Hata, On the linear independence of the values of polylogarithmic functions, J. Math. Pures Appl. 69 (1990) 133–173.

[21] M. Huttner, Irrationalité de certaines intégrales hypergéométriques, J. Number Theory 26 (1987) 166–178.

[22] T. Matala-Aho, On the irrationality measures for values of some q-hypergeometric series, Acta Univ. Oulu Ser. A Sci. Rerum Natur. 219 (1991) 1–112.

[23] E.M. Nikischin, On irrationality of the values of the functions F(x, s), Math. USSR Sbornik 37 (1980) 381–388.

[24] M. Prévost, Rate of convergence of Padé approximants for a particular Wynn series, Appl. Numer. Math. 17 (1995) 461–469.
[25] M. Prévost, On the irrationality of Σ t^n/(Aα^n + Bβ^n), J. Number Theory 73 (1998) 139–161.

[26] D. Shanks, Non-linear transformations of divergent and slowly convergent sequences, J. Math. Phys. 34 (1955) 1–42.

[27] V.N. Sorokin, On the irrationality of the values of hypergeometric functions, Math. USSR Sb. 55 (1) (1986) 243–257.

[28] T.J. Stieltjes, Sur quelques intégrales définies et leur développement en fractions continues, Quart. J. Math. 24 (1880) 370–382; Oeuvres, Vol. 2, Noordhoff, Groningen, 1918, pp. 378–394.

[29] T.J. Stieltjes, Recherches sur les fractions continues, Ann. Faculté Sci. Toulouse 8 (1894) J1–122; 9 (1895) A1–47; Oeuvres, Vol. 2, pp. 398–566.

[30] G. Szegő, Orthogonal Polynomials, Amer. Math. Soc. Colloq. Publ., Vol. XXIII, American Mathematical Society, Providence, RI, 1939.

[31] J. Touchard, Nombres exponentiels et nombres de Bernoulli, Canad. J. Math. 8 (1956) 305–320.
[32] G.N. Watson, E.T. Whittaker, A Course of Modern Analysis, Cambridge University Press, London, 1958.
[33] J. Wilson, Some hypergeometric orthogonal polynomials, SIAM J. Math. Anal. 11 (1980) 690–701.
[34] M. Wyman, L. Moser, On some polynomials of Touchard, Canad. J. Math. 8 (1956) 321–322.
[35] P. Wynn, L'ε-algoritmo e la tavola di Padé, Rend. Mat. Roma 20 (1961) 403.

Journal of Computational and Applied Mathematics 122 (2000) 251–273

www.elsevier.nl/locate/cam

The generalized Richardson extrapolation process GREP^{(1)} and computation of derivatives of limits of sequences with applications to the d^{(1)}-transformation

Avram Sidi

Computer Science Department, Technion, Israel Institute of Technology, Haifa 32000, Israel

Received 3 May 1999; received in revised form 15 December 1999

Abstract

Let {S_m} be an infinite sequence whose limit or antilimit S can be approximated very efficiently by applying a suitable extrapolation method E_0 to {S_m}. Assume that the S_m, and hence also S, are differentiable functions of some parameter ξ, (d/dξ)S being the limit or antilimit of {(d/dξ)S_m}, and that we need to approximate (d/dξ)S. A direct way of achieving this would be by applying again a suitable extrapolation method E_1 to the sequence {(d/dξ)S_m}, and this approach has often been used efficiently in various problems of practical importance. Unfortunately, as has been observed at least in some important cases, when (d/dξ)S_m and S_m have essentially different asymptotic behaviors as m → ∞, the approximations to (d/dξ)S produced by this approach, despite the fact that they are good, do not converge as quickly as those obtained for S, and this is puzzling. In a recent paper (A. Sidi, Extrapolation methods and derivatives of limits of sequences, Math. Comp. 69 (2000) 305–323) we gave a rigorous mathematical explanation of this phenomenon for the cases in which E_0 is the Richardson extrapolation process and E_1 is a generalization of it, and we showed that the phenomenon has nothing to do with numerics. Following that we proposed a very effective procedure to overcome this problem that amounts to first applying the extrapolation method E_0 to {S_m} and then differentiating the resulting approximations to S. As a practical means of implementing this procedure we also proposed the direct differentiation of the recursion relations of the extrapolation method E_0 used in approximating S. We additionally provided a thorough convergence and stability analysis in conjunction with the Richardson extrapolation process, from which we deduced that the new procedure for (d/dξ)S has practically the same convergence properties as E_0 for S. Finally, we presented an application to the computation of integrals with algebraic/logarithmic endpoint singularities via the Romberg integration. In this paper we continue this research by treating Sidi's generalized Richardson extrapolation process GREP^{(1)} in detail. We then apply the new procedure to various infinite series of logarithmic type (whether convergent or divergent) in conjunction with the d^{(1)}-transformation of Levin and Sidi. Both the theory and the numerical results of this paper too indicate that this approach is the preferred one for computing derivatives of limits of infinite sequences and series. © 2000 Elsevier Science B.V. All rights reserved.

MSC: 40A25; 41A60; 65B05; 65B10; 65D30

E-mail address: asidi@cs.technion.ac.il (A. Sidi)

0377-0427/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved.
PII: S0377-0427(00)00362-9


1. Introduction and review of recent developments

Let {S_m} be an infinite sequence whose limit or antilimit S can be approximated very efficiently by applying a suitable extrapolation method E_0 to {S_m}. Assume that the S_m, and hence also S, are differentiable functions of some parameter ξ, (d/dξ)S being the limit or antilimit of {(d/dξ)S_m}, and that we need to approximate (d/dξ)S. A direct way of achieving this would be by applying again a suitable extrapolation method E_1 to the sequence {(d/dξ)S_m}, and this approach has often been used efficiently in various problems of practical importance. When S_m and (d/dξ)S_m have essentially different asymptotic behaviors as m → ∞, the approximations to (d/dξ)S produced by applying E_1 to {(d/dξ)S_m} do not converge to (d/dξ)S as quickly as the approximations to S obtained by applying E_0 to {S_m}, even though they may be good. This is a curious and disturbing phenomenon that calls for an explanation and a befitting remedy, and both of these issues were addressed by the author in the recent paper [14] via the Richardson extrapolation. As far as is known to us, [14] is the first work that handles this problem.

The procedure to cope with the problem above that was proposed in [14] amounts to first applying the extrapolation method E_0 to {S_m} and then differentiating the resulting approximations to S. As far as practical implementation of this procedure is concerned, it was proposed in [14] to actually differentiate the recursion relations satisfied by the method E_0.

In the present work we continue this new line of research by extending the approach of [14] to GREP^{(1)}, the simplest case of the generalized Richardson extrapolation process GREP of Sidi [7]. Following this, we consider the application of the d^{(1)}-transformation, the simplest of the d-transformations of Levin and Sidi [6], to computing derivatives of sums of infinite series. Now GREP is a most powerful extrapolation procedure that can be applied to a very large class of sequences, and the d-transformations are GREPs that can be applied successfully again to a very large class of infinite series. Indeed, it is known theoretically and has been observed numerically that GREP in general and the d-transformations in particular have scopes larger than most known extrapolation methods.

Before we go on to the main theme of this paper, we will give a short review of the motivation and results of [14]. This will also help establish some of the notation that we will use in the remainder of this work and set the stage for further developments. As we did in [14], here too we will keep the treatment general by recalling that infinite sequences are either directly related to, or can be formally associated with, a function A(y), where y may be a continuous or discrete variable.

Let a function A(y) be known and hence computable for y ∈ (0, b] with some b > 0, the variable y being continuous or discrete. Assume, furthermore, that A(y) has an asymptotic expansion of the form

A(y) ∼ A + Σ_{k=1}^∞ α_k y^{σ_k}   as y → 0+,   (1.1)

where the σ_k are known scalars satisfying

σ_k ≠ 0,  k = 1, 2, …;   ℜσ_1 < ℜσ_2 < ⋯,   lim_{k→∞} ℜσ_k = +∞,   (1.2)

and A and α_k, k = 1, 2, …, are constants independent of y that are not necessarily known.


From (1.1) and (1.2) it is clear that A = lim_{y→0+} A(y) when this limit exists. When lim_{y→0+} A(y) does not exist, A is the antilimit of A(y) for y → 0+, and in this case ℜσ_1 ≤ 0 necessarily. In any case, A can be approximated very effectively by the Richardson extrapolation process that is defined via the linear systems of equations

A(y_l) = A_n^{(j)} + Σ_{k=1}^n ᾱ_k y_l^{σ_k},   j ≤ l ≤ j + n,   (1.3)

with the y_l picked as

y_l = y_0 ω^l,  l = 0, 1, …,  for some y_0 ∈ (0, b] and ω ∈ (0, 1).   (1.4)

Here the A_n^{(j)} are the approximations to A and the ᾱ_k are additional (auxiliary) unknowns. As is well known, the A_n^{(j)} can be computed very efficiently by the following algorithm due to Bulirsch and Stoer [2]:

A_0^{(j)} = A(y_j),  j = 0, 1, …,

A_n^{(j)} = (A_{n-1}^{(j+1)} − c_n A_{n-1}^{(j)}) / (1 − c_n),  j = 0, 1, …,  n = 1, 2, …,   (1.5)

where we have defined

c_n = ω^{σ_n},  n = 1, 2, … .   (1.6)
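To illustrate the recursion (1.5)–(1.6), the following minimal Python sketch (our own, not from the paper; the model function and all names are illustrative) tabulates the A_n^{(j)} on the geometric abscissas (1.4) and checks that the extrapolated value reproduces the limit A:

```python
import math

def richardson_table(A, y0=1.0, omega=0.5, sigma=lambda k: k, p=8):
    """Tabulate the approximations A_n^{(j)} of (1.5) on the abscissas
    y_l = y0 * omega**l of (1.4), using c_n = omega**sigma(n) as in (1.6).
    Returns T with T[n][j] = A_n^{(j)} for 0 <= j + n <= p."""
    y = [y0 * omega**l for l in range(p + 1)]
    T = [[A(yl) for yl in y]]                       # A_0^{(j)} = A(y_j)
    for n in range(1, p + 1):
        cn = omega ** sigma(n)                      # (1.6)
        T.append([(T[n - 1][j + 1] - cn * T[n - 1][j]) / (1 - cn)
                  for j in range(p + 1 - n)])       # (1.5)
    return T

# Model: A(y) = (e^y - 1)/y has the expansion 1 + y/2 + y^2/6 + ...,
# i.e. A = 1 and sigma_k = k, so the default sigma applies.
T = richardson_table(lambda y: math.expm1(y) / y)
print(abs(T[0][0] - 1.0))   # raw value at y = 1: error about 0.72
print(abs(T[8][0] - 1.0))   # extrapolated value: error near machine precision
```

Note that each step n uses a different factor c_n, which is what distinguishes (1.5) from classical Romberg-type schemes with a single ratio.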

Let us now consider the situation in which A(y), and hence A, depend on some real or complex parameter ξ and are continuously differentiable in ξ for ξ in some set X of the real line or the complex plane, and we are interested in computing (d/dξ)A ≡ Ȧ. Let us assume, in addition to the above, that (d/dξ)A(y) ≡ Ȧ(y) has an asymptotic expansion for y → 0+ that is obtained by differentiating that in (1.1) term by term. (This assumption is satisfied at least in some cases of practical interest, as can be shown rigorously.) Finally, let us assume that the α_k and σ_k, as well as A(y) and A, depend on ξ and that they are continuously differentiable for ξ ∈ X. As a consequence of these assumptions we have

Ȧ(y) ∼ Ȧ + Σ_{k=1}^∞ (α̇_k + α_k σ̇_k log y) y^{σ_k}   as y → 0+,   (1.7)

where α̇_k ≡ (d/dξ)α_k and σ̇_k ≡ (d/dξ)σ_k. Obviously, Ȧ and the α̇_k and σ̇_k are independent of y. As a result, the infinite sum on the right-hand side of (1.7) is simply of the form Σ_{k=1}^∞ (β_{k0} + β_{k1} log y) y^{σ_k} with β_{k0} and β_{k1} constants independent of y.

Note that when the σ_k do not depend on ξ, we have σ̇_k = 0 for all k, and, therefore, the asymptotic expansion in (1.7) becomes of exactly the same form as that given in (1.1). This means that we can apply the Richardson extrapolation process above directly to Ȧ(y) and obtain very good approximations to Ȧ. This amounts to replacing A(y_j) in (1.5) by Ȧ(y_j), keeping everything else the same. However, when the σ_k are functions of ξ, the asymptotic expansion in (1.7) is essentially different from that in (1.1). This is so since y^{σ_k} log y and y^{σ_k} behave entirely differently as y → 0+. In this case the application of the Richardson extrapolation process directly to Ȧ(y) does not produce approximations to Ȧ that are of practical value.
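By contrast, the procedure of [14], differentiating the recursion relations themselves, remains effective. The Python sketch below (a constructed example of ours, not taken from the paper) differentiates (1.5) with respect to ξ by the quotient rule, which requires ċ_n = (d/dξ)c_n alongside c_n. The model has σ_1 = ξ and σ_2 = ξ + 1, so both exponents depend on ξ:

```python
import math

# Constructed example: A(y; xi) = e^xi + 2 y^xi - 3 y^(xi+1), so sigma_1 = xi and
# sigma_2 = xi + 1 depend on xi with derivative 1, and the limit A = e^xi has
# xi-derivative e^xi as well.
xi, omega, y0, p = 0.5, 0.5, 1.0, 2
A  = lambda y: math.exp(xi) + 2 * y**xi - 3 * y**(xi + 1)
Ad = lambda y: math.exp(xi) + (2 * y**xi - 3 * y**(xi + 1)) * math.log(y)  # dA/dxi

y = [y0 * omega**l for l in range(p + 1)]
T  = [[A(yl)  for yl in y]]        # A_n^{(j)}, built by (1.5)
Td = [[Ad(yl) for yl in y]]        # d/dxi A_n^{(j)}, built by differentiating (1.5)
for n in range(1, p + 1):
    cn  = omega ** (xi + n - 1)    # c_n = omega^{sigma_n}, sigma_n = xi + n - 1
    cnd = cn * math.log(omega)     # dc_n/dxi, since d(sigma_n)/dxi = 1
    row, rowd = [], []
    for j in range(p + 1 - n):
        num = T[n - 1][j + 1] - cn * T[n - 1][j]
        row.append(num / (1 - cn))                                   # (1.5)
        numd = Td[n - 1][j + 1] - cnd * T[n - 1][j] - cn * Td[n - 1][j]
        rowd.append(numd / (1 - cn) + num * cnd / (1 - cn)**2)       # quotient rule
    T.append(row)
    Td.append(rowd)

# The two-term expansion is eliminated exactly after two steps, so both the
# limit and its xi-derivative are recovered to roundoff:
print(abs(T[2][0] - math.exp(xi)), abs(Td[2][0] - math.exp(xi)))
```

Since the elimination in (1.5) is exact for every ξ here, the differentiated table Td is the exact ξ-derivative of the map ξ ↦ A_n^{(j)}, which is the essence of the procedure.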


The existence of an asymptotic expansion for Ȧ(y) of the form given in (1.7), however, suggests immediately that a generalized Richardson extrapolation process can be applied to produce approximations to Ȧ in an efficient manner. In keeping with the convention introduced by the author in [12], this extrapolation process is defined via the linear systems

B(y_l) = B_n^{(j)} + Σ_{k=1}^{⌊(n+1)/2⌋} β̄_{k0} y_l^{σ_k} + Σ_{k=1}^{⌊n/2⌋} β̄_{k1} y_l^{σ_k} log y_l,   j ≤ l ≤ j + n,   (1.8)

where B(y) ≡ Ȧ(y), the B_n^{(j)} are the approximations to B ≡ Ȧ, and the β̄_{k0} and β̄_{k1} are additional (auxiliary) unknowns. (This amounts to "eliminating" from (1.7) the functions y^{σ_1}, y^{σ_1} log y, y^{σ_2}, y^{σ_2} log y, …, in this order.) With the y_l as in (1.4), the approximations B_n^{(j)} can be computed very efficiently by the following algorithm developed in Sidi [12] and denoted the SGRom-algorithm there:

B_0^{(j)} = B(y_j),
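For small n, the linear system (1.8) can also be solved directly, which is a useful check on any implementation of the recursive algorithm. The sketch below (a constructed test of ours, assuming numpy is available; it is not code from the paper) takes n = 4, so the unknowns are B_4^{(0)} together with β̄_{10}, β̄_{20} (⌊(n+1)/2⌋ = 2 terms) and β̄_{11}, β̄_{21} (⌊n/2⌋ = 2 terms):

```python
import math
import numpy as np

# Constructed test: B(y) is the xi-derivative of A(y; xi) = e^xi + 2 y^xi - 3 y^(xi+1)
# at xi = 0.5, so sigma_1 = 0.5, sigma_2 = 1.5 and B = e^0.5 exactly.
xi, omega, y0, n = 0.5, 0.5, 1.0, 4
s1, s2 = xi, xi + 1.0
B = lambda y: math.exp(xi) + (2 * y**s1 - 3 * y**s2) * math.log(y)

y = [y0 * omega**l for l in range(n + 1)]          # abscissas (1.4), j = 0
# One row of (1.8) per l; unknown order: B_4^{(0)}, beta_10, beta_20, beta_11, beta_21.
M = np.array([[1.0, yl**s1, yl**s2,
               yl**s1 * math.log(yl), yl**s2 * math.log(yl)] for yl in y])
sol = np.linalg.solve(M, np.array([B(yl) for yl in y]))
print(abs(sol[0] - math.exp(xi)))                  # B_4^{(0)} matches the exact derivative
```

Because B(y) here is exactly of the form being eliminated, the solve recovers B to roundoff; the recursive SGRom-algorithm produces the same B_n^{(j)} without forming the matrix.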