
than $\{B_{2m}^{(j)}\}_{j=0}^{\infty}$. Again, we have verified the superiority of our new approach to the direct approach,

at least with respect to column sequences.

We would like to add that the theory of [11] applies to the more general class of functions $A(y)$ that have asymptotic expansions of the form
$$A(y) \sim A + \sum_{k=1}^{\infty} \phi_k(y) \left( \sum_{i=0}^{q_k} \beta_{ki} (\log y)^i \right) \text{ as } y \to 0+,$$
where the $q_k$ are arbitrary nonnegative integers.

4. Application to infinite series via the $d^{(1)}$-transformation: the $(d/d\xi)d^{(1)}$-transformation

4.1. General usage

Let $\{S_m\}$ be the sequence of partial sums of the infinite series $\sum_{k=1}^{\infty} v_k$, namely,
$$S_m = \sum_{k=1}^{m} v_k, \quad m = 1, 2, \ldots. \tag{4.1}$$

Assume that

$$v_m \sim \sum_{i=0}^{\infty} \alpha_i m^{\gamma - i} \text{ as } m \to \infty; \quad \alpha_0 \neq 0, \quad \gamma + 1 \neq 0, 1, 2, \ldots. \tag{4.2}$$

As is known, $\lim_{m\to\infty} S_m$ exists and is finite if and only if $\Re\gamma + 1 < 0$. When $\Re\gamma + 1 \geq 0$ but $\gamma + 1 \neq 0, 1, 2, \ldots$, $\{S_m\}$ diverges but has a well-defined and useful antilimit, as has been shown in Theorem 4.1 of [10]. For all $\gamma$ in (4.2) this theorem reads as follows:

Theorem 4.1. With $S_m$ as in (4.1) and (4.2), we have
$$S_m \sim S + m v_m \sum_{i=0}^{\infty} \beta_i m^{-i} \text{ as } m \to \infty; \quad \beta_0 \neq 0. \tag{4.3}$$
Here $S = \lim_{m\to\infty} S_m$ when $\Re\gamma + 1 < 0$, and $S$ is the antilimit of $\{S_m\}$ otherwise.

The part of Theorem 4.1 concerning convergent sequences $\{S_m\}$ is already contained in Theorem 2 of [6].

268 A. Sidi / Journal of Computational and Applied Mathematics 122 (2000) 251–273

From Theorem 4.1 it is clear that GREP$^{(1)}$ can be applied to the sequence $\{S_m\}$ by drawing the analogy $a(t) \leftrightarrow S_m$, $t \leftrightarrow m^{-1}$, $\varphi(t) \leftrightarrow m v_m$, and $A \leftrightarrow S$, and by picking $t_l = 1/R_l$ for some positive integers $R_l$, $1 \leq R_0 < R_1 < R_2 < \cdots$; the W-algorithm can be used to implement it. This GREP$^{(1)}$ is simply the Levin–Sidi $d^{(1)}$-transformation, and we denote its $A_n^{(j)}$ by $S_n^{(j)}$.
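For illustration, the W-algorithm just mentioned admits a short implementation. The following Python sketch is ours (the paper gives no code); it uses the standard recursion $M_0^{(j)} = a(t_j)/\varphi(t_j)$, $N_0^{(j)} = 1/\varphi(t_j)$, divided differences over the $t_j$, and $A_n^{(j)} = M_n^{(j)}/N_n^{(j)}$, and demonstrates it on $\sum_{k=1}^{\infty} 1/k^2 = \pi^2/6$ with the simple choice $R_l = l+1$:

```python
def w_algorithm(t, a, phi):
    """W-algorithm for GREP^(1): returns the table A[n][j] of
    approximations A_n^(j) from the inputs t_j, a(t_j), phi(t_j)."""
    J = len(t)
    M = [a[j] / phi[j] for j in range(J)]   # M_0^(j) = a(t_j)/phi(t_j)
    N = [1.0 / phi[j] for j in range(J)]    # N_0^(j) = 1/phi(t_j)
    A = [[M[j] / N[j] for j in range(J)]]
    for n in range(1, J):
        M = [(M[j + 1] - M[j]) / (t[j + n] - t[j]) for j in range(J - n)]
        N = [(N[j + 1] - N[j]) / (t[j + n] - t[j]) for j in range(J - n)]
        A.append([M[j] / N[j] for j in range(J - n)])
    return A

# Demonstration (our choice): S_m = sum_{k<=m} 1/k^2 -> pi^2/6,
# with R_l = l+1, t_l = 1/R_l, a(t_l) = S_{R_l}, phi(t_l) = R_l * v_{R_l}.
R = list(range(1, 21))
v = lambda m: 1.0 / m ** 2
t = [1.0 / r for r in R]
a, s = [], 0.0
for r in R:
    s += v(r)            # R_l = l+1, so each step adds exactly one term
    a.append(s)
phi = [r * v(r) for r in R]
A = w_algorithm(t, a, phi)
print(A[-1][0])          # A_19^(0), close to pi^2/6 = 1.6449...
```

With the geometric choice (4.4) below, the same routine realizes the $d^{(1)}$-transformation with improved stability; only the list `R` changes.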

As already explained in [10,4], for the type of sequences considered here we should pick the $R_l$ such that $\{R_l\}$ increases exponentially, to ensure the best stability and convergence properties in the $S_n^{(j)}$. Exponential increase in the $R_l$ can be achieved by picking them, for example, as in

$$R_0 = 1 \text{ and } R_{l+1} = \lfloor \sigma R_l \rfloor + 1, \quad l = 0, 1, \ldots, \text{ for some } \sigma > 1. \tag{4.4}$$

(With $\sigma = 1$ we have $R_l = l + 1$, $l = 0, 1, \ldots$, for which the $d^{(1)}$-transformation becomes the Levin [5] u-transformation.) This gives $R_l = O(\sigma^l)$ as $l \to \infty$. Needless to say, $\sigma$ should not be picked too far from 1 to avoid too quick a growth in the $R_l$. We have found that $\sigma$ between 1.1 and 1.5 is sufficient for most purposes. Since $t_l = 1/R_l$, (4.4) implies that

$$\frac{t_l}{\sigma + t_l} \leq t_{l+1} < \frac{t_l}{\sigma}, \quad l = 0, 1, \ldots, \tag{4.5}$$

as a result of which $\{t_l\}$ satisfies (3.1) with $\omega = 1/\sigma \in (0,1)$. Therefore, Theorem 3.1 applies to the approximations $S_n^{(j)}$ to $S$ obtained via the $d^{(1)}$-transformation, as has been shown in [10]. Clearly, the corresponding exponent in (3.2) and (3.6) is $-\gamma - 1$ for this case.
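For concreteness, the following short Python sketch (ours) generates the $R_l$ of (4.4); with $\sigma = 1.2$ it reproduces the values $R_n$ listed in Table 2 and exhibits the geometric growth $R_l = O(\sigma^l)$:

```python
from math import floor

def geometric_R(sigma, count):
    """R_0 = 1 and R_{l+1} = floor(sigma * R_l) + 1, as in (4.4)."""
    R = [1]
    for _ in range(count - 1):
        R.append(floor(sigma * R[-1]) + 1)
    return R

print(geometric_R(1.2, 11))   # [1, 2, 3, 4, 5, 7, 9, 11, 14, 17, 21]
```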

If, in addition, $v_m$ and $S$ are differentiable functions of a parameter $\xi$, $\dot{S}$ is the limit or antilimit of $\{\dot{S}_m\}$, and
$$\dot{v}_m = v_m [K \log m + L + o(1)] \text{ as } m \to \infty, \text{ for some constants } K \neq 0 \text{ and } L, \tag{4.6}$$
and the asymptotic expansion in (4.3) can be differentiated with respect to $\xi$ term by term, then Theorems 3.2–3.4 apply to $\{\dot{S}_n^{(j)}\}_{j=0}^{\infty}$ without any modifications. We shall denote this method that produces the $\dot{S}_n^{(j)}$ the $(d/d\xi)d^{(1)}$-transformation for short. The rate of convergence of the $\dot{S}_n^{(j)}$ to $\dot{S}$ is almost identical to the rate of convergence of the $S_n^{(j)}$ to $S$, as we have observed in many numerical examples and as we have proved in Theorem 3.4 for the column sequences.

To summarize the relevant convergence results for the $d^{(1)}$- and $(d/d\xi)d^{(1)}$-transformations as these are applied to $\{S_m\}$ and $\{\dot{S}_m\}$ above, we have from Theorems 3.1 and 3.4
$$S_n^{(j)} - S = O(v_{R_j} R_j^{-n+1}) = O(\sigma^{(\gamma+1-n)j}) \text{ as } j \to \infty,$$
$$\dot{S}_n^{(j)} - \dot{S} = O(v_{R_j} R_j^{-n+1} \log R_j) = O(j\, \sigma^{(\gamma+1-n)j}) \text{ as } j \to \infty. \tag{4.7}$$

Of course, these results are not optimal. Optimal results follow from (3.3) and (3.19), and we leave them to the reader. The results for $\Gamma_n^{(j)}$ and $\dot{\Gamma}_n^{(j)}$ that pertain to stability can be obtained from Theorems 3.1–3.3.

For the sake of completeness we note that the $(d/d\xi)$W-algorithm takes $t_j = 1/R_j$, $a(t_j) = \sum_{k=1}^{R_j} v_k$, $\dot{a}(t_j) = \sum_{k=1}^{R_j} \dot{v}_k$, $\varphi(t_j) = R_j v_{R_j}$, and $\dot{\varphi}(t_j) = R_j \dot{v}_{R_j}$ as input for this problem.
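Since the divided-difference coefficients in the W-algorithm involve only the $t_j$ and are independent of $\xi$, the $(d/d\xi)$W-algorithm can be realized by carrying the differentiated quantities $\dot{M}_n^{(j)}$, $\dot{N}_n^{(j)}$ through the same recursion and setting $\dot{A}_n^{(j)} = (\dot{M}_n^{(j)} - A_n^{(j)} \dot{N}_n^{(j)})/N_n^{(j)}$. The Python sketch below implements this under that assumption; the demonstration data $v_m = m^{-2}$ (so that $S = \zeta(2)$ and $\dot{S} = \zeta'(2)$) are our choice:

```python
import math

def ddxi_w_algorithm(t, a, adot, phi, phidot):
    """Differentiated W-algorithm: the dotted quantities satisfy the same
    divided-difference recursion as M, N, since its coefficients depend
    only on the t_j. Returns (A, Adot) at the deepest diagonal entry."""
    J = len(t)
    M  = [a[j] / phi[j] for j in range(J)]
    N  = [1.0 / phi[j] for j in range(J)]
    Md = [(adot[j] - a[j] * phidot[j] / phi[j]) / phi[j] for j in range(J)]
    Nd = [-phidot[j] / phi[j] ** 2 for j in range(J)]
    for n in range(1, J):
        M  = [(M[j+1]  - M[j])  / (t[j+n] - t[j]) for j in range(J - n)]
        N  = [(N[j+1]  - N[j])  / (t[j+n] - t[j]) for j in range(J - n)]
        Md = [(Md[j+1] - Md[j]) / (t[j+n] - t[j]) for j in range(J - n)]
        Nd = [(Nd[j+1] - Nd[j]) / (t[j+n] - t[j]) for j in range(J - n)]
    A = M[0] / N[0]
    return A, (Md[0] - A * Nd[0]) / N[0]

# Inputs exactly as listed in the text, for v_m = m^(-2):
vm  = lambda m: 1.0 / m ** 2
vmd = lambda m: -math.log(m) / m ** 2   # d/dtheta of m^(-theta-1) at theta=1
R = [1]
for _ in range(16):
    R.append(int(1.2 * R[-1]) + 1)      # R_l as in (4.4) with sigma = 1.2
t  = [1.0 / r for r in R]
a  = [sum(vm(k)  for k in range(1, r + 1)) for r in R]
ad = [sum(vmd(k) for k in range(1, r + 1)) for r in R]
S, Sdot = ddxi_w_algorithm(t, a, ad,
                           [r * vm(r) for r in R], [r * vmd(r) for r in R])
print(S, Sdot)   # close to zeta(2) = pi^2/6 and zeta'(2) ~ -0.9375483
```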

Ë™

It is worth mentioning that we can also compute S by applying the d(2) -transformation directly

Ë™

to {S m }. The d(2) -transformation is a GREP(2) . As we mentioned earlier, this is less e ective than

the application of the (d=d )d(1) -transformation to {Sm }. We shall see this also through numerical

examples in the next section.


4.2. A special application

We next turn to an interesting application of the $(d/d\xi)d^{(1)}$-transformation to the summation of a class of infinite series $\sum_{k=1}^{\infty} \tilde{v}_k$, where $\tilde{v}_m$ has the form
$$\tilde{v}_m = [\log \psi(m)]\, v_m; \quad \psi(m) \sim \sum_{i=0}^{\infty} \mu_i m^{\eta - i} \text{ as } m \to \infty, \quad \mu_0 \neq 0 \text{ and } \eta \neq 0, \tag{4.8}$$

with $v_m$ as in (4.2). (When $\eta = 0$ the $d^{(1)}$-transformation is very effective on the series $\sum_{k=1}^{\infty} \tilde{v}_k$.) To this end, first let us consider the infinite series $\sum_{k=1}^{\infty} u_k(\xi)$, where

$$u_m(\xi) = v_m [\psi(m)]^{\xi}, \quad m = 1, 2, \ldots. \tag{4.9}$$

(Here $v_m$ and $\psi(m)$ do not depend on $\xi$.) Now it can be shown that $[\psi(m)]^{\xi} \sim \sum_{i=0}^{\infty} \varepsilon_i m^{\eta\xi - i}$ as $m \to \infty$, where $\varepsilon_0 = \mu_0^{\xi} \neq 0$. Consequently, $u_m(\xi) \sim \sum_{i=0}^{\infty} \hat{\alpha}_i m^{\hat{\gamma} - i}$ as $m \to \infty$, where $\hat{\alpha}_0 = \alpha_0 \varepsilon_0 \neq 0$ and $\hat{\gamma} = \gamma + \eta\xi$, so that $u_m(\xi)$ is of the form described in (4.2) for all $\xi$. That is to say, the $d^{(1)}$-transformation can be applied to sum $\sum_{k=1}^{\infty} u_k(\xi)$ for any $\xi$. Next, $\dot{u}_m(\xi) = u_m(\xi) \log \psi(m) \sim u_m(\xi)[\eta \log m + \log \mu_0 + o(1)]$ as $m \to \infty$; cf. (4.6). Therefore, the $(d/d\xi)d^{(1)}$-transformation can be used for summing $\sum_{k=1}^{\infty} \dot{u}_k(\xi)$ for any $\xi$. Finally, $u_m(0) = v_m$ and $\dot{u}_m(0) = \tilde{v}_m$, and hence the $(d/d\xi)d^{(1)}$-transformation can be used for summing $\sum_{k=1}^{\infty} \tilde{v}_k$ in particular. This can be done by setting $t_j = 1/R_j$, $a(t_j) = \sum_{k=1}^{R_j} v_k$, $\dot{a}(t_j) = \sum_{k=1}^{R_j} \tilde{v}_k$, $\varphi(t_j) = R_j v_{R_j}$, and $\dot{\varphi}(t_j) = R_j \tilde{v}_{R_j}$ in the $(d/d\xi)$W-algorithm.
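As a concrete instance of this construction (the data are our choice, not the paper's): take $v_m = m^{-2}$ and $\psi(m) = m$, so $\eta = 1$ and $\tilde{v}_m = (\log m)\, m^{-2}$, whose sum is $-\zeta'(2) \approx 0.93755$. The sketch below feeds the inputs listed above into a $(d/d\xi)$W-algorithm obtained by differentiating the W-recursion:

```python
import math

def ddxi_w(t, a, adot, phi, phidot):
    # Differentiated W-algorithm: the divided-difference coefficients are
    # xi-independent, so the dotted quantities obey the same recursion.
    J = len(t)
    M  = [a[j] / phi[j] for j in range(J)]
    N  = [1.0 / phi[j] for j in range(J)]
    Md = [(adot[j] - a[j] * phidot[j] / phi[j]) / phi[j] for j in range(J)]
    Nd = [-phidot[j] / phi[j] ** 2 for j in range(J)]
    for n in range(1, J):
        M  = [(M[j+1]  - M[j])  / (t[j+n] - t[j]) for j in range(J - n)]
        N  = [(N[j+1]  - N[j])  / (t[j+n] - t[j]) for j in range(J - n)]
        Md = [(Md[j+1] - Md[j]) / (t[j+n] - t[j]) for j in range(J - n)]
        Nd = [(Nd[j+1] - Nd[j]) / (t[j+n] - t[j]) for j in range(J - n)]
    return M[0] / N[0], (Md[0] - (M[0] / N[0]) * Nd[0]) / N[0]

# Inputs of this subsection with v_m = 1/m^2, psi(m) = m:
v      = lambda m: 1.0 / m ** 2
vtilde = lambda m: math.log(m) / m ** 2    # v~_m = [log psi(m)] v_m
R = [1]
for _ in range(16):
    R.append(int(1.2 * R[-1]) + 1)         # (4.4) with sigma = 1.2
t      = [1.0 / r for r in R]
a      = [sum(v(k)      for k in range(1, r + 1)) for r in R]
atilde = [sum(vtilde(k) for k in range(1, r + 1)) for r in R]
S, Sdot = ddxi_w(t, a, atilde,
                 [r * v(r) for r in R], [r * vtilde(r) for r in R])
print(Sdot)   # approximates sum_{k>=1} log(k)/k^2 = -zeta'(2) ~ 0.93755
```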

5. Numerical examples

In this section we wish to demonstrate numerically the effectiveness of $(d/d\xi)$GREP$^{(1)}$ via the $(d/d\xi)d^{(1)}$-transformation on some infinite series, convergent or divergent. We will do this with two examples. The first of these examples has already been treated in [14] within the framework of the Richardson extrapolation process.

Example 5.1. Consider the series $\sum_{k=1}^{\infty} k^{-\theta-1}$ that converges for $\Re\theta > 0$ and defines the Riemann zeta function $\zeta(\theta+1)$. As is known, $\zeta(z)$ can be continued analytically to the entire complex plane except $z = 1$, where it has a simple pole. As the term $v_m = m^{-\theta-1}$ is of the form described in the previous section, Theorem 4.1 applies to $S_m = \sum_{k=1}^{m} k^{-\theta-1}$ with $S = \zeta(\theta+1)$ and $\gamma = -\theta-1$, whether $\lim_{m\to\infty} S_m$ exists or not. Furthermore, the asymptotic expansion of $\dot{S}_m = \sum_{k=1}^{m} (-\log k)\, k^{-\theta-1}$ can be obtained by term-by-term differentiation of the expansion in (4.3), as has already been mentioned in [14]. This implies that the $(d/d\xi)d^{(1)}$-transformation can be applied to the computation of $\dot{S} = \zeta'(\theta+1)$, and Theorems 3.2–3.4 are valid with $\xi = \theta$. In particular, (4.7) is valid with $\gamma = -\theta-1$ there.

We applied the $(d/d\xi)d^{(1)}$-transformation to this problem to compute $\dot{S} = \zeta'(\theta+1)$. We picked the integers $R_l$ as in (4.4) with $\sigma = 1.2$ there. We considered the two cases (i) $\theta = 1$ and (ii) $\theta = -0.5$. Note that in case (i) both $\lim_{m\to\infty} S_m$ and $\lim_{m\to\infty} \dot{S}_m$ exist and are $S = \zeta(2)$ and $\dot{S} = \zeta'(2)$, respectively, while in case (ii) these limits do not exist and $S = \zeta(0.5)$ and $\dot{S} = \zeta'(0.5)$ are the corresponding antilimits. We also applied the $d^{(2)}$-transformation directly to $\{\dot{S}_m\}$ with the same $R_l$'s, the resulting approximations being denoted $B_n^{(j)}$, as in (2.19). The numerical results are shown in Tables 1–3.
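Case (ii) can be sketched as follows (our code; the recursion is the W-recursion differentiated with respect to the parameter): although $\sum_{k=1}^{\infty} k^{-0.5}$ diverges, the extrapolation converges rapidly to the antilimit $S = \zeta(0.5) \approx -1.46035$:

```python
import math

def ddxi_w(t, a, adot, phi, phidot):
    # Differentiated W-algorithm recursion (coefficients are xi-independent).
    J = len(t)
    M  = [a[j] / phi[j] for j in range(J)]
    N  = [1.0 / phi[j] for j in range(J)]
    Md = [(adot[j] - a[j] * phidot[j] / phi[j]) / phi[j] for j in range(J)]
    Nd = [-phidot[j] / phi[j] ** 2 for j in range(J)]
    for n in range(1, J):
        M  = [(M[j+1]  - M[j])  / (t[j+n] - t[j]) for j in range(J - n)]
        N  = [(N[j+1]  - N[j])  / (t[j+n] - t[j]) for j in range(J - n)]
        Md = [(Md[j+1] - Md[j]) / (t[j+n] - t[j]) for j in range(J - n)]
        Nd = [(Nd[j+1] - Nd[j]) / (t[j+n] - t[j]) for j in range(J - n)]
    return M[0] / N[0], (Md[0] - (M[0] / N[0]) * Nd[0]) / N[0]

theta = -0.5                       # case (ii): sum k^(-0.5) diverges
v  = lambda m: m ** (-theta - 1)   # v_m = m^(-1/2)
vd = lambda m: -math.log(m) * v(m)
R = [1]
for _ in range(16):
    R.append(int(1.2 * R[-1]) + 1)
t  = [1.0 / r for r in R]
a  = [sum(v(k)  for k in range(1, r + 1)) for r in R]
ad = [sum(vd(k) for k in range(1, r + 1)) for r in R]
S, Sdot = ddxi_w(t, a, ad, [r * v(r) for r in R], [r * vd(r) for r in R])
print(S)   # antilimit zeta(0.5) ~ -1.4603545
```

The derivative approximation `Sdot` tends in the same way to the antilimit $\dot{S} = \zeta'(0.5)$.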


Table 1
Numerical results on Process I for $\zeta(z)$ in Example 5.1, where $\zeta(z)$ is the Riemann zeta function, with $z = 2$. The $d^{(1)}$- and $(d/d\xi)d^{(1)}$-transformations on $\{S_m\}$ and $\{\dot{S}_m\}$ and the $d^{(2)}$-transformation on $\{\dot{S}_m\}$ are implemented with $\sigma = 1.2$ in (4.4). Here $P_n^{(j)} = |S_n^{(j+1)} - S|/|S_n^{(j)} - S|$, $Q_n^{(j)} = |\dot{S}_n^{(j+1)} - \dot{S}|/|\dot{S}_n^{(j)} - \dot{S}|$, and $Z_n^{(j)} = |B_n^{(j+1)} - \dot{S}|/|B_n^{(j)} - \dot{S}|$, where $S_n^{(j)}$, $\dot{S}_n^{(j)}$, and $B_n^{(j)}$ are the approximations obtained from the $d^{(1)}$-, $(d/d\xi)d^{(1)}$-, and $d^{(2)}$-transformations, respectively. All six columns are tending to $\sigma^{-7} = 0.279\ldots$

j     P_5^(j)    Q_5^(j)    Z_10^(j)   P_6^(j)    Q_6^(j)    Z_12^(j)
0     1.53D-01   1.62D-01   3.18D-01   1.09D-03   2.25D-02   3.08D-02
2     1.94D-01   2.09D-01   2.01D-01   3.23D-01   3.19D-01   1.06D-01
4     1.97D-01   2.10D-01   1.58D-01   2.30D-01   2.41D-01   1.58D-02
6     2.33D-01   2.46D-01   2.02D-01   2.44D-01   2.56D-01   4.27D-01
8     2.45D-01   2.57D-01   4.95D-01   2.51D-01   2.63D-01   2.93D-01
10    2.50D-01   2.61D-01   3.56D-01   2.56D-01   2.66D-01   2.65D-01
12    2.65D-01   2.75D-01   3.22D-01   2.66D-01   2.75D-01   2.58D-01
14    2.67D-01   2.76D-01   3.07D-01   2.68D-01   2.77D-01   2.60D-01
16    2.70D-01   2.79D-01   3.03D-01   2.71D-01   2.79D-01   2.69D-01
18    2.70D-01   2.79D-01   2.99D-01   2.71D-01   2.79D-01   2.78D-01
20    2.74D-01   2.82D-01   2.97D-01   2.74D-01   2.82D-01   2.85D-01

Table 2
Numerical results on Process II for $\zeta(z)$ in Example 5.1, where $\zeta(z)$ is the Riemann zeta function, with $z = 2$. The $d^{(1)}$- and $(d/d\xi)d^{(1)}$-transformations on $\{S_m\}$ and $\{\dot{S}_m\}$ and the $d^{(2)}$-transformation on $\{\dot{S}_m\}$ are implemented with $\sigma = 1.2$ in (4.4). Here $S_n^{(j)}$, $\dot{S}_n^{(j)}$, and $B_n^{(j)}$ are the approximations obtained from the $d^{(1)}$-, $(d/d\xi)d^{(1)}$-, and $d^{(2)}$-transformations, respectively. (The infinite series converge.)

n     R_n   |S_{R_n} - S|   |S_n^(0) - S|   |Ṡ_{R_n} - Ṡ|   |Ṡ_n^(0) - Ṡ|   |B_n^(0) - Ṡ|
0     1     6.45D-01        1.64D+00        9.38D-01        9.38D-01        9.38D-01
2     3     2.84D-01        1.99D-02        6.42D-01        3.67D-02        4.56D-01
4     5     1.81D-01        3.12D-05        4.91D-01        1.07D-04        3.28D-03
6     9     1.05D-01        7.08D-07        3.42D-01        1.56D-06        6.19D-04
8     14    6.89D-02        8.18D-09        2.53D-01        2.35D-08        3.26D-05
10    21    4.65D-02        3.71D-11        1.89D-01        1.25D-10        8.26D-07
12    32    3.08D-02        6.95D-14        1.38D-01        2.70D-13        5.11D-07
14    47    2.11D-02        2.55D-17        1.02D-01        1.44D-16        4.17D-09
16    69    1.44D-02        8.28D-20        7.54D-02        3.03D-19        1.60D-11
18    100   9.95D-03        1.14D-22        5.58D-02        4.90D-22        4.32D-13
20    146   6.83D-03        5.75D-26        4.09D-02        2.72D-25        2.14D-16
22    212   4.71D-03        1.52D-29        2.99D-02        4.53D-29        4.53D-17
24    307   3.25D-03        2.44D-30        2.19D-02        3.52D-29        1.97D-19

Table 1 shows the validity of the theory for Process I given in Sections 2–4 very clearly. The results of this table that have been computed with $\theta = 1$ can be understood as follows:


Table 3
Numerical results on Process II for $\zeta(z)$ in Example 5.1, where $\zeta(z)$ is the Riemann zeta function, with $z = 0.5$. The $d^{(1)}$- and $(d/d\xi)d^{(1)}$-transformations on $\{S_m\}$ and $\{\dot{S}_m\}$ and the $d^{(2)}$-transformation on $\{\dot{S}_m\}$ are implemented with $\sigma = 1.2$ in (4.4). Here $S_n^{(j)}$, $\dot{S}_n^{(j)}$, and $B_n^{(j)}$ are
