
$$g_{2k+2}^{(n)} = g_{2k}^{(n+2)} + \frac{\gamma_{n+2k+2}}{\gamma_{n+2k+1} - g_{2k}^{(n)}} - \frac{\gamma_{n+2k+2}}{\gamma_{n+2k+1} - g_{2k-2}^{(n+2)}}\,, \qquad k \in \mathbb{N}\,, \; n \in \mathbb{N}_0\,. \tag{3.17c}$$

If we define $g_{-2}^{(n)} = \infty$, then (3.17b) follows from (3.17a) and (3.17c).

(n) (n) (n) (n)

The z-independent parts c2k of r2k (z) and g2k of â€™2k (z), respectively, are connected. A comparison

of (3.9), (3.11), (3.14), and (3.16) yields

(n) (n)

g2k = c2k + n+2k+1 : (3.18)

Concerning the choice of the approximation to the limit, we proceed in the case of the epsilon algorithm just like in the case of Aitken's iterated $\Delta^2$ process and compute a new approximation to the limit after the computation of each new partial sum. Thus, if the index $m$ of the last partial sum $f_m(z)$ is even, $m = 2\mu$, we use as approximation to the limit $f(z)$ the transformation

$$\{ f_0(z), f_1(z), \ldots, f_{2\mu}(z) \} \;\to\; \varepsilon_{2\mu}^{(0)}\,, \tag{3.19}$$

and if $m$ is odd, $m = 2\mu + 1$, we use the transformation

$$\{ f_1(z), f_2(z), \ldots, f_{2\mu+1}(z) \} \;\to\; \varepsilon_{2\mu}^{(1)}\,. \tag{3.20}$$

These two relationships can be combined into a single equation, yielding [95, Eq. (4.3-6)]

$$\{ f_{m-2\lfloor m/2 \rfloor}(z), f_{m-2\lfloor m/2 \rfloor+1}(z), \ldots, f_m(z) \} \;\to\; \varepsilon_{2\lfloor m/2 \rfloor}^{(m-2\lfloor m/2 \rfloor)}\,, \qquad m \in \mathbb{N}_0\,. \tag{3.21}$$
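For illustration, the combined choice (3.21) can be transcribed directly. The sketch below (function name and structure are mine, not the paper's) runs Wynn's epsilon recursion $\varepsilon_{k+1}^{(n)} = \varepsilon_{k-1}^{(n+1)} + 1/[\varepsilon_k^{(n+1)} - \varepsilon_k^{(n)}]$ on the most recent partial sums and returns the element singled out in (3.21):

```python
def epsilon_estimate(s):
    """Wynn's epsilon algorithm; returns the approximation (3.21),
    i.e. eps_{2*floor(m/2)}^{(m - 2*floor(m/2))} built from s[0..m]."""
    m = len(s) - 1
    lo = m - 2 * (m // 2)            # starting index m - 2*floor(m/2) (0 or 1)
    prev = [0.0] * (m - lo + 2)      # eps_{-1}^{(n)} = 0
    cur = list(s[lo:])               # eps_0^{(n)} = s_n, n = lo, ..., m
    for _ in range(2 * (m // 2)):    # climb 2*floor(m/2) columns
        nxt = [prev[n + 1] + 1.0 / (cur[n + 1] - cur[n])
               for n in range(len(cur) - 1)]
        prev, cur = cur, nxt
    return cur[0]
```

For the partial sums of a geometric series the returned element is already exact at $\varepsilon_2$, in line with the epsilon algorithm's behaviour on linearly convergent sequences.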

4. The iteration of Brezinski's theta algorithm

Brezinski's theta algorithm is the following recursive scheme [13]:

$$\vartheta_{-1}^{(n)} = 0\,, \quad \vartheta_0^{(n)} = s_n\,, \qquad n \in \mathbb{N}_0\,, \tag{4.1a}$$

$$\vartheta_{2k+1}^{(n)} = \vartheta_{2k-1}^{(n+1)} + 1/[\Delta \vartheta_{2k}^{(n)}]\,, \qquad k, n \in \mathbb{N}_0\,, \tag{4.1b}$$

$$\vartheta_{2k+2}^{(n)} = \vartheta_{2k}^{(n+1)} + \frac{[\Delta \vartheta_{2k}^{(n+1)}]\,[\Delta \vartheta_{2k+1}^{(n+1)}]}{\Delta^2 \vartheta_{2k+1}^{(n)}}\,, \qquad k, n \in \mathbb{N}_0\,. \tag{4.1c}$$

342 E.J. Weniger / Journal of Computational and Applied Mathematics 122 (2000) 329–356

As in the case of Wynn's epsilon algorithm (3.1), only the elements $\vartheta_{2k}^{(n)}$ with even subscripts provide approximations to the (generalized) limit of the sequence to be transformed. The elements $\vartheta_{2k+1}^{(n)}$ with odd subscripts are only auxiliary quantities which diverge if the whole process converges.
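The recursions (4.1a)-(4.1c) translate directly into code. The following sketch (names are mine, not the paper's) builds the theta table column by column; $\Delta$ acts on the superscript $n$, so e.g. $\Delta \vartheta_{2k}^{(n+1)} = \vartheta_{2k}^{(n+2)} - \vartheta_{2k}^{(n+1)}$:

```python
def theta_table(s):
    """Brezinski's theta algorithm (4.1).  Returns a list of columns with
    cols[j][n] = theta_j^{(n)}; the input s holds s_0, s_1, ..."""
    cols = [[0.0] * (len(s) + 1), list(s)]        # theta_{-1} and theta_0 columns
    while True:
        j = len(cols) - 1                         # index of the column built next
        prev2, prev = cols[-2], cols[-1]          # columns j-2 and j-1
        new = []
        if j % 2 == 1:                            # odd column, rule (4.1b)
            for n in range(min(len(prev2), len(prev)) - 1):
                new.append(prev2[n + 1] + 1.0 / (prev[n + 1] - prev[n]))
        else:                                     # even column, rule (4.1c)
            for n in range(min(len(prev2), len(prev)) - 2):
                d_even = prev2[n + 2] - prev2[n + 1]   # Delta theta_{2k}^{(n+1)}
                d_odd = prev[n + 2] - prev[n + 1]      # Delta theta_{2k+1}^{(n+1)}
                d2_odd = prev[n + 2] - 2.0 * prev[n + 1] + prev[n]  # Delta^2 theta_{2k+1}^{(n)}
                new.append(prev2[n + 1] + d_even * d_odd / d2_odd)
        if not new:
            return cols[1:]                       # drop the artificial theta_{-1} column
        cols.append(new)
```

On the logarithmically convergent model series $\sum 1/i^2$ (used here purely as test data), the even columns approach $\pi^2/6$ far faster than the partial sums, which is the behaviour described in the text.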

The theta algorithm was derived from Wynn's epsilon algorithm (3.1) with the intention of overcoming the inability of the epsilon algorithm to accelerate logarithmic convergence. In that respect, the theta algorithm was a great success. Extensive numerical studies of Smith and Ford [87,88] showed that the theta algorithm is not only very powerful, but also much more versatile than the epsilon algorithm. Like the epsilon algorithm, it is an efficient accelerator for linear convergence and is also able to sum many divergent series. In addition, however, it is able to accelerate the convergence of many logarithmically convergent sequences and series.

As discussed, for example, in [97], new sequence transformations can be constructed by iterating explicit expressions for sequence transformations with low transformation orders. The best known example of such an iterated sequence transformation is probably Aitken's iterated $\Delta^2$ process (2.4), which is obtained by iterating Aitken's $\Delta^2$ formula (2.2).

The same approach is also possible in the case of the theta algorithm. A suitable closed-form expression, which may be iterated, is [95, Eq. (10.3-1)]

$$\vartheta_2^{(n)} = s_{n+1} - \frac{[\Delta s_n]\,[\Delta s_{n+1}]\,[\Delta^2 s_{n+1}]}{[\Delta s_{n+2}]\,[\Delta^2 s_n] - [\Delta s_n]\,[\Delta^2 s_{n+1}]}\,, \qquad n \in \mathbb{N}_0\,. \tag{4.2}$$

The iteration of this expression yields the following nonlinear recursive scheme [95, Eq. (10.3-6)]:

$$\mathcal{J}_0^{(n)} = s_n\,, \qquad n \in \mathbb{N}_0\,, \tag{4.3a}$$

$$\mathcal{J}_{k+1}^{(n)} = \mathcal{J}_k^{(n+1)} - \frac{[\Delta \mathcal{J}_k^{(n)}]\,[\Delta \mathcal{J}_k^{(n+1)}]\,[\Delta^2 \mathcal{J}_k^{(n+1)}]}{[\Delta \mathcal{J}_k^{(n+2)}]\,[\Delta^2 \mathcal{J}_k^{(n)}] - [\Delta \mathcal{J}_k^{(n)}]\,[\Delta^2 \mathcal{J}_k^{(n+1)}]}\,, \qquad k, n \in \mathbb{N}_0\,. \tag{4.3b}$$
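A minimal transcription of the scheme (4.3), assuming the input is a plain list of sequence elements (the names are mine); each new column consumes three more elements, so $\mathcal{J}_k^{(n)}$ requires $s_n, \ldots, s_{n+3k}$:

```python
def iterated_theta(s):
    """Columns of the iterated transformation (4.3); cols[k][n] = J_k^{(n)}."""
    col = list(s)
    cols = [col]
    while len(col) >= 4:
        new = []
        for n in range(len(col) - 3):
            d0 = col[n + 1] - col[n]            # Delta J_k^{(n)}
            d1 = col[n + 2] - col[n + 1]        # Delta J_k^{(n+1)}
            d2 = col[n + 3] - col[n + 2]        # Delta J_k^{(n+2)}
            dd0 = d1 - d0                       # Delta^2 J_k^{(n)}
            dd1 = d2 - d1                       # Delta^2 J_k^{(n+1)}
            new.append(col[n + 1] - d0 * d1 * dd1 / (d2 * dd0 - d0 * dd1))
        cols.append(new)
        col = new
    return cols
```

For the partial sums of a geometric series, $\mathcal{J}_1^{(n)}$ already reproduces the limit exactly, and on logarithmically convergent test series the higher columns converge rapidly.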

In convergence acceleration and summation processes, the iterated transformation $\mathcal{J}_k^{(n)}$ has properties similar to those of the theta algorithm from which it was derived: both are very powerful as well as very versatile. $\mathcal{J}_k^{(n)}$ is not only an effective accelerator for linear convergence and able to sum divergent series, but it is also able to accelerate the convergence of many logarithmically convergent sequences and series [11,74-77,95,97,100].

In spite of all these similarities, the iterated transformation $\mathcal{J}_k^{(n)}$ has one undeniable advantage over the theta algorithm, which ultimately explains why in this article only $\mathcal{J}_k^{(n)}$ is studied, but not the theta algorithm: the recursive scheme (4.3) for $\mathcal{J}_k^{(n)}$ is slightly less complicated than the recursive scheme (4.1) for the theta algorithm. On p. 282 of Weniger [95] it was emphasized that a replacement of (4.1b) by the simpler recursion

$$\vartheta_{2k+1}^{(n)} = 1/[\Delta \vartheta_{2k}^{(n)}]\,, \qquad k, n \in \mathbb{N}_0 \tag{4.4}$$

would lead to a modified theta algorithm which satisfies $\vartheta_{2k}^{(n)} = \mathcal{J}_k^{(n)}$.
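This remark can be checked numerically: replacing (4.1b) by (4.4) while keeping (4.1c) makes the even theta columns coincide with $\mathcal{J}_k^{(n)}$. A sketch (function names are mine, with sample test data):

```python
def modified_theta_even(s, k):
    """theta_{2k}^{(0)} of the modified scheme (4.1a), (4.4), (4.1c)."""
    even = list(s)                                       # theta_0 column
    for _ in range(k):
        # (4.4): theta_{2k+1}^{(n)} = 1 / Delta theta_{2k}^{(n)}
        odd = [1.0 / (even[n + 1] - even[n]) for n in range(len(even) - 1)]
        nxt = []
        for n in range(len(odd) - 2):                    # (4.1c), unchanged
            d_even = even[n + 2] - even[n + 1]
            d_odd = odd[n + 2] - odd[n + 1]
            d2_odd = odd[n + 2] - 2.0 * odd[n + 1] + odd[n]
            nxt.append(even[n + 1] + d_even * d_odd / d2_odd)
        even = nxt
    return even[0]

def iterated_theta_k(s, k):
    """J_k^{(0)} from the recursion (4.3), for comparison."""
    col = list(s)
    for _ in range(k):
        col = [col[n + 1]
               - (col[n+1] - col[n]) * (col[n+2] - col[n+1])
                 * (col[n+3] - 2*col[n+2] + col[n+1])
               / ((col[n+3] - col[n+2]) * (col[n+2] - 2*col[n+1] + col[n])
                  - (col[n+1] - col[n]) * (col[n+3] - 2*col[n+2] + col[n+1]))
               for n in range(len(col) - 3)]
    return col[0]
```

Both routines agree to rounding accuracy, which is exactly the statement $\vartheta_{2k}^{(n)} = \mathcal{J}_k^{(n)}$ quoted from [95].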

It is a direct consequence of the recursive scheme (4.3) that $3k + 1$ sequence elements $s_n, s_{n+1}, \ldots, s_{n+3k}$ are needed for the computation of $\mathcal{J}_k^{(n)}$. Thus, we now choose as input data the partial sums (1.4) of the (formal) power series (1.3) according to $s_n = f_n(z)$, and conjecture that all coefficients $\gamma_0, \gamma_1, \ldots, \gamma_{n+3k}$, which were used for the construction of $\mathcal{J}_k^{(n)}$, are exactly reproduced by a Taylor expansion. This means that we have to look for an accuracy-through-order relationship of the following kind:

$$f(z) - \mathcal{J}_k^{(n)} = O(z^{n+3k+1})\,, \qquad z \to 0\,. \tag{4.5}$$


Such an accuracy-through-order relationship would imply that $\mathcal{J}_k^{(n)}$ can be expressed as follows:

$$\mathcal{J}_k^{(n)} = f_{n+3k}(z) + G_k^{(n)} z^{n+3k+1} + O(z^{n+3k+2})\,, \qquad z \to 0\,. \tag{4.6}$$

The constant $G_k^{(n)}$ is the prediction made for the coefficient $\gamma_{n+3k+1}$, which is the first coefficient of the power series (1.3) not used for the computation of $\mathcal{J}_k^{(n)}$.

Unfortunately, the recursive scheme (4.3) is not suited for our purposes. This can be shown by computing $\mathcal{J}_1^{(n)}$ from the partial sums $f_n(z)$, $f_{n+1}(z)$, $f_{n+2}(z)$, and $f_{n+3}(z)$:

$$\mathcal{J}_1^{(n)} = f_{n+1}(z) - \frac{\gamma_{n+1}\,\gamma_{n+2}\,[\gamma_{n+3} z - \gamma_{n+2}]\, z^{n+2}}{\gamma_{n+3} z\,[\gamma_{n+2} z - \gamma_{n+1}] - \gamma_{n+1}\,[\gamma_{n+3} z - \gamma_{n+2}]}\,. \tag{4.7}$$

Superficially, it looks as if the accuracy-through-order relationship (4.5) is not satisfied by $\mathcal{J}_1^{(n)}$. However, the rational expression on the right-hand side contains the missing terms $\gamma_{n+2} z^{n+2}$ and $\gamma_{n+3} z^{n+3}$, as shown by the Taylor expansion

$$-\,\frac{\gamma_{n+1}\,\gamma_{n+2}\,[\gamma_{n+3} z - \gamma_{n+2}]\, z^{n+2}}{\gamma_{n+3} z\,[\gamma_{n+2} z - \gamma_{n+1}] - \gamma_{n+1}\,[\gamma_{n+3} z - \gamma_{n+2}]} = \gamma_{n+2} z^{n+2} + \gamma_{n+3} z^{n+3} - \frac{\gamma_{n+3}\,\{[\gamma_{n+2}]^2 - 2\gamma_{n+1}\gamma_{n+3}\}\, z^{n+4}}{\gamma_{n+1}\,\gamma_{n+2}} + O(z^{n+5})\,. \tag{4.8}$$

Thus, an expression which is in agreement with (4.6) can be obtained easily in the case of the simplest transform $\mathcal{J}_1^{(n)}$. Moreover, the Taylor expansion (4.8) shows that $\mathcal{J}_1^{(n)}$ makes the prediction

$$G_1^{(n)} = -\,\frac{\gamma_{n+3}\,\{[\gamma_{n+2}]^2 - 2\gamma_{n+1}\gamma_{n+3}\}}{\gamma_{n+1}\,\gamma_{n+2}} \tag{4.9}$$

for the first series coefficient $\gamma_{n+4}$ not used for the computation of $\mathcal{J}_1^{(n)}$. Of course, by including additional terms in the Taylor expansion (4.8), further predictions on series coefficients with higher indices can be made.
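The closed forms (4.7) and (4.9) are easy to cross-check numerically: the prediction $G_1^{(n)}$ must coincide with the $z \to 0$ limit of $[\mathcal{J}_1^{(n)} - f_{n+3}(z)]/z^{n+4}$. In the sketch below the coefficient list `gam` is arbitrary sample data, not from the paper:

```python
def j1(gam, n, z):
    """Closed-form J_1^{(n)} of eq. (4.7), built from f_{n+1}(z) and
    the coefficients gamma_{n+1}, ..., gamma_{n+3}."""
    f = sum(gam[i] * z**i for i in range(n + 2))          # partial sum f_{n+1}(z)
    num = gam[n + 1] * gam[n + 2] * (gam[n + 3] * z - gam[n + 2]) * z**(n + 2)
    den = (gam[n + 3] * z * (gam[n + 2] * z - gam[n + 1])
           - gam[n + 1] * (gam[n + 3] * z - gam[n + 2]))
    return f - num / den

def g1_prediction(gam, n):
    """Prediction (4.9) for the unused coefficient gamma_{n+4}."""
    return -(gam[n + 3] * (gam[n + 2]**2 - 2.0 * gam[n + 1] * gam[n + 3])
             / (gam[n + 1] * gam[n + 2]))
```

For the geometric-series coefficients $\gamma_\nu \equiv 1$, eq. (4.9) gives $G_1^{(n)} = 1$, which is even the exact value of $\gamma_{n+4}$.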

However, in the case of the more complicated transforms $\mathcal{J}_k^{(n)}$ with $k > 1$ it is by no means obvious whether and how an expression in agreement with (4.6) can be constructed. Consequently, it is certainly a good idea to replace the recursive scheme (4.3) by an alternative recursive scheme which leads directly to appropriate expressions for $\mathcal{J}_k^{(n)}$ with $k > 1$.

Many different expressions for $\vartheta_2^{(n)}$ in terms of $s_n$, $s_{n+1}$, $s_{n+2}$, and $s_{n+3}$ are known [95, Section 10.4]. For our purposes the appropriate expression is

$$\vartheta_2^{(n)} = s_{n+3} - \frac{[\Delta s_{n+2}]\,\{[\Delta s_{n+2}]\,[\Delta^2 s_n] + [\Delta s_{n+1}]^2 - [\Delta s_n]\,[\Delta s_{n+2}]\}}{[\Delta s_{n+2}]\,[\Delta^2 s_n] - [\Delta s_n]\,[\Delta^2 s_{n+1}]}\,. \tag{4.10}$$

Just like (4.2), this expression can be iterated and yields

$$\mathcal{J}_0^{(n)} = s_n\,, \qquad n \in \mathbb{N}_0\,, \tag{4.11a}$$

$$\mathcal{J}_{k+1}^{(n)} = \mathcal{J}_k^{(n+3)} - \frac{A_{k+1}^{(n)}}{B_{k+1}^{(n)}}\,, \qquad k, n \in \mathbb{N}_0\,, \tag{4.11b}$$

$$A_{k+1}^{(n)} = [\Delta \mathcal{J}_k^{(n+2)}]\,\{[\Delta \mathcal{J}_k^{(n+2)}]\,[\Delta^2 \mathcal{J}_k^{(n)}] + [\Delta \mathcal{J}_k^{(n+1)}]^2 - [\Delta \mathcal{J}_k^{(n)}]\,[\Delta \mathcal{J}_k^{(n+2)}]\}\,, \tag{4.11c}$$

$$B_{k+1}^{(n)} = [\Delta \mathcal{J}_k^{(n+2)}]\,[\Delta^2 \mathcal{J}_k^{(n)}] - [\Delta \mathcal{J}_k^{(n)}]\,[\Delta^2 \mathcal{J}_k^{(n+1)}]\,. \tag{4.11d}$$
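Since (4.10) is only a rearrangement of (4.2), both must produce identical values of $\vartheta_2^{(n)}$. A quick numerical check (a sketch, with sample input data):

```python
def theta2_eq42(s, n):
    """theta_2^{(n)} via the closed form (4.2)."""
    d0, d1, d2 = s[n+1] - s[n], s[n+2] - s[n+1], s[n+3] - s[n+2]
    dd0, dd1 = d1 - d0, d2 - d1                 # second differences
    return s[n+1] - d0 * d1 * dd1 / (d2 * dd0 - d0 * dd1)

def theta2_eq410(s, n):
    """theta_2^{(n)} via the alternative closed form (4.10)."""
    d0, d1, d2 = s[n+1] - s[n], s[n+2] - s[n+1], s[n+3] - s[n+2]
    dd0, dd1 = d1 - d0, d2 - d1
    return s[n+3] - d2 * (d2 * dd0 + d1 * d1 - d0 * d2) / (d2 * dd0 - d0 * dd1)
```

Expanding both expressions over the common denominator shows they differ by exactly $(\Delta s_{n+1} + \Delta s_{n+2}) B - A$ rearrangements, i.e. not at all; the numerical agreement below reflects that identity.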


If we now use either (4.10) or (4.11) to compute $\mathcal{J}_1^{(n)}$ from the partial sums $f_n(z)$, $f_{n+1}(z)$, $f_{n+2}(z)$, and $f_{n+3}(z)$, we obtain the following expression, which obviously possesses the desired features:

$$\mathcal{J}_1^{(n)} = f_{n+3}(z) - \frac{\gamma_{n+3}\,\{\gamma_{n+3}\,[\gamma_{n+2} z - \gamma_{n+1}] + [\gamma_{n+2}]^2 - \gamma_{n+1}\gamma_{n+3}\}\, z^{n+4}}{\gamma_{n+3} z\,[\gamma_{n+2} z - \gamma_{n+1}] - \gamma_{n+1}\,[\gamma_{n+3} z - \gamma_{n+2}]}\,. \tag{4.12}$$

Next, we use in (4.11) the partial sums (1.4) of the (formal) power series (1.3) in the form of (2.12). This yields

$$\mathcal{J}_k^{(n)} = f(z) + z^{n+3k+1} R_k^{(n)}(z)\,, \qquad k, n \in \mathbb{N}_0\,. \tag{4.13}$$

The quantities $R_k^{(n)}(z)$ can be computed with the help of the following recursive scheme, which is a direct consequence of the recursive scheme (4.11) for $\mathcal{J}_k^{(n)}$:

$$R_0^{(n)}(z) = -\sum_{\nu=0}^{\infty} \gamma_{n+\nu+1}\, z^{\nu} = \frac{f_n(z) - f(z)}{z^{n+1}}\,, \qquad n \in \mathbb{N}_0\,, \tag{4.14a}$$

$$R_{k+1}^{(n)}(z) = R_k^{(n+3)}(z) - \frac{N_{k+1}^{(n)}(z)}{D_{k+1}^{(n)}(z)}\,, \qquad k, n \in \mathbb{N}_0\,, \tag{4.14b}$$

$$N_{k+1}^{(n)}(z) = [\Delta R_k^{(n+2)}(z)]\,\{[\Delta R_k^{(n+2)}(z)]\,[\Delta^2 R_k^{(n)}(z)] + [\Delta R_k^{(n+1)}(z)]^2 - [\Delta R_k^{(n)}(z)]\,[\Delta R_k^{(n+2)}(z)]\}\,, \tag{4.14c}$$

$$D_{k+1}^{(n)}(z) = z\,[\Delta R_k^{(n+2)}(z)]\,[\Delta^2 R_k^{(n)}(z)] - [\Delta R_k^{(n)}(z)]\,[\Delta^2 R_k^{(n+1)}(z)]\,. \tag{4.14d}$$

Here, $\Delta R_k^{(n+2)}(z)$ and $\Delta^2 R_k^{(n+2)}(z)$ are defined by (2.15).
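The scheme (4.14) can be checked against (4.13) numerically. In the sketch below, the modified difference operators $\Delta R_k^{(n)}(z) = z R_k^{(n+1)}(z) - R_k^{(n)}(z)$ and the corresponding $\Delta^2$ are this sketch's reading of the definition (2.15), which lies outside the quoted text; the sample series $\gamma_\nu = 1/(\nu+1)$, i.e. $f(z) = -\ln(1-z)/z$, is likewise an assumption made only for the demonstration:

```python
import math

def r_columns(gam, f, z, kmax):
    """cols[k][n] = R_k^{(n)}(z) per (4.14a)-(4.14d) for the series with
    coefficients gam[0], gam[1], ... and limit f at the point z."""
    partial, tot = [], 0.0
    for n, g in enumerate(gam):
        tot += g * z**n
        partial.append(tot)
    # (4.14a): R_0^{(n)}(z) = [f_n(z) - f(z)] / z^{n+1}
    col = [(partial[n] - f) / z**(n + 1) for n in range(len(gam))]
    cols = [col]
    def d(c, n):                       # modified Delta (assumed form of (2.15))
        return z * c[n + 1] - c[n]
    def d2(c, n):                      # modified Delta^2
        return z * z * c[n + 2] - 2.0 * z * c[n + 1] + c[n]
    for _ in range(kmax):
        new = []
        for n in range(len(col) - 3):
            num = d(col, n + 2) * (d(col, n + 2) * d2(col, n)
                                   + d(col, n + 1)**2
                                   - d(col, n) * d(col, n + 2))        # (4.14c)
            den = (z * d(col, n + 2) * d2(col, n)
                   - d(col, n) * d2(col, n + 1))                       # (4.14d)
            new.append(col[n + 3] - num / den)                         # (4.14b)
        cols.append(new)
        col = new
    return cols

def j1_direct(gam, z):
    """J_1^{(0)} computed directly from the partial sums via (4.3b), k = 0."""
    s, tot = [], 0.0
    for n, g in enumerate(gam):
        tot += g * z**n
        s.append(tot)
    d0, d1, d2 = s[1] - s[0], s[2] - s[1], s[3] - s[2]
    return s[1] - d0 * d1 * (d2 - d1) / (d2 * (d1 - d0) - d0 * (d2 - d1))

# demo series (an assumption): gamma_nu = 1/(nu+1), so f(z) = -ln(1-z)/z
gam = [1.0 / (nu + 1) for nu in range(8)]
z = 0.1
f = -math.log(1.0 - z) / z
R = r_columns(gam, f, z, 1)
check = f + z**4 * R[1][0]        # right-hand side of (4.13) with k = 1, n = 0
```

Under these assumptions, `check` agrees with $\mathcal{J}_1^{(0)}$ computed directly from the partial sums, which is exactly the content of (4.13).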

Similar to the analogous accuracy-through-order relationships (2.13) and (3.9) for Aitken's iterated $\Delta^2$ process and the epsilon algorithm, respectively, (4.13) has the right structure to serve as an accuracy-through-order relationship for the iterated theta algorithm. Thus, it seems that we have accomplished our aim. However, we are faced with the same complications as in the case of (2.13) and (3.9). If $z^{n+3k+1} R_k^{(n)}(z)$ in (4.13) is to be of order $O(z^{n+3k+1})$ as $z \to 0$, then the $z$-independent part $C_k^{(n)}$ of $R_k^{(n)}(z)$, defined by

$$R_k^{(n)}(z) = C_k^{(n)} + O(z)\,, \qquad z \to 0\,, \tag{4.15}$$

has to satisfy

$$C_k^{(n)} \ne 0\,, \qquad k, n \in \mathbb{N}_0\,. \tag{4.16}$$

If this condition is satisfied, then it is guaranteed that (4.13) is indeed the accuracy-through-order relationship we have been looking for.

As in the case of Aitken's iterated $\Delta^2$ process or the epsilon algorithm, it is by no means obvious whether and how it can be proven that a given power series gives rise to truncation errors $R_k^{(n)}(z)$ satisfying (4.15) and (4.16). Fortunately, it can easily be checked numerically whether a given (formal) power series leads to truncation errors whose $z$-independent parts are nonzero. If we set


$z = 0$ in (4.14) and use (4.15), we obtain the following recursive scheme:

$$C_0^{(n)} = -\gamma_{n+1}\,, \qquad n \in \mathbb{N}_0\,, \tag{4.17a}$$

$$C_{k+1}^{(n)} = C_k^{(n+3)} - \frac{C_k^{(n+2)}\,\{2 C_k^{(n)} C_k^{(n+2)} - [C_k^{(n+1)}]^2\}}{C_k^{(n)}\, C_k^{(n+1)}}\,, \qquad k, n \in \mathbb{N}_0\,. \tag{4.17b}$$
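Whether the $C_k^{(n)}$ actually stay nonzero can be probed by brute force, exactly as suggested in the text. A sketch of (4.17), with the model coefficients $\gamma_\nu = 1/(\nu+1)$ as assumed test data:

```python
def c_columns(gam, kmax):
    """cols[k][n] = C_k^{(n)} from the recursion (4.17a)/(4.17b)."""
    col = [-gam[n + 1] for n in range(len(gam) - 1)]     # (4.17a)
    cols = [col]
    for _ in range(kmax):
        new = []
        for n in range(len(col) - 3):
            num = col[n + 2] * (2.0 * col[n] * col[n + 2] - col[n + 1]**2)
            new.append(col[n + 3] - num / (col[n] * col[n + 1]))  # (4.17b)
        cols.append(new)
        col = new
    return cols
```

Note that for the geometric-series coefficients $\gamma_\nu \equiv 1$ the recursion yields $C_1^{(n)} = 0$: there $\mathcal{J}_1^{(n)}$ reproduces $f(z)$ exactly, so the nonvanishing condition (4.16) really has to be checked for each series individually.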

Let us now assume that we know for a given (formal) power series that the $z$-independent parts $C_k^{(n)}$ of the truncation errors $R_k^{(n)}(z)$ in (4.13) are nonzero, either from a mathematical proof or from a brute-force calculation using (4.17). Then, (4.13) is indeed the accuracy-through-order relationship we have been looking for. This implies that $\mathcal{J}_k^{(n)}$ can be expressed as follows:

$$\mathcal{J}_k^{(n)} = f_{n+3k}(z) + z^{n+3k+1} \varphi_k^{(n)}(z)\,, \qquad k, n \in \mathbb{N}_0\,. \tag{4.18}$$

If we use this ansatz in (4.11), we obtain the following recursive scheme:

$$\varphi_0^{(n)}(z) = 0\,, \qquad n \in \mathbb{N}_0\,, \tag{4.19a}$$

$$\gamma_{n+3}\,\{\gamma_{n+3}\,[\gamma_{n+2} z - \gamma_{n+1}] + [\gamma_{n+2}]^2 - \gamma_{n+1}\gamma_{n+3}\}$$
