
$$\dot T_n^{(j)}(z) = -\sum_{i=0}^{n} c_{ni}^{(j)}\,\frac{\dot\phi(t_{j+i})}{[\phi(t_{j+i})]^{2}}\,z^{i}, \qquad (3.9)$$

as a result of which we have

$$\frac{\dot T_n^{(j)}(z)}{T_n^{(j)}(1)} = -\sum_{i=0}^{n}\gamma_{ni}^{(j)}\,\frac{\dot\phi(t_{j+i})}{\phi(t_{j+i})}\,z^{i}. \qquad (3.10)$$


Substituting (3.10) in (3.8) and using the fact that $\sum_{i=0}^{n}\gamma_{ni}^{(j)}=1$, we finally get

$$\sum_{i=0}^{n}\dot\gamma_{ni}^{(j)}z^{i} = -\sum_{i=0}^{n}\gamma_{ni}^{(j)}\,\frac{\dot\phi(t_{j+i})}{\phi(t_{j+i})}\,z^{i} + \left(\sum_{i=0}^{n}\gamma_{ni}^{(j)}z^{i}\right)\left(\sum_{i=0}^{n}\gamma_{ni}^{(j)}\,\frac{\dot\phi(t_{j+i})}{\phi(t_{j+i})}\right). \qquad (3.11)$$


We have now come to the point where we have to make a suitable assumption on $\phi(t)$. The
following assumption seems to be quite realistic for many examples that involve logarithmically
convergent sequences and some others as well:

$$\dot\phi(t) = \phi(t)\,[K\log t + L + o(1)] \quad\text{as } t\to 0+ \text{ for some constants } K\neq 0 \text{ and } L. \qquad (3.12)$$
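As a simple model case, purely for illustration: if the dot denotes differentiation with respect to a parameter $\xi$ and $\phi(t)=t^{\xi}$, then $\dot\phi(t)=t^{\xi}\log t=\phi(t)\log t$, so (3.12) holds exactly with $K=1$ and $L=0$.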

Now the condition $\lim_{m\to\infty}(t_{m+1}/t_m)=\omega$ in (3.1) implies that $t_{m+1}/t_m=\omega(1+\epsilon_m)$, where $\lim_{m\to\infty}\epsilon_m=0$.
Therefore, $t_{j+i}=t_j\,\omega^{i}\prod_{s=0}^{i-1}(1+\epsilon_{j+s})$, and hence, for each fixed $i\ge 0$,

$$\log t_{j+i} = \log t_j + i\log\omega + \epsilon_i^{(j)}, \qquad \lim_{j\to\infty}\epsilon_i^{(j)} = 0, \qquad (3.13)$$

since $\epsilon_i^{(j)} = O(\max\{|\epsilon_j|,|\epsilon_{j+1}|,\ldots,|\epsilon_{j+i-1}|\})$. Next, (3.12) and (3.13) imply that, for each fixed $i\ge 0$,

$$\frac{\dot\phi(t_{j+i})}{\phi(t_{j+i})} = (K\log t_j + L) + Ki\log\omega + \eta_i^{(j)}, \qquad \lim_{j\to\infty}\eta_i^{(j)} = 0, \qquad (3.14)$$

since $\lim_{m\to\infty}t_m=0$.
Substituting (3.14) in (3.11), we see that the problematic term $(K\log t_j + L)$, which is unbounded
as $j\to\infty$, disappears altogether, and we obtain

$$\sum_{i=0}^{n}\dot\gamma_{ni}^{(j)}z^{i} = -\sum_{i=0}^{n}\gamma_{ni}^{(j)}\,(Ki\log\omega+\eta_i^{(j)})\,z^{i} + \left(\sum_{i=0}^{n}\gamma_{ni}^{(j)}z^{i}\right)\left(\sum_{i=0}^{n}\gamma_{ni}^{(j)}\,(Ki\log\omega+\eta_i^{(j)})\right). \qquad (3.15)$$

Letting $j\to\infty$ in (3.15), invoking $\lim_{j\to\infty}\eta_i^{(j)}=0$, and recalling from Theorem 3.1 that
$\lim_{j\to\infty}\gamma_{ni}^{(j)}=\tilde\gamma_{ni}$, we obtain the finite limit

$$\lim_{j\to\infty}\sum_{i=0}^{n}\dot\gamma_{ni}^{(j)}z^{i} = K\log\omega\left[\left(\sum_{i=0}^{n}\tilde\gamma_{ni}z^{i}\right)\left(\sum_{i=0}^{n}i\,\tilde\gamma_{ni}\right)-\sum_{i=0}^{n}i\,\tilde\gamma_{ni}z^{i}\right]. \qquad (3.16)$$
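Here $\sum_{i=0}^{n}\tilde\gamma_{ni}z^{i}=U_n(z)$, with $U_n(z)$ as defined in Theorem 3.2 below, so that $\sum_{i=0}^{n}i\,\tilde\gamma_{ni}z^{i}=zU_n'(z)$ and $\sum_{i=0}^{n}i\,\tilde\gamma_{ni}=U_n'(1)$; this is what turns the right-hand side of (3.16) into the closed form given in (3.17).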

The following theorem summarizes the developments of this section up to this point.

Theorem 3.2. Subject to the conditions concerning the $t_l$ and $\phi(t)$ that are given in (3.1), (3.2),
and (3.12), $\sum_{i=0}^{n}\dot\gamma_{ni}^{(j)}z^{i}$ has a finite limit as $j\to\infty$ that is given by

$$\lim_{j\to\infty}\sum_{i=0}^{n}\dot\gamma_{ni}^{(j)}z^{i} = K\log\omega\,[U_n(z)\,U_n'(1) - z\,U_n'(z)] \equiv W_n(z) \equiv \sum_{i=0}^{n}\tilde{\dot\gamma}_{ni}z^{i}, \qquad (3.17)$$

where $U_n(z)=\prod_{i=1}^{n}\dfrac{z-c_i}{1-c_i}$ and $c_i=\omega^{\delta+i-1}$, $i=1,2,\ldots$, and $U_n'(z)=(d/dz)U_n(z)$.
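Theorem 3.2 lends itself to a simple numerical check. The sketch below is not part of the paper: it assumes the standard divided-difference representation of the GREP$^{(1)}$ weights, $\gamma_{ni}^{(j)}=\bigl(c_{ni}^{(j)}/\phi(t_{j+i})\bigr)\big/\sum_{k=0}^{n}c_{nk}^{(j)}/\phi(t_{j+k})$ with $c_{ni}^{(j)}=\prod_{k\ne i}(t_{j+i}-t_{j+k})^{-1}$ (the form of Section 2 is not reproduced here), and it takes the illustrative model $\phi(t)=t^{\xi}$ with $\xi$ the differentiation parameter and $t_l=\omega^{l}$, so that $K=1$ and $\delta=\xi$.

import numpy as np

# Illustrative check of (3.17); all modelling choices (phi(t) = t^xi, t_l = omega^l,
# the divided-difference form of the weights) are assumptions, not taken from the paper.
omega, xi, n, j = 0.7, 1.3, 3, 5

def gammas(t):
    """Weights gamma_{ni}^{(j)} for the nodes t[0..n] and phi(t) = t**xi."""
    m = len(t)
    w = np.empty(m)
    for i in range(m):
        c = 1.0
        for k in range(m):
            if k != i:
                c /= (t[i] - t[k])      # c_{ni}^{(j)} = prod_{k != i} 1/(t_{j+i} - t_{j+k})
        w[i] = c / t[i] ** xi           # divide by phi(t_{j+i})
    return w / w.sum()

t = omega ** np.arange(j, j + n + 1)    # t_j, ..., t_{j+n}
g = gammas(t)

# gamma-dot via (3.11), with phi-dot/phi = log t for this model (K = 1, L = 0):
gdot = -g * np.log(t) + g * np.dot(g, np.log(t))

# Right-hand side of (3.17): W_n(z) = K log(omega) [U_n(z) U_n'(1) - z U_n'(z)],
# with U_n(z) = prod_{i=1}^{n} (z - c_i)/(1 - c_i) and c_i = omega^(xi + i - 1).
c = omega ** (xi + np.arange(1, n + 1) - 1)
U = lambda z: np.prod((z - c) / (1 - c))
Up = lambda z, h=1e-6: (U(z + h) - U(z - h)) / (2.0 * h)   # numerical derivative of U_n
W = lambda z: np.log(omega) * (U(z) * Up(1.0) - z * Up(z))

for z in (0.3, 2.5):
    lhs = sum(gdot[i] * z ** i for i in range(n + 1))      # sum_i gamma-dot_{ni}^{(j)} z^i
    print(z, lhs, W(z))                                     # the two values should nearly agree

For this particular model the two printed values should agree essentially to the accuracy of the numerical derivative used for $U_n'$; with less regular abscissas, say $t_l=\omega^{l}/(l+1)$, the agreement is only approached as $j$ grows, as the theorem predicts.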

Theorem 3.2 is the key to the study of stability and convergence of column sequences $\{\dot A_n^{(j)}\}_{j=0}^{\infty}$
that follows.

3.1. Stability of column sequences $\{\dot A_n^{(j)}\}_{j=0}^{\infty}$

Theorem 3.3. Under the conditions of Theorem 3.2, the sequences $\{\dot A_n^{(j)}\}_{j=0}^{\infty}$ are stable in the sense
that $\sup_j \Gamma_n^{(j)} < \infty$.

Proof. The result follows from the facts that $\lim_{j\to\infty}\gamma_{ni}^{(j)}=\tilde\gamma_{ni}$ and $\lim_{j\to\infty}\dot\gamma_{ni}^{(j)}=\tilde{\dot\gamma}_{ni}$ for all $n$ and $i$,
which in turn follow from Theorems 3.1 and 3.2, respectively.

3.2. Convergence of column sequences $\{\dot A_n^{(j)}\}_{j=0}^{\infty}$

Theorem 3.4. Under the conditions of Theorem 3.2 and with the notation therein we have

$$\dot A_n^{(j)} - \dot A = O(\phi(t_j)\,t_j^{n}\log t_j) \quad\text{as } j\to\infty. \qquad (3.18)$$

A more refined result can be stated as follows: If $\beta_{n+\mu}$ is the first nonzero $\beta_i$ with $i\ge n$ in (2.2),
and if $\dot\beta_{n+\bar\mu}$ is the first nonzero $\dot\beta_i$ with $i\ge n$, then

$$\dot A_n^{(j)} - \dot A = \dot\beta_{n+\bar\mu}\,U_n(c_{n+\bar\mu+1})\,\phi(t_j)\,t_j^{n+\bar\mu}\,[1+o(1)] + K\,\beta_{n+\mu}\,U_n(c_{n+\mu+1})\,\phi(t_j)\,t_j^{n+\mu}\log t_j\,[1+o(1)] \quad\text{as } j\to\infty. \qquad (3.19)$$

Thus, when $\mu\le\bar\mu$ the second term dominates in $\dot A_n^{(j)}-\dot A$, while the first one does when $\mu>\bar\mu$. In
particular, if $\beta_n\neq 0$, we have

$$\dot A_n^{(j)} - \dot A \sim K\,\beta_n\,U_n(c_{n+1})\,\phi(t_j)\,t_j^{n}\log t_j \quad\text{as } j\to\infty. \qquad (3.20)$$


Proof. We start with the fact that

$$A_n^{(j)} - A = \sum_{i=0}^{n}\gamma_{ni}^{(j)}\,[a(t_{j+i})-A] = \sum_{i=0}^{n}\gamma_{ni}^{(j)}\,\phi(t_{j+i})\,B_n(t_{j+i}), \qquad (3.21)$$

where

$$B_n(t) = B(t) - \sum_{i=0}^{n-1}\beta_i t^{i} \sim \sum_{i=n}^{\infty}\beta_i t^{i} \quad\text{as } t\to 0+. \qquad (3.22)$$

Differentiating (3.21) with respect to the parameter, we obtain

$$\dot A_n^{(j)} - \dot A = E_{n,1}^{(j)} + E_{n,2}^{(j)} + E_{n,3}^{(j)}, \qquad (3.23)$$

with

$$E_{n,1}^{(j)} = \sum_{i=0}^{n}\dot\gamma_{ni}^{(j)}\,\phi(t_{j+i})\,B_n(t_{j+i}),$$

$$E_{n,2}^{(j)} = \sum_{i=0}^{n}\gamma_{ni}^{(j)}\,\phi(t_{j+i})\,\dot B_n(t_{j+i}),$$

$$E_{n,3}^{(j)} = \sum_{i=0}^{n}\gamma_{ni}^{(j)}\,\dot\phi(t_{j+i})\,B_n(t_{j+i}). \qquad (3.24)$$

By the conditions in (3.1) and (3.2), and by (3.14) that follows from the condition in (3.12), it can
be shown that

$$t_{j+i} \sim t_j\,\omega^{i}, \qquad \phi(t_{j+i}) \sim \omega^{\delta i}\,\phi(t_j), \qquad \dot\phi(t_{j+i}) \sim K\,\omega^{\delta i}\,\phi(t_j)\log t_j \quad\text{as } j\to\infty. \qquad (3.25)$$


Substituting these in (3.24), noting that $B_n(t)\sim\beta_{n+\mu}t^{n+\mu}$ and $\dot B_n(t)\sim\dot\beta_{n+\bar\mu}t^{n+\bar\mu}$ as $t\to 0+$, and
recalling (3.4) and (3.17), we obtain

$$E_{n,1}^{(j)} = \beta_{n+\mu}\,W_n(c_{n+\mu+1})\,\phi(t_j)\,t_j^{n+\mu} + o(\phi(t_j)\,t_j^{n+\mu}) \quad\text{as } j\to\infty,$$

$$E_{n,2}^{(j)} \sim \dot\beta_{n+\bar\mu}\,U_n(c_{n+\bar\mu+1})\,\phi(t_j)\,t_j^{n+\bar\mu} \quad\text{as } j\to\infty,$$

$$E_{n,3}^{(j)} \sim K\,\beta_{n+\mu}\,U_n(c_{n+\mu+1})\,\phi(t_j)\,t_j^{n+\mu}\log t_j \quad\text{as } j\to\infty, \qquad (3.26)$$
3

with $W_n(z)$ as defined in (3.17). Note that we have written the result for $E_{n,1}^{(j)}$ differently than for
$E_{n,2}^{(j)}$ and $E_{n,3}^{(j)}$ since we cannot be sure that $W_n(c_{n+\mu+1})\neq 0$. The asymptotic equalities for $E_{n,2}^{(j)}$ and
$E_{n,3}^{(j)}$, however, are valid since $U_n(c_i)\neq 0$ for all $i\ge n+1$. The result now follows by substituting (3.26)
in (3.23) and observing also that $E_{n,1}^{(j)} = o(E_{n,3}^{(j)})$ as $j\to\infty$, so that either $E_{n,2}^{(j)}$ or $E_{n,3}^{(j)}$ determines the
asymptotic nature of $\dot A_n^{(j)} - \dot A$. We leave the details to the reader.
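Theorem 3.4 can likewise be illustrated numerically. The sketch below is again not from the paper; it uses the same assumed weight formula and model abscissas as the check following Theorem 3.2, together with a synthetic $a(t)=A+\phi(t)B(t)$ whose $A$, $\dot A$, and $\beta_i$ are known exactly, so that the ratio of $\dot A_n^{(j)}-\dot A$ to the dominant term in (3.20) can be monitored.

import numpy as np

# Illustrative check of (3.20) on a synthetic model (assumptions, not from the paper):
# a(t; xi) = A(xi) + t^xi * B(t) with A(xi) = exp(xi) and B(t) = sum_{k=0}^{6} t^k/(k+1);
# the beta_k do not depend on xi, so beta_n != 0 and beta_n-dot = 0 for every n; here K = 1.
omega, xi, n = 0.7, 1.3, 2
beta = 1.0 / (np.arange(7) + 1.0)

B = lambda t: sum(beta[k] * t ** k for k in range(len(beta)))
a = lambda t: np.exp(xi) + t ** xi * B(t)                   # a(t) = A + phi(t) B(t)
adot = lambda t: np.exp(xi) + t ** xi * np.log(t) * B(t)    # d/dxi of a(t; xi)
Adot = np.exp(xi)                                           # exact value of A-dot

def gammas(t):
    """Assumed divided-difference weights gamma_{ni}^{(j)} for phi(t) = t**xi."""
    m = len(t)
    w = np.empty(m)
    for i in range(m):
        c = 1.0
        for k in range(m):
            if k != i:
                c /= (t[i] - t[k])
        w[i] = c / t[i] ** xi
    return w / w.sum()

ci = omega ** (xi + np.arange(1, n + 1) - 1)                # c_1, ..., c_n
Un = lambda z: np.prod((z - ci) / (1 - ci))

for j in (4, 8, 12, 16, 20):
    t = omega ** np.arange(j, j + n + 1)
    g = gammas(t)
    gdot = -g * np.log(t) + g * np.dot(g, np.log(t))        # as in (3.11) for this model
    Adot_nj = np.dot(gdot, a(t)) + np.dot(g, adot(t))       # derivative of A_n^{(j)}
    dominant = beta[n] * Un(omega ** (xi + n)) * t[0] ** (n + xi) * np.log(t[0])
    print(j, (Adot_nj - Adot) / dominant)                   # should drift toward 1, slowly

In this model $E_{n,2}^{(j)}=0$ since the $\beta_k$ do not depend on $\xi$, and the printed ratio approaches 1 only slowly because $E_{n,1}^{(j)}/E_{n,3}^{(j)}$ decays like $1/\log t_j$, in line with the proof above.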

Remark. Comparing (3.19), pertaining to $\dot A_n^{(j)}-\dot A$, with (3.3), pertaining to $A_n^{(j)}-A$, we realize that,
subject to the additional assumption in (3.12), the two behave practically the same way asymptotically.
In addition, their computational costs are generally similar. (In many problems of interest $A(y)$


and A(y) can be computed simultaneously, the total cost of this being almost the same as that of

computing A(y) only or A(y) only. An immediate example is that of numerical integration discussed

in Section 1.) In contrast, the convergence of {Bnj) }j=0 obtained by applying GREP(2) directly to
(


A(y) ≡ a(t) (recall (2.18) and (2.19)), is inferior to that of {A( j) }j=0 . This can be shown rigorously
™ n
for the case in which ™ (y) ≡ ™(t) = K™(t)(log t + constant) exactly. In this case the asymptotic


expansion in (2.18) assumes the form a(t) ∼ A + ∞ ™(t)( k0 + k1 log t)t k as t ’ 0 + : Therefore,
™ k=1
under the additional condition that limm’∞ m log tm = 0; where m is as deÿned following (3.12),
Theorem 2:2 of [11] applies and we have
( j)
B2m ’ B = O(™(tj )tjm log tj ) as j ’ ∞: (3.27)

Now the computational costs of $\dot A_{2m}^{(j)}$ and $B_{2m}^{(j)}$ are similar, but $\{\dot A_{2m}^{(j)}\}_{j=0}^{\infty}$ converges to $\dot A$ much faster
