$$\dot T_n(z) = -\sum_{i=0}^{n} c_{ni}^{(j)} \frac{\dot\varphi(t_{j+i})}{[\varphi(t_{j+i})]^2}\, z^i, \qquad (3.9)$$

as a result of which we have

$$\frac{\dot T_n(z)}{T_n(1)} = -\sum_{i=0}^{n} \gamma_{ni}^{(j)} \frac{\dot\varphi(t_{j+i})}{\varphi(t_{j+i})}\, z^i. \qquad (3.10)$$

Substituting (3.10) in (3.8) and using the fact that $\sum_{i=0}^{n} \gamma_{ni}^{(j)} = 1$, we finally get

$$\sum_{i=0}^{n} \dot\gamma_{ni}^{(j)} z^i = -\sum_{i=0}^{n} \gamma_{ni}^{(j)} \frac{\dot\varphi(t_{j+i})}{\varphi(t_{j+i})}\, z^i + \left(\sum_{i=0}^{n} \gamma_{ni}^{(j)} z^i\right)\left(\sum_{i=0}^{n} \gamma_{ni}^{(j)} \frac{\dot\varphi(t_{j+i})}{\varphi(t_{j+i})}\right). \qquad (3.11)$$

We have now come to the point where we have to make a suitable assumption on $\dot\varphi(t)$. The following assumption seems to be quite realistic for many examples that involve logarithmically convergent sequences and some others as well:

$$\dot\varphi(t) = \varphi(t)[K \log t + L + o(1)] \quad \text{as } t \to 0+ \text{ for some constants } K \neq 0 \text{ and } L. \qquad (3.12)$$

Now the condition $\lim_{m\to\infty}(t_{m+1}/t_m) = \omega$ in (3.1) implies that $t_{m+1}/t_m = \omega(1+\delta_m)$, where $\lim_{m\to\infty}\delta_m = 0$. Therefore, $t_{j+i} = t_j\,\omega^i \prod_{s=0}^{i-1}(1+\delta_{j+s})$, and hence, for each fixed $i \geq 0$,

$$\log t_{j+i} = \log t_j + i \log\omega + \varepsilon_i^{(j)}, \qquad \lim_{j\to\infty}\varepsilon_i^{(j)} = 0, \qquad (3.13)$$

since $\varepsilon_i^{(j)} = O(\max\{|\delta_j|, |\delta_{j+1}|, \ldots, |\delta_{j+i-1}|\})$. Next, (3.12) and (3.13) imply that, for each fixed $i \geq 0$,

$$\frac{\dot\varphi(t_{j+i})}{\varphi(t_{j+i})} = (K\log t_j + L) + K i \log\omega + \eta_i^{(j)}, \qquad \lim_{j\to\infty}\eta_i^{(j)} = 0, \qquad (3.14)$$

since $\lim_{m\to\infty} t_m = 0$.
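Condition (3.12) is not as special as it may look; it holds exactly for the simplest one-parameter family one is likely to try. As an illustration (the family $\varphi(t) = t^x$ with parameter $x$ is our own hypothetical choice, not taken from the paper), the sketch below checks numerically that the parameter derivative of $t^x$ divided by $t^x$ equals $\log t$, i.e., (3.12) holds with $K = 1$ and $L = 0$:

```python
import math

# Hypothetical one-parameter family phi(t) = t**x; the overdot denotes d/dx.
def phi(t, x):
    return t ** x

def phi_dot(t, x, h=1e-6):
    # central-difference approximation to the derivative with respect to x
    return (phi(t, x + h) - phi(t, x - h)) / (2 * h)

x = 1.5
for t in [0.1, 0.01, 0.001]:
    ratio = phi_dot(t, x) / phi(t, x)
    # (3.12) with K = 1, L = 0 predicts ratio = log t (exactly, for this family)
    print(t, ratio, math.log(t))
```

Here $\dot\varphi(t)/\varphi(t) = \log t$ holds with no $o(1)$ term at all; families such as $t^x(\text{smooth in } x)$ produce the extra $L + o(1)$.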
Substituting (3.14) in (3.11), we see that the problematic term $(K\log t_j + L)$, which is unbounded as $j \to \infty$, disappears altogether, and we obtain

$$\sum_{i=0}^{n} \dot\gamma_{ni}^{(j)} z^i = -\sum_{i=0}^{n} \gamma_{ni}^{(j)}\bigl(Ki\log\omega + \eta_i^{(j)}\bigr) z^i + \left(\sum_{i=0}^{n} \gamma_{ni}^{(j)} z^i\right)\left(\sum_{i=0}^{n} \gamma_{ni}^{(j)}\bigl(Ki\log\omega + \eta_i^{(j)}\bigr)\right). \qquad (3.15)$$
A. Sidi / Journal of Computational and Applied Mathematics 122 (2000) 251–273, p. 265

Letting $j \to \infty$ in (3.15), invoking $\lim_{j\to\infty}\eta_i^{(j)} = 0$, and recalling from Theorem 3.1 that $\lim_{j\to\infty}\gamma_{ni}^{(j)} = \tilde\gamma_{ni}$, we obtain the finite limit

$$\lim_{j\to\infty}\sum_{i=0}^{n} \dot\gamma_{ni}^{(j)} z^i = K\log\omega\left[\left(\sum_{i=0}^{n} \tilde\gamma_{ni} z^i\right)\left(\sum_{i=0}^{n} i\,\tilde\gamma_{ni}\right) - \sum_{i=0}^{n} i\,\tilde\gamma_{ni} z^i\right]. \qquad (3.16)$$

The following theorem summarizes the developments of this section up to this point.

Theorem 3.2. Subject to the conditions concerning the $t_l$ and $\varphi(t)$ that are given in (3.1), (3.2), and (3.12), $\sum_{i=0}^{n} \dot\gamma_{ni}^{(j)} z^i$ has a finite limit as $j \to \infty$ that is given by

$$\lim_{j\to\infty}\sum_{i=0}^{n} \dot\gamma_{ni}^{(j)} z^i = K\log\omega\,[U_n(z)U_n'(1) - zU_n'(z)] \equiv W_n(z) \equiv \sum_{i=0}^{n} \dot{\tilde\gamma}_{ni} z^i, \qquad (3.17)$$

where $U_n(z) = \prod_{i=1}^{n} \dfrac{z-c_i}{1-c_i}$ and $c_i = \omega^{\sigma+i-1}$, $i = 1, 2, \ldots,$ and $U_n'(z) = (d/dz)U_n(z)$.
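The structure of $W_n(z)$ in (3.17) can be probed numerically. In the sketch below (the values $n = 4$, $\omega = 0.5$, $K = 1$, and $\sigma = 0.3$ are arbitrary illustrative choices), we build $U_n$ from its product form and check the identity $W_n(1) = 0$, which follows from $U_n(1) = 1$ and expresses that the limits $\dot{\tilde\gamma}_{ni}$ sum to zero, consistent with differentiating $\sum_i \gamma_{ni}^{(j)} = 1$:

```python
import math

def U(z, n, omega, sigma):
    # U_n(z) = prod_{i=1}^n (z - c_i)/(1 - c_i), with c_i = omega**(sigma + i - 1)
    out = 1.0
    for i in range(1, n + 1):
        c = omega ** (sigma + i - 1)
        out *= (z - c) / (1 - c)
    return out

def Uprime(z, n, omega, sigma, h=1e-6):
    # central-difference approximation to U_n'(z)
    return (U(z + h, n, omega, sigma) - U(z - h, n, omega, sigma)) / (2 * h)

def W(z, n, omega, sigma, K=1.0):
    # W_n(z) = K log(omega) [U_n(z) U_n'(1) - z U_n'(z)], as in (3.17)
    return K * math.log(omega) * (U(z, n, omega, sigma) * Uprime(1.0, n, omega, sigma)
                                  - z * Uprime(z, n, omega, sigma))

# U_n(1) = 1 forces W_n(1) = 0: the coefficients gamma-dot-tilde sum to zero.
print(W(1.0, 4, 0.5, 0.3))
```

Note that $W_n(c_i)$ for $i \geq n+1$ need not vanish, which is exactly the point exploited in the error analysis below.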

Theorem 3.2 is the key to the study of stability and convergence of the column sequences $\{\dot A_n^{(j)}\}_{j=0}^{\infty}$ that follows.
3.1. Stability of column sequences $\{\dot A_n^{(j)}\}_{j=0}^{\infty}$

Theorem 3.3. Under the conditions of Theorem 3.2, the sequences $\{\dot A_n^{(j)}\}_{j=0}^{\infty}$ are stable in the sense that $\sup_j \dot\Gamma_n^{(j)} < \infty$.

Proof. The result follows from the facts that $\lim_{j\to\infty}\gamma_{ni}^{(j)} = \tilde\gamma_{ni}$ and $\lim_{j\to\infty}\dot\gamma_{ni}^{(j)} = \dot{\tilde\gamma}_{ni}$ for all $n$ and $i$, which in turn follow from Theorems 3.1 and 3.2, respectively.

3.2. Convergence of column sequences $\{\dot A_n^{(j)}\}_{j=0}^{\infty}$

Theorem 3.4. Under the conditions of Theorem 3.2 and with the notation therein, we have

$$\dot A_n^{(j)} - \dot A = O(\varphi(t_j)\,t_j^{n}\log t_j) \quad \text{as } j\to\infty. \qquad (3.18)$$

A more refined result can be stated as follows: If $\beta_{n+\mu}$ is the first nonzero $\beta_i$ with $i \geq n$ in (2.2), and if $\dot\beta_{n+\nu}$ is the first nonzero $\dot\beta_i$ with $i \geq n$, then

$$\dot A_n^{(j)} - \dot A = \dot\beta_{n+\nu}\,U_n(c_{n+\nu+1})\,\varphi(t_j)\,t_j^{n+\nu}[1 + o(1)] + K\beta_{n+\mu}\,U_n(c_{n+\mu+1})\,\varphi(t_j)\,t_j^{n+\mu}\log t_j\,[1 + o(1)] \quad \text{as } j\to\infty. \qquad (3.19)$$

Thus, when $\mu \leq \nu$ the second term dominates in $\dot A_n^{(j)} - \dot A$, while the first one does when $\mu > \nu$. In particular, if $\beta_n \neq 0$, we have

$$\dot A_n^{(j)} - \dot A \sim K\beta_n\,U_n(c_{n+1})\,\varphi(t_j)\,t_j^{n}\log t_j \quad \text{as } j\to\infty. \qquad (3.20)$$

Proof. We start with the fact that

$$A_n^{(j)} - A = \sum_{i=0}^{n} \gamma_{ni}^{(j)}[a(t_{j+i}) - A] = \sum_{i=0}^{n} \gamma_{ni}^{(j)}\,\varphi(t_{j+i})\,B_n(t_{j+i}), \qquad (3.21)$$

where

$$B_n(t) = B(t) - \sum_{i=0}^{n-1}\beta_i t^i \sim \sum_{i=n}^{\infty}\beta_i t^i \quad \text{as } t\to 0+. \qquad (3.22)$$
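To make (3.22) concrete, here is a minimal check with the hypothetical choice $B(t) = 1/(1-t)$, for which every $\beta_i = 1$, so that $B_n(t) = t^n/(1-t) \sim t^n$ as $t \to 0+$:

```python
def B(t):
    # hypothetical B(t) = 1/(1 - t), so beta_i = 1 for every i
    return 1.0 / (1.0 - t)

def B_remainder(t, n):
    # B_n(t) = B(t) - sum_{i=0}^{n-1} beta_i t^i, as in (3.22)
    return B(t) - sum(t ** i for i in range(n))

n = 3
for t in [0.1, 0.01, 0.001]:
    # (3.22) predicts B_n(t) ~ beta_n t^n = t**3 as t -> 0+, so the ratio tends to 1
    print(t, B_remainder(t, n) / t ** n)
```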

Differentiating (3.21) with respect to the parameter, we obtain

$$\dot A_n^{(j)} - \dot A = E_{n,1}^{(j)} + E_{n,2}^{(j)} + E_{n,3}^{(j)} \qquad (3.23)$$

with

$$E_{n,1}^{(j)} = \sum_{i=0}^{n} \dot\gamma_{ni}^{(j)}\,\varphi(t_{j+i})\,B_n(t_{j+i}),$$
$$E_{n,2}^{(j)} = \sum_{i=0}^{n} \gamma_{ni}^{(j)}\,\varphi(t_{j+i})\,\dot B_n(t_{j+i}),$$
$$E_{n,3}^{(j)} = \sum_{i=0}^{n} \gamma_{ni}^{(j)}\,\dot\varphi(t_{j+i})\,B_n(t_{j+i}). \qquad (3.24)$$

By the conditions in (3.1) and (3.2), and by (3.14), which follows from the condition in (3.12), it can be shown that

$$t_{j+i} \sim t_j\,\omega^i, \qquad \varphi(t_{j+i}) \sim \omega^{\sigma i}\varphi(t_j), \qquad \dot\varphi(t_{j+i}) \sim K\omega^{\sigma i}\varphi(t_j)\log t_j \quad \text{as } j\to\infty. \qquad (3.25)$$
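The relations in (3.25) can be sanity-checked on the model case $t_l = \omega^l$ with $\varphi(t) = t^x$ (both hypothetical choices; then $\sigma = x$ and, as before, $K = 1$). Since $\dot\varphi(t_{j+i}) = t_{j+i}^x \log t_{j+i}$, the ratio of the two sides of the last relation is $\log t_{j+i}/\log t_j = (j+i)/j \to 1$ as $j\to\infty$:

```python
import math

omega, x, i = 0.5, 1.5, 3   # hypothetical choices: t_l = omega**l, phi(t) = t**x
def ratio(j):
    tj  = omega ** j
    tji = omega ** (j + i)
    exact  = tji ** x * math.log(tji)                    # phi-dot at t_{j+i}
    approx = omega ** (x * i) * tj ** x * math.log(tj)   # right side of (3.25), K = 1
    return exact / approx

for j in [10, 20, 40, 80]:
    print(j, ratio(j))   # tends to 1 as j grows
```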
Substituting these in (3.24), noting that $B_n(t) \sim \beta_{n+\mu}t^{n+\mu}$ and $\dot B_n(t) \sim \dot\beta_{n+\nu}t^{n+\nu}$ as $t\to 0+$, and recalling (3.4) and (3.17), we obtain

$$E_{n,1}^{(j)} = \beta_{n+\mu}\,W_n(c_{n+\mu+1})\,\varphi(t_j)\,t_j^{n+\mu} + o(\varphi(t_j)\,t_j^{n+\mu}) \quad \text{as } j\to\infty,$$
$$E_{n,2}^{(j)} \sim \dot\beta_{n+\nu}\,U_n(c_{n+\nu+1})\,\varphi(t_j)\,t_j^{n+\nu} \quad \text{as } j\to\infty,$$
$$E_{n,3}^{(j)} \sim K\beta_{n+\mu}\,U_n(c_{n+\mu+1})\,\varphi(t_j)\,t_j^{n+\mu}\log t_j \quad \text{as } j\to\infty, \qquad (3.26)$$

with $W_n(z)$ as defined in (3.17). Note that we have written the result for $E_{n,1}^{(j)}$ differently than for $E_{n,2}^{(j)}$ and $E_{n,3}^{(j)}$, since we cannot be sure that $W_n(c_{n+\mu+1}) \neq 0$. The asymptotic equalities for $E_{n,2}^{(j)}$ and $E_{n,3}^{(j)}$, however, are valid, as $U_n(c_i) \neq 0$ for all $i \geq n+1$. The result now follows by substituting (3.26) in (3.23) and observing also that $E_{n,1}^{(j)} = o(E_{n,3}^{(j)})$ as $j\to\infty$, so that either $E_{n,2}^{(j)}$ or $E_{n,3}^{(j)}$ determines the asymptotic nature of $\dot A_n^{(j)} - \dot A$. We leave the details to the reader.

Remark. Comparing (3.19), pertaining to $\dot A_n^{(j)} - \dot A$, with (3.3), pertaining to $A_n^{(j)} - A$, we realize that, subject to the additional assumption in (3.12), the two behave practically the same way asymptotically. In addition, their computational costs are generally similar. (In many problems of interest $A(y)$ and $\dot A(y)$ can be computed simultaneously, the total cost of this being almost the same as that of computing $A(y)$ only or $\dot A(y)$ only. An immediate example is that of the numerical integration discussed in Section 1.) In contrast, the convergence of $\{B_n^{(j)}\}_{j=0}^{\infty}$, obtained by applying GREP$^{(2)}$ directly to $\dot A(y) \equiv \dot a(t)$ (recall (2.18) and (2.19)), is inferior to that of $\{\dot A_n^{(j)}\}_{j=0}^{\infty}$. This can be shown rigorously for the case in which $\dot\varphi(t) = K\varphi(t)(\log t + \text{constant})$ exactly. In this case the asymptotic expansion in (2.18) assumes the form $\dot a(t) \sim \dot A + \sum_{k=0}^{\infty} \varphi(t)(e_{k0} + e_{k1}\log t)\,t^k$ as $t \to 0+$. Therefore, under the additional condition that $\lim_{m\to\infty}\delta_m\log t_m = 0$, where $\delta_m$ is as defined following (3.12), Theorem 2.2 of [11] applies and we have

$$B_{2m}^{(j)} - \dot A = O(\varphi(t_j)\,t_j^{m}\log t_j) \quad \text{as } j\to\infty. \qquad (3.27)$$

Now the computational costs of $\dot A_{2m}^{(j)}$ and $B_{2m}^{(j)}$ are similar, but $\{\dot A_{2m}^{(j)}\}_{j=0}^{\infty}$ converges to $\dot A$ much faster.
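The parenthetical claim about simultaneous computation can be illustrated on a toy integral (the integrand $t^x$ and the midpoint rule are our own hypothetical choices, not the paper's example): with $A(x) = \int_0^1 t^x\,dt = 1/(x+1)$, the parameter derivative $\dot A(x) = \int_0^1 t^x\log t\,dt = -1/(x+1)^2$ uses the same abscissas and the same evaluations of $t^x$, so both values come out of one pass over the nodes:

```python
import math

def midpoint_nodes(m):
    # midpoint-rule nodes and weights on (0, 1); illustration only
    return [((k + 0.5) / m, 1.0 / m) for k in range(m)]

def I_and_Idot(x, m=20000):
    # one pass yields both A(x) and its x-derivative: the integrands
    # t**x and t**x * log(t) share every evaluation of t**x
    A = Adot = 0.0
    for t, w in midpoint_nodes(m):
        tx = t ** x          # computed once, used twice
        A += w * tx
        Adot += w * tx * math.log(t)
    return A, Adot

A, Adot = I_and_Idot(1.0)
# exact values for x = 1: 1/(x+1) = 0.5 and -1/(x+1)**2 = -0.25
print(A, Adot)
```

The marginal cost of the derivative is one multiplication and one `log` per node, which is the sense in which computing $A(y)$ and $\dot A(y)$ together costs about the same as computing either alone.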