(1) ||r_k^{TEA}|| = |tan θ_k| ||r_0||, k ≥ 1.

(2) ||r_k^{RRE}|| ≤ (cos θ_k) ||r_k^{TEA}||.

Proof. (1) Follows from (3.9) and the fact that r_0 = y is orthogonal to r_k^{TEA}.
(2) From (2.19) we have cos θ_k ≤ cos γ_k; then, using relations (1) of Theorem 1 and (1) of Theorem 2, the result follows.

• Relation (1) of Theorem 2 shows that the residuals of the TEA are defined if and only if cos θ_k ≠ 0.
• We also observe that if stagnation occurs in RRE (||r_k|| = ||r_0|| for some k), then cos γ_k = 0, and this implies that cos θ_k = 0, which shows that the TEA approximation is not defined.

The topological ε-algorithm can also be applied to the solution of nonlinear systems of equations. For this, TEA is applied to the sequence (s_n) generated by the nonlinear process (2.22). We note that TEA does not require the Jacobian of the function G and has the property of quadratic convergence [22].
When applied to the solution of linear and nonlinear problems, the work and storage required by VEA and TEA grow with the iteration step. So, in practice and for large problems, the algorithms must be restarted. It is also useful to run some basic iterations before the extrapolation phase.
The application of VEA or TEA to linear and nonlinear systems leads to the following algorithm, where G(x) is to be replaced by Bx + b for linear problems:

1. k = 0; choose x_0 and the integers p and m.
2. Basic iteration:
   set t_0 = x_0,
   w_0 = t_0,
   w_{j+1} = G(w_j), j = 0, …, p − 1.
3. Extrapolation phase:
   s_0 = w_p;
   if ||s_1 − s_0|| < tol, stop;
   otherwise generate s_{j+1} = G(s_j), j = 0, …, 2m − 1,
   and compute the approximation t_m = ε_{2m} by VEA or TEA.
4. Set x_0 = t_m, k = k + 1 and go to 2.
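To make the restarted scheme concrete, here is a minimal Python/NumPy sketch of the cycle above. Since VEA and TEA are not reproduced here, a plain least-squares form of RRE stands in for the extrapolation step; the function names, the default p, and the tolerance handling are illustrative assumptions, not the implementations of [32] or [6].

```python
import numpy as np

def rre(s):
    """Reduced rank extrapolation from s_0, ..., s_q: t = sum_j gamma_j s_j
    with sum(gamma) = 1 and ||sum_j gamma_j (s_{j+1} - s_j)|| minimal.
    Plain least squares, standing in for the QR implementation of [32]."""
    S = np.column_stack(s)            # N x (q+1) matrix of the terms
    dS = np.diff(S, axis=1)           # differences Delta s_0, ..., Delta s_{q-1}
    M = dS[:, 1:] - dS[:, :1]         # eliminate the constraint sum(gamma) = 1
    g, *_ = np.linalg.lstsq(M, -dS[:, 0], rcond=None)
    gamma = np.concatenate(([1.0 - g.sum()], g))
    return S[:, :-1] @ gamma          # t uses s_0, ..., s_{q-1}

def restarted_extrapolation(G, x0, m, p=2, tol=1e-10, max_cycles=30):
    """Steps 1-4 of the algorithm above; G(x) = B @ x + b for linear problems."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_cycles):
        w = x
        for _ in range(p):            # 2. basic iterations
            w = G(w)
        s = [w, G(w)]                 # 3. extrapolation phase: s_0 = w_p
        if np.linalg.norm(s[1] - s[0]) < tol:
            return s[0], k
        for _ in range(2 * m - 1):    # generate s_2, ..., s_2m
            s.append(G(s[-1]))
        x = rre(s)                    # extrapolated t_m; 4. restart from it
    return x, max_cycles
```

For a linear problem one passes G(x) = Bx + b; each cycle then restarts from the extrapolated approximation exactly as in step 4.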
K. Jbilou, H. Sadok / Journal of Computational and Applied Mathematics 122 (2000) 149“165 161

Table 1
Memory requirements and computational costs (multiplications and additions) for RRE, MPE, MMPE, VEA and TEA

                                     RRE        MPE        MMPE       VEA         TEA
Multiplications and additions        2Nk²       2Nk²       Nk²        10Nk²       10Nk²
Mat–Vec with A or evaluation of G    k + 1      k + 1      k + 1      2k          2k
Memory locations                     (k + 1)N   (k + 1)N   (k + 1)N   (2k + 1)N   (2k + 1)N

4. Operation count and storage

Table 1 lists the operation count (multiplications and additions) and the storage required to compute the approximation t_k with RRE, MPE and MMPE and the approximation ε_{2k}^{(0)} with VEA and TEA. In practice, the dimension N of the vectors is very large and k is small, so we list only the dominant computational effort.
For RRE and MPE, we used the QR-implementation given in [32], whereas the LU-implementation
developed in [19] was used for MMPE.
To compute t_k with the three polynomial vector extrapolation methods, the vectors s_0, s_1, …, s_{k+1} are required, while the terms s_0, …, s_{2k} are needed for the computation of ε_{2k}^{(0)} with VEA and TEA. So, when solving linear systems of equations, k + 1 matrix–vector (Mat–Vec) products are required with RRE, MPE and MMPE, and 2k matrix–vector products are needed with VEA and TEA. For nonlinear problems the comparison remains valid with "Mat–Vec" replaced by "evaluation of the function G".
All these operations are listed in Table 1.
As seen in Table 1, the vector and topological ε-algorithms are more expensive in terms of work and storage than the polynomial vector extrapolation methods, namely RRE, MPE and MMPE.
The implementations given in [32,19] for RRE, MPE and MMPE allow us to compute the norm of the residual exactly at each iteration for linear systems, or to estimate it for nonlinear problems, without actually computing the residuals. This reduces the cost of the implementation and is used to stop the algorithms when the required accuracy is reached.

5. Numerical examples

We report in this section a few numerical examples comparing the performance of RRE, MPE, MMPE, VEA and TEA. For RRE and MPE we used the program given in [32], and for MMPE we used the implementation developed in [19]. The programs used for VEA and TEA were taken from [6].
The tests were run in double precision on a Sun Enterprise 450 server using the standard F77 compiler. We first considered one example of a linear system and one example of a nonlinear system. In these examples the starting point was chosen as x_0 = rand(N, 1), where the function rand creates an N-vector with coefficients uniformly distributed in [0, 1].
For TEA the vector y was also chosen as y = rand(N, 1).

Table 2

                      MMPE       MPE       RRE       VEA      TEA
Number of restarts    28         25        26        30       30
Residual norms        2.16d-09   2.d-09    1.d-09    9d-04    3d-01
CPU time              40         80        83        230      206

5.1. Example 1

In the first example, we derived the test matrix by discretizing the boundary value problem [1]

−u_xx(x, y) − u_yy(x, y) + 2p_1 u_x(x, y) + 2p_2 u_y(x, y) − p_3 u(x, y) = f(x, y) on Ω,

u(x, y) = 1 + xy on ∂Ω,

by finite differences, where Ω is the unit square {(x, y) ∈ R², 0 ≤ x, y ≤ 1} and p_1, p_2, p_3 are positive constants. The right-hand-side function f(x, y) was chosen so that the true solution is u(x, y) = 1 + xy in Ω. We used centred differences to discretize this problem on a uniform (n + 2) × (n + 2) grid (including the grid points on the boundary). We get a matrix of size N = n².
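As a concrete illustration of this discretization, the coefficient matrix can be assembled with Kronecker products. The following dense NumPy sketch (the function name, the scaling of the stencil by h², and the ordering of the unknowns are assumptions for illustration) is not the code used for the experiments.

```python
import numpy as np

def convection_diffusion_matrix(n, p1, p2, p3):
    """Centred-difference matrix, scaled by h^2, for
       -u_xx - u_yy + 2*p1*u_x + 2*p2*u_y - p3*u
    on an n x n interior grid with h = 1/(n+1).
    Dense for clarity; a sparse format would be used for n = 70 (N = 4900)."""
    h = 1.0 / (n + 1)
    I = np.eye(n)
    # 1-D operators: second difference and centred first difference
    T = 2 * I - np.eye(n, k=1) - np.eye(n, k=-1)   # h^2 * (-d^2/dx^2)
    C = np.eye(n, k=1) - np.eye(n, k=-1)           # 2h * (d/dx), centred
    Dx = T + p1 * h * C                            # h^2 * (-u_xx + 2*p1*u_x)
    Dy = T + p2 * h * C                            # h^2 * (-u_yy + 2*p2*u_y)
    return np.kron(I, Dx) + np.kron(Dy, I) - p3 * h * h * np.eye(n * n)
```

With p_1 = p_2 = p_3 = 0 the matrix reduces to the standard symmetric five-point Laplacian; the convection terms make it nonsymmetric.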
We applied the extrapolation methods to the sequence (s_j) defined as in [14] by

s_{j+1} = B_ω s_j + c_ω,   (5.1)

where

c_ω = ω(2 − ω)(D − ωU)⁻¹ D(D − ωL)⁻¹ b,   (5.2)

B_ω = (D − ωU)⁻¹ (ωL + (1 − ω)D)(D − ωL)⁻¹ (ωU + (1 − ω)D),   (5.3)

and A = D − L − U is the classical splitting. When (s_j) converges, the fixed point of iteration (5.1) is the solution of the SSOR-preconditioned system (I − B_ω)x = c_ω.
The stopping criterion for this linear problem was ||(I − B_ω)x_k − c_ω|| < 10⁻⁸.
We let n = 70 and chose p_1 = 1, p_2 = 1 and p_3 = 10. For this experiment, the system has dimension 4900 × 4900. The width of extrapolation is m = 20 and ω = 0.5.
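The iteration (5.1)–(5.3) can be sketched directly from these formulas. The following NumPy fragment is illustrative: it forms B_ω and c_ω with explicit inverses for clarity, whereas a practical implementation for the 4900 × 4900 problem would use sparse triangular solves.

```python
import numpy as np

def ssor_iteration(A, b, omega):
    """Return B_w and c_w of (5.1)-(5.3) for the splitting A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)               # strictly lower part, so that A = D - L - U
    U = -np.triu(A, 1)                # strictly upper part
    M1 = np.linalg.inv(D - omega * L)
    M2 = np.linalg.inv(D - omega * U)
    B = M2 @ (omega * L + (1 - omega) * D) @ M1 @ (omega * U + (1 - omega) * D)
    c = omega * (2 - omega) * M2 @ D @ M1 @ b
    return B, c
```

The fixed point of x = B_ω x + c_ω is the solution of Ax = b, which is the consistency property used in the text.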
In Table 2, we give the l₂-norm of the residuals obtained at the end of each cycle and the CPU time for the five methods (MMPE, MPE, RRE, VEA and TEA).
A maximum of 30 cycles was allowed for all the algorithms. Note that for this experiment TEA failed to converge, and for VEA we obtained only a residual norm of 9 · 10⁻⁴.

5.2. Example 2

We now consider the following nonlinear partial differential equation:

−u_xx(x, y) − u_yy(x, y) + 2p_1 u_x(x, y) + 2p_2 u_y(x, y) − p_3 u(x, y) + 5e^{u(x,y)} = f(x, y) on Ω,

u(x, y) = 1 + xy on ∂Ω,

Table 3

                      MMPE      MPE       RRE       VEA       TEA
Number of restarts    20        18        19        22        30
Residual norms        2.9d-09   9.2d-08   2.8d-08   9.6d-09   2.9d-05
CPU time              13.59     13.90     14.72     51.24     65.90

over the unit square of R² with Dirichlet boundary condition. This problem is discretized by a standard five-point central-difference formula on a uniform grid of size h = 1/(n + 1). We get the following nonlinear system of N equations, where N = n²:

AX + 5e^X − b = 0.   (5.4)

The right-hand-side function f(x, y) was chosen so that the true solution is u(x, y) = 1 + xy in Ω. The sequence (s_j) is generated by the nonlinear SSOR method. Hence we have s_{j+1} = G(s_j), with

G(X) = B_ω X + ω(2 − ω)(D − ωU)⁻¹ D(D − ωL)⁻¹ (b − 5e^X),

where the matrix B_ω is given in (5.3).
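A sketch of this map G in NumPy, built from the same splitting A = D − L − U as in (5.2) and (5.3); the dense inverses and the function name are illustrative assumptions, not the experimental code.

```python
import numpy as np

def nonlinear_ssor_map(A, b, omega):
    """Return G with G(X) = B_w X + w(2-w)(D-wU)^{-1} D (D-wL)^{-1} (b - 5 e^X),
    for the splitting A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M1 = np.linalg.inv(D - omega * L)
    M2 = np.linalg.inv(D - omega * U)
    B = M2 @ (omega * L + (1 - omega) * D) @ M1 @ (omega * U + (1 - omega) * D)
    K = omega * (2 - omega) * M2 @ D @ M1     # linear part applied to (b - 5 e^X)
    return lambda X: B @ X + K @ (b - 5.0 * np.exp(X))
```

Iterating X ← G(X) reproduces the nonlinear SSOR sequence (s_j) to which the extrapolation methods are applied; a fixed point of G solves (5.4).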
In the following tests, we compare the five extrapolation methods using SSOR preconditioning. The stopping criterion was ||x_k − G(x_k)|| < 10⁻⁸.
In our tests, we chose n = 70, and hence the system has dimension N = 4900. With m = 20 and ω = 0.5, we obtain the results of Table 3.
The convergence of the five extrapolation methods is rather sensitive to the choice of the parameter ω. We note that for this experiment the TEA algorithm stagnates after 30 restarts. The VEA algorithm requires more CPU time than the three polynomial extrapolation methods.

6. Conclusion

We have presented a review of the best-known vector extrapolation methods, namely the polynomial ones (MMPE, RRE and MPE) and the ε-algorithms (TEA and VEA). We also gave some numerical comparisons of these methods. The numerical tests presented in this paper show the advantage of the polynomial vector methods. We also note that VEA is numerically more stable than TEA. However, these two algorithms require more storage and operations than the polynomial methods. The advantage of vector extrapolation methods over the classical Krylov subspace methods is that they generalize in a straightforward manner from linear to nonlinear problems.


Acknowledgements

We would like to thank M. Redivo-Zaglia and C. Brezinski for providing us their programs for the ε-algorithms, and A. Sidi for the RRE and MPE programs. We also wish to thank the referees for their valuable comments and suggestions.


References

[1] Z. Bai, D. Hu, L. Reichel, A Newton basis GMRES implementation, IMA J. Numer. Anal. 14 (1994) 563–581.
[2] P. Brown, Y. Saad, Hybrid Krylov methods for nonlinear systems of equations, SIAM J. Sci. Statist. Comput. 11 (1990) 450–481.
[3] C. Brezinski, Généralisation de la transformation de Shanks, de la table de Padé et de l'ε-algorithme, Calcolo 12 (1975) 317–360.
[4] C. Brezinski, Padé-type Approximation and General Orthogonal Polynomials, International Series of Numerical Methods, Vol. 50, Birkhäuser, Basel, 1980.
[5] C. Brezinski, Recursive interpolation, extrapolation and projection, J. Comput. Appl. Math. 9 (1983) 369–376.
[6] C. Brezinski, M. Redivo Zaglia, Extrapolation Methods, Theory and Practice, North-Holland, Amsterdam, 1991.
[7] C. Brezinski, A.C. Rieu, The solution of systems of equations using the vector ε-algorithm and an application to boundary value problems, Math. Comp. 28 (1974) 731–741.
[8] S. Cabay, L.W. Jackson, A polynomial extrapolation method for finding limits and antilimits of vector sequences, SIAM J. Numer. Anal. 13 (1976) 734–752.
[9] R.P. Eddy, Extrapolation to the limit of a vector sequence, in: P.C.C. Wang (Ed.), Information Linkage Between Applied Mathematics and Industry, Academic Press, New York, 1979, pp. 387–396.
[10] W.D. Ford, A. Sidi, Recursive algorithms for vector extrapolation methods, Appl. Numer. Math. 4 (6) (1988).
[11] W. Gander, G.H. Golub, D. Gruntz, Solving linear equations by extrapolation, in: J.S. Kovalic (Ed.), Supercomputing, NATO ASI Series, Springer, Berlin, 1990.
[12] E. Gekeler, On the solution of systems of equations by the epsilon algorithm of Wynn, Math. Comp. 26 (1972).
[13] P.R. Graves-Morris, Vector valued rational interpolants I, Numer. Math. 42 (1983) 331–348.
[14] L. Hageman, D. Young, Applied Iterative Methods, Academic Press, New York, 1981.
[15] K. Jbilou, A general projection algorithm for solving systems of linear equations, Numer. Algorithms 4 (1993).
[16] K. Jbilou, On some vector extrapolation methods, Technical Report, ANO(305), Université de Lille 1, France, 1993.
[17] K. Jbilou, H. Sadok, Some results about vector extrapolation methods and related fixed point iterations, J. Comput. Appl. Math. 36 (1991) 385–398.
[18] K. Jbilou, H. Sadok, Analysis of some vector extrapolation methods for linear systems, Numer. Math. 70 (1995).
[19] K. Jbilou, H. Sadok, LU-implementation of the modified minimal polynomial extrapolation method, IMA J. Numer. Anal. 19 (1999) 549–561.
[20] K. Jbilou, H. Sadok, Hybrid vector sequence transformations, J. Comput. Appl. Math. 81 (1997) 257–267.
[21] C. Lanczos, Solution of systems of linear equations by minimized iterations, J. Res. Natl. Bur. Stand. 49 (1952).
[22] H. Le Ferrand, The quadratic convergence of the topological ε-algorithm for systems of nonlinear equations, Numer. Algorithms 3 (1992) 273–284.
[23] J.B. McLeod, A note on the ε-algorithm, Computing 7 (1971) 17–24.
[24] M. Mešina, Convergence acceleration for the iterative solution of x = Ax + f, Comput. Methods Appl. Mech. Eng. 10 (2) (1977) 165–173.
[25] B.P. Pugatchev, Acceleration of the convergence of iterative processes and a method for solving systems of nonlinear equations, USSR Comput. Math. Math. Phys. 17 (1978) 199–207.
[26] Y. Saad, Krylov subspace methods for solving large unsymmetric linear systems, Math. Comp. 37 (1981) 105–126.
[27] Y. Saad, M.H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput. 7 (1986) 856–869.
[28] H. Sadok, Quasilinear vector extrapolation methods, Linear Algebra Appl. 190 (1993) 71–85.
[29] H. Sadok, About Henrici's transformation for accelerating vector sequences, J. Comput. Appl. Math. 29 (1990).
[30] H. Sadok, Méthodes de projection pour les systèmes linéaires et non linéaires, Habilitation Thesis, Université de Lille 1, France, 1994.

[31] D. Shanks, Nonlinear transformations of divergent and slowly convergent sequences, J. Math. Phys. 34 (1955) 1–42.
[32] A. Sidi, Efficient implementation of minimal polynomial and reduced rank extrapolation methods, J. Comput. Appl. Math. 36 (1991) 305–337.
[33] A. Sidi, Convergence and stability of minimal polynomial and reduced rank extrapolation algorithms, SIAM J. Numer. Anal. 23 (1986) 197–209.
[34] A. Sidi, Extrapolation vs. projection methods for linear systems of equations, J. Comput. Appl. Math. 22 (1) (1988).
[35] A. Sidi, W.F. Ford, D.A. Smith, Acceleration of convergence of vector sequences, SIAM J. Numer. Anal. 23 (1986).
[36] D.A. Smith, W.F. Ford, A. Sidi, Extrapolation methods for vector sequences, SIAM Rev. 29 (1987) 199–233; Correction, SIAM Rev. 30 (1988) 623–624.
[37] R.C.E. Tan, Implementation of the topological ε-algorithm, SIAM J. Sci. Statist. Comput. 9 (1988) 839–848.
[38] J.H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
[39] P. Wynn, On a device for computing the e_m(S_n) transformation, MTAC 10 (1956) 91–96.
[40] P. Wynn, Acceleration technique for iterated vector and matrix problems, Math. Comp. 16 (1962) 301–322.