

7. Variation diminution and computer-aided geometric design

An n × n matrix A is said to be sign-regular (SR) if for each 1 ≤ k ≤ n all its minors of order k
have the same (non-strict) sign (in the sense that the product of any two of them is greater than or
equal to zero). The matrix is strictly sign-regular (SSR) if for each 1 ≤ k ≤ n all its minors of order
k are different from zero and have the same sign. In [27] a test for strict sign regularity is given.
The importance of these types of matrices comes from their variation diminishing properties. By
a sign sequence of a vector x = (x1, …, xn)^T ∈ R^n we understand any signature sequence ε = (ε1, …, εn) for which
εi xi = |xi|, i = 1, 2, …, n. The number of sign changes of x associated to ε, denoted by C(ε), is the
number of indices i such that εi εi+1 < 0, 1 ≤ i ≤ n − 1. The maximum (resp. minimum) variation of
signs, V+(x) (resp. V−(x)), is by definition the maximum (resp. minimum) of C(ε) when ε runs
over all sign sequences of x. Let us observe that if xi ≠ 0 for all i, then V+(x) = V−(x), and this
value is usually called the exact variation of signs. The next result (see [2, Theorems 5.3 and 5.6])
characterizes sign-regular and strictly sign-regular matrices in terms of their variation diminishing
properties.
Let A be an n × n nonsingular matrix. Then:
(i) A is SR ⇔ V−(Ax) ≤ V−(x) ∀x ∈ R^n.
(ii) A is SR ⇔ V+(Ax) ≤ V+(x) ∀x ∈ R^n.
(iii) A is SSR ⇔ V+(Ax) ≤ V−(x) ∀x ∈ R^n \ {0}.
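To make these definitions concrete, the following minimal NumPy sketch (the helper names v_minus and v_plus and the test matrix are our own illustrative choices, not from the paper) computes V−(x) and V+(x) and checks the inequalities of (i) and (ii) for the symmetric Pascal matrix, a standard example of a totally positive, hence sign-regular, matrix.

```python
import numpy as np

def v_minus(x, tol=1e-12):
    # Minimum variation: a zero entry can always copy a neighbour's sign,
    # so drop zeros and count strict sign changes in what remains.
    signs = [np.sign(v) for v in x if abs(v) > tol]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def v_plus(x, tol=1e-12):
    # Maximum variation: a zero entry may take either sign; a tiny dynamic
    # program over the admissible signs finds the assignment with most changes.
    options = [(1,) if v > tol else (-1,) if v < -tol else (-1, 1) for v in x]
    best = {s: 0 for s in options[0]}
    for opts in options[1:]:
        best = {s: max(c + (1 if s * p < 0 else 0) for p, c in best.items())
                for s in opts}
    return max(best.values())

A = np.array([[1, 1, 1], [1, 2, 3], [1, 3, 6]], dtype=float)  # totally positive
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(3)
    assert v_minus(A @ x) <= v_minus(x)   # property (i)
    assert v_plus(A @ x) <= v_plus(x)     # property (ii)
```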
The above matricial definitions lead to the corresponding definitions for systems of functions. A
system of functions (u0, …, un) is sign-regular if all its collocation matrices are sign-regular of the
same kind. The system is strictly sign-regular if all its collocation matrices are strictly sign-regular
of the same kind. Here a collocation matrix is defined to be a matrix whose (i, j)-entry is of the
form uj(xi), for any system of strictly increasing points xi.
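As a brute-force illustration (the quadratic Bernstein functions, the nodes and the tolerance below are our own choices), one can build a collocation matrix at three increasing points and verify that, for each order k, all its minors share a sign:

```python
import numpy as np
from itertools import combinations
from math import comb

n = 2
u = [lambda t, i=i: comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]

xs = [0.1, 0.5, 0.9]                                  # strictly increasing points
M = np.array([[uj(x) for uj in u] for x in xs])       # (i, j)-entry u_j(x_i)

for k in range(1, len(xs) + 1):
    minors = [np.linalg.det(M[np.ix_(r, c)])
              for r in combinations(range(3), k)
              for c in combinations(range(3), k)]
    same_sign = all(m >= -1e-12 for m in minors) or all(m <= 1e-12 for m in minors)
    print(k, same_sign)                               # True for every order k
```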
Sign-regular systems have important applications in CAGD. Given u0, …, un, functions defined on
[a, b], and P0, …, Pn ∈ R^k, we may define a curve γ(t) by

γ(t) = ∑_{i=0}^{n} ui(t) Pi.

The points P0, …, Pn are called control points, because we expect to modify the shape of the curve
by changing these points adequately. The polygon with vertices P0, …, Pn is called the control polygon
of γ.
In CAGD the functions u0, …, un are usually nonnegative and normalized (∑_{i=0}^{n} ui(t) = 1 for all t ∈ [a, b]).
In this case they are called blending functions. These requirements imply that the curve lies in the
convex hull of the control polygon (convex hull property). Clearly, (u0, …, un) is a system of blending
functions if and only if all the collocation matrices are stochastic (that is, they are nonnegative
matrices such that the elements of each row sum up to 1). For design purposes, it is desirable that
the curve imitates the control polygon and that the control polygon even “exaggerates” the shape of
the curve, and this holds when the system satisfies variation diminishing properties. If (u0, …, un) is
a sign-regular system of blending functions then the curve preserves many shape properties of the
control polygon, due to the variation diminishing properties of (u0, …, un). For instance, any line
intersects the curve no more often than it intersects the control polygon.
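As a concrete illustration (the cubic Bernstein functions and the control points below are our own choices), the following sketch evaluates a curve γ(t) = ∑ ui(t)Pi with blending functions and checks the convex hull property through a simple bounding-box test:

```python
import numpy as np
from math import comb

P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, -1.0], [3.0, 0.5]])   # control points in R^2
n = len(P) - 1

def u(i, t):
    # Bernstein blending functions: nonnegative on [0, 1] and summing to 1.
    return comb(n, i) * t**i * (1 - t)**(n - i)

ts = np.linspace(0.0, 1.0, 200)
curve = np.array([sum(u(i, t) * P[i] for i in range(n + 1)) for t in ts])

# Convex hull property: the curve cannot leave the bounding box of the control points.
print(np.all(curve.min(axis=0) >= P.min(axis=0) - 1e-12))   # True
print(np.all(curve.max(axis=0) <= P.max(axis=0) + 1e-12))   # True
```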
A characterization of SSR matrices A by the Neville elimination of A and of some submatrices
of A is obtained in [26, Theorem 4.1].
A system of functions (u0, …, un) is said to be totally positive if all its collocation matrices
are totally positive. The system is normalized totally positive (NTP) if it is totally positive and
∑_{i=0}^{n} ui = 1.


Normalized totally positive systems satisfy an interesting shape-preserving property, which is very
convenient for design purposes and which we call the endpoint interpolation property: the initial and
final endpoints of the curve and the initial and final endpoints (respectively) of the control polygon
coincide. In summary, these systems are characterized by the fact that they always generate
curves satisfying simultaneously the convex hull, variation diminishing and endpoint interpolation
properties.
Now the following question arises. Given a system of functions used in CAGD to generate
curves, does there exist a basis of the space generated by that system with optimal shape preserving
properties? Or equivalently, is there a basis such that the generated curves imitate the form of the
corresponding control polygon better than for any other basis?
In the space of polynomials of degree less than or equal to n on a compact interval, the Bernstein
basis is optimal. This was conjectured by Goodman and Said in [30], and it was proved in [11].
In [12], there is also an affirmative answer to the above questions for any space with a TP basis.
Moreover, Neville elimination provides a constructive way to obtain optimal bases. In the space of
polynomial splines, B-splines form the optimal basis.
Since the product of TP matrices is a TP matrix, if (u0, …, un) is a TP system of functions and A
is a TP matrix of order n + 1, then the new system (u0, …, un)A is again a TP system (which satisfies
a “stronger” variation diminishing property than (u0, …, un)). If we obtain from a basis (u0, …, un),
in this way, all the totally positive bases of the space, then (u0, …, un) will be the “least variation
diminishing” basis of the space. In consequence, the control polygons with respect to (u0, …, un)
will imitate the form of the curve better than the control polygons with respect to other bases of
the space. Therefore, we may reformulate the problem of finding an optimal basis (b0, …, bn) in the
following way:
Given a vector space U with a TP basis, is there a TP basis (b0, …, bn) of U such that, for any
TP basis (v0, …, vn) of U, there exists a TP matrix K satisfying (v0, …, vn) = (b0, …, bn)K?
The existence of such an optimal basis (b0, …, bn) was proved in [12], where it was called a B-basis.
In the same paper, a method of construction, inspired by the Neville elimination process, was given.
As mentioned above, Bernstein polynomials and B-splines are examples of B-bases.
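As a concrete illustration of the defining property (the choice of bases and the brute-force minor check are ours, not the construction of the paper), the monomials 1, t, …, t^n form a TP basis of the polynomials on [0, 1], and the identity t^j = ∑_{i=j}^{n} [C(i, j)/C(n, j)] B_i^n(t) expresses them in the Bernstein basis. The resulting matrix K is totally positive, as the B-basis property requires:

```python
import numpy as np
from itertools import combinations
from math import comb

n = 3
# (1, t, ..., t^n) = (B_0, ..., B_n) K,  with  K[i, j] = C(i, j)/C(n, j) for i >= j.
K = np.array([[comb(i, j) / comb(n, j) if i >= j else 0.0
               for j in range(n + 1)] for i in range(n + 1)])

# Brute-force total positivity test: every minor of K must be nonnegative.
idx = range(n + 1)
tp = all(np.linalg.det(K[np.ix_(r, c)]) >= -1e-12
         for k in range(1, n + 2)
         for r in combinations(idx, k)
         for c in combinations(idx, k))
print(tp)   # True
```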
Another point of view for B-bases is closely related to corner cutting algorithms, which play an
important role in CAGD.
Given two NTP bases (p0, …, pn) and (b0, …, bn), let K be the nonsingular matrix such that

(p0, …, pn) = (b0, …, bn)K.

Since both bases are normalized, if K is a nonnegative matrix, it is clearly stochastic.
A curve γ can be expressed in terms of both bases,

γ(t) = ∑_{i=0}^{n} Bi bi(t) = ∑_{i=0}^{n} Pi pi(t),   t ∈ [a, b],


and the matrix K gives the relationship between both control polygons

(B0, …, Bn)^T = K(P0, …, Pn)^T.
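A small numerical sanity check (the matrix K, the bases and the control points below are illustrative choices of ours) confirms that control polygons related in this way describe the same curve:

```python
import numpy as np
from math import comb

n = 2
def b(i, t):
    # Quadratic Bernstein basis, playing the role of (b_0, ..., b_n).
    return comb(n, i) * t**i * (1 - t)**(n - i)

# A nonsingular, stochastic, totally positive matrix K (each row sums to 1).
K = np.array([[1.0, 0.0, 0.0],
              [0.3, 0.7, 0.0],
              [0.1, 0.4, 0.5]])

def p(j, t):
    # Second basis: (p_0, ..., p_n) = (b_0, ..., b_n) K,  i.e.  p_j = sum_i K[i, j] b_i.
    return sum(K[i, j] * b(i, t) for i in range(n + 1))

P = np.array([[0.0, 0.0], [1.0, 1.5], [2.0, 0.0]])   # control points w.r.t. (p_j)
B = K @ P                                            # (B_0, ..., B_n)^T = K (P_0, ..., P_n)^T

for t in np.linspace(0.0, 1.0, 11):
    g_b = sum(B[i] * b(i, t) for i in range(n + 1))
    g_p = sum(P[j] * p(j, t) for j in range(n + 1))
    assert np.allclose(g_b, g_p)                     # both control polygons give the same curve
```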


An elementary corner cutting is a transformation which maps any polygon P0 · · · Pn into another
polygon B0 · · · Bn defined by:

Bj = Pj,  j ≠ i,
Bi = (1 − λ)Pi + λPi+1,  for one i ∈ {0, …, n − 1},   (17)

or

Bj = Pj,  j ≠ i,
Bi = (1 − λ)Pi + λPi−1,  for one i ∈ {1, …, n}.   (18)

Here λ ∈ (0, 1).
A corner-cutting algorithm is the algorithmic description of a corner cutting transformation, which
is any composition of elementary corner cutting transformations.
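A minimal NumPy sketch of (17)–(18) (the function name, the polygon and the weights are illustrative; λ matches the weight in the formulas above):

```python
import numpy as np

def elementary_corner_cut(P, i, lam, towards_next=True):
    # Replace vertex P_i by a convex combination of P_i and one of its neighbours,
    # leaving every other vertex unchanged: (17) cuts towards P_{i+1}, (18) towards P_{i-1}.
    B = P.copy()
    neighbour = P[i + 1] if towards_next else P[i - 1]
    B[i] = (1 - lam) * P[i] + lam * neighbour
    return B

P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
# A corner cutting transformation is a composition of elementary corner cuts.
B = elementary_corner_cut(elementary_corner_cut(P, 1, 0.25), 2, 0.4, towards_next=False)
print(B)
```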
Let us assume now that the matrix K above is TP. Since it is stochastic, nonsingular and TP, it can
be factorized as a product of bidiagonal nonnegative matrices (as we have mentioned in Section 6),
which can be interpreted as a corner cutting transformation. Such factorizations are closely related
to the Neville elimination of the matrix [28]. From the variation diminution produced by the totally
positive matrices of the process, it can be deduced that the curve imitates the form of the
control polygon B0 · · · Bn better than that of the control polygon P0 · · · Pn. Therefore, we see again that an
NTP basis (b0, …, bn) of a space U has optimal shape-preserving properties if for any other NTP
basis (p0, …, pn) of U there exists a (stochastic) TP matrix K such that
(p0, …, pn) = (b0, …, bn)K.   (19)
Hence, a basis has optimal shape preserving properties if and only if it is a normalized B-basis.
Neville elimination has also inspired the construction of B-bases in [11,12]. Many of these results
and other important properties and applications of totally positive matrices have been collected, as
we have already said in [28, Section 6].

References

[1] A.C. Aitken, On interpolation by iteration of proportional parts without the use of differences, Proc. Edinburgh Math. Soc. 3 (1932) 56–76.
[2] T. Ando, Totally positive matrices, Linear Algebra Appl. 90 (1987) 165–219.
[3] B. Beckermann, G. Muhlbach, A general determinantal identity of Sylvester type and some applications, Linear Algebra Appl. 197–198 (1994) 93–112.
[4] Cl. Brezinski, The Muhlbach–Neville–Aitken-algorithm and some extensions, BIT 20 (1980) 444–451.
[5] Cl. Brezinski, A general extrapolation algorithm, Numer. Math. 35 (1980) 175–187.
[6] Cl. Brezinski, Convergence acceleration during the 20th century, this volume, J. Comput. Appl. Math. 122 (2000) 1–21.
[7] Cl. Brezinski, Recursive interpolation, extrapolation and projection, J. Comput. Appl. Math. 9 (1983) 369–376.
[8] Cl. Brezinski, M. Redivo Zaglia, Extrapolation Methods, Theory and Practice, North-Holland, Amsterdam, 1991.
[9] R.A. Brualdi, H. Schneider, Determinantal identities: Gauss, Schur, Cauchy, Sylvester, Kronecker, Jacobi, Binet, Laplace, Muir and Cayley, Linear Algebra Appl. 52/53 (1983) 769–791.
[10] R.A. Brualdi, H. Schneider, Determinantal identities revisited, Linear Algebra Appl. 59 (1984) 183–211.
[11] J.M. Carnicer, J.M. Peña, Shape preserving representations and optimality of the Bernstein basis, Adv. Comput. Math. 1 (1993) 173–196.


[12] J.M. Carnicer, J.M. Peña, Totally positive bases for shape preserving curve design and optimality of B-splines, Comput. Aided Geom. Design 11 (1994) 633–654.
[13] R.W. Cottle, Manifestations of the Schur complement, Linear Algebra Appl. 8 (1974) 189–211.
[14] C. Cryer, The LU-factorization of totally positive matrices, Linear Algebra Appl. 7 (1973) 83–92.
[15] C. Cryer, Some properties of totally positive matrices, Linear Algebra Appl. 15 (1976) 1–25.
[16] D.K. Faddeev, V.N. Faddeeva, Computational Methods of Linear Algebra, Freeman, San Francisco, 1963.
[17] M. Fekete, G. Pólya, Über ein Problem von Laguerre, Rend. C.M. Palermo 34 (1912) 89–120.
[18] M. Gasca, A. López-Carmona, V. Ramírez, A generalized Sylvester's identity on determinants and its application to interpolation problems, in: W. Schempp, K. Zeller (Eds.), Multivariate Approximation Theory II, ISNM, Vol. 61, Birkhäuser, Basel, 1982, pp. 171–184.
[19] M. Gasca, A. López-Carmona, A general interpolation formula and its application to multivariate interpolation, J. Approx. Theory 34 (1982) 361–374.
[20] M. Gasca, E. Lebrón, Elimination techniques and interpolation, J. Comput. Appl. Math. 19 (1987) 125–132.
[21] M. Gasca, G. Muhlbach, Generalized Schur-complements and a test for total positivity, Appl. Numer. Math. 3 (1987) 215–232.
[22] M. Gasca, G. Muhlbach, Generalized Schur-complements, Publicaciones del Seminario Matematico Garcia de Galdeano, Serie II, Seccion 1, No. 17, Universidad de Zaragoza, 1984.
[23] M. Gasca, J.M. Peña, Neville elimination and approximation theory, in: S.P. Singh (Ed.), Approximation Theory, Wavelets and Applications, Kluwer Academic Publishers, Dordrecht, 1995, pp. 131–151.
[24] M. Gasca, J.M. Peña, Total positivity and Neville elimination, Linear Algebra Appl. 165 (1992) 25–44.
[25] M. Gasca, J.M. Peña, On the characterization of TP and STP matrices, in: S.P. Singh (Ed.), Approximation Theory, Spline Functions and Applications, Kluwer Academic Publishers, Dordrecht, 1992, pp. 357–364.
[26] M. Gasca, J.M. Peña, A matricial description of Neville elimination with applications to total positivity, Linear Algebra Appl. 202 (1994) 33–54.
[27] M. Gasca, J.M. Peña, A test for strict sign-regularity, Linear Algebra Appl. 197–198 (1994) 133–142.
[28] M. Gasca, J.M. Peña, Corner cutting algorithms and totally positive matrices, in: P.J. Laurent, A. Le Méhauté, L.L. Schumaker (Eds.), Curves and Surfaces II, A.K. Peters, Wellesley, MA, 1994, pp. 177–184.
[29] M. Gasca, C.A. Micchelli (Eds.), Total Positivity and its Applications, Kluwer Academic Publishers, Dordrecht, 1996.
[30] T.N.T. Goodman, H.B. Said, Shape preserving properties of the generalized Ball basis, Comput. Aided Geom. Design 8 (1991) 115–121.
[31] T. Havie, Generalized Neville type extrapolation schemes, BIT 19 (1979) 204–213.
[32] T. Havie, Remarks on a unified theory of classical and generalized interpolation and extrapolation, BIT 21 (1981) 465–474.
[33] T. Havie, Remarks on the Muhlbach–Neville–Aitken-algorithm, Math. and Comp. No. 2/80, Department of Numerical Mathematics, The University of Trondheim, 1980.
[34] E. Haynsworth, Determination of the inertia of a partitioned Hermitian matrix, Linear Algebra Appl. 1 (1968) 73–81.
[35] E. Haynsworth, On the Schur Complement, Basel Mathematics Notes No. 20, June 1968.
[36] S. Karlin, Total Positivity, Stanford University Press, Stanford, 1968.
[37] S. Karlin, W.J. Studden, Tchebycheff Systems: with Applications in Analysis and Statistics, Interscience, New York, 1966.
[38] G. Muhlbach, Neville–Aitken algorithms for interpolation by functions of Čebyšev-systems in the sense of Newton and in a generalized sense of Hermite, in: A.G. Law, B.N. Sahney (Eds.), Theory of Approximation, with Applications, Proceedings of the International Congress on Approximation Theory in Calgary, 1975, Academic Press, New York, 1976, pp. 200–212.
[39] G. Muhlbach, The general Neville–Aitken-algorithm and some applications, Numer. Math. 31 (1978) 97–110.
[40] G. Muhlbach, On two general algorithms for extrapolation with applications to numerical differentiation and integration, in: M.G. de Bruin, H. van Rossum (Eds.), Padé Approximation and its Applications, Lecture Notes in Mathematics, Vol. 888, Springer, Berlin, 1981, pp. 326–340.
[41] G. Muhlbach, Extrapolation algorithms as elimination techniques with applications to systems of linear equations, Report 152, Institut fur Mathematik der Universitat Hannover, 1982, pp. 1–47.
[42] G. Muhlbach, Algorithmes d'extrapolation, Publication ANO 118, Université de Lille 1, January 1984.


[43] G. Muhlbach, Sur une identité généralisée de Sylvester, Publication ANO 119, Université de Lille 1, January 1984.
[44] G. Muhlbach, M. Gasca, A generalization of Sylvester's identity on determinants and some applications, Linear Algebra Appl. 66 (1985) 221–234.
[45] G. Muhlbach, Two composition methods for solving certain systems of linear equations, Numer. Math. 46 (1985) 339–349.
[46] G. Muhlbach, A recurrence formula for generalized divided differences and some applications, J. Approx. Theory 9 (1973) 165–172.
[47] G. Muhlbach, On extending determinantal identities, Publicaciones del Seminario Matematico Garcia de Galdeano, Serie II, Seccion 1, No. 139, Universidad de Zaragoza, 1987.
[48] G. Muhlbach, Linear and quasilinear extrapolation algorithms, in: R. Vichnevetsky, J. Vignes (Eds.), Numerical Mathematics and Applications, Elsevier, North-Holland, Amsterdam, IMACS, 1986, pp. 65–71.
[49] G. Muhlbach, M. Gasca, A test for strict total positivity via Neville elimination, in: F. Uhlig, R. Grone (Eds.), Current Trends in Matrix Theory, North-Holland, Amsterdam, 1987, pp. 225–232.
[50] G. Muhlbach, Recursive triangles, in: D. Beinov, V. Covachev (Eds.), Proceedings of the Third International Colloquium on Numerical Analysis, VSP, Utrecht, 1995, pp. 123–134.
[51] T. Muir, The law of extensible minors in determinants, Trans. Roy. Soc. Edinburgh 30 (1883) 1–4.
[52] E.H. Neville, Iterative interpolation, J. Indian Math. Soc. 20 (1934) 87–120.
[53] D.V. Ouellette, Schur complements and statistics, Linear Algebra Appl. 36 (1981) 186–295.
[54] C. Schneider, Vereinfachte Rekursionen zur Richardson-Extrapolation in Spezialfällen, Numer. Math. 24 (1975) 177–184.
[55] J.J. Sylvester, On the relation between the minor determinants of linearly equivalent quadratic functions, Philos. Mag. (4) (1851) 295–305.
[56] J.J. Sylvester, Collected Mathematical Papers, Vol. 1, Cambridge University Press, Cambridge, 1904, pp. 241–250.
Journal of Computational and Applied Mathematics 122 (2000) 51–80
www.elsevier.nl/locate/cam




The epsilon algorithm and related topics
P.R. Graves-Morris a,*, D.E. Roberts b, A. Salam c
a School of Computing and Mathematics, University of Bradford, Bradford, West Yorkshire BD7 1DP, UK
b Department of Mathematics, Napier University, Colinton Road, Edinburgh EH14 1DJ, Scotland, UK
c Laboratoire de Mathématiques Pures et Appliquées, Université du Littoral, BP 699, 62228 Calais, France
Received 7 May 1999; received in revised form 27 December 1999



Abstract
The epsilon algorithm is recommended as the best all-purpose acceleration method for slowly converging sequences. It
exploits the numerical precision of the data to extrapolate the sequence to its limit. We explain its connections with Padé
approximation and continued fractions which underpin its theoretical base. Then we review the most recent extensions of
these principles to treat application of the epsilon algorithm to vector-valued sequences, and some related topics. In this
