Matrix elimination techniques are basic tools in many mathematical problems. In this paper we will show their crucial role in some results that various authors together with us have obtained in two apparently distant problems: extrapolation and computer-aided geometric design (CAGD). A brief overview of how things developed over time will show that, once again, two results which are apparently far from each other, even obtained by different groups in different countries, are the natural consequence of a sequence of intermediate results.

Newton's interpolation formula is a classical tool for constructing an interpolating polynomial by recurrence, using divided differences. In the 1930s, Aitken [1] and Neville [52] independently derived algorithms to compute the interpolating polynomial from the solutions of two simpler interpolation problems, avoiding the explicit use of divided differences. Some papers, [38,46] among others, extended both approaches at the beginning of the 1970s to the more general setting of Chebyshev systems. Almost simultaneously, extrapolation methods were being studied and extended by several authors, such as Schneider [54], Brezinski [4,5,7], Håvie [31–33], Mühlbach [39–42,48] and Gasca and López-Carmona [19]. For a historical overview of extrapolation methods

confer Brezinski's contribution [6] to this volume and the book [8].

∗ Corresponding author. E-mail address: gasca@posta.unizar.es (M. Gasca).

38 M. Gasca, G. Mühlbach / Journal of Computational and Applied Mathematics 122 (2000) 37–50

It must be remarked that the techniques used by these authors were different, and that frequently the results obtained using one of these techniques induced some progress in the other ones, in a very cooperative way.

However, it is clear that the basic role in all these papers was played by elimination techniques. In [21] we studied general elimination strategies, among which one strategy, which we called Neville elimination, proved to be well suited to work with some special classes of matrices, in particular totally positive matrices (matrices all of whose subdeterminants are nonnegative).

This was the origin of a series of papers [24–27] where the properties of Neville elimination were carefully studied and its application to totally positive matrices led to a much better understanding of these matrices. Since one of the applications of totally positive matrices is CAGD, the results obtained for them have given rise in recent years to several other papers, such as [28,11,12]. In [11,12] Carnicer and Peña proved the optimality, in their respective spaces, of some well-known function bases, such as Bernstein polynomials and B-splines, in the context of shape-preserving representations. Neville elimination has appeared, once again, as a way to construct other bases with similar properties.

2. Extrapolation and Schur complement

A $k$-tuple $L = (\ell_1, \dots, \ell_k)$ of natural numbers, with $\ell_1 < \cdots < \ell_k$, will be called an index list of length $k$ over $\mathbb{N}$. For $I = (i_1, \dots, i_m)$ and $J = (j_1, \dots, j_n)$ two index lists over $\mathbb{N}$, we write $I \subset J$ iff every element of $I$ is an element of $J$. Generally, we shall use for index lists the same notations as for sets, emphasizing that $I \setminus J$, $I \cap J$, $I \cup J, \dots$ always have to be ordered as above.

Let $A = (a_i^j)$ be a real matrix and $I = (i_1, \dots, i_m)$ and $J = (j_1, \dots, j_n)$ index lists contained, respectively, in the index lists of rows and columns of $A$. By
$$A\binom{J}{I} = A\binom{j_1, \dots, j_n}{i_1, \dots, i_m} = (a_{i_\alpha}^{j_\beta})_{\substack{\alpha = 1, \dots, m \\ \beta = 1, \dots, n}} \in \mathbb{R}^{m \times n},$$
we denote the submatrix of $A$ with list of rows $I$ and list of columns $J$.

If $\mathring{I}, \bar{I}$ and $\mathring{J}, \bar{J}$ are partitions of $I$ and $J$, respectively, i.e., $\mathring{I} \cup \bar{I} = I$, $\mathring{I} \cap \bar{I} = \emptyset$, $\mathring{J} \cup \bar{J} = J$, $\mathring{J} \cap \bar{J} = \emptyset$, we represent $A\binom{J}{I}$ in a corresponding partition
$$A\binom{J}{I} = \begin{pmatrix} A\dbinom{\mathring{J}}{\mathring{I}} & A\dbinom{\bar{J}}{\mathring{I}} \\[6pt] A\dbinom{\mathring{J}}{\bar{I}} & A\dbinom{\bar{J}}{\bar{I}} \end{pmatrix}. \tag{1}$$

If $m = n$, then by
$$A\left|\begin{matrix} J \\ I \end{matrix}\right| := \det A\binom{J}{I} = A\left|\begin{matrix} j_1, \dots, j_m \\ i_1, \dots, i_m \end{matrix}\right|,$$
we denote the determinant of $A\binom{J}{I}$, which is called a subdeterminant of $A$. Throughout we set $A\left|\begin{matrix} \emptyset \\ \emptyset \end{matrix}\right| := 1$.
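The submatrix and subdeterminant notation above can be mirrored directly in code. The following is a minimal sketch (the helper names `submatrix`, `det` and `subdet` are ours, not from the paper); index lists are 1-based, as in the text, and the empty subdeterminant is 1 by the convention just stated.

```python
def submatrix(A, I, J):
    """Return A(J over I): the rows listed in I and the columns listed in J (1-based)."""
    return [[A[i - 1][j - 1] for j in J] for i in I]

def det(M):
    """Determinant by cofactor expansion along the first row; det of the empty matrix is 1."""
    if not M:
        return 1
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def subdet(A, I, J):
    """A|J over I|: the subdeterminant of A with row list I and column list J."""
    assert len(I) == len(J)
    return det(submatrix(A, I, J))
```

For example, with `A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]`, `subdet(A, (1, 2), (1, 3))` is the determinant of the submatrix `[[1, 3], [4, 6]]`, and `subdet(A, (), ())` returns 1.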

Let $N \in \mathbb{N}$, $I := (1, 2, \dots, N+1)$ and $\mathring{I} := (1, 2, \dots, N)$. By a prime we denote ordered complements with respect to $I$. Given elements $f_1, \dots, f_N$ and $f =: f_{N+1}$ of a linear space $E$ over $\mathbb{R}$, and elements $L_1, \dots, L_N$ and $L =: L_{N+1}$ of its dual $E^*$, consider the problem of finding
$$\langle L, p_1^N(f) \rangle, \tag{2}$$


where $p = p_1^N(f) = c_1 f_1 + \cdots + c_N f_N$ satisfies the interpolation conditions
$$\langle L_i, p \rangle = \langle L_i, f \rangle, \quad i \in \mathring{I}. \tag{3}$$
Here $\langle \cdot, \cdot \rangle$ means duality between $E^*$ and $E$. If we write
$$A\binom{j}{i} := \langle L_i, f_j \rangle \quad \text{for } i, j \in I,\ (i, j) \neq (N+1, N+1),$$
and $c$ is the vector of components $c_i$, this problem is equivalent to solving the bordered system (cf. [16])

$$B \cdot x = y \quad \text{where } B = \begin{pmatrix} A\dbinom{\mathring{I}}{\mathring{I}} & 0 \\[6pt] A\dbinom{\mathring{I}}{N+1} & 1 \end{pmatrix}, \quad x = \begin{pmatrix} c \\ \kappa \end{pmatrix}, \quad y = \begin{pmatrix} A\dbinom{N+1}{\mathring{I}} \\[6pt] A\dbinom{N+1}{N+1} \end{pmatrix}. \tag{4}$$

Assuming $A\binom{\mathring{I}}{\mathring{I}}$ nonsingular, this can be solved by eliminating the components of $c$ in the last equation by adding a suitable linear combination of the first $N$ equations of (4) to the last one, yielding one equation for one unknown, namely $\kappa$:
$$\kappa = A\binom{N+1}{N+1} - A\binom{\mathring{I}}{N+1} \cdot \left[A\binom{\mathring{I}}{\mathring{I}}\right]^{-1} A\binom{N+1}{\mathring{I}}. \tag{5}$$
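As a small numerical illustration of (4)–(5), the sketch below (all names are ours) takes the model problem $f_j = x^{j-1}$ with $N = 2$, point-evaluation functionals $L_i$ at nodes $x_i$, $f = x^2$, and $L$ = evaluation at $t = 3$; exact rational arithmetic avoids any pivoting issues.

```python
from fractions import Fraction

def solve(M, b):
    """Solve M c = b by Gaussian elimination with exact Fractions."""
    n = len(M)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(M, b)]
    for k in range(n):
        p = next(i for i in range(k, n) if M[i][k] != 0)   # pivot search
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            M[i] = [a - f * b_ for a, b_ in zip(M[i], M[k])]
    c = [Fraction(0)] * n
    for i in reversed(range(n)):
        c[i] = (M[i][n] - sum(M[i][j] * c[j] for j in range(i + 1, n))) / M[i][i]
    return c

# Interpolation data: L_i = evaluation at x_i, f_j = x^(j-1), N = 2, f = x^2.
x_nodes, t = [0, 1], 3
A00 = [[x**j for j in range(2)] for x in x_nodes]   # A(I° over I°)
col = [x**2 for x in x_nodes]                       # A(N+1 over I°): <L_i, f>
row = [t**j for j in range(2)]                      # A(I° over N+1): <L, f_j>

c = solve(A00, col)                                 # coefficients of p

# Taking A(N+1 over N+1) := 0 gives kappa = -<L, p(f)>:
kappa7 = 0 - sum(r * ci for r, ci in zip(row, c))
# Taking A(N+1 over N+1) := <L, f> = t^2 gives the remainder <L, f - p(f)>:
kappa9 = t**2 - sum(r * ci for r, ci in zip(row, c))
```

Here the interpolant of $x^2$ at the nodes $0, 1$ is the line $p(x) = x$, so the first choice produces $\kappa = -p(3) = -3$ and the second produces the remainder $9 - 3 = 6$.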

Considering the effect of this block elimination step on the matrix
$$A = \begin{pmatrix} A\dbinom{\mathring{I}}{\mathring{I}} & A\dbinom{N+1}{\mathring{I}} \\[6pt] A\dbinom{\mathring{I}}{N+1} & A\dbinom{N+1}{N+1} \end{pmatrix}, \tag{6}$$
we find it transformed to
$$\tilde{A} = \begin{pmatrix} A\dbinom{\mathring{I}}{\mathring{I}} & A\dbinom{N+1}{\mathring{I}} \\[6pt] 0 & \kappa \end{pmatrix}.$$

If we take
$$A\binom{N+1}{N+1} := 0, \tag{7}$$
then we have
$$\kappa = -\langle L, p_1^N(f) \rangle. \tag{8}$$
On the other hand, if instead of (7) we take
$$A\binom{N+1}{N+1} := \langle L_{N+1}, f_{N+1} \rangle, \tag{9}$$
then, in this frame, we get
$$\kappa = \langle L, r_1^N(f) \rangle, \tag{10}$$


where
$$r_1^N(f) := f - p_1^N(f)$$
is the interpolation remainder.

If the systems $(f_1, \dots, f_N)$ and $(L_1, \dots, L_N)$ are independent of $f$ and $L$, then these problems are called general linear extrapolation problems; if one or both depend on $f = f_{N+1}$ or $L = L_{N+1}$, they are called problems of quasilinear extrapolation.

Observe that, with regard to determinants, the block elimination step above is an elementary operation leaving the value of $\det A$ unchanged. Hence
$$\kappa = \frac{\det A\dbinom{I}{I}}{\det A\dbinom{\mathring{I}}{\mathring{I}}},$$
which is known as the Schur complement of $A\binom{\mathring{I}}{\mathring{I}}$ in $A\binom{I}{I}$. This concept, introduced in [34,35], has found many applications in linear algebra and statistics [13,53]. It may be generalized in different ways; see, for example, [21,22,44], where we used the concept of a general elimination strategy, which is explained in the next section.
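The determinant quotient can be checked directly. The sketch below (all names are ours) computes the Schur complement of the leading $2 \times 2$ block of a $3 \times 3$ matrix in two ways: by block elimination, as in (5), and as the quotient of the two determinants.

```python
from fractions import Fraction

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[2, 1, 3],
     [1, 3, 2],
     [4, 1, 5]]

# A° = leading 2x2 block; invert it exactly.
A0 = [[Fraction(A[i][j]) for j in range(2)] for i in range(2)]
detA0 = A0[0][0] * A0[1][1] - A0[0][1] * A0[1][0]
inv = [[ A0[1][1] / detA0, -A0[0][1] / detA0],
       [-A0[1][0] / detA0,  A0[0][0] / detA0]]

# Block elimination as in (5): kappa = a_33 - (last row) * inv(A°) * (last column).
col = [A[0][2], A[1][2]]
row = [A[2][0], A[2][1]]
kappa = A[2][2] - sum(row[i] * inv[i][j] * col[j]
                      for i in range(2) for j in range(2))

# The determinant quotient gives the same value.
assert kappa == Fraction(det3(A)) / detA0
```

For this matrix $\det A\binom{\mathring{I}}{\mathring{I}} = 5$ and $\det A = -4$, so the Schur complement is $-4/5$.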

3. Elimination strategies

In this section and the next two, let $k, m, n \in \mathbb{N}$ be such that $k + m = n$ and $I = (1, \dots, n)$. Given a square matrix $A = A\binom{I}{I}$ over $\mathbb{R}$, how can we simplify $\det A$ by elementary operations, not altering the value of $\det A$, producing zeros in prescribed columns, e.g., in columns 1 to $k$? Take a permutation of all rows, say $M = (m_1, \dots, m_n)$, then look for a linear combination of $k$ rows from $(m_1, \dots, m_{n-1})$ which, when added to row $m_n$, will produce zeros in columns 1 to $k$. Then add to row $m_{n-1}$ a linear combination of $k$ of its predecessors in $M$ to produce zeros in columns 1 to $k$, etc. Finally, add to row $m_{k+1}$ a suitable linear combination of rows $m_1, \dots, m_k$ to produce zeros in columns 1 to $k$. Necessarily,
$$A\left|\begin{matrix} 1, \dots, k \\ j_1^r, \dots, j_k^r \end{matrix}\right| \neq 0$$
is assumed when a linear combination of rows $j_1^r, \dots, j_k^r$ is added to row $m_r$ ($r = n, n-1, \dots, k+1$) to generate zeros in columns 1 to $k$, and $j_q^r < m_r$ ($q = 1, \dots, k$; $r = n, n-1, \dots, k+1$) in order that in each step an elementary operation will be performed.

Let us give a formal description of this general procedure. Suppose that $(I_s, \mathring{I}_s)$ $(s = 1, \dots, m)$ are pairs of ordered index lists of length $k+1$ and $k$, respectively, over a basic index list $M$, with $\mathring{I}_s \subset I_s$. Then the family
$$S := ((I_s, \mathring{I}_s))_{s=1,\dots,m}$$
will be called a $(k, m)$-elimination strategy over $I := I_1 \cup \cdots \cup I_m$ provided that for $s = 2, \dots, m$

(i) $\operatorname{card}(I_1 \cup \cdots \cup I_s) = k + s$,

(ii) $\mathring{I}_s \subset I_s \cap (I_1 \cup \cdots \cup I_{s-1})$.


By $E(k, m, I)$ we denote the set of all $(k, m)$-elimination strategies over $I$. $\mathring{I} := \mathring{I}_1$ is called the basic index list of the strategy $S$. For each $s$, the zeros in the row $\rho(s) := I_s \setminus \mathring{I}_s$ are produced with the rows of $\mathring{I}_s$. For shortness, we shall abbreviate the phrase "elimination strategy" by e.s. Notice that, when elimination is actually performed, it is done in the reverse ordering: first in row $\rho(m)$, then in row $\rho(m-1)$, etc.
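Conditions (i) and (ii) can be verified mechanically. The following sketch (the function name is ours) checks them for a family of pairs $(I_s, \mathring{I}_s)$ given as tuples of indices; the two families tested below are instances of the Gauss and Neville strategies (11) and (12) with $k = 2$, $m = 3$.

```python
def is_elimination_strategy(pairs, k):
    """Check that pairs = ((I_s, I°_s)) is a (k, m)-elimination strategy:
    each I_s has length k+1, each I°_s length k with I°_s ⊂ I_s,
    plus conditions (i) and (ii) for s = 2, ..., m."""
    if any(len(I) != k + 1 or len(I0) != k or not set(I0) <= set(I)
           for I, I0 in pairs):
        return False
    union = set(pairs[0][0])                       # I_1
    for s, (I, I0) in enumerate(pairs[1:], start=2):
        if not set(I0) <= set(I) & union:          # (ii): I°_s ⊂ I_s ∩ (I_1 ∪ ... ∪ I_{s-1})
            return False
        union |= set(I)
        if len(union) != k + s:                    # (i): card(I_1 ∪ ... ∪ I_s) = k + s
            return False
    return True

k, m = 2, 3
gauss   = [((1, 2, 2 + s), (1, 2)) for s in range(1, m + 1)]        # as in (11)
neville = [(tuple(range(s, s + k + 1)), tuple(range(s, s + k)))     # as in (12)
           for s in range(1, m + 1)]
```

Both families satisfy (i) and (ii); a family whose later pairs share no index with the earlier ones fails condition (ii).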

The simplest example of an e.s. over $I = (1, \dots, m + k)$ is Gauss elimination:
$$G = ((G_s, \mathring{G}_s))_{s=1,\dots,m}, \qquad \mathring{G} = \mathring{G}_s = \{1, \dots, k\}, \quad G_s = \mathring{G} \cup \{k + s\}. \tag{11}$$
For this strategy it is irrelevant in which order the elimination is performed. This does not hold for another useful strategy over $I$:
$$N = ((N_s, \mathring{N}_s))_{s=1,\dots,m} \tag{12}$$
with $\mathring{N}_s = (s, \dots, s + k - 1)$, $N_s = (s, \dots, s + k)$, $s = 1, \dots, m$, which we called [21,43,44] the Neville $(k, m)$-e.s. Using this strategy, elimination must be performed from bottom to top. The reason for the name Neville is its relationship with the Neville interpolation algorithm, based on consecutivity; see [43,23].
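For $k = 1$ the two strategies can be compared directly: Gauss elimination (11) uses row 1 to produce every zero in column 1, while the Neville strategy (12) subtracts from each row a multiple of the row immediately above it, working from bottom to top. A minimal sketch (function names ours), applied to a totally positive Vandermonde matrix; both variants leave $\det A$ unchanged, since every step is an elementary row operation.

```python
from fractions import Fraction

def gauss_step(A):
    """Produce zeros in column 1 using row 1 as the pivot row (strategy (11), k = 1)."""
    A = [[Fraction(x) for x in row] for row in A]
    for i in range(1, len(A)):
        f = A[i][0] / A[0][0]
        A[i] = [a - f * b for a, b in zip(A[i], A[0])]
    return A

def neville_step(A):
    """Produce zeros in column 1 using consecutive rows, bottom to top (strategy (12), k = 1)."""
    A = [[Fraction(x) for x in row] for row in A]
    for i in reversed(range(1, len(A))):      # eliminate row i with the row just above it
        f = A[i][0] / A[i - 1][0]
        A[i] = [a - f * b for a, b in zip(A[i], A[i - 1])]
    return A

A = [[1, 1, 1],
     [1, 2, 4],
     [1, 3, 9]]   # Vandermonde matrix at the nodes 1, 2, 3; totally positive
```

After either step the first column is $(1, 0, 0)^{\mathrm T}$, and expanding along it recovers $\det A = 2$ in both cases. For totally positive matrices the Neville multipliers $f$ turn out to be nonnegative, which is one reason this strategy works so well for that class.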

4. Generalized Schur complements

Suppose that $S = ((I_s, \mathring{I}_s))_{s=1,\dots,m} \in E(k, m, I)$ and that $K \subset I$ is a fixed index list of length $k$. We assume that the submatrices $A\binom{K}{\mathring{I}_s}$ of a given matrix $A = A\binom{I}{I} \in \mathbb{R}^{n \times n}$ are nonsingular for $s = 1, \dots, m$. Then the elimination strategy $S$ transforms $A$ into the matrix $\tilde{A}$ which, partitioned with respect to $\mathring{I} \cup \mathring{I}' = I$, $K \cup K' = I$, can be written as
$$\tilde{A} = \begin{pmatrix} \tilde{A}\dbinom{K}{\mathring{I}} & \tilde{A}\dbinom{K'}{\mathring{I}} \\[6pt] \tilde{A}\dbinom{K}{\mathring{I}'} & \tilde{A}\dbinom{K'}{\mathring{I}'} \end{pmatrix}$$