
where w = x − y, r = |w|, = w₂/r and H₀⁽¹⁾(z) is a Hankel function of the first kind, as specified more fully in [10,11]. By taking x₂ = 0 in (3.9), we see from (3.13) that u(x₁, 0) satisfies an integral equation with Toeplitz structure, and the fast Fourier transform yields its iterative solution efficiently.
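The reason the FFT helps is that a Toeplitz matrix embeds in a circulant matrix, which the FFT diagonalises, so each iteration of (3.9) reduces to an O(n log n) matrix-vector product. A minimal sketch of that product (our own illustration, not the authors' code; the helper name `toeplitz_matvec` and the test data are hypothetical):

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """y = T x in O(n log n), where T is the Toeplitz matrix with first
    column `col` and first row `row` (row[0] must equal col[0])."""
    n = len(x)
    # First column of a 2n x 2n circulant matrix whose top-left block is T.
    c = np.concatenate([col, [0.0], row[:0:-1]])
    xe = np.concatenate([x, np.zeros(n, dtype=complex)])
    # A circulant matrix is diagonalised by the DFT, so C xe is a pointwise
    # product in Fourier space; the first n entries recover T x.
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xe))
    return y[:n]
```

In the integral-equation setting, one such product (plus the boundary data) is the cost of each iteration on the 400-point discretisation.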

Without loss of generality, we use the scale determined by k = 1 in (3.9)–(3.13). For this example, the impedance is taken to be β = 1.4e^{iπ/4} on the interval Γ = {x: −40 < x₁ < 40, x₂ = 0}. At two sample points (x₁ ≈ −20 and 20) taken from a 400-point discretisation of Γ, we found the following results with the VEA using 16 decimal place (MATLAB) arithmetic:

ε₀^(12) = [ …, −0.36843 + 0.44072i, …, −0.14507 + 0.55796i, … ],
ε₂^(10) = [ …, −0.36333 + 0.45614i, …, −0.14565 + 0.56342i, … ],
ε₄^(8) = [ …, −0.36341 + 0.45582i, …, −0.14568 + 0.56312i, … ],
ε₆^(6) = [ …, −0.36341 + 0.45583i, …, −0.14569 + 0.56311i, … ],
ε₈^(4) = [ …, −0.36341 + 0.45583i, …, −0.14569 + 0.56311i, … ],

where the converged figures are shown in bold face.

Each of these results, showing just two of the components of a particular ε_k^{(j)} in columns k = 0, 2, …, 8 of the vector-epsilon table, needs 12 iterations of (3.9) for its construction. In this application, these results show that the VEA converges reasonably steadily, in contrast to Lanczos-type methods, eventually yielding five decimal places of precision.
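The column entries quoted above are generated by Wynn's vector epsilon recursion ε_{k+1}^{(j)} = ε_{k−1}^{(j+1)} + [ε_k^{(j+1)} − ε_k^{(j)}]^{−1}, with the vector inverse taken in the Samelson sense of (3.5). A minimal sketch (our own illustration, not the scattering code):

```python
import numpy as np

def sinv(v):
    """Samelson inverse v^{-1} = v^* / ||v||^2 of a complex vector, cf. (3.5)."""
    return np.conj(v) / np.vdot(v, v).real

def vea(s, k):
    """Column eps_{2k}^{(j)}, j = 0, 1, ..., of the vector epsilon table
    built from the sequence s of complex vectors."""
    prev, cur = [np.zeros_like(s[0])] * len(s), list(s)   # eps_{-1}, eps_0
    for _ in range(2 * k):
        # eps_{k+1}^{(j)} = eps_{k-1}^{(j+1)} + sinv(eps_k^{(j+1)} - eps_k^{(j)})
        nxt = [prev[j + 1] + sinv(cur[j + 1] - cur[j])
               for j in range(len(cur) - 1)]
        prev, cur = cur, nxt
    return cur
```

Applied to the discretised scattering iterates, successive even columns of one such table supply precisely the antidiagonal entries ε₀^(12), ε₂^(10), …, ε₈^(4) quoted above.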

Example 3.2 was chosen partly to demonstrate the use of the vector epsilon algorithm for a weakly

convergent sequence of complex-valued data, and partly because the problem is one which lends

itself to iterative methods. In fact, the example also shows that the VEA has used up 11 of the 15

decimal places of accuracy of the data to extrapolate the sequence to its limit. If greater precision

is required, other methods such as stabilised Lanczos or multigrid methods should be considered.

The success of the VEA in examples such as those given above is usually attributed to the fact that the entries {ε_{2k}^{(j)}: j = 0, 1, 2, …} all equal the exact limit of a convergent sequence S if S is generated by precisely k nontrivial geometric components. This result is an immediate and direct generalisation of that for the scalar case given in Section 2. The given vector sequence is represented by

s_j = C₀ + Σ_{λ=1}^{k} C_λ Σ_{i=0}^{j} (ζ_λ)^i = s_∞ − Σ_{λ=1}^{k} w_λ (ζ_λ)^j,   j = 0, 1, 2, …,   (3.14)

where each C_λ, w_λ ∈ C^d, ζ_λ ∈ C with |ζ_λ| < 1, and all the ζ_λ are distinct. The two representations used in (3.14) are consistent if

Σ_{λ=0}^{k} C_λ = s_∞ − Σ_{λ=1}^{k} w_λ   and   C_λ = w_λ((ζ_λ)^{−1} − 1).
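Both representations in (3.14), and the exactness claim for ε_{2k}^{(j)}, can be checked numerically. The sketch below (our own illustration; the values of k, d, ζ_λ and w_λ are arbitrary) builds a sequence from k = 2 complex geometric components, confirms that the two representations agree under the consistency relations, and verifies that every available entry of the column ε_{2k} reproduces s_∞ to working accuracy:

```python
import numpy as np

def sinv(v):
    return np.conj(v) / np.vdot(v, v).real   # Samelson inverse, cf. (3.5)

def vea_col(s, k):
    prev, cur = [np.zeros_like(s[0])] * len(s), list(s)
    for _ in range(2 * k):
        nxt = [prev[j + 1] + sinv(cur[j + 1] - cur[j])
               for j in range(len(cur) - 1)]
        prev, cur = cur, nxt
    return cur

rng = np.random.default_rng(2)
d, k = 4, 2
zeta = np.array([0.6, 0.2 + 0.4j])                      # distinct, |zeta| < 1
w = rng.standard_normal((k, d)) + 1j * rng.standard_normal((k, d))
s_inf = rng.standard_normal(d) + 1j * rng.standard_normal(d)

# Right-hand representation of (3.14).
s = [s_inf - sum(w[l] * zeta[l] ** j for l in range(k)) for j in range(2 * k + 3)]

# Left-hand representation, using the consistency relations
# sum_{l=0}^{k} C_l = s_inf - sum_{l=1}^{k} w_l and C_l = w_l (zeta_l^{-1} - 1).
C = [w[l] * (1.0 / zeta[l] - 1.0) for l in range(k)]
C0 = s_inf - sum(w) - sum(C)
s_alt = [C0 + sum(C[l] * sum(zeta[l] ** i for i in range(j + 1)) for l in range(k))
         for j in range(2 * k + 3)]
assert all(np.allclose(a, b) for a, b in zip(s, s_alt))

# The column eps_{2k}^{(j)} recovers the limit for every available j.
for e in vea_col(s, k):
    assert np.allclose(e, s_inf, atol=1e-6)
```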

To establish this convergence result, and its generalisations, we must set up a formalism which

allows vectors to be treated algebraically.

64 P.R. Graves-Morris et al. / Journal of Computational and Applied Mathematics 122 (2000) 51–80

From the given sequence S = (s_i, i = 0, 1, 2, …; s_i ∈ C^d), we form the series coefficients

c₀ := s₀,   c_i := s_i − s_{i−1},   i = 1, 2, 3, …   (3.15)

and the associated generating function

f(z) = c₀ + c₁z + c₂z² + ⋯ ∈ C^d[[z]].   (3.16)

Our first aim is to find an analogue of (2.15) which allows construction, at least in principle, of the denominator polynomials of a vector-valued Padé approximant for f(z). This generalisation is possible if the vectors c_j in (3.16) are put in one–one correspondence with operators 𝐜_j in a Clifford algebra A. The details of how this is done using an explicit matrix representation were basically set out by McLeod [37]. We use his approach [26,27,38] and square matrices E_i, i = 1, 2, …, 2d + 1, of dimension 2^{2d+1}, which obey the anticommutation relations

E_i E_j + E_j E_i = 2δ_{ij} I,   (3.17)

where I is an identity matrix. The special matrix J = E2d+1 is used to form the operator products

F_i = J E_{d+i},   i = 1, 2, …, d.   (3.18)

Then, to each vector w = x + iy ∈ C^d, whose real and imaginary parts x, y ∈ R^d, we associate the operator

operator

𝐰 = Σ_{i=1}^{d} x_i E_i + Σ_{i=1}^{d} y_i F_i.   (3.19)
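The relations (3.17) and (3.18) can be realised concretely with Kronecker products of Pauli matrices. The sketch below is our own illustration: the Jordan–Wigner construction used here produces matrices of dimension 2^d for the 2d + 1 generators, rather than the 2^{2d+1}-dimensional representation of McLeod cited above, but only the anticommutation relations matter for what follows. It builds the E_i, J and F_i for d = 2, checks (3.17), and realises the map (3.19) together with the identities (3.23)–(3.24) established below:

```python
import numpy as np

d = 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def kron_chain(ops):
    m = np.array([[1.0 + 0j]])
    for op in ops:
        m = np.kron(m, op)
    return m

# Jordan-Wigner: 2d + 1 mutually anticommuting matrices on d qubits.
E = []
for q in range(d):
    for s in (sx, sy):
        E.append(kron_chain([sz] * q + [s] + [I2] * (d - q - 1)))
E.append(kron_chain([sz] * d))          # E_{2d+1}
Id = np.eye(E[0].shape[0])

# Anticommutation relations (3.17).
for i in range(2 * d + 1):
    for j in range(2 * d + 1):
        assert np.allclose(E[i] @ E[j] + E[j] @ E[i], 2 * (i == j) * Id)

J = E[2 * d]                            # the special matrix J = E_{2d+1}
F = [J @ E[d + i] for i in range(d)]    # (3.18)

def image(v, sign=+1.0):
    """Operator image (3.19) of v in C^d; sign=-1 gives the reversion (3.22)."""
    return (sum(v.real[i] * E[i] for i in range(d))
            + sign * sum(v.imag[i] * F[i] for i in range(d)))

w = np.array([1.0 + 2.0j, -0.5 + 0.25j])
# (3.23): w w~ = ||w||^2 I, and the image of w*/||w||^2 is the inverse (3.24).
assert np.allclose(image(w) @ image(w, -1.0), np.vdot(w, w).real * Id)
assert np.allclose(image(np.conj(w) / np.vdot(w, w).real) @ image(w), Id)
```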

The real linear space V_C is defined as the set of all elements of the form (3.19). If 𝐰₁, 𝐰₂ ∈ V_C correspond to w₁, w₂ ∈ C^d and α, β are real, then

𝐰₃ = α𝐰₁ + β𝐰₂ ∈ V_C   (3.20)

corresponds uniquely to w₃ = αw₁ + βw₂ ∈ C^d. Were α, β complex, the correspondence would not be one–one. We refer to the space V_C as the isomorphic image of C^d, where the isomorphism preserves linearity only in respect of real multipliers, as shown in (3.20). Thus the image of f(z) is

𝐟(z) = 𝐜₀ + 𝐜₁z + 𝐜₂z² + ⋯ ∈ V_C[[z]].   (3.21)

The elements E_i, i = 1, 2, …, 2d + 1, are often called the basis vectors of A, and their linear combinations are called the vectors of A. Notice that the F_i are not vectors of A, and so the vectors of A do not form the space V_C. Products of the nonnull vectors of A are said to form the Lipschitz group [40]. The reversion operator, denoted by a tilde, is defined as the anti-automorphism which reverses the order of the vectors constituting any element of the Lipschitz group, and the operation is extended to the whole algebra A by linearity. For example, if α, β ∈ R and

𝐃 = αE₁ + βE₄E₅E₆,

then

𝐃̃ = αE₁ + βE₆E₅E₄.

Hence (3.18) and (3.19) imply that

𝐰̃ = Σ_{i=1}^{d} x_i E_i − Σ_{i=1}^{d} y_i F_i.   (3.22)


We notice that 𝐰̃ corresponds to w*, the complex conjugate of w, and that

𝐰𝐰̃ = Σ_{i=1}^{d} (x_i² + y_i²) I = ‖w‖₂² I   (3.23)

is a real scalar in A. The linear space of real scalars in A is defined as S := {αI: α ∈ R}. Using

(3.23) we can form reciprocals, and

𝐰^{−1} = 𝐰̃/|w|²,   (3.24)

where

|w| := ‖w‖,   (3.25)

so that 𝐰^{−1} is the image of w^{−1} as defined by (3.5). Thus (3.19) specifies an isomorphism between

(i) the space C^d, having representative element w = x + iy and an inverse w^{−1} = w*/‖w‖²;

(ii) the real linear space V_C, with representative element 𝐰 = Σ_{i=1}^{d} x_i E_i + Σ_{i=1}^{d} y_i F_i and its inverse given by 𝐰^{−1} = 𝐰̃/|w|².

The isomorphism preserves inverses and linearity with respect to real multipliers, as shown in (3.20).

Using this formalism, we proceed to form the polynomial 𝐪_{2j+1}(z) analogously to (2.15). The equations for its coefficients are

\begin{pmatrix} \mathbf{c}_0 & \cdots & \mathbf{c}_j \\ \vdots & & \vdots \\ \mathbf{c}_j & \cdots & \mathbf{c}_{2j} \end{pmatrix}
\begin{pmatrix} q_{j+1}^{(2j+1)} \\ \vdots \\ q_1^{(2j+1)} \end{pmatrix} =
\begin{pmatrix} -\mathbf{c}_{j+1} \\ \vdots \\ -\mathbf{c}_{2j+1} \end{pmatrix}   (3.26)

which represent the accuracy-through-order conditions; we assume that q₀^{(2j+1)} = 𝐪_{2j+1}(0) = I. In principle, we can eliminate the variables q_{j+1}^{(2j+1)}, q_j^{(2j+1)}, …, q₂^{(2j+1)} sequentially, find q₁^{(2j+1)}, and then the rest of the variables of (3.26) by back-substitution. However, the resulting q_i^{(2j+1)} turn out to be higher-grade quantities in the Clifford algebra, meaning that they involve higher-order outer products of the fundamental vectors. Numerical representation of these quantities uses up computer storage and is undesirable. For practical purposes, we prefer to work with low-grade quantities such as scalars and vectors [42].
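In the commutative case d = 1 the operators 𝐜_i reduce to complex numbers and the elimination is ordinary Gaussian elimination, so (3.26) can be solved directly. A sketch (our own illustration, taking j = 1 with coefficients built from two geometric terms, so that the expected denominator is (1 − ζ₁z)(1 − ζ₂z)):

```python
import numpy as np

# Coefficients c_i = a1 zeta1^i + a2 zeta2^i of a rational series of type [1/2].
zeta1, zeta2 = 0.6 + 0.1j, -0.3 + 0.2j
a1, a2 = 1.0 - 0.5j, 0.7 + 0.2j
c = [a1 * zeta1 ** i + a2 * zeta2 ** i for i in range(4)]

# System (3.26) with j = 1: rows state that the orders z^2, z^3 of f(z) q(z) vanish.
H = np.array([[c[0], c[1]],
              [c[1], c[2]]])
rhs = np.array([-c[2], -c[3]])
q2, q1 = np.linalg.solve(H, rhs)      # unknown vector is (q_2, q_1), as in (3.26)

# q(z) = 1 + q1 z + q2 z^2 should equal (1 - zeta1 z)(1 - zeta2 z).
assert np.isclose(q1, -(zeta1 + zeta2))
assert np.isclose(q2, zeta1 * zeta2)
```

The Hankel matrix here has determinant a₁a₂(ζ₁ − ζ₂)², so the system is nonsingular whenever the two components are genuinely present and distinct; in the noncommutative case d > 1 the same elimination produces the higher-grade quantities discussed above.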

The previous remarks reflect the fact that, in general, the product 𝐰₁𝐰₂𝐰₃ ∉ V_C when 𝐰₁, 𝐰₂, 𝐰₃ ∈ V_C. However, there is an important exception to this rule, which we formulate as follows [26]; see Eqs. (6.3) and (6.4) in [40].

Lemma 3.3. Let 𝐰, 𝐭 ∈ V_C be the images of w = x + iy, t = u + iv ∈ C^d. Then

(i) 𝐭𝐰̃ + 𝐰𝐭̃ = 2 Re(w^H t) I ∈ S;   (3.27)

(ii) 𝐰𝐭̃𝐰 = 2𝐰 Re(w^H t) − 𝐭‖w‖² ∈ V_C.   (3.28)

66 P.R. Graves-Morris et al. / Journal of Computational and Applied Mathematics 122 (2000) 51â€“80

Proof. Using (3.17), (3.18) and (3.22), we have

𝐭𝐰̃ + 𝐰𝐭̃ = Σ_{i=1}^{d} Σ_{j=1}^{d} [(u_i E_i + v_i F_i)(x_j E_j − y_j F_j) + (x_j E_j + y_j F_j)(u_i E_i − v_i F_i)]
      = 2(u^T x + v^T y) I = 2 Re(w^H t) I,

because, for i, j = 1, 2, …, d,

F_i E_j − E_j F_i = 0,   F_i F_j + F_j F_i = −2δ_{ij} I.

For part (ii), we simply note that

𝐰𝐭̃𝐰 = 𝐰(𝐭̃𝐰 + 𝐰̃𝐭) − 𝐰𝐰̃𝐭,

in which 𝐭̃𝐰 + 𝐰̃𝐭 = 2 Re(w^H t) I by the same calculation as in (i), and 𝐰𝐰̃ = ‖w‖² I by (3.23).

We have noted that, as j increases, the coefficients of 𝐪_{2j+1}(z) are increasingly difficult to store. Economical approximations to 𝐪_{2j+1}(z) are given in [42]. Here we proceed with

\begin{pmatrix} \mathbf{c}_0 & \cdots & \mathbf{c}_{j+1} \\ \vdots & & \vdots \\ \mathbf{c}_{j+1} & \cdots & \mathbf{c}_{2j+2} \end{pmatrix}
\begin{pmatrix} q_{j+1}^{(2j+1)} \\ \vdots \\ q_1^{(2j+1)} \\ I \end{pmatrix} =
\begin{pmatrix} 0 \\ \vdots \\ 0 \\ \mathbf{e}_{2j+1} \end{pmatrix}   (3.29)

which are the accuracy-through-order conditions for a right-handed operator Padé approximant (OPA) 𝐩_{2j+1}(z)[𝐪_{2j+1}(z)]^{−1} for 𝐟(z), arising from

𝐟(z) 𝐪_{2j+1}(z) = 𝐩_{2j+1}(z) + 𝐞_{2j+1} z^{2j+2} + O(z^{2j+3}).   (3.30)

The left-hand side of (3.29) contains a general square Hankel matrix with elements that are operators from V_C. A remarkable fact, by no means obvious from (3.29) but proved in the next theorem, is that

𝐞_{2j+1} ∈ V_C.   (3.31)

This result enables us to use OPAs of 𝐟(z) without constructing the denominator polynomials. A quantity such as 𝐞_{2j+1} in (3.29) is called the left-designant of the operator matrix, and it is denoted by

\mathbf{e}_{2j+1} = \begin{vmatrix} \mathbf{c}_0 & \cdots & \mathbf{c}_{j+1} \\ \vdots & & \vdots \\ \mathbf{c}_{j+1} & \cdots & \mathbf{c}_{2j+2} \end{vmatrix}_l.   (3.31b)

The subscript l (for left) distinguishes designants from determinants, which are very different constructs. Designants were introduced by Heyting [32], and in this context by Salam [43]. For present purposes, we regard them as being defined by the elimination process following (3.26).

Example 3.4. The denominator of the OPA of type [0/1] is constructed using

\begin{pmatrix} \mathbf{c}_0 & \mathbf{c}_1 \\ \mathbf{c}_1 & \mathbf{c}_2 \end{pmatrix} \begin{pmatrix} q_1^{(1)} \\ I \end{pmatrix} = \begin{pmatrix} 0 \\ \mathbf{e}_1 \end{pmatrix}.


We eliminate q₁^{(1)} as described above following (3.26) and find that

\mathbf{e}_1 = \begin{vmatrix} \mathbf{c}_2 & \mathbf{c}_1 \\ \mathbf{c}_1 & \mathbf{c}_0 \end{vmatrix}_l = \mathbf{c}_2 - \mathbf{c}_1 \mathbf{c}_0^{-1} \mathbf{c}_1 \in \mathrm{span}\{c_0, c_1, c_2\}.   (3.32)
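Because 𝐜₁𝐜₀⁻¹𝐜₁ = 𝐜₁𝐜̃₀𝐜₁/‖c₀‖² is a sandwich product of the form covered by Lemma 3.3(ii), 𝐞₁ can be evaluated entirely within C^d: 𝐜₁𝐜₀⁻¹𝐜₁ corresponds to [2c₁ Re(c₁^H c₀) − c₀‖c₁‖²]/‖c₀‖², so no Clifford matrices are needed at run time. The sketch below (our own illustration, reusing the 2^d-dimensional Jordan–Wigner representation only as a cross-check) computes e₁ both ways and confirms that they agree:

```python
import numpy as np

d = 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def kron_chain(ops):
    m = np.array([[1.0 + 0j]])
    for op in ops:
        m = np.kron(m, op)
    return m

E = []
for q in range(d):
    for s in (sx, sy):
        E.append(kron_chain([sz] * q + [s] + [I2] * (d - q - 1)))
E.append(kron_chain([sz] * d))
J = E[2 * d]
F = [J @ E[d + i] for i in range(d)]

def image(v, sign=+1.0):
    # sign=+1: image (3.19); sign=-1: reversion image (3.22).
    return (sum(v.real[i] * E[i] for i in range(d))
            + sign * sum(v.imag[i] * F[i] for i in range(d)))

rng = np.random.default_rng(4)
c0, c1, c2 = (rng.standard_normal(d) + 1j * rng.standard_normal(d) for _ in range(3))

# Operator evaluation of e_1 = c2 - c1 c0^{-1} c1, with c0^{-1} = c0~/|c0|^2 as in (3.24).
C0inv = image(c0, -1.0) / np.vdot(c0, c0).real
e1_op = image(c2) - image(c1) @ C0inv @ image(c1)

# Vector evaluation via Lemma 3.3(ii): c1 c0~ c1 corresponds to
# 2 c1 Re(c1^H c0) - c0 ||c1||^2.
e1_vec = c2 - (2 * c1 * np.vdot(c1, c0).real
               - c0 * np.vdot(c1, c1).real) / np.vdot(c0, c0).real
assert np.allclose(e1_op, image(e1_vec))
```

This is exactly the point of (3.31): the designant stays in V_C, so it can be carried as an ordinary complex vector.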

Proceeding with the elimination in (3.29), we obtain

\begin{pmatrix} \mathbf{c}_2 - \mathbf{c}_1\mathbf{c}_0^{-1}\mathbf{c}_1 & \cdots & \mathbf{c}_{j+2} - \mathbf{c}_1\mathbf{c}_0^{-1}\mathbf{c}_{j+1} \\ \vdots & & \vdots \end{pmatrix}
\begin{pmatrix} q_j^{(2j+1)} \\ \vdots \\ q_1^{(2j+1)} \\ I \end{pmatrix} =
\begin{pmatrix} 0 \\ \vdots \end{pmatrix}   (3.33)