
[Eq. (3.13), continued: the integral term, over $t \in (0,\infty)$, of the Green's function for the impedance boundary-value problem,]

where $w = x - y$, $r = |w|$, $\cos\theta = w_2/r$, and $H_0^{(1)}(z)$ is a Hankel function of the first kind, as specified more fully in [10,11]. By taking $x_2 = 0$ in (3.9), we see from (3.13) that $u(x_1, 0)$ satisfies an integral equation with Toeplitz structure, and the fast Fourier transform yields its iterative solution efficiently (see the sketch at the end of this example). Without loss of generality, we use the scale determined by $k = 1$ in (3.9)–(3.13). For this example, the impedance is taken to be $\beta = 1.4\,\mathrm{e}^{\mathrm{i}\pi/4}$ on the interval $\Gamma = \{x:\ -40 < x_1 < 40,\ x_2 = 0\}$. At two sample points ($x_1 \approx -20$ and $20$) taken from a 400-point discretisation of $\Gamma$, we found the following results with the VEA using 16 decimal place (MATLAB) arithmetic:
\[
\begin{aligned}
\epsilon_0^{(12)} &= [\,\ldots,\ -0.36843 + 0.44072\mathrm{i},\ \ldots,\ -0.14507 + 0.55796\mathrm{i},\ \ldots\,],\\
\epsilon_2^{(10)} &= [\,\ldots,\ -0.36333 + 0.45614\mathrm{i},\ \ldots,\ -0.14565 + 0.56342\mathrm{i},\ \ldots\,],\\
\epsilon_4^{(8)} &= [\,\ldots,\ -0.36341 + 0.45582\mathrm{i},\ \ldots,\ -0.14568 + 0.56312\mathrm{i},\ \ldots\,],\\
\epsilon_6^{(6)} &= [\,\ldots,\ -0.36341 + 0.45583\mathrm{i},\ \ldots,\ -0.14569 + 0.56311\mathrm{i},\ \ldots\,],\\
\epsilon_8^{(4)} &= [\,\ldots,\ -0.36341 + 0.45583\mathrm{i},\ \ldots,\ -0.14569 + 0.56311\mathrm{i},\ \ldots\,],
\end{aligned}
\]

where the converged figures are shown in bold face.
Each of these results, showing just two of the components of a particular $\epsilon_k^{(j)}$ in columns $k = 0, 2, \ldots, 8$ of the vector-epsilon table, needs 12 iterations of (3.9) for its construction. In this application, these results show that the VEA converges reasonably steadily, in contrast to Lanczos type methods, eventually yielding five decimal places of precision.
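For readers who want to reproduce this kind of table, a minimal sketch of the vector epsilon algorithm is given below. It is not the authors' code: it simply implements the standard VEA recursion $\epsilon^{(j)}_{k+1} = \epsilon^{(j+1)}_{k-1} + [\epsilon^{(j+1)}_{k} - \epsilon^{(j)}_{k}]^{-1}$ with the vector inverse $v^{-1} = v^{*}/\|v\|^{2}$ used later in this section, and the function names are ours; the iterates $s_0, s_1, \ldots$ would be supplied by the fixed-point iteration of (3.9).

```python
import numpy as np

def samelson_inverse(v):
    # Vector inverse used by the VEA: v^{-1} = conj(v) / ||v||^2.
    return np.conj(v) / np.vdot(v, v).real

def vector_epsilon_table(S, max_col):
    """Build eps[(j, k)] = epsilon_k^{(j)} from the iterates S = [s_0, s_1, ...].

    Even columns k = 0, 2, 4, ... hold the extrapolated vectors; odd columns
    are intermediate quantities of the recursion."""
    m = len(S)
    eps = {}
    for j in range(m):
        eps[(j, -1)] = np.zeros_like(np.asarray(S[0], dtype=complex))
        eps[(j, 0)] = np.asarray(S[j], dtype=complex)
    for k in range(max_col):
        for j in range(m - k - 1):
            diff = eps[(j + 1, k)] - eps[(j, k)]
            eps[(j, k + 1)] = eps[(j + 1, k - 1)] + samelson_inverse(diff)
    return eps

# With 13 iterates (12 iterations of the underlying fixed-point map) the table
# contains, e.g., eps[(12, 0)], eps[(10, 2)], ..., eps[(4, 8)], as displayed above.
```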

Example 3.2 was chosen partly to demonstrate the use of the vector epsilon algorithm for a weakly
convergent sequence of complex-valued data, and partly because the problem is one which lends
itself to iterative methods. In fact, the example also shows that the VEA has used up 11 of the 15
decimal places of accuracy of the data to extrapolate the sequence to its limit. If greater precision
is required, other methods such as stabilised Lanczos or multigrid methods should be considered.
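The remark above about Toeplitz structure and the FFT can be illustrated with a small sketch. This is not the authors' code: the kernel of (3.13) is not reproduced, and the right-hand side, the relaxation parameter `tau` and the function names are placeholders; the sketch only shows the two generic ingredients, an $O(n\log n)$ Toeplitz matrix–vector product by circulant embedding and a fixed-point iteration whose vector iterates could be fed to the VEA.

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """y = T x for the Toeplitz matrix T with first column `col` and first row `row`
    (col[0] == row[0]), computed in O(n log n) via a 2n-point circulant embedding."""
    n = len(x)
    c = np.concatenate([col, [0.0], row[:0:-1]])      # first column of the circulant
    fx = np.fft.fft(np.concatenate([x, np.zeros(n, dtype=complex)]))
    return np.fft.ifft(np.fft.fft(c) * fx)[:n]

def fixed_point_iterates(col, row, b, tau=0.5, iters=12):
    """Illustrative relaxation u_{m+1} = u_m + tau (b - T u_m); returns all iterates."""
    u = np.zeros_like(b, dtype=complex)
    iterates = [u.copy()]
    for _ in range(iters):
        u = u + tau * (b - toeplitz_matvec(col, row, u))
        iterates.append(u.copy())
    return iterates
```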
The success of the VEA in examples such as those given above is usually attributed to the fact that the entries $\{\epsilon_{2k}^{(j)},\ j = 0, 1, 2, \ldots\}$ are the exact limit of a convergent sequence $S$ if $S$ is generated by precisely $k$ nontrivial geometric components. This result is an immediate and direct generalisation of that for the scalar case given in Section 2. The given vector sequence is represented by
\[
s_j = C_0 + \sum_{\tau=1}^{k} C_\tau (\lambda_\tau)^j
    = s_\infty - \sum_{\tau=1}^{k}\sum_{i=0}^{\infty} w_\tau (\lambda_\tau)^{i+j},
\qquad j = 0, 1, 2, \ldots, \tag{3.14}
\]

where each $C_\tau, w_\tau \in \mathbb{C}^d$, $\lambda_\tau \in \mathbb{C}$, $|\lambda_\tau| < 1$, and all the $\lambda_\tau$ are distinct. The two representations used in (3.14) are consistent if
\[
\sum_{\tau=0}^{k} C_\tau = s_\infty - \sum_{\tau=1}^{k} w_\tau (1 - \lambda_\tau)^{-1}
\qquad\text{and}\qquad
C_\tau = w_\tau (\lambda_\tau - 1)^{-1}.
\]
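For orientation, here is a brief verification of these conditions (a sketch based only on the two forms displayed in (3.14), not text from the original): since $|\lambda_\tau| < 1$,
\[
\sum_{i=0}^{\infty} (\lambda_\tau)^{i+j} = \frac{(\lambda_\tau)^j}{1 - \lambda_\tau},
\qquad\text{so}\qquad
s_j = s_\infty - \sum_{\tau=1}^{k} \frac{w_\tau}{1 - \lambda_\tau}\,(\lambda_\tau)^j .
\]
Matching this, term by term in the distinct ratios $\lambda_\tau$, with $s_j = C_0 + \sum_{\tau=1}^{k} C_\tau(\lambda_\tau)^j$ gives $C_0 = s_\infty$ and $C_\tau = -w_\tau(1-\lambda_\tau)^{-1} = w_\tau(\lambda_\tau - 1)^{-1}$, and summing over $\tau = 0, 1, \ldots, k$ recovers the first condition above.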

To establish this convergence result, and its generalisations, we must set up a formalism which
allows vectors to be treated algebraically.


From the given sequence $S = (s_i,\ i = 0, 1, 2, \ldots;\ s_i \in \mathbb{C}^d)$, we form the series coefficients
\[
c_0 := s_0, \qquad c_i := s_i - s_{i-1}, \quad i = 1, 2, 3, \ldots \tag{3.15}
\]
and the associated generating function
\[
f(z) = c_0 + c_1 z + c_2 z^2 + \cdots \in \mathbb{C}^d[[z]]. \tag{3.16}
\]
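As an aside (a short calculation under the model (3.14), not taken from the original text), the generating function of such a sequence is rational: with $C_\tau = w_\tau(\lambda_\tau - 1)^{-1}$, the coefficients (3.15) are $c_i = \sum_{\tau=1}^{k} w_\tau(\lambda_\tau)^{i-1}$ for $i \geq 1$, so
\[
f(z) = s_0 + \sum_{\tau=1}^{k} \frac{w_\tau z}{1 - \lambda_\tau z},
\]
whose components share the degree-$k$ denominator $\prod_{\tau=1}^{k}(1 - \lambda_\tau z)$, and $f(1) = s_\infty$. This is the situation that the vector-valued Padé approximants constructed below are designed to reproduce exactly.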
Our first aim is to find an analogue of (2.15) which allows construction, at least in principle, of the denominator polynomials of a vector-valued Padé approximant for $f(z)$. This generalisation is possible if the vectors $c_j$ in (3.16) are put in one–one correspondence with operators $\mathbf{c}_j$ in a Clifford algebra $A$. The details of how this is done using an explicit matrix representation were basically set out by McLeod [37]. We use his approach [26,27,38] and square matrices $E_i$, $i = 1, 2, \ldots, 2d+1$,
of dimension $2^{2d+1}$ which obey the anticommutation relations
\[
E_i E_j + E_j E_i = 2\delta_{ij} I, \tag{3.17}
\]
where $I$ is an identity matrix. The special matrix $J = E_{2d+1}$ is used to form the operator products
\[
F_i = J E_{d+i}, \qquad i = 1, 2, \ldots, d. \tag{3.18}
\]
Then, to each vector $w = x + \mathrm{i}y \in \mathbb{C}^d$ whose real and imaginary parts $x, y \in \mathbb{R}^d$, we associate the operator
\[
\mathbf{w} = \sum_{i=1}^{d} x_i E_i + \sum_{i=1}^{d} y_i F_i. \tag{3.19}
\]

The real linear space $V_{\mathbb{C}}$ is defined as the set of all elements of the form (3.19). If $\mathbf{w}_1, \mathbf{w}_2 \in V_{\mathbb{C}}$ correspond to $w_1, w_2 \in \mathbb{C}^d$ and $\alpha, \beta$ are real, then
\[
\mathbf{w}_3 = \alpha \mathbf{w}_1 + \beta \mathbf{w}_2 \in V_{\mathbb{C}} \tag{3.20}
\]
corresponds uniquely to $w_3 = \alpha w_1 + \beta w_2 \in \mathbb{C}^d$. Were $\alpha, \beta$ complex, the correspondence would not be one–one. We refer to the space $V_{\mathbb{C}}$ as the isomorphic image of $\mathbb{C}^d$, where the isomorphism preserves linearity only in respect of real multipliers as shown in (3.20). Thus the image of $f(z)$ is
\[
\mathbf{f}(z) = \mathbf{c}_0 + \mathbf{c}_1 z + \mathbf{c}_2 z^2 + \cdots \in V_{\mathbb{C}}[[z]]. \tag{3.21}
\]

The elements $E_i$, $i = 1, 2, \ldots, 2d+1$, are often called the basis vectors of $A$, and their linear combinations are called the vectors of $A$. Notice that the $F_i$ are not vectors of $A$ and so the vectors of $A$ do not form the space $V_{\mathbb{C}}$. Products of the nonnull vectors of $A$ are said to form the Lipschitz group [40]. The reversion operator, denoted by a tilde, is defined as the anti-automorphism which reverses the order of the vectors constituting any element of the Lipschitz group, and the operation is extended to the whole algebra $A$ by linearity. For example, if $\alpha, \beta \in \mathbb{R}$ and
\[
D = \alpha E_1 + \beta E_4 E_5 E_6,
\]
then
\[
\tilde{D} = \alpha E_1 + \beta E_6 E_5 E_4.
\]
Hence (3.18) and (3.19) imply that
\[
\tilde{\mathbf{w}} = \sum_{i=1}^{d} x_i E_i - \sum_{i=1}^{d} y_i F_i. \tag{3.22}
\]


We notice that $\tilde{\mathbf{w}}$ corresponds to $w^{*}$, the complex conjugate of $w$, and that
\[
\mathbf{w}\tilde{\mathbf{w}} = \sum_{i=1}^{d}(x_i^2 + y_i^2)\, I = \|w\|_2^2\, I \tag{3.23}
\]
is a real scalar in $A$. The linear space of real scalars in $A$ is defined as $\mathcal{S} := \{\alpha I,\ \alpha \in \mathbb{R}\}$. Using (3.23) we can form reciprocals, and
\[
\mathbf{w}^{-1} = \tilde{\mathbf{w}}/|\mathbf{w}|^2, \tag{3.24}
\]
where
\[
|\mathbf{w}| := \|w\|_2, \tag{3.25}
\]
so that $\mathbf{w}^{-1}$ is the image of $w^{-1}$ as defined by (3.5). Thus (3.19) specifies an isomorphism between
(i) the space $\mathbb{C}^d$, having representative element $w = x + \mathrm{i}y$ and an inverse $w^{-1} = w^{*}/\|w\|^2$;
(ii) the real linear space $V_{\mathbb{C}}$ with a representative element $\mathbf{w} = \sum_{i=1}^{d} x_i E_i + \sum_{i=1}^{d} y_i F_i$ and its inverse given by $\mathbf{w}^{-1} = \tilde{\mathbf{w}}/|\mathbf{w}|^2$.
The isomorphism preserves inverses and linearity with respect to real multipliers, as shown in (3.20).
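The construction (3.17)–(3.25) is easy to check numerically. The sketch below is illustrative only: it builds $2d+1$ anticommuting matrices by a standard Kronecker product of Pauli matrices, of size $2^{d}$ rather than the $2^{2d+1}$-dimensional representation quoted above (any matrices satisfying (3.17) will do for this purpose), and the function names are ours.

```python
import numpy as np
from functools import reduce

# Pauli matrices; Kronecker products of these give anticommuting generators.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
i2 = np.eye(2, dtype=complex)

def clifford_generators(d):
    """Return 2d+1 matrices E_1,...,E_{2d+1} with E_i E_j + E_j E_i = 2 delta_ij I, cf. (3.17)."""
    kron = lambda mats: reduce(np.kron, mats)
    E = []
    for m in range(d):
        E.append(kron([sz] * m + [sx] + [i2] * (d - m - 1)))
        E.append(kron([sz] * m + [sy] + [i2] * (d - m - 1)))
    E.append(kron([sz] * d))          # plays the role of J = E_{2d+1}
    return E

def operator_image(w, E, reverse=False):
    """Image (3.19) of w = x + iy in C^d; with reverse=True, the reversion (3.22)."""
    d = len(w)
    J = E[2 * d]
    F = [J @ E[d + i] for i in range(d)]          # (3.18)
    sign = -1.0 if reverse else 1.0
    return (sum(w.real[i] * E[i] for i in range(d))
            + sign * sum(w.imag[i] * F[i] for i in range(d)))

# --- small numerical check for d = 2 ---
d = 2
E = clifford_generators(d)
I = np.eye(2 ** d, dtype=complex)
for a in range(2 * d + 1):
    for b in range(2 * d + 1):
        assert np.allclose(E[a] @ E[b] + E[b] @ E[a], 2.0 * (a == b) * I)   # (3.17)

rng = np.random.default_rng(1)
w = rng.standard_normal(d) + 1j * rng.standard_normal(d)
W = operator_image(w, E)
Wt = operator_image(w, E, reverse=True)
assert np.allclose(W @ Wt, np.linalg.norm(w) ** 2 * I)    # (3.23)
W_inv = Wt / np.linalg.norm(w) ** 2                       # (3.24)
assert np.allclose(W @ W_inv, I)
```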
Using this formalism, we proceed to form the polynomial $\mathbf{q}_{2j+1}(z)$ analogously to (2.15). The equations for its coefficients are
\[
\begin{bmatrix}
\mathbf{c}_0 & \cdots & \mathbf{c}_j \\
\vdots & & \vdots \\
\mathbf{c}_j & \cdots & \mathbf{c}_{2j}
\end{bmatrix}
\begin{bmatrix}
\mathbf{q}_{j+1}^{(2j+1)} \\
\vdots \\
\mathbf{q}_{1}^{(2j+1)}
\end{bmatrix}
=
\begin{bmatrix}
-\mathbf{c}_{j+1} \\
\vdots \\
-\mathbf{c}_{2j+1}
\end{bmatrix}, \tag{3.26}
\]
which represent the accuracy-through-order conditions; we assume that $\mathbf{q}_0^{(2j+1)} = \mathbf{q}_{2j+1}(0) = I$. In principle, we can eliminate the variables $\mathbf{q}_{j+1}^{(2j+1)}, \mathbf{q}_{j}^{(2j+1)}, \ldots, \mathbf{q}_{2}^{(2j+1)}$ sequentially, find $\mathbf{q}_{1}^{(2j+1)}$ and then the rest of the variables of (3.26) by back-substitution. However, the resulting $\mathbf{q}_{i}^{(2j+1)}$ turn out to be higher grade quantities in the Clifford algebra, meaning that they involve higher-order outer products of the fundamental vectors. Numerical representation of these quantities uses up computer storage and is undesirable. For practical purposes, we prefer to work with low-grade quantities such as scalars and vectors [42].
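To make the storage point concrete, consider the simplest case $j = 0$ (an illustrative aside, not from the original text). The single equation in (3.26) is $\mathbf{c}_0\mathbf{q}_1^{(1)} = -\mathbf{c}_1$, so, by (3.24),
\[
\mathbf{q}_1^{(1)} = -\mathbf{c}_0^{-1}\mathbf{c}_1 = -\frac{\tilde{\mathbf{c}}_0\,\mathbf{c}_1}{\|c_0\|^2},
\qquad
\tilde{\mathbf{c}}_0\,\mathbf{c}_1
= \underbrace{\mathrm{Re}(c_0^{\mathrm{H}}c_1)\,I}_{\text{real scalar}}
+ \underbrace{\tfrac{1}{2}\bigl(\tilde{\mathbf{c}}_0\mathbf{c}_1 - \tilde{\mathbf{c}}_1\mathbf{c}_0\bigr)}_{\text{higher-grade part}},
\]
the scalar part following in the same way as Lemma 3.3(i) below; it is the higher-grade part that takes the coefficients $\mathbf{q}_i^{(2j+1)}$ outside $V_{\mathbb{C}}$.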
The previous remarks reflect the fact that, in general, the product $\mathbf{w}_1\mathbf{w}_2\mathbf{w}_3 \notin V_{\mathbb{C}}$ when $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3 \in V_{\mathbb{C}}$. However, there is an important exception to this rule, which we formulate as follows [26]; see Eqs. (6.3) and (6.4) in [40].

Lemma 3.3. Let $\mathbf{w}, \mathbf{t} \in V_{\mathbb{C}}$ be the images of $w = x + \mathrm{i}y$, $t = u + \mathrm{i}v \in \mathbb{C}^d$. Then

(i) $\mathbf{t}\tilde{\mathbf{w}} + \mathbf{w}\tilde{\mathbf{t}} = 2\,\mathrm{Re}(w^{\mathrm{H}}t)\, I \in \mathcal{S}$; (3.27)

(ii) $\mathbf{w}\tilde{\mathbf{t}}\mathbf{w} = 2\,\mathbf{w}\,\mathrm{Re}(w^{\mathrm{H}}t) - \mathbf{t}\,\|w\|^2 \in V_{\mathbb{C}}$. (3.28)


Proof. Using (3.17), (3.18) and (3.22), we have
\[
\mathbf{t}\tilde{\mathbf{w}} + \mathbf{w}\tilde{\mathbf{t}}
= \sum_{i=1}^{d}\sum_{j=1}^{d}\bigl[(u_i E_i + v_i F_i)(x_j E_j - y_j F_j) + (x_j E_j + y_j F_j)(u_i E_i - v_i F_i)\bigr]
= 2(u^{\mathrm{T}}x + v^{\mathrm{T}}y)\, I = 2\,\mathrm{Re}(w^{\mathrm{H}}t)\, I
\]
because, for $i, j = 1, 2, \ldots, d$,
\[
F_i E_j - E_j F_i = 0, \qquad F_i F_j + F_j F_i = -2\delta_{ij} I.
\]
For part (ii), we simply note that
\[
\mathbf{w}\tilde{\mathbf{t}}\mathbf{w} = (\mathbf{w}\tilde{\mathbf{t}} + \mathbf{t}\tilde{\mathbf{w}})\mathbf{w} - \mathbf{t}\tilde{\mathbf{w}}\mathbf{w},
\]
and the result follows from part (i) and the fact (cf. (3.23)) that $\tilde{\mathbf{w}}\mathbf{w} = \mathbf{w}\tilde{\mathbf{w}} = \|w\|_2^2\, I$.
We have noted that, as $j$ increases, the coefficients of $\mathbf{q}_{2j+1}(z)$ are increasingly difficult to store. Economical approximations to $\mathbf{q}_{2j+1}(z)$ are given in [42]. Here we proceed with
\[
\begin{bmatrix}
\mathbf{c}_0 & \cdots & \mathbf{c}_{j+1} \\
\vdots & & \vdots \\
\mathbf{c}_{j+1} & \cdots & \mathbf{c}_{2j+2}
\end{bmatrix}
\begin{bmatrix}
\mathbf{q}_{j+1}^{(2j+1)} \\
\vdots \\
\mathbf{q}_{1}^{(2j+1)} \\
I
\end{bmatrix}
=
\begin{bmatrix}
0 \\
\vdots \\
0 \\
\mathbf{e}_{2j+1}
\end{bmatrix}, \tag{3.29}
\]
which are the accuracy-through-order conditions for a right-handed operator Padé approximant (OPA) $\mathbf{p}_{2j+1}(z)[\mathbf{q}_{2j+1}(z)]^{-1}$ for $\mathbf{f}(z)$ arising from
\[
\mathbf{f}(z)\mathbf{q}_{2j+1}(z) = \mathbf{p}_{2j+1}(z) + \mathbf{e}_{2j+1} z^{2j+2} + \mathrm{O}(z^{2j+3}). \tag{3.30}
\]
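For example (the case $j = 0$ of (3.29)–(3.30), spelled out as a sketch), write $\mathbf{q}_1(z) = I + \mathbf{q}_1^{(1)}z$; matching the coefficients of $z$ and $z^{2}$ in (3.30) gives
\[
\mathbf{c}_0\mathbf{q}_1^{(1)} + \mathbf{c}_1 = 0,
\qquad
\mathbf{c}_1\mathbf{q}_1^{(1)} + \mathbf{c}_2 = \mathbf{e}_1,
\]
which is exactly the $2\times2$ system of Example 3.4 below.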
The left-hand side of (3.29) contains a general square Hankel matrix with elements that are operators from $V_{\mathbb{C}}$. A remarkable fact, by no means obvious from (3.29) but proved in the next theorem, is that
\[
\mathbf{e}_{2j+1} \in V_{\mathbb{C}}. \tag{3.31}
\]
This result enables us to use OPAs of $\mathbf{f}(z)$ without constructing the denominator polynomials. A quantity such as $\mathbf{e}_{2j+1}$ in (3.29) is called the left-designant of the operator matrix and it is denoted by
\[
\mathbf{e}_{2j+1} =
\begin{vmatrix}
\mathbf{c}_0 & \cdots & \mathbf{c}_{j+1} \\
\vdots & & \vdots \\
\mathbf{c}_{j+1} & \cdots & \mathbf{c}_{2j+2}
\end{vmatrix}_{l}. \tag{3.31b}
\]

The subscript $l$ (for left) distinguishes designants from determinants, which are very different constructs. Designants were introduced by Heyting [32] and in this context by Salam [43]. For present purposes, we regard them as being defined by the elimination process following (3.26).
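In the $2\times2$ case this elimination yields the explicit formula (recorded here for convenience; it is implicit in the construction above rather than displayed in the original): for an invertible pivot $\mathbf{a}_{11}$,
\[
\begin{vmatrix}
\mathbf{a}_{11} & \mathbf{a}_{12} \\
\mathbf{a}_{21} & \mathbf{a}_{22}
\end{vmatrix}_{l}
= \mathbf{a}_{22} - \mathbf{a}_{21}\,\mathbf{a}_{11}^{-1}\,\mathbf{a}_{12},
\]
the order of the noncommuting factors being fixed by the elimination; Example 3.4 below is this formula with $\mathbf{a}_{11} = \mathbf{c}_0$, $\mathbf{a}_{12} = \mathbf{a}_{21} = \mathbf{c}_1$ and $\mathbf{a}_{22} = \mathbf{c}_2$.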

Example 3.4. The denominator of the OPA of type $[0/1]$ is constructed using
\[
\begin{bmatrix}
\mathbf{c}_0 & \mathbf{c}_1 \\
\mathbf{c}_1 & \mathbf{c}_2
\end{bmatrix}
\begin{bmatrix}
\mathbf{q}_{1}^{(1)} \\
I
\end{bmatrix}
=
\begin{bmatrix}
0 \\
\mathbf{e}_1
\end{bmatrix}.
\]

We eliminate $\mathbf{q}_1^{(1)}$ as described above following (3.26) and find that
\[
\mathbf{e}_1 =
\begin{vmatrix}
\mathbf{c}_0 & \mathbf{c}_1 \\
\mathbf{c}_1 & \mathbf{c}_2
\end{vmatrix}_{l}
= \mathbf{c}_2 - \mathbf{c}_1\mathbf{c}_0^{-1}\mathbf{c}_1 \in \operatorname{span}\{\mathbf{c}_0, \mathbf{c}_1, \mathbf{c}_2\}. \tag{3.32}
\]
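As a check that this $\mathbf{e}_1$ indeed lies in $V_{\mathbb{C}}$ (a small calculation combining (3.24) with Lemma 3.3(ii); not part of the original text), put $\mathbf{w} = \mathbf{c}_1$ and $\mathbf{t} = \mathbf{c}_0$:
\[
\mathbf{c}_1\mathbf{c}_0^{-1}\mathbf{c}_1
= \frac{\mathbf{c}_1\tilde{\mathbf{c}}_0\mathbf{c}_1}{\|c_0\|^{2}}
= \frac{2\,\mathrm{Re}(c_1^{\mathrm{H}}c_0)\,\mathbf{c}_1 - \|c_1\|^{2}\,\mathbf{c}_0}{\|c_0\|^{2}},
\]
so $\mathbf{e}_1$ is the image of the ordinary vector $c_2 - \bigl[2\,\mathrm{Re}(c_1^{\mathrm{H}}c_0)\,c_1 - \|c_1\|^{2}c_0\bigr]/\|c_0\|^{2} \in \mathbb{C}^{d}$, which is the $j = 0$ instance of (3.31).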

Proceeding with the elimination in (3.29), we obtain
\[
\begin{bmatrix}
\mathbf{c}_2 - \mathbf{c}_1\mathbf{c}_0^{-1}\mathbf{c}_1 & \cdots & \mathbf{c}_{j+2} - \mathbf{c}_1\mathbf{c}_0^{-1}\mathbf{c}_{j+1} \\
\vdots & & \vdots
\end{bmatrix}
\begin{bmatrix}
\mathbf{q}_{j}^{(2j+1)} \\
\vdots \\
\mathbf{q}_{1}^{(2j+1)} \\
I
\end{bmatrix}
=
\begin{bmatrix}
0 \\
\vdots \\
0 \\
\mathbf{e}_{2j+1}
\end{bmatrix}. \tag{3.33}
\]
