
compute the columns $x_\mu$ in reverse order.
The situation where $A$ and $B$ have eigenvalues in common is slightly
more complicated: there may be no solution, and if there is one, it will not
be unique. Here, we only need the following special case:
Lemma 25 Let $J_1 \in \mathbb{C}^{k\times k}$ and $J_2 \in \mathbb{C}^{j\times j}$ be two Jordan blocks having
the same eigenvalue, and assume $k \ge j$ (resp. $k \le j$). Then for every
$C \in \mathbb{C}^{k\times j}$ there exists a unique matrix $B \in \mathbb{C}^{k\times j}$ having nonzero entries
in the first row (resp. last column) only, such that the matrix equation
$$J_1 X - X J_2 = C - B \tag{A.2}$$
has a solution $X \in \mathbb{C}^{k\times j}$, which is unique within the set of matrices $X$
having zero entries in the last row (resp. first column).

Proof: First, observe that we may restrict ourselves to the case where
both matrices $J_1$, $J_2$ are nilpotent. In the first case of $k \ge j$, denoting the
entries in the first row of $B$ by $\beta_1, \dots, \beta_j$, and the columns of $X$ resp. $C$
by $x_m$ resp. $c_m$, equation (A.2) is equivalent to
$$J_1 x_j = c_j - \beta_j e_1, \qquad J_1 x_m = x_{m+1} + c_m - \beta_m e_1, \quad 1 \le m \le j-1,$$
with $e_1$ being the first unit vector. Computing $J_1 x_j$, one finds that the
first equation is solvable if and only if $\beta_j$ equals the first entry in $c_j$. If so,
the entries in $x_j$ can be uniquely computed except for the last one, which
remains undetermined. Inserting into the next equation with $m = j-1$,
assuming $j \ge 2$, we again conclude that this equation is solvable if and
only if $\beta_{j-1}$ is chosen such that the first entry in the right-hand side vector
equals zero. Also note that $k \ge j$ implies that the undetermined entry in
$x_j$ does not interfere with the determination of $\beta_{j-1}$. Solving for $x_{j-1}$ then
gives a vector whose components are uniquely determined except for the
last two, since the undetermined entry in $x_j$ enters into the second-to-last
component of $x_{j-1}$. Repeating these arguments for the remaining columns
then completes the proof of the first case. In the second case of $k \le j$ one
proceeds analogously, working with rows instead of columns. $\Box$
Note that both lemmas remain correct for $j = 1$ and/or $k = 1$; in par-
ticular the second one holds trivially for $j = k = 1$, with $B = C$, $X = 0$.
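The proof above is constructive, and the column-by-column recursion is easy to carry out numerically. The following Python sketch (ours, not from the text) implements the first case $k \ge j$ for nilpotent blocks; it assumes the Jordan blocks carry their ones on the subdiagonal, so that $(J_1 x)_1 = 0$ and $(J_1 x)_i = x_{i-1}$ for $i \ge 2$ (for the transposed convention, transpose all quantities).

```python
import numpy as np

def lemma25_first_case(C):
    """Given C (k x j, k >= j), return X (zero last row) and B (nonzero
    first row only) with J1 X - X J2 = C - B, where J1, J2 are nilpotent
    Jordan blocks with ones on the subdiagonal."""
    k, j = C.shape
    assert k >= j
    X = np.zeros((k, j), dtype=complex)   # last row stays zero
    beta = np.zeros(j, dtype=complex)     # first row of B
    x_next = np.zeros(k, dtype=complex)   # x_{m+1}; zero when m = j
    for m in range(j - 1, -1, -1):        # columns in reverse order
        r = x_next + C[:, m]
        beta[m] = r[0]                    # solvability condition on the first entry
        x = np.zeros(k, dtype=complex)
        x[:k - 1] = r[1:]                 # solve J1 x = r - beta_m e_1; free last entry set to 0
        X[:, m] = x
        x_next = x
    B = np.zeros((k, j), dtype=complex)
    B[0, :] = beta
    return X, B

# Check the construction on a small example.
k, j = 3, 2
J1 = np.diag(np.ones(k - 1), -1)          # nilpotent, ones on the subdiagonal
J2 = np.diag(np.ones(j - 1), -1)
C = np.arange(1.0, 1.0 + k * j).reshape(k, j)
X, B = lemma25_first_case(C)
assert np.allclose(J1 @ X - X @ J2, C - B)
```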

Exercises:
1. Solve (A.1) for
$$A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad
B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
C = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$

2. Spell out Lemma 25 for the cases of $j = 1$ and $k \ge 2$, resp. $j \ge 2$ and
$k = 1$, resp. $j = k = 1$.
3. Suppose that (A.1) had been rewritten as $\tilde{A}\, x = c$, with $x, c \in \mathbb{C}^{j\cdot k}$.
Determine the eigenvalues of $\tilde{A}$ in terms of those of $A$ and $B$.
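As a hint toward a numerical experiment with this exercise (our illustration, not a solution): assuming, as the surrounding text indicates, that (A.1) is the Sylvester equation $AX - XB = C$, vectorizing $X$ column by column gives $\tilde{A} = I \otimes A - B^{T} \otimes I$, whose spectrum can then be inspected directly.

```python
import numpy as np

# Matrices from Exercise 1 above.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

# Column-wise vectorization: vec(AX) = (I kron A) vec(X) and
# vec(XB) = (B^T kron I) vec(X), hence A-tilde as below.
A_tilde = np.kron(np.eye(2), A) - np.kron(B.T, np.eye(2))

print(np.sort_complex(np.linalg.eigvals(A_tilde)))
# Compare with all differences of eigenvalues of A and B.
diffs = [a - b for a in np.linalg.eigvals(A) for b in np.linalg.eigvals(B)]
print(np.sort_complex(np.array(diffs)))
```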



A.2 Blocked Matrices
Let a matrix $A \in \mathbb{C}^{\nu\times\nu}$, $\nu \in \mathbb{N}$, be given.¹ We shall frequently have reason
to block $A$ into submatrices
$$A = [A_{jk}] = \begin{pmatrix}
A_{11} & A_{12} & \dots & A_{1\mu} \\
A_{21} & A_{22} & \dots & A_{2\mu} \\
\vdots & \vdots & \ddots & \vdots \\
A_{\mu 1} & A_{\mu 2} & \dots & A_{\mu\mu}
\end{pmatrix},$$
where $A_{jk}$ is of size $\nu_j \times \nu_k$, with $\nu_1 + \dots + \nu_\mu = \nu$. So, in particular, all
diagonal blocks $A_{jj}$ are square matrices. If the block sizes are not obvi-
ous from the context, we shall say that $A$ is blocked of type $(\nu_1, \dots, \nu_\mu)$.
If several matrices are blocked at a time, then it shall go without say-
ing that all are blocked of the same type. If $A = [A_{jk}]$, $B = [B_{jk}]$ are
blocked, then $C = AB = [C_{jk}]$ is also blocked, with $C_{jk} = \sum_{\nu=1}^{\mu} A_{j\nu} B_{\nu k}$.
So matrix multiplication respects the block structure, and the same holds
trivially for addition. We shall say that a matrix is upper- resp. lower-
triangularly blocked with respect to a given type, if all blocks below resp.
above the block diagonal vanish, and accordingly we speak of diagonally
blocked matrices. We use the symbol $\operatorname{diag}[A_1, \dots, A_\mu]$ for the diagonally
blocked matrix $A$ with diagonal blocks $A_k$. This is the same as saying that
$A$ is the direct sum of the matrices $A_k$. For a triangularly blocked matrix
$A$, the spectrum of $A$ is the union of the spectra of the diagonal blocks $A_{jj}$,
$1 \le j \le \mu$, and the inverse of $A$, in case it exists, is likewise triangularly
blocked.
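For instance (our illustration, not from the text), the blockwise product formula can be verified with numpy, where np.block assembles a matrix from given blocks:

```python
import numpy as np

# Two matrices blocked of the same type (2, 1).
A11, A12 = np.eye(2), np.ones((2, 1))
A21, A22 = np.zeros((1, 2)), np.array([[3.0]])
B11, B12 = np.arange(4.0).reshape(2, 2), np.zeros((2, 1))
B21, B22 = np.ones((1, 2)), np.array([[2.0]])

A = np.block([[A11, A12], [A21, A22]])
B = np.block([[B11, B12], [B21, B22]])

# C_{11} = A_{11} B_{11} + A_{12} B_{21}: the product is blocked of the same type.
C11 = A11 @ B11 + A12 @ B21
assert np.allclose((A @ B)[:2, :2], C11)
```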

Exercises:
1. For
$$A = \begin{pmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{pmatrix},$$
show that $A$ is invertible if and only if both diagonal blocks are in-
vertible, and then
$$A^{-1} = \begin{pmatrix} A_{11}^{-1} & 0 \\ -A_{22}^{-1} A_{21} A_{11}^{-1} & A_{22}^{-1} \end{pmatrix}.$$
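A quick numerical sanity check of this inversion formula, with arbitrarily chosen blocks (our sketch):

```python
import numpy as np

A11 = np.array([[2.0, 1.0], [0.0, 1.0]])
A21 = np.array([[1.0, 4.0]])
A22 = np.array([[3.0]])
A = np.block([[A11, np.zeros((2, 1))], [A21, A22]])

inv11, inv22 = np.linalg.inv(A11), np.linalg.inv(A22)
Ainv = np.block([[inv11, np.zeros((2, 1))],
                 [-inv22 @ A21 @ inv11, inv22]])
assert np.allclose(Ainv, np.linalg.inv(A))
```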


¹ Note that we assume the set $\mathbb{N}$ of natural numbers to include zero.

2. For
$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},$$
with $A_{11}$ invertible, show
$$A = \begin{pmatrix} A_{11} & 0 \\ A_{21} & I \end{pmatrix}
\begin{pmatrix} I & A_{11}^{-1} A_{12} \\ 0 & A_{22} - A_{21} A_{11}^{-1} A_{12} \end{pmatrix}.$$


3. For $A$ as above, conclude $\det A = \det A_{11}\,\det(A_{22} - A_{21} A_{11}^{-1} A_{12})$ and
compute $A^{-1}$, in case it exists.
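The block $A_{22} - A_{21} A_{11}^{-1} A_{12}$ appearing here is commonly called the Schur complement of $A_{11}$; the determinant identity of Exercise 3 is easy to confirm numerically (our sketch, with a random test matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A11, A12 = A[:3, :3], A[:3, 3:]
A21, A22 = A[3:, :3], A[3:, 3:]

S = A22 - A21 @ np.linalg.inv(A11) @ A12   # Schur complement of A11
assert np.isclose(np.linalg.det(A), np.linalg.det(A11) * np.linalg.det(S))
```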




A.3 Some Functional Analysis
In several chapters of the book we study functions with values in a Ba-
nach space. In this context, we shall make use of some standard results of
Functional Analysis, in particular, of the Hahn–Banach theorem and the ba-
sic theory of continuous linear operators. While for the elementary theory
of Banach spaces we refer to standard texts, we shall briefly outline some
notation which we shall use:

• Given a vector space $E$ over the field of complex numbers, assume
that an operation $\cdot : E \times E \to E$ is defined. We say that $E$ is
an algebra over $\mathbb{C}$, if for elements $a, b, c \in E$ and $\alpha \in \mathbb{C}$ the two
associative laws
$$a \cdot (b \cdot c) = (a \cdot b) \cdot c, \qquad \alpha\,(a \cdot b) = (\alpha a) \cdot b$$
and the two distributive laws
$$a \cdot (b + c) = a \cdot b + a \cdot c, \qquad (a + b) \cdot c = a \cdot c + b \cdot c$$
always hold. If the commutative law $a \cdot b = b \cdot a$ also holds, then we
say that $E$ is a commutative algebra.
The operation $\cdot$ will be referred to as the multiplication in $E$, and for
convenience we shall write $a b$ instead of $a \cdot b$.
• Let $E$ be an algebra over $\mathbb{C}$. If an element $e \in E$ exists so that for
every $a \in E$ we have $e\,a = a\,e = a$, then $e$ is called a unit element in
$E$. Observe that $E$ can contain at most one such $e$, since if $\tilde{e}$ also is
a unit element, then we have $e = e\,\tilde{e} = \tilde{e}$.
• Let $E$ be an algebra over $\mathbb{C}$ with unit element $e$. An element $a \in E$
is called invertible, if some $b \in E$ exists with $a\,b = b\,a = e$, and $b$ then
is called the inverse of $a$. Again, it follows that only one such $b$ can exist:
if $\tilde{b}$ also is an inverse of $a$, then $\tilde{b} = \tilde{b}\,e = \tilde{b}\,a\,b = e\,b = b$ follows, using
the associative law. It is common to write $a^{-1}$ for the inverse of $a$, in
case it exists.
• Let $E$ be an algebra over $\mathbb{C}$. A linear mapping $d : E \to E$ is
called a derivation provided that for any $a, b \in E$ the product rule
$$d(a b) = d(a)\, b + a\, d(b)$$
holds. If such a derivation is defined, then $E$ is called a differential
algebra.
• Let $E$ be a vector space over $\mathbb{C}$. A mapping $\|\cdot\| : E \to \mathbb{R}$ is called
a norm on $E$ if for any $a, b \in E$ and $\alpha \in \mathbb{C}$ we have
(a) $\|a\| \ge 0$, and $\|a\| = 0 \iff a = 0$.
(b) $\|\alpha a\| = |\alpha|\, \|a\|$.
(c) $\|a + b\| \le \|a\| + \|b\|$.
Given such a norm, one can define convergent sequences, resp. Cauchy
sequences in $E$, as for sequences of real or complex numbers, by re-
placing the absolute value sign by the notion of norm. Doing so, we
say that $E$ is a Banach space, if every Cauchy sequence converges.
• Let $E$ be an algebra on which a norm $\|\cdot\|$ is defined. If in addition
to (a), (b), (c) we have
$$\|a b\| \le \|a\|\, \|b\|,$$
then $E$ is called a normed algebra. If $E$ moreover is a Banach space, we
speak of a Banach algebra. Note that, in case $E$ has a unit element $e$,
$\|e\| \ge 1$ follows, but equality need not hold. However, one can
always replace the given norm on $E$ by another one, differing from
the former by a constant factor, so that then we do have $\|e\| = 1$,
and we shall always assume this to be the case.

Let $V$ be a Banach space. An arbitrary mapping $F$, defined on a subset
$D \subset V$, is said to have a fixed point if for some $v \in D$ we have $F(v) = v$.
Several theorems show existence of such a fixed point; here we are going to
use the most elementary one:
Theorem 64 (Banach's Fixed Point Theorem)
Let $D$ be a closed subset of a Banach space $V$, and let $F : D \to D$ be a
contraction; i.e., for some $\alpha \in (0, 1)$ let
$$\|F(v_1) - F(v_2)\| \le \alpha\, \|v_1 - v_2\|$$
for any two $v_1, v_2 \in D$. Then $F$ has a unique fixed point in $D$.
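The fixed point is found constructively: the iterates $v_{n+1} = F(v_n)$ converge to it at the geometric rate $\alpha$. A minimal Python illustration with a toy contraction (our example, not from the text):

```python
import math

def fixed_point(F, v0, tol=1e-12, max_iter=1000):
    """Iterate v_{n+1} = F(v_n) until successive iterates agree to tol."""
    v = v0
    for _ in range(max_iter):
        w = F(v)
        if abs(w - v) < tol:
            return w
        v = w
    raise RuntimeError("no convergence within max_iter steps")

# F(v) = cos v maps D = [0, 1] into itself and |F'| <= sin 1 < 1 there,
# so Theorem 64 applies; the unique fixed point is ~0.7390851.
print(fixed_point(math.cos, 0.5))
```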

For a proof of this theorem we refer to Maddox [179], or other books on
(functional) analysis. We also mention that the theorem holds true when-
ever $D$ is a complete metric space. In Chapter 3 we shall use Banach's fixed
point theorem in the space of functions holomorphic in a disc and contin-
uous up to the boundary; this is a Banach space with the norm being the
supremum of the modulus of the function. Here, we apply the theorem to
systems of Volterra integral equations of a very special form, which play a
role in Section 8.2:
Let $S$ be a fixed sector of possibly infinite radius,² let $r \in \mathbb{N}$ be fixed,
and define $E(x) = \sum_{n=1}^{\infty} x^{n-r}/\Gamma(n/r)$.³ For $c > 0$, consider the set $V_c$ of
functions $x$, holomorphic in $S$ and so that $\|x\|_c = \sup_S |x(u)|/E(c|u|) < \infty$.
It is easy to verify that $V_c$ is, in fact, a Banach space. Similarly, the sets $V_c^{\mu}$,
resp. $V_c^{\mu\times\mu}$, of $\mu$-vectors, resp. $\mu\times\mu$-matrices, of such functions are again
Banach spaces, provided we define their norm as above, but with the modulus
of $x(u)$ replaced by the corresponding vector resp. matrix norm. It is easy
to check that all three Banach spaces become larger once we increase $c$.
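To make the weight function concrete, here is a small Python sketch (ours) that evaluates a truncation of $E$ and approximates $\|x\|_c$ on a finite sample of a ray in $S$; the truncation length and the sample grid are arbitrary illustration choices:

```python
import math

def E(x, r, n_terms=200):
    """Truncation of E(x) = sum_{n >= 1} x^(n-r) / Gamma(n/r), for x > 0."""
    return sum(x ** (n - r) / math.gamma(n / r) for n in range(1, n_terms + 1))

def norm_c(x, c, r, points):
    """Approximate ||x||_c = sup_S |x(u)| / E(c|u|) over sample points."""
    return max(abs(x(u)) / E(c * u, r) for u in points)

# Example: x(u) = e^u sampled on part of the positive real axis, r = 2.
points = [0.1 * n for n in range(1, 50)]
print(norm_c(math.exp, c=2.0, r=2, points=points))
```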
For $\mu \in \mathbb{N}$ and $k > 0$, fix some $K \in V_k^{\mu\times\mu}$ and $f(u) \in V_k^{\mu}$. More-
over, let $A(u)$ be a $\nu\times\nu$ matrix holomorphic in $S$, invertible and so that
$a = \sup_S \|A^{-1}(u)\| < \infty$. Under these assumptions, consider the integral
equation
$$A(u)\, x(u) = f(u) + \int_0^u K\bigl((u^r - t^r)^{1/r}\bigr)\, x(t)\, dt^r. \tag{A.3}$$

Then the following holds:
Proposition 26 Under the above assumptions, the above integral equation
has a unique solution $x \in V_\kappa^{\mu}$, with $\kappa > k + a\, \|K\|_k\, k^{1-r}$.

Proof: By termwise integration, and using the Beta Integral (p. 229), one
can see
$$\int_0^1 E\bigl(k|u|(1-x)^{1/r}\bigr)\, E\bigl(\kappa|u|\,x^{1/r}\bigr)\, dx
  = (k|u|)^{-r} \sum_{n=2}^{\infty} \frac{(\kappa|u|)^{n-r}}{\Gamma(n/r)} \sum_{j=1}^{n-1} (k/\kappa)^j,$$
which is at most $k^{1-r}\, |u|^{-r}\, (\kappa - k)^{-1}\, E(\kappa|u|)$. By assumption we have
$\|K(u)\| \le \|K\|_k\, E(k|u|)$ for every $u \in S$. With $\|x(u)\| \le \|x\|_\kappa\, E(\kappa|u|)$,
one can use the above inequality to show
$$\Bigl\| \int_0^u K\bigl((u^r - t^r)^{1/r}\bigr)\, x(t)\, dt^r \Bigr\|
  \le \frac{\|K\|_k\, \|x\|_\kappa\, E(\kappa|u|)}{k^{r-1}\,(\kappa - k)}, \qquad u \in S.$$
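This estimate shows that the map $x \mapsto A^{-1}(f + \int K x)$ is a contraction on $V_\kappa^{\mu}$ once $a\,\|K\|_k\,k^{1-r}/(\kappa - k) < 1$, i.e., for $\kappa$ as in the proposition, so Banach's fixed point theorem yields the solution. To see the underlying Picard iteration at work numerically, here is a toy scalar discretization of (A.3) (our sketch): $r = 1$, $A \equiv 1$, $K(s) = e^{-s}$ and $f \equiv 1$, for which the exact solution is $x(u) = 1 + u$.

```python
import numpy as np

# Grid on [0, 5]; trapezoidal quadrature for the convolution integral.
h, N = 0.01, 500
u = h * np.arange(N + 1)
f = np.ones(N + 1)        # f == 1  (arbitrary choice)
Kv = np.exp(-u)           # K(s) = e^{-s}  (arbitrary choice)

x = f.copy()
for _ in range(200):      # Picard iteration x_{n+1} = f + K * x_n
    x_new = f.copy()
    for i in range(1, N + 1):
        w = Kv[i::-1] * x[:i + 1]              # K(u_i - t_j) x(t_j)
        x_new[i] += h * (w.sum() - 0.5 * (w[0] + w[-1]))
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new

assert np.allclose(x, 1.0 + u, atol=1e-3)      # exact solution is 1 + u
```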


² See p. 60 for this notion.
³ Observe that the function E is slightly different from, but intimately related to,
Mittag-Leffler's function $E_{1/r}$, defined on p. 233. For the definition of the Gamma
function $\Gamma(z)$, see p. 227.