$$|f(x(t)) - f(y(t))| \le K e^{\kappa t} |x_0 - y_0|.$$

Since this is true for each fixed $w$ path, the estimate remains true when we take expectations, so that

$$|P^t f(x_0) - P^t f(y_0)| \le K e^{\kappa t} |x_0 - y_0|.$$

Therefore, if $f$ is a Lipschitz function in $C(\mathbb{R}^\ell)$ then $P^t f$ is a bounded continuous function. The Lipschitz functions are dense in $C(\mathbb{R}^\ell)$ and $P^t$ is a bounded linear operator. Consequently, if $f$ is in $C(\mathbb{R}^\ell)$ then $P^t f$ is a bounded continuous function. We still need to show that it vanishes at infinity. By uniqueness,

$$x(t) = x(s) + \int_s^t b(x(r))\,dr + w(t) - w(s)$$

for all $0 \le s \le t$, so that

$$|x(t) - x(s)| \le \left|\, \int_s^t [\,b(x(r)) - b(x(t))\,]\,dr \,\right| + (t - s)\,|b(x(t))| + |w(t) - w(s)|$$
$$\le \kappa \int_s^t |x(r) - x(t)|\,dr + t\,|b(x(t))| + |w(t) - w(s)|$$
$$\le \kappa t \sup_{0 \le r \le t} |x(r) - x(t)| + t\,|b(x(t))| + \sup_{0 \le r \le t} |w(t) - w(r)|.$$

Since this is true for each $s$, $0 \le s \le t$,

$$\sup_{0 \le s \le t} |x(t) - x(s)| \le \gamma \left[\, t\,|b(x(t))| + \sup_{0 \le s \le t} |w(t) - w(s)| \,\right],$$

where $\gamma = 1/(1 - \kappa t)$, provided that $\kappa t < 1$. In particular, if $\kappa t < 1$ then

$$|x(t) - x_0| \le \gamma \left[\, t\,|b(x(t))| + \sup_{0 \le s \le t} |w(t) - w(s)| \,\right]. \tag{8.7}$$

40 CHAPTER 8
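The pathwise estimate (8.7) can be watched numerically. The sketch below makes illustrative choices not taken from the text: the drift $b(x) = -x$ (global Lipschitz constant $\kappa = 1$), a unit-variance Wiener path, and $t = 1/2$, so that $\kappa t < 1$ and $\gamma = 2$. It solves the integral equation by the Euler scheme and compares the two sides of (8.7) along the sampled path.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (not from the text): drift b(x) = -x, so the
# global Lipschitz constant is kappa = 1; take t = 1/2, hence gamma = 2.
b = lambda x: -x
kappa, t, n = 1.0, 0.5, 100_000
dt = t / n
gamma = 1.0 / (1.0 - kappa * t)

# One sampled Wiener path, and the Euler iteration for
# x(t) = x0 + int_0^t b(x(r)) dr + w(t) - w(0).
dw = rng.normal(0.0, np.sqrt(dt), n)
w = np.concatenate(([0.0], np.cumsum(dw)))
x = np.empty(n + 1)
x[0] = x0 = 3.0
for k in range(n):
    x[k + 1] = x[k] + b(x[k]) * dt + dw[k]

lhs = abs(x[-1] - x0)
rhs = gamma * (t * abs(b(x[-1])) + np.max(np.abs(w[-1] - w)))
print(lhs, "<=", rhs)
```

The inequality holds with substantial slack here, as the derivation suggests: the discretization only perturbs both sides by $O(dt)$.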

Now let $f$ be in $C_{\mathrm{com}}(\mathbb{R}^\ell)$, let $\kappa t < 1$, and let $\delta$ be the supremum of $|b(z_0)|$ for $z_0$ in the support of $f$. By (8.7), $f(x(t)) = 0$ unless

$$\inf_{z_0 \in \mathrm{supp}\, f} |z_0 - x_0| \le \gamma \left[\, t\delta + \sup_{0 \le s \le t} |w(t) - w(s)| \,\right]. \tag{8.8}$$

But as $x_0$ tends to infinity, the probability that $w$ will satisfy (8.8) tends to 0. Since $f$ is bounded, this means that $Ef(x(t)) = P^t f(x_0)$ tends to 0 as $x_0$ tends to infinity. We have already seen that $P^t f$ is continuous, so $P^t f$ is in $C(\mathbb{R}^\ell)$. Since $C_{\mathrm{com}}(\mathbb{R}^\ell)$ is dense in $C(\mathbb{R}^\ell)$ and $P^t$ is a bounded linear operator, $P^t$ maps $C(\mathbb{R}^\ell)$ into itself, provided $\kappa t < 1$. This restriction could have been avoided by introducing an exponential factor, but this is not necessary, as we shall show that the $P^t$ form a semigroup.

Let $0 \le s \le t$. The conditional distribution of $x(t)$, with $x(r)$ for all $0 \le r \le s$ given, is a function of $x(s)$ alone, since the equation

$$x(t) = x(s) + \int_s^t b(x(s_1))\,ds_1 + w(t) - w(s)$$

has a unique solution. Thus the $x$ process is a Markov process, and

$$E\{f(x(t)) \mid x(r),\ 0 \le r \le s\} = E\{f(x(t)) \mid x(s)\} = P^{t-s} f(x(s))$$

for $f$ in $C(\mathbb{R}^\ell)$, $0 \le s \le t$. Therefore,

$$P^{t+s} f(x_0) = Ef(x(t+s)) = EE\{f(x(t+s)) \mid x(r),\ 0 \le r \le s\} = EP^t f(x(s)) = P^s P^t f(x_0),$$

so that $P^{t+s} = P^t P^s$. It is clear that

$$\sup_{0 \le f \le 1} P^t f(x_0) = 1$$

for all $x_0$ and $t$.
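For a concrete drift with an explicit transition law, the semigroup property $P^{t+s} = P^s P^t$ can be checked numerically. The sketch below is purely illustrative and not from the text: it takes the one-dimensional drift $b(x) = -x$ with generator $C = \tfrac12\, d^2/dx^2$, for which $x(t)$ given $x_0$ is Gaussian with mean $x_0 e^{-t}$ and variance $(1 - e^{-2t})/2$, and evaluates $P^t$ by Gauss–Hermite quadrature.

```python
import numpy as np

# Illustrative one-dimensional example (not from the text): b(x) = -x and
# C = (1/2) d^2/dx^2, so that x(t) given x0 is N(x0 e^{-t}, (1 - e^{-2t})/2)
# and P^t f can be computed by Gauss-Hermite quadrature.
nodes, weights = np.polynomial.hermite.hermgauss(80)

def expectation(f, mean, sigma):
    # E f(mean + sigma * Z), Z standard normal, via Gauss-Hermite.
    return np.sum(weights * f(mean + sigma * np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

def Pt(f, t):
    sigma = np.sqrt((1.0 - np.exp(-2.0 * t)) / 2.0)
    return lambda x: expectation(f, x * np.exp(-t), sigma)

f = lambda x: np.exp(-x * x)          # smooth and vanishing at infinity

x0, s, t = 0.7, 0.3, 0.5
lhs = Pt(f, s + t)(x0)                   # P^{s+t} f (x0)
rhs = Pt(np.vectorize(Pt(f, t)), s)(x0)  # P^s (P^t f) (x0)
print(lhs, rhs)                          # agree to quadrature accuracy
```

The agreement reflects the exact composition of Gaussian kernels: the variances add as $e^{-2t}\sigma_s^2 + \sigma_t^2 = \sigma_{s+t}^2$.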

It remains only to prove (8.4) for $f$ in $C^2_{\mathrm{com}}(\mathbb{R}^\ell)$. (Since $C^2_{\mathrm{com}}(\mathbb{R}^\ell)$ is dense in $C(\mathbb{R}^\ell)$ and the $P^t$ have norm one, this will imply that $P^t f \to f$ as $t \to 0$ for all $f$ in $C(\mathbb{R}^\ell)$, so that $P^t$ is a Markovian semigroup.)

Let $f$ be in $C^2_{\mathrm{com}}(\mathbb{R}^\ell)$, and let $K$ be a compact set containing the support of $f$ in its interior. An argument entirely analogous to the derivation

A CLASS OF STOCHASTIC DIFFERENTIAL EQUATIONS 41

of (8.7), with the subtraction and addition of $b(x_0)$ instead of $b(x(t))$, gives

$$|x(t) - x_0| \le \gamma \left[\, t\,|b(x_0)| + \sup_{0 \le s \le t} |w(0) - w(s)| \,\right], \tag{8.9}$$

provided $\kappa t < 1$ (which we shall assume to be the case). Let $x_0$ be in the complement of $K$. Then $f(x_0) = 0$ and $f(x(t))$ is also 0 unless $\mu \le |x(t) - x_0|$, where $\mu$ is the distance from the support of $f$ to the complement of $K$. But the probability that the right hand side of (8.9) will be bigger than $\mu$ is $o(t)$ (in fact, $o(t^n)$ for all $n$) by familiar properties of the Wiener process. Since $f$ is bounded, this means that $P^t f(x_0)$ is uniformly $o(t)$ for $x_0$ in the complement of $K$, so that

$$\frac{P^t f(x_0) - f(x_0)}{t} \to b(x_0) \cdot \nabla f(x_0) + Cf(x_0) = 0$$

uniformly for $x_0$ in the complement of $K$. Now let $x_0$ be in $K$. We have

$$P^t f(x_0) = Ef(x(t)) = Ef\!\left( x_0 + \int_0^t b(x(s))\,ds + w(t) - w(0) \right).$$

Define $R(t)$ by

$$f\!\left( x_0 + \int_0^t b(x(s))\,ds + w(t) - w(0) \right) = f(x_0) + t\,b(x_0) \cdot \nabla f(x_0) + [w(t) - w(0)] \cdot \nabla f(x_0)$$
$$+ \frac{1}{2} \sum_{i,j} [w^i(t) - w^i(0)][w^j(t) - w^j(0)]\, \frac{\partial^2}{\partial x^i\, \partial x^j} f(x_0) + R(t).$$

Then

$$\frac{P^t f(x_0) - f(x_0)}{t} = b(x_0) \cdot \nabla f(x_0) + Cf(x_0) + \frac{1}{t}\, ER(t).$$

By Taylor's formula,

$$R(t) = o\big(|w(t) - w(0)|^2\big) + O\!\left( \int_0^t |b(x(s)) - b(x_0)|\,ds \right).$$

Since $E(|w(t) - w(0)|^2) \le \mathrm{const.}\; t$, we need only show that

$$E \sup_{x_0 \in K}\, \frac{1}{t} \int_0^t |b(x(s)) - b(x_0)|\,ds \tag{8.10}$$


tends to 0. But (8.10) is less than

$$E \sup_{x_0 \in K}\, \frac{\kappa}{t} \int_0^t |x(s) - x_0|\,ds,$$

which by (8.9) is less than

$$E \sup_{x_0 \in K}\, \kappa\gamma \left[\, t\,|b(x_0)| + \sup_{0 \le s \le t} |w(0) - w(s)| \,\right]. \tag{8.11}$$

The integrand in (8.11) is integrable and decreases to 0 as $t \to 0$. QED.
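When the transition kernel is explicitly integrable, the limit (8.4) can be observed directly. The sketch below uses illustrative choices throughout (none from the text): $b(x) = -x$, $C = \tfrac12\, d^2/dx^2$, and $f(x) = e^{-x^2}$, for which $P^t f$ has a closed Gaussian form, so $(P^t f - f)/t$ visibly approaches $b f' + Cf$ at rate $O(t)$.

```python
import numpy as np

# Illustrative check of (8.4) with b(x) = -x and C = (1/2) d^2/dx^2,
# so Lf(x) = -x f'(x) + f''(x)/2.  For f(x) = exp(-x^2) the Gaussian
# transition kernel integrates in closed form:
#   E exp(-(m + s Z)^2) = exp(-m^2 / (1 + 2 s^2)) / sqrt(1 + 2 s^2).
def Ptf(x, t):
    m = x * np.exp(-t)
    s2 = (1.0 - np.exp(-2.0 * t)) / 2.0
    return np.exp(-m * m / (1.0 + 2.0 * s2)) / np.sqrt(1.0 + 2.0 * s2)

f  = lambda x: np.exp(-x * x)
Lf = lambda x: (4.0 * x * x - 1.0) * f(x)   # -x f' + f''/2, computed by hand

x0 = 0.4
for t in (1e-2, 1e-3, 1e-4):
    print(t, (Ptf(x0, t) - f(x0)) / t - Lf(x0))   # difference shrinks with t
```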

Theorem 8.1 can be generalized in various ways. The first paragraph of the theorem remains true if $b$ is a continuous function of $x$ and $t$ that satisfies a global Lipschitz condition in $x$ with a uniform Lipschitz constant for each compact $t$-interval. The second paragraph needs to be slightly modified as we no longer have a semigroup, but the proofs are the same. Doob [15, §6, pp. 273–291], using K. Itô's stochastic integrals (see Chapter 11), has a much deeper generalization in which the matrix $c^{ij}$ depends on $x$ and $t$. The restriction that $b$ satisfy a global Lipschitz condition is necessary in general. For example, if the matrix $c^{ij}$ is 0 then we have a system of ordinary differential equations. However, if $C$ is elliptic (that is, if the matrix $c^{ij}$ is of positive type and non-singular) the smoothness conditions on $b$ can be greatly relaxed (cf. [20]).

We make the convention that

$$dx(t) = b(x(t))\,dt + dw(t)$$

means that

$$x(t) - x(s) = \int_s^t b(x(r))\,dr + w(t) - w(s)$$

for all $t$ and $s$.

THEOREM 8.2 Let $A \colon \mathbb{R}^\ell \to \mathbb{R}^\ell$ be linear, let $w$ be a Wiener process on $\mathbb{R}^\ell$ with infinitesimal generator (8.1), and let $f \colon [0, \infty) \to \mathbb{R}^\ell$ be continuous. Then the solution of

$$dx(t) = Ax(t)\,dt + f(t)\,dt + dw(t), \qquad x(0) = x_0, \tag{8.12}$$

for $t \ge 0$ is

$$x(t) = e^{At} x_0 + \int_0^t e^{A(t-s)} f(s)\,ds + \int_0^t e^{A(t-s)}\,dw(s). \tag{8.13}$$


The $x(t)$ are Gaussian with mean

$$Ex(t) = e^{At} x_0 + \int_0^t e^{A(t-s)} f(s)\,ds \tag{8.14}$$

and covariance $r(t,s) = E\big(x(t) - Ex(t)\big)\big(x(s) - Ex(s)\big)$ given by

$$r(t,s) = \begin{cases} e^{A(t-s)} \displaystyle\int_0^s e^{Ar}\, 2c\, e^{A^T r}\,dr, & t \ge s, \\[1.5ex] \displaystyle\int_0^t e^{Ar}\, 2c\, e^{A^T r}\,dr\; e^{A^T(s-t)}, & t \le s. \end{cases} \tag{8.15}$$

The latter integral in (8.13) is a Wiener integral (as in Chapter 7). In (8.15), $A^T$ denotes the transpose of $A$ and $c$ is the matrix with entries $c^{ij}$ occurring in (8.1).
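The two lines of (8.15) can be checked for mutual consistency numerically: $r(t,s) = r(s,t)^T$, and $Q(t) = \int_0^t e^{Ar}\,2c\,e^{A^T r}\,dr$ satisfies what is often called a Lyapunov equation, $dQ/dt = AQ + QA^T + 2c$, whose right side equals $e^{At}\,2c\,e^{A^T t}$. A sketch with an arbitrary illustrative $2\times 2$ matrix $A$ and covariance matrix $c$ (both invented for the example):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative matrices (not from the text); c symmetric, of positive type.
A = np.array([[0.0, 1.0], [-2.0, -0.3]])
c = np.array([[0.5, 0.1], [0.1, 0.4]])

def Q(t, n=4000):
    # Q(t) = int_0^t e^{Ar} 2c e^{A^T r} dr, by the trapezoidal rule.
    h = t / n
    rs = np.linspace(0.0, t, n + 1)
    vals = np.array([expm(A * r) @ (2.0 * c) @ expm(A.T * r) for r in rs])
    return h * (vals[1:-1].sum(axis=0) + 0.5 * (vals[0] + vals[-1]))

t, s = 1.0, 0.4
r_ts = expm(A * (t - s)) @ Q(s)          # first line of (8.15), t >= s
r_st = Q(s) @ expm(A.T * (t - s))        # second line, roles of t and s swapped

sym_err = np.max(np.abs(r_ts - r_st.T))  # r(t,s) = r(s,t)^T
lyap_err = np.max(np.abs(A @ Q(t) + Q(t) @ A.T + 2.0 * c
                         - expm(A * t) @ (2.0 * c) @ expm(A.T * t)))
print(sym_err, lyap_err)                 # both small (quadrature error only)
```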

Proof. Define $x(t)$ by (8.13). Integrate the last term in (8.13) by parts, obtaining

$$\int_0^t e^{A(t-s)}\,dw(s) = \int_0^t A e^{A(t-s)} w(s)\,ds + \Big[\, e^{A(t-s)} w(s) \,\Big]_{s=0}^{s=t} = \int_0^t A e^{A(t-s)} w(s)\,ds + w(t) - e^{At} w(0).$$

It follows that $x(t) - w(t)$ is differentiable, and has derivative $Ax(t) + f(t)$. This proves that (8.12) holds.

The $x(t)$ are clearly Gaussian with the mean (8.14). Suppose that $t \ge s$. Then the covariance is given by

$$Ex^i(t)\, x^j(s) - Ex^i(t)\, Ex^j(s) = E \sum_k \int_0^t \big(e^{A(t-t_1)}\big)_{ik}\,dw^k(t_1) \sum_h \int_0^s \big(e^{A(s-s_1)}\big)_{jh}\,dw^h(s_1)$$
$$= \int_0^s \sum_{k,h} \big(e^{A(t-r)}\big)_{ik}\, 2c^{kh}\, \big(e^{A(s-r)}\big)_{jh}\,dr$$
$$= \int_0^s \big(e^{A(t-r)}\, 2c\, e^{A^T(s-r)}\big)_{ij}\,dr$$
$$= \left( e^{A(t-s)} \int_0^s e^{Ar}\, 2c\, e^{A^T r}\,dr \right)_{ij}.$$

The case $t \le s$ is analogous. QED.


Reference

[20]. Edward Nelson, "Les écoulements incompressibles d'énergie finie", Colloques internationaux du Centre national de la recherche scientifique No 117, "Les équations aux dérivées partielles", Éditions du C.N.R.S., Paris, 1962. (The last statement in section II is incorrect.)

Chapter 9

The Ornstein-Uhlenbeck theory of Brownian motion

The theory of Brownian motion developed by Einstein and Smolu-