(S3). The conditions (R3) and (S2) hold and det σ∗²(t) > 0 for a.e. t.

We obtain theorems analogous to the preceding ones. In particular, if

a ≤ b, a ∈ I, b ∈ I, then for an (S1) process

    E{x(b) − x(a) | F_b} = E{ ∫_a^b D∗x(s) ds | F_b },        (11.11)

KINEMATICS OF STOCHASTIC MOTION 79

and for an (S2) process

    E{[y∗(b) − y∗(a)]² | F_b} = E{ ∫_a^b σ∗²(s) ds | F_b }.        (11.12)

THEOREM 11.10 Let x be an (S1) process. Then

    E Dx(t) = E D∗x(t)        (11.13)

for all t in I. Let x be an (S2) process. Then

    E σ²(t) = E σ∗²(t)        (11.14)

for all t in I.

Proof. By Theorem 11.1 and (11.11), if we take absolute expectations we find

    E[x(b) − x(a)] = E ∫_a^b Dx(s) ds = E ∫_a^b D∗x(s) ds

for all a and b in I. Since s ↦ Dx(s) and s ↦ D∗x(s) are continuous
in L¹, (11.13) holds. Similarly, (11.14) follows from Theorem 11.4 and

(11.12). QED.

THEOREM 11.11 Let x be an (S1) process. Then x is a constant (i.e.,
x(t) is the same random variable for all t) if and only if Dx = D∗x = 0.

Proof. The only if part of the theorem is trivial. Suppose that Dx =
D∗x = 0. By Theorem 11.2, x is a martingale and a martingale with the
direction of time reversed. Let t₁ ≠ t₂, x₁ = x(t₁), x₂ = x(t₂). Then x₁
and x₂ are in L¹ and E{x₁ | x₂} = x₂, E{x₂ | x₁} = x₁. We wish to show
that x₁ = x₂ (a.e., of course).

If x₁ and x₂ are in L² (as they are if x is an (S2) process) there is a
trivial proof, as follows. We have

    E{(x₂ − x₁)² | x₁} = E{x₂² − 2x₂x₁ + x₁² | x₁} = E{x₂² | x₁} − x₁²,

so that if we take absolute expectations we find

    E(x₂ − x₁)² = Ex₂² − Ex₁².


The same result holds with x₁ and x₂ interchanged. Thus E(x₂ − x₁)² = 0,
x₂ = x₁ a.e.

G. A. Hunt showed me the following proof for the general case (x₁, x₂
in L¹).

Let μ be the distribution of x₁, x₂ in the plane. We can take x₁ and
x₂ to be the coordinate functions. Then there is a conditional probability
distribution p(x₁, ·) such that if ν is the distribution of x₁ and f is a
positive Baire function on ℝ²,

    ∫ f(x₁, x₂) dμ(x₁, x₂) = ∫∫ f(x₁, x₂) p(x₁, dx₂) dν(x₁).

(See Doob [15, §6, pp. 26–34].) Then

    E{φ(x₂) | x₁} = ∫ φ(x₂) p(x₁, dx₂)   a.e. [ν]

provided φ(x₂) is in L¹. Take φ to be strictly convex with |φ(ξ)| ≤ |ξ|
for all real ξ (so that φ(x₂) is in L¹). Then, for each x₁, since φ is strictly
convex, Jensen's inequality gives

    φ( ∫ x₂ p(x₁, dx₂) ) < ∫ φ(x₂) p(x₁, dx₂)

unless x₂ is constant a.e. [p(x₁, ·)], in which case φ(x₁) = ∫ φ(x₂) p(x₁, dx₂). But

    ∫ x₂ p(x₁, dx₂) = x₁   a.e. [ν],

so, unless x₂ = x₁ a.e. [ν],

    φ(x₁) < ∫ φ(x₂) p(x₁, dx₂).

If we take absolute expectations, we find Eφ(x₁) < Eφ(x₂) unless x₂ = x₁
a.e. The same argument gives the reverse inequality, so x₂ = x₁ a.e.

QED.

THEOREM 11.12 Let x and y be (S1) processes with respect to the
same families of σ-algebras Pt and Ft, and suppose that x(t), y(t), Dx(t),
Dy(t), D∗x(t), and D∗y(t) all lie in L² and are continuous functions of
t in L². Then

    (d/dt) E[x(t)y(t)] = E[Dx(t) · y(t)] + E[x(t) · D∗y(t)].


Proof. We need to show, for a and b in I, that

    E[x(b)y(b) − x(a)y(a)] = E ∫_a^b [Dx(t) · y(t) + x(t) · D∗y(t)] dt.

(Notice that the integrand is continuous.) Divide [a, b] into n equal parts:
t_j = a + j(b − a)/n for j = 0, …, n. Then

    E[x(b)y(b) − x(a)y(a)]
      = lim_{n→∞} Σ_{j=1}^{n−1} E[ x(t_{j+1})y(t_j) − x(t_j)y(t_{j−1}) ]
      = lim_{n→∞} Σ_{j=1}^{n−1} E[ (x(t_{j+1}) − x(t_j)) · (y(t_j) + y(t_{j−1}))/2
                                 + ((x(t_{j+1}) + x(t_j))/2) · (y(t_j) − y(t_{j−1})) ]
      = lim_{n→∞} Σ_{j=1}^{n−1} ((b − a)/n) E[ Dx(t_j) · y(t_j) + x(t_j) · D∗y(t_j) ]
      = ∫_a^b E[ Dx(t) · y(t) + x(t) · D∗y(t) ] dt.

QED.
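The symmetrization step in the proof can be verified directly: for any four numbers, (x₂ − x₁)(y₁ + y₀)/2 + (x₂ + x₁)(y₁ − y₀)/2 = x₂y₁ − x₁y₀, which is exactly the summand before it is split into forward and backward difference quotients. A minimal numerical check of this identity (my own sketch, plain Python):

```python
import random

# Check the algebraic identity used in the proof of Theorem 11.12:
#   (x2 - x1)*(y1 + y0)/2 + (x2 + x1)/2*(y1 - y0) == x2*y1 - x1*y0
# with x1 = x(t_j), x2 = x(t_{j+1}), y0 = y(t_{j-1}), y1 = y(t_j).
random.seed(0)
for _ in range(1000):
    x1, x2, y0, y1 = (random.uniform(-10.0, 10.0) for _ in range(4))
    lhs = (x2 - x1) * (y1 + y0) / 2 + (x2 + x1) / 2 * (y1 - y0)
    rhs = x2 * y1 - x1 * y0
    assert abs(lhs - rhs) < 1e-9
print("identity verified")
```

Expanding the products, the cross terms x₂y₀ and x₁y₁ cancel in pairs, which is also why the original sum telescopes.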

Now let us assume that the past Pt and the future Ft are condi-
tionally independent given the present Pt ∩ Ft. That is, if f is any
Ft-measurable function in L¹ then E{f | Pt} = E{f | Pt ∩ Ft}, and if f
is any Pt-measurable function in L¹ then E{f | Ft} = E{f | Pt ∩ Ft}.
If x is a Markov process and Pt is generated by the x(s) with s ≤ t, and
Ft by the x(s) with s ≥ t, this is certainly the case. However, the as-
sumption is much weaker. It applies, for example, to the position x(t) of
the Ornstein-Uhlenbeck process. The reason is that the present Pt ∩ Ft
may not be generated by x(t); for example, in the Ornstein-Uhlenbeck
case v(t) = dx(t)/dt is also Pt ∩ Ft-measurable.

With the above assumption on the Pt and Ft, if x is an (S1) process
then Dx(t) and D∗x(t) are Pt ∩ Ft-measurable, and we can form DD∗x(t)
and D∗Dx(t) if they exist. Assuming they exist, we define

    a(t) = ½ DD∗x(t) + ½ D∗Dx(t)        (11.15)


and call it the mean second derivative or mean acceleration.

If x is a sufficiently smooth function of t then a(t) = d²x(t)/dt². This
is also true of other possible candidates for the title of mean acceleration,
such as DD∗x(t), D∗Dx(t), DDx(t), D∗D∗x(t), and ½DDx(t) + ½D∗D∗x(t).

Of these the first four distinguish between the two choices of direction for
the time axis, and so can be discarded. To discuss the fifth possibility,
consider the Gaussian Markov process x(t) satisfying

    dx(t) = −ωx(t) dt + dw(t),

where w is a Wiener process, in equilibrium (that is, with the invariant
Gaussian measure as initial measure). Then

    Dx(t) = −ωx(t),
    D∗x(t) = ωx(t),
    a(t) = −ω²x(t),

but

    ½ DDx(t) + ½ D∗D∗x(t) = ω²x(t).

This process is familiar to us: it is the position in the Smoluchowski
description of the highly overdamped harmonic oscillator (or the velocity
of a free particle in the Ornstein-Uhlenbeck theory). The characteristic
feature of this process is its constant tendency to go towards the origin,

no matter which direction of time is taken. Our definition of mean ac-
celeration, which gives a(t) = −ω²x(t), is kinematically the appropriate
definition.
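The forward and backward derivatives of this Gaussian Markov process can also be estimated numerically. The following sketch is my own illustration, not from the text: it simulates dx = −ωx dt + dw by an Euler scheme with ω = 1 and increments dw of variance dt (a fixed normalization chosen for simplicity), starts from the invariant Gaussian law, and recovers Dx(t) ≈ −ωx(t) and D∗x(t) ≈ +ωx(t) by regressing forward and backward difference quotients on x(t).

```python
import math
import random

# Euler simulation of dx = -omega*x dt + dw, started from the invariant
# Gaussian measure (variance 1/(2*omega) for increments with E dw^2 = dt).
random.seed(1)
omega, dt, n = 1.0, 1e-3, 500_000
x = [random.gauss(0.0, math.sqrt(1.0 / (2 * omega)))]
for _ in range(n):
    x.append(x[-1] - omega * x[-1] * dt + random.gauss(0.0, math.sqrt(dt)))

def slope(pairs):
    # Least-squares slope through the origin: sum(u*v) / sum(u*u).
    return sum(u * v for u, v in pairs) / sum(u * u for u, _ in pairs)

# Forward difference quotients regressed on x(t): estimates Dx(t)/x(t).
fwd = slope([(x[i], (x[i + 1] - x[i]) / dt) for i in range(n)])
# Backward difference quotients regressed on x(t): estimates D*x(t)/x(t).
bwd = slope([(x[i], (x[i] - x[i - 1]) / dt) for i in range(1, n + 1)])
print("forward drift slope  ~ %.2f (expect -omega = -1.00)" % fwd)
print("backward drift slope ~ %.2f (expect +omega = +1.00)" % bwd)
```

The two slopes differ in sign, illustrating how the same sample paths yield Dx = −ωx but D∗x = +ωx, the fact that drives the discussion of the mean acceleration above.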

Reference

The stochastic integral was invented by Itô:

[27]. Kiyosi Itô, "On Stochastic Differential Equations", Memoirs of the
American Mathematical Society, Number 4 (1951).

Doob gave a treatment based on martingales [15, §6, pp. 436–451].

Our discussion of stochastic integrals, as well as most of the other material
of this section, is based on Doob's book.

Chapter 12

Dynamics of stochastic motion

The fundamental law of non-relativistic dynamics is Newton's law
F = ma: the force on a particle is the product of the particle's mass
and the acceleration of the particle. This law is, of course, nothing but
the definition of force. Most definitions are trivial—others are profound.
Feynman [28] has analyzed the characteristics that make Newton's defi-
nition profound:

"It implies that if we study the mass times the acceleration and call
the product the force, i.e., if we study the characteristics of force as a
program of interest, then we shall find that forces have some simplicity;
the law is a good program for analyzing nature, it is a suggestion that
the forces will be simple."

Now suppose that x is a stochastic process representing the motion

of a particle of mass m. Leaving unanalyzed the dynamical mechanism

causing the random fluctuations, we can ask how to express the fact that

there is an external force F acting on the particle. We do this simply by

setting

F = ma

where a is the mean acceleration (Chapter 11).

For example, suppose that x is the position in the Ornstein-Uhlenbeck

theory of Brownian motion, and suppose that the external force is F =
−grad V where exp(−V/mβD) is integrable. In equilibrium, the particle
has probability density a normalization constant times exp(−V/mβD)
and satisfies

    dx(t) = v(t) dt,
    dv(t) = −βv(t) dt + K(x(t)) dt + dB(t),


where K = F/m = −(grad V)/m, and B has variance parameter 2β²D.
Then

    Dx(t) = D∗x(t) = v(t),
    Dv(t) = −βv(t) + K(x(t)),
    D∗v(t) = βv(t) + K(x(t)),
    a(t) = K(x(t)).

Therefore the law F = ma holds.
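As an illustration (my own sketch, not part of the text), this can be checked numerically for the linear force K(x) = −ω₀²x, i.e. V = ½mω₀²x². The script simulates the equations of motion above by an Euler scheme, with unit noise variance per unit time instead of the text's 2β²D (this only rescales the fluctuations), and estimates Dv and D∗v by least-squares regression of forward and backward increments of v on (x(t), v(t)); averaging the two fits recovers the mean acceleration a(t) ≈ −ω₀²x(t) = K(x(t)).

```python
import math
import random

# Euler simulation of  dx = v dt,  dv = -beta*v dt + K(x) dt + dB,
# with K(x) = -w0^2 * x and E dB^2 = dt, started from the stationary
# Gaussian law (Var x = 1/(2*beta*w0^2), Var v = 1/(2*beta), Cov = 0).
random.seed(2)
beta, w0, dt, n = 2.0, 1.0, 1e-3, 400_000
xs = [random.gauss(0.0, math.sqrt(1.0 / (2 * beta * w0**2)))]
vs = [random.gauss(0.0, math.sqrt(1.0 / (2 * beta)))]
for _ in range(n):
    x, v = xs[-1], vs[-1]
    xs.append(x + v * dt)
    vs.append(v - beta * v * dt - w0**2 * x * dt + random.gauss(0.0, math.sqrt(dt)))

def fit(drifts):
    # Least squares for drift ~ c_x * x + c_v * v (2x2 normal equations).
    sxx = sum(x * x for x in xs[1:n]); svv = sum(v * v for v in vs[1:n])
    sxv = sum(x * v for x, v in zip(xs[1:n], vs[1:n]))
    sxd = sum(x * d for x, d in zip(xs[1:n], drifts))
    svd = sum(v * d for v, d in zip(vs[1:n], drifts))
    det = sxx * svv - sxv * sxv
    return ((svv * sxd - sxv * svd) / det, (sxx * svd - sxv * sxd) / det)

fwd = fit([(vs[i + 1] - vs[i]) / dt for i in range(1, n)])  # Dv  ~ -w0^2*x - beta*v
bwd = fit([(vs[i] - vs[i - 1]) / dt for i in range(1, n)])  # D*v ~ -w0^2*x + beta*v
a_x = (fwd[0] + bwd[0]) / 2  # x-coefficient of the mean acceleration
a_v = (fwd[1] + bwd[1]) / 2  # v-coefficient (should vanish)
print("mean acceleration ~ %.2f*x + %.2f*v (expect -w0^2 = -1 and 0)" % (a_x, a_v))
```

The friction terms ∓βv appear with opposite signs in the forward and backward fits and cancel in the average, leaving only the force term, which is the content of a(t) = K(x(t)).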

Reference

[28]. Richard P. Feynman, Robert B. Leighton, and Matthew Sands, "The
Feynman Lectures on Physics", Addison-Wesley, Reading, Massachusetts,
1963.

Chapter 13

Kinematics of Markovian

motion

At this point I shall cease making regularity assumptions explicit.
Whenever we take the derivative of a function, the function is assumed
to be differentiable. Whenever we take D of a stochastic process, it is
assumed to exist. Whenever we consider the probability density of a ran-
dom variable, it is assumed to exist. I do this not out of laziness but out
of ignorance. The problem of finding convenient regularity assumptions
for this discussion and later applications of it (Chapter 15) is a non-trivial
problem.

Consider a Markov process x on ℝℓ of the form

    dx(t) = b(x(t), t) dt + dw(t),

where w is a Wiener process on ℝℓ with diffusion coefficient ν (we write
ν instead of D to avoid confusion with mean forward derivatives). Here
b is a fixed smooth function on ℝℓ⁺¹. The w(t) − w(s) are independent of
the x(r) whenever r ≤ s and r ≤ t, so that

    Dx(t) = b(x(t), t).

A Markov process with time reversed is again a Markov process (see
Doob [15, §6, p. 83]), so we can define b∗ by