
(S3). The conditions (R3) and (S2) hold and det σ∗²(t) > 0 a.e. for a.e. t.

We obtain theorems analogous to the preceding ones. In particular, if
a ≤ b, a ∈ I, b ∈ I, then for an (S1) process

$$E\{x(b) - x(a) \mid \mathcal{F}_b\} = E\Bigl\{\int_a^b D_* x(s)\,ds \Bigm| \mathcal{F}_b\Bigr\}, \tag{11.11}$$
KINEMATICS OF STOCHASTIC MOTION 79

and for an (S2) process

$$E\{[y_*(b) - y_*(a)]^2 \mid \mathcal{F}_b\} = E\Bigl\{\int_a^b \sigma_*^2(s)\,ds \Bigm| \mathcal{F}_b\Bigr\}. \tag{11.12}$$

THEOREM 11.10 Let x be an (S1) process. Then

$$E\,Dx(t) = E\,D_* x(t) \tag{11.13}$$

for all t in I. Let x be an (S2) process. Then

$$E\,\sigma^2(t) = E\,\sigma_*^2(t) \tag{11.14}$$

for all t in I.

Proof. By Theorem 11.1 and (11.11), if we take absolute expectations
we find

$$E[x(b) - x(a)] = E\int_a^b Dx(s)\,ds = E\int_a^b D_* x(s)\,ds$$

for all a and b in I. Since s → Dx(s) and s → D∗x(s) are continuous
in L¹, (11.13) holds. Similarly, (11.14) follows from Theorem 11.4 and
(11.12). QED.

THEOREM 11.11 Let x be an (S1) process. Then x is a constant (i.e.,
x(t) is the same random variable for all t) if and only if Dx = D∗x = 0.

Proof. The only if part of the theorem is trivial. Suppose that Dx =
D∗x = 0. By Theorem 11.2, x is a martingale and a martingale with the
direction of time reversed. Let t₁ ≠ t₂, x₁ = x(t₁), x₂ = x(t₂). Then x₁
and x₂ are in L¹ and E{x₁ | x₂} = x₂, E{x₂ | x₁} = x₁. We wish to show
that x₁ = x₂ (a.e., of course).
If x₁ and x₂ are in L² (as they are if x is an (S2) process) there is a
trivial proof, as follows. We have

$$E\{(x_2 - x_1)^2 \mid x_1\} = E\{x_2^2 - 2x_2 x_1 + x_1^2 \mid x_1\} = E\{x_2^2 \mid x_1\} - x_1^2,$$

so that if we take absolute expectations we find

$$E(x_2 - x_1)^2 = E\,x_2^2 - E\,x_1^2.$$

The same result holds with x₁ and x₂ interchanged. Thus E(x₂ − x₁)² = 0,
x₂ = x₁ a.e.
G. A. Hunt showed me the following proof for the general case (x₁, x₂
in L¹).
Let μ be the distribution of (x₁, x₂) in the plane. We can take x₁ and
x₂ to be the coordinate functions. Then there is a conditional probability
distribution p(x₁, ·) such that if ν is the distribution of x₁ and f is a
positive Baire function on ℝ²,

$$\int f(x_1, x_2)\,d\mu(x_1, x_2) = \iint f(x_1, x_2)\,p(x_1, dx_2)\,d\nu(x_1).$$

(See Doob [15, §6, pp. 26–34].) Then

$$E\{\varphi(x_2) \mid x_1\} = \int \varphi(x_2)\,p(x_1, dx_2) \quad \text{a.e.}\ [\nu],$$

provided φ(x₂) is in L¹. Take φ to be strictly convex with |φ(ξ)| ≤ |ξ|
for all real ξ (so that φ(x₂) is in L¹). Then, for each x₁, since φ is strictly
convex, Jensen's inequality gives

$$\varphi\Bigl(\int x_2\,p(x_1, dx_2)\Bigr) < \int \varphi(x_2)\,p(x_1, dx_2)$$

unless x₂ = ∫ x₂ p(x₁, dx₂) a.e. [p(x₁, ·)]. But

$$\int x_2\,p(x_1, dx_2) = x_1 \quad \text{a.e.}\ [\nu],$$

so, unless x₂ = x₁ a.e. [ν],

$$\varphi(x_1) < \int \varphi(x_2)\,p(x_1, dx_2).$$

If we take absolute expectations, we find Eφ(x₁) < Eφ(x₂) unless x₂ = x₁
a.e. The same argument gives the reverse inequality, so x₂ = x₁ a.e.
QED.
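The proof needs some strictly convex φ with |φ(ξ)| ≤ |ξ|, but does not exhibit one. A concrete choice (our assumption, purely for illustration) is φ(ξ) = √(1 + ξ²) − 1; the sketch below checks numerically that it has both required properties and that the strict Jensen inequality holds for a non-degenerate two-point law, mirroring the inequality in the proof.

```python
import numpy as np

# Hypothetical concrete choice of the strictly convex phi in the proof;
# the text only requires |phi(xi)| <= |xi| and strict convexity, and
# phi(xi) = sqrt(1 + xi^2) - 1 satisfies both.
def phi(xi):
    return np.sqrt(1.0 + xi**2) - 1.0

xi = np.linspace(-10.0, 10.0, 2001)

# |phi(xi)| <= |xi|, so phi(x2) is integrable whenever x2 is.
bound_ok = np.all(np.abs(phi(xi)) <= np.abs(xi))

# Strict midpoint convexity at distinct grid points.
u, v = xi[:-1], xi[1:]
convex_ok = np.all(phi((u + v) / 2.0) < (phi(u) + phi(v)) / 2.0)

# Strict Jensen for the non-degenerate law (1/2)(delta_{-1} + delta_3):
# phi(mean) < mean of phi, so the gap below is strictly positive.
mean = 0.5 * (-1.0) + 0.5 * 3.0
jensen_gap = 0.5 * phi(-1.0) + 0.5 * phi(3.0) - phi(mean)

print(bound_ok, convex_ok, jensen_gap)
```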

THEOREM 11.12 Let x and y be (S1) processes with respect to the
same families of σ-algebras Pt and Ft, and suppose that x(t), y(t), Dx(t),
Dy(t), D∗x(t), and D∗y(t) all lie in L² and are continuous functions of
t in L². Then

$$\frac{d}{dt}\,E\,x(t)y(t) = E\,Dx(t)\cdot y(t) + E\,x(t)\,D_* y(t).$$

Proof. We need to show, for a and b in I, that

$$E[x(b)y(b) - x(a)y(a)] = E\int_a^b [Dx(t)\cdot y(t) + x(t)\,D_* y(t)]\,dt.$$

(Notice that the integrand is continuous.) Divide [a, b] into n equal parts:
tⱼ = a + j(b − a)/n for j = 0, …, n. Then

$$E[x(b)y(b) - x(a)y(a)] = \lim_{n\to\infty} \sum_{j=1}^{n-1} E[x(t_{j+1})y(t_j) - x(t_j)y(t_{j-1})]$$

$$= \lim_{n\to\infty} \sum_{j=1}^{n-1} E\Bigl\{[x(t_{j+1}) - x(t_j)]\,\frac{y(t_j) + y(t_{j-1})}{2} + \frac{x(t_{j+1}) + x(t_j)}{2}\,[y(t_j) - y(t_{j-1})]\Bigr\}$$

$$= \lim_{n\to\infty} \sum_{j=1}^{n-1} E[Dx(t_j)\cdot y(t_j) + x(t_j)\,D_* y(t_j)]\,\frac{b-a}{n}$$

$$= E\int_a^b [Dx(t)\cdot y(t) + x(t)\,D_* y(t)]\,dt.$$

QED.

Now let us assume that the past Pt and the future Ft are condi-
tionally independent given the present Pt ∩ Ft. That is, if f is any
Ft-measurable function in L¹ then E{f | Pt} = E{f | Pt ∩ Ft}, and if f
is any Pt-measurable function in L¹ then E{f | Ft} = E{f | Pt ∩ Ft}.
If x is a Markov process and Pt is generated by the x(s) with s ≤ t, and
Ft by the x(s) with s ≥ t, this is certainly the case. However, the as-
sumption is much weaker. It applies, for example, to the position x(t) of
the Ornstein-Uhlenbeck process. The reason is that the present Pt ∩ Ft
may not be generated by x(t); for example, in the Ornstein-Uhlenbeck
case v(t) = dx(t)/dt is also Pt ∩ Ft-measurable.
With the above assumption on the Pt and Ft, if x is an (S1) process
then Dx(t) and D∗x(t) are Pt ∩ Ft-measurable, and we can form DD∗x(t)
and D∗Dx(t) if they exist. Assuming they exist, we define

$$a(t) = \tfrac{1}{2}\,D D_* x(t) + \tfrac{1}{2}\,D_* D x(t) \tag{11.15}$$

and call it the mean second derivative or mean acceleration.

If x is a sufficiently smooth function of t then a(t) = d²x(t)/dt². This
is also true of other possible candidates for the title of mean acceleration,
such as DD∗x(t), D∗Dx(t), DDx(t), D∗D∗x(t), and ½DDx(t) + ½D∗D∗x(t).
Of these the first four distinguish between the two choices of direction for
the time axis, and so can be discarded. To discuss the fifth possibility,
consider the Gaussian Markov process x(t) satisfying

$$dx(t) = -\omega x(t)\,dt + dw(t),$$

where w is a Wiener process, in equilibrium (that is, with the invariant
Gaussian measure as initial measure). Then

$$Dx(t) = -\omega x(t), \qquad D_* x(t) = \omega x(t), \qquad a(t) = -\omega^2 x(t),$$

but

$$\tfrac{1}{2}\,DDx(t) + \tfrac{1}{2}\,D_* D_* x(t) = \omega^2 x(t).$$
2 2
This process is familiar to us: it is the position in the Smoluchowski de-
scription of the highly overdamped harmonic oscillator (or the velocity
of a free particle in the Ornstein-Uhlenbeck theory). The characteristic
feature of this process is its constant tendency to go towards the origin,
no matter which direction of time is taken. Our definition of mean ac-
celeration, which gives a(t) = −ω²x(t), is kinematically the appropriate
definition.
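These sign relations are easy to check numerically. The sketch below (the parameters, the regression estimator, and the normalization E[(w(t) − w(s))²] = |t − s| are our own choices, not from the text) simulates the stationary process dx = −ωx dt + dw exactly as a Gaussian AR(1) chain and estimates Dx and D∗x by regressing forward and backward increments on the present position.

```python
import numpy as np

# Monte Carlo check that Dx(t) = -omega*x(t) and D*x(t) = +omega*x(t)
# for the stationary process dx = -omega*x dt + dw (here w is taken with
# E[(w(t)-w(s))^2] = |t-s|; illustrative normalization).  Both drifts are
# linear in x, so they can be estimated by regressing increments on x.
rng = np.random.default_rng(0)
omega, h, n = 1.0, 1e-2, 500_000

# Exact AR(1) transition of the OU process, started in its invariant law.
a = np.exp(-omega * h)
var_inf = 1.0 / (2.0 * omega)            # invariant variance
s = np.sqrt(var_inf * (1.0 - a * a))     # one-step conditional std
x = np.empty(n + 1)
x[0] = rng.normal(0.0, np.sqrt(var_inf))
noise = rng.normal(0.0, s, size=n)
for k in range(n):
    x[k + 1] = a * x[k] + noise[k]

dx = x[1:] - x[:-1]

# Dx: slope of E{x(t+h) - x(t) | x(t)} in x(t), divided by h.
D_est = np.dot(x[:-1], dx) / np.dot(x[:-1], x[:-1]) / h
# D*x: slope of E{x(t) - x(t-h) | x(t)} in x(t), divided by h.
Dstar_est = np.dot(x[1:], dx) / np.dot(x[1:], x[1:]) / h

print(D_est, Dstar_est)   # close to -omega and +omega
```

The forward slope comes out near −ω and the backward slope near +ω, so their average (the mean first derivative) nearly vanishes while ½(DD∗ + D∗D)x = −ω²x, consistent with the discussion above.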

Reference

The stochastic integral was invented by Itô:

Kiyosi Itô, "On Stochastic Differential Equations", Memoirs of the
American Mathematical Society, Number 4 (1951).

Doob gave a treatment based on martingales [15, §6, pp. 436–451].
Our discussion of stochastic integrals, as well as most of the other material
of this section, is based on Doob's book.
Chapter 12

Dynamics of stochastic motion

The fundamental law of non-relativistic dynamics is Newton's law
F = ma: the force on a particle is the product of the particle's mass
and the acceleration of the particle. This law is, of course, nothing but
the definition of force. Most definitions are trivial; others are profound.
Feynman has analyzed the characteristics that make Newton's defi-
nition profound:

"It implies that if we study the mass times the acceleration and call
the product the force, i.e., if we study the characteristics of force as a
program of interest, then we shall find that forces have some simplicity;
the law is a good program for analyzing nature, it is a suggestion that
the forces will be simple."
Now suppose that x is a stochastic process representing the motion
of a particle of mass m. Leaving unanalyzed the dynamical mechanism
causing the random fluctuations, we can ask how to express the fact that
there is an external force F acting on the particle. We do this simply by
setting

$$F = ma,$$
where a is the mean acceleration (Chapter 11).
For example, suppose that x is the position in the Ornstein-Uhlenbeck
theory of Brownian motion, and suppose that the external force is F =
− grad V, where exp(−V/mβD) is integrable. In equilibrium, the particle
has probability density a normalization constant times exp(−V/mβD)
and satisfies

$$dx(t) = v(t)\,dt,$$
$$dv(t) = -\beta v(t)\,dt + K(x(t))\,dt + dB(t),$$


where K = F/m = −(grad V)/m, and B has variance parameter 2β²D.
Then

$$Dx(t) = D_* x(t) = v(t),$$
$$Dv(t) = -\beta v(t) + K(x(t)),$$
$$D_* v(t) = \beta v(t) + K(x(t)),$$
$$a(t) = K(x(t)).$$

Therefore the law F = ma holds.
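As a numerical illustration (a sketch under our own assumptions: the harmonic choice K(x) = −kx, the parameter values, and the least-squares estimator are not from the text), one can simulate the equilibrium system and recover the forward and backward drifts of v by regression; their average is the mean acceleration and matches K(x), while the ±βv parts cancel.

```python
import numpy as np

# Estimate Dv and D*v for the equilibrium system
#   dx = v dt,  dv = -beta v dt + K(x) dt + dB,  B with variance parameter
#   2 beta^2 D, and the illustrative harmonic force K(x) = -k x.
# In equilibrium x and v are independent Gaussians with
# Var(v) = beta*D and Var(x) = beta*D/k.
rng = np.random.default_rng(1)
beta, k, Dco = 1.0, 1.0, 0.5          # Dco plays the role of D
h, n = 1e-2, 400_000
sB = np.sqrt(2.0 * beta**2 * Dco * h)

x = np.empty(n + 1); v = np.empty(n + 1)
x[0] = rng.normal(0.0, np.sqrt(beta * Dco / k))
v[0] = rng.normal(0.0, np.sqrt(beta * Dco))
noise = rng.normal(0.0, sB, size=n)
for i in range(n):                     # Euler scheme
    x[i + 1] = x[i] + v[i] * h
    v[i + 1] = v[i] + (-beta * v[i] - k * x[i]) * h + noise[i]

dv = v[1:] - v[:-1]
# Forward drift: regress v(t+h)-v(t) on (x(t), v(t)); expect (-k, -beta).
cf = np.linalg.lstsq(np.column_stack([x[:-1], v[:-1]]), dv, rcond=None)[0] / h
# Backward drift: regress v(t)-v(t-h) on (x(t), v(t)); expect (-k, +beta).
cb = np.linalg.lstsq(np.column_stack([x[1:], v[1:]]), dv, rcond=None)[0] / h

accel = 0.5 * (cf + cb)   # mean-acceleration coefficients, ~ (-k, 0)
print(cf, cb, accel)
```

The fitted coefficients of v cancel in the average, leaving the coefficient of x near −k, i.e. a(t) ≈ K(x(t)), so F = ma holds in the mean-acceleration sense.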

Reference

Richard P. Feynman, Robert B. Leighton, and Matthew Sands, "The
Feynman Lectures on Physics", Addison-Wesley, Reading, Massachusetts,
1963.
Chapter 13

Kinematics of Markovian
motion

At this point I shall cease making regularity assumptions explicit.
Whenever we take the derivative of a function, the function is assumed
to be differentiable. Whenever we take D of a stochastic process, it is
assumed to exist. Whenever we consider the probability density of a ran-
dom variable, it is assumed to exist. I do this not out of laziness but out
of ignorance. The problem of finding convenient regularity assumptions
for this discussion and later applications of it (Chapter 15) is a non-trivial
problem.
Consider a Markov process x on ℝ^ℓ of the form

$$dx(t) = b(x(t), t)\,dt + dw(t),$$

where w is a Wiener process on ℝ^ℓ with diffusion coefficient ν (we write
ν instead of D to avoid confusion with mean forward derivatives). Here
b is a fixed smooth function on ℝ^{ℓ+1}. The w(t) − w(s) are independent of
the x(r) whenever r ≤ s and r ≤ t, so that

$$Dx(t) = b(x(t), t).$$
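In simulation terms (a sketch under our own conventions; the function name and interface are ours), such a process can be discretized by the Euler scheme, drawing each Wiener increment with variance 2νh per component to match the convention E[(w(t) − w(s))²] = 2ν(t − s) for diffusion coefficient ν:

```python
import numpy as np

# Euler scheme for dx = b(x,t) dt + dw, where w is a Wiener process with
# diffusion coefficient nu, i.e. E[(w(t)-w(s))^2] = 2*nu*(t-s) per
# component.  A sketch; the interface is an illustrative choice.
def euler_paths(b, x0, nu, h, n, rng):
    """Advance an array of independent scalar paths n steps of size h."""
    x = np.array(x0, dtype=float)
    t = 0.0
    for _ in range(n):
        x = x + b(x, t) * h + rng.normal(0.0, np.sqrt(2.0 * nu * h), size=x.shape)
        t += h
    return x

# Example: b = 0 reduces to the Wiener process itself, so after time
# T = n*h the displacement variance should be close to 2*nu*T.
rng = np.random.default_rng(2)
nu, h, n = 0.25, 0.01, 100
xT = euler_paths(lambda x, t: 0.0 * x, np.zeros(100_000), nu, h, n, rng)
print(xT.var())   # ~ 2 * 0.25 * 1.0 = 0.5
```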
A Markov process with time reversed is again a Markov process (see
Doob [15, §6, p. 83]), so we can define b∗ by