
but this is not precise because $W_t$ is a continuous random variable and both sides of the
above equation are zero; (14.4) is the rigorous version of the reflection principle.

Now let $W_t$ be a Brownian motion under $P$. Let $dQ/dP = M_t = e^{\mu W_t - \mu^2 t/2}$. Let
$Y_t = W_t - \mu t$. Theorem 14.1 says that under $Q$, $Y_t$ is a Brownian motion. We have
$W_t = Y_t + \mu t$.

Let $A = (\sup_{s \le t_0} W_s \ge a)$. We want to calculate

$$P(\sup_{s \le t_0} (W_s + \mu s) \ge a).$$

$W_t$ is a Brownian motion under $P$ while $Y_t$ is a Brownian motion under $Q$. So this probability is equal to

$$Q(\sup_{s \le t_0} (Y_s + \mu s) \ge a).$$

This in turn is equal to

$$Q(\sup_{s \le t_0} W_s \ge a) = Q(A).$$

Now we use the expression for $M_t$:

$$Q(A) = E_P\big[e^{\mu W_{t_0} - \mu^2 t_0/2}; A\big]
= \int_{-\infty}^{\infty} e^{\mu x - \mu^2 t_0/2}\, P(\sup_{s \le t_0} W_s \ge a,\ W_{t_0} = x)\, dx$$
$$= e^{-\mu^2 t_0/2} \Big[ \int_{-\infty}^{a} e^{\mu x} \frac{1}{\sqrt{2\pi t_0}}\, e^{-(2a-x)^2/2t_0}\, dx
+ \int_{a}^{\infty} e^{\mu x} \frac{1}{\sqrt{2\pi t_0}}\, e^{-x^2/2t_0}\, dx \Big].$$
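As a numerical illustration (a sketch, not part of the notes; the parameter values are arbitrary), completing the square in each of the two integrals above gives the closed form $P(\sup_{s\le t_0}(W_s+\mu s)\ge a) = \Phi\big((\mu t_0 - a)/\sqrt{t_0}\big) + e^{2\mu a}\,\Phi\big((-a-\mu t_0)/\sqrt{t_0}\big)$, where $\Phi$ is the standard normal distribution function, and this can be checked by simulating drifted Brownian paths:

```python
import math, random

# Monte Carlo check of the boundary-crossing probability obtained by
# completing the square in the two reflection-principle integrals:
#   P(sup_{s<=t0} (W_s + mu*s) >= a)
#     = Phi((mu*t0 - a)/sqrt(t0)) + exp(2*mu*a) * Phi((-a - mu*t0)/sqrt(t0)).
def Phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, t0, a = 0.5, 1.0, 1.0
exact = (Phi((mu * t0 - a) / math.sqrt(t0))
         + math.exp(2 * mu * a) * Phi((-a - mu * t0) / math.sqrt(t0)))

# Simulate drifted Brownian paths on a grid and record first crossings of a.
# (Checking only grid times slightly underestimates the true probability.)
random.seed(2)
steps, paths = 1000, 4000
h = t0 / steps
hits = 0
for _ in range(paths):
    w = 0.0
    for k in range(1, steps + 1):
        w += random.gauss(0.0, math.sqrt(h))
        if w + mu * (k * h) >= a:
            hits += 1
            break
estimate = hits / paths
print(exact, estimate)  # the two agree to within Monte Carlo error
```

The discretization makes the simulated probability a slight underestimate, since a path can cross level $a$ between grid points and come back.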

Proof of Theorem 14.1. Using Ito's formula with $f(x) = e^x$,

$$M_t = 1 - \int_0^t \mu(X_r) M_r\, dW_r.$$

So

$$\langle W, M \rangle_t = -\int_0^t \mu(X_r) M_r\, dr.$$

Since $Q(A) = E_P[M_t; A]$, it is not hard to see that

$$E_Q[W_t; A] = E_P[M_t W_t; A].$$

By Ito's product formula this is

$$E_P\Big[\int_0^t M_r\, dW_r; A\Big] + E_P\Big[\int_0^t W_r\, dM_r; A\Big] + E_P\big[\langle W, M \rangle_t; A\big].$$

Since $\int_0^t M_r\, dW_r$ and $\int_0^t W_r\, dM_r$ are stochastic integrals with respect to martingales,
they are themselves martingales. Thus, since $A \in \mathcal{F}_s$, the above is equal to

$$E_P\Big[\int_0^s M_r\, dW_r; A\Big] + E_P\Big[\int_0^s W_r\, dM_r; A\Big] + E_P\big[\langle W, M \rangle_t; A\big].$$

Using the product formula again, this is

$$E_P[M_s W_s; A] + E_P\big[\langle W, M \rangle_t - \langle W, M \rangle_s; A\big]
= E_Q[W_s; A] + E_P\big[\langle W, M \rangle_t - \langle W, M \rangle_s; A\big].$$

The last term on the right is equal to

$$E_P\Big[\int_s^t d\langle W, M \rangle_r; A\Big]
= E_P\Big[-\int_s^t M_r \mu(X_r)\, dr; A\Big]
= E_P\Big[-\int_s^t E_P[M_t \mid \mathcal{F}_r]\, \mu(X_r)\, dr; A\Big]$$
$$= E_P\Big[-\int_s^t M_t \mu(X_r)\, dr; A\Big]
= E_Q\Big[-\int_s^t \mu(X_r)\, dr; A\Big]$$
$$= -E_Q\Big[\int_0^t \mu(X_r)\, dr; A\Big] + E_Q\Big[\int_0^s \mu(X_r)\, dr; A\Big].$$

Therefore

$$E_Q\Big[W_t + \int_0^t \mu(X_r)\, dr; A\Big] = E_Q\Big[W_s + \int_0^s \mu(X_r)\, dr; A\Big],$$

which shows $X_t$ is a martingale with respect to $Q$.
Similarly, $X_t^2 - t$ is a martingale with respect to $Q$. By Lévy's theorem, $X_t$ is a
Brownian motion.

In Note 3 we give a proof of Theorem 14.2 and in Note 4 we show how Theorem
14.1 is really a special case of Theorem 14.2.

Note 1. Let

$$Y_t = -\int_0^t \mu(X_s)\, dW_s - \tfrac12 \int_0^t [\mu(X_s)]^2\, ds.$$

We apply Ito's formula with the function $f(x) = e^x$. Note the martingale part of $Y_t$ is the
stochastic integral term and the quadratic variation of $Y$ is the quadratic variation of the
martingale part, so

$$\langle Y \rangle_t = \int_0^t [-\mu(X_s)]^2\, ds.$$

Then $f'(x) = e^x$, $f''(x) = e^x$, and hence

$$M_t = e^{Y_t} = e^{Y_0} + \int_0^t e^{Y_s}\, dY_s + \tfrac12 \int_0^t e^{Y_s}\, d\langle Y \rangle_s$$
$$= 1 + \int_0^t M_s \big({-\mu(X_s)\, dW_s} - \tfrac12 [\mu(X_s)]^2\, ds\big)
+ \tfrac12 \int_0^t M_s [-\mu(X_s)]^2\, ds$$
$$= 1 - \int_0^t M_s \mu(X_s)\, dW_s.$$

Since stochastic integrals with respect to a Brownian motion are martingales, this completes
the argument that Mt is a martingale.
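The martingale property can be checked numerically in the simplest case (a sketch, not from the notes): for a constant drift $\mu$, $M_t = e^{-\mu W_t - \mu^2 t/2}$ and $E[M_t]$ should equal $M_0 = 1$, which a direct Monte Carlo average confirms.

```python
import math, random

# Monte Carlo check that E[M_t] = 1 for the exponential martingale of Note 1
# with constant drift mu, i.e. M_t = exp(-mu*W_t - mu^2*t/2).
# W_t ~ N(0, t), so it can be sampled directly without simulating a path.
random.seed(0)
mu, t, n = 0.5, 1.0, 200_000
total = 0.0
for _ in range(n):
    w = random.gauss(0.0, math.sqrt(t))
    total += math.exp(-mu * w - mu * mu * t / 2)
estimate = total / n
print(estimate)  # close to 1
```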
Note 2. Let $S_n$ be a simple random walk. This means that $X_1, X_2, \ldots$ are independent
and identically distributed random variables with $P(X_i = 1) = P(X_i = -1) = \frac12$; let
$S_n = \sum_{i=1}^n X_i$. If you are playing a game where you toss a fair coin and win \$1 if it comes up heads
and lose \$1 if it comes up tails, then $S_n$ will be your fortune at time $n$. Let $A_n = \max_{0 \le k \le n} S_k$.
We will show the analogue of (14.4) for $S_n$, which is

$$P(S_n = x, A_n \ge a) = \begin{cases} P(S_n = x) & x \ge a \\ P(S_n = 2a - x) & x < a. \end{cases} \tag{14.5}$$
(14.4) can be derived from this using a weak convergence argument.

To establish (14.5), note that if $x \ge a$ and $S_n = x$, then automatically $A_n \ge a$, so
the only case to consider is when $x < a$. Any path that crosses $a$ but is at level $x$ at time $n$
has a corresponding path determined by reflecting across level $a$ at the first time the walk
hits $a$; the reflected path will end up at $a + (a - x) = 2a - x$. The probability on the
left hand side of (14.5) is the number of paths that hit $a$ and end up at $x$ divided by the total
number of paths. Since the number of paths that hit $a$ and end up at $x$ is equal to the number
of paths that end up at $2a - x$, the probability on the left is equal to the number of paths
that end up at $2a - x$ divided by the total number of paths; this is $P(S_n = 2a - x)$, which is
the right hand side.
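Because the walk has only finitely many paths, the reflection identity (14.5) can be checked exhaustively for a small $n$ (an illustration, not part of the notes; the values $n = 10$, $a = 3$ are arbitrary):

```python
from itertools import product

# Exhaustive check of (14.5): enumerate all 2^n sign sequences and, for each
# x < a, compare the number of paths with S_n = x whose running maximum
# reaches a against the number of paths ending at 2a - x.  The reflection
# bijection described above says the two counts are equal.
n, a = 10, 3
paths = list(product((1, -1), repeat=n))
for x in range(-n, a):
    hit_and_end_x = 0      # paths with S_n = x that hit level a
    end_reflected = 0      # paths with S_n = 2a - x
    for signs in paths:
        s, path_max = 0, 0
        for step in signs:
            s += step
            if s > path_max:
                path_max = s
        if s == x and path_max >= a:
            hit_and_end_x += 1
        if s == 2 * a - x:
            end_reflected += 1
    assert hit_and_end_x == end_reflected
print("reflection identity verified for n =", n, "a =", a)
```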
Note 3. To prove Theorem 14.2, we proceed as follows. Assume without loss of generality
that $X_0 = 0$. Then if $A \in \mathcal{F}_s$,

$$E_Q[X_t; A] = E_P[M_t X_t; A]
= E_P\Big[\int_0^t M_r\, dX_r; A\Big] + E_P\Big[\int_0^t X_r\, dM_r; A\Big] + E_P\big[\langle X, M \rangle_t; A\big]$$
$$= E_P\Big[\int_0^s M_r\, dX_r; A\Big] + E_P\Big[\int_0^s X_r\, dM_r; A\Big] + E_P\big[\langle X, M \rangle_t; A\big]$$
$$= E_Q[X_s; A] + E_P\big[\langle X, M \rangle_t - \langle X, M \rangle_s; A\big].$$

Here we used the fact that stochastic integrals with respect to the martingales $X$ and $M$ are
again martingales.

On the other hand,

$$E_P\big[\langle X, M \rangle_t - \langle X, M \rangle_s; A\big]
= E_P\Big[\int_s^t d\langle X, M \rangle_r; A\Big]
= E_P\Big[\int_s^t M_r\, dD_r; A\Big]$$
$$= E_P\Big[\int_s^t E_P[M_t \mid \mathcal{F}_r]\, dD_r; A\Big]
= E_P\Big[\int_s^t M_t\, dD_r; A\Big]$$
$$= E_P\big[(D_t - D_s) M_t; A\big] = E_Q[D_t - D_s; A].$$

The proof of the quadratic variation assertion is similar.

Note 4. Here is an argument showing how Theorem 14.1 can also be derived from Theorem 14.2.

From our formula for $M$ we have $dM_t = -M_t \mu(X_t)\, dW_t$, and therefore $d\langle X, M \rangle_t =
-M_t \mu(X_t)\, dt$. Hence by Theorem 14.2 we see that under $Q$, $X_t$ is a continuous martingale
with $\langle X \rangle_t = t$. By Lévy's theorem, this means that $X$ is a Brownian motion under $Q$.

15. Stochastic differential equations.
Let $W_t$ be a Brownian motion. We are interested in the existence and uniqueness
for stochastic differential equations (SDEs) of the form

$$dX_t = \sigma(X_t)\, dW_t + b(X_t)\, dt, \qquad X_0 = x_0. \tag{15.1}$$

This means $X_t$ satisfies

$$X_t = x_0 + \int_0^t \sigma(X_s)\, dW_s + \int_0^t b(X_s)\, ds. \tag{15.2}$$

Here $W_t$ is a Brownian motion, and (15.2) holds for almost every $\omega$.
We have to make some assumptions on $\sigma$ and $b$. We assume they are Lipschitz,
which means:

$$|\sigma(x) - \sigma(y)| \le c|x - y|, \qquad |b(x) - b(y)| \le c|x - y|$$

for some constant $c$. We also suppose that $\sigma$ and $b$ grow at most linearly, which means:

$$|\sigma(x)| \le c(1 + |x|), \qquad |b(x)| \le c(1 + |x|).$$

Theorem 15.1. There exists one and only one solution to (15.2).
The idea of the proof is Picard iteration, which is how existence and uniqueness for
ordinary differential equations is proved; see Note 1.
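The flavor of Picard iteration is easiest to see in the deterministic setting (a sketch, not from the notes): for the ODE $x'(t) = x(t)$, $x(0) = 1$, one repeatedly applies the map $x_{n+1}(t) = 1 + \int_0^t x_n(s)\, ds$, and the iterates converge to the solution $e^t$.

```python
import math

# Picard iteration for x'(t) = x(t), x(0) = 1 on [0, 1], discretized on a
# uniform grid with the trapezoid rule for the integral.  Starting from the
# constant guess x_0(t) = 1, the iterates converge rapidly to e^t.
# (Grid size and iteration count here are arbitrary illustrative choices.)
N = 1000                      # grid points on [0, 1]
dt = 1.0 / N
x = [1.0] * (N + 1)           # initial guess x_0(t) = 1
for _ in range(30):           # 30 Picard iterations
    integral, new_x = 0.0, [1.0]
    for k in range(N):
        integral += 0.5 * (x[k] + x[k + 1]) * dt   # trapezoid rule
        new_x.append(1.0 + integral)
    x = new_x
print(x[-1])  # approximately e = 2.71828...
```

The $n$-th iterate is the degree-$n$ Taylor polynomial of $e^t$ (up to discretization error), which is why convergence is so fast.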
The intuition behind (15.1) is that $X_t$ behaves locally like a multiple of Brownian
motion plus a constant drift: locally $X_{t+h} - X_t \approx \sigma(W_{t+h} - W_t) + b((t + h) - t)$. However
the constants $\sigma$ and $b$ depend on the current value of $X_t$. When $X_t$ is at different points,
the coefficients vary, which is why they are written $\sigma(X_t)$ and $b(X_t)$. $\sigma$ is sometimes called
the diffusion coefficient and $b$ is sometimes called the drift coefficient.
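This local picture is precisely the Euler scheme for simulating (15.1). As a sketch (not part of the notes; the coefficients are an illustrative choice), take $\sigma(x) = 0.2x$ and $b(x) = 0.1x$, i.e. geometric Brownian motion, for which $E[X_1] = x_0 e^{0.1}$ is known exactly and can be checked by Monte Carlo:

```python
import math, random

# Euler scheme for dX = sigma(X) dW + b(X) dt, using the local approximation
#   X_{t+h} - X_t ~ sigma(X_t) * (W_{t+h} - W_t) + b(X_t) * h.
# Coefficients chosen so the answer is known: for sigma(x) = 0.2x and
# b(x) = 0.1x (geometric Brownian motion), E[X_1] = x0 * exp(0.1).
def sigma(x): return 0.2 * x
def b(x): return 0.1 * x

random.seed(1)
x0, T, steps, paths = 1.0, 1.0, 200, 5000
h = T / steps
total = 0.0
for _ in range(paths):
    x = x0
    for _ in range(steps):
        dw = random.gauss(0.0, math.sqrt(h))   # Brownian increment over h
        x += sigma(x) * dw + b(x) * h
    total += x
print(total / paths)  # close to exp(0.1) ~ 1.105
```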
The above theorem also works in higher dimensions. We want to solve

$$dX_t^i = \sum_{j=1}^d \sigma_{ij}(X_t)\, dW_t^j + b_i(X_t)\, dt, \qquad i = 1, \ldots, d.$$

This is an abbreviation for the equation

$$X_t^i = x_0^i + \sum_{j=1}^d \int_0^t \sigma_{ij}(X_s)\, dW_s^j + \int_0^t b_i(X_s)\, ds.$$

Here the initial value is $x_0 = (x_0^1, \ldots, x_0^d)$, the solution process is $X_t = (X_t^1, \ldots, X_t^d)$, and
$W_t^1, \ldots, W_t^d$ are $d$ independent Brownian motions. If all of the $\sigma_{ij}$ and $b_i$ are Lipschitz
and grow at most linearly, we have existence and uniqueness for the solution.

Suppose one wants to solve