$$P\Big(\sup_{s\le t_0} W_s \ge a,\ W_{t_0} = x\Big) = P(W_{t_0} = 2a - x),$$


but this is not precise because $W_t$ is a continuous random variable and both sides of the
above equation are zero; (14.4) is the rigorous version of the reflection principle.
Now let $W_t$ be a Brownian motion under $P$. Let $dQ/dP = M_t = e^{\mu W_t - \mu^2 t/2}$. Let
$Y_t = W_t - \mu t$. Theorem 14.1 says that under $Q$, $Y_t$ is a Brownian motion. We have
$W_t = Y_t + \mu t$.
Let $A = (\sup_{s\le t_0} W_s \ge a)$. We want to calculate

$$P\Big(\sup_{s\le t_0} (W_s + \mu s) \ge a\Big).$$


$W_t$ is a Brownian motion under $P$ while $Y_t$ is a Brownian motion under $Q$. So this probability is equal to

$$Q\Big(\sup_{s\le t_0} (Y_s + \mu s) \ge a\Big).$$

This in turn is equal to

$$Q\Big(\sup_{s\le t_0} W_s \ge a\Big) = Q(A).$$

Now we use the expression for $M_t$:

$$Q(A) = E_P\big[e^{\mu W_{t_0} - \mu^2 t_0/2}; A\big]
= \int_{-\infty}^{\infty} e^{\mu x - \mu^2 t_0/2}\, P\Big(\sup_{s\le t_0} W_s \ge a,\ W_{t_0} = x\Big)\, dx$$
$$= e^{-\mu^2 t_0/2}\Big[\int_{-\infty}^{a} e^{\mu x}\, \frac{1}{\sqrt{2\pi t_0}}\, e^{-(2a-x)^2/2t_0}\, dx
+ \int_{a}^{\infty} e^{\mu x}\, \frac{1}{\sqrt{2\pi t_0}}\, e^{-x^2/2t_0}\, dx\Big].$$
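Since everything in the last display is explicit, the formula can be checked numerically. The following is a minimal sketch (not part of the notes; the parameter values are arbitrary): it evaluates the two integrals with `scipy.integrate.quad` and compares against a Monte Carlo estimate of $P(\sup_{s\le t_0}(W_s + \mu s) \ge a)$ from discretized paths. The discrete running maximum slightly undershoots the continuous one, so the simulated value should sit a little below the formula.

```python
import numpy as np
from scipy.integrate import quad

# Check of the two-integral formula for Q(A) = P(sup_{s<=t0}(W_s + mu*s) >= a).
mu, a, t0 = 0.5, 1.0, 1.0

def phi(y):
    """Density of N(0, t0)."""
    return np.exp(-y * y / (2 * t0)) / np.sqrt(2 * np.pi * t0)

i1, _ = quad(lambda x: np.exp(mu * x) * phi(2 * a - x), -np.inf, a)
i2, _ = quad(lambda x: np.exp(mu * x) * phi(x), a, np.inf)
formula = np.exp(-mu**2 * t0 / 2) * (i1 + i2)

# Monte Carlo estimate: walk each path on a fine grid, track running maxima.
rng = np.random.default_rng(0)
n_paths, n_steps = 100_000, 1_000
dt = t0 / n_steps
pos = np.zeros(n_paths)
running_max = np.zeros(n_paths)
for _ in range(n_steps):
    pos += mu * dt + rng.normal(0.0, np.sqrt(dt), n_paths)
    np.maximum(running_max, pos, out=running_max)

print(f"formula: {formula:.4f}   Monte Carlo: {np.mean(running_max >= a):.4f}")
```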


Proof of Theorem 14.1. Using Itô's formula with $f(x) = e^x$,

$$M_t = 1 - \int_0^t \mu(X_r) M_r\, dW_r.$$

So

$$\langle W, M\rangle_t = -\int_0^t \mu(X_r) M_r\, dr.$$

Let $A \in \mathcal{F}_s$, where $s \le t$. Since $Q(A) = E_P[M_t; A]$, it is not hard to see that

$$E_Q[W_t; A] = E_P[M_t W_t; A].$$

By Itô's product formula this is

$$E_P\Big[\int_0^t M_r\, dW_r; A\Big] + E_P\Big[\int_0^t W_r\, dM_r; A\Big] + E_P\big[\langle W, M\rangle_t; A\big].$$

Since $\int_0^t M_r\, dW_r$ and $\int_0^t W_r\, dM_r$ are stochastic integrals with respect to martingales, they
are themselves martingales. Thus the above is equal to

$$E_P\Big[\int_0^s M_r\, dW_r; A\Big] + E_P\Big[\int_0^s W_r\, dM_r; A\Big] + E_P\big[\langle W, M\rangle_t; A\big].$$

Using the product formula again, this is

$$E_P[M_s W_s; A] + E_P\big[\langle W, M\rangle_t - \langle W, M\rangle_s; A\big]
= E_Q[W_s; A] + E_P\big[\langle W, M\rangle_t - \langle W, M\rangle_s; A\big].$$


The last term on the right is equal to

$$E_P\Big[\int_s^t d\langle W, M\rangle_r; A\Big] = E_P\Big[-\int_s^t M_r\, \mu(X_r)\, dr; A\Big]
= E_P\Big[-\int_s^t E_P[M_t \mid \mathcal{F}_r]\, \mu(X_r)\, dr; A\Big]$$
$$= E_P\Big[-\int_s^t M_t\, \mu(X_r)\, dr; A\Big] = E_Q\Big[-\int_s^t \mu(X_r)\, dr; A\Big]$$
$$= -E_Q\Big[\int_0^t \mu(X_r)\, dr; A\Big] + E_Q\Big[\int_0^s \mu(X_r)\, dr; A\Big].$$

Therefore

$$E_Q\Big[W_t + \int_0^t \mu(X_r)\, dr; A\Big] = E_Q\Big[W_s + \int_0^s \mu(X_r)\, dr; A\Big],$$

which shows that $X_t$ is a martingale with respect to $Q$.
Similarly, $X_t^2 - t$ is a martingale with respect to $Q$. By Lévy's theorem, $X_t$ is a
Brownian motion.

In Note 3 we give a proof of Theorem 14.2 and in Note 4 we show how Theorem
14.1 is really a special case of Theorem 14.2.

Note 1. Let

$$Y_t = -\int_0^t \mu(X_s)\, dW_s - \tfrac{1}{2}\int_0^t [\mu(X_s)]^2\, ds.$$

We apply Itô's formula with the function $f(x) = e^x$. Note that the martingale part of $Y_t$ is the
stochastic integral term and the quadratic variation of $Y$ is the quadratic variation of the
martingale part, so

$$\langle Y\rangle_t = \int_0^t [-\mu(X_s)]^2\, ds.$$

Then $f'(x) = e^x$, $f''(x) = e^x$, and hence

$$M_t = e^{Y_t} = e^{Y_0} + \int_0^t e^{Y_s}\, dY_s + \tfrac{1}{2}\int_0^t e^{Y_s}\, d\langle Y\rangle_s$$
$$= 1 + \int_0^t M_s\Big(-\mu(X_s)\, dW_s - \tfrac{1}{2}[\mu(X_s)]^2\, ds\Big)
+ \tfrac{1}{2}\int_0^t M_s[-\mu(X_s)]^2\, ds$$
$$= 1 - \int_0^t M_s\, \mu(X_s)\, dW_s.$$

Since stochastic integrals with respect to a Brownian motion are martingales, this completes
the argument that Mt is a martingale.
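This martingale property can also be seen in a small simulation, i.e. a check that $E[M_t] = 1$. The sketch below is an illustration only, not part of the notes; the bounded Lipschitz drift $\mu(x) = \sin x$ and all parameter values are arbitrary choices.

```python
import numpy as np

# Check that E[M_t] = 1 for a bounded Lipschitz drift, here mu(x) = sin(x).
# X solves dX = dW + mu(X) dt under P; log M accumulates the two integrals
# defining Y_t, and M_t = exp(Y_t).
mu = np.sin
n_paths, n_steps, t = 100_000, 400, 1.0
dt = t / n_steps
rng = np.random.default_rng(1)

x = np.zeros(n_paths)        # X_0 = 0
log_m = np.zeros(n_paths)    # Y_t = -int mu(X) dW - (1/2) int mu(X)^2 ds
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    log_m += -mu(x) * dw - 0.5 * mu(x) ** 2 * dt
    x += dw + mu(x) * dt
print("E[M_t] ~", np.exp(log_m).mean())   # should be close to 1.0
```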
Note 2. Let $S_n$ be a simple random walk. This means that $X_1, X_2, \ldots$ are independent
and identically distributed random variables with $P(X_i = 1) = P(X_i = -1) = \frac{1}{2}$; let
$S_n = \sum_{i=1}^n X_i$. If you are playing a game where you toss a fair coin and win $1 if it comes up heads
and lose $1 if it comes up tails, then $S_n$ will be your fortune at time $n$. Let $A_n = \max_{0\le k\le n} S_k$.
We will show the analogue of (14.4) for $S_n$, which is

$$P(S_n = x, A_n \ge a) = \begin{cases} P(S_n = x) & x \ge a,\\ P(S_n = 2a - x) & x < a.\end{cases} \tag{14.5}$$

(14.4) can be derived from this using a weak convergence argument.
To establish (14.5), note that if $x \ge a$ and $S_n = x$, then automatically $A_n \ge a$, so
the only case to consider is when $x < a$. Any path that crosses $a$ but is at level $x$ at time $n$
has a corresponding path determined by reflecting across level $a$ at the first time the random
walk hits $a$; the reflected path will end up at $a + (a - x) = 2a - x$. The probability on the
left hand side of (14.5) is the number of paths that hit $a$ and end up at $x$ divided by the total
number of paths. Since the number of paths that hit $a$ and end up at $x$ is equal to the number
of paths that end up at $2a - x$, the probability on the left is equal to the number of paths
that end up at $2a - x$ divided by the total number of paths; this is $P(S_n = 2a - x)$, which is
the right hand side.
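Because both sides of (14.5) are finite path counts, the identity can be verified exactly by brute force for small $n$. A quick sketch, not from the notes; the values of $n$ and $a$ are arbitrary:

```python
import itertools
from collections import Counter

# Exact verification of (14.5) by enumerating all 2^n coin-toss paths.
n, a = 10, 3
lhs = Counter()    # number of paths with S_n = x and A_n >= a
end = Counter()    # number of paths with S_n = x
for steps in itertools.product((1, -1), repeat=n):
    s, running_max = 0, 0              # S_0 = 0, so the running max starts at 0
    for step in steps:
        s += step
        running_max = max(running_max, s)
    end[s] += 1
    if running_max >= a:
        lhs[s] += 1

for x in range(-n, n + 1):
    rhs = end[x] if x >= a else end[2 * a - x]
    assert lhs[x] == rhs               # (14.5), stated as path counts
print(f"(14.5) verified for n = {n}, a = {a}")
```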
Note 3. To prove Theorem 14.2, we proceed as follows. Assume without loss of generality
that $X_0 = 0$. Then if $A \in \mathcal{F}_s$,

$$E_Q[X_t; A] = E_P[M_t X_t; A]$$
$$= E_P\Big[\int_0^t M_r\, dX_r; A\Big] + E_P\Big[\int_0^t X_r\, dM_r; A\Big] + E_P\big[\langle X, M\rangle_t; A\big]$$
$$= E_P\Big[\int_0^s M_r\, dX_r; A\Big] + E_P\Big[\int_0^s X_r\, dM_r; A\Big] + E_P\big[\langle X, M\rangle_t; A\big]$$
$$= E_Q[X_s; A] + E_P\big[\langle X, M\rangle_t - \langle X, M\rangle_s; A\big].$$

Here we used the fact that stochastic integrals with respect to the martingales X and M are
again martingales.
On the other hand,

$$E_P\big[\langle X, M\rangle_t - \langle X, M\rangle_s; A\big] = E_P\Big[\int_s^t d\langle X, M\rangle_r; A\Big]$$
$$= E_P\Big[\int_s^t M_r\, dD_r; A\Big]$$
$$= E_P\Big[\int_s^t E_P[M_t \mid \mathcal{F}_r]\, dD_r; A\Big]$$
$$= E_P\Big[\int_s^t M_t\, dD_r; A\Big]$$
$$= E_P\big[(D_t - D_s)M_t; A\big]$$
$$= E_Q[D_t - D_s; A].$$

The proof of the quadratic variation assertion is similar.


Note 4. Here is an argument showing how Theorem 14.1 can also be derived from Theorem
14.2.

From our formula for $M$ we have $dM_t = -M_t\, \mu(X_t)\, dW_t$, and therefore $d\langle X, M\rangle_t =
-M_t\, \mu(X_t)\, dt$. Hence by Theorem 14.2 we see that under $Q$, $X_t$ is a continuous martingale
with $\langle X\rangle_t = t$. By Lévy's theorem, this means that $X$ is a Brownian motion under $Q$.
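For constant $\mu$ the change of measure can be checked directly by reweighting samples, since $E_Q[Z] = E_P[M_t Z]$ for $\mathcal{F}_t$-measurable $Z$. A minimal sketch, not part of the notes; the parameter values are arbitrary:

```python
import numpy as np

# Reweighting check for constant mu: with dQ/dP = M_t = exp(mu W_t - mu^2 t/2),
# the process Y_t = W_t - mu t should have E_Q[Y_t] = 0 and E_Q[Y_t^2] = t,
# where E_Q[Z] is computed as E_P[M_t Z].
mu, t, n = 0.7, 2.0, 1_000_000
rng = np.random.default_rng(2)

w = rng.normal(0.0, np.sqrt(t), n)         # samples of W_t under P
m = np.exp(mu * w - mu**2 * t / 2)         # Radon-Nikodym weights M_t
y = w - mu * t
print("E_Q[Y_t]   ~", (m * y).mean())      # expect about 0
print("E_Q[Y_t^2] ~", (m * y * y).mean())  # expect about t = 2.0
```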




15. Stochastic differential equations.
Let $W_t$ be a Brownian motion. We are interested in existence and uniqueness
for stochastic differential equations (SDEs) of the form

$$dX_t = \sigma(X_t)\, dW_t + b(X_t)\, dt, \qquad X_0 = x_0. \tag{15.1}$$

This means $X_t$ satisfies

$$X_t = x_0 + \int_0^t \sigma(X_s)\, dW_s + \int_0^t b(X_s)\, ds. \tag{15.2}$$

Here $W_t$ is a Brownian motion, and (15.2) holds for almost every $\omega$.
We have to make some assumptions on $\sigma$ and $b$. We assume they are Lipschitz,
which means:

$$|\sigma(x) - \sigma(y)| \le c|x - y|, \qquad |b(x) - b(y)| \le c|x - y|$$

for some constant $c$. We also suppose that $\sigma$ and $b$ grow at most linearly, which means:

$$|\sigma(x)| \le c(1 + |x|), \qquad |b(x)| \le c(1 + |x|).$$

Theorem 15.1. There exists one and only one solution to (15.2).
The idea of the proof is Picard iteration, which is how existence and uniqueness for
ordinary differential equations are proved; see Note 1.
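To give a feel for the Picard scheme, here is a discrete sketch (an illustration only, not the actual proof): freeze one realization of the driving Brownian increments on a grid and iterate the map $X \mapsto x_0 + \int_0^\cdot \sigma(X_s)\, dW_s + \int_0^\cdot b(X_s)\, ds$. The coefficients below are arbitrary Lipschitz choices; the printed sup-distance between successive iterates shrinks rapidly, mirroring the contraction argument.

```python
import numpy as np

# Discrete Picard iteration for (15.2) along one frozen Brownian path.
sigma = lambda x: np.cos(x)    # Lipschitz diffusion coefficient
b = lambda x: -0.5 * x         # Lipschitz drift coefficient
x0, T, n = 1.0, 1.0, 2_000
dt = T / n
rng = np.random.default_rng(3)
dW = rng.normal(0.0, np.sqrt(dt), n)   # increments of the driving path

X = np.full(n + 1, x0)                 # zeroth iterate: constant x0
for m in range(8):
    incr = sigma(X[:-1]) * dW + b(X[:-1]) * dt
    X_new = x0 + np.concatenate(([0.0], np.cumsum(incr)))
    print(f"iterate {m + 1}: sup|change| = {np.abs(X_new - X).max():.2e}")
    X = X_new
```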
The intuition behind (15.1) is that $X_t$ behaves locally like a multiple of Brownian
motion plus a constant drift: locally $X_{t+h} - X_t \approx \sigma(W_{t+h} - W_t) + b((t+h) - t)$. However,
the constants $\sigma$ and $b$ depend on the current value of $X_t$. When $X_t$ is at different points,
the coefficients vary, which is why they are written $\sigma(X_t)$ and $b(X_t)$. $\sigma$ is sometimes called
the diffusion coefficient and $b$ is sometimes called the drift coefficient.
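This local description is exactly what the Euler–Maruyama scheme uses to simulate (15.1). A minimal sketch, not part of the notes: with the linear coefficients chosen below, the solution happens to be geometric Brownian motion, so $E[X_T] = x_0 e^{bT}$ gives an exact value to compare against.

```python
import numpy as np

# Euler-Maruyama for (15.1): repeatedly apply the local approximation
# X_{t+h} - X_t ~ sigma(X_t)(W_{t+h} - W_t) + b(X_t) h.
sigma = lambda x: 0.4 * x   # Lipschitz, grows linearly
b = lambda x: 0.1 * x
x0, T, n_steps, n_paths = 1.0, 1.0, 1_000, 100_000
dt = T / n_steps
rng = np.random.default_rng(4)

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += sigma(x) * dw + b(x) * dt
print("Monte Carlo E[X_T]:", x.mean(), "  exact:", x0 * np.exp(0.1 * T))
```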
The above theorem also works in higher dimensions. We want to solve

$$dX_t^i = \sum_{j=1}^d \sigma_{ij}(X_t)\, dW_t^j + b_i(X_t)\, dt, \qquad i = 1, \ldots, d.$$

This is an abbreviation for the equation

$$X_t^i = x_0^i + \sum_{j=1}^d \int_0^t \sigma_{ij}(X_s)\, dW_s^j + \int_0^t b_i(X_s)\, ds.$$

Here the initial value is $x_0 = (x_0^1, \ldots, x_0^d)$, the solution process is $X_t = (X_t^1, \ldots, X_t^d)$, and
$W_t^1, \ldots, W_t^d$ are $d$ independent Brownian motions. If all of the $\sigma_{ij}$ and $b_i$ are Lipschitz
and grow at most linearly, we have existence and uniqueness for the solution.
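The one-dimensional simulation above carries over to this system with essentially no changes; a sketch under the stated assumptions (the particular $\sigma_{ij}$ and $b_i$ below are arbitrary Lipschitz choices):

```python
import numpy as np

# Euler-Maruyama for the d-dimensional system: each step uses d independent
# Gaussian increments dW^j and the matrix-vector product sigma(x) @ dw.
d, T, n_steps = 3, 1.0, 1_000
dt = T / n_steps
rng = np.random.default_rng(5)

def sigma(x):                 # d x d matrix sigma_ij(x); sin is Lipschitz
    return np.eye(d) + 0.1 * np.diag(np.sin(x))

def b(x):                     # drift vector b_i(x)
    return -0.2 * x

x = np.zeros(d)               # start at the origin
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), d)
    x += sigma(x) @ dw + b(x) * dt    # dX^i = sum_j sigma_ij dW^j + b_i dt
print("X_T ~", x)
```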

Suppose one wants to solve
