Let us give another example of the use of Itô's formula. Let $X_t = W_t$ and let
$f(x) = x^k$. Then $f'(x) = kx^{k-1}$ and $f''(x) = k(k-1)x^{k-2}$. We then have
$$W_t^k = W_0^k + \int_0^t kW_s^{k-1}\, dW_s + \frac{1}{2}\int_0^t k(k-1)W_s^{k-2}\, d\langle W\rangle_s
= \int_0^t kW_s^{k-1}\, dW_s + \frac{k(k-1)}{2}\int_0^t W_s^{k-2}\, ds.$$
When $k = 3$, this says $W_t^3 - 3\int_0^t W_s\, ds$ is a stochastic integral with respect to a Brownian
motion, and hence a martingale.
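A minimal Monte Carlo sketch of the $k = 3$ case (an illustration, with arbitrarily chosen simulation parameters): it checks two necessary martingale conditions for $Z_t = W_t^3 - 3\int_0^t W_s\, ds$, namely that $E[Z_t] = 0$ and that the increment $Z_t - Z_s$ is orthogonal to $Z_s$.

```python
import numpy as np

# Z_t = W_t^3 - 3 * \int_0^t W_s ds should be a martingale, so
# E[Z_t] = 0 and E[(Z_t - Z_s) Z_s] = 0 for s < t.
rng = np.random.default_rng(0)
n_paths, n_steps, t_max = 20_000, 500, 1.0
dt = t_max / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
I = np.cumsum(W, axis=1) * dt       # Riemann approximation of \int_0^t W_s ds
Z = W**3 - 3 * I

Z_s, Z_t = Z[:, n_steps // 2 - 1], Z[:, -1]   # s = 0.5, t = 1.0
print("E[Z_t]            ~", Z_t.mean())                   # should be ~ 0
print("E[(Z_t - Z_s)Z_s] ~", ((Z_t - Z_s) * Z_s).mean())   # should be ~ 0
```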
For a semimartingale $X_t = M_t + A_t$ we set $\langle X\rangle_t = \langle M\rangle_t$. Given two semimartingales
$X, Y$, we define
$$\langle X, Y\rangle_t = \tfrac{1}{2}\big[\langle X+Y\rangle_t - \langle X\rangle_t - \langle Y\rangle_t\big].$$
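For instance, taking $Y = X$ and using that $\langle cX\rangle_t = c^2\langle X\rangle_t$, the definition returns the bracket itself:
$$\langle X, X\rangle_t = \tfrac{1}{2}\big[\langle 2X\rangle_t - 2\langle X\rangle_t\big] = \tfrac{1}{2}\big[4\langle X\rangle_t - 2\langle X\rangle_t\big] = \langle X\rangle_t.$$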

The following is known as Itô's product formula. It may also be viewed as an
integration by parts formula.

Proposition 13.2. If $X_t$ and $Y_t$ are semimartingales,
$$X_t Y_t = X_0 Y_0 + \int_0^t X_s\, dY_s + \int_0^t Y_s\, dX_s + \langle X, Y\rangle_t.$$

Proof. Applying Itô's formula with $f(x) = x^2$ to $X_t + Y_t$, we obtain
$$(X_t + Y_t)^2 = (X_0 + Y_0)^2 + 2\int_0^t (X_s + Y_s)(dX_s + dY_s) + \langle X+Y\rangle_t.$$
Applying Itô's formula with $f(x) = x^2$ to $X$ and to $Y$, then
$$X_t^2 = X_0^2 + 2\int_0^t X_s\, dX_s + \langle X\rangle_t$$
and
$$Y_t^2 = Y_0^2 + 2\int_0^t Y_s\, dY_s + \langle Y\rangle_t.$$
Then some algebra and the fact that
$$X_t Y_t = \tfrac{1}{2}\big[(X_t + Y_t)^2 - X_t^2 - Y_t^2\big]$$
yields the formula.
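To spell out the algebra: halving the difference of the three expansions gives $X_0 Y_0$ from the initial terms,
$$\int_0^t (X_s + Y_s)(dX_s + dY_s) - \int_0^t X_s\, dX_s - \int_0^t Y_s\, dY_s = \int_0^t X_s\, dY_s + \int_0^t Y_s\, dX_s$$
from the stochastic integrals, and $\tfrac{1}{2}\big[\langle X+Y\rangle_t - \langle X\rangle_t - \langle Y\rangle_t\big] = \langle X, Y\rangle_t$ from the bracket terms, which is the product formula.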
There is a multidimensional version of Itô's formula: if $X_t = (X_t^1, \ldots, X_t^d)$ is a
vector, each component of which is a semimartingale, and $f \in C^2$, then
$$f(X_t) - f(X_0) = \sum_{i=1}^d \int_0^t \frac{\partial f}{\partial x_i}(X_s)\, dX_s^i + \frac{1}{2}\sum_{i,j=1}^d \int_0^t \frac{\partial^2 f}{\partial x_i\, \partial x_j}(X_s)\, d\langle X^i, X^j\rangle_s.$$
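As a consistency check, take $d = 2$ and $f(x^1, x^2) = x^1 x^2$: the first partials are $x^2$ and $x^1$, the two mixed second partials equal $1$, and the pure second partials vanish, so the formula reduces to
$$X_t^1 X_t^2 - X_0^1 X_0^2 = \int_0^t X_s^2\, dX_s^1 + \int_0^t X_s^1\, dX_s^2 + \langle X^1, X^2\rangle_t,$$
which is exactly Proposition 13.2.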

The following application of Itô's formula, known as Lévy's theorem, is important.

Theorem 13.3. Suppose $M_t$ is a continuous martingale with $\langle M\rangle_t = t$. Then $M_t$ is a
Brownian motion.
Before proving this, recall from undergraduate probability that the moment generating
function of a r.v. $X$ is defined by $m_X(a) = E\, e^{aX}$ and that if two random variables have
the same moment generating function, they have the same law. This is also true if we
replace $a$ by $iu$. In this case we have $\varphi_X(u) = E\, e^{iuX}$ and $\varphi_X$ is called the characteristic
function of $X$. The reason for looking at the characteristic function is that $\varphi_X$ always
exists, whereas $m_X(a)$ might be infinite. The one special case we will need is that if $X$ is
a normal r.v. with mean 0 and variance $t$, then $\varphi_X(u) = e^{-u^2 t/2}$. This follows from the
formula for $m_X(a)$ with $a$ replaced by $iu$ (this can be justified rigorously).
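To make that last step concrete: for $X \sim N(0, t)$, completing the square in the Gaussian integral gives
$$m_X(a) = \frac{1}{\sqrt{2\pi t}}\int_{-\infty}^{\infty} e^{ax} e^{-x^2/2t}\, dx = e^{a^2 t/2},$$
and substituting $a = iu$ turns the exponent $a^2 t/2$ into $-u^2 t/2$, giving $\varphi_X(u) = e^{-u^2 t/2}$.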
Proof. We will prove that $M_t$ is a $N(0,t)$; for the remainder of the proof see Note 1.
We apply Itô's formula with $f(x) = e^{iux}$. Then
$$e^{iuM_t} = 1 + \int_0^t iu\, e^{iuM_s}\, dM_s + \tfrac{1}{2}\int_0^t (-u^2) e^{iuM_s}\, d\langle M\rangle_s.$$

Taking expectations and using $\langle M\rangle_s = s$ and the fact that a stochastic integral is a
martingale, hence has 0 expectation, we have
$$E\, e^{iuM_t} = 1 - \frac{u^2}{2}\int_0^t E\, e^{iuM_s}\, ds.$$

Let $J(t) = E\, e^{iuM_t}$. The equation can be rewritten
$$J(t) = 1 - \frac{u^2}{2}\int_0^t J(s)\, ds.$$
So $J'(t) = -\tfrac{1}{2}u^2 J(t)$ with $J(0) = 1$. The solution to this elementary ODE is
$J(t) = e^{-u^2 t/2}$. Since
$$E\, e^{iuM_t} = e^{-u^2 t/2},$$
then by our remarks above the law of $M_t$ must be that of a $N(0,t)$, which shows that $M_t$
is a mean 0 variance $t$ normal r.v.
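A small simulation sketch of Theorem 13.3 (an illustration, not part of the notes): $M_t = \int_0^t \operatorname{sgn}(W_s)\, dW_s$ is a continuous martingale with $\langle M\rangle_t = \int_0^t \operatorname{sgn}(W_s)^2\, ds = t$, so by Lévy's theorem it is a Brownian motion; the code compares its empirical characteristic function with $e^{-u^2 t/2}$.

```python
import numpy as np

# M_t = \int_0^t sgn(W_s) dW_s is a continuous martingale with <M>_t = t,
# so Theorem 13.3 says it is a Brownian motion.
# Check: E e^{iuM_t} ~ e^{-u^2 t / 2}.
rng = np.random.default_rng(1)
n_paths, n_steps, t = 20_000, 500, 1.0
dt = t / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
# Ito convention: evaluate the integrand at the left endpoint of each interval.
left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])
M_t = np.sum(np.sign(left) * dW, axis=1)

for u in (0.5, 1.0, 2.0):
    emp = np.mean(np.exp(1j * u * M_t)).real
    print(f"u = {u}:  empirical {emp:.4f}   target {np.exp(-u**2 * t / 2):.4f}")
```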

Note 1. If $A \in \mathcal{F}_s$ and we do the same argument with $M_t$ replaced by $M_{s+t} - M_s$, we have
$$e^{iu(M_{s+t} - M_s)} = 1 + \int_0^t iu\, e^{iu(M_{s+r} - M_s)}\, dM_r + \tfrac{1}{2}\int_0^t (-u^2) e^{iu(M_{s+r} - M_s)}\, d\langle M\rangle_r.$$

Multiply this by $1_A$ and take expectations. Since a stochastic integral is a martingale, the
stochastic integral term again has expectation 0. If we let $K(t) = E[e^{iu(M_{s+t} - M_s)}; A]$, we
now arrive at $K'(t) = -\tfrac{1}{2}u^2 K(t)$ with $K(0) = P(A)$, so
$$K(t) = P(A)\, e^{-u^2 t/2}.$$

Therefore
$$E\big[e^{iu(M_{s+t} - M_s)}; A\big] = E\, e^{iu(M_{s+t} - M_s)}\, P(A). \qquad (13.2)$$

If $f$ is a nice function and $\hat f$ is its Fourier transform, replace $u$ in the above by $-u$, multiply
by $\hat f(u)$, and integrate over $u$. (To do the integral, we approximate the integral by a Riemann
sum and then take limits.) We then have
$$E[f(M_{s+t} - M_s); A] = E[f(M_{s+t} - M_s)]\, P(A).$$
By taking limits we have this for $f = 1_B$, so
$$P(M_{s+t} - M_s \in B,\ A) = P(M_{s+t} - M_s \in B)\, P(A).$$
This implies that $M_{s+t} - M_s$ is independent of $\mathcal{F}_s$.
Note $\operatorname{Var}(M_t - M_s) = t - s$; take $A = \Omega$ in (13.2).
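(To expand that last remark: with $A = \Omega$, the computation of $K$ gives $E\, e^{iu(M_{s+t} - M_s)} = e^{-u^2 t/2}$, so $M_{s+t} - M_s \sim N(0, t)$; writing $t - s$ in place of $t$ shows $M_t - M_s \sim N(0, t - s)$, hence the variance is $t - s$.)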




14. The Girsanov theorem.
Suppose $P$ is a probability and
$$dX_t = dW_t + \mu(X_t)\, dt,$$
where $W_t$ is a Brownian motion. This is shorthand for
$$X_t = X_0 + W_t + \int_0^t \mu(X_s)\, ds. \qquad (14.1)$$

Let
$$M_t = \exp\Big(-\int_0^t \mu(X_s)\, dW_s - \int_0^t \mu(X_s)^2\, ds/2\Big). \qquad (14.2)$$

Then, as we have seen before, by Itô's formula, $M_t$ is a martingale. This calculation is
reviewed in Note 1. We also observe that $M_0 = 1$.
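In outline, the calculation runs as follows: set $Y_t = -\int_0^t \mu(X_s)\, dW_s - \tfrac{1}{2}\int_0^t \mu(X_s)^2\, ds$, so that $M_t = e^{Y_t}$ and $\langle Y\rangle_t = \int_0^t \mu(X_s)^2\, ds$. Itô's formula with $f(x) = e^x$ gives
$$M_t = 1 + \int_0^t M_s\, dY_s + \tfrac{1}{2}\int_0^t M_s\, d\langle Y\rangle_s = 1 - \int_0^t M_s\, \mu(X_s)\, dW_s,$$
since the $ds$ part of $dY_s$ cancels the bracket term; thus $M_t$ is a stochastic integral with respect to $W$, hence a martingale (granted suitable integrability of $\mu$).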
Now let us define a new probability by setting
$$Q(A) = E[M_t; A] \qquad (14.3)$$
if $A \in \mathcal{F}_t$. We had better be sure this $Q$ is well defined. If $A \in \mathcal{F}_s \subset \mathcal{F}_t$, then $E[M_t; A] =
E[M_s; A]$ because $M_t$ is a martingale. We also check that $Q(\Omega) = E[M_t; \Omega] = E\, M_t$. This
is equal to $E\, M_0 = 1$, since $M_t$ is a martingale.
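A minimal simulation sketch of the change of measure (14.3), assuming for simplicity a constant drift $\mu(x) = \mu$ (the parameter values are arbitrary): we sample $X_t = W_t + \mu t$ under $P$, weight by $M_t$, and check that the reweighted moments of $X_t$ match those of a Brownian motion, as Theorem 14.1, stated just below, asserts.

```python
import numpy as np

# Girsanov reweighting (14.3) with constant drift mu(x) = mu.
# Under P: X_t = W_t + mu*t.  Weight: M_t = exp(-mu*W_t - mu^2*t/2), by (14.2).
# Under Q(A) = E[M_t; A], X should look like a Brownian motion, so
# E_Q[X_t] ~ 0 and E_Q[X_t^2] ~ t.
rng = np.random.default_rng(1)
mu, t, n_paths = 0.7, 1.0, 1_000_000

W_t = rng.normal(0.0, np.sqrt(t), size=n_paths)  # P-Brownian motion at time t
X_t = W_t + mu * t                               # drifted process under P
M_t = np.exp(-mu * W_t - 0.5 * mu**2 * t)        # Girsanov density

print("E_P[M_t]   =", M_t.mean())             # ~ 1 (mean-one martingale)
print("E_Q[X_t]   =", (M_t * X_t).mean())     # ~ 0 (BM mean under Q)
print("E_Q[X_t^2] =", (M_t * X_t**2).mean())  # ~ t (BM variance under Q)
```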

What the Girsanov theorem says is

Theorem 14.1. Under $Q$, $X_t$ is a Brownian motion.

Under $P$, $W_t$ is a Brownian motion and $X_t$ is not. Under $Q$, the process $W_t$ is no
longer a Brownian motion.
In order for a process $X_t$ to be a Brownian motion, we need at a minimum that $X_t$
have mean zero and variance $t$. To define mean and variance, we need a probability. Therefore
a process might be a Brownian motion with respect to one probability and not another.
Most of the other parts of the definition of being a Brownian motion also depend on the
probability.
Similarly, to be a martingale, we need conditional expectations, and the conditional
expectation of a random variable depends on what probability is being used.
There is a more general version of the Girsanov theorem.

Theorem 14.2. If $X_t$ is a martingale under $P$, then under $Q$ the process $X_t - D_t$ is a
martingale, where
$$D_t = \int_0^t \frac{1}{M_s}\, d\langle X, M\rangle_s.$$
$\langle X\rangle_t$ is the same under both $P$ and $Q$.

Let us see how Theorem 14.1 can be used. Let $S_t$ be the stock price, and suppose
$$dS_t = \sigma S_t\, dW_t + m S_t\, dt.$$
(So in the above formulation, $\mu(x) = m$ for all $x$.) Define
$$M_t = e^{(-m/\sigma)W_t - (m^2/2\sigma^2)t}.$$
Then from (13.1) $M_t$ is a martingale and
$$M_t = 1 + \int_0^t \Big(-\frac{m}{\sigma}\Big) M_s\, dW_s.$$

Let $X_t = W_t$. Then
$$\langle X, M\rangle_t = -\int_0^t \frac{m}{\sigma}\, M_s\, ds = -\frac{m}{\sigma}\int_0^t M_s\, ds.$$
Therefore
$$\int_0^t \frac{1}{M_s}\, d\langle X, M\rangle_s = -\int_0^t \frac{m}{\sigma}\, ds = -(m/\sigma)t.$$

Define $Q$ by (14.3). By Theorem 14.2, under $Q$ the process $\widetilde W_t = W_t + (m/\sigma)t$ is a
martingale. Hence
$$dS_t = \sigma S_t\big(dW_t + (m/\sigma)\, dt\big) = \sigma S_t\, d\widetilde W_t,$$
or
$$S_t = S_0 + \int_0^t \sigma S_s\, d\widetilde W_s$$
is a martingale. So we have found a probability under which the asset price is a martingale.
This means that $Q$ is the risk-neutral probability, which we have been calling $\overline{P}$.
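A minimal numerical sketch of this change of measure (parameter values are illustrative, not from the notes): simulate $S_T$ under $P$ with drift $m$, weight by $M_T$, and check that $E_Q[S_T] = S_0$, i.e. that the reweighted stock price is a martingale.

```python
import numpy as np

# Risk-neutral change of measure for dS = sigma*S dW + m*S dt.
# Under P:  S_T = S_0 * exp(sigma*W_T + (m - sigma^2/2) * T).
# Weight:   M_T = exp(-(m/sigma)*W_T - (m^2 / (2*sigma^2)) * T).
# Under Q(A) = E[M_T; A], S should be a martingale: E_Q[S_T] = S_0.
rng = np.random.default_rng(2)
S0, m, sigma, T, n_paths = 100.0, 0.08, 0.25, 1.0, 1_000_000

W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)
S_T = S0 * np.exp(sigma * W_T + (m - 0.5 * sigma**2) * T)
M_T = np.exp(-(m / sigma) * W_T - 0.5 * (m / sigma) ** 2 * T)

print("E_P[S_T] =", S_T.mean())          # ~ S0 * exp(m*T): grows at rate m
print("E_Q[S_T] =", (M_T * S_T).mean())  # ~ S0: martingale under Q
```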
Let us give another example of the use of the Girsanov theorem. Suppose $X_t =
W_t + \mu t$, where $\mu$ is a constant. We want to compute the probability that $X_t$ exceeds the
level $a$ by time $t_0$.
We first need the probability that a Brownian motion crosses a level $a$ by time $t_0$.
If $A_t = \sup_{s \le t} W_s$ (note we are not looking at $|W_t|$), we have
$$P(A_t > a,\ c \le W_t \le d) = \int_c^d \varphi(t, a, x)\, dx, \qquad (14.4)$$
where
$$\varphi(t, a, x) = \begin{cases} \dfrac{1}{\sqrt{2\pi t}}\, e^{-x^2/2t} & x \ge a, \\[6pt] \dfrac{1}{\sqrt{2\pi t}}\, e^{-(2a-x)^2/2t} & x < a. \end{cases}$$

This is called the reflection principle, and the name is due to the derivation, given in Note
2. Sometimes one says
$$P(W_t = x,\ A_t > a) = P(W_t = 2a - x), \qquad x < a,$$
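A small Monte Carlo check of (14.4) (an illustration with arbitrary parameter choices; discretizing the path slightly underestimates the running maximum $A_t$):

```python
import numpy as np

# Check (14.4): P(A_t > a, c <= W_t <= d) vs the integral of phi(t, a, x)
# over [c, d], where A_t = sup_{s <= t} W_s.
rng = np.random.default_rng(3)
t, a, c, d = 1.0, 1.0, -0.5, 1.5
n_paths, n_steps = 20_000, 1_000
dt = t / n_steps

W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
A_t, W_t = W.max(axis=1), W[:, -1]
mc = np.mean((A_t > a) & (c <= W_t) & (W_t <= d))

# phi: the N(0, t) density for x >= a, the reflected density for x < a.
x = np.linspace(c, d, 10_001)
phi = np.where(x >= a,
               np.exp(-x**2 / (2 * t)),
               np.exp(-(2 * a - x)**2 / (2 * t))) / np.sqrt(2 * np.pi * t)
print("Monte Carlo estimate :", mc)
print("Integral of phi      :", np.sum(phi) * (x[1] - x[0]))
```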
