
First step. If G is bounded and F_a measurable, let H_s(ω) = G(ω) 1_(a,b](s), and define the
stochastic integral to be the process N_t, where N_t = G(W_{t∧b} − W_{t∧a}). Compare this to
the first paragraph of this section, where we considered Riemann-Stieltjes integrals.
Proposition 12.2. N_t is a continuous martingale, E N_∞² = E[G²(b − a)], and

    ⟨N⟩_t = ∫_0^t G² 1_[a,b](s) ds.


Proof. The continuity is clear. Let us look at E[N_t | F_s]. In the case a < s < t < b, this
is equal to

    E[G(W_t − W_a) | F_s] = G E[(W_t − W_a) | F_s] = G(W_s − W_a) = N_s.

In the case s < a < t < b, E[N_t | F_s] is equal to

    E[G(W_t − W_a) | F_s] = E[G E[W_t − W_a | F_a] | F_s] = 0 = N_s.

The other possibilities are s < t < a < b, a < b < s < t, s < a < b < t, and a < s < b < t;
these are done similarly.
For E N_∞², we have, using Lemma 12.1(b),

    E N_∞² = E[G²(W_b − W_a)²] = E[G² E[(W_b − W_a)² | F_a]] = E[G²(b − a)].


For ⟨N⟩_t, we need to show

    E[G²(W_{t∧b} − W_{t∧a})² − G²(t∧b − t∧a) | F_s]
        = G²(W_{s∧b} − W_{s∧a})² − G²(s∧b − s∧a).

We do this by checking all six cases for the relative locations of a, b, s, and t; we do one of
the cases in Note 2.
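As a quick sanity check on Proposition 12.2, one can estimate E N_∞² by Monte Carlo. In the sketch below the values of a, b and the choice G = cos(W_a) (bounded and F_a measurable) are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n_paths = 1.0, 2.0, 200_000

# W_a and the increment W_b - W_a are independent Gaussians.
W_a = rng.normal(0.0, np.sqrt(a), n_paths)
incr = rng.normal(0.0, np.sqrt(b - a), n_paths)

# G must be bounded and F_a measurable; G = cos(W_a) is one such choice.
G = np.cos(W_a)

N_inf = G * incr                      # N_inf = G (W_b - W_a)
lhs = np.mean(N_inf**2)               # Monte Carlo estimate of E N_inf^2
rhs = np.mean(G**2 * (b - a))         # Monte Carlo estimate of E [G^2 (b - a)]
print(lhs, rhs)                       # the two estimates should nearly agree
```

With 200,000 paths the two sample means typically agree to two decimal places.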


Second step. Next suppose H_s is simple as in (12.3). In this case define the stochastic
integral

    N_t = ∫_0^t H_s dW_s = Σ_{i=1}^n G_i (W_{b_i∧t} − W_{a_i∧t}).

Proposition 12.3. N_t is a continuous martingale, E N_∞² = E ∫_0^∞ H_s² ds, and

    ⟨N⟩_t = ∫_0^t H_s² ds.


Proof. We may rewrite H so that the intervals (a_i, b_i] satisfy a_1 ≤ b_1 ≤ a_2 ≤ b_2 ≤ · · · ≤ b_n.
For example, if we had a_1 < a_2 < b_1 < b_2, we could write

    H_s = G_1 1_(a_1,a_2] + (G_1 + G_2) 1_(a_2,b_1] + G_2 1_(b_1,b_2],

and then if we set G'_1 = G_1, G'_2 = G_1 + G_2, G'_3 = G_2 and a'_1 = a_1, b'_1 = a_2,
a'_2 = a_2, b'_2 = b_1, a'_3 = b_1, b'_3 = b_2, we have written H as

    Σ_{i=1}^3 G'_i 1_(a'_i, b'_i].

So now we have H simple but with the intervals (a'_i, b'_i] non-overlapping.
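This rewriting is purely combinatorial and can be sketched in code. In this illustrative version the coefficients are plain numbers standing in for the random variables G_i, and the function name and interface are my own:

```python
from itertools import chain

def make_disjoint(terms):
    """Rewrite sum_i G_i 1_{(a_i, b_i]} over non-overlapping intervals.

    terms: list of (G_i, a_i, b_i); the G_i here are numbers standing in
    for the F_{a_i}-measurable random variables in the text.
    Returns a list of (coefficient, left, right) with disjoint (left, right].
    """
    pts = sorted(set(chain.from_iterable((a, b) for _, a, b in terms)))
    out = []
    for left, right in zip(pts, pts[1:]):
        # On (left, right], the coefficient is the sum of all G_i whose
        # interval (a_i, b_i] contains it.
        coef = sum(g for g, a, b in terms if a <= left and right <= b)
        if coef != 0:
            out.append((coef, left, right))
    return out

# The example from the text, with a1 < a2 < b1 < b2:
print(make_disjoint([(1.0, 0.0, 2.0), (2.0, 1.0, 3.0)]))
# [(1.0, 0.0, 1.0), (3.0, 1.0, 2.0), (2.0, 2.0, 3.0)]
```

On the overlap (1, 2] the two coefficients add, exactly as in the displayed rewriting of H above.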
Since the sum of martingales is clearly a martingale, N_t is a martingale. The sum
of continuous processes will be continuous, so N_t has continuous paths.
We have

    E N_∞² = E[Σ_i G_i²(W_{b_i} − W_{a_i})²] + 2 E[Σ_{i<j} G_i G_j (W_{b_i} − W_{a_i})(W_{b_j} − W_{a_j})].


The terms in the second sum vanish, because when we condition on F_{a_j} (for i < j),

    E[G_i G_j (W_{b_i} − W_{a_i})(W_{b_j} − W_{a_j})]
        = E[G_i G_j (W_{b_i} − W_{a_i}) E[(W_{b_j} − W_{a_j}) | F_{a_j}]] = 0.

For the terms in the first sum, by Lemma 12.1,

    E[G_i²(W_{b_i} − W_{a_i})²] = E[G_i² E[(W_{b_i} − W_{a_i})² | F_{a_i}]] = E[G_i²(b_i − a_i)].

So

    E N_∞² = Σ_{i=1}^n E[G_i²(b_i − a_i)],

and this is the same as E ∫_0^∞ H_s² ds.
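Proposition 12.3 can likewise be checked numerically for a concrete simple process. The two-term integrand below, with G_1 = cos(W_{1/2}) on (1/2, 1] and G_2 = sin(W_1) on (1, 3/2], is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths = 200_000

# Brownian increments over (0, 0.5], (0.5, 1], (1, 1.5].
d1 = rng.normal(0.0, np.sqrt(0.5), n_paths)
d2 = rng.normal(0.0, np.sqrt(0.5), n_paths)
d3 = rng.normal(0.0, np.sqrt(0.5), n_paths)
W_half, W_one = d1, d1 + d2

# H = G1 on (0.5, 1] and G2 on (1, 1.5]; each G_i is measurable at time a_i.
G1, G2 = np.cos(W_half), np.sin(W_one)

N_inf = G1 * d2 + G2 * d3                 # the stochastic integral of H
lhs = np.mean(N_inf**2)                   # Monte Carlo estimate of E N_inf^2
rhs = np.mean(G1**2 * 0.5 + G2**2 * 0.5)  # Monte Carlo estimate of E ∫ H_s^2 ds
print(lhs, rhs)
```

The cross term G_1 G_2 d_2 d_3 averages to zero over the sample, which is the numerical face of the conditioning argument in the proof.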



Third step. Now suppose H_s is adapted and E ∫_0^∞ H_s² ds < ∞. Using some results from
measure theory (Note 3), we can choose simple processes H_s^n such that E ∫_0^∞ (H_s^n − H_s)² ds → 0.
The triangle inequality then implies (see Note 3 again) that

    E ∫_0^∞ (H_s^n − H_s^m)² ds → 0   as n, m → ∞.

Define N_t^n = ∫_0^t H_s^n dW_s using Step 2. By Doob's inequality (Theorem 10.3) we have

    E[sup_t (N_t^n − N_t^m)²] = E[sup_t (∫_0^t (H_s^n − H_s^m) dW_s)²]
        ≤ 4 E[(∫_0^∞ (H_s^n − H_s^m) dW_s)²]
        = 4 E ∫_0^∞ (H_s^n − H_s^m)² ds → 0.

This should look reminiscent of the definition of Cauchy sequences, and in fact that is what
is going on here; Note 3 has details. In the present context Cauchy sequences converge,
and one can show (Note 3) that there exists a process N_t such that

    E[sup_t (∫_0^t H_s^n dW_s − N_t)²] → 0.

t
If Hs and Hs are two sequences converging to H, then E ( 0 (Hs ’ Hs ) dWs )2 =
n n n n
t
E 0 (Hs ’ Hs )2 ds ’ 0, or the limit is independent of which sequence H n we choose. See
n n
t t
Note 4 for the proof that Nt is a martingale, E Nt2 = E 0 Hs ds, and N t = 0 Hs ds.
2 2


Because E[sup_t (∫_0^t H_s^n dW_s − N_t)²] → 0, one can show there exists a subsequence along
which the convergence takes place almost surely, and with probability one, N_t has continuous
paths (Note 5).
We write N_t = ∫_0^t H_s dW_s and call N_t the stochastic integral of H with respect to W.
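To see the approximation scheme in action, one can discretize a continuous adapted integrand, here H_s = W_s itself, by the simple process that holds the left-endpoint value on each partition interval. The partition size below is my own choice, and the closed form (W_t² − t)/2 for ∫_0^t W_s dW_s anticipates Itô's formula:

```python
import numpy as np

rng = np.random.default_rng(2)
t, n_steps, n_paths = 1.0, 2000, 20_000
dt = t / n_steps

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)
# Left-endpoint values W_{s_i}: these define a simple, adapted integrand.
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])

# Sums of W_{s_i} (W_{s_{i+1}} - W_{s_i}) approximate the integral ∫_0^t W_s dW_s.
approx = np.sum(W_left * dW, axis=1)
exact = 0.5 * (W[:, -1]**2 - t)    # known closed form of ∫_0^t W_s dW_s

mse = np.mean((approx - exact)**2)
print(mse)                          # small, and shrinks as the partition refines
```

Using the left endpoint is essential: it keeps the approximating simple process predictable, matching the measurability requirement in Note 1.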

We discuss some extensions of the definition. First of all, if we replace W_t by a
continuous martingale M_t and H_s is adapted with E ∫_0^t H_s² d⟨M⟩_s < ∞, we can duplicate
everything we just did (see Note 6) with ds replaced by d⟨M⟩_s and get a stochastic integral.
In particular, if d⟨M⟩_s = K_s² ds, we replace ds by K_s² ds.
There are some other extensions of the definition that are not hard. If the random
variable ∫_0^∞ H_s² d⟨M⟩_s is finite but without its expectation being finite, we can define the
stochastic integral by defining it for t ≤ T_N for suitable stopping times T_N and then letting
T_N → ∞; look at Note 7.
A process A_t is of bounded variation if the paths of A_t have bounded variation. This
means that one can write A_t = A_t^+ − A_t^−, where A_t^+ and A_t^− have paths that are increasing.
|A|_t is then defined to be A_t^+ + A_t^−. A semimartingale is the sum of a martingale and a
process of bounded variation. If ∫_0^∞ H_s² d⟨M⟩_s + ∫_0^∞ |H_s| |dA_s| < ∞ and X_t = M_t + A_t,
we define

    ∫_0^t H_s dX_s = ∫_0^t H_s dM_s + ∫_0^t H_s dA_s,

where the first integral on the right is a stochastic integral and the second is a Riemann-
Stieltjes or Lebesgue-Stieltjes integral. For a semimartingale, we define ⟨X⟩_t = ⟨M⟩_t. Note
7 has more on this.
Given two semimartingales X and Y we define ⟨X, Y⟩_t by what is known as polarization:

    ⟨X, Y⟩_t = ½ [⟨X + Y⟩_t − ⟨X⟩_t − ⟨Y⟩_t].

As an example, if X_t = ∫_0^t H_s dW_s and Y_t = ∫_0^t K_s dW_s, then (X + Y)_t = ∫_0^t (H_s + K_s) dW_s,
so

    ⟨X + Y⟩_t = ∫_0^t (H_s + K_s)² ds = ∫_0^t H_s² ds + ∫_0^t 2 H_s K_s ds + ∫_0^t K_s² ds.
Since ⟨X⟩_t = ∫_0^t H_s² ds, with a similar formula for ⟨Y⟩_t, we conclude

    ⟨X, Y⟩_t = ∫_0^t H_s K_s ds.
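The covariance formula can be illustrated numerically: since X_t Y_t − ⟨X, Y⟩_t is a martingale, E[X_t Y_t] = E ∫_0^t H_s K_s ds. The deterministic choices H_s = s and K_s = 1 below are illustrative assumptions, giving ⟨X, Y⟩_t = t²/2:

```python
import numpy as np

rng = np.random.default_rng(3)
t, n_steps, n_paths = 1.0, 1000, 100_000
dt = t / n_steps
s = np.arange(n_steps) * dt            # left endpoints of the partition

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
X = np.sum(s * dW, axis=1)             # X_t ≈ ∫_0^t s dW_s      (H_s = s)
Y = np.sum(dW, axis=1)                 # Y_t = W_t               (K_s = 1)

# E[X_t Y_t] should match ∫_0^t s ds = t^2 / 2.
print(np.mean(X * Y), t**2 / 2)
```

With 100,000 paths the sample covariance is typically within a few thousandths of 1/2.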



The following holds, which is what one would expect.


Proposition 12.4. Suppose K_s is adapted to F_s and E ∫_0^∞ K_s² ds < ∞. Let
N_t = ∫_0^t K_s dW_s. Suppose H_s is adapted and E ∫_0^∞ H_s² d⟨N⟩_s < ∞. Then
E ∫_0^∞ H_s² K_s² ds < ∞ and

    ∫_0^t H_s dN_s = ∫_0^t H_s K_s dW_s.

The argument for the proof is given in Note 8.
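At the level of discrete approximating sums, Proposition 12.4 is just associativity: the increments of N over a partition are K_{s_i} ΔW_i, so H_{s_i} ΔN_i and H_{s_i} K_{s_i} ΔW_i agree term by term. A sketch, where the particular H and K are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps = 1000
dt = 1.0 / n_steps
dW = rng.normal(0.0, np.sqrt(dt), n_steps)

W_left = np.concatenate([[0.0], np.cumsum(dW)[:-1]])
K = np.cos(W_left)                 # adapted integrand defining N_t = ∫ K dW
H = np.sin(W_left)                 # adapted integrand to integrate against N

dN = K * dW                        # discrete increments of N
lhs = np.cumsum(H * dN)            # running sums for ∫_0^t H dN
rhs = np.cumsum(H * K * dW)        # running sums for ∫_0^t H K dW

print(np.max(np.abs(lhs - rhs)))   # ~0: agreement up to floating-point rounding
```

The full proposition requires passing this identity through the L² limits of the three construction steps, which is what Note 8 carries out.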

What does a stochastic integral mean? If one thinks of the derivative of Z_t as being
a white noise, then ∫_0^t H_s dZ_s is like a filter that increases or decreases the volume by a
factor H_s.
For us, an interpretation is that Z_t represents a stock price. Then ∫_0^t H_s dZ_s repre-
sents our profit (or loss) if we hold H_s shares at time s. This can be seen most easily if
H_s = G 1_[a,b]. So we buy G(ω) shares at time a and sell them at time b. The stochastic
integral represents our profit or loss.
Since we are in continuous time, we are allowed to buy and sell continuously and
instantaneously. What we are not allowed to do is look into the future to make our
decisions, which is where the condition that H_s be adapted comes in.

Note 1. Let us be more precise concerning the measurability of H that is needed. H is a
stochastic process, so can be viewed as a map from [0, ∞) × Ω to R by H : (s, ω) → H_s(ω).
We define a σ-field P on [0, ∞) × Ω as follows. Consider the collection of processes of the form
G(ω)1_(a,b](s) where G is bounded and F_a measurable for some a < b. Define P to be the
smallest σ-field with respect to which every process of this form is measurable. P is called the
predictable or previsible σ-field, and if a process H is measurable with respect to P, then the
process is called predictable. What we require for our integrands H is that they be predictable
processes.
If H_s has continuous paths, then approximating continuous functions by step functions
shows that such an H can be approximated by linear combinations of processes of the form
G(ω)1_(a,b](s). So continuous processes are predictable. The majority of the integrands we