
R = (9767, 11500), P = (25482, 16638);
e = 11685.

The obtained ciphertext ((9767,11500), 11685) must decrypt to the
message 10000 under the secret key 5103.
6.5 Make a program implementation of the algorithms of elliptic curve
signature generation and verification. As usual, assume that
h(m) = m. As the parameter q take n = 32089 (the number of
points on the curve). The signed message
(1000; 4615, 5944)
must be declared valid for the signer whose public key is Y =
(12224, 7207).
Chapter 7

Theoretical Security of Cryptosystems



7.1 Introduction

One of the first open works on cryptography was [Shannon (1949)]. Shannon
considered the classical secret-key cryptosystem shown schematically
in Fig. 1.1 (p. 3). In this system, we have a secure channel for transmitting
secret keys. However, notice that nowadays we may replace the secure
channel by a secured channel which is virtually created when the secret key
is computed by means of public-key cryptographic methods, e.g. Diffie-
Hellman key agreement or Needham-Schroeder protocol. In this chapter,
we shall consider only the classical scheme with the secret key, but many
of the results may be extended to the case of generating secret keys by
public-key methods.
We may roughly divide all ciphers into two big classes:


(1) the schemes unbreakable in principle, which can be strictly proved;
(2) the schemes whose security is based on the impossibility of searching
through a large number of keys (although, in principle, they can be
broken).


In this chapter, we shall study the systems from the first class. The second
class will be the topic of the next chapter. To present the material of this
chapter we need some elementary notions and facts from probability theory.
We shall use these without giving strict definitions and proofs, which may be
found in almost any textbook on probability theory, see, e.g. [Feller (1968)].
Many of the results discussed in the following sections are due to [Shannon
(1948)] and [Shannon (1949)].



7.2 Theory of Perfect Secrecy

Let M = {M1, M2, M3, ..., Mm} be the set of all admissible messages
(e.g. the set of all texts in English of length no more than 1000 letters),
K = {K1, K2, K3, ..., Kn} be the set of all possible keys, and E =
{E1, E2, ..., Ek} be the set of all cryptograms (i.e. enciphered messages).
The cryptograms are functions of the source message and the key,
i.e. Ej = f(Mi, Kl).
Assume that the set of messages M obeys a probability distribution P,
i.e. a probability P(Mi) is defined for all i = 1, 2, ..., m. This is an a
priori distribution which is also known to an adversary. The notation P(A|B)
will be used, as usual, for the conditional probability of event A given event B
(i.e. P(A|B) is the probability of occurrence of event A provided that event
B has occurred).
Definition 7.1 A cryptosystem is said to be perfectly secure (or to provide
perfect secrecy) if the equality

P(Mi|Ej) = P(Mi)                                        (7.1)

holds for all Mi, Kl, and Ej = f(Mi, Kl).

Let's give some explanations. Suppose that Eve overhears the cryptogram
Ej. If Eq. (7.1) holds for all admissible messages then Eve does not
obtain any information about the message transmitted, i.e. the knowledge
of Ej is of no use to her. Consider a schematic example.

Example 7.1 Let A be the set of all 6-letter words in English. Let it
be known a priori that

P(message = “dollar”) = 0.000150,
P(message = “bottle”) = 0.000012, etc.                  (7.2)

Suppose we have a non-perfect system and Eve upon interception and com-
putation obtains the following data:

P(message = “dollar”) = (some negligible value),
P(message = “bottle”) = 0.9999.

It means that Eve has, in fact, deciphered the message: she is almost sure
that the word “bottle” was transmitted because the probability of anything
else does not exceed 0.0001.


Suppose now that we have a perfect system. In this case, for any inter-
cepted cryptogram Ej, Eve obtains

P(message = “dollar”|Ej) = 0.000150,
P(message = “bottle”|Ej) = 0.000012, etc.

i.e. her a posteriori distribution completely coincides with the a priori dis-
tribution (7.2). It means that she may pay no attention to the intercepted
cryptogram but guess the message based on the source probabilities. We
can see that Eq. (7.1) is actually a reasonable definition of perfect secrecy.

Let us explore the properties of perfect secrecy systems.

Theorem 7.1 If a system is perfectly secure (Eq. (7.1) holds) then the
equality

P(Ej|Mi) = P(Ej)                                        (7.3)

is valid for all i and j. The converse is also true: if (7.3) holds then the
system is perfectly secure.
Proof. By definition of conditional probability

P(A|B) = P(AB)/P(B)

under P(B) ≠ 0. Therefore given P(Ej) ≠ 0 we can write

P(Mi|Ej) = P(MiEj)/P(Ej) = P(Ej|Mi)P(Mi)/P(Ej).

Taking into account Eq. (7.1) we obtain

P(Mi) = P(Ej|Mi)P(Mi)/P(Ej),

i.e.

P(Ej|Mi) = P(Ej).

So Eq. (7.3) is proved. The converse proposition can be proved by the
reverse chain of the presented equalities, see [Shannon (1949)]. □


7.3 Vernam Cipher

This cipher was suggested by the American engineer Gilbert Vernam [Vernam
(1926)] and was used in practice, but the proof of its security was given
much later in [Shannon (1949)]. The Vernam cipher is sometimes called a
one-time pad. We shall describe the cipher for the case of the binary alphabet.
Let the set of messages M consist of binary words of length n, i.e. there
are no more than 2^n messages. In the Vernam cipher, the set of keys consists
of the words of the same length n and each key is used with probability
1/2^n. In other words, all keys are used with equal probabilities.
Let the message be m̄ = m1 m2 ... mn and the key k̄ = k1 k2 ... kn. Then
the ciphertext ē = e1 e2 ... en is produced by the rule

ei = mi ⊕ ki,    i = 1, 2, ..., n,                      (7.4)

where ⊕ denotes addition modulo 2. In other words, the message is enci-
phered by the scheme

    m1 m2 ... mn
  ⊕ k1 k2 ... kn
    e1 e2 ... en

Since addition and subtraction modulo 2 are the same, deciphering is
done by the rule

mi = ei ⊕ ki,    i = 1, 2, ..., n.
Example 7.2 Let m̄ = 01001, k̄ = 11010. Then we obtain ē = 10011.
Summing up ē with k̄ we recover m̄ = 01001.
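
To illustrate rule (7.4), here is a small Python sketch; the function name
vernam and the representation of words as bit strings are our own choices
for illustration, not notation from the text.

    def vernam(bits, key):
        # Rule (7.4): e_i = m_i XOR k_i. The same call deciphers, since
        # addition and subtraction modulo 2 coincide.
        assert len(bits) == len(key)
        return ''.join(str(int(b) ^ int(k)) for b, k in zip(bits, key))

    m = "01001"
    k = "11010"
    e = vernam(m, k)           # gives "10011", as in Example 7.2
    assert vernam(e, k) == m   # summing the ciphertext with the key recovers m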

Theorem 7.2 The Vernam cipher is perfectly secure.

Proof. By Theorem 7.1 it suffices to prove that (7.3) holds. We have

P(Ej|Mi) = P(ē|m̄)
          = P(k1 = e1 ⊕ m1, k2 = e2 ⊕ m2, ..., kn = en ⊕ mn)
          = P(k̄ = k1 k2 ... kn) = 2^-n

(in the last equality we used the assumption that all keys are equiprobable).
Find P(Ej). Provided that events Mi are pairwise mutually exclusive, by
the total probability formula

P(Ej) = Σ_{i=1}^{2^n} P(Mi) P(Ej|Mi).

Taking into account that P(Ej|Mi) = 2^-n we obtain

P(Ej) = 2^-n Σ_{i=1}^{2^n} P(Mi).

Since the sum of probabilities of all messages equals 1 we obtain

P(Ej) = 2^-n.

So, P(Ej|Mi) = P(Ej) = 2^-n, i.e. Eq. (7.3) holds. □
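
The statement of the theorem can also be checked by direct enumeration
for a small n. The following Python sketch is our own illustration (the
non-uniform message distribution is chosen arbitrarily for the test); it
verifies that P(Ej|Mi) = P(Ej) = 2^-n for n = 3.

    from itertools import product

    n = 3
    words = [''.join(bits) for bits in product('01', repeat=n)]

    # An arbitrary (non-uniform) a priori distribution on the messages.
    weights = {m: i + 1 for i, m in enumerate(words)}
    total = sum(weights.values())
    P_M = {m: w / total for m, w in weights.items()}

    P_K = 2 ** -n   # every key is used with probability 2^-n

    def xor(a, b):
        return ''.join(str(int(x) ^ int(y)) for x, y in zip(a, b))

    for e in words:
        # P(Ej) by the total probability formula, over all (message, key) pairs.
        P_E = sum(P_M[m] * P_K for m in words for k in words if xor(m, k) == e)
        for m in words:
            # P(Ej|Mi): the probability over keys that m enciphers to e.
            P_E_given_M = sum(P_K for k in words if xor(m, k) == e)
            assert abs(P_E_given_M - P_E) < 1e-12   # Eq. (7.3)
    print("Eq. (7.3) verified; P(Ej|Mi) = P(Ej) =", P_E)
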
It is known that the Vernam cipher was used for securing governmen-
tal communications, e.g. on the so-called hot line "Moscow-Washington"
[Menezes et al. (1996)]. The key was delivered by a trusted courier. There
is a point of view that the Vernam cipher is very expensive because the
length of the key must be equal to the message length. On the other hand,
the cipher can be used in many practical situations. For example, students
Alice and Bob can agree to use this cipher for securing their e-mail mes-
sages when they part for holidays (we advise the reader to work out the
scheme and write the program assuming that the length of letters they will
exchange does not exceed 1.44 Mbytes, i.e. the size of a standard floppy
disk).
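
As a hint for such a program, here is a minimal Python sketch of a byte-wise
one-time pad for files; the function and file names are hypothetical, and it
assumes the shared pad is a file of truly random bytes at least as long as
the message, agreed upon in advance and never reused.

    import secrets

    def make_pad(length, path="pad.key"):
        # Generate a fresh pad of truly random bytes; it must be used only once.
        with open(path, "wb") as f:
            f.write(secrets.token_bytes(length))

    def vernam_file(in_path, pad_path, out_path):
        # The same function enciphers and deciphers, since XOR is its own inverse.
        with open(in_path, "rb") as f:
            data = f.read()
        with open(pad_path, "rb") as f:
            pad = f.read()
        assert len(pad) >= len(data), "the pad must be no shorter than the message"
        with open(out_path, "wb") as f:
            f.write(bytes(b ^ k for b, k in zip(data, pad)))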


7.4 Elements of Information Theory

We have proved that the Vernam cipher is perfectly secure, but it requires
the key to be as long as the message. Shannon showed that in any perfect
secrecy system the key length must be no less than the message entropy
(a notion we shall define below), i.e. proportional to the message length.
However, in many practical systems we have to use short keys (say, hundreds
or thousands of bits) for encrypting long messages (hundreds of kilobytes
and more). In this case, we may construct the so-called ideal cryptosystems,
first described by Shannon. For constructing ideal systems and studying
their properties Shannon suggested using the notions and results of
information theory. In this section, we shall define and briefly illustrate
these. A sufficiently complete and rigorous study can be found in
many textbooks, e.g. [Gallager (1968); McEliece (1984); Blahut (1987);
Welsh (1988)]. It is also useful to consult the pioneering work [Shannon
(1948)]. The reader may refer to any of these books for strict definitions
and proofs.
We begin with the definition of the main notion, the Shannon entropy. Let
a discrete random variable ξ be given, taking on values a1, a2, ..., ar with
probabilities P1, P2, ..., Pr, respectively.

Definition 7.2 The entropy of the random variable ξ is defined by the
equation

H(ξ) = - Σ_{i=1}^{r} Pi log Pi

assuming 0 log 0 = 0.
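
For illustration, the entropy of a given distribution can be computed
directly from this definition. The following small Python sketch is our own
example; it uses binary logarithms, so the result is expressed in bits.

    from math import log2

    def entropy(probs):
        # H = -sum_i P_i * log2(P_i), with the convention 0 log 0 = 0.
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))    # 1.0 bit:  a fair coin
    print(entropy([0.25] * 4))    # 2.0 bits: four equiprobable values
    print(entropy([0.9, 0.1]))    # about 0.47 bits: a heavily biased coin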

If one uses binary logarithms (i.e. to the base 2) then the entropy is
measured in bits, which is generally accepted in cryptography, information
theory, and computer science. In the case of natural logarithms, the unit of