
and wishes to determine the value of the key. Try to do it with her. Compute
the a posteriori probabilities of the keys used. We can do that by the Bayes
formula

$$P(K_i \mid E) = \frac{P(K_i)\,P(E \mid K_i)}{\sum_{j=1}^{t} P(K_j)\,P(E \mid K_j)}\,,$$

where E, K_1, ..., K_t are some events, the K_i mutually exclusive and
E ⊆ K_1 ∪ ⋯ ∪ K_t. In our case, event E is the receipt of the ciphertext
ē = cccbc, t = 6, and K_i means that the key k = i was chosen.


We assume that all keys are equally likely, i.e.

$$P(K_1) = P(K_2) = \cdots = P(K_6) = 1/6\,.$$

Then

$$
\begin{aligned}
P(E \mid K_1) &= P(\bar m = cccbc) = 0.05^4 \cdot 0.15 \approx 0.000001\,,\\
P(E \mid K_2) &= P(\bar m = bbbcb) = 0.15^4 \cdot 0.05 \approx 0.000025\,,\\
P(E \mid K_3) &= P(\bar m = cccac) = 0.8 \cdot 0.05^4 \approx 0.000005\,,\\
P(E \mid K_4) &= P(\bar m = bbbab) = 0.8 \cdot 0.15^4 \approx 0.000405\,,\\
P(E \mid K_5) &= P(\bar m = aaaca) = 0.8^4 \cdot 0.05 \approx 0.020480\,,\\
P(E \mid K_6) &= P(\bar m = aaaba) = 0.8^4 \cdot 0.15 \approx 0.061440\,.
\end{aligned}
$$
From this we easily find

$$\sum_{j=1}^{6} P(K_j)\,P(E \mid K_j) \approx 0.013726$$

and obtain by the Bayes formula the a posteriori probability that the key
k = 1 was used given that the ciphertext ē = cccbc is received:

$$P(K_1 \mid E) = P(\bar m = cccbc \mid \bar e = cccbc) = \frac{(1/6) \cdot 0.000001}{0.013726} \approx 0.000012\,.$$
Continuing similarly, find the a posteriori probabilities of the other keys:

$$
\begin{aligned}
P(K_2 \mid E) &= P(\bar m = bbbcb \mid \bar e = cccbc) \approx 0.000304\,,\\
P(K_3 \mid E) &= P(\bar m = cccac \mid \bar e = cccbc) \approx 0.000061\,,\\
P(K_4 \mid E) &= P(\bar m = bbbab \mid \bar e = cccbc) \approx 0.004918\,,\\
P(K_5 \mid E) &= P(\bar m = aaaca \mid \bar e = cccbc) \approx 0.25\,,\\
P(K_6 \mid E) &= P(\bar m = aaaba \mid \bar e = cccbc) \approx 0.75\,.
\end{aligned}
$$

We can see that the most probable keys are k = 5 and k = 6, the probabilities
of all other keys being less than 0.01.
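
These computations are easy to reproduce programmatically. Below is a minimal
sketch (in Python; the language and all identifiers are our choice, not from
the book) that enumerates the six substitution keys lexicographically — which
happens to reproduce the candidate messages listed above — and applies the
Bayes formula:

```python
from itertools import permutations

# Letter probabilities of the memoryless source (Example 7.3).
P = {'a': 0.8, 'b': 0.15, 'c': 0.05}
ciphertext = 'cccbc'

# Each key is a substitution plaintext -> ciphertext; enumerating the
# permutations of {a, b, c} lexicographically matches keys 1..6 above.
keys = [dict(zip('abc', perm)) for perm in permutations('abc')]

def likelihood(key, ct):
    """P(E | K_i): probability of the message that encrypts to ct under key."""
    inverse = {v: k for k, v in key.items()}   # ciphertext -> plaintext
    prob = 1.0
    for letter in (inverse[c] for c in ct):
        prob *= P[letter]
    return prob

prior = 1 / len(keys)                          # all keys equally likely
likes = [likelihood(k, ciphertext) for k in keys]
total = sum(prior * l for l in likes)          # ~ 0.013726
for i, l in enumerate(likes, 1):               # Bayes formula
    print(f'P(K{i} | E) ~ {prior * l / total:.6f}')   # K5 ~ 0.25, K6 ~ 0.75
```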
We can see that having eavesdropped only 5 letters Eve is able to determine
the key almost surely. So, from this example and from the example with the
Caesar cipher (Chapter 1), we can conclude that, evidently, there exists some
critical length of ciphertext, n, after which the key can be determined with
probability close to 1. More formally, it can be written

$$H(K \mid e_1, \ldots, e_n) \approx 0\,. \qquad (7.22)$$


This means that, on average, it suffices to intercept n symbols of ciphertext
to determine the key used almost surely. The number n satisfying Eq. (7.22)
is called the unicity distance of a cipher. The following result from [Shannon
(1949)] relates unicity distance with message redundancy and secret key
entropy.
Proposition 7.3 (on unicity distance) Consider a secret-key cryptosystem,
and let H(K) be the entropy of the key and R the message redundancy. Then
for the unicity distance n the following inequality holds:

$$n \ge \frac{H(K)}{R}\,. \qquad (7.23)$$
Let's give some notes on this proposition before discussing the proof.
(1) We can see that if the message redundancy R = 0 then the key will
never be determined, since n = ∞. That is, the cipher cannot be broken
(we demonstrated this in the example with the combination lock, p. 5).
(2) The redundancy can be decreased by means of data compression.
The point is that lossless compression preserves the entropy while decreas-
ing the length of data. Consequently, in the compressed data, the entropy
per symbol (entropy rate) is higher and the redundancy per symbol (re-
dundancy rate) is lower, see (7.21). Therefore after the compression, the
unicity distance of a cipher increases.
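
As a rough numerical illustration (our own, assuming that the redundancy
rate in (7.21) has the form R = log r − h with h the per-letter entropy,
which is consistent with the proof below): a source over r = 3 letters with
entropy rate h ≈ 0.88 has R ≈ 1.58 − 0.88 = 0.70, whereas an ideal lossless
code turns the same data into nearly incompressible binary output whose
per-symbol entropy h′ approaches log 2 = 1, so that

$$R' = \log 2 - h' \to 0 \qquad \text{and} \qquad n \ge \frac{H(K)}{R'} \to \infty\,.$$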
(3) In practice, it is better to use the systems in which the key changes
long before the unicity distance is reached.
Proof. We present only the main idea of the proof. Let the adversary
intercepting the ciphertext ē = e_1 e_2 ... e_n have uniquely determined the
key and, therefore, the message. It means that her uncertainty is decreased
by H(K) + H(m_1, ..., m_n), since she has learned both the key and the
message. With that she obtains n letters over an r-letter alphabet
{a_1, ..., a_r}. We know that the maximum value of the entropy per letter is
log r, which means that the adversary's uncertainty can be decreased at most
by n log r. Hence

$$n \log r \ge H(K) + H(m_1, \ldots, m_n)\,,$$

consequently,

$$n \ge \frac{H(K)}{\log r - H(m_1, \ldots, m_n)/n}\,,$$

from which we obtain that
$$n \ge \frac{H(K)}{R}$$

(here we used the convergence H(m_1, ..., m_n)/n → h_∞ and the definition of
redundancy (7.21)). □
Example 7.4 Let's estimate the unicity distance for the cipher from
Example 7.3. We have H(K) = log 6 ≈ 2.58, log r = log 3 ≈ 1.58, and the
entropy per source letter

$$H = -(0.8 \log 0.8 + 0.15 \log 0.15 + 0.05 \log 0.05) \approx 0.88\,,$$


so

$$n \ge \frac{2.58}{1.58 - 0.88} \approx 3.7\,.$$
We have seen that 5 letters were enough to determine the key almost surely,
and inequality (7.23) corresponds well with our example.
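
The estimate takes only a few lines to check in code. The following sketch
(Python; a minimal illustration under the same assumption R = log r − H as in
the proof of Proposition 7.3) recomputes the quantities of Example 7.4:

```python
from math import log2

probs = [0.8, 0.15, 0.05]              # source letter probabilities
num_keys = 6

H = -sum(p * log2(p) for p in probs)   # entropy per letter ~ 0.88
H_K = log2(num_keys)                   # key entropy, log 6 ~ 2.58
R = log2(len(probs)) - H               # redundancy per letter ~ 0.70

print(f'unicity distance n >= {H_K / R:.1f}')   # ~ 3.7
```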
Let's show by one more example how mutual dependence of letters increases
redundancy and thereby decreases the unicity distance.
Example 7.5 Let a Markov source be given, i.e. a source with memory in
which the probability of each next letter may depend on the preceding letter.
The source is defined by the following transition matrix, where rows
correspond to the preceding letter and columns to the next one:

      a     b     c
a     0    0.9   0.1
b     0    0.1   0.9
c    0.4   0.3   0.3

and initial probabilities P(a) = 0.19, P(b) = 0.34, P(c) = 0.47. It means
that the first letter is generated with its initial probability, and each next
letter is generated with the probability given by the row of the transition
matrix corresponding to the preceding letter; e.g. after the letter a the
letters b and c may occur with probabilities 0.9 and 0.1, respectively.
Let, as in Example 7.3, the cipher with 6 possible keys be used and the
intercepted ciphertext be

ē = bbacbac.
We can see from the transition matrix that the combination aa is impos-
sible (the probability of occurrence of a after the preceding a is zero) and
the combination bb unlikely (the probability of b after b equals 0.1). Con-
sequently, the first pair of letters in the message is most likely cc, i.e. the
substitution c → b was used in encryption. Then ac in the ciphertext
corresponds to either ab or ba in the message. From the transition matrix,
the combination ba is impossible and only ab remains. Therefore we may
conclude that, with high probability, the key is 2, i.e. the second
permutation was used:

k = 2 (a → a, b → c, c → b).

Now compute the exact a posteriori probabilities of the keys as in
Example 7.3. Note that the probability of a particular source message equals
the product of the first letter's initial probability and the probabilities of
transitions from one letter to another:
$$
\begin{aligned}
P(E \mid K_1) &= P(\bar m = bbacbac) = 0.34 \cdot 0.1 \cdot 0 = 0\,,\\
P(E \mid K_2) &= P(\bar m = ccabcab) = 0.47 \cdot 0.3 \cdot 0.4 \cdot 0.9 \cdot 0.9 \cdot 0.4 \cdot 0.9 = 0.016446\,,\\
P(E \mid K_3) &= P(\bar m = aabcabc) = 0.19 \cdot 0 = 0\,,\\
P(E \mid K_4) &= P(\bar m = aacbacb) = 0.19 \cdot 0 = 0\,,\\
P(E \mid K_5) &= P(\bar m = ccbacba) = 0.47 \cdot 0.3 \cdot 0.3 \cdot 0 = 0\,,\\
P(E \mid K_6) &= P(\bar m = bbcabca) = 0.34 \cdot 0.1 \cdot 0.9 \cdot 0.4 \cdot 0.9 \cdot 0.9 \cdot 0.4 = 0.003966\,.
\end{aligned}
$$

From this we find

$$\sum_{j=1}^{6} P(K_j)\,P(E \mid K_j) \approx 0.003402$$

and obtain what we need by the Bayes formula, i.e. the a posteriori
probabilities of the keys under the condition that the ciphertext
ē = bbacbac is received:

$$P(K_2 \mid E) \approx 0.806\,, \qquad P(K_6 \mid E) \approx 0.194\,, \qquad P(K_j \mid E) = 0 \ \ \text{for } j = 1, 3, 4, 5\,.$$

These computations confirm the correctness of the above informal observation.
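
For completeness, here is the same Bayes computation adapted to the Markov
source (again a Python sketch with hypothetical identifiers; the key
enumeration is the same lexicographic one as before):

```python
from itertools import permutations

initial = {'a': 0.19, 'b': 0.34, 'c': 0.47}
transition = {
    'a': {'a': 0.0, 'b': 0.9, 'c': 0.1},
    'b': {'a': 0.0, 'b': 0.1, 'c': 0.9},
    'c': {'a': 0.4, 'b': 0.3, 'c': 0.3},
}
ciphertext = 'bbacbac'
keys = [dict(zip('abc', perm)) for perm in permutations('abc')]

def message_prob(message):
    """Initial probability of the first letter times the transition probabilities."""
    prob = initial[message[0]]
    for prev, cur in zip(message, message[1:]):
        prob *= transition[prev][cur]
    return prob

def likelihood(key, ct):
    inverse = {v: k for k, v in key.items()}   # ciphertext -> plaintext
    return message_prob(''.join(inverse[c] for c in ct))

prior = 1 / len(keys)
likes = [likelihood(k, ciphertext) for k in keys]
total = sum(prior * l for l in likes)          # ~ 0.003402
for i, l in enumerate(likes, 1):
    print(f'P(K{i} | E) ~ {prior * l / total:.3f}')   # K2 ~ 0.806, K6 ~ 0.194
```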


The unicity distance estimation can be used in constructing cryptosys-
tems. For instance, it seems reasonable to change the key as the total length
of encrypted messages approaches the unicity distance.
New approaches to constructing theoretically secure cryptosystems connected
with the use of special coding methods were suggested in the works
[Ryabko and Fionov (1997); Ryabko and Fionov (1999a); Ryabko and
Fionov (1999b); Ryabko (2000)]. The suggested methods are complicated to
describe but efficient from the computational point of view, and they allow
for the construction of unbreakable secret-key ciphers. The main idea of
these methods is to ensure, by means of special coding, zero redundancy of
the message to be encrypted. One such method will be considered in the next
section.


7.6 Ideal Cryptosystems

In Sec. 7.2, the notion of perfect secrecy was introduced and it was shown
then that the Vernam cipher is perfectly secure. We could see that in
this cipher the key length equals the message length and the key is used
only once. If we want to use a short long-term key (which is what the
majority of practical applications demand), then what security level can we
attain at best? In the notes to Proposition 7.3, it was pointed out that
under zero redundancy of the message the unicity distance is infinite.
It means that even a short key (or, equivalently, a key applied many times)
used for encrypting a very long message cannot be disclosed. In its turn, it
means that an adversary who tries to recover the message will face
uncertainty equal to that of the key. Evidently, this is the best that can
be achieved under the said conditions (it is helpful to recall here again the
example with the combination lock from Chapter 1). These observations lead
us to the notion of a strongly ideal cipher introduced in [Shannon (1949)].
Let the message m̄ = m_1 m_2 ... m_t be encrypted under the secret key
k̄ = k_1 k_2 ... k_s, producing the ciphertext ē = e_1 e_2 ... e_t. Denote by
H(m_1 m_2 ... m_t) the entropy of the message, and by H(ē) and H(k̄) the
entropies of the ciphertext and the key, respectively. Then
H(m_1 m_2 ... m_t | ē) is the uncertainty of the message and H(k̄ | ē) the
uncertainty of the key given the ciphertext ē.

Definition 7.3 The cipher is called strongly ideal if

$$H(m_1 m_2 \ldots m_t \mid \bar e) = H(\bar k \mid \bar e) = \min\{H(m_1 m_2 \ldots m_t),\, H(\bar k)\}\,. \qquad (7.24)$$


Let's first consider the case when min{H(m_1 m_2 ... m_t), H(k̄)} =
H(m_1 m_2 ... m_t). In this case the system provides perfect secrecy. Indeed,
Eq. (7.24) converts to H(m_1 m_2 ... m_t | ē) = H(m_1 m_2 ... m_t), which
means perfect secrecy by definition, see Eq. (7.1).
In the other case, when the entropy of the key is less than the entropy of
the message, Eq. (7.24) reduces to

$$H(m_1 m_2 \ldots m_t \mid \bar e) = H(\bar k \mid \bar e) = H(\bar k) \qquad (7.25)$$

for all sufficiently large t. Since we shall deal mainly with the case when
the message length t is large (or even infinite), we shall use Eq. (7.25) as
the definition of a strongly ideal cipher.
Informally, strong ideality of the cipher means that the number of solu-
tions of a cryptogram equals the number of possible keys, all the solutions
being equiprobable (as in the example with the combination lock).
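
As a small worked instance (our illustration, not from the book): for a
uniformly chosen s-bit key, Eq. (7.25) gives

$$H(m_1 m_2 \ldots m_t \mid \bar e) = H(\bar k \mid \bar e) = H(\bar k) = s \ \text{bits}\,,$$

i.e. after observing the whole ciphertext the adversary is still left with
2^s equiprobable solutions, one per key.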
In this section, we shall consider the construction of an ideal system
suggested recently in [Ryabko (2000)], but confine ourselves to the
description of the main idea only, as applied to the case of a binary source
with unknown statistics, i.e. to the case when the message consists of
letters over the alphabet A = {a_1, a_2}, the letters being independent but
their probabilities unknown.
