
Chapter 3

Security, pseudorandom generators, and pseudorandom functions

In this chapter we discuss various definitions of security. We describe general constructions of encryption schemes that satisfy these security definitions. The constructions build on pseudorandom generators and pseudorandom functions, two fundamental concepts in cryptography.

3.1 Security definitions

An encryption scheme is secure when it cannot be broken by an adversary. To make this more precise, we have to describe the goals of an adversary. Equivalently, we have to say when we consider an encryption scheme broken. We also have to state the resources and capabilities of an adversary.

We start with the goals of an adversary. Of course, we apply Kerckhoffs' principle: the adversary knows everything about the scheme he tries to attack. Only the secret key used is unknown to the adversary.

Goals of an adversary

key recovery: Determine the secret key s used in the encryption scheme.

plaintext recovery: Given a ciphertext v = E_s(m), which is the encryption of a plaintext m, determine the plaintext m.



plaintext distinguishability: Given a ciphertext v = E_s(m_b), b ∈ {0,1}, which is the encryption of one out of two plaintexts m_0, m_1 known to the adversary, determine the bit b.

Clearly, key recovery is the most ambitious goal. Once an adversary knows the secret key, the scheme is completely broken. However, it is conceivable that an adversary can decrypt certain (or even most) ciphertexts without knowing the secret key. At first, the last goal mentioned above may seem strange. The explanation for including plaintext distinguishability in our list of adversarial goals is as follows. An adversary may not be interested in decrypting ciphertexts completely. Instead, he may only be interested in parts of the corresponding plaintext; that is, the adversary only wants to compute partial information about the plaintext. A strong security definition will require that even this is impossible for an adversary. A formal security definition along these lines is called semantic security. Formally defining semantic security is tricky. Fortunately, plaintext indistinguishability is quite easy to formalize, and we will do so shortly. Moreover, in many situations it can be shown that semantic security and plaintext indistinguishability are equivalent. Intuitively, this should be clear: plaintext indistinguishability says that an adversary cannot distinguish encryptions of two plaintexts of his own choice. But then the adversary cannot determine partial information about plaintexts from the corresponding ciphertexts.

Perfect secrecy provides security against adversaries that have unlimited computational power. But as we have seen, perfect secrecy is too expensive. Therefore, we will restrict the computational power, time and memory, of an adversary. Mostly we will be concerned with time-restricted adversaries. For example, we may restrict ourselves to adversaries whose computations are limited to 2^t steps on a Turing machine or a RAM. An encryption scheme that is provably secure against adversaries limited to 2^80 steps will meet most practical security requirements. As our computational model we choose Turing machines, but many results we show, in particular reductions, do not depend on the exact model of computation that we use.

Besides time and memory, we need to consider other capabilities or characteristics of adversaries. Various possible characterizations of adversaries are given in the following list of attack models.

Capabilities of the adversary / models of attacks



eavesdropping attack (eda): The adversary only knows ciphertexts v_1, ..., v_l.

known plaintext attack (kpa): The adversary knows pairs (m_1, E_s(m_1)), ..., (m_l, E_s(m_l)) of plaintexts m_i and their encryptions E_s(m_i).

chosen plaintext attack (cpa): The adversary can adaptively choose plaintexts m_1, ..., m_l for which he gets the corresponding ciphertexts E_s(m_1), ..., E_s(m_l), i.e., after seeing the pairs (m_1, E_s(m_1)), ..., (m_{i-1}, E_s(m_{i-1})) the adversary chooses the next plaintext m_i for which he gets the encryption E_s(m_i).

chosen ciphertext attack (cca): The adversary can adaptively choose ciphertexts v_1, ..., v_l for which he gets the corresponding plaintexts m_1 = D_s(v_1), ..., m_l = D_s(v_l), i.e., after seeing the pairs (v_1, D_s(v_1)), ..., (v_{i-1}, D_s(v_{i-1})) the adversary chooses the next ciphertext v_i for which he gets the decryption D_s(v_i).

Eavesdropping attacks clearly are realistic. Known plaintext, chosen plaintext, and chosen ciphertext attacks become realistic once encryptions and decryptions are done by servers that can be queried or even manipulated by an adversary. Nevertheless, chosen plaintext and chosen ciphertext attacks are mostly (but not entirely) theoretical models. It is always prudent to over- rather than underestimate an adversary. Encryption schemes secure even under chosen ciphertext attacks should resist most practical attacks.

The goals and capabilities or attack models can be combined arbitrarily. We describe some examples and show that in most cases the one-time pad (OTP) is not secure.

Key recovery under eavesdropping attacks: The adversary A sees several ciphertexts v_1, ..., v_l, where v_i = E_s(m_i), i = 1, ..., l, for plaintexts m_i and a single key s. The adversary tries to determine a key ŝ that is consistent with the v_i, i.e., there exist plaintexts m_i such that v_i = E_ŝ(m_i) for all i. Note that there need not be a single key consistent with the ciphertexts v_i.

Example 3.1.1 For the one-time pad (OTP), key recovery with eavesdropping attacks is possible but useless, because no key can be excluded by an eavesdropping attack.



Key recovery under known plaintext attacks: Given pairs (m_1, v_1), ..., (m_l, v_l), where v_i = E_s(m_i), i = 1, ..., l, for some fixed key s, the adversary tries to determine a key ŝ such that E_ŝ(m_i) = v_i for all i. Note that there may be several keys ŝ with this property.

Example 3.1.2 For the OTP, key recovery with known plaintext attacks is possible even with l = 1. Given m_1 and v_1 such that E_s(m_1) = v_1, we must have s = m_1 ⊕ v_1.
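The key-recovery attack of Example 3.1.2 is easy to make concrete. A minimal sketch in Python, using byte strings instead of bit strings (an illustrative choice, not part of the text):

```python
import secrets

def otp_encrypt(s: bytes, m: bytes) -> bytes:
    """One-time pad: E_s(m) = s XOR m, bitwise over equal-length strings."""
    return bytes(a ^ b for a, b in zip(s, m))

# A single known plaintext/ciphertext pair suffices (l = 1).
s = secrets.token_bytes(8)
m1 = b"plaintxt"          # known plaintext (8 bytes)
v1 = otp_encrypt(s, m1)   # observed ciphertext

# Key recovery: s = m_1 XOR v_1.
s_hat = bytes(a ^ b for a, b in zip(m1, v1))
assert s_hat == s
```

The assertion always holds: xoring the known plaintext into its ciphertext cancels m_1 and leaves exactly the key.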

Plaintext recovery under known plaintext attacks: Given pairs (m_i, v_i), v_i = E_s(m_i), i = 1, ..., l, for a key s and a ciphertext v ≠ v_i for all i, the adversary tries to determine the plaintext m such that E_s(m) = v.

Example 3.1.3 For the OTP, plaintext recovery with a kpa is possible; in fact, we already saw that for the OTP even key recovery with a kpa is possible.

Before we look at constructions of secure encryption schemes, we define plaintext indistinguishability more formally. At least in the parts of this course where we pursue the rigorous treatment of cryptography, plaintext indistinguishability is our main security goal. For the time being we consider plaintext indistinguishability under eavesdropping and under chosen plaintext attacks. In both cases we model an attack by a game between an adversary A and a challenger C.

Plaintext indistinguishability game Game_pi-eda (under eavesdropping attacks)

1. Adversary A determines two plaintexts m_0, m_1 ∈ P and sends m_0, m_1 to the challenger C.

2. C chooses a key s ∈ K and a bit b ∈ {0,1} uniformly at random and sends v = E_s(m_b) to A.

3. A computes a bit b' ∈ {0,1}.

4. A wins the game iff b = b'.



If A wins the game, we write Succ_pi-eda(A) = 1. We call Pr(Succ_pi-eda(A) = 1) the success probability of A. Here the probability is over the choice of s and b in step 2. It is also over the internal coin tosses that A and C may use, i.e., we allow the adversary and the challenger to be randomized algorithms. Finally, we call

Adv_pi-eda(A) := |Pr(Succ_pi-eda(A) = 1) − 1/2|

the advantage of A. Any adversary can simply guess a bit b'; with this strategy an adversary wins with probability 1/2. The advantage Adv_pi-eda(A) measures how much better than just guessing an adversary A is doing.
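As a sanity check, Game_pi-eda can be simulated. The sketch below plays the game against the one-time pad on 16-bit strings; the message length and the adversary's strategy are assumptions made for this demo. Since the OTP is perfectly secret, any strategy's empirical success probability stays near 1/2, i.e., the advantage is essentially 0:

```python
import secrets

L = 16  # plaintext length in bits (assumption for the demo)

def otp_encrypt(s: int, m: int) -> int:
    """One-time pad on L-bit strings represented as ints: E_s(m) = s XOR m."""
    return s ^ m

def game_pi_eda(adversary, encrypt, trials: int = 50_000) -> float:
    """Play Game pi-eda repeatedly; return A's empirical success probability."""
    wins = 0
    for _ in range(trials):
        m0, m1 = 0, (1 << L) - 1           # step 1: A picks two plaintexts
        s = secrets.randbits(L)            # step 2: C picks a key ...
        b = secrets.randbits(1)            # ... and a challenge bit
        v = encrypt(s, m1 if b else m0)
        b_prime = adversary(m0, m1, v)     # step 3: A outputs a guess
        wins += (b == b_prime)             # step 4: A wins iff b = b'
    return wins / trials

# An adversary guessing from the ciphertext's low bit: against the OTP,
# every strategy succeeds with probability 1/2, so the advantage is ~0.
succ = game_pi_eda(lambda m0, m1, v: v & 1, otp_encrypt)
print(abs(succ - 0.5))  # close to 0
```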

Definition 3.1.4 Let t ∈ N and ε > 0 be arbitrary. An encryption scheme (E, D) over (P, C, K) is called (t, ε)-indistinguishable under eavesdropping attacks if for every adversary A in Game_pi-eda that runs for at most t steps

Adv_pi-eda(A) < ε.

For plaintext indistinguishability under chosen plaintext attacks we proceed in the same manner, except that in the game between adversary and challenger the adversary can adaptively ask for encryptions of plaintexts of his choice.

Plaintext indistinguishability game Game_pi-cpa (under chosen plaintext attacks)

1. C chooses a key s ∈ K uniformly at random.

2. Adversary A determines two plaintexts m_0, m_1 ∈ P and sends m_0, m_1 to the challenger C. In this step A can ask for the encryptions of arbitrary plaintexts m ∈ P.

3. C chooses a bit b ∈ {0,1} uniformly at random and sends v = E_s(m_b) to A.

4. A computes a bit b' ∈ {0,1}. In this step A can ask for the encryptions of arbitrary plaintexts m ∈ P.

5. A wins the game iff b = b'.

If A wins the game, we write Succ_pi-cpa(A) = 1. We call Pr(Succ_pi-cpa(A) = 1) the success probability of A. Here the probability is over the choice of s and b



in steps 1 and 3. It is also over the internal coin tosses that A and C may use, i.e., we allow the adversary and the challenger to be randomized algorithms. Finally, we call

Adv_pi-cpa(A) := |Pr(Succ_pi-cpa(A) = 1) − 1/2|

the advantage of A.

Definition 3.1.5 Let t ∈ N and ε ∈ R+ be arbitrary. An encryption scheme (E, D) over (P, C, K) is called (t, ε)-indistinguishable under chosen plaintext attacks if for every adversary A in Game_pi-cpa that runs for at most t steps

Adv_pi-cpa(A) < ε.

In our definition of encryption schemes, the encryption E_s and the decryption D_s are done via functions. With this requirement, plaintext indistinguishability under chosen plaintext attacks is impossible. In fact, in step 4 of Game_pi-cpa the adversary A can simply ask for the encryption of m_0 or m_1 and compare the result with the challenge ciphertext v. Then he will win Game_pi-cpa with probability 1.

Rather than concluding that plaintext indistinguishability under chosen plaintext attacks is not attainable, we generalize our definition of encryption schemes. In the future we no longer require that encryptions are done via functions. Instead, an encryption scheme over plaintext set P, ciphertext set C, and key set K will consist of two probabilistic algorithms E, D. That is, when encrypting a plaintext m using secret key s, the algorithm E will use internal random bits to compute a ciphertext c. The ciphertext c depends as before on s and m, but it also depends on the random bits used by E. In particular, even for a fixed key s there will be many possible encryptions of m. As we will see, with this generalized definition of encryption, plaintext indistinguishability under chosen plaintext attacks is possible.

3.2 Pseudorandom generators and plaintext indistinguishable encryption

In this section we see how so-called pseudorandom generators can be used to construct encryption schemes that are plaintext indistinguishable under eavesdropping attacks. We need several preliminary definitions and remarks before we can define pseudorandom generators.

Let Ω be a finite set. As before, for a probability distribution X on Ω we denote the probability of choosing s ∈ Ω according to the distribution X by Pr(X = s).



If A is a deterministic algorithm, we can consider A as a function. Accordingly, we denote the output of A on input x by A(x). The set of legal inputs to A is called the domain of A. The set of possible outputs of A is called the range of A. As with functions, we write A : D → R for a deterministic algorithm with domain D and range R. In the following we restrict ourselves to deterministic algorithms. However, many definitions can easily be extended to functions. If X is a probability distribution on Ω and Ω is a subset of the domain of some deterministic algorithm A, then X and A induce a distribution A(X) on the range of A by the following formula:

Pr(A(X) = r) = ∑_{s ∈ Ω : A(s) = r} Pr(X = s).

Example 3.2.1 We denote by U_5 the uniform distribution on {0,1}^5. We consider an algorithm A : {0,1}^5 → {0,1} that on input x = (x_1, ..., x_5) computes the majority of the bits in x, i.e., A(x) = 1 iff at least three bits x_i are 1. Then

Pr(A(U_5) = 1) = (1/32) · ((5 choose 3) + (5 choose 4) + (5 choose 5)) = (1/32) · (10 + 5 + 1) = 1/2.

Accordingly, Pr(A(U_5) = 0) = 1/2.

Example 3.2.2 We denote by X the following distribution on {0,1}^5. For 1^5 = (1, 1, 1, 1, 1) we have Pr(X = 1^5) = 1/2. For x ∈ {0,1}^5, x ≠ 1^5, we have Pr(X = x) = 1/62. Then

Pr(A(X) = 1) = 1/2 + (1/62) · ((5 choose 3) + (5 choose 4)) = 1/2 + 15/62 = 23/31.

Accordingly, Pr(A(X) = 0) = 8/31.
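Both probabilities can be checked by brute-force enumeration over {0,1}^5; a short sketch using exact arithmetic via Python's Fraction:

```python
from itertools import product
from fractions import Fraction

def A(x):
    """Majority of five bits: 1 iff at least three bits are 1."""
    return 1 if sum(x) >= 3 else 0

# Pr(A(U_5) = 1): count the majority-1 strings among all 2^5 inputs.
p_uniform = Fraction(sum(A(x) for x in product((0, 1), repeat=5)), 32)

# Pr(A(X) = 1) for the skewed distribution of Example 3.2.2.
ones = (1, 1, 1, 1, 1)
p_skewed = sum(
    (Fraction(1, 2) if x == ones else Fraction(1, 62)) * A(x)
    for x in product((0, 1), repeat=5)
)

print(p_uniform, p_skewed)  # 1/2 23/31
```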

Definition 3.2.3 Let X, Y be distributions on some finite set Ω.

1. A deterministic algorithm A : Ω → {0,1} is called an ε-distinguisher for X, Y if

|Pr(A(X) = 1) − Pr(A(Y) = 1)| ≥ ε.

2. Algorithm A is called a (t, ε)-distinguisher for X, Y if A is an ε-distinguisher for X, Y and on every s ∈ Ω algorithm A runs for at most t steps.



3. X, Y are called (t, ε)-indistinguishable if no (t, ε)-distinguisher for X, Y exists.

If we consider the algorithm A and the two distributions U_5, X from our two previous examples, we conclude that A is a 15/62-distinguisher for U_5 and X, since |Pr(A(X) = 1) − Pr(A(U_5) = 1)| = 23/31 − 1/2 = 15/62.

In the following, for any n ∈ N we denote by U_n the uniform distribution on {0,1}^n. Then we have the following two fundamental definitions.

Definition 3.2.4 A distribution X on {0,1}^n is called (t, ε)-pseudorandom if X and U_n are (t, ε)-indistinguishable.

Definition 3.2.5 An algorithm G : {0,1}^n → {0,1}^l is called a (t, ε)-pseudorandom generator (PRG) if the distribution G(U_n) is (t, ε)-pseudorandom.

Actually, we also require that G is an efficient algorithm. However, to make this precise, more involved definitions of pseudorandom generators are required. We leave this issue to more advanced courses and rely on the reader's intuition about efficient algorithms.

Example 3.2.6 Let n ∈ N and consider the algorithm G : {0,1}^n → {0,1}^{2n} that maps a bit string x = (x_1, ..., x_n) to (x_1, ..., x_n, x̄_1, ..., x̄_n). Here b̄ denotes the negation of bit b. Furthermore, consider the algorithm A that on input y = (y_1, ..., y_{2n}) returns 1 iff y_{n+i} = ȳ_i for i = 1, ..., n. One checks that

Pr(A(U_{2n}) = 1) = 2^{-n} and Pr(A(G(U_n)) = 1) = 1.

Moreover, A will use at most cn steps on inputs from {0,1}^{2n}, for some small constant c. Hence, A is a (cn, 1 − 2^{-n})-distinguisher for G(U_n) and U_{2n}. Therefore, G is not a (cn, 1 − 2^{-n})-pseudorandom generator.

Next, we consider the algorithm B that on input y = (y_1, ..., y_{2n}) returns 1 iff y_{2n} = ȳ_n. Assuming that we can retrieve the n-th and the last bit of a 2n-bit string in constant time, algorithm B runs in at most a steps, for some constant a. We also get

Pr(B(U_{2n}) = 1) = 1/2 and Pr(B(G(U_n)) = 1) = 1.

Hence, G(U_n) and U_{2n} are not (a, 1/2)-indistinguishable and G is not an (a, 1/2)-PRG.
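For small n, all probabilities in Example 3.2.6 can be verified by enumeration; a sketch with n = 4 (the choice of n is arbitrary):

```python
from itertools import product

n = 4  # small enough to enumerate all of {0,1}^n and {0,1}^2n

def G(x):
    """Maps x to x followed by its bitwise negation."""
    return x + tuple(1 - b for b in x)

def A(y):
    """Returns 1 iff the second half of y is the negation of the first half."""
    return 1 if all(y[n + i] == 1 - y[i] for i in range(n)) else 0

def B(y):
    """Returns 1 iff the last bit y_2n is the negation of y_n (0-indexed)."""
    return 1 if y[2 * n - 1] == 1 - y[n - 1] else 0

uniform_2n = list(product((0, 1), repeat=2 * n))
prg_out = [G(x) for x in product((0, 1), repeat=n)]

pA_uniform = sum(map(A, uniform_2n)) / len(uniform_2n)  # 2^-n = 0.0625
pA_prg = sum(map(A, prg_out)) / len(prg_out)            # 1.0
pB_uniform = sum(map(B, uniform_2n)) / len(uniform_2n)  # 0.5
pB_prg = sum(map(B, prg_out)) / len(prg_out)            # 1.0
print(pA_uniform, pA_prg, pB_uniform, pB_prg)
```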



Next we will see that we can construct encryption schemes that are plaintext indistinguishable under eavesdropping attacks, provided we already know good pseudorandom generators. So assume G : {0,1}^n → {0,1}^l is a (t, ε)-PRG. To construct an encryption scheme we set P = C = {0,1}^l and K = {0,1}^n. For a key s ∈ {0,1}^n we define the encryption function as follows:

E_s : {0,1}^l → {0,1}^l, m ↦ G(s) ⊕ m,

i.e., using the PRG G the key s is mapped to a bit string of length l, and the resulting bit string is xored with the plaintext m. The decryption function D_s is identical to the encryption function:

D_s : {0,1}^l → {0,1}^l, v ↦ G(s) ⊕ v.

For fixed s ∈ {0,1}^n the functions E_s and D_s are injective, and for every m ∈ {0,1}^l we have D_s(E_s(m)) = G(s) ⊕ G(s) ⊕ m = m. Hence we obtain a valid encryption scheme. We call it the encryption scheme based on the PRG G.
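The scheme can be sketched in a few lines. Since the text assumes an abstract (t, ε)-PRG, the SHAKE-128-based stand-in below is only a plausible placeholder, and the key and plaintext lengths are illustrative assumptions:

```python
import hashlib
import secrets

N_BYTES = 16   # key length n = 128 bits (assumption for the demo)
L_BYTES = 64   # plaintext length l = 512 bits (assumption)

def prg(s: bytes) -> bytes:
    """Stand-in PRG stretching an n-bit seed to l bits.
    SHAKE-128 is used here only as a placeholder; the text assumes
    an abstract (t, eps)-PRG, not any concrete function."""
    return hashlib.shake_128(s).digest(L_BYTES)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(s: bytes, m: bytes) -> bytes:
    return xor(prg(s), m)   # E_s(m) = G(s) XOR m

decrypt = encrypt           # D_s is identical to E_s

s = secrets.token_bytes(N_BYTES)
m = b"a fixed-length test plaintext".ljust(L_BYTES, b".")
assert decrypt(s, encrypt(s, m)) == m   # D_s(E_s(m)) = m
```

The final assertion is exactly the correctness computation D_s(E_s(m)) = G(s) ⊕ G(s) ⊕ m = m.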

Theorem 3.2.7 There exists a constant c such that for all (t, ε)-PRGs G : {0,1}^n → {0,1}^l the encryption scheme based on G is (t', ε)-indistinguishable under eavesdropping attacks, where t' = t − cl.

Proof: We show that if the encryption scheme based on G is not (t', ε)-indistinguishable under eavesdropping attacks, then G is not a (t, ε)-PRG. To prove this, we assume that there exists an adversary A with advantage Adv_pi-eda(A) ≥ ε in the plaintext indistinguishability game Game_pi-eda with the encryption scheme based on G. Using this adversary we construct an ε-distinguisher D for G(U_n) and U_l. We also show that if A runs in t' steps, then D runs in t = t' + cl steps for some constant c. First we describe how to obtain the distinguisher D from the adversary A.



Distinguisher D from plaintext indistinguishability adversary A

On input z ∈ {0,1}^l:

1. Simulate A as in the first step of Game_pi-eda to obtain two plaintexts m_0, m_1.

2. Choose b ∈ {0,1} uniformly at random.

3. Simulate A as in the third step of Game_pi-eda with ciphertext v = z ⊕ m_b to obtain a bit b'.

4. Return 1 iff b = b'.

We claim that Pr(D(U_l) = 1) = 1/2 and Pr(D(G(U_n)) = 1) ≥ 1/2 + ε, which implies that D is an ε-distinguisher for U_l and G(U_n). To determine Pr(D(U_l) = 1), note that with z the bit strings z ⊕ m_0 and z ⊕ m_1 are also uniformly distributed in {0,1}^l. This implies that for z chosen uniformly from {0,1}^l, the input to A in the third step of D is distributed according to U_l, independent of the value of b. Hence A's output b' is independent of the choice of b. Therefore Pr(D(U_l) = 1) = 1/2.

On the other hand, if z is distributed according to G(U_n), the input to A in the third step of D is exactly as in the third step of Game_pi-eda, i.e., A gets the encryption E_s(m_b), where s is chosen uniformly at random from the set of keys and b is a bit chosen uniformly at random. Hence, the probability that b = b' is Pr(Succ_pi-eda(A) = 1) = Adv_pi-eda(A) + 1/2 ≥ 1/2 + ε.

To finish the proof, it remains to determine the number of steps required by D. By assumption on A, the simulations of A in steps 1 and 3 together require at most t' steps. Computing v = z ⊕ m_b can clearly be done in time linear in l. The remaining computations in D require only a constant number of steps. Hence, there exists a constant c such that D uses at most t' + cl = t steps, where t' is the number of steps of A. This concludes the proof.
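The reduction can also be sketched in code. The demo below is hypothetical and not part of the proof: it instantiates D with a deliberately broken "PRG" that ignores its seed, so that a trivial adversary wins Game_pi-eda with certainty and the constructed distinguisher visibly separates G(U_n) from U_l:

```python
import secrets

L = 16  # output length in bits (assumption for the demo)

def distinguisher_from_adversary(adv_choose, adv_guess):
    """Build the distinguisher D of the proof from an adversary A,
    given as two callables for A's two phases (choose plaintexts, guess)."""
    def D(z: int) -> int:
        m0, m1 = adv_choose()              # step 1: simulate A's choice
        b = secrets.randbits(1)            # step 2: random challenge bit
        v = z ^ (m1 if b else m0)          # step 3: ciphertext z XOR m_b
        b_prime = adv_guess(m0, m1, v)
        return 1 if b == b_prime else 0    # step 4
    return D

# A deliberately broken "PRG": it ignores its seed and outputs 0^L,
# so E_s(m) = m and the adversary below wins Game pi-eda with certainty.
def bad_prg(seed: int) -> int:
    return 0

adv_choose = lambda: (0, (1 << L) - 1)
adv_guess = lambda m0, m1, v: 0 if v == m0 else 1
D = distinguisher_from_adversary(adv_choose, adv_guess)

trials = 20_000
p_prg = sum(D(bad_prg(secrets.randbits(L))) for _ in range(trials)) / trials
p_uni = sum(D(secrets.randbits(L)) for _ in range(trials)) / trials
print(p_prg, p_uni)  # close to 1.0 and 0.5
```

Exactly as in the proof: on uniform input D outputs 1 with probability 1/2, while on PRG output D inherits the adversary's success probability 1/2 + ε.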
