
Experiment $H_0$. This experiment corresponds to the real-world execution. The simulator simply uses the honest party's input and runs the honest party's algorithm in the protocol execution.

Experiment $H_1$. This experiment is the same as $H_0$, except that in the pre-processing phase, $S$ runs the simulators $S_{fhe}$, $S_{prf}$ and $S_{test}$ instead of running the honest party's algorithm. Note that the functionalities $F_{fhe}$, $F_{prf}$ and $F_{test}$ are still computed honestly, in the same manner as in $H_0$.

Indistinguishability of $H_0$ and $H_1$: From the security of the two-party computation protocols $\Pi_{fhe}$, $\Pi_{prf}$ and $\Pi_{test}$, it immediately follows that the output distributions of $H_0$ and $H_1$ are computationally indistinguishable.
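For concreteness, the hybrid distance can be spelled out by a standard triangle-inequality argument over the three simulators; the advantage notation below is ours (not from the original), with each term denoting the simulation advantage of a PPT distinguisher $D$ against the corresponding protocol:
\[
\bigl|\Pr[D(H_0)=1]-\Pr[D(H_1)=1]\bigr| \;\le\; \mathsf{Adv}^{\Pi_{fhe}}_{D}(\kappa)+\mathsf{Adv}^{\Pi_{prf}}_{D}(\kappa)+\mathsf{Adv}^{\Pi_{test}}_{D}(\kappa) \;\le\; \mathsf{negl}(\kappa).
\]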

Experiment $H_2$. This experiment is the same as $H_1$, except that in the offline phase, $S$ runs the simulator $S_{ver}$ instead of running the honest party's algorithm. $S$ answers the output query of $S_{ver}$ by computing $F_{ver}$ in the same manner as in the description of $S$.

Indistinguishability of $H_1$ and $H_2$: From the security of the two-party computation protocol $\Pi_{ver}$, it immediately follows that the output distributions of $H_1$ and $H_2$ are computationally indistinguishable.

Experiment $H_3$. This experiment is the same as $H_2$, except that $S$ computes the bits $b_{i,j}$ for the honest party $P_i$ as truly random bits (instead of computing them pseudorandomly).

Indistinguishability of $H_2$ and $H_3$: Follows immediately from the security of the PRF.
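As a sanity check, the reduction is immediate: any distinguisher $D$ between $H_2$ and $H_3$ yields a distinguisher against the PRF with the same advantage (the $\mathsf{Adv}^{prf}$ notation below is ours):
\[
\bigl|\Pr[D(H_2)=1]-\Pr[D(H_3)=1]\bigr| \;\le\; \mathsf{Adv}^{prf}(\kappa) \;\le\; \mathsf{negl}(\kappa).
\]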

Experiment $H_4$. This experiment is the same as $H_3$, except that now, in order to compute the final output of $F_{ver}$, $S$ queries the ideal functionality $F$ instead of performing decryption in the final step.

Indistinguishability of $H_3$ and $H_4$: We now claim that hybrids $H_3$ and $H_4$ are statistically indistinguishable. Towards contradiction, suppose that there exists a distinguisher that can distinguish between the output distributions of $H_3$ and $H_4$ with inverse polynomial probability $p(\kappa)$. Note that the only difference between $H_3$ and $H_4$ is the manner in which the final outputs are computed. In other words, the existence of such a distinguisher implies that the outputs computed in $H_3$ and $H_4$ differ. However, conditioned on the event that the worker $W$ performs the computation correctly, the checks performed by $F_{ver}$ on the inputs of the parties (i.e., step 2(c) in the description of $F_{ver}$) guarantee that the outputs in both experiments must be the same. Thus, from check 2(b) of $F_{ver}$, the existence of such a distinguisher $D$ implies that with inverse polynomial probability $p'(\kappa)$, the worker $W$ is able to provide incorrect answers at positions $p_j$, and correct answers at positions $4 - p_j$, for all $j \in [n]$. We now obtain a contradiction using the soundness lemma of Chung et al. [CKV10].
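To make the cut-and-choose structure underlying this soundness argument concrete, here is a minimal, purely illustrative Python sketch. It assumes one real and one test instance per index $j$, placed in a random order determined by the bit $b_{i,j}$; the encryption is abstracted away, and the names prepare_queries and verify are ours, not the protocol's.

```python
import secrets

def prepare_queries(real_instances, test_instances):
    """Pair each real instance with a test instance in a random order.

    Returns the ordered pairs handed to the worker, together with the
    secret bits recording which slot of each pair holds the test instance
    (playing the role of the bits b_{i,j} / positions p_j in the text).
    """
    queries, secret_bits = [], []
    for real, test in zip(real_instances, test_instances):
        b = secrets.randbits(1)   # random placement bit for this pair
        pair = [None, None]
        pair[b] = test            # test instance goes to slot b
        pair[1 - b] = real        # real instance goes to the other slot
        queries.append(tuple(pair))
        secret_bits.append(b)
    return queries, secret_bits

def verify(answers, secret_bits, expected_test_answers):
    """Accept only if every test position was answered correctly."""
    return all(pair_answers[b] == expected
               for pair_answers, b, expected
               in zip(answers, secret_bits, expected_test_answers))
```

A worker that answers incorrectly on the real instances while still passing verification must answer correctly at all $n$ test slots; when the real and test ciphertexts look identical to it (as in $G'$ below), this amounts to guessing all $n$ placement bits, which succeeds with probability at most $2^{-n}$.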

In more detail, we now consider an experiment $G$ in which the simulator interacts with the server as in $H_4$, and then stops the experiment at the end of the online phase. That is, in $G$, for every $j \in [n]$, $S$ prepares each $\hat{X}_{i,j} \leftarrow \mathsf{Enc}_{PK}(\mathsf{Enc}_{pk}(x_i))$ and $\hat{R}_{i,j} \leftarrow \mathsf{Enc}_{PK}(R_i)$. Now, consider an alternate experiment $G'$ that is the same as $G$, except that $S$ now prepares $\hat{X}_{i,j} \leftarrow \mathsf{Enc}_{PK}(R_i)$. Then, the following inequality follows from the semantic security of the (outer-layer) FHE scheme:

\[
\begin{aligned}
&\Pr[\text{$W$ correct on $(\hat{R}_{i,1}, \ldots, \hat{R}_{i,n})$ and incorrect on $(\hat{X}_{i,1}, \ldots, \hat{X}_{i,n})$ in $G$}] \\
&\quad\le \Pr[\text{$W$ correct on $(\hat{R}_{i,1}, \ldots, \hat{R}_{i,n})$ and incorrect on $(\hat{X}_{i,1}, \ldots, \hat{X}_{i,n})$ in $G'$}] + \mathsf{negl}(\kappa)
\end{aligned}
\]

Note that to obtain the above inequality, we rely on the fact that the outsourced function is a PPT function, and thus we can check whether $W$ is correct or incorrect by executing the $\mathsf{Eval}$ algorithm. Note that the simulator knows the positions at which the $\mathsf{Eval}$ checks must be performed, since it knows the PRF key of the adversary and can therefore compute its random bits $b_{i^*,j}$.

Now, since in $G'$ the ciphertexts at the real and test positions are identically distributed, the view of $W$ is independent of the randomly chosen positions $p_j$, and it is easy to see that:
\[
\Pr[\text{$W$ correct on $(\hat{R}_{i,1}, \ldots, \hat{R}_{i,n})$ and incorrect on $(\hat{X}_{i,1}, \ldots, \hat{X}_{i,n})$ in $G'$}] \;\le\; \frac{1}{2^{n}}
\]

Thus, combining the above two inequalities, we arrive at a contradiction. We refer the reader to [CKV10] for more details.
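Spelling out the final step (this derivation is ours, under the same assumptions as in the text): combining the two displayed inequalities gives
\[
p'(\kappa) \;\le\; \Pr[\text{$W$ correct on the $\hat{R}_{i,j}$'s and incorrect on the $\hat{X}_{i,j}$'s in $G$}] \;\le\; \frac{1}{2^{n}} + \mathsf{negl}(\kappa),
\]
which, provided $2^{-n}$ is negligible in $\kappa$ (as in the soundness analysis of [CKV10]), contradicts the assumption that $p'(\kappa)$ is an inverse polynomial.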
