
Vectors, Matrices and Tensors. We denote scalars in plain lowercase (e.g. x), vectors in bold lowercase (e.g. v), and matrices in bold uppercase (e.g. A). For the sake of brevity, we use (x, y) to refer to the vector [x^T ∥ y^T]^T.

The ℓ_i norm of a vector is denoted by ∥v∥_i. The inner product is denoted by ⟨v, u⟩; recall that ⟨v, u⟩ = v^T · u. Let v be an n-dimensional vector. For all i = 1, . . . , n, the i-th element of v is denoted v[i]. When applied to vectors, operators such as [·]_q and ⌊·⌉ are applied element-wise.

The tensor product of two vectors v, w of dimension n, denoted v ⊗ w, is the n^2-dimensional vector containing all elements of the form v[i]w[j]. Note that

⟨v ⊗ w, x ⊗ y⟩ = ⟨v, x⟩ · ⟨w, y⟩.

2.1 Learning With Errors (LWE)

The LWE problem was introduced by Regev [Reg05] as a generalization of “learning parity with noise”. For positive integers n and q ≥ 2, a vector s ∈ Z_q^n, and a probability distribution χ on Z, let A_{s,χ} be the distribution obtained by choosing a vector a ← Z_q^n uniformly at random and a noise term e ← χ, and outputting (a, [⟨a, s⟩ + e]_q) ∈ Z_q^n × Z_q. Decisional LWE (DLWE) is defined as follows.

Definition 2.2 (DLWE). For an integer q = q(n) and an error distribution χ = χ(n) over Z, the (average-case) decision learning with errors problem, denoted DLWE_{n,m,q,χ}, is to distinguish (with non-negligible advantage) m samples chosen according to A_{s,χ} (for uniformly random s ← Z_q^n) from m samples chosen according to the uniform distribution over Z_q^n × Z_q. We denote by DLWE_{n,q,χ} the variant where the adversary gets oracle access to A_{s,χ} and is not a priori bounded in the number of samples.
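To make the distribution A_{s,χ} concrete, the following is a minimal sketch of drawing samples (a, [⟨a, s⟩ + e]_q), using toy parameters and a uniform B-bounded noise distribution as a stand-in for χ; the parameters and the noise distribution here are illustrative only, not ones the reductions below require.

```python
import numpy as np

# Illustrative toy parameters (not from the paper): tiny n, q for demonstration.
n, q, m = 8, 97, 4
B = 3  # magnitude bound for this toy B-bounded stand-in for chi

rng = np.random.default_rng(0)

# Secret s <- Z_q^n, chosen uniformly at random.
s = rng.integers(0, q, size=n)

def lwe_sample(s, q, B, rng):
    """Draw one sample (a, [<a, s> + e]_q) from A_{s,chi}."""
    a = rng.integers(0, q, size=len(s))  # a <- Z_q^n uniformly at random
    e = int(rng.integers(-B, B + 1))     # toy B-bounded noise term e <- chi
    b = (int(a @ s) + e) % q             # [<a, s> + e]_q in Z_q
    return a, b

samples = [lwe_sample(s, q, B, rng) for _ in range(m)]
```

The DLWE_{n,m,q,χ} task is then to distinguish m such pairs from m pairs drawn uniformly from Z_q^n × Z_q; with B tiny relative to q, as here, the structure is information-theoretically present but assumed computationally hard to detect for suitable parameters.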

There are known quantum (Regev [Reg05]) and classical (Peikert [Pei09]) reductions between DLWE_{n,m,q,χ} and approximating short vector problems in lattices. Specifically, these reductions take χ to be (discretized versions of) the Gaussian distribution, which is statistically indistinguishable from B-bounded, for an appropriate B. Since the exact distribution χ does not matter for our results, we state a corollary of the results of [Reg05, Pei09] (in conjunction with the search-to-decision reductions of Micciancio and Mol [MM11] and Micciancio and Peikert [MP11]) in terms of the bound B. These results also extend to additional forms of q (see [MM11, MP11]).
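One common way to discretize the Gaussian, sketched below under illustrative parameters: sample a continuous Gaussian of width on the order of √n, round to the nearest integer, and resample on the negligible-probability event that the magnitude exceeds B. This is only a sketch of the B-bounded property; the reductions of [Reg05, Pei09] require the specific discretized Gaussian distributions defined there.

```python
import numpy as np

# Illustrative parameters (not prescribed by the paper).
n = 64
sigma = np.sqrt(n)                          # Gaussian width on the order of sqrt(n)
B = int(np.ceil(np.log2(n) * np.sqrt(n)))   # bound of roughly omega(log n) * sqrt(n)

rng = np.random.default_rng(1)

def sample_error(sigma, B, rng):
    """Rounded-Gaussian sample, resampled until it is B-bounded."""
    while True:
        e = int(np.rint(rng.normal(0.0, sigma)))
        if abs(e) <= B:  # rejection happens with negligible probability
            return e

errors = [sample_error(sigma, B, rng) for _ in range(1000)]
```

Because the tail mass beyond B is negligible for B ≥ ω(log n) · √n and width √n, the truncated sampler above is statistically indistinguishable from the untruncated rounded Gaussian.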

Corollary 2.1 ([Reg05, Pei09, MM11, MP11]). Let q = q(n) ∈ N be either a prime power q = p^r, or a product of co-prime numbers q = ∏ q_i such that for all i, q_i = poly(n), and let B ≥ ω(log n) · √n. Then there exists an efficiently sampleable B-bounded distribution χ such that if there is an efficient algorithm that solves the (average-case) DLWE_{n,q,χ} problem, then:

• There is an efficient quantum algorithm that solves GapSVP_{Õ(n·q/B)} (and SIVP_{Õ(n·q/B)}) on any n-dimensional lattice.

• If in addition q ≥ Õ(2^{n/2}), then there is an efficient classical algorithm for GapSVP_{Õ(n·q/B)} on any n-dimensional lattice.

In both cases, if one also considers distinguishers with sub-polynomial advantage, then we require B ≥ Õ(n) and the resulting approximation factor is slightly larger, Õ(n·√n · q/B).
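To see how the approximation factor of the corollary trades off against the modulus-to-noise ratio, the following small computation (with hypothetical parameter sets, and ignoring the polylogarithmic Õ factors) evaluates n · q/B:

```python
def approx_factor(n, log2_q, log2_B):
    """Approximation factor ~ n * q / B from the corollary, ignoring Õ polylog factors."""
    return n * 2.0 ** (log2_q - log2_B)

# Hypothetical parameter sets: growing q with B fixed weakens the
# underlying lattice assumption (a larger approximation factor).
small_modulus = approx_factor(n=1024, log2_q=30, log2_B=10)
large_modulus = approx_factor(n=1024, log2_q=60, log2_B=10)
```

This is the usual tension in LWE-based constructions: a larger ratio q/B makes homomorphic operations easier to support but corresponds to an easier (larger-factor) GapSVP problem.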


6. FHE without Modulus Switching
