
Cryptographic problems. For β > 0, the short integer solution problem SIS_{q,β} is an average-case version of the approximate shortest vector problem on Λ^⊥(A). The problem is: given uniformly random A ∈ Z_q^{n×m} for any desired m = poly(n), find a relatively short nonzero z ∈ Λ^⊥(A), i.e., output a nonzero z ∈ Z^m such that Az = 0 mod q and ‖z‖ ≤ β. When q ≥ β·√n·ω(√(log n)), solving this problem (with any non-negligible probability over the random choice of A) is at least as hard as (probabilistically) approximating the Shortest Independent Vectors Problem (SIVP, a classic problem in the computational study of point lattices [MG02]) on n-dimensional lattices to within Õ(β·√n) factors in the worst case [Ajt96, MR04, GPV08].
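As a concrete illustration, a candidate SIS solution can be checked mechanically. The sketch below (with illustrative toy parameters; `is_sis_solution` is a hypothetical helper, not from the paper) verifies the three conditions: z is nonzero, Az = 0 mod q, and ‖z‖ ≤ β.

```python
# Hypothetical sketch: checking a candidate SIS_{q,β} solution z for a given A.
# All names and the toy parameters are illustrative, not from the paper.
import numpy as np

def is_sis_solution(A, z, q, beta):
    """True iff z is a nonzero integer vector with Az = 0 (mod q) and ‖z‖ ≤ β."""
    z = np.asarray(z, dtype=np.int64)
    if not np.any(z):                        # z must be nonzero
        return False
    if np.any(np.asarray(A) @ z % q):        # Az must vanish mod q
        return False
    return bool(np.linalg.norm(z) <= beta)   # z must be short

# Toy example: q = 17, and z = (1, -1, 0) works because two columns of A coincide.
A = np.array([[3, 3, 5],
              [7, 7, 2]])
z = np.array([1, -1, 0])
print(is_sis_solution(A, z, q=17, beta=2.0))   # True: Az = 0 mod 17 and ‖z‖ = √2
```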

For α > 0, the learning with errors problem LWE_{q,α} may be seen as an average-case version of the bounded-distance decoding problem on the dual lattice (1/q)·Λ(Aᵗ). Let T = R/Z, the additive group of reals modulo 1, and let D_α denote the Gaussian probability distribution over R with parameter α (see Section 2.3 below). For any fixed s ∈ Z_q^n, define A_{s,α} to be the distribution over Z_q^n × T obtained by choosing a ← Z_q^n uniformly at random, choosing e ← D_α, and outputting (a, b = ⟨a, s⟩/q + e mod 1). The search-LWE_{q,α} problem is: given any desired number m = poly(n) of independent samples from A_{s,α} for some arbitrary s, find s. The decision-LWE_{q,α} problem is to distinguish, with non-negligible advantage, between samples from A_{s,α} for uniformly random s ∈ Z_q^n, and uniformly random samples from Z_q^n × T.
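The sampling procedure defining A_{s,α} can be sketched directly. Here a normal distribution with standard deviation α/√(2π) stands in for D_α (consistent with the normalization ρ(x) = exp(−π‖x‖²) of Section 2.3); all names and parameters are illustrative.

```python
# Hypothetical sketch of one sample from A_{s,α}: a ← Z_q^n uniform,
# e ← D_α, output (a, b = ⟨a, s⟩/q + e mod 1).  Toy parameters only.
import numpy as np

def lwe_sample(s, q, alpha, rng):
    n = len(s)
    a = rng.integers(0, q, size=n)                 # a ← Z_q^n uniformly at random
    # D_α has density α⁻¹·exp(−π x²/α²), i.e. standard deviation α/√(2π).
    e = rng.normal(0.0, alpha / np.sqrt(2 * np.pi))
    b = (a @ s / q + e) % 1.0                      # b = ⟨a,s⟩/q + e mod 1, an element of T
    return a, b

rng = np.random.default_rng(0)
q, alpha = 257, 0.005
s = rng.integers(0, q, size=8)                     # secret s ∈ Z_q^n
a, b = lwe_sample(s, q, alpha, rng)
# With small α, b lies close (mod 1) to the lattice point ⟨a,s⟩/q:
print(abs(((b - a @ s / q) + 0.5) % 1.0 - 0.5) < 0.05)   # True
```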

There are a variety of (incomparable) search/decision reductions for LWE under certain conditions on the parameters (e.g., [Reg05, Pei09b, ACPS09]); in Section 3 we give a reduction that essentially subsumes them all. When q ≥ 2·√n/α, solving search-LWE_{q,α} is at least as hard as quantumly approximating SIVP on n-dimensional lattices to within Õ(n/α) factors in the worst case [Reg05]. For a restricted range of parameters (e.g., when q is exponentially large) a classical (non-quantum) reduction is also known [Pei09b], but only from a potentially easier class of problems like the decisional Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding Problem (BDD) (see [LM09]).

Note that the m samples (a_i, b_i) and underlying error terms e_i from A_{s,α} may be grouped into a matrix A ∈ Z_q^{n×m} and vectors b ∈ T^m, e ∈ R^m in the natural way, so that b = (Aᵗs)/q + e mod 1. In this way, b may be seen as an element of Λ^⊥(A)^* = (1/q)·Λ(Aᵗ), perturbed by Gaussian error. By scaling b and discretizing its entries using a form of randomized rounding (see [Pei10]), we can convert it into b′ = Aᵗs + e′ mod q, where e′ ∈ Z^m has discrete Gaussian distribution with parameter (say) √2·αq.
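The scaling-and-discretization step can be sketched as follows. For simplicity, plain nearest-integer rounding stands in for the randomized rounding of [Pei10], so the resulting error e′ is small and integral but not exactly discrete-Gaussian as claimed in the text; all parameters are illustrative.

```python
# Hypothetical sketch: scale b ∈ T^m by q and round each entry, giving
# b′ = Aᵗs + e′ mod q with small integer error e′.  Nearest-integer rounding
# is a stand-in for the randomized rounding of [Pei10].
import numpy as np

rng = np.random.default_rng(1)
n, m, q, alpha = 8, 16, 257, 0.005

s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(n, m))
e = rng.normal(0.0, alpha / np.sqrt(2 * np.pi), size=m)   # e ← D_α^m
b = (A.T @ s / q + e) % 1.0                               # b = (Aᵗs)/q + e mod 1

b_prime = np.rint(q * b).astype(np.int64) % q             # scale by q and round
e_prime = (b_prime - A.T @ s) % q                         # integer error mod q
e_prime = np.where(e_prime > q // 2, e_prime - q, e_prime)  # center into (−q/2, q/2]
print(int(np.max(np.abs(e_prime))))   # small, typically 0-2 for these parameters
```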

2.3 Gaussians and Lattices

The n-dimensional Gaussian function ρ : R^n → (0, 1] is defined as

ρ(x) := exp(−π·‖x‖²) = exp(−π·⟨x, x⟩).
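A quick numerical check of this normalization: in one dimension ρ integrates to 1 over R, which is what makes ρ itself a probability density with no scaling factor. A minimal sketch; the integration grid and tolerance are arbitrary choices.

```python
# Hypothetical check that ρ(x) = exp(−π‖x‖²) has total measure 1 in one dimension.
import numpy as np

def rho(x):
    x = np.asarray(x, dtype=float)
    return np.exp(-np.pi * np.dot(x, x))

# Trapezoid-rule integration of ρ over [-10, 10]; the tail beyond is negligible.
xs = np.linspace(-10.0, 10.0, 200001)
ys = np.exp(-np.pi * xs**2)
total = float(np.sum(ys[1:] + ys[:-1]) * (xs[1] - xs[0]) / 2)
print(abs(total - 1.0) < 1e-6)   # True: ∫ ρ(x) dx = 1
```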

Applying a linear transformation given by a (not necessarily square) matrix B with linearly independent columns yields the (possibly degenerate) Gaussian function

ρ_B(x) := ρ(B⁺x) = exp(−π · xᵗΣ⁺x)   if x ∈ span(B) = span(Σ),
ρ_B(x) := 0                           otherwise,

where Σ = BBᵗ ≥ 0. Because ρ_B depends only on Σ (and not on the specific choice of B), we usually refer to it as ρ_√Σ.

Normalizing ρ_√Σ by its total measure over span(Σ), we obtain the probability distribution function of the (continuous) Gaussian distribution D_√Σ. By linearity of expectation, this distribution has covariance E_{x←D_√Σ}[x·xᵗ] = Σ/(2π). (The 1/(2π) factor is the variance of the Gaussian D₁, due to our choice of normalization.) For convenience, we implicitly ignore the 1/(2π) factor, and refer to Σ as the covariance matrix of D_√Σ.
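The covariance claim can be checked empirically: sampling x = B·g with i.i.d. D₁ coordinates g (each of variance 1/(2π)) gives E[x·xᵗ] = B·(I/(2π))·Bᵗ = Σ/(2π). A minimal sketch, with an arbitrary illustrative B:

```python
# Hypothetical sketch: sample x ← D_√Σ as x = B·g, where g has independent
# D₁ coordinates (standard deviation 1/√(2π)), and verify empirically that
# the covariance E[x·xᵗ] comes out to Σ/(2π).
import numpy as np

rng = np.random.default_rng(2)
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])          # any B with linearly independent columns
Sigma = B @ B.T                     # Σ = BBᵗ

N = 200_000
g = rng.normal(0.0, 1.0 / np.sqrt(2 * np.pi), size=(B.shape[1], N))
x = B @ g                           # N samples from D_√Σ
emp_cov = x @ x.T / N               # empirical E[x·xᵗ]

print(np.allclose(emp_cov, Sigma / (2 * np.pi), atol=0.02))   # True
```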


