Cryptosystems from unique-SVP lattices Ajtai-Dwork’97/07, Regev’03

Cryptosystems from unique-SVP lattices Ajtai-Dwork’97/07, Regev’03. Shai Halevi, MIT, August 2009. Many slides borrowed from Oded Regev.

f(n)-unique-SVP. Promise: the shortest vector u is shorter than all other non-parallel lattice vectors by a factor of f(n). There is an algorithm for 2^n-unique-SVP [LLL82, Schnorr87]; the problem is believed to be hard for any polynomial gap f(n) = n^c. [Figure: the gap axis f(n), from 1 through n^c (believed hard) up to 2^n (easy).]

Ajtai-Dwork & Regev’03 PKEs (roadmap, flattened from a diagram). Worst-case Search u-SVP, Worst-case Decision u-SVP, and the "Worst-case Distinguisher" for Wavy-vs-Uniform together form the basic intuition (Regev03 goes from decision to search via "Hensel lifting", AD97 via a geometric argument; the remaining links are nearly-trivial reductions). Built on top of the distinguisher: the AD97 PKE (bit-by-bit, n-dimensional, via the worst-case/average-case reduction and the leftover hash lemma); the AD07 PKE (O(n) bits, n-dimensional, amortizing by adding dimensions); and the Regev03 PKE (bit-by-bit, 1-dimensional, projecting to a line).

n-dimensional distributions. Distinguish between the two distributions: wavy (periodic in a random direction) vs. uniform. [Figure: samples from each.]

Dual Lattice. Given a lattice L, the dual lattice is L* = { x | for all y ∈ L, <x,y> ∈ Z }. [Figure: a lattice L with spacing 5 and its dual L* with spacing 1/5.]

L*, the dual of L. [Figure: Case 1, where L has a unique shortest vector u of length 1/n, so L* is concentrated on hyperplanes orthogonal to u spaced n apart; Case 2, where λ1(L) > n, so L* shows no such structure.]

Reduction. Input: a basis B* for L*. Produce a distribution that is: wavy if L has a unique shortest vector (|u| ≤ 1/n); uniform on P(B*) if λ1(L) > n. How: choose a point from a Gaussian of radius n and reduce it mod P(B*). Conceptually, this is a "random L* point" with a Gaussian(n) perturbation.
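A minimal numerical sketch of this sampling step (not code from the talk), assuming the dual basis is given as the columns of a matrix; the toy dimension, basis, and radius below are placeholders. The same mod-P(B*) helper is reused in later sketches.

    import numpy as np

    def reduce_mod_parallelepiped(x, B_star):
        """Return x mod P(B*): keep only the fractional part of x's coordinates in basis B*."""
        coeffs = np.linalg.solve(B_star, x)            # coordinates of x in the basis B* (columns)
        return B_star @ (coeffs - np.floor(coeffs))

    def sample_point(B_star, radius):
        """A Gaussian of roughly the given radius, reduced mod P(B*); wavy or uniform depending on L."""
        n = B_star.shape[0]
        g = np.random.normal(scale=radius / np.sqrt(n), size=n)   # expected length ~ radius
        return reduce_mod_parallelepiped(g, B_star)

    if __name__ == "__main__":
        n = 4
        B_star = np.eye(n) + 0.1 * np.random.randn(n, n)          # toy dual basis
        print(sample_point(B_star, radius=float(n)))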

Creating the Distribution. [Figure: L* and L* plus the perturbation, in Case 1 (wavy along the direction of u) and Case 2 (filling P(B*) uniformly).]

Analyzing the Distribution. Theorem (using [Banaszczyk’93]): up to an exponentially small error, the distribution obtained above depends only on the points of L within distance n of the origin. Therefore: Case 1, determined by the multiples of u, hence wavy on hyperplanes orthogonal to u; Case 2, determined by the origin alone, hence uniform.

Proof of Theorem. For a set A in R^n, define its Gaussian mass ρ(A). The Poisson Summation Formula relates ρ(L) to ρ(L*), and Banaszczyk’s theorem bounds, for any lattice L, the Gaussian mass lying far from the origin. [The slide's formulas were images and did not survive transcription; the standard statements are sketched below.]
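For reference, a sketch of the standard statements (these are reconstructions of the textbook forms, not the slide's own formulas):

    % Gaussian mass of a set A in R^n:
    \rho(A) \;=\; \sum_{x \in A} e^{-\pi \|x\|^2}

    % Poisson summation relates the Gaussian mass of L to that of its dual:
    \rho(L) \;=\; \det(L^*)\,\rho(L^*)

    % Banaszczyk '93: all but an exponentially small fraction of the Gaussian mass
    % of any lattice L lies within distance \sqrt{n} of the origin:
    \rho\bigl(L \setminus \sqrt{n}\,\mathcal{B}\bigr) \;\le\; 2^{-\Omega(n)}\,\rho(L)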

Proof of Theorem (cont.). In Case 2, the distribution obtained is very close to uniform. [The supporting calculation was an image and did not survive transcription.]

Ajtai-Dwork & Regev’03 PKEs (roadmap, revisited). Worst-case Search u-SVP, Worst-case Decision u-SVP, and the "Worst-case Distinguisher" Wavy-vs-Uniform form the basic intuition (Regev03: "Hensel lifting"; AD97: geometric). Next: the AD97 route from the distinguisher back to search.

DistinguishSearch, AD97 Reminder: L* lives in hyperplanes We want to identify u Using an oracle that distinguishes wavy distributions from uniform in P(B*) u H1 H0 H-1

The plan: use the oracle to distinguish points close to H0 from points close to H1; then grow very long vectors that are rather close to H0. This gives a very good approximation of u, which we then use to find u exactly.

Distinguishing H0 from H1 Input: basis B* for L*, ~length of u, point x And access to wavy/uniform distinguisher Decision: Is x 1/poly(n) close to H0 or to H1? Choose y from a wavy distribution near L* y = Gaussian(s)* with s < 1/2|u| Pick aR[0,1], set z = ax + y mod P(B*) Ask oracle if z is drawn from wavy or uniform distribution * Gaussian(s): variance s2 in each coordinate

Distinguishing H0 from H1 (cont.) Case 1: x close to H0 ax also close to H0 ax + y mod P(B*) close to L*, wavy x H0

Distinguishing H0 from H1 (cont.) Case 2: x close to H1 ax “in the middle” between H0 and H1 Nearly uniform component in the u direction ax + y mod P(B*) nearly uniform in P(B*) x H1 H0

Distinguishing H0 from H1 (cont.) Repeat poly(n) times, take majority Boost the advantage to near-certainty Below we assume a “perfect distinguisher” Close to H0  always says NO Close to H1  always says YES Otherwise, there are no guarantees Except halting in polynomial time

Growing Large Vectors. Start from some x0 between H-1 and H+1, e.g. a random vector of length 1/|u|. In each step, choose xi such that |xi| ≈ 2|xi-1| and xi is still somewhere between H-1 and H+1 (we’ll see how in a minute). Keep going for poly(n) steps. The result is x* between H±1 with |x*| = N/|u| for a very large N, e.g. N = 2^(n²).

From xi-1 to xi. Try poly(n) many candidates: a candidate is w = 2·xi-1 + Gaussian(1/|u|). For j = 1,…,m with m = poly(n), set wj = (j/m)·w and check whether wj is near H0 or near H1. If none of the wj’s is near H1, accept w and set xi = w; else try another candidate. [Figure: w = wm and the scaled copies w1, w2, ….]
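A minimal sketch of one growth step (again not the talk's code): near_H0 stands for the majority-vote test from the previous sketch, and the candidate counts are illustrative.

    import numpy as np

    def grow_step(x_prev, u_len, near_H0, m=50, max_candidates=200):
        """Propose w ~ 2*x_prev + Gaussian(1/|u|); accept only if no scaled copy looks near H1.

        near_H0(point) -> bool is the majority-vote test from the previous sketch."""
        n = len(x_prev)
        for _ in range(max_candidates):
            w = 2.0 * x_prev + np.random.normal(scale=1.0 / u_len, size=n)
            # If some w_j = (j/m)*w is near H1, then w strayed outside the H-1..H+1 band: reject it.
            if all(near_H0((j / m) * w) for j in range(1, m + 1)):
                return w                           # accepted: x_i = w, roughly twice as long as x_{i-1}
        raise RuntimeError("no candidate accepted")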

From xi-1 to xi: Analysis. xi-1 between H±1 ⇒ w is between H±n, except with exponentially small probability. If w is NOT between H±1, then some wj is near H1, so w will be rejected. So whenever we make progress, we know we are on the right track.

From xi-1 to xi: Analysis (cont.). With probability 1/poly(n), w is close to H0: the component in the u direction is Gaussian with mean < 2/|u| and variance 1/|u|². [Figure: 2·xi-1 plus the noise, relative to H0 and H1.]

From xi-1 to xi: Analysis (cont.). With probability 1/poly(n), w is close to H0: the component in the u direction is Gaussian with mean < 2/|u| and standard deviation 1/|u|. When w is close to H0, all the wj’s are close to H0, so w will be accepted. Hence, after polynomially many candidates, we make progress whp.

Finding u. Find n-1 such x*’s, where x*t+1 is chosen orthogonal to x*1,…,x*t (by choosing the Gaussians in that subspace). Compute u’ ⊥ {x*1,…,x*n-1} with |u’| = 1. Then u’ is exponentially close to u/|u|: u/|u| = u’+e with |e| = 1/N, and we can make N ≥ 2^n (e.g., N = 2^(n²)). Finally, use Diophantine approximation to solve for u (see the backup slides).

Ajtai-Dwork & Regev’03 PKEs (roadmap, revisited). Worst-case Search u-SVP, Worst-case Decision u-SVP, and the "Worst-case Distinguisher" Wavy-vs-Uniform (Regev03: "Hensel lifting"; AD97: geometric). Next: the AD97 PKE (bit-by-bit, n-dimensional), via the worst-case/average-case reduction and the leftover hash lemma.

Average-case Distinguisher. Intuition: the lattice only matters via the direction of u. Security parameter n, other parameters N, e. A random u on the n-dimensional unit sphere defines D_u(N,e): c = discrete-Gaussian(N) in one dimension; this defines a vector x = c·u/<u,u>, i.e., x parallel to u with <x,u> = c; y = Gaussian(N) in the other n-1 dimensions; e = Gaussian(e) in all n dimensions; output x+y+e. The average-case problem: distinguish D_u(N,e) from G(N,e) = Gaussian(N) + Gaussian(e), for a noticeable fraction of the u’s.
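A minimal sampler sketch for these two distributions (illustrative only: rounding a continuous Gaussian is a crude stand-in for the discrete Gaussian, and the parameters are up to the caller):

    import numpy as np

    def sample_Du(u, N, eps):
        """One sample from D_u(N, eps); u must be a unit vector in R^n."""
        n = len(u)
        c = round(np.random.normal(scale=N))       # crude stand-in for discrete-Gaussian(N)
        x = c * u / np.dot(u, u)                   # parallel to u, with <x, u> = c
        y = np.random.normal(scale=N, size=n)
        y -= np.dot(y, u) * u                      # Gaussian(N) in the n-1 dims orthogonal to u
        e = np.random.normal(scale=eps, size=n)    # Gaussian(eps) in all n dimensions
        return x + y + e

    def sample_G(n, N, eps):
        """The structure-free distribution G(N, eps) = Gaussian(N) + Gaussian(eps)."""
        return np.random.normal(scale=N, size=n) + np.random.normal(scale=eps, size=n)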

Worst-case/average-case (cont.). Thm: distinguishing D_u(N,e) from uniform ⇒ distinguishing Wavy_B* from Uniform_B* for all B*, when L(B) is unique-SVP and λ1(L(B)) is known up to a (1+1/poly(n)) factor; parameters N = 2^Ω(n), e = n^{-4}. Pf: Given B*, scale it so that λ1(L(B)) ∈ [1, 1+1/poly), and also apply a random rotation. Given samples x (from Uniform_B* / Wavy_B*), sample y = discrete-Gaussian_B*(N), which can be done for large enough N, and output z = x+y. "Clearly" z is close to G(N) / D_u(N), respectively.

The AD97 Cryptosystem. Secret key: a random u ∈ unit sphere. Public key: n+m+1 vectors (m = 8n log n): b1,…,bn ← D_u(2^n, n^{-4}) and v0,v1,…,vm ← D_u(n·2^n, n^{-3}). So <bi,u> and <vi,u> are ≈ integer; we insist that <v0,u> ≈ odd integer. We will use P(b1,…,bn) for encryption, and need P(b1,…,bn) to have "width" > 2^n/n.

The AD97 Cryptosystem (cont.). Encryption(s): c’ ← random-subset-sum(v1,…,vm) + s·v0/2; output c = (c’ + Gaussian(n^{-4})) mod P(B). Decryption(c): if <u,c> is closer than ¼ to an integer, say 0, else say 1. Correctness follows from <bi,u>, <vj,u> ≈ integer and the width of P(B).
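A toy sketch of just this encrypt/decrypt rule (emphatically not the real AD97 system): key generation below simply plants vectors whose inner product with u is nearly an integer, with <v0,u> near an odd integer; the bi’s and the reduction mod P(B) are omitted, and all parameters are illustrative.

    import numpy as np

    def toy_keygen(n=16, m=64, noise=1e-4):
        u = np.random.normal(size=n)
        u /= np.linalg.norm(u)                                 # secret direction on the unit sphere
        def planted(target):
            w = np.random.normal(size=n)
            w += (target - np.dot(w, u) + np.random.normal(scale=noise)) * u
            return w                                           # <w, u> ~ target
        v0 = planted(3)                                        # inner product near an odd integer
        vs = [planted(float(np.random.randint(1, 10))) for _ in range(m)]
        return u, (v0, vs)

    def encrypt(bit, pk, noise=1e-4):
        v0, vs = pk
        subset = [v for v in vs if np.random.rand() < 0.5]     # random-subset-sum(v1,...,vm)
        c = sum(subset, np.zeros_like(v0)) + bit * v0 / 2
        return c + np.random.normal(scale=noise, size=len(v0)) # (reduction mod P(B) omitted here)

    def decrypt(c, u):
        ip = np.dot(c, u)
        return 0 if abs(ip - round(ip)) < 0.25 else 1          # closer than 1/4 to an integer -> 0

    u, pk = toy_keygen()
    assert decrypt(encrypt(0, pk), u) == 0 and decrypt(encrypt(1, pk), u) == 1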

AD97 Security. The bi’s and vi’s are chosen from D_u(·); by the hardness assumption, this cannot be distinguished from choosing them from G(·). Claim: had they been chosen from G(·), c would carry no information on the bit s; this is proven by the leftover hash lemma plus smoothing. Note: the vi’s have variance n² larger than the bi’s ⇒ in the G case, vi mod P(B) is nearly uniform.

AD97 Security (cont.). Partition P(B) into q^n cells, q ≈ n^7. For each point vi, consider the cell where it lies and let ri be the corner of that cell. Then Σ_S vi mod P(B) = Σ_S ri mod P(B) + an n^{-5} "error", where S is our random subset. Σ_S ri mod P(B) is a nearly-random cell (we’ll show this using leftover hashing), and the Gaussian(n^{-4}) in c drowns the error term. [Figure: the q-by-q grid of cells.]

Leftover Hashing. Consider the hash function H_R: {0,1}^m → [q]^n. The key is R = [r1,…,rm] ∈ [q]^{n×m}; the input is a bit vector b = [s1,…,sm]^T ∈ {0,1}^m; and H_R(b) = R·b mod q. H is "pairwise independent" (well, almost). Yay, let’s use the leftover hash lemma: (R, H_R(b)) and (R, U) are statistically close, for random R ∈ [q]^{n×m}, b ∈ {0,1}^m, U ∈ [q]^n, assuming m ≥ n log q.
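A minimal sketch of this hash family with toy parameters (illustrative values only; the lemma wants m on the order of n log q or more):

    import numpy as np

    def leftover_hash(R, b, q):
        """H_R(b) = R·b mod q, mapping a bit vector in {0,1}^m to [q]^n."""
        return tuple(int(v) for v in (R @ b) % q)

    n, m, q = 4, 40, 11                              # toy sizes, with m well above n*log2(q)
    R = np.random.randint(0, q, size=(n, m))         # random key R in [q]^(n x m)
    b = np.random.randint(0, 2, size=m)              # a random subset, encoded as a 0/1 vector
    print(leftover_hash(R, b, q))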

AD97 Security (cont.). We proved that Σ_S ri mod P(B) is nearly random. Recall: c0 = Σ_S ri + error(n^{-5}) + Gaussian(n^{-4}) mod P(B). For any x and error e with |e| ≈ n^{-5}, the distributions x+e+Gaussian(n^{-5}) and x+Gaussian(n^{-4}) are statistically close. So c0 ≈ Σ_S ri + Gaussian(n^{-3}) mod P(B), which is close to uniform in P(B). Also c1 = c0 + v0/2 mod P(B) is close to uniform.

Ajtai-Dwork & Regev’03 PKEs (roadmap, wrap-up). Covered: Worst-case Search u-SVP, Worst-case u-SVP Decision, the Wavy-vs-Uniform distinguisher (worst-case and average-case; Regev03: "Hensel lifting", AD97: geometric), and the AD97 PKE (bit-by-bit, n-dimensional, via the leftover hash lemma). Not today: the AD07 PKE (O(n) bits, n-dimensional, amortizing by adding dimensions) and the Regev03 PKE (bit-by-bit, 1-dimensional, projecting to a line).

Backup Slides: Regev’s Decision-to-Search uSVP; Regev’s dimension reduction; Diophantine Approximation.

uSVP Decision ⇒ Search. The chain of reductions: Search-uSVP ≤ Decision mod-p problem ≤ Decision-uSVP.

Reduction from: Decision mod-p. Given a basis (v1,…,vn) for an n^{1.5}-unique lattice and a prime p > n^{1.5}, and writing the shortest vector as u = a1·v1 + a2·v2 + … + an·vn, decide whether a1 is divisible by p.

Reduction to: Decision uSVP. Given a lattice, distinguish between: Case 1, the shortest vector has length 1/n and all non-parallel vectors have length more than n; Case 2, the shortest vector has length more than n.

The reduction. Input: a basis (v1,…,vn) of an n^{1.5}-unique lattice. Scale the lattice so that the shortest vector has length 1/n. Replace v1 by p·v1, and let M be the resulting lattice. If p | a1, then M has a shortest vector of length 1/n and all non-parallel vectors of length more than n; if p ∤ a1, then the shortest vector of M has length more than n.

The input lattice L. [Figure: L with shortest vector u of length 1/n, its multiples -u, u, 2u, and all non-parallel vectors of length more than n.]

The lattice M. The lattice M is spanned by p·v1, v2,…,vn. If p | a1, then u = (a1/p)·(p·v1) + a2·v2 + … + an·vn, so u ∈ M. [Figure: M still contains u, of length 1/n.]

The lattice M. The lattice M is spanned by p·v1, v2,…,vn. If p ∤ a1, then u ∉ M. [Figure: in the direction of u, the shortest vectors of M are ±p·u, of length more than n.]

uSVP Decision ⇒ Search (recap). Search-uSVP ≤ Decision mod-p problem ≤ Decision-uSVP; the step Decision mod-p ≤ Decision-uSVP was just shown, and Search-uSVP ≤ Decision mod-p comes next.

Reduction from: Decision mod-p. Given a basis (v1,…,vn) for an n^{1.5}-unique lattice and a prime p > n^{1.5}, and writing the shortest vector as u = a1·v1 + a2·v2 + … + an·vn, decide whether a1 is divisible by p.

The Reduction. Idea: decrease the coefficients of the shortest vector. If we find out that p | a1, we can replace the basis with p·v1, v2,…,vn; u is still in the new lattice, since u = (a1/p)·(p·v1) + a2·v2 + … + an·vn. The same can be done whenever p | ai for some i.

The Reduction. But what if p ∤ ai for all i? Consider the basis v1, v2-v1, v3,…,vn. In this basis the shortest vector is u = (a1+a2)·v1 + a2·(v2-v1) + a3·v3 + … + an·vn, so the first coefficient is a1+a2. Similarly, we can set the first coefficient to any of a1-⌊p/2⌋·a2, …, a1-a2, a1, a1+a2, …, a1+⌊p/2⌋·a2. One of these is divisible by p, so we choose it and continue.

The Reduction. Repeating this process decreases the coefficients of u by a factor of p at a time. The basis we started from had coefficients ≤ 2^{2n}, and the coefficients are integers, so after ≈ 2n² steps all the coefficients but one must be zero; the last vector standing must be u. (A structural sketch of this loop follows below.)
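A structural sketch of the loop just described (not the actual reduction): the Decision-mod-p oracle coeff_divisible(basis, i), which says whether p divides the i-th coefficient of the shortest vector of the lattice spanned by basis, is an assumed black box, and the step count follows the slide's rough bound.

    def shrink_coefficients(basis, p, coeff_divisible, steps=None):
        """Repeatedly shrink the coefficients a_1,...,a_n of the shortest vector.

        basis: a list of integer vectors (lists); coeff_divisible: the assumed
        Decision-mod-p oracle described in the lead-in."""
        basis = [list(v) for v in basis]
        n = len(basis)
        if steps is None:
            steps = 2 * n * n                              # the slide's ~2n^2 bound
        for _ in range(steps):
            progressed = False
            for i in range(n):
                if coeff_divisible(basis, i):
                    basis[i] = [p * c for c in basis[i]]   # p | a_i: replace v_i by p*v_i
                    progressed = True
            if not progressed:
                # p divides no coefficient: replacing v_2 by v_2 - k*v_1 turns the first
                # coefficient into a_1 + k*a_2; some |k| <= p//2 makes it divisible by p.
                for k in range(-(p // 2), p // 2 + 1):
                    candidate = basis[:]
                    candidate[1] = [b - k * a for a, b in zip(basis[0], basis[1])]
                    if coeff_divisible(candidate, 0):
                        basis = candidate
                        break
        return basis  # eventually only one nonzero coefficient remains; that basis vector is (up to sign) u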

Regev’s dimension reduction

Reducing from n to 1 dimension. Distinguish between the 1-dimensional distributions: uniform vs. wavy. [Figure: both distributions, plotted on a range labeled up to R-1.]

Reducing from n to 1-dimension  First attempt: sample and project to a line

Reducing from n to 1-dimension  But then we lose the wavy structure! We should project only from points very close to the line

The solution. Use the periodicity of the distribution: project onto a 'dense line'. [Figure.]

The solution (cont.). [Figure only.]

The solution (cont.). We choose the line that connects the origin to e1 + K·e2 + K^2·e3 + … + K^{n-1}·en, where K is large enough. The distance between hyperplanes is n and the sides are of length 2^n, so we choose K = 2^{O(n)}. Hence d < O(K^n) = 2^(O(n²)).

Worst-case vs. Average-case. So far we have a problem that is hard in the worst case: distinguish between uniform and d,γ-wavy distributions for all integers d < 2^(n²). For cryptographic applications, we would like a problem that is hard on average: distinguish between uniform and d,γ-wavy distributions for a non-negligible fraction of d in [2^(n²), 2·2^(n²)].

Compressing. The following procedure transforms d,γ-wavy into 2d,γ-wavy for every integer d: sample a from the distribution, and return either a/2 or (a+R)/2, each with probability ½. In general, for any real α ≥ 1, we can compress d,γ-wavy into αd,γ-wavy. Notice that compressing preserves the uniform distribution. We now show a reduction from worst case to average case (a small sketch of the compression step follows below).
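A minimal sketch of the compression step for an integer factor (the slide's factor-2 rule, generalized in the obvious way; the real-valued factors mentioned on the slide are not handled here):

    import numpy as np

    def compress(sample, R, factor=2):
        """One compression step on [0, R): a d-wavy input becomes (factor*d)-wavy,
        while a uniform input stays uniform.  For factor = 2 this is exactly the
        slide's rule: return a/2 or (a+R)/2, each with probability 1/2."""
        j = np.random.randint(0, factor)     # which of the `factor` copies of [0, R) to fold from
        return (sample + j * R) / factor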

Reduction. Assume there exists a distinguisher between the uniform and the d,γ-wavy distribution for some non-negligible fraction of d in [2^(n²), 2·2^(n²)]. Given either a uniform or a d,γ-wavy distribution for some integer d < 2^(n²), repeat the following: choose α in {1,…,2·2^(n²)} according to a certain distribution; compress the distribution by α; check the distinguisher’s acceptance probability. If for some α the acceptance probability differs from that of uniform sequences, return ‘wavy’; otherwise, return ‘uniform’.

Reduction (cont.). If the distribution is uniform: after compression it is still uniform, hence the distinguisher’s acceptance probability equals that of uniform sequences for all α. If the distribution is d,γ-wavy: after compression it lands in the good range with some probability, hence for some α the distinguisher’s acceptance probability differs from that of uniform sequences. [Figure: d somewhere in 1,…,2^(n²), and the target range up to 2·2^(n²).]

Diophantine Approximation

Solving for u (continuing "Finding u" above). Recall that we have B = (b1,…,bn) and u’. The shortest vector u ∈ L(B) is u = Σ mi·bi with |mi| < 2^n, because the basis B is LLL-reduced. u’ is very, very close to u/|u|: u/|u| = u’+e with |e| = 1/N, where N ≥ 2^n (e.g., N = 2^(n²)). Express u’ = Σ xi·bi (the xi’s are reals) and set ni = xi/xn for i = 1,…,n-1. Then ni is very, very close to mi/mn (indeed ni·mn = mi + O(2^n/N)).

Diophantine Approximation. Look for mn < 2^n such that, for all i, ni·mn is within 2^n/N of an integer (for N = 2^(n²)). Consider the n-by-n basis M with 1 in the first n-1 diagonal entries, -ni as the last entry of row i (for i < n), and 1/N in the bottom-right corner. For the integer vector (m1,…,mn), the lattice point z = M·(m1,…,mn)^T = (m1 - n1·mn, m2 - n2·mn, …, mn/N) has all entries O(2^n/N), so z is short; in fact z is the unique shortest vector in L(M) by a factor ≈ N/2^n. Use LLL to find it, then compute the mi’s and u.
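A minimal sketch of building this basis matrix (running LLL on it is left to an external lattice library such as fpylll or Sage and is not shown; the ratios and N are inputs from the previous slide):

    import numpy as np

    def diophantine_basis(ratios, N):
        """Build the n x n basis M from the slide.

        ratios = [n_1, ..., n_{n-1}] are the reals x_i/x_n; for an integer vector
        (m_1, ..., m_n), z = M @ m has entries m_i - n_i*m_n and m_n/N, all O(2^n/N)."""
        n = len(ratios) + 1
        M = np.zeros((n, n))
        for i, r in enumerate(ratios):
            M[i, i] = 1.0            # picks out m_i
            M[i, n - 1] = -r         # so entry i of z is m_i - n_i * m_n
        M[n - 1, n - 1] = 1.0 / N    # keeps m_n itself bounded: the last entry of z is m_n / N
        return M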

Why is z unique-shortest? Assume we have another short vector y ∈ L(M); then mn is not much larger than 2^n, and likewise the other mi’s. Every small y ∈ L(M) corresponds to a v ∈ L(B) such that v/|v| is very, very close to u’, hence also very, very close to u/|u| (within ≈ 2^n/N). Smallish coefficients ⇒ v is not too long (≈ 2^{2n}) ⇒ v is very close to its projection on u (within ≈ 2^{3n}/N) ⇒ there exists c such that (v - c·u) ∈ L(B) is short, of length ≤ 2^{3n}/N + λ1/2 < λ1 ⇒ v must be a multiple of u. [Figure: the angle between v and u is ≈ 2^n/N and |v| ≈ 2^{2n}, so v lies within ≈ 2^{3n}/N of the line through u.]