Public-Key Encryption from Different Assumptions
Benny Applebaum, Boaz Barak, Avi Wigderson
Plan: Background; Our results: assumptions & constructions; Proof idea; Conclusions and Open Problems
Private-Key Cryptography (2000 BC - 1970s) vs. Public-Key Cryptography (1976 - ...)
[figure: two parties sharing a secret key k]
Private-Key Crypto: share a key, then talk securely. Public-Key Crypto: talk securely with no shared key.
Public-key candidates: beautiful math, but few of them:
- Discrete Logarithm [Diffie-Hellman76, Miller85, Koblitz87]
- Integer Factorization [Rivest-Shamir-Adleman77, Rabin79]
- Error-Correcting Codes [McEliece78, Alekhnovich03, Regev05]
- Lattices [Ajtai-Dwork96, Regev04]
Private-key candidates: many, and unstructured:
- DES [Feistel+76], RC4 [Rivest87], Blowfish [Schneier93], AES [Rijmen-Daemen98], Serpent [Anderson-Biham-Knudsen98], MARS [Coppersmith+98]
Beautiful structure may lead to unforeseen attacks!
The ugly side of beauty.
Factorization of n-bit integers:
- 300 BC - 1974: trial division, ~exp(n/2)
- 1975: Pollard's algorithm, ~exp(n/4)
- around RSA's invention: continued fractions, ~exp(n^(1/2))
- 1990: quadratic sieve, ~exp(n^(1/2))
- 1994: number field sieve, ~exp(n^(1/3))
- Shor's algorithm (quantum), ~poly(n)
Cryptanalysis of DES:
- 1976: DES invented; trivial 2^56 attack
- 1990: differential attack [Biham-Shamir], 2^47 time + examples
- 1993: linear attack [Matsui], 2^43 time + examples
Are there "ugly" public-key cryptosystems?
Complexity Perspective. Our goals as complexity theorists are to prove:
NP ⊄ P (clique is hard)
⇒ NP is hard on average (clique is hard on average)
⇒ one-way functions / private-key cryptography (planted clique is hard)
⇒ public-key cryptography (factoring is hard)
Goal: PKC based on more combinatorial problems:
- increase our confidence in PKC
- a natural step on the road to the Ultimate Goal
- understand average-case hardness / algorithmic aspects of natural problems
This work: several constructions based on combinatorial problems.
Disclaimer: previous schemes are much better in many (most?) aspects:
- efficiency
- factoring is old and well-studied
- lattice problems enjoy worst-case hardness (e.g., n^1.5-GapSVP)
What should be done? Ultimate Goal: a public-key cryptosystem from any one-way function (equivalently, from private-key crypto).
Plan: Background; Our results: assumptions & constructions; Proof idea; Conclusions and Open Problems
Assumption DUE (Decisional Unbalanced Expansion): it is hard to distinguish G from H, where
- G = a random (m,n,d) bipartite graph
- H = a random (m,n,d) graph with a planted shrinking set S of size q whose neighborhood T has size < q/3
I.e., one can't approximate vertex expansion in random unbalanced bipartite graphs. A well-studied problem (cf. densest subgraph), though not exactly in this setting.
We prove:
Thm: cycle-counting / spectral techniques cannot distinguish.
Thm: implied by variants of planted clique in random graphs.
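To make the two distributions concrete, here is a minimal sketch of samplers for G and H; the function names and the assumption q // 3 >= d are ours, not the talk's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_G(m, n, d):
    """Random (m,n,d) bipartite graph: each of the m left vertices picks
    d distinct right neighbors uniformly at random."""
    return [rng.choice(n, d, replace=False) for _ in range(m)]

def sample_H(m, n, d, q):
    """Same, plus a planted shrinking set: a hidden set S of q left vertices
    draws all its neighbors from a set T of size q // 3 (assumes q // 3 >= d),
    so S has a neighborhood of size at most q // 3 while random sets expand."""
    S = set(rng.choice(m, q, replace=False))      # planted shrinking set
    T = rng.choice(n, q // 3, replace=False)      # its small neighborhood
    return [rng.choice(T if i in S else n, d, replace=False) for i in range(m)]
```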
Assumption DUE, second parameterization: hard to distinguish G from H, where H has a planted shrinking set S of size n^0.01 (neighborhood T of size < |S|).
We prove:
Thm 2: 3LIN(m = n^1.4, ε = n^-0.2) ⇒ PKC.
Also: d-LIN(m = n log n, ε = n^-0.1) + DUE(d, m = O(n), q = n^0.1) ⇒ PKC.
Assumption DSF (Decisional Sparse Function): let G be a random (m,n,d) graph, x a random input, and P a (non-linear) predicate; set each output bit to P applied to the d neighbors of that output, e.g., y_i = P(x_2, x_3, x_6). Then y is pseudorandom (indistinguishable from a random string).
I.e., it is hard to "solve" random sparse non-linear equations. Conjectured to be a one-way function already for m = n [Goldreich00].
Thm: hard for myopic algorithms, linear tests, and AC^0 circuits (as long as P is "good", e.g., a 3-majority of XORs).
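A minimal sketch of the DSF map, assuming d = 6 and reading "3-majority of XORs" as the majority of three pairwise XORs; both choices are our illustration, not fixed by the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsf(x, m, d=6):
    """Goldreich-style local function: each output bit applies the fixed
    predicate P (here: majority of three pairwise XORs) to d random inputs."""
    n = len(x)
    y = np.zeros(m, dtype=np.uint8)
    for i in range(m):
        s = x[rng.choice(n, d, replace=False)]    # the d neighbors of output i
        y[i] = 1 if (s[0] ^ s[1]) + (s[2] ^ s[3]) + (s[4] ^ s[5]) >= 2 else 0
    return y

# Under DSF, y = dsf(x, m) for a random x should look like m random bits.
x = rng.integers(0, 2, 100, dtype=np.uint8)
print(dsf(x, 110))
```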
Assumption SLIN (Search-LIN): let G be a random (m,n,d) graph and x a random input; each output is a noisy sparse parity, e.g., y_i = x_2 + x_3 + x_6 + err (an ε-noisy bit). Goal: given G and y, find x. The assumption: one can't recover x.
A well-studied hard problem; sparseness doesn't seem to help the solver.
Thm: SLIN is hard for:
- low-degree polynomials (via [Viola08])
- AC^0 circuits (via [MST03] + [Braverman09])
- n^ε rounds of Lasserre SDPs [Schoenebeck08]
Assumption 3LIN (Search-3LIN): the d = 3 case of SLIN. Let G be a random (m,n,3) graph and x a random input; y_i = x_2 + x_3 + x_6 + err (an ε-noisy bit). Given G and y, one can't recover x.
Thm: 3LIN(m = n^1.4, ε = n^-0.2) ⇒ PKC.
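A minimal sketch of sampling a 3LIN instance with the parameters of the theorem (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_3lin(n, m, eps):
    """y = Mx + e over GF(2): M a random 3-sparse m x n matrix (each row is
    an equation such as x2 + x3 + x6), e an eps-noisy error vector."""
    M = np.zeros((m, n), dtype=np.uint8)
    for i in range(m):
        M[i, rng.choice(n, 3, replace=False)] = 1
    x = rng.integers(0, 2, n, dtype=np.uint8)     # random secret
    e = (rng.random(m) < eps).astype(np.uint8)    # eps-noisy bits
    y = (M @ x + e) % 2
    return M, y, x                                # the solver sees only (M, y)

M, y, x = random_3lin(n=1000, m=int(1000 ** 1.4), eps=1000 ** -0.2)
```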
Main Results. PKC from:
Thm 1: DUE(m, q = log n, d) + DSF(m, d), e.g., m = n^1.1 and d = O(1).
- pro: combinatorial / private-key nature
- con: only n^(log n) security (the secret set has size log n)
Thm 2: SLIN(m = n^1.4, ε = n^-0.2, d = 3).
Thm 3: SLIN(m = n log n, ε, d) + DUE(m = 10000n, q = 1/ε, d).
Legend: DUE: the (m,n,d) graph looks random (a planted S of size q shrinks to q/3). DSF: the output looks random (y_i = P(x_2, x_3, x_6)). dLIN: can't find x (y_i = x_2 + x_3 + x_6 + err).
3LIN vs. Related Schemes

                    Our scheme           [Alekhnovich03]   [Regev05]
#equations          O(n^1.4)             O(n)              O(n)
noise rate          1/n^0.2              1/sqrt(n)         1/sqrt(n)
degree (locality)   3                    n/2               n/2
field               binary               binary            large
evidence            resists SDPs;        --                implied by n^1.5-SVP
                    related to                             [Regev05, Peikert09]
                    refuting 3SAT

Our intuition:
- 1/sqrt(n) noise was a real barrier for PKC construction
- 3LIN is more combinatorial (a CSP)
- low-locality noisy parity is universal for low locality
Plan: Background; Our results: assumptions & constructions; Proof idea; Conclusions and Open Problems
Evidence for S3LIN. Our assumption: Search-3LIN(m = n^1.4, ε = n^-0.2) is hard.
Algorithmic tasks for CSPs (average case), where e.g. Yes = (1-ε)-satisfiable and No = random (hence unsatisfiable) instances:
- Search: certify Yes
- Distinguish: tell apart
- Refute: certify No
The distinguishing version D-3LIN(n^1.4, n^-0.2) is hard for:
- exp(n^ε)-size AC^0 circuits [MST03 + Braverman09]
- n^ε rounds of Lasserre (the strongest SDP hierarchy) [Schoenebeck08]
D-3LIN(m = 1000n, ε = 0.01) was already assumed to be hard [MST03, Alekh03, AIK06].
Evidence for S3LIN (cont.). The refutation version follows from the conjectured hardness of refuting random 3SAT with n^1.4 clauses [Feige02].
Density picture for random 3SAT on n variables:
- satisfiability threshold at ~4.2n clauses
- below ~n^1.4 clauses: refutation conjectured to be hard [Feige02]
- ~n^1.4 clauses: non-deterministic refutation [Feige-Kim-Ofek06]
- above ~n^1.5 clauses: poly-time refutation [Goerdt-Krivelevich01, Friedman-Goerdt-Krivelevich, Goerdt-Lanka, Feige-Ofek03]
Evidence for S3LIN (cont.). Alternative construction based on Search-3LIN(m = n log n, ε = n^-0.2) + the planted dense-subgraph problem.
Comparison to Other Schemes (y = Mx + e, with M a random 3-sparse m x n matrix, x a random n-bit vector, e an error vector of rate ε; over F_p for LWE):
- Ours: S3LIN(n^1.4, n^-0.2). Evidence: the refutation version follows from R3SAT(n^1.4); resists Lasserre(n^ε).
- [Alekhnovich03]: LPN(O(n), 1/sqrt(n)).
- [Regev05]: LWE_p(O(n), 1/sqrt(n)). Evidence: implied by n^1.5-SVP [Regev05, Peikert09]. Implication: SZK is hard.
Our intuition:
- 1/sqrt(n) noise was a real barrier for PKC construction: R3LIN(m, 1/sqrt(n)) does not seem to follow from R3SAT
- 3LIN is more combinatorial (a constraint-satisfaction problem)
- low-locality noisy parity is universal for low-locality functions (learning juntas, Feige's XOR principle, crypto in NC^0?)
Plan: Background; Main Thm: construction of the cryptosystem (Search ⇒ Approximate Search ⇒ Prediction ⇒ Prediction over the planted distribution ⇒ PKC); Variants; Conclusions and Open Problems
S3LIN(m = n^1.4, ε = n^-0.2) ⇒ PKE.
Proof outline: Search ⇒ Approximate Search ⇒ Prediction ⇒ Prediction over the planted distribution ⇒ PKC.
The search problem: given y = Mx + e, where M is a random 3-sparse m x n matrix, x a random input, and e an error vector of rate ε (so y_i = x_2 + x_3 + x_6 + err, an ε-noisy bit), find x.
Our Encryption Scheme (params: m = 10000·n^1.4, ε = n^-0.2, |S| = 0.1·n^0.2):
Public key: a 3-sparse matrix M with a planted set S of rows that XOR to zero, touching the last row (equivalently, M_m = Σ_{i∈S, i≠m} M_i).
Private key: the set S.
Encrypt(b): choose random x and ε-noisy e, let y = Mx + e, and output z = (y_1, y_2, ..., y_m + b).
Decrypt(z): output Σ_{i∈S} z_i.
Correctness: Σ_{i∈S} y_i = Σ_{i∈S} e_i, so with probability (1-ε)^|S| > 0.9 there is no noise in e_S and Σ_{i∈S} z_i = b.
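A minimal runnable sketch of the scheme. One simplification is ours: the planted set S is realized as the rows of a random 3-regular multigraph on |S| vertices (a cycle plus a perfect matching), so the rows are 3-sparse and XOR to zero; the talk plants the H_{q,n} structure of the lemma below, which this only approximates.

```python
import numpy as np

rng = np.random.default_rng(0)

def keygen(n, m, q):
    """Public key: random 3-sparse m x n matrix M with a planted set S of q
    rows (q even) that XOR to zero, one of them the last row."""
    M = np.zeros((m, n), dtype=np.uint8)
    for i in range(m):
        M[i, rng.choice(n, 3, replace=False)] = 1
    S = np.append(rng.choice(m - 1, q - 1, replace=False), m - 1)
    cyc = rng.permutation(q)                                # 3-regular graph:
    edges = [(cyc[i], cyc[(i + 1) % q]) for i in range(q)]  # a q-cycle ...
    edges += [(cyc[2 * i], cyc[2 * i + 1]) for i in range(q // 2)]  # + matching
    cols = rng.choice(n, len(edges), replace=False)  # a fresh column per edge
    M[S] = 0
    for (u, v), c in zip(edges, cols):   # each column is hit exactly twice,
        M[S[u], c] ^= 1                  # so the rows indexed by S XOR to zero
        M[S[v], c] ^= 1
    return M, S

def encrypt(M, b, eps):
    m, n = M.shape
    x = rng.integers(0, 2, n, dtype=np.uint8)
    e = (rng.random(m) < eps).astype(np.uint8)
    z = (M @ x + e) % 2
    z[-1] ^= b                           # z = (y_1, ..., y_{m-1}, y_m + b)
    return z

def decrypt(S, z):
    return int(np.bitwise_xor.reduce(z[S]))   # = b unless noise hits e_S

M, S = keygen(n=200, m=2000, q=10)
print(decrypt(S, encrypt(M, 1, eps=0.01)))    # 1 w/p (1 - eps)^|S| ~ 0.9
```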
Thm (security): if the planted distribution of M is at most 0.99-far from uniform and S3LIN(m, ε) is hard, then one can't distinguish E(0) from E(1).
Proof outline: Search ⇒ Approximate Search ⇒ Prediction ⇒ Prediction over the planted distribution ⇒ security.
More precisely, the basic scheme is only weakly secure: E(0) and E(1) may be distinguishable with some constant advantage (~0.1), and decryption errs with constant probability. Amplify to a full-fledged scheme via standard techniques [Holenstein-Renner05].
Search ⇒ Approximate Search.
S3LIN(m, ε): given (M, y), find x whp.
AS3LIN(m, ε): given (M, y), find a w that agrees with x on a 0.9 fraction of coordinates, whp.
Lemma: a solver A for AS3LIN(m, ε) allows solving S3LIN(m + 10n·lg n, ε).
Proof: use A and the first m equations to obtain w; use w and the remaining equations to recover x, as in the sketch below.
Recovering x_1: for each remaining equation x_1 + x_i + x_k = y_row, compute a vote x_1 = w_i + w_k + y_row; take the majority.
Analysis: assume w_S = x_S for a set S of size 0.9n. Each vote is good w/p ~0.9·0.9·(1-ε) >> 1/2, since i ∈ S, k ∈ S, and "y_row is not noisy" each hold w/p close to 1. If x_1 appears in 2·log n distinct equations, the majority is correct w/p 1 - 1/n^2; a union bound over all variables finishes the proof.
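A minimal sketch of the voting step; here (M, y) are the fresh equations not fed to A, and w is A's approximate solution (names are ours):

```python
import numpy as np

def correct(M, y, w):
    """Majority-vote correction: recover x exactly from an approximate
    solution w (agreeing with x on ~0.9 of the coordinates), using fresh
    3-sparse equations (M, y) that were not given to the solver A."""
    n = M.shape[1]
    x = np.zeros(n, dtype=np.uint8)
    for v in range(n):
        votes = []
        for row, rhs in zip(M, y):
            idx = np.flatnonzero(row)
            if v in idx:                 # equation x_v + x_i + x_k = rhs:
                others = idx[idx != v]   # vote x_v = w_i + w_k + rhs
                votes.append(int(rhs) ^ int(np.bitwise_xor.reduce(w[others])))
        if votes:
            x[v] = int(sum(votes) * 2 > len(votes))   # take the majority
    return x
```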
Approximate Search ⇒ Prediction.
AS3LIN(m, ε): given (M, y), find a w that 0.9-agrees with x, w/p 0.8.
P3LIN(m, ε): given (M, y) and a triple r = (i, j, k), find x_i + x_j + x_k w/p 0.9.
Lemma: a solver A for P3LIN(m, ε) allows solving AS3LIN(m + 1000n, ε).
Proof: split the equations into a main part (M, y) of m rows and a side part (T, z) of 1000n rows. Do 100n times: invoke the predictor A on (T, z) and a fresh triple, turning each answer into a (noisy) vote for some variable x_i.
Analysis: by Markov, whp (T, z) is good, i.e., Pr_{t,j,k}[A(T, z, (t, j, k)) = x_t + x_j + x_k] > 0.8. Conditioned on this, each vote is good w/p >> 1/2; whp we see a 0.99 fraction of the variables many times each, and the votes are independent, so per-variable majorities give a good approximation w.
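A schematic sketch of this loop only; vote_for is a hypothetical stand-in for one invocation of the predictor A on (T, z) plus the combination step the slide leaves implicit, returning one better-than-even vote for some variable:

```python
import numpy as np

def approx_solve(n, rounds, vote_for):
    """Collect 'rounds' noisy votes (each correct w/p >> 1/2) and take
    per-variable majorities; whp a 0.99 fraction of the variables get many
    independent votes, so the result 0.9-agrees with x."""
    votes = [[] for _ in range(n)]
    for _ in range(rounds):              # "do 100n times"
        v, b = vote_for()                # one noisy vote for variable v
        votes[v].append(b)
    w = np.zeros(n, dtype=np.uint8)
    for v in range(n):
        if votes[v]:
            w[v] = int(sum(votes[v]) * 2 > len(votes[v]))
    return w
```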
Prediction over the Planted Distribution.
P3LIN(m, ε): given (M, y) and r = (i, j, k), find x_i + x_j + x_k w/p 0.9.
D = a distribution over (M, r) that is at most 0.99-far from uniform.
Lemma: a solver A for P3LIN_D(m, ε) allows solving P3LIN_U(O(m), ε).
Problem: A might be a bad predictor over the uniform distribution.
Sol: test whether (M, r) is good for A with respect to a random x and random noise; if so, output A's prediction (this happens w/p ≥ 0.01), and otherwise output "don't know".
Proof sketch of the lemma: partition (M, y) into many pieces (M_i, y_i); invoke A(M_i, y_i, r) on each and take the majority.
Problem: the success probability is small even in a single invocation.
Sol: verify that (M_i, r) is good for A with respect to a random x and random noise.
Problem: all invocations use the same x and the same r.
Sol: re-randomization:
- re-randomize x: redefine y_i to be y_i + M_i·x' for a fresh random x' (the effective secret becomes x + x')
- re-randomize r: randomly permute the columns of r and M_i
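A minimal sketch of the two re-randomization steps alone (the partition-and-majority driver around them is omitted; names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def rerandomize(Mi, yi, r):
    """Fresh-looking piece with a related answer: shift the secret by a known
    random x' and permute the columns of M_i and r in the same way."""
    n = Mi.shape[1]
    xp = rng.integers(0, 2, n, dtype=np.uint8)
    yi = (yi + Mi @ xp) % 2              # effective secret becomes x + x'
    pi = rng.permutation(n)              # fresh random column order
    return Mi[:, pi], yi, r[pi], xp

# The caller runs A on the re-randomized piece; A's answer predicts
# <r, x + x'>, so XOR it with <r, xp> to recover the original <r, x>.
```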
A Distribution with a Short Linear Dependency.
H_{q,n} = the uniform distribution over q x n matrices with exactly 3 ones in each row and either 0 or 2 ones in each column (so the rows XOR to zero).
P^q_{m,n} = the (m,n,3)-uniform distribution conditioned on containing a sub-matrix H ∈ H_{q,n} that touches the last row.
Lemma: let m = n^1.4 and q = n^0.2. Then the (m,n,3)-uniform distribution and P^q_{m,n} are at statistical distance at most 0.99 (i.e., δ-close for a constant δ > 0).
Proof: follows from [FKO06].
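One concrete way to produce a member of H_{q,n}, realizing rows as vertices of a random 3-regular multigraph and columns as its edges; this is our illustration and does not sample H_{q,n} uniformly:

```python
import numpy as np

rng = np.random.default_rng(0)

def member_of_H(q, n):
    """q x n matrix with 3 ones per row and 0 or 2 ones per column: rows are
    the vertices of a 3-regular multigraph (cycle + perfect matching) on q
    vertices (q even), columns its 3q/2 edges (needs n >= 3 * q // 2)."""
    cyc = rng.permutation(q)
    edges = [(cyc[i], cyc[(i + 1) % q]) for i in range(q)]
    mat = rng.permutation(q)
    edges += [(mat[2 * i], mat[2 * i + 1]) for i in range(q // 2)]
    cols = rng.choice(n, len(edges), replace=False)  # distinct column per edge
    H = np.zeros((q, n), dtype=np.uint8)
    for (u, v), c in zip(edges, cols):
        H[u, c] ^= 1
        H[v, c] ^= 1
    return H       # every used column has exactly 2 ones, so rows XOR to 0
```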
Plan: Background; Our results: assumptions & constructions; Proof idea; Conclusions and Open Problems
Other Results:
- Our assumptions ⇒ Oblivious Transfer ⇒ general secure computation.
- A new construction of a PRG with large stretch and low locality.
- Our assumptions ⇒ learning k-juntas requires time n^Ω(k).
Conclusions: new cryptosystems from arguably less structured assumptions.
Future directions:
- Improve the assumptions: use random 3SAT?
- Better theoretical understanding of public-key encryption: can public-key cryptography be broken in NP ∩ coNP?
Thank You !