FOC-2 Cryptography with Low Complexity: 3


FOC-2 Cryptography with Low Complexity: 3
Benny Applebaum and Iftach Haitner, Tel-Aviv University

Reminder: Local Functions
The function f_{G,Q} is defined by:
- an (m, n, d) graph G
- a single predicate Q: {0,1}^d → {0,1}
F_{m,n,Q} is the collection {f_{G,Q}} where G is a random (m, n, d) graph.
[Figure: bipartite graph from n input nodes x_1, …, x_n to m output nodes y_1, …, y_m; e.g., y_i = Q(x_1, x_2, x_5).]
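To make the definition concrete, here is a minimal Python sketch (the names sample_graph and eval_local are illustrative, not from the lecture) of sampling a random (m, n, d) graph and evaluating f_{G,Q}:

import random

def sample_graph(m, n, d, rng=random):
    """A random (m, n, d) graph: m hyperedges, each an ordered d-tuple over [n]."""
    return [tuple(rng.randrange(n) for _ in range(d)) for _ in range(m)]

def eval_local(G, Q, x):
    """Evaluate f_{G,Q}: output bit y_i applies Q to the d inputs on hyperedge S_i."""
    return [Q(tuple(x[j] for j in S)) for S in G]

# Example with d = 3 and the XOR predicate:
n, m, d = 16, 20, 3
G = sample_graph(m, n, d)
x = [random.randrange(2) for _ in range(n)]
y = eval_local(G, lambda s: s[0] ^ s[1] ^ s[2], x)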

One-Wayness ⇒ Pseudorandomness [A11]
Conjecture: f is one-way for predicate Q and a random graph with m outputs.
Implication 1: f is 0.99-unpredictable for predicate Q and a random graph with m outputs.
Implication 2: f is ε-pseudorandom for a sensitive Q and a random graph with ε·m^{1/3}/2 outputs ⇒ inapproximability.
Extension to expanders [AR16]: one-wayness over all expanders ⇒ pseudorandomness over all expanders. Supports the PRG conjecture.

One-Wayness ⇒ Pseudorandomness [A11]
Conjecture: f is one-way for predicate Q and a random graph with m = 1.1n outputs.
Implication 1: f is 0.99-unpredictable for predicate Q and a random graph with m outputs. Thm: ⇒ a linear-stretch local PRG.
Implication 2: f is ε-pseudorandom for a sensitive Q and a random graph with ε·m^{1/3}/2 outputs ⇒ inapproximability.

One-Wayness ⇒ Pseudorandomness [A11]
Conjecture: f is one-way for predicate Q and a random graph with m = n^{1.1} outputs.
Implication 1: f is 0.99-unpredictable for predicate Q and a random graph with m outputs. Thm: ⇒ a linear-stretch local PRG.
Implication 2: f is ε-pseudorandom for a sensitive Q and a random graph with ε·m^{1/3}/2 outputs ⇒ inapproximability. Thm: ⇒ a poly-stretch local PRG, with 1/poly distinguishing advantage, or with negl(n) advantage and locality log*(n).
Drawbacks:
- Polynomial security loss
- Yields collections of local primitives
- Relies on hardness for "most functions"
Open: poly-stretch, constant locality, negligible security. Requires highly unbalanced constant-degree expanders!

(½+ε)-Predicting F_{Q,m} ⇒ Inverting F_{Q,mt}
[Figure: graph G with a hyperedge (i, j, k); y_m = Q(x_i, x_j, x_k).]

(½+ε)-Predicting F_{Q,m} ⇒ Inverting F_{Q,mt}
It seems that P tells us something that we already know – isn't it useless?
Observation: P works for many (random) graphs – let's apply it to a modified graph!
[Figure: P outputs b = Q(x_i, x_j, x_k) w.p. ½+ε, where y_m = Q(x_i, x_j, x_k) in graph G.]

(½+ε)-Predicting F_{Q,m} ⇒ Inverting F_{Q,mt}
Step 1: From prediction to a single 2-LIN equation.
Idea: run the predictor on a modified (m, n, d) graph G'.
Assuming P is right: b = y_m iff x_i = x_r. We learn a noisy 2-LIN equation x_r ⊕ x_i = σ.
[Figure: for a random r, P outputs b = Q(x_r, x_j, x_k) w.p. ½+ε, where y_m = Q(x_i, x_j, x_k) in graph G.]

(½+ε)-Predicting F_{Q,m} ⇒ Inverting F_{Q,mt}
Step 1: From prediction to a single 2-LIN equation.
Step 2: Collect t ≈ n²/ε² random noisy equations.
Given (G, y = f_{G,Q}(x)) of length tm, parse it as (G_1, y_1 = f_{G_1,Q}(x)), …, (G_t, y_t = f_{G_t,Q}(x)) and apply the basic procedure to each block.
Example equations: x_1 ⊕ x_10 = 0, x_5 ⊕ x_11 = 1, x_1 ⊕ x_7 = 0.

(½+ε)-Predicting F_{Q,m} ⇒ Inverting F_{Q,mt}
Step 1: From prediction to a single 2-LIN equation.
Step 2: Collect t ≈ n²/ε² random noisy equations.
Step 3: Clean the noise via majority vote and solve the linear equations.
Problem: the noise is adversarial ⇒ no amplification? E.g., the predictor may work only when the output depends on x_1. Also, the same x is used in all blocks – how to analyze?
For a random input: (i, r) is uniform and σ = x_i ⊕ x_r w.p. ½+ε. But correlation!
[Figure: the 2-LIN constructor feeds (G, f_{G,Q}(x)) to the predictor P, which predicts w.p. ½+ε, and outputs (i, r, σ).]

Basic Step
Input: an (m, n, d) graph G = (S_i)_{i∈[m]} and an m-bit string y.
- Change the last hyperedge S_m = (i, a_2, …, a_d) to S'_m = (r, a_2, …, a_d), where r ∈ [n] is random.
- Let G' = (S_1, …, S_{m-1}, S'_m).
- Output (i, r, σ = Predictor(G', y[1:m-1]) ⊕ y_m).
Analysis:
- Let Success(G, x) be the event that the predictor succeeds on (G, y = f_{G,Q}(x)).
- x is good if Pr_G[Success(G, x)] > 1/2 + ε/2. By Markov, an ε/2 fraction of the x's are good.
- For a random G and a good x, (G', i, r) is random ⇒ σ is correct w.p. at least 1/2 + ε/2.
- The event "σ is correct" may depend on r but is independent of i.

Basic Step
Input: an (m, n, d) graph G = (S_i)_{i∈[m]} and an m-bit string y.
- Change the last hyperedge S_m = (i, a_2, …, a_d) to S'_m = (r, a_2, …, a_d), where r ∈ [n] is random.
- Let G' = (S_1, …, S_{m-1}, S'_m).
- Output (i, r, σ = Predictor(G', y[1:m-1]) ⊕ y_m).
Analysis:
- Let Success(G, x) be the event that the predictor succeeds on (G, y = f_{G,Q}(x)).
- x is good if Pr_G[Success(G, x)] > 1/2 + ε/2. By Markov, an ε/2 fraction of the x's are good.
- r is good for x if Pr_G[Success(G, x) | S_m[1] = r] > 1/2 + ε/2. By averaging, every good x has a good r.

Basic Step
Input: an (m, n, d) graph G = (S_i)_{i∈[m]} and an m-bit string y.
- Change the last hyperedge S_m = (i, a_2, …, a_d) to S'_m = (r, a_2, …, a_d), where r ∈ [n] is random.
- Let G' = (S_1, …, S_{m-1}, S'_m).
- Output (i, r, σ = Predictor(G', y[1:m-1]) ⊕ y_m).
For a random G and a good x, r: (G', i) is random ⇒ σ is correct w.p. at least 1/2 + ε/2, and the event "σ is correct" is independent of i.
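A minimal Python sketch of this basic step (it assumes the graph/evaluation helpers from the earlier sketch; the predictor is passed in as a black box, and the names are illustrative):

import random

def basic_step(G, y, n, predictor, r=None, rng=random):
    """One noisy 2-LIN equation from a (1/2+eps)-predictor.
    G: list of m hyperedges (d-tuples over range(n)); y: the m output bits.
    Returns (i, r, sigma) encoding the equation x[i] XOR x[r] = sigma, which
    for a sensitive Q is correct w.p. >= 1/2 + eps/2 when x and r are good."""
    i = G[-1][0]                          # first index of the last hyperedge S_m
    if r is None:
        r = rng.randrange(n)              # random replacement index r in [n]
    G_mod = G[:-1] + [(r,) + G[-1][1:]]   # S'_m = (r, a_2, ..., a_d)
    b = predictor(G_mod, y[:-1])          # guess for the rewired last output bit
    sigma = b ^ y[-1]                     # b == y_m  iff  x[i] == x[r]
    return i, r, sigma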

Inversion
Input: a (tm, n, d) graph G and a tm-bit string y.
- Choose a random r.
- Parse as (G_1, y_1 = f_{G_1,Q}(x)), …, (G_t, y_t = f_{G_t,Q}(x)).
- Apply the basic procedure to each block with the same r, getting t equations x[i_1] ⊕ x[r] = σ_1, …, x[i_t] ⊕ x[r] = σ_t.
- Guess x[r].
- For each equation x[i] ⊕ x[r] = σ, record the vote σ ⊕ x[r] for x[i].
- Output the majority vote for each x[i].
Conditioned on a good (x, r): the i's are uniform and independent, and each vote is correct w.p. at least 1/2 + ε/2, independently of the i's.

Inversion
Input: a (tm, n, d) graph G and a tm-bit string y.
- Choose a random r.
- Parse as (G_1, y_1 = f_{G_1,Q}(x)), …, (G_t, y_t = f_{G_t,Q}(x)).
- Apply the basic procedure to each block with the same r, getting t equations x[i_1] ⊕ x[r] = σ_1, …, x[i_t] ⊕ x[r] = σ_t.
- Guess x[r].
- For each equation x[i] ⊕ x[r] = σ, record the vote σ ⊕ x[r] for x[i].
- Output the majority vote for each x[i].
Conditioned on a good (x, r):
- Fix i. Set t = O(n log² n / ε²). By Chernoff, except w.p. 1/n², we collect log n / ε² votes for i.
- Conditioned on the above, the majority is correct for x[i] except w.p. 1/n², so x[i] is recovered w.p. 1 − 1/n².
- By a union bound over all n indices, x is fully recovered except w.p. 1/n.
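The inversion loop, as a sketch on top of basic_step from the previous sketch (in a full attack one runs it for both guesses of x[r] and keeps the candidate with f_{G,Q}(x) = y):

import random

def invert(blocks, n, predictor, xr_guess, rng=random):
    """blocks: t pairs (G_l, y_l) parsed from the (tm, n, d) instance.
    Collects one vote per block for the equation x[i] = sigma XOR x[r]
    and outputs the majority per index (sketch; try xr_guess in {0, 1})."""
    r = rng.randrange(n)                   # one random r shared by all blocks
    votes = [[0, 0] for _ in range(n)]     # votes[i][b]: evidence that x[i] == b
    for G, y in blocks:
        i, _, sigma = basic_step(G, y, n, predictor, r=r)
        votes[i][sigma ^ xr_guess] += 1
    x = [int(v[1] > v[0]) for v in votes]  # majority vote for each x[i]
    x[r] = xr_guess
    return x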

Optimizing via Randomization
New version: also randomly permute the input nodes via a permutation π: [n] → [n].
Using more optimizations we get:
Thm. (1/2+ε)-predicting F_{Q,m} ⇒ inverting F_{Q,m/ε²}.
[Figure: the 2-LIN constructor feeds (G, f_{G,Q}(x)) through π before calling P; for a random input, (i, r) is uniform and σ = x_i ⊕ x_r w.p. ½+ε – now with no correlation.]
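The input-permutation trick can be sketched as a wrapper around basic_step: relabel the inputs by a random π before calling the predictor, then translate the returned equation back to the original labels (again an illustrative sketch, not the slides' exact optimization):

import random

def permuted_basic_step(G, y, n, predictor, rng=random):
    """Relabel input nodes by a random permutation pi: [n] -> [n], so the
    predictor's errors cannot be tied to fixed input indices, then map the
    resulting 2-LIN equation back to the original labels."""
    pi = list(range(n))
    rng.shuffle(pi)                                  # random permutation
    G_perm = [tuple(pi[j] for j in S) for S in G]    # relabeled hyperedges
    i, r, sigma = basic_step(G_perm, y, n, predictor)
    inv = [0] * n
    for a, b in enumerate(pi):                       # inv = pi^{-1}
        inv[b] = a
    return inv[i], inv[r], sigma                     # equation over original x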

One-Wayness ⇒ Pseudorandomness [A11]
Conjecture: f is one-way for predicate Q and a random graph with m outputs.
Implication 2: f is ε-pseudorandom for a sensitive Q and a random graph with ε·m^{1/3}/2 outputs ⇒ inapproximability.
Thm: ⇒ a poly-stretch local PRG, with 1/poly distinguishing advantage, or with negl(n) advantage and locality log*(n).

Amplifying the Parameters
Thm. If F_{Q,m} is one-way for m > n^{1+δ} and Q is sensitive, then for every a > 0 there is a local PRG with stretch n^a and security 1/n^a.
Proof pipeline (a sketch of the t-XOR step follows below):
- OWF: m = n^{1+δ}, locality d
- ⇒ Unpredictable Gen: m = n^{1+δ/3}, locality d, security 1/2 + n^{-δ/3}
- ⇒ (t-XOR Lemma) UnpredGen: m = n^{1+δ/3}, locality td, security 1/2 + n^{-t'}
- ⇒ (Yao) PRG: m = n^{1+δ/3}, locality td, security m·n^{-t'} = n^{-t''}
- ⇒ (t-Composition) PRG: m = n^{(1+δ/3)^t}, locality (td)^t, security n^{-t''}
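A sketch of the t-XOR step of this pipeline (the generic XOR-amplification idea, assuming the sample_graph/eval_local helpers from the first sketch; the slides' construction may differ in details): XOR t independent copies output-wise, which raises the locality from d to td while the prediction advantage drops roughly as ε^t.

import random

def xor_copies(Q, n, m, d, t, seeds, rng=random):
    """Bitwise XOR of f_{G_1,Q}(x_1), ..., f_{G_t,Q}(x_t) on fresh random
    graphs, where seeds is a list of t independent n-bit inputs.
    Each output bit of the combined generator now reads t*d input bits."""
    assert len(seeds) == t
    y = [0] * m
    for x in seeds:
        G = sample_graph(m, n, d, rng)                # fresh (m, n, d) graph
        y = [a ^ b for a, b in zip(y, eval_local(G, Q, x))]
    return y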

One-Wayness ⇒ Pseudorandomness [A11]
Conjecture: f is one-way for predicate Q and a random graph with m outputs.
Implication 1: f is 0.99-unpredictable for predicate Q and a random graph with m outputs. Thm: ⇒ a linear-stretch local PRG.
⇒ inapproximability.

Handling Insensitive Predicates
b = y_m does not mean that x_i = x_r:
- Other types of noise arise, due to insensitivity.
- The predictor's noise can be correlated.
- Need a predictor with good (constant) advantage (e.g., 2/3).
[Figure: Q = Maj with x_j = x_k = 0: P outputs b = Maj(x_r, 0, 0) w.p. ½+ε while y_m = Maj(x_i, 0, 0), so b = y_m even when x_i ≠ x_r.]

Handling Insensitive Predicates
Assuming a predictor with advantage better than the sensitivity*:
- Condition on: the last entry is sensitive, and P predicts well on (G, x).
- Can recover only a constant fraction of the indices.
- Use existing algorithms to fully recover x [BQ].
[Figure: query P with b_1 = Maj(x_1, 1, 0), …, b_n = Maj(x_n, 1, 0), where y_m = Maj(x_i, x_j, x_k); since Maj(·, 1, 0) is sensitive, this yields x_1 ⊕ x_i, …, x_n ⊕ x_i.]