Learning Parities with Structured Noise
Sanjeev Arora, Rong Ge
Princeton University


Learning Parities with Noise
Secret u = (1,0,1,1,1)
u ∙ (0,1,0,1,1) = 0
u ∙ (1,1,1,0,1) = 1
u ∙ (0,1,1,1,0) = 1

Learning Parities with Noise
- Secret vector u
- Oracle returns a random a and u∙a
- u∙a is incorrect with probability p
- Best known algorithm: 2^O(n/log n)
- Used in designing public-key crypto
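As a concrete illustration, a minimal Python sketch of the LPN oracle described above (the names `lpn_oracle`, `u`, and `p` are ours, not from the talk):

```python
import random

def lpn_oracle(u, p):
    """One LPN sample: a uniform a in {0,1}^n and the inner product
    u . a (mod 2), flipped with probability p."""
    a = [random.randrange(2) for _ in u]
    b = sum(ui * ai for ui, ai in zip(u, a)) % 2
    if random.random() < p:  # noise: flip the answer
        b ^= 1
    return a, b

u = [1, 0, 1, 1, 1]
a, b = lpn_oracle(u, 0.1)
```

With p = 0 the oracle is noiseless and the secret can be recovered from n independent samples by Gaussian elimination; the noise is what makes the problem hard.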

Learning Parities with Structured Noise
Secret u = (1,0,1,1,1)
u ∙ (0,1,0,1,1) = 0
u ∙ (1,1,0,1,0) = 1
u ∙ (0,1,1,0,0) = 1

Learning Parities with Structured Noise
- Secret vector u
- Oracle returns random a^1, a^2, …, a^m and b_1 = u∙a^1, b_2 = u∙a^2, …, b_m = u∙a^m
- "Not all inner-products are incorrect"
- The error has a certain structure
Can the secret be learned in polynomial time?
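One simple way to realize the structure "not all inner-products are incorrect" in a simulation (the helper name `structured_oracle` and the repair rule are ours, a sketch rather than the talk's definition):

```python
import random

def structured_oracle(u, m, p):
    """Return a batch of m samples (a^j, b_j). Each answer is flipped
    with probability p, but if every answer in the batch came out
    wrong, one flip is undone so that at least one inner-product is
    correct -- the forbidden all-wrong pattern never occurs."""
    batch, errors = [], []
    for _ in range(m):
        a = [random.randrange(2) for _ in u]
        e = 1 if random.random() < p else 0
        b = (sum(ui * ai for ui, ai in zip(u, a)) + e) % 2
        batch.append((a, b))
        errors.append(e)
    if all(errors):              # all-wrong pattern is disallowed
        a, b = batch[0]
        batch[0] = (a, b ^ 1)    # undo one flip
    return batch
```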

Structures as Polynomials
- c_i = 1 iff the i-th inner-product is incorrect
- P(c) = 0 if an answer pattern is allowed
- "At least one of the inner-products is correct": P(c) = c_1 c_2 c_3 ⋯ c_m = 0
- "No 3 consecutive wrong inner-products": P(c) = c_1c_2c_3 + c_2c_3c_4 + … + c_{m-2}c_{m-1}c_m = 0
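The two example predicates above are straightforward to evaluate on an error pattern c (function names are ours; arithmetic is mod 2, matching the slide's polynomials):

```python
def P_all_wrong(c):
    """P(c) = c_1 c_2 ... c_m (mod 2).
    Allowed patterns (at least one correct answer) satisfy P(c) = 0."""
    out = 1
    for ci in c:
        out = (out * ci) % 2
    return out

def P_no_three_consecutive(c):
    """P(c) = c_1c_2c_3 + c_2c_3c_4 + ... + c_{m-2}c_{m-1}c_m (mod 2).
    A pattern with no 3 consecutive wrong answers zeroes every term,
    hence P(c) = 0."""
    return sum(c[i] * c[i + 1] * c[i + 2] for i in range(len(c) - 2)) % 2
```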

Notation
- Subscripts index vector entries: u_i, c_i
- Superscripts index a list of vectors: a^i
- High-dimensional vectors are indexed like Z_{i,j,k}
- a, b are known constants; u, c are unknown constants used in the analysis; x, y, Z are variables in the equations

Main Result
For ANY non-trivial structure P of degree d, the secret can be learned using n^O(d) queries and n^O(d) time.

Proof Outline
Answers from Oracle → (Linearization) → Linear Equations → (Change View) → Unique Solution

Linearization
c_1 c_2 c_3 = 0, where c_i = b_i + a^i∙x
(a^1∙x + b_1)(a^2∙x + b_2)(a^3∙x + b_3) = 0   (*)
Introduce y_1 = x_1, y_2 = x_2, …, y_{1,2} = x_1x_2, …, y_{1,2,3} = x_1x_2x_3
a^1_1 a^2_2 a^3_3 y_{1,2,3} + … + b_1 b_2 b_3 = 0   (**)
(**) = L((*)) is a linear equation in the y variables.
Observation: y_1 = u_1, y_2 = u_2, …, y_{1,2,3} = u_1u_2u_3 always satisfies equation (**). Call it the canonical solution.
Coming up: prove that once we have enough equations, this is the only possible solution.
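The expansion of (*) into the y variables can be mechanized. Below is a sketch (the function name `linearize` is ours) that maps a batch of three samples to the coefficients of equation (**), writing y_S for the monomial prod_{i in S} x_i and using x_i^2 = x_i over GF(2):

```python
from itertools import product
from collections import defaultdict

def linearize(batch):
    """Expand prod_j (a^j . x + b_j) = 0 over GF(2) into the linear
    equation sum_S coef[S] * y_S = const in the y variables."""
    coef = defaultdict(int)
    n = len(batch[0][0])
    # from each factor pick either one term a^j_i * x_i or the constant b_j
    for picks in product(*[list(range(n)) + [None]] * len(batch)):
        term, idxs = 1, set()
        for (a, b), pick in zip(batch, picks):
            if pick is None:
                term *= b
            else:
                term *= a[pick]
                idxs.add(pick)       # x_i^2 = x_i merges repeated picks
        if term % 2:
            coef[frozenset(idxs)] ^= 1
    const = coef.pop(frozenset(), 0)  # move the constant to the right side
    return dict(coef), const
```

When every answer in the batch is correct, the canonical assignment y_S = prod_{i in S} u_i satisfies the resulting equation, as the test below checks.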

Form of the Linear Equation
- Let Z^3_{i,j,k} = L((x_i + u_i)(x_j + u_j)(x_k + u_k)), so
  Z^3_{1,2,3} = y_{1,2,3} + u_1y_{2,3} + u_2y_{1,3} + u_3y_{1,2} + u_1u_2y_3 + u_1u_3y_2 + u_2u_3y_1 + u_1u_2u_3
- Recall (a^1∙x + b_1)(a^2∙x + b_2)(a^3∙x + b_3) = 0   (*)
- When c_1 = c_2 = c_3 = 0, this becomes (a^1∙(x+u) + c_1)(a^2∙(x+u) + c_2)(a^3∙(x+u) + c_3) = 0

Change View
Linear equation over the y variables ↔ polynomial over the a's
- Lemma: when Z^3 ≠ 0, the equation is a non-zero polynomial over the a's
- Schwartz–Zippel: the polynomial evaluates to non-zero w.p. at least 2^-d
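A quick empirical check of the GF(2) Schwartz–Zippel bound used here: a multilinear polynomial of degree d that is not identically zero evaluates to 1 on at least a 2^-d fraction of points of {0,1}^n. The particular polynomial below is our own example, not from the talk:

```python
from itertools import product

def fraction_nonzero(poly, n):
    """Fraction of points of {0,1}^n where poly evaluates to 1 (mod 2)."""
    points = list(product([0, 1], repeat=n))
    return sum(poly(x) for x in points) / len(points)

# an arbitrary degree-3 multilinear polynomial over GF(2)
poly = lambda x: (x[0] * x[1] * x[2] + x[3] + x[0] * x[4]) % 2
assert fraction_nonzero(poly, 5) >= 2 ** -3  # Schwartz-Zippel bound for d = 3
```

For this particular polynomial the fraction is exactly 1/2, since flipping x_4 flips the value at every point; the 2^-d bound is the worst case (attained, e.g., by a single degree-d monomial).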

Main Lemma
- Fix a non-canonical solution; it corresponds to a non-zero Z^3 vector with Poly(a) = 0 for all equations.
- By Schwartz–Zippel this happens with low probability per equation; a union bound over all candidate solutions then gives:
- With high probability there are no non-canonical solutions.

Learning With Errors
- Used in designing new cryptosystems
- Resistant to "side channel attacks"
- Provable reduction from worst-case lattice problems

Learning With Errors
- Secret u in Z_q^n
- Oracle returns random a and a∙u + c
- c is chosen from a discrete Gaussian distribution with standard deviation δ
- When δ = Ω(n^1/2), lattice problems can be reduced to LWE
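A minimal sketch of such an oracle (names are ours; the error is drawn from a rounded continuous Gaussian, a common stand-in for the discrete Gaussian of the formal definition):

```python
import random

def lwe_sample(u, q, sigma):
    """One LWE sample: uniform a in Z_q^n and b = <a, u> + e (mod q),
    where e is a rounded Gaussian of standard deviation sigma."""
    a = [random.randrange(q) for _ in u]
    e = round(random.gauss(0, sigma)) % q
    b = (sum(ai * ui for ai, ui in zip(a, u)) + e) % q
    return a, b
```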

Learning With Structured Errors
- Represent structures using polynomials
- Thm: when the polynomial has degree d < q/4, the secret can be learned in n^O(d) time.
- Cor: when δ = o(n^1/2), LWE has a sub-exponential time algorithm

Learning With Structured Errors
- Take the structure to be |c| < Cδ^2
- # of equations required = exp(O(Cδ^2))
- Probability that a random answer (LWE oracle) violates the structure = exp(-O(C^2δ^2))
- LWE oracle ≈ LWSE oracle
- With high probability the oracle answers satisfy the structure, and the algorithm finds the secret in time exp(O(δ^2)) = exp(o(n)) when δ^2 = o(n).

Open Problems
- Can linearization techniques give a non-trivial algorithm for the original model?
- Are there more applications obtained by choosing appropriate patterns?
- Is it possible to improve the algorithm for learning with errors?

Thank You
Questions?

Adversarial Noise
- Structure = "not all inner-products are incorrect"
- Secret u = (1,0,1,1,1); the adversary answers as if the secret were (0,1,1,0,0):
u ∙ (0,1,0,1,1) = 1
u ∙ (1,1,0,1,0) = 1
u ∙ (0,1,1,0,0) = 0

Adversarial Noise
- The adversary can fool ANY algorithm for some structures.
- Thm: if there exists a vector c that cannot be represented as c = c^1 + c^2 with P(c^1) = P(c^2) = 0, then the secret can be learned using n^O(d) queries in n^O(d) time; otherwise, no algorithm can learn the secret with probability > 1/2.