Discrete Gaussian Leftover Hash Lemma
Shweta Agrawal (IIT Delhi)
with Craig Gentry, Shai Halevi, Amit Sahai

Need Good Randomness
We crucially need ideal randomness in many areas, e.g., cryptography. However, we often have to deal with imperfect randomness: physical sources, biometric data, partial knowledge about secrets…
Can we "extract" good randomness from ill-behaved random variables? Yes! EXTRACTORS (NZ96).

Classic Leftover Hash Lemma
Universal hash family H = { h : X → Y }: for all x ≠ y, Pr_h[ h(x) = h(y) ] ≤ 1/|Y|.
Leftover Hash Lemma (HILL): universal hash functions yield good extractors, i.e., (h(x), h) ≈ (U, h), where U is uniform over Y.

Classic use of LHL
Universal hash function: inner product over a finite field.
H = { h_a : Z_q^m → Z_q }
Pick a_1, …, a_m uniformly over Z_q
Define h_a(x) = Σ_i a_i x_i mod q
Then h_a(x) is uniform over Z_q. A simple, useful randomness extractor!
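A minimal sketch of this extractor in Python (the modulus q and the function names here are illustrative assumptions, not from the talk):

```python
import secrets

q = 2**61 - 1  # an arbitrary prime modulus, chosen purely for illustration

def sample_seed(m):
    """Hash seed: a = (a_1, ..., a_m) drawn uniformly over Z_q."""
    return [secrets.randbelow(q) for _ in range(m)]

def h(a, x):
    """h_a(x) = sum_i a_i * x_i mod q -- a universal hash; by the LHL,
    (h(a, x), a) is statistically close to (uniform, a) whenever the
    source x has enough min-entropy."""
    return sum(ai * xi for ai, xi in zip(a, x)) % q
```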

Discrete Gaussian LHL?
What if the target distribution we need is a discrete Gaussian instead of uniform? What if the domain is an infinite ring instead of a finite field? When do generalized subset sums of lattice points yield nice discrete Gaussians?

You ask… What are discrete Gaussians? Why do we care?

Because they help us build “Multilinear Maps” from lattices (GGH12)!

WHAT ARE DISCRETE GAUSSIANS?

Lattices…
A set of points with a periodic arrangement; a discrete subgroup of R^n.
[Figure: a two-dimensional lattice shown with two different bases, (v_1, v_2) and (v'_1, v'_2).]
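For completeness, the standard definition (not spelled out on the slide): the lattice generated by linearly independent basis vectors v_1, …, v_n is

```latex
\Lambda \;=\; \mathcal{L}(v_1, \dots, v_n)
        \;=\; \Bigl\{ \textstyle\sum_{i=1}^{n} z_i v_i \;:\; z_i \in \mathbb{Z} \Bigr\}
        \;\subset\; \mathbb{R}^n
```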

What are discrete Gaussians?
D_{Λ,r}: the Gaussian distribution with width parameter r, but with support restricted to the points of the lattice Λ. More formally:
D_{Λ,r}(x) ∝ exp(−π ||x||² / r²) if x ∈ Λ, and D_{Λ,r}(x) = 0 otherwise.
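A minimal one-dimensional sampler for D_{Z,r}, written straight from this definition (rejection sampling; an illustrative sketch, not constant-time and not for production use):

```python
import math
import random

def sample_dgauss_z(r, tail=12):
    """Sample x ~ D_{Z,r}, i.e., Pr[x] proportional to exp(-pi * x^2 / r^2).

    Rejection sampling: draw x uniformly from [-tail*r, tail*r] (a range
    carrying essentially all of the mass) and accept with probability
    exp(-pi * x^2 / r^2), which is exactly proportional to the target.
    """
    bound = int(math.ceil(tail * r))
    while True:
        x = random.randint(-bound, bound)
        if random.random() < math.exp(-math.pi * x * x / (r * r)):
            return x
```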

Why study discrete Gaussians?
Ubiquitous in lattice-based crypto. At the technical core of most proofs in the area, notably the famous "Learning with Errors" assumption. Not as well understood as their continuous counterparts.

Our Results: Discrete Gaussian LHL over infinite domains
Fix, once and for all, vectors x_1, …, x_m ∈ Λ:
We choose each x_i from the discrete Gaussian D_{Λ,s}
Let X = [x_1 | … | x_m] ∈ Z^{n×m}
Choose the vector z from the discrete Gaussian D_{Z^m, s'}
Then the distribution Σ_i z_i x_i is statistically close to D_{Λ, s'X}, where D_{Λ, s'X} is a "roughly spherical" discrete Gaussian of "moderate width" (under certain conditions).
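A toy instantiation of this experiment with Λ = Z^n (an assumption made purely so the sketch is self-contained; the general case needs a sampler for D_{Λ,s}), reusing sample_dgauss_z from above:

```python
def sample_columns(n, m, s):
    """X = [x_1 | ... | x_m], each column drawn coordinate-wise from D_{Z^n, s}."""
    return [[sample_dgauss_z(s) for _ in range(n)] for _ in range(m)]

def lhl_sample(X, s_prime):
    """One sample of the theorem's output distribution:
    draw z ~ D_{Z^m, s'} and return X z = sum_i z_i x_i."""
    m, n = len(X), len(X[0])
    z = [sample_dgauss_z(s_prime) for _ in range(m)]
    return [sum(z[i] * X[i][j] for i in range(m)) for j in range(n)]
```

Note that X is sampled once and then fixed; each call to lhl_sample uses fresh z, and the theorem says the resulting outputs are statistically close to D_{Z^n, s'X}.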

Oblivious Gaussian Sampler
Our result yields an oblivious Gaussian sampler: given enc(x_1), …, enc(x_m), where enc is additively homomorphic, we can compute enc(g) where g is a discrete Gaussian. Just sample z and compute Σ_i z_i · enc(x_i). Previous Gaussian samplers [GPV08, Pei10] are too complicated to use within an additively homomorphic scheme.
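A sketch of the sampler against an assumed additively homomorphic interface (hom_add and hom_scale are hypothetical names for ciphertext addition and multiplication by a known scalar; they are not from any particular library):

```python
def oblivious_gaussian_sample(enc_xs, s_prime, hom_add, hom_scale):
    """Given ciphertexts enc(x_1), ..., enc(x_m), output enc(sum_i z_i x_i),
    which by the theorem encrypts a near-spherical discrete Gaussian --
    without ever decrypting or seeing the x_i."""
    z = [sample_dgauss_z(s_prime) for _ in range(len(enc_xs))]
    acc = hom_scale(enc_xs[0], z[0])
    for ct, zi in zip(enc_xs[1:], z[1:]):
        acc = hom_add(acc, hom_scale(ct, zi))
    return acc
```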

Why is the Gaussian LHL true?

Analyzing Σ_i z_i x_i: Proof Idea
Recall our setup:
Fix, once and for all, vectors x_1, …, x_m ∈ Λ
We sample each x_i from the discrete Gaussian D_{Λ,s}
Let X = [x_1 | … | x_m] ∈ Z^{n×m}
Sample the vector z from the discrete Gaussian D_{Z^m, s'}
Define A = { v ∈ Z^m : Xv = 0 }. Note that A is a lattice.

Analyzing Σ_i z_i x_i: Broad Outline of Proof
Thm 1: Σ_i z_i x_i ≈ D_{Λ, s'X} if the lattice A = { v : Xv = 0 } is "smooth" relative to s'
Thm 2: A is "smooth" if the matrix X is "regularly shaped"
Thm 3: X is "regularly shaped" if x_i ~ D_{Λ,s}
Chaining these: Σ_i z_i x_i ≈ D_{Λ, s'X}, a "near-spherical" discrete Gaussian of moderate width.


Smoothness of a Lattice
We want to wipe out the structure of the lattice: add noise to the lattice points until we get the uniform distribution.
(* Smoothness animation from Regev's slides.)


Smoothness of a Lattice
How much noise is needed to blur the lattice depends on its structure. Informally, if the noise magnitude needed is "small", we say the lattice is "smooth". This is measured by the smoothing parameter smooth(L) [MR04]: the smallest s such that adding Gaussian noise of radius s to L yields an essentially uniform distribution.
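Formally, in [MR04] the smoothing parameter η_ε(L) (here smooth(L)) is the smallest s > 0 such that the Gaussian mass of the dual lattice L*, excluding the origin, is at most ε:

```latex
\rho_{1/s}\!\left(L^{*} \setminus \{0\}\right)
  \;=\; \sum_{x \in L^{*} \setminus \{0\}} e^{-\pi s^{2} \|x\|^{2}}
  \;\le\; \varepsilon
```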

Thm 3: If x_i ~ D_{Λ,s} then X is regularly shaped
X is "regularly shaped" if its singular values lie within a small interval.
Start with random matrix theory: if a matrix M has continuous Gaussian entries and m > 2n, then all the singular values of M lie within a constant-sized interval. We can extend this to discrete Gaussians.
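A quick empirical illustration of "regularly shaped" (rounding a continuous Gaussian is used here as a crude stand-in for D_{Z,s}; an assumption good enough for a sanity check, not a faithful sampler):

```python
import numpy as np

n, m, s = 50, 120, 8.0  # illustrative sizes with m > 2n
# D_{Z,s} has standard deviation about s / sqrt(2*pi); round a continuous
# Gaussian with that scale as a rough proxy.
X = np.rint(np.random.normal(scale=s / np.sqrt(2 * np.pi), size=(n, m)))
sv = np.linalg.svd(X, compute_uv=False)
print(sv.max() / sv.min())  # ratio of extreme singular values stays O(1)
```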

Broad Outline of Proof
Thm 1: Σ_i z_i x_i ≈ D_{Λ, s'X} if s' > smooth(A)
Thm 2: If the matrix X is "regularly shaped", then smooth(A) is small.
Thm 3: If x_i ~ D_{Λ,s}, then X is "regularly shaped".
Chaining these: Σ_i z_i x_i ≈ D_{Λ, s'X}, a "near-spherical" discrete Gaussian of moderate width.

Thm 2: smooth(A) is small if X is regularly shaped
Proof steps:
1. Embed A into a full-rank lattice A_q.
2. Consider the dual lattice M_q of A_q.
3. Argue that λ_{n+1}(M_q), the (n+1)-st minimum of M_q, is large if X is regularly shaped.
4. Convert this to an upper bound on λ_{m−n}(A_q) using a theorem of Banaszczyk.
5. Argue that these m−n short vectors belong to A.
6. Relate λ_{m−n}(A) to smooth(A) using a bound from [MR04].
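Step 4 is Banaszczyk's transference theorem (stated here for completeness, for a full-rank rank-m lattice L):

```latex
1 \;\le\; \lambda_{k}(L) \cdot \lambda_{m-k+1}(L^{*}) \;\le\; m
\qquad (1 \le k \le m)
```

With L = A_q and k = m − n, the lower bound on λ_{n+1}(M_q) from step 3 becomes the upper bound λ_{m−n}(A_q) ≤ m / λ_{n+1}(M_q).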

Applicability
A typical application would use our LHL to drown out some value it wishes to hide, à la GGH12. For this, the minimum width of the Gaussian must be wide enough to drown out the value being hidden. Our LHL can be seen as showing that this can be done frugally, without wasting too many samples, and within an additively homomorphic scheme. Care is needed if the basis X must be kept secret; in that case, better to use other samplers (GPV08, Pei10).

Conclusions
Discrete Gaussians are important and not as well understood as their continuous counterparts; our work makes progress toward understanding their behavior. We provided a discrete Gaussian LHL over infinite rings, which may be used as an oblivious Gaussian sampler within an additively homomorphic scheme.

Thank you! Questions?