Computational Fuzzy Extractors


Computational Fuzzy Extractors
Benjamin Fuller, Xianrui Meng, and Leonid Reyzin
December 2, 2013

Key Derivation from Noisy Sources
- High-entropy sources are often noisy: the source value changes over time, w0 ≠ w1
- Assume a bound on the distance: d(w0, w1) ≤ dmax (Hamming distance in this talk)
- Want to derive a stable key from a noisy source:
  - w0 and w1 should map to the same key
  - The key should be cryptographically strong: it appears uniform to the adversary
- Examples: Physically Unclonable Functions (PUFs), biometric data
- Goal of this talk: provide meaningful security for more sources

Fuzzy Extractors
- Assume the source has min-entropy k (no w is likelier than 2^−k)
- Lots of work on deriving reliable keys from noisy data [BennettBrassardRobert85] ...
- Our formalism: fuzzy extractors [DodisOstrovskyReyzinSmith04]
  - Gen(w0) outputs a key and a public helper value p
  - Rep(w1, p) reproduces the key
- Correctness: Gen and Rep give the same key if d(w0, w1) ≤ dmax
- Security: (key, p) ≈ (U, p)

Fuzzy Extractors: Typical Construction, Step 1
- Derive the key using a randomness extractor
- A randomness extractor converts a high-entropy source to uniform: H∞(W0) ≥ k ⟹ Ext(W0) ≈ U

Fuzzy Extractors: Typical Construction, Step 2
- Derive the key using a randomness extractor, and correct errors using a secure sketch
- Gen: p ← Sketch(w0); key ← Ext(w0)
- Rep: w0 ← Rec(w1, p); key ← Ext(w0)

Secure Sketches
- Code-offset sketch: c = Gx for a random x, where G generates a code that corrects dmax errors
- p = c ⊕ w0
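The code-offset sketch can be illustrated end to end with a toy example. Below is a minimal Python sketch, with a 5x repetition code standing in for the generator matrix G; the names and parameters are illustrative only, and real constructions use far better codes and cryptographic randomness.

```python
import random

REP = 5   # each data bit repeated 5 times: corrects up to 2 flips per block
K = 4     # number of data bits, so |w| = K * REP = 20

def encode(x):
    """c = Gx for the repetition code: repeat each bit REP times."""
    return [bit for bit in x for _ in range(REP)]

def decode(c):
    """Majority vote per block: corrects up to (REP-1)//2 flips per block."""
    return [1 if sum(c[i*REP:(i+1)*REP]) > REP // 2 else 0 for i in range(K)]

def xor(a, b):
    return [u ^ v for u, v in zip(a, b)]

def sketch(w0):
    """p = c xor w0 for a random codeword c."""
    x = [random.randrange(2) for _ in range(K)]
    return xor(encode(x), w0)

def recover(w1, p):
    """Rec: decode p xor w1 (close to c) back to c, then c xor p = w0."""
    c = encode(decode(xor(p, w1)))
    return xor(p, c)

w0 = [random.randrange(2) for _ in range(K * REP)]
w1 = list(w0)
w1[3] ^= 1
w1[17] ^= 1                      # two bit flips, within the decoding radius
assert recover(w1, sketch(w0)) == w0
```

Note that p ⊕ w1 = c ⊕ (w0 ⊕ w1), so recovery is exactly a decoding problem in the chosen code.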

Secure Sketches: Recovery and Loss
- Recovery: c' = Dec(p ⊕ w1); if w0 and w1 are close, then c' = c, so w0 = c' ⊕ p
- p reveals information about w0; secure sketches guarantee a bound on the entropy reduction: ≤ the redundancy of G
- This loss is not specific to the code-offset sketch; it applies to all sketches
- The extractor must then extract from a distribution of reduced entropy

Entropy Loss from Fuzzy Extractors
- Entropy is at a premium for physical sources:
  - Iris: ≈ 249 bits [Daugman1996]
  - Fingerprint: ≈ 82 bits [RathaConnellBolle2001]
  - Passwords: ≈ 31 bits [ShayKomanduri+2010]
- The above construction of fuzzy extractors, with the standard analysis:
  - Secure-sketch loss = redundancy of the code ≥ error-correcting capability; this loss is necessary for an information-theoretic sketch [Smith07, DORS08]
  - Randomness-extractor loss ≥ 2 log(1/ε)
- Can we improve on this? One approach: define secure sketches / fuzzy extractors computationally, giving up security against all-powerful adversaries and considering computational ones
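To see why this loss hurts, here is the entropy accounting worked through with hypothetical numbers: a BCH-style code of length n = 1023 correcting t errors has redundancy at most t · log2(n + 1) = 10t. The concrete n, t, and ε below are illustrative assumptions, not parameters from the talk.

```python
import math

k = 249                 # iris min-entropy [Daugman1996]
n, t = 1023, 30         # hypothetical source length and error bound
eps = 2 ** -40          # extractor security parameter

sketch_loss = t * math.ceil(math.log2(n + 1))   # code redundancy: 300 bits
ext_loss = 2 * math.log2(1 / eps)               # 2 log(1/eps) = 80 bits
residual = k - sketch_loss - ext_loss           # negative: no key bits remain
```

With these (plausible) numbers the standard analysis leaves no extractable key bits at all, which is the motivation for the computational approach.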

Can We Do Better in the Computational Setting? Our Results
- For secure sketches: NO. We show that defining a secure sketch in the computational setting does not improve the entropy loss.
- For fuzzy extractors: YES. We construct a lossless computational fuzzy extractor based on the Learning with Errors (LWE) problem.
- Caveat: this result shows only the feasibility of a different construction and analysis; we do not claim a specific set of parameters that beats the traditional construction.

Computational Secure Sketches
- Information-theoretic goal: H∞(W0 | p) is high
- Computational goal: Hcomp(W0 | p) is high. Can we improve on the loss computationally?
- What does Hcomp(W0 | p) mean? The most natural requirement: (W0 | p) is indistinguishable from some (Y | p) with H∞(Y | p) ≥ k
- This is known as HILL entropy [HåstadImpagliazzoLevinLuby99]

Computational Secure Sketches (continued)
- Good news: extractors applied to distributions with HILL entropy, HHILL(W0 | p), yield pseudorandom keys

HILL Secure Sketches ⟹ Secure Sketches
- Our theorem: if HHILL(W0 | p) ≥ k, then there exists an error-correcting code C with 2^(k−2) points such that Rec corrects dmax random errors on C
- Corollary (using the secure sketch of [Smith07]): if there exists a sketch with HILL entropy k, then there exists a sketch with true entropy k − 2
- Proof idea: fix a p value for which Rec is a good decoder for W0; by indistinguishability, Rec must also decode on Y, and Y is large

Can We Do Better in the Computational Setting? (recap)
- For secure sketches: NO. A sketch that retains HILL entropy implies an information-theoretic sketch.
- For fuzzy extractors: YES. But we can't just make the sketch "computational"; what other approaches are there?

Building a Computational Fuzzy Extractor
- We can't just work with the sketch component

Building a Computational Fuzzy Extractor (continued)
- What about an extractor that outputs pseudorandom bits?
- Computational extractors convert high-entropy sources to pseudorandom bits [Krawczyk10]
  - Natural construction: Cext(w0) = PRG(Ext(w0))
  - Extensions: [DodisYu13, Dachman-SoledGennaroKrawczykMalkin12, DodisPietrzakWichs13]
- All require enough residual entropy after Sketch to run crypto! See [Dachman-SoledGennaroKrawczykMalkin12] for conditions

Building a Computational Fuzzy Extractor (continued)
- We'll try to combine a sketch and an extractor
- We'll base our construction on the code-offset sketch: c = Gx, p = c ⊕ w0
- Instantiate with a random linear code; base security on Learning with Errors (LWE)

Learning with Errors
- b = Ax + w0, where A is an m × n matrix; publish p = (A, b)
- Recovering x from (A, b) is known as learning with errors
- [Regev05] shows that solving LWE implies approximating lattice problems
- Here, the LWE error distribution = the source distribution W0
- We need an error distribution for which LWE is hard; we start from the result of [Döttling&Müller-Quade13] and make some progress
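The object p = (A, b) on this slide can be written out concretely. Below is a toy Python instance of b = Ax + w0 (mod q) in which the error vector is the source reading w0; the parameters are illustrative only, and real instantiations need much larger n, m, q and cryptographic randomness.

```python
import random

q, n, m = 97, 8, 24     # modulus, secret length, number of samples

x = [random.randrange(q) for _ in range(n)]                      # secret
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # public matrix
w0 = [random.randrange(-2, 3) for _ in range(m)]                 # small-interval error = source
b = [(sum(A[i][j] * x[j] for j in range(n)) + w0[i]) % q for i in range(m)]

# Sanity check: b - A x recovers the source vector w0 modulo q.
for i in range(m):
    assert (b[i] - sum(A[i][j] * x[j] for j in range(n))) % q == w0[i] % q
```

Recovering x from (A, b) alone is the LWE problem; anyone holding a reading close to w0, however, faces only a bounded-distance decoding problem.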

Learning with Errors (continued)
- [AkaviaGoldwasserVaikuntanathan09] show that if LWE is secure on n/2 variables, any additional variables are hardcore

Learning with Errors (continued)
- Split x into (x1, x2) of n/2 variables each, and A into (A1, A2) accordingly
- If LWE is secure on n/2 variables, then x2 | (A, b) is pseudorandom

Our Construction
- Source: w0; Key: x2; Public: p = (A, b), where b = Ax + w0
- Recovering x is learning with errors; [Regev05] shows solving LWE implies approximating lattice problems
- Since x2 | (A, b) is pseudorandom [AkaviaGoldwasserVaikuntanathan09], x2 can serve as the key

Our Construction (continued)
- Gen: key = x2; public p = (A, b) with b = Ax + w0
- Q: How are we avoiding our negative results?
- A: We don't extract from w0 (we are not aware of any entropy notion under which w0 | (A, b) has high entropy). Instead, we use secret randomness x and hide it using w0.

Our Construction (continued)
- How does Rep recover key = x2 from w1 and p = (A, b)?

Rep
- Rep has A and b − w1 = Ax + (w0 − w1), i.e., something close to Ax
- This is a decoding problem (the same as in the traditional construction)
- Decoding random linear codes is hard, but possible for small distances
- (We can't use an LWE trapdoor, because there is no secret storage)

Rep (continued)
- Example algorithm for logarithmically many errors:
  1. Select n random samples (hopefully, they have no errors)
  2. Solve the linear system for x on these samples
  3. Verify correctness of x using the other samples
  4. Repeat until successful

Our Construction (continued)
- Can correct as many errors as can be efficiently decoded for a random linear code (our algorithm: logarithmically many)
- Each dimension of w0 can be sampled with a fraction of the bits needed for each dimension of x (i.e., we can protect x using fewer than |x| bits)
- So we can get as many bits in key as in w0: lossless! The key length doesn't depend on how many errors are being corrected
- Intuition: key is encrypted by w0, and decryption tolerates noise
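The construction can be sketched end to end, using the trial-decoding idea from the Rep slides (guess error-free samples, solve the linear system, verify against the rest). This is a toy Python illustration under assumed parameters; for simplicity the whole secret x plays the role of the key rather than only x2.

```python
import random

q, n, m, dmax = 97, 4, 16, 2    # toy parameters; real ones are far larger

def solve_mod_q(M, v, q):
    """Gaussian elimination over Z_q (q prime): solve Mx = v, or None if singular."""
    k = len(M)
    aug = [list(M[r]) + [v[r]] for r in range(k)]
    for col in range(k):
        piv = next((r for r in range(col, k) if aug[r][col] % q), None)
        if piv is None:
            return None
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = pow(aug[col][col], -1, q)        # modular inverse (Python 3.8+)
        aug[col] = [(a * inv) % q for a in aug[col]]
        for r in range(k):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(a - f * c) % q for a, c in zip(aug[r], aug[col])]
    return [aug[r][k] for r in range(k)]

def rep(A, b, w1):
    """Recover x from p = (A, b) and a noisy reading w1 (at most dmax errors)."""
    target = [(bi - wi) % q for bi, wi in zip(b, w1)]   # = A x + (w0 - w1)
    for _ in range(100000):
        rows = random.sample(range(m), n)               # hopefully error-free rows
        x = solve_mod_q([A[i] for i in rows], [target[i] for i in rows], q)
        if x is None:
            continue
        bad = sum(1 for i in range(m)
                  if (target[i] - sum(A[i][j] * x[j] for j in range(n))) % q)
        if bad <= dmax:       # consistent with at most dmax coordinate errors
            return x
    return None

# Gen: publish p = (A, b) with b = A x + w0 (mod q); keep x as the key.
x = [random.randrange(q) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
w0 = [random.randrange(3) for _ in range(m)]            # small-interval errors
b = [(sum(A[i][j] * x[j] for j in range(n)) + w0[i]) % q for i in range(m)]

w1 = list(w0)
w1[5] = (w1[5] + 1) % q         # noisy re-reading: one coordinate shifted
assert rep(A, b, w1) == x
```

The expected number of trials grows with the error rate, which matches the slide's point that this simple decoder only handles logarithmically many errors.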

Conclusion
- Fuzzy extractors and secure sketches suffer from entropy losses in the information-theoretic setting, which may keep the resulting key from being useful
- What about the computational setting?
  - Negative result: the entropy loss is inherent for secure sketches (plus additional results about the unpredictability of (W0 | p))
  - Positive result: a lossless computational fuzzy extractor from the Learning with Errors problem, for Hamming distance, with logarithmically many errors and a restricted class of sources (secure LWE error distributions)

Open Problems
- Improve error tolerance
- Handle additional source distributions (using the result of Micciancio and Peikert, we get security for all slightly deficient distributions)
- Beat information-theoretic constructions on practical parameter sizes
- Other computational assumptions?

Questions?

Backups

Our Construction (theorem)
- Theorem: if dmax = O(log n) and W is uniform, our construction:
  1. Is lossless
  2. Runs in expected polynomial time
  3. Yields a pseudorandom key, assuming GapSVP and SIVP are hard to approximate within polynomial factors

Our Construction (error distribution)
- In standard LWE, the error w0 is drawn from a Gaussian distribution (per coordinate)
- Recall: if LWE is secure on n/2 variables, x2 | (A, b) is pseudorandom [AkaviaGoldwasserVaikuntanathan09]

Our Construction (error distribution, continued)
- It is unlikely that w0 comes from the correct (Gaussian) distribution
- Can we use w0 to sample a coordinate-wise Gaussian? Each coordinate requires a variable number of bits to sample
- This is hard to do in a noise-tolerant way, because all of the bits shift

Our Construction (error distribution, continued)
- Recent results [Döttling&Müller-Quade13, Micciancio&Peikert13]: LWE is secure with the error drawn uniformly from a small interval
- With fixed-length blocks, differences between w0 and w1 translate into the same number of error coordinates (we'll talk about Rep in a minute)

Can We Do Better in the Computational Setting? (extended results)
- For secure sketches: NO. Defining a secure sketch in the computational setting does not improve the entropy loss.
- For fuzzy extractors: YES. We construct a lossless computational fuzzy extractor for uniform sources based on the Learning with Errors (LWE) problem.
- To make it work on distributions other than uniform, we extend the hardness of LWE to the case where some dimensions have known error (symbol-fixing error sources)

Symbol-Fixing Sources
- A source W0 is a symbol-fixing source if each block W0,i of W0 is either:
  1. a fixed value, or
  2. uniformly distributed
- Let α be the number of blocks that are fixed
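A symbol-fixing source is easy to make concrete. Here is a toy Python sampler; the block alphabet Z_q and the template are illustrative assumptions, not values from the talk.

```python
import random

q = 97   # illustrative block alphabet Z_q

def sample_symbol_fixing(template):
    """template[i] is an int for a fixed block, or None for a uniform block."""
    return [t if t is not None else random.randrange(q) for t in template]

template = [None, 7, None, None, 0, None]   # alpha = 2 fixed blocks
w0 = sample_symbol_fixing(template)
assert w0[1] == 7 and w0[4] == 0 and len(w0) == 6
```

The fixed blocks model coordinates that an adversary knows exactly, which is why the construction must remain secure when some LWE error coordinates are fixed.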

LWE with Fixed Errors
- Definition: α-fixed LWE is standard LWE, except that α dimensions have a fixed (and adversarially known) error value

LWE with Fixed Errors (continued)
- Our theorem: security of LWE with matrices of dimension (n, m) implies security of α-fixed LWE with matrices of dimension (n + α, m + α)

LWE with Fixed Errors (continued)
- Corollary (applying [AkaviaGoldwasserVaikuntanathan09]): if LWE is secure on n/3 variables, our construction is a computational fuzzy extractor for α-block-fixing sources with α < n/3

Proof of the theorem: security of LWE with matrices of dimension (n, m) implies security of α-fixed LWE with matrices of dimension (n + α, m + α).

Proof Sketch
- Assume: D distinguishes (A, Ax + w0) from (A, U), where the last α samples of w0 have no error
- Goal: build D' that distinguishes (A', A'x' + e) from (A', U'), where e is from the error distribution

Proof Sketch (continued)
- We know the last α error terms are fixed at 0
- Generate the last α samples uniformly at random; our free variables x3 "explain" them: for R, S uniformly random, x3 is the solution to Rx' + Sx3 = $
- Randomize the matrix and samples using the rows with no error: add a random multiple of each row in (R‖S) to each row of A'
- Main issues: ensuring that we have a valid solution x'‖x3, and producing a random matrix

Proof Sketch (conclusion)
- Theorem: if LWE is secure on A'x' + e, then LWE is secure on Ax + w0

Our Construction (extended theorem)
- Theorem: if dmax = O(log n) and W is a symbol-fixing source, our construction:
  1. Is lossless
  2. Runs in expected polynomial time
  3. Yields a pseudorandom key, assuming GapSVP and SIVP are hard to approximate within polynomial factors