Key Derivation from Noisy Sources with More Errors Than Entropy Benjamin Fuller Joint work with Ran Canetti, Omer Paneth, and Leonid Reyzin May 5, 2014.



Authenticating Users
Users’ private data exists online in a variety of locations, and users must be authenticated before being granted access to it. Passwords are widely used but guessable.
Are there alternatives to passwords with high entropy (uncertainty)?

Key Derivation from Noisy Sources
Entropic sources are noisy:
– the source differs over time: first reading w, later reading x
– distance is bounded: d(w, x) ≤ d_max
Goal: derive a stable and strong key from the noisy source
– w and x map to the same key
– different samples from the source produce independent keys: Gen(w) ≠ Gen(w′)
Examples: Physically Unclonable Functions (PUFs) [PappuRechtTaylorGershenfeld02], biometric data [Daugman04]

Fuzzy Extractors
Fuzzy extractors derive reliable keys from noisy data [DodisOstrovskyReyzinSmith04,08] (interactive setting in [BennettBrassardRobert88])
Assume the source is strong – traditionally, high entropy
Goals:
– Correctness: Gen and Rep produce the same key if d(w, x) ≤ d_max
– Security: (key, p) ≈ (U, p), where p is the public value; security can be statistical or computational [FullerMengReyzin13]
Traditional construction:
– Derive the key using a randomness extractor Ext, which converts high-entropy sources to uniform: if H_∞(W) ≥ k, then Ext(W) ≈ U
– Error-correct x back to w with a secure sketch (Sketch/Rec)

Error-Correcting Codes
A code C is a subset of the metric space such that for any distinct ec_1, ec_2 in C, d(ec_1, ec_2) > 2·d_max
Decoding: for any point ec′, find the closest codeword ec_1 in C
Linear codes: C is the span of an expanding generating matrix G, so codewords have the form ec = Gc
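As a minimal illustration of these definitions (a toy sketch, not a code any real construction would use), a binary repetition code repeats each bit r times and majority-decodes, correcting up to ⌊(r−1)/2⌋ flipped symbols per block:

```python
def rep_encode(bits, r=5):
    """Repeat each bit r times: a trivial linear code with distance r."""
    return [b for b in bits for _ in range(r)]

def rep_decode(word, r=5):
    """Majority-decode each block of r symbols back to one bit."""
    return [1 if sum(word[i:i + r]) > r // 2 else 0
            for i in range(0, len(word), r)]

def hamming(a, b):
    """Hamming distance between two equal-length words."""
    return sum(x != y for x, y in zip(a, b))
```

With r = 5, any pattern of at most two flipped symbols per block decodes correctly.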

Secure Sketches: the Code-Offset Construction
Let G be the generating matrix of a code that corrects d_max errors
Sketch: pick a random codeword ec = Gc and publish p = ec ⊕ w
Recover: given x with d(w, x) ≤ d_max, compute p ⊕ x = ec ⊕ (w ⊕ x); decoding gives ec′ = Dec(p ⊕ x) = ec, and then w = ec′ ⊕ p
Entropy loss: given p, the entropy of w drops from k to k′, a loss of (k − k′) bits, so Ext must be able to extract from distributions with only k′ bits of entropy
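The code-offset construction can be exercised end to end with a toy binary repetition code standing in for the generating matrix G (an illustrative sketch only; `rep_encode`, `rep_decode`, and the parameter choices are assumptions, not the instantiation from the talk):

```python
import secrets

def rep_encode(bits, r=5):
    # toy stand-in for ec = Gc, using a repetition code
    return [b for b in bits for _ in range(r)]

def rep_decode(word, r=5):
    # majority decoding corrects up to 2 flipped symbols per block of 5
    return [1 if sum(word[i:i + r]) > r // 2 else 0
            for i in range(0, len(word), r)]

def xor(a, b):
    return [u ^ v for u, v in zip(a, b)]

def sketch(w, msg_len=4, r=5):
    """p = ec XOR w for a random codeword ec (requires len(w) == msg_len * r)."""
    c = [secrets.randbits(1) for _ in range(msg_len)]
    return xor(rep_encode(c, r), w)

def recover(x, p, r=5):
    """Decode p XOR x to ec', then output w = ec' XOR p."""
    c_prime = rep_decode(xor(p, x), r)
    return xor(rep_encode(c_prime, r), p)
```

Because p ⊕ x = ec ⊕ (w ⊕ x), decoding removes the error pattern w ⊕ x whenever it is within the code's tolerance.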

Entropy Loss from Fuzzy Extractors
Entropy is at a premium for physical sources:
– Iris ≈ 249 bits [Daugman1996]
– Fingerprint ≈ 82 bits [RathaConnellBolle2001]
– Passwords ≈ 31 bits [ShayKomanduri+2010]
Fuzzy extractors have two losses:
– Secure sketches lose the error-correcting capability of the code, k − k′ (for the iris, ≈ 200 bits at its error rate)
– Randomness extractors lose a further 2·log(1/ε) bits
After these losses, there may not be any key left!
Can we eliminate either of these entropy losses?
[DodisOstrovskyReyzinSmith]: any secure sketch yields a code correcting random errors, which means k − k′ ≥ log |B_dmax| (where B_dmax is the ball of radius d_max)

Error Tolerance and Security at Odds
The adversary shouldn’t be able to guess a point x* with d(w, x*) ≤ d_max, since any input in this ball produces the key – and guessing gets easier as d_max increases
Consider a source W whose initial readings w (for different physical devices) are close together: if there is a single point x* close to all points in W, no security is possible, because by providing x* to Rep the adversary always learns the key
Letting B_dmax denote a ball of radius d_max, there is such a W with min-entropy as high as log |B_dmax|
Call the gap H_∞(W) − log |B_dmax| the minimum usable entropy, H_usable(W)

Minimum Usable Entropy
Standard fuzzy extractors provide worst-case security guarantees, which implies |key| ≤ H_usable(W)
Many sources have no minimum usable entropy: irises are thought to be the “best” biometric, yet for irises H_usable(W) ≈ −707
Securing these sources requires a property other than entropy (e.g., that points are not close together)
Can we find reasonable properties and accompanying constructions?

Hamming Metric
Security parameter n; sources W = W_1,…,W_k consist of k symbols, each symbol W_i over an alphabet Z (which grows with n)
d(w, x) = number of symbols in which w and x differ
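In this metric, distance is just a symbol-wise comparison; a one-liner makes the definition concrete:

```python
def hamming_distance(w, x):
    """Number of symbol positions in which w and x differ."""
    assert len(w) == len(x), "Hamming distance needs equal-length sequences"
    return sum(wi != xi for wi, xi in zip(w, x))
```

For example, `hamming_distance([3, 1, 4, 1, 5, 9, 2, 6], [3, 1, 4, 4, 5, 0, 7, 1])` is 4, matching the d(w, x) = 4 example on the slide.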

Results
Security relies on point obfuscation (secure under strong vector DDH [BitanskiCanetti10])
Construction 1: requires ω(log n) entropy in most symbols
Construction 2: requires only Ω(1) entropy in most symbols
Errors corrected: Θ(k)

Point Obfuscation
An obfuscator transforms a program I into a “black box” [BarakGoldreichImpagliazzoRudichSahaiVadhanYang01]
Obfuscation is possible for point programs I_w, which output 1 on input w and 0 otherwise [Canetti97]
– We need a strong version, composable virtual gray-box obfuscation, achievable under the strong vector DDH assumption [BitanskiCanetti10]
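To make the interface concrete, here is a toy stand-in that mimics what a point obfuscation exposes – an equality oracle for a hidden w – using a salted hash under a random-oracle-style heuristic. This only models the interface; it is not virtual gray-box obfuscation and carries none of its provable guarantees:

```python
import hashlib
import secrets

def obfuscate_point(w: bytes):
    """Toy model of Obf(I_w): the returned program outputs 1 iff its
    input equals w, and (heuristically) reveals nothing else about w."""
    salt = secrets.token_bytes(16)
    tag = hashlib.sha256(salt + w).digest()

    def program(x: bytes) -> int:
        return int(hashlib.sha256(salt + x).digest() == tag)

    return program

obf = obfuscate_point(b"secret-symbol")
```

Anyone holding `obf` can test candidate inputs, but each wrong guess yields only a 0.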

Construction Attempt #1
Hide w using obfuscation: publish p = Obf(I_w); given x, anyone can check whether x = w without revealing w
Two problems: no key, and no error tolerance

Construction Attempt #2
Obfuscate each symbol individually (recall w = w_1,…,w_k): publish p = Obf(I_{w_1}),…,Obf(I_{w_k})
Rep can now learn which symbols of x match w, and knowing where errors occur is useful in coding theory
Still two problems: no key, and no error tolerance – so we leverage a further technique from point obfuscation

Point functions with specified output: one can obfuscate I_{w,c}, which outputs c on input w and nothing otherwise [CanettiDakdouk08]. Let’s try this on our construction.

Construction Attempt #3
Pick bits c_1,…,c_k; for each symbol i, obfuscate I_{w_i, c_i}
Rep runs the obfuscations on the symbols of x and recovers c_i wherever x_i = w_i – that is, most bits of c

Construction
Sample c = c_1,…,c_k from a binary error-correcting code C
For each symbol i, obfuscate I_{w_i, c_i}; the obfuscations form p
Rep runs the obfuscations on x, recovers most bits of c, and decodes to correct the rest
Use c as the output (run c through a computational extractor [Krawczyk10] to create the key)
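Putting the pieces together, a heavily simplified end-to-end sketch: the point obfuscations with output are modeled by salted hashes (a heuristic stand-in, not the [BitanskiCanetti10] obfuscator), the code C is a toy repetition code, and the final extractor step is elided by using the code's message directly as the key. All function names and parameters here are illustrative assumptions:

```python
import hashlib
import secrets

def H(*parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

def obf(symbol: bytes, bit: int):
    """Toy model of Obf(I_{w_i, c_i}): outputs c_i on w_i, None otherwise."""
    salt = secrets.token_bytes(16)
    tag = H(salt, b"eq", symbol)
    pad = H(salt, b"pad", symbol)[0] & 1
    return (salt, tag, bit ^ pad)

def eval_obf(o, guess: bytes):
    salt, tag, masked = o
    if H(salt, b"eq", guess) != tag:
        return None                      # wrong symbol: learn nothing
    return masked ^ (H(salt, b"pad", guess)[0] & 1)

def rep_decode_erasures(word, r):
    """Majority-decode blocks of r bits, treating None as an erasure."""
    out = []
    for i in range(0, len(word), r):
        known = [b for b in word[i:i + r] if b is not None]
        out.append(1 if sum(known) * 2 > len(known) else 0)
    return out

def generate(w, r=5):
    """w: list of k symbols (bytes). Returns (key, public value p)."""
    msg = [secrets.randbits(1) for _ in range(len(w) // r)]
    c = [b for b in msg for _ in range(r)]      # codeword of the toy code C
    p = [obf(w[i], c[i]) for i in range(len(w))]
    return bytes(msg), p                        # real scheme: extract from c

def reproduce(x, p, r=5):
    recovered = [eval_obf(p[i], x[i]) for i in range(len(x))]
    return bytes(rep_decode_erasures(recovered, r))
```

Symbols where x disagrees with w yield ⊥ (None) and are treated as erasures; the toy code tolerates them as long as at least one symbol per block survives.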

Correctness and Security
Correctness: Rep recovers all but at most d(w, x) ≤ d_max bits of c, and there exist binary error-correcting codes with error tolerance Θ(k)
Security question: what do the obfuscations reveal about w and c?

What is revealed by the obfuscations?
We need to argue that the adversary learns little through equality-oracle queries to the symbols
It is enough to argue that the adversary sees ⊥ in response to its queries with overwhelming probability – that is, it rarely guesses a stored value w_i

Block Unguessable Distributions
Let A be an algorithm asking polynomially many queries of the form “is w_i = x_i?”
Def: W = W_1,…,W_k is block unguessable if there exists a set J ⊆ {1,…,k} of symbol positions such that no such A guesses w_j for any j ∈ J, except with negligible probability
Caution: adaptivity is crucial – there are distributions with high overall entropy that can be guessed using equality queries to individual blocks
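The adaptivity caution can be made concrete with a hypothetical source: overall min-entropy is 2 bits per block (2k bits in total, which is high), yet because each block is drawn from a small candidate set determined by the previous blocks, an adaptive adversary recovers everything with at most 4 equality queries per block. All names here are illustrative:

```python
import hashlib
import secrets

def candidates(prefix: bytes, i: int):
    """Block i is drawn from 4 candidates determined by earlier blocks."""
    return [hashlib.sha256(prefix + bytes([i, j])).digest()[:8]
            for j in range(4)]

def sample_source(k=8):
    """A source with min-entropy 2 bits per block, 2k bits overall."""
    w, prefix = [], b""
    for i in range(k):
        wi = secrets.choice(candidates(prefix, i))
        w.append(wi)
        prefix += wi
    return w

def adaptive_attack(equality_oracle, k=8):
    """Recover all of w using at most 4 equality queries per block."""
    w, prefix = [], b""
    for i in range(k):
        for cand in candidates(prefix, i):
            if equality_oracle(i, cand):
                w.append(cand)
                prefix += cand
                break
    return w
```

A non-adaptive adversary that fixed its queries in advance would face a super-polynomial candidate space; adaptivity collapses it.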

Block Unguessable: Proceed with Caution
An adversary can guess the “easy” blocks first, and use the gained information to guess the next block

Block Unguessable Distributions: Positive Examples
Block-fixing sources [KampZuckerman07]; sources whose blocks are independent with many of them entropic; sources where all blocks are entropic

Security
Thm: When the source is block unguessable with set J, the codeword C has log |C| − (k − |J|) bits of computational entropy – the size of the code minus the “guessable” positions
This computational entropy is convertible to a pseudorandom key by a computational extractor
Note: in the computational setting the size of the key isn’t as crucial, since it can be expanded by a computational extractor

Error Tolerance and Security at Odds
A block unguessable distribution has more unguessable symbols than the number of errors the code corrects, so there is at least one symbol the adversary must guess
Security follows from the adversary’s inability to guess this one symbol

Results
Construction 1: ω(log n) entropy in most symbols; corrects Θ(k) errors
H_usable ≤ 0 is achievable when |Z| = ω(poly(n)) and C corrects Θ(k) errors

Reducing Required Entropy
Obfuscating symbols individually leaks equality; per-symbol entropy ensures A can’t guess the stored values
Can we reduce the necessary entropy by obfuscating multiple symbols together?
– Obfuscating all symbols together works, but eliminates error tolerance

Reducing Required Entropy: Indirection
Instead of putting symbols and obfuscations in 1-1 correspondence, introduce a level of indirection
Create a random bipartite graph between symbols and obfuscations (published in p); each obfuscation has degree α
Each obfuscation is applied to a concatenation of symbols, e.g. v_1 = w_1||w_2||w_4||w_10, v_2 = w_2||w_3||w_6||w_8, …, v_k = w_3||w_4||w_7||w_9, with payload bits c_1,…,c_k as before
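The indirection step can be sketched directly (an illustrative sketch; `build_graph` and its parameters are assumptions, and a real instantiation would tune α = ω(log k)):

```python
import random

def build_graph(k, alpha, rng=random):
    """Random bipartite graph: obfuscation i reads alpha distinct
    symbol positions; the whole graph is published as part of p."""
    return [rng.sample(range(k), alpha) for _ in range(k)]

def concatenated_inputs(w, graph):
    """v_i = concatenation of the symbols wired to obfuscation i."""
    return [b"".join(w[j] for j in edges) for edges in graph]
```

Each v_i then replaces w_i as the point being obfuscated in the previous construction.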

Correctness
The random graph is an averaging sampler [Lu2002, Vadhan2003]
Obfuscating multiple blocks together degrades error tolerance:
– if d(w, x) ≤ d_max, the probability that a given v_i contains an error is O(d_max·α/k)
– if C supports Θ(k) errors and α = ω(log k), the construction is correct w.h.p. whenever d(w, x) ≤ k/ω(log k) (by a Chernoff bound)

Security
Assume there exists a set of symbols J with Ω(1) entropy conditioned on the values of all other symbols
Then E[H_∞(V_i)] ≥ Ω(E[|{indices of J included in V_i}|])
The number of indices of J included in V_i is hypergeometrically distributed, with expected value α·|J|/k, and the distribution has a small tail [Chvátal79]
If α = ω(log n), then H_∞(V_i) ≥ ω(log n) for all i w.h.p., so V = V_1,…,V_k is a block unguessable distribution and security follows as in the previous construction
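The hypergeometric claim is easy to sanity-check numerically (an illustrative simulation only; the parameter values are arbitrary):

```python
import random

def mean_overlap(k, alpha, J, trials=20000, seed=1):
    """Average |J ∩ neighborhood| over random degree-alpha neighborhoods;
    the hypergeometric mean is alpha * |J| / k."""
    rng = random.Random(seed)
    J = set(J)
    hits = sum(len(J.intersection(rng.sample(range(k), alpha)))
               for _ in range(trials))
    return hits / trials
```

With k = 100, α = 10 and |J| = 40, the empirical mean is close to α·|J|/k = 4.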

Results
Construction 1: ω(log n) entropy in most symbols
Construction 2: Ω(1) entropy in most symbols
Errors corrected: Θ(k)

Noisy Point Obfuscation
A noisy point obfuscator is stronger than a fuzzy extractor – it cannot leak any partial information about w
[DodisSmith05] achieve a weaker, distributional notion of noisy point obfuscation when H_usable ≫ 0
Our constructions leak some information (the values of individual blocks, the locations of errors) and are not standard obfuscation
Can we construct noisy point obfuscation for all distributions? From indistinguishability obfuscation [GargGentryHaleviRaykovaSahaiWaters13]?

Conclusion
We construct the first (computational) fuzzy extractors for sources with H_usable ≤ 0, using point obfuscation
Our constructions allow H_usable ≤ 0 when the alphabet is super-polynomial – is this necessary? Are there constructions for small alphabets?
We restricted the source W; one could instead restrict the errors (that is, restrict X)

Questions?