When is Key Derivation from Noisy Sources Possible?

Presentation transcript:

When is Key Derivation from Noisy Sources Possible? Benjamin Fuller, Leonid Reyzin, and Adam Smith. Additional work with Ran Canetti, Xianrui Meng, and Omer Paneth. August 26, 2014

Outline Noisy Authentication Sources Fuzzy Extractors Limitations of Standard Techniques Current Work

Authenticating Users. Users' private data exists online in a variety of locations, and we must authenticate users before granting access to that data. Passwords are widely used but guessable.

Are there alternatives to passwords with high entropy (uncertainty)?

Physical Unclonable Functions (PUFs) [PappuRechtTaylorGershenfield02]: hardware that implements a random function and is impossible to copy precisely, with a large challenge/response space; on a fixed challenge, responses are close together. Applications: key storage [TuylsSchrijenSkoricVanGelovenVerhaeghWolters06], proof of possession, multi-party computation [OstrovskyScafuroViscontiWadia12].

Biometrics: measure a unique physical phenomenon. Ideally unique, collectable, permanent, and universal, but repeated readings exhibit significant noise, and uniqueness and noise vary widely. The human iris is believed to be "best" [Daugman04, PrabhakarPankantiJain03].

Key Derivation from Noisy Sources. Entropic sources are noisy: the source differs over time, with first reading w and later readings x, but the distance is bounded: d(w, x) ≤ dmax. Goal: derive a stable and strong key from the noisy source, so that w and x map to the same key, while different samples from the source produce independent keys: Gen(w) ≠ Gen(w'). Examples: physically unclonable functions (PUFs) [PappuRechtTaylorGershenfield02] and biometric data [Daugman04].

To perform cryptographic authentication we need to get a key from somewhere. There are many possible sources for a key: a password, a physical token, a biometric. Often the sources that have enough randomness or entropy to derive a key are noisy; two good examples are physically unclonable functions and biometrics. We call the initial reading of a particular source w, and we call a source noisy if a subsequent reading x is not equal to the initial reading but their distance is bounded. We want a tool that derives a stable, repeatable key from this source: we should be able to produce the key from either w or x. However, to have any notion of security, we must be sure that different samples of the source (e.g., two people's irises) don't map to the same key. So there is an inherent trade-off between the errors we try to correct and the strength of the resulting key.

Outline Noisy Authentication Sources Fuzzy Extractors Limitations of Standard Techniques Current Work

Fuzzy Extractors. Fuzzy extractors derive reliable keys from noisy data [DodisOstrovskyReyzinSmith04, 08] (the interactive setting appeared earlier in [BennettBrassardRobert88]). Assume our source is strong; traditionally, this means high entropy. Goals: Correctness: Gen and Rep produce the same key if d(w, x) ≤ dmax. Security: (key, p) ≈ (U, p). Security can be statistical or computational [FullerMengReyzin13].

Fuzzy extractors perform key derivation from such sources non-interactively. We start by assuming that our source is high quality; traditionally this means the source has high min-entropy, denoted H∞: no outcome in the distribution is too likely, that is, every possible outcome has probability no more than 2^(-k). The basic setting: an algorithm Gen takes the source value w and produces a key, along with a helper value p. The helper value exists so we can reproduce the key; the algorithm Rep accomplishes this goal. Rep takes the helper value output by Gen and a new reading of the source x. If the distance between w and x is small, it produces the same key. In all of our analysis we assume the adversary has access to the Generate and Reproduce algorithms and the helper value p.

Traditional construction: derive the key using a randomness extractor. A randomness extractor Ext converts high-entropy sources to uniform: if H∞(W) ≥ k, then Ext(W) ≈ U.

The key is derived using a standard tool called a randomness extractor, which converts all high min-entropy distributions to the uniform distribution. For clarity, the seed of the extractor is omitted here; the seed is added to the public value p. To generate our key, we simply extract from the value w. To reproduce the same key we would run the extractor in Rep; unfortunately, we don't have the value w there. The interesting part is how to reproduce the value w so we can run the extractor in Rep.
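As an illustrative sketch (not from the talk): by the leftover hash lemma, a random linear map over GF(2), with the matrix serving as the public seed, is one way to realize such an extractor.

```python
import secrets

def sample_seed(m, n):
    # Seed: a random m x n binary matrix (a universal hash family over GF(2)).
    return [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]

def extract(seed, w):
    # Ext(w; seed) = seed * w over GF(2). By the leftover hash lemma the
    # output is close to uniform when H_inf(W) >= m + 2*log2(1/eps).
    return [sum(a & b for a, b in zip(row, w)) % 2 for row in seed]
```

The seed is public (part of p); security rests on the entropy of w, not on hiding the seed.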

Error-correct back to w with a secure sketch. A secure sketch produces a public value p that is used to error-correct w. It consists of a Sketch algorithm that is run inside Gen, and a Rec algorithm. Rec takes p and a close value x and reproduces w; think of it as an error-correcting procedure for w. Once we have the value w, we can run the extractor in Rep and obtain the original key. Any questions on the fuzzy-extractor paradigm so far?

Secure Sketches: the code-offset sketch. Let G be a generating matrix for a code that corrects dmax errors. Sketch: pick a random c, compute the codeword ec = Gc, and output p = ec ⊕ w.

The sketch described here is called the "code-offset sketch" in the literature. Assume we have an error-correcting code that can correct dmax errors. We start by selecting a random codeword: select a random value c and encode it as ec = Gc. We use this value ec as a mask for w: the public value p is the exclusive-or of ec and our original reading. Remember, we want two properties from p: it should allow recovery from a close value x, and it shouldn't give much information about w.

Recovery: compute ec' = Dec(p ⊕ x). If w and x are close, then w = ec' ⊕ p.

Now for the first property, fulfilled by the Rec function. Rec has just the public value p and the new reading x. It XORs these two values together and runs the decoding procedure of the error-correcting code on p ⊕ x, giving a value ec'. If w and x are within the decoding radius of the code, then ec' = ec, so we can recover the value w from a close x.

Security: Ext must be able to extract from distributions where w is unknown given p; the gap k − k' is the entropy loss.

Now the second property: p must not give much information about w. Consider a reading w' collected from a different source. Then p ⊕ w' will not be close to the original codeword, and decoding gives an unrelated value. Formally, W retains high entropy even conditioned on the public value p. This is the main novel contribution of a secure sketch (otherwise we could just publish error-correcting information). Recall the starting entropy was k; we call k − k' the entropy loss of a secure sketch. This value is important because it determines the strength of our key: the extractor must produce a good key from only k' bits of entropy.

Implementing Fuzzy Extractors. Need to know: the starting metric space M, the starting entropy, and the desired error tolerance dmax. Ingredients: a code C that corrects dmax errors, and a hash from M to bit-strings. Can output a key of length k − (log |M| − log |C|) − 2 log(1/ε). These losses are significant: no key is left for many sources!

Generate(w): select c from C; set key = Hash(w); set p = c ⊕ w; output (key, p).
Reproduce(x, p): set c' = p ⊕ x; set c = Decode(c'); recover w = p ⊕ c; output key = Hash(w).
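The Generate/Reproduce procedure above can be sketched in runnable form. This is an illustrative toy only, not from the talk: a 3x repetition code stands in for C (it corrects one flipped bit per 3-bit block, far weaker than real parameters), and SHA-256 stands in for the hash/extractor.

```python
import hashlib
import secrets

R = 3  # repetition factor: this code corrects floor(R/2) = 1 error per block

def _encode(msg):
    # Repetition code: repeat each message bit R times.
    return [b for b in msg for _ in range(R)]

def _decode(bits):
    # Majority vote within each block of R bits.
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

def _xor(a, b):
    return [u ^ v for u, v in zip(a, b)]

def _hash_key(w):
    # Stand-in for the randomness extractor.
    return hashlib.sha256(bytes(w)).hexdigest()

def generate(w):
    # Select a random codeword c and mask the reading: p = c xor w.
    msg = [secrets.randbelow(2) for _ in range(len(w) // R)]
    c = _encode(msg)
    return _hash_key(w), _xor(c, w)

def reproduce(x, p):
    # Decode p xor x back to the codeword, then unmask to recover w.
    c = _encode(_decode(_xor(p, x)))
    return _hash_key(_xor(c, p))
```

Usage: `key, p = generate(w)`; any later reading x with at most one flipped bit per block yields `reproduce(x, p) == key`.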

Outline Noisy Authentication Sources Fuzzy Extractors Limitations of Standard Techniques Current Work

Entropy Loss From Fuzzy Extractors. Entropy is at a premium for physical sources: iris ≈ 249 bits [Daugman96], fingerprint ≈ 82 [RathaConnellBolle01], passwords ≈ 31 [ShayKomanduri+10], PUFs [KoeberlLiRejanWu14]. Entropy loss is considered in the information-theoretic setting (an all-powerful adversary). Fuzzy extractors have two losses: secure sketches lose at least the error-correcting capability of the code (k − k'); for the iris, the error rate costs ≈ 200 bits. Randomness extractors lose 2 log(1/ε), or between 60 and 100 bits. After these losses, there may not be any key left!

Let's dig into the entropy loss a little more. Most biometrics have a couple hundred bits of entropy; these sources are not overflowing with entropy, so we cannot afford entropy losses without consideration. Entropy loss for fuzzy extractors is measured against an information-theoretic adversary, primarily because the construction uses two information-theoretic primitives. The loss comes from two sources. The secure sketch loses at least the error-correcting capability of the code; in practice, this entropy loss may be considerably higher. Randomness extractors give a distribution that is only close to uniform: there is some error ε indicating the distance from uniform, and the extracted key shrinks as we demand smaller ε. The point of all this is that we may not have enough bits for a key after all of these losses. The main question of this work is whether we can eliminate these entropy losses. We have been measuring entropy loss in the information-theoretic setting; maybe we can do better by considering computationally bounded adversaries.

Can we eliminate either of these entropy losses? [DodisOstrovskyReyzinSmith]: a secure sketch that corrects dmax errors is equivalent to an error-correcting code that corrects dmax random errors. This means k − k' ≥ log |B_dmax| (B_dmax is the ball of radius dmax). This is the current situation for PUFs and biometrics.

In the information-theoretic setting we already have an answer: any secure sketch that corrects dmax errors implies an error-correcting code that corrects dmax errors, and vice versa, so all bounds on the rate of a code carry over to secure sketches. One such bound is the sphere-packing bound: the entropy loss must be at least the log-volume of the ball we are trying to correct.
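The slide's numbers make the problem concrete. A tiny calculation, using ε = 2^-30 (an assumed value, chosen from the stated 60-100 bit range):

```python
import math

def remaining_key_bits(k, sketch_loss, eps):
    # k' = k - (sketch loss) - 2*log2(1/eps): what survives both losses
    return k - sketch_loss - 2 * math.log2(1 / eps)

# Iris: ~249 bits of entropy, ~200-bit error-correction loss,
# extractor loss of 60 bits at eps = 2**-30.
print(remaining_key_bits(249, 200, 2 ** -30))  # -11.0: nothing left for a key
```

Even the "best" biometric source goes negative, which is exactly the motivation for the rest of the talk.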

Outline Noisy Authentication Sources Fuzzy Extractors Limitations of Standard Techniques Current Work

When is security possible? Some distributions are inherently insecure: if points are close, no security is possible, since by providing x* to Rep the adversary always learns the key. In an ideal world, security is achievable using multi-party computation (in the interactive setting) or obfuscation (under very strong assumptions) [BitanskyCanettiKalaiPaneth14].

We answer this question for both secure sketches and fuzzy extractors. For secure sketches we show the answer is no: defining a secure sketch with a computational adversary is not helpful (we will be more precise in a minute about what this means). For fuzzy extractors, we provide an affirmative answer: we construct a lossless computational fuzzy extractor based on the Learning with Errors (LWE) problem, and along the way extend the hardness of LWE to the case where some dimensions have known error.

By providing x* to Rep the adversary always learns the key. Hope: provide a strong key whenever only a negligible fraction of the probability mass is within distance dmax. This requires a new entropy notion: fuzzy min-entropy, determined by the maximum-weight ball of the probability distribution.

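A minimal sketch of the notion, assuming the Hamming metric over {0,1}^n: fuzzy min-entropy is the negative log of the largest probability mass any ball of radius dmax can capture.

```python
import math
from itertools import product

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def fuzzy_min_entropy(pmf, dmax, n):
    # H_fuzz = -log2( max over centers w of Pr[W lands within dmax of w] )
    best = max(
        sum(p for x, p in pmf.items() if hamming(w, x) <= dmax)
        for w in product([0, 1], repeat=n)  # enumerate all candidate centers
    )
    return -math.log2(best)
```

For the uniform distribution over {0,1}^4, min-entropy is 4 bits, but a radius-1 ball captures 5 of the 16 points, so the fuzzy min-entropy drops to log2(16/5) ≈ 1.68 bits: error tolerance eats entropy even at the definitional level.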

When is security possible? Goal: a good key for any distribution with super-logarithmic fuzzy min-entropy. Feasibility: consider information-theoretic constructions. Two settings: distribution-aware, where the algorithms encode the probability mass function (pmf) of the source, and distribution-oblivious, where the algorithms must work for a family of distributions.

Key derivation from fuzzy min-entropy? Secure sketches perform error correction without leaking information; fuzzy extractors combine error correction and key derivation (a standard transform turns a secure sketch into a fuzzy extractor). The distribution-oblivious setting is the standard one, where we know only some properties of the source distribution. This gives four cases: {secure sketch, fuzzy extractor} × {distribution aware, distribution oblivious}.

Key derivation from fuzzy min-entropy? Thm: For any distribution W with fuzzy min-entropy, there exists a (distribution-aware) secure sketch (and fuzzy extractor).

Distribution Aware. The Sketch knows the probability mass function of W. Consider an initial reading w; the Recover algorithm will receive a nearby point x, so the Sketch must disambiguate nearby points. The sketch consists of a description of which nearby point was seen. Problem: a ball can contain an unbounded number of points that rarely occur. Idea: write down the probability of the original w, and limit recovery to a w with this probability.

Correcting on these unlikely points may reveal w. The Sketch is correct for all of W, while the adversary can ignore low-probability points.
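The idea on this slide can be caricatured in a few lines. This is a heavily simplified toy, not the paper's construction: it records only the probability of the observed reading and searches the ball around x for a point with that probability; ties among equal-probability nearby points, and the leakage analysis, are handled carefully in the actual result.

```python
def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def sketch(pmf, w):
    # The sketch is just the probability of the observed reading.
    return pmf[w]

def recover(pmf, x, p, dmax):
    # Disambiguate nearby points by their recorded probability.
    candidates = [w for w in pmf if pmf[w] == p and hamming(w, x) <= dmax]
    return candidates[0] if len(candidates) == 1 else None  # ties unhandled in this toy
```

Note how the recorded probability lets Recover skip the unbounded mass of rarely occurring points inside the ball.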

Key derivation from fuzzy min-entropy? Thm: For any distribution W with fuzzy min-entropy, there exists a secure sketch.

Key derivation from fuzzy min-entropy? Thm: There is a family of distributions V (where every member has fuzzy min-entropy) such that any secure sketch for V removes all entropy.

Distribution Oblivious. The adversary selects which color (distribution) will be provided; the Sketch supports a family V. After the Sketch is fixed, the adversary selects a distribution W from V; the Sketch does not know which distribution it is being asked to correct (it only receives a sample from the distribution). The Sketch imposes constraints on W, and these constraints depend only on the received sample, independent of the rest of W's p.m.f. When W has little entropy, the Sketch's constraints remove all entropy.

The constraints imposed by the Sketch shrink the adversary's search space. But maybe those were "bad" constraints?

The same argument applies to any alternative set of constraints the Sketch might impose.

Every Sketch must create constraints that are independent of the color, leaving few points of each color in the adversary's search space.

Conclusion. Distribution-aware: key derivation is possible for any distribution with fuzzy min-entropy when the algorithms know the p.m.f. Distribution-oblivious: there exist families of distributions with no secure sketch. The negative result extends to computational secure sketches defined using pseudoentropy, via [FullerMengReyzin13]. One can build a computational secure sketch that provides unpredictability [BitanskyCanettiKalaiPaneth14], and we have a fuzzy extractor for this family [CanettiFullerPanethReyzinSmith14]. Open question: are there families of distributions where fuzzy extraction is not possible? There is evidence that constructing fuzzy extractors is easier than constructing secure sketches. Questions?