When is Key Derivation from Noisy Sources Possible?


1 When is Key Derivation from Noisy Sources Possible?
Benjamin Fuller, Leonid Reyzin, and Adam Smith
Additional work with Ran Canetti, Xianrui Meng, and Omer Paneth
August 26, 2014

2 Outline
Noisy Authentication Sources
Fuzzy Extractors
Limitations of Standard Techniques
Current Work

3 Authenticating Users
Users' private data exists online in a variety of locations.
Must authenticate users before granting access to private data.
Passwords are widely used but guessable.

4 Authenticating Users
Users' private data exists online in a variety of locations.
Must authenticate users before granting access to private data.
Passwords are widely used but guessable.
Are there alternatives to passwords with high entropy (uncertainty)?

5 Physical Unclonable Functions (PUFs) [PappuRechtTaylorGershenfeld02]
Hardware that implements a random function.
Impossible to copy precisely.
Large challenge/response space; on a fixed challenge, responses are close together.
Applications: key storage [TuylsSchrijenSkoricVanGelovenVerhaeghWolters06], proofs of possession, and multi-party computation [OstrovskyScafuroViscontiWadia12].

6 Biometrics
Measure a unique physical phenomenon.
Desired properties: unique, collectable, permanent, universal.
Repeated readings exhibit significant noise; uniqueness and noise vary widely across biometrics.
The human iris is believed to be the "best" source [Daugman04, PrabhakarPankantiJain03].

7 Key Derivation from Noisy Sources
Entropic sources are noisy: the source differs over time. Call the first reading w and later readings x; their distance is bounded, d(w, x) ≤ dmax.
Goal: derive a stable and strong key from the noisy source.
Correctness: w and x map to the same key.
Security: different samples from the source produce independent keys, Gen(w) ≠ Gen(w').
Examples: Physically Unclonable Functions (PUFs) [PappuRechtTaylorGershenfeld02] and biometric data [Daugman04].

To perform cryptographic authentication we need to get a key from somewhere. There are many possible sources for a key: a password, a physical token, a biometric. Often the sources that have enough randomness (entropy) to derive a key are noisy; physically unclonable functions and biometrics are two good examples. We call the initial reading of a particular source w. A source is noisy if a subsequent reading x is not equal to the initial reading, but their distance is bounded. We want a tool that derives a stable, repeatable key from this source: we should be able to produce the key from either w or x. However, to have any notion of security, different samples of the source (e.g., two people's irises) must not map to the same key. So there is an inherent tradeoff between the errors we try to correct and the strength of our resulting key.

8 Outline
Noisy Authentication Sources
Fuzzy Extractors
Limitations of Standard Techniques
Current Work

9 Fuzzy Extractors
Assume our source is strong; traditionally, this means high entropy.
Fuzzy extractors derive reliable keys from noisy data [DodisOstrovskyReyzinSmith04, 08] (interactive setting in [BennettBrassardRobert88]).
Gen(w) outputs (key, p), where p is a public helper value; Rep(x, p) outputs key.
Goals:
Correctness: Gen and Rep produce the same key if d(w, x) ≤ dmax.
Security: (key, p) ≈ (U, p); can be statistical or computational [FullerMengReyzin13].

Fuzzy extractors perform key derivation from such sources non-interactively. We start by assuming that our source is high quality; traditionally this means the source has high min-entropy, denoted H∞. A distribution has min-entropy k if no outcome is too likely, that is, every possible outcome has probability at most 2^(−k). Fuzzy extractors, which derive stable keys from high-min-entropy sources, were introduced by Dodis, Ostrovsky, Reyzin, and Smith in 2004; there was considerable prior research on the interactive version of this problem, introduced by Bennett, Brassard, and Robert in 1988. The basic setting: an algorithm Gen takes the source value w and produces a key, along with a helper value p that exists so we can later reproduce the key. The algorithm Rep accomplishes this: it takes the helper value output by Gen and a new reading x of the source, and if the distance between w and x is small, it produces the same key. In all of our analysis we assume the adversary has access to the Gen and Rep algorithms and the helper value p.
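The min-entropy condition on the source can be computed directly from its probability mass function. A minimal sketch of the definition H∞(W) = −log₂ maxₓ Pr[W = x]; the `pmf` dictionary is an invented toy distribution, not from the talk:

```python
import math

def min_entropy(pmf):
    """H_inf(W) = -log2 of the most likely outcome's probability."""
    return -math.log2(max(pmf.values()))

# Toy distribution: the two most likely outcomes each have probability 1/4,
# so the min-entropy is 2 bits even though there are 8 possible outcomes.
pmf = {0: 0.25, 1: 0.25, 2: 0.125, 3: 0.125,
       4: 0.0625, 5: 0.0625, 6: 0.0625, 7: 0.0625}
print(min_entropy(pmf))  # 2.0
```

A uniform distribution over n-bit strings achieves the maximum possible min-entropy, n bits.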

10 Randomness Extractors
Traditional construction: derive the key using a randomness extractor Ext.
An extractor converts high-entropy sources to uniform: if H∞(W) ≥ k, then Ext(W) ≈ U.

The key is derived using a standard tool called a randomness extractor, which converts every high-min-entropy distribution to (nearly) the uniform distribution. For clarity the extractor's seed is omitted from the diagrams; the seed is added to the public value p. To generate our key, we simply extract from the value w. The Reproduce procedure must generate the same key, so it also runs the extractor; unfortunately, it does not have the value w. The interesting part is how to reproduce the value w so that the extractor can be run in Rep.
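As a concrete illustration of the extractor interface: one standard way to build a seeded extractor is a 2-universal hash, justified by the leftover hash lemma. The prime modulus and function names below are illustrative choices, not from the talk, and the final truncation makes the family only almost-universal, which still suffices here:

```python
import secrets

# Seeded extractor from the 2-universal hash family h_{a,b}(w) = (a*w + b) mod P,
# truncated to m output bits. By the leftover hash lemma, the output is
# eps-close to uniform on m bits whenever H_inf(W) >= m + 2*log2(1/eps).
P = (1 << 127) - 1  # a Mersenne prime; readings are encoded as integers < P

def gen_seed():
    """The extractor seed (a, b); it is public and stored alongside p."""
    return (1 + secrets.randbelow(P - 1), secrets.randbelow(P))

def ext(w: int, seed, m: int) -> int:
    """Extract m nearly uniform bits from a reading w encoded as an int."""
    a, b = seed
    return ((a * w + b) % P) % (1 << m)

seed = gen_seed()
key = ext(123456789, seed, 64)  # the same w and seed always give the same key
```

Determinism given the seed is what lets Rep re-derive the identical key once it has recovered w.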

11 Fuzzy Extractors: Traditional Construction
Derive the key using a randomness extractor.
Error-correct back to w with a secure sketch: Gen runs Sketch(w) to produce p, and Rep runs Rec(x, p) to recover w before extracting.

Reproducing w is accomplished using a tool called a secure sketch. A secure sketch produces a public value p that is used to error-correct w. It consists of a Sketch algorithm, which is run inside Gen, and a Rec algorithm, which takes p and a close value x and reproduces w; you can think of this as an error-correcting procedure for w. Once we have the value w, we can run the extractor and obtain the original key. Next, a little more about how secure sketches work.

12 Secure Sketches: The Code-Offset Sketch
Let G be the generating matrix of a code that corrects dmax errors.
Sketch(w): choose a random c, compute the codeword ec = Gc, and output p = ec ⊕ w.

The sketch described here is called the "code-offset sketch" in the literature. Assume we have an error-correcting code that can correct dmax errors. We start by selecting a random codeword: select a random value c and encode it with the code, ec = Gc. We use ec as a mask for our value w: the public value p is the exclusive-or of ec and the original reading. Remember, we want two properties from p: it should allow recovery from a close value x, and it should not give much information about w.

13 Secure Sketches: Recovery
Rec(x, p): compute ec' = Decode(p ⊕ x); if w and x are close, then w = ec' ⊕ p.

This shows how the first property is fulfilled. The Recover function has only the public value p and the new reading x. It XORs the two values together and runs the decoding procedure of the error-correcting code on p ⊕ x, yielding a value ec'. Since p ⊕ x = ec ⊕ (w ⊕ x), if w and x are within the decoding radius of the code then ec' = ec, and we can recover w = ec' ⊕ p from the close reading x.
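To make the Sketch/Rec pair concrete, here is a minimal sketch of the code-offset construction, using a 3x repetition code (which corrects one flipped bit per 3-bit block) in place of a real code; the parameters and helper names are illustrative:

```python
import secrets

K = 8            # message bits per codeword
N = 3 * K        # codeword length in bits (readings are N-bit integers)

def encode(c: int) -> int:
    """Repetition code: repeat each of the K message bits three times."""
    ec = 0
    for i in range(K):
        bit = (c >> i) & 1
        for j in range(3):
            ec |= bit << (3 * i + j)
    return ec

def decode(y: int) -> int:
    """Majority-decode each 3-bit block back to one message bit."""
    c = 0
    for i in range(K):
        block = (y >> (3 * i)) & 0b111
        if bin(block).count("1") >= 2:
            c |= 1 << i
    return c

def sketch(w: int) -> int:
    """p = encode(c) XOR w for a random message c."""
    c = secrets.randbits(K)
    return encode(c) ^ w

def rec(x: int, p: int) -> int:
    """Recover w from a nearby reading x: w = encode(decode(p XOR x)) XOR p."""
    return encode(decode(p ^ x)) ^ p

w = secrets.randbits(N)
p = sketch(w)
x = w ^ 0b100            # a reading with one flipped bit
print(rec(x, p) == w)    # True
```

The same skeleton works with any linear code; the repetition code is only the simplest choice, and a deployed scheme would use a code matched to the source's error rate.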

14 Secure Sketches: Security
w must remain unpredictable even knowing p; the drop from k to k' bits of entropy is the (k − k') entropy loss.
Ext must be able to extract from distributions with only k' bits of entropy.

Now the second property: p must not give much information about w. Consider a reading w' collected from a different source: p ⊕ w' will not be close to the original codeword, and decoding will give an unrelated value. Formally, W retains high entropy even conditioned on the public value p. This is the main novel contribution of a secure sketch (otherwise we could just provide error-correcting information). Recall the starting entropy was k; we call k − k' the entropy loss of the secure sketch. This value is important because it determines the strength of our key: the extractor must be able to produce a good key from only k' bits of entropy.

15 Implementing Fuzzy Extractors
Need to know: the starting metric space M, the starting entropy, and the desired error tolerance dmax.
Ingredients: a code C that corrects dmax errors, and a hash from M to bit-strings.
Generate(w): select c from C; set key = Hash(w); set p = c ⊕ w; output (key, p).
Reproduce(x, p): set c' = p ⊕ x; decode c = Decode(c'); recover w = p ⊕ c; output key = Hash(w).
Can output a key of length k − (log|M| − log|C|) − 2 log(1/ε).
The losses are significant: no key is left for many sources!
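The Generate/Reproduce pseudocode above can be sketched end to end in Python. The code below uses the simplest possible code, an N-bit repetition code C = {00…0, 11…1}, and SHA-256 with a public salt standing in for the hash; these choices are illustrative, and with |C| = 2 this toy leaves essentially no entropy in the key, so it demonstrates correctness only:

```python
import hashlib
import secrets

N = 15                       # bits per reading
DMAX = (N - 1) // 2          # the N-bit repetition code corrects up to 7 errors

def decode(y: int) -> int:
    """Nearest codeword in C = {0, 2^N - 1}, by majority vote."""
    return (1 << N) - 1 if bin(y).count("1") > N // 2 else 0

def generate(w: int, salt: bytes):
    c = secrets.choice([0, (1 << N) - 1])            # random codeword from C
    key = hashlib.sha256(salt + w.to_bytes(2, "big")).digest()
    p = c ^ w                                        # the public helper value
    return key, p

def reproduce(x: int, p: int, salt: bytes):
    c = decode(p ^ x)                                # error-correct to c
    w = p ^ c                                        # recover the reading
    return hashlib.sha256(salt + w.to_bytes(2, "big")).digest()

salt = secrets.token_bytes(16)                       # public, stored with p
w = secrets.randbits(N)
key, p = generate(w, salt)
x = w ^ 0b101                                        # two flipped bits <= DMAX
print(reproduce(x, p, salt) == key)                  # True
```

The key-length formula on the slide shows why this toy has no security: k − (log|M| − log|C|) = k − (15 − 1), which is nonpositive for any 15-bit source.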

16 Outline
Noisy Authentication Sources
Fuzzy Extractors
Limitations of Standard Techniques
Current Work

17 Entropy Loss From Fuzzy Extractors
Entropy is at a premium for physical sources:
Iris ≈ 249 bits [Daugman96]
Fingerprint ≈ 82 bits [RathaConnellBolle01]
Passwords ≈ 31 bits [ShayKomanduri+10]
PUFs [KoeberlLiRajanWu14]
Entropy loss is considered in the information-theoretic setting (all-powerful adversary).
Fuzzy extractors have two losses:
Secure sketches lose at least the error-correcting capability of the code (k − k'); for the iris this is ≈ 200 bits, due to its error rate.
Randomness extractors lose 2 log(1/ε) bits.
After these losses, there may not be any key left!

Let's dig into the entropy loss a little more. Most biometrics have a couple hundred bits of entropy; these sources are not overflowing with entropy, so we cannot afford entropy losses without consideration. Entropy loss for fuzzy extractors is measured against an information-theoretic adversary, primarily because the construction uses two information-theoretic primitives. The loss comes from two sources. The secure sketch, built on an error-correcting code, loses at least the error-correcting capability of the code; in practice this loss may be considerably higher. Randomness extractors give a distribution that is close to uniform, with some error ε measuring the distance from uniform; the length of the extracted key shrinks by 2 log(1/ε). The point of all this is that we may not have enough bits for a key after these losses, and the main question of this work is whether we can eliminate them. In the information-theoretic setting we already have an answer: any secure sketch that corrects dmax errors implies an error-correcting code that corrects dmax errors, and vice versa, so all bounds on the rate of a code carry over to secure sketches. One such bound is the sphere-packing bound, which says the entropy loss must be at least the log-volume of the ball we are trying to correct. Since we are measuring entropy loss in an information-theoretic setting, maybe we can do better by considering computationally bounded adversaries.
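The accounting above can be made concrete. Using the iris numbers from the slide (k ≈ 249, ≈ 200 bits lost to error correction) and an assumed extractor error of ε = 2⁻⁸⁰ (a typical target, not a figure from the talk):

```python
import math

# Entropy accounting for the iris: k = 249 bits [Daugman96], roughly 200
# bits lost to error correction, and 2*log2(1/eps) lost to extraction.
k = 249
sketch_loss = 200                  # at least the error-correcting capability
ext_loss = 2 * math.log2(2**80)    # 2*log(1/eps) = 160 bits for eps = 2^-80
remaining = k - sketch_loss - ext_loss
print(remaining)                   # -111.0: nothing is left for the key
```

Even a much weaker ε = 2⁻²⁰ would leave only 9 bits, far too short for a key.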

18 Entropy Loss From Fuzzy Extractors (continued)
Can we eliminate either of these entropy losses?
[DodisOstrovskyReyzinSmith]: a secure sketch correcting dmax errors is equivalent to an error-correcting code correcting dmax random errors.
This means k − k' ≥ log|B_dmax| (the ball of radius dmax).
(Figure: the current situation for PUFs and biometrics.)

19 Outline
Noisy Authentication Sources
Fuzzy Extractors
Limitations of Standard Techniques
Current Work

20 When is security possible?
Some distributions are inherently insecure: if points are close together, no security is possible, since by providing x* to Rep the adversary always learns the key.
Ideal world: possible using multi-party computation (in the interactive setting) or obfuscation (under very strong assumptions) [BitanskyCanettiKalaiPaneth14].

We answer this question for both secure sketches and fuzzy extractors. For secure sketches we show the answer is no: defining a secure sketch with a computational adversary is not helpful (what this means precisely is made clear below). For fuzzy extractors, we provide an affirmative answer: we construct a lossless computational fuzzy extractor based on the Learning with Errors (LWE) problem. Along the way we extend the hardness of LWE to the case when some dimensions have known error.

21 When is security possible?
By providing x* to Rep, the adversary always learns the key for points near x*.
Hope: provide a strong key whenever only a negligible fraction of the probability mass is within distance dmax of any single point.

22 When is security possible?
New entropy notion! Fuzzy min-entropy measures the maximum-weight ball of a probability distribution: H^fuzz(W) = −log maxₓ Pr[W ∈ B_dmax(x)].

23 When is security possible?
Goal: a good key for any distribution with super-logarithmic fuzzy min-entropy.
Feasibility: consider information-theoretic constructions.
Two settings:
Distribution-aware: algorithms encode the probability mass function (pmf) of the source.
Distribution-oblivious: algorithms must work for a family of distributions.
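Fuzzy min-entropy, the weight of the heaviest ball, can be computed by brute force on toy distributions. This exponential-time sketch is for illustration only; the function names and example distribution are invented:

```python
import math

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def fuzzy_min_entropy(pmf, n, dmax):
    """-log2 of the heaviest Hamming ball of radius dmax (brute force)."""
    best = max(
        sum(p for w, p in pmf.items() if hamming(w, center) <= dmax)
        for center in range(2 ** n)
    )
    return -math.log2(best)

# Uniform over 4-bit strings: every radius-1 ball holds 5 of the 16 points,
# so the fuzzy min-entropy is -log2(5/16), about 1.68 bits, compared with
# 4 bits of ordinary min-entropy.
pmf = {w: 1 / 16 for w in range(16)}
print(fuzzy_min_entropy(pmf, n=4, dmax=1))
```

The gap between the two entropy notions is exactly the mass that error tolerance forces the key deriver to give up.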

24 Key derivation from fuzzy min-entropy?
The question splits into a 2×2 grid: {secure sketches, fuzzy extractors} × {distribution-aware, distribution-oblivious}.
Secure sketches: error correction without leaking information; a standard transform turns a secure sketch into a fuzzy extractor.
Fuzzy extractors: combine error correction and key derivation.
Distribution-aware: the algorithms encode the probability mass function (pmf) of the source. Distribution-oblivious (the standard setting): the algorithms know only some properties of the source distribution and must work for a family of distributions.

25 Key derivation from fuzzy min-entropy?
Thm: For any distribution W with fuzzy min-entropy, there exists a (distribution-aware) secure sketch (and fuzzy extractor) with [entropy bound given as a formula on the slide].

26 Distribution-Aware
Sketch knows the probability mass function of W.
Consider an initial reading w; the Recover algorithm will receive a nearby point x.
The sketch must disambiguate nearby points: it consists of a description of which nearby point was seen.
Problem: a ball can contain an unbounded number of points that occur rarely.
Idea: write down the probability of the original w, and limit Recover to points with this probability.

27 Distribution-Aware
Sketch knows the pmf of W and limits Recover to points with the recorded probability of w.
Correcting on the unlikely points may reveal w.
The sketch is correct for all of W; the adversary can ignore low-probability points.

28 Key derivation from fuzzy min-entropy?
Thm (distribution-aware): For any distribution W there exists a secure sketch.

29 Key derivation from fuzzy min-entropy?
Thm (distribution-oblivious): There is a family of distributions V, where all members have fuzzy min-entropy, such that any secure sketch for V removes all entropy.

30 Distribution-Oblivious
The adversary selects which distribution (color) will be provided.
Sketch supports a family V; after Sketch is fixed, the adversary selects a distribution W from V.
Sketch does not know which distribution it is being asked to correct; it only receives a sample from the distribution.
Sketch imposes constraints on W. The constraints depend only on the received sample and are independent of the rest of W's pmf.
When W has little entropy, the Sketch's constraints remove all of it.

31 Distribution-Oblivious
Viewed as the adversary's search space: maybe those were "bad" constraints?

32 Distribution-Oblivious
The Sketch might instead have imposed alternative constraints.

33 Distribution-Oblivious
Every Sketch must create constraints that are independent of the color, leaving few points of each color in the adversary's search space.

34 Conclusion
Distribution-aware: key derivation is possible for any distribution with fuzzy min-entropy when the algorithms know the pmf.
Distribution-oblivious: there exist families of distributions with no secure sketch.
The negative result extends to computational secure sketches defined using pseudoentropy, via [FullerMengReyzin13].
One can build a computational secure sketch that provides unpredictability [BitanskyCanettiKalaiPaneth14].
We have a fuzzy extractor for this family [CanettiFullerPanethReyzinSmith14].
Open question: are there families of distributions where fuzzy extraction is not possible? There is evidence that constructing fuzzy extractors is easier than constructing secure sketches.
Questions?

