1
Computational Fuzzy Extractors
Benjamin Fuller, Xianrui Meng, and Leonid Reyzin
December 2, 2013
2
Key Derivation from Noisy Sources
- High-entropy sources are often noisy: the source value changes over time, so w0 ≠ w1.
- Assume a bound on the distance: d(w0, w1) ≤ dmax (Hamming distance in this talk).
- We want to derive a stable key from the noisy source:
  - w0 and w1 should map to the same key;
  - the key should be cryptographically strong (appear uniform to the adversary).
- Example sources: Physically Unclonable Functions (PUFs), biometric data.
- Goal of this talk: provide meaningful security for more sources.
3
Fuzzy Extractors
- Assume the source has min-entropy k (no w is likelier than 2^−k).
- There is lots of work on reliable keys from noisy data [BennettBrassardRobert85], ...
- Our formalism: fuzzy extractors [DodisOstrovskyReyzinSmith04].
- Correctness: Gen and Rep produce the same key if d(w0, w1) ≤ dmax.
- Security: (key, p) ≈ (U, p).
[Diagram: Gen(w0) outputs (key, p); Rep(w1, p) outputs key; p is public.]
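For reference, the two conditions can be written out explicitly; this is the standard formalization of [DodisOstrovskyReyzinSmith04], restated:

```latex
% Correctness: for all w_0, w_1 within distance d_max,
d(w_0, w_1) \le d_{\max} \;\Rightarrow\;
\Pr\big[\mathsf{Rep}(w_1, p) = \mathsf{key} : (\mathsf{key}, p) \leftarrow \mathsf{Gen}(w_0)\big] = 1
% Security: for any source W_0 of min-entropy at least k,
H_\infty(W_0) \ge k \;\Rightarrow\; (\mathsf{key}, p) \approx_{\varepsilon} (U_{\ell}, p)
% where U_ell is uniform over {0,1}^ell and the approximation is
% statistical distance at most epsilon.
```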
4
(Same slide, with the typical construction added.)
- Typical construction: derive the key using a randomness extractor Ext.
- A randomness extractor converts high-entropy sources to uniform: if H∞(W0) ≥ k, then Ext(W0) ≈ U.
[Diagram: Gen applies Ext to w0; Rep applies Ext to w1.]
5
Fuzzy Extractors
(Same slide, completing the typical construction.)
- Typical construction: derive the key using a randomness extractor; correct errors using a secure sketch.
[Diagram: Gen computes p = Sketch(w0) and key = Ext(w0); Rep computes w0 = Rec(w1, p) and then key = Ext(w0).]
6
Secure Sketches
Code-offset sketch: let G generate a code that corrects dmax errors, pick a random codeword c = Gx, and publish p = c ⊕ w0.
7
- Secure sketches guarantee a bound on the entropy reduction: the loss is at most the redundancy of G, so we extract from a distribution of reduced entropy.
- Recovery: if w0 and w1 are close, c' = Dec(p ⊕ w1) = c, and then w0 = c' ⊕ p.
- Note: p reveals information about w0. The loss according to the redundancy of G is not a feature of this particular sketch; it applies to all sketches.
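As a concrete toy instance, here is a minimal Python sketch of the code-offset construction with a 3-repetition code playing the role of G (it corrects one flipped bit per 3-bit block); this is purely illustrative, not a code anyone would deploy:

```python
import random

def enc(msg):                        # repetition-3 encoder: k bits -> 3k bits
    return [b for bit in msg for b in (bit, bit, bit)]

def dec(bits):                       # majority decode: 3k bits -> k bits
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sketch(w0):                      # p = c XOR w0 for a random codeword c
    c = enc([random.randint(0, 1) for _ in range(len(w0) // 3)])
    return xor(c, w0)

def rec(w1, p):                      # c' = Dec(p XOR w1); w0 = c' XOR p
    c_prime = enc(dec(xor(p, w1)))   # decode to the nearest codeword
    return xor(c_prime, p)

# If w1 differs from w0 in at most one bit per 3-bit block,
# rec(w1, p) recovers w0 exactly.
```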
8
Entropy Loss From Fuzzy Extractors
- Entropy is at a premium for physical sources:
  - Iris: ≈249 bits [Daugman1996]
  - Fingerprint: ≈82 bits [RathaConnellBolle2001]
  - Passwords: ≈31 bits [ShayKomanduri+2010]
- For the above construction of fuzzy extractors, with the standard analysis:
  - Secure-sketch loss = redundancy of the code ≥ its error-correcting capability; this loss is necessary for an information-theoretic sketch [Smith07, DORS08].
  - Randomness-extractor loss ≥ 2 log(1/ε).
- Can we improve on this? One approach: define secure sketches/fuzzy extractors computationally, i.e., give up on security against all-powerful adversaries and consider computational ones.
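To see how tight the budget is, take an illustrative security level of ε = 2^−64 (an assumption for this example, not a parameter from the talk):

```latex
\text{total loss} \;\ge\;
\underbrace{(\text{redundancy of the code})}_{\text{secure sketch}}
\;+\; \underbrace{2\log(1/\varepsilon)}_{\text{extractor}},
\qquad \varepsilon = 2^{-64} \;\Rightarrow\; 2\log(1/\varepsilon) = 128\text{ bits}.
```

The extractor term alone already consumes more than half of an iris's ≈249 bits, before any sketch loss is paid.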
9
Can we do better in computational setting?
Our results:
- For secure sketches: NO. We show that defining a secure sketch in the computational setting does not improve the entropy loss.
- For fuzzy extractors: YES. We construct a lossless computational fuzzy extractor based on the Learning with Errors (LWE) problem.
- Caveat: this result shows only the feasibility of a different construction and analysis; we do not claim a specific set of parameters that beats the traditional construction.
10
Computational Secure Sketches
- Information-theoretic goal: bound H∞(W0 | p). Computational goal: bound Hcomp(W0 | p).
- Can we improve on this computationally? What does Hcomp(W0 | p) mean?
- Most natural requirement: (W0, p) is indistinguishable from some (Y, p) with H∞(Y | p) ≥ k. This is known as HILL entropy [HåstadImpagliazzoLevinLuby99].
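One standard way to make this precise (a conditional variant of the HILL definition; exact quantifiers vary across papers, so treat this as one reasonable reading):

```latex
H^{\mathrm{HILL}}_{\varepsilon, s}(W_0 \mid p) \ge k
\;\iff\;
\exists\, Y \text{ with } \tilde H_\infty(Y \mid p) \ge k \text{ such that no circuit of size } s
\text{ distinguishes } (W_0, p) \text{ from } (Y, p) \text{ with advantage greater than } \varepsilon.
```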
11
Computational Secure Sketches
(Same slide as before.) Good news: extractors yield pseudorandom keys from HILL entropy, so it would suffice to show H_HILL(W0 | p) ≥ k.
12
HILL Secure Sketches ⇒ Secure Sketches
- Our theorem: if H_HILL(W0 | p) ≥ k, then there exists an error-correcting code C with 2^(k−2) points such that Rec corrects dmax random errors on C.
- Corollary (using the secure sketch of [Smith07]): if there exists a sketch with HILL entropy k, then there exists a sketch with true entropy k − 2.
- Proof idea: we can fix a value of p for which Rec functions as a good decoder for W0; by indistinguishability, Rec must also decode on Y, and Y is large.
13
Can we do better in computational setting?
- For secure sketches: NO. A sketch that retains HILL entropy implies an information-theoretic sketch.
- For fuzzy extractors: YES. But we can't just make the sketch "computational"; what other approaches are there?
14
Building a Computational Fuzzy Extractor
We can't just work with a sketch.
[Diagram: the sketch-and-extract construction from before.]
15
Building a Computational Fuzzy Extractor
What about an extractor that outputs pseudorandom bits?
- Computational extractors convert high-entropy sources to pseudorandom bits [Krawczyk10].
- Natural construction: Cext(w0) = PRG(Ext(w0)).
- Extensions: [DodisYu13, DachmanSoledGennaroKrawczykMalkin12, DodisPietrzakWichs13].
- All require enough residual entropy after the sketch to run crypto! See [DachmanSoledGennaroKrawczykMalkin12] for conditions.
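A hedged sketch of Cext(w0) = PRG(Ext(w0)) with off-the-shelf stand-ins: HMAC-SHA256 keyed by a public random seed plays the extractor (a common heuristic, not the information-theoretic extractor the theory assumes), and SHAKE-256 plays the PRG. The primitive choices are my illustration, not the paper's:

```python
import hashlib, hmac, os

def cext(w: bytes, seed: bytes, out_len: int = 64) -> bytes:
    """Computational extractor: condense the source, then expand."""
    condensed = hmac.new(seed, w, hashlib.sha256).digest()  # plays Ext(w)
    return hashlib.shake_256(condensed).digest(out_len)     # plays PRG

seed = os.urandom(16)                    # extractor seed: public, part of p
key = cext(b"noisy-source-reading", seed)
```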
16
Building a Computational Fuzzy Extractor
We'll try to combine a sketch and an extractor:
- Base the construction on the code-offset sketch, instantiated with a random linear code: c = Gx, p = c ⊕ w0.
- Base security on Learning with Errors (LWE).
17
Learning with Errors
[Diagram: b = Ax + w0, where A is an m × n matrix, x is the secret, and w0 is the error; p = (A, b).]
- Recovering x is known as the learning with errors problem; [Regev05] shows that solving LWE implies approximating lattice problems.
- LWE error distribution = source distribution W0.
- We need an error distribution for which LWE is hard; we start from a result of [DöttlingMüllerQuade13] and make some progress.
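For concreteness, here is how an instance of this shape might be generated, with the source reading w0 playing the error vector (toy parameters chosen only for illustration, nowhere near a secure size):

```python
import numpy as np

q, n, m = 1009, 16, 64                 # toy parameters, not secure
rng = np.random.default_rng()

A = rng.integers(0, q, size=(m, n))    # public random matrix
x = rng.integers(0, q, size=n)         # secret
w0 = rng.integers(-4, 5, size=m)       # error = source reading (small)
b = (A @ x + w0) % q
p = (A, b)                             # the public helper value
```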
18
Learning with Errors
[Diagram: as on the previous slide.]
- Recovering x is known as learning with errors; [Regev05] shows solving LWE implies approximating lattice problems.
- LWE error distribution = source distribution W0.
- [AkaviaGoldwasserVaikuntanathan09] show that if LWE is secure on n/2 variables, any additional variables are hardcore.
19
Learning with Errors
[Diagram: A split into halves A1, A2 and x into x1, x2 (n/2 variables each); b = Ax + w0.]
- p = (A, b), where b = Ax + w0.
- Recovering x is known as learning with errors; [Regev05] shows solving LWE implies approximating lattice problems.
- [AkaviaGoldwasserVaikuntanathan09] show that if LWE is secure on n/2 variables, any additional variables are hardcore: x2 | (A, b) is pseudorandom.
20
Our Construction
[Diagram: A split into A1, A2 and x into x1, x2; b = Ax + w0.]
- Source: w0. Key: x2 (n/2 entries). Public: p = (A, b), where b = Ax + w0.
- Recovering x is known as learning with errors; [Regev05] shows solving LWE implies approximating lattice problems.
- [AkaviaGoldwasserVaikuntanathan09] show that if LWE is secure on n/2 variables, any additional variables are hardcore: x2 | (A, b) is pseudorandom.
21
Our Construction
[Diagram: Gen(w0) samples A and x, sets b = Ax + w0, and outputs key = x2 and p = (A, b).]
- Q: How are we avoiding our negative results?
- A: We don't extract from w0 (we are not aware of any entropy notion under which w0 | (A, b) has high entropy). Instead, we use secret randomness x and hide it using w0.
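A minimal pure-Python sketch of Gen under this design (toy parameters; taking the second half of x as the key follows the hardcore-bits argument above):

```python
import random

def gen(w0, q, n, m):
    """Gen(w0): sample A, x; publish p = (A, b) with b = A x + w0 mod q.
    The key is x2, the second half of x, hardcore given (A, b)."""
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    x = [random.randrange(q) for _ in range(n)]
    b = [(sum(A[i][j] * x[j] for j in range(n)) + w0[i]) % q
         for i in range(m)]
    return x[n // 2:], (A, b)          # (key, p)
```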
22
Our Construction
[Diagram: as on the previous slide, but now Rep receives w1 and p = (A, b). How does Rep recover key = x2?]
23
Rep
[Diagram: b − w1 = Ax + (w0 − w1).] Rep has A and something close to Ax.
This is a decoding problem (the same one as in the traditional construction). Decoding random linear codes is hard, but possible for small distances. (We can't use an LWE trapdoor, because there is no secret storage.)
24
Rep
[Diagram: b − w1 = Ax + (w0 − w1).] Example algorithm for logarithmically many errors (continued on the next slide):
25
Rep
[Diagram: b − w1 = Ax + (w0 − w1).] Example algorithm for logarithmically many errors:
1. Select n random samples (hopefully, they contain no errors).
2. Solve the linear system for x on these samples.
3. Verify the correctness of x using the other samples.
4. Repeat until successful.
(A Python sketch of this loop follows.)
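The following sketch fills in the details under stated assumptions: q prime, A given as an m × n list of lists over Z_q, and b_shifted = b − w1 mod q differing from Ax in at most dmax coordinates; solve_mod_q is plain Gaussian elimination over Z_q:

```python
import random

def solve_mod_q(M, v, q):
    """Solve the square system M x = v (mod q), q prime; None if singular."""
    n = len(M)
    aug = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = next((r for r in range(col, n) if aug[r][col] % q), None)
        if piv is None:
            return None                       # singular: caller resamples
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = pow(aug[col][col], -1, q)       # modular inverse (Python 3.8+)
        aug[col] = [(a * inv) % q for a in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(a - f * c) % q for a, c in zip(aug[r], aug[col])]
    return [aug[i][n] for i in range(n)]

def rep_decode(A, b_shifted, q, n, dmax, trials=10_000):
    """Recover x from b_shifted = A x + (w0 - w1) mod q, where w0 - w1
    is nonzero in at most dmax coordinates."""
    m = len(A)
    for _ in range(trials):
        rows = random.sample(range(m), n)     # hope: no error rows chosen
        x = solve_mod_q([A[r] for r in rows],
                        [b_shifted[r] for r in rows], q)
        if x is None:
            continue
        agree = sum(sum(A[r][j] * x[j] for j in range(n)) % q == b_shifted[r]
                    for r in range(m))
        if agree >= m - dmax:                 # consistent up to dmax errors
            return x
    return None
```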
26
Our Construction
[Diagram: Gen(w0): b = Ax + w0, key = x2, p = (A, b). Rep(w1, p): decode b − w1 = Ax + (w0 − w1) to recover x, then key = x2.]
- We can correct as many errors as can be efficiently decoded for a random linear code (our algorithm: logarithmically many).
- Each dimension of w0 can be sampled with a fraction of the bits needed for each dimension of x (i.e., we can protect x using fewer than |x| bits).
- So we can get as many bits in the key as in w0: lossless! The key length doesn't depend on how many errors are being corrected.
- Intuition: the key is encrypted by w0, and decryption tolerates noise.
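Putting the gen and rep_decode sketches above together end to end (toy parameters again; real parameters must come from the LWE security analysis, which this sketch does not attempt):

```python
q, n, m, dmax = 1009, 8, 48, 2
w0 = [random.randrange(q) for _ in range(m)]        # toy uniform source
key, (A, b) = gen(w0, q, n, m)

w1 = list(w0)                                       # noisy re-reading of w0
for i in random.sample(range(m), dmax):
    w1[i] = random.randrange(q)

b_shifted = [(b[i] - w1[i]) % q for i in range(m)]  # = A x + (w0 - w1)
x = rep_decode(A, b_shifted, q, n, dmax)
assert x is not None and x[n // 2:] == key          # same key recovered
```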
27
Conclusion
- Fuzzy extractors and secure sketches suffer from entropy losses in the information-theoretic setting, which may keep the resulting key from being useful.
- What about the computational setting?
  - Negative result: entropy loss is inherent for secure sketches. (Plus additional results about the unpredictability of W0 | p.)
  - Positive result: a lossless computational fuzzy extractor from the Learning with Errors problem, for Hamming distance, with logarithmically many errors and a restricted class of sources (secure LWE error distributions).
28
Open Problems
- Improve error tolerance.
- Handle additional source distributions. (Using the result of Micciancio and Peikert, we get security for all slightly deficient distributions.)
- Beat the information-theoretic constructions on practical parameter sizes.
- Other computational assumptions?

Questions?
29
Backups
30
Our Construction
[Diagram: Gen and Rep as before.]
Theorem: If dmax = O(log n) and W is uniform, our construction
1) is lossless,
2) runs in expected polynomial time, and
3) yields a pseudorandom key, assuming GapSVP and SIVP are hard to approximate to within polynomial factors.
31
Our Construction
[Diagram: A split into A1, A2 and x into x1, x2; b = Ax + w0.]
- Source: w0. Key: x2. Public: p = (A, b), where b = Ax + w0.
- Recovering x is known as learning with errors; [Regev05] shows solving LWE implies approximating lattice problems.
- The error w0 is drawn from a Gaussian distribution (per coordinate).
- [AkaviaGoldwasserVaikuntanathan09] show that if LWE is secure on n/2 variables, any additional variables are hardcore: x2 | (A, b) is pseudorandom.
32
Our Construction
[Diagram: Gen and Rep as before.]
- It is unlikely that w0 comes from the correct (Gaussian) error distribution.
- Can we use w0 to sample a coordinate-wise Gaussian? Each coordinate requires a variable number of bits to sample, and it is hard to do that in a noise-tolerant way, because all of the bits shift.
[Illustration: w0 split into variable-width chunks.]
33
Our Construction
[Diagram: Gen and Rep as before.]
- It is unlikely that w0 comes from the correct (Gaussian) distribution.
- Recent results of [DöttlingMüllerQuade13, MicciancioPeikert13]: LWE is secure with the error drawn uniformly from a small interval.
- Now differences between w0 and w1 translate into the same number of differing error coordinates. (We'll talk about Rep in a minute.)
[Illustration: w0 split into fixed-width chunks.]
34
Can we do better in computational setting?
Our results:
- For secure sketches: NO. We show that defining a secure sketch in the computational setting does not improve the entropy loss.
- For fuzzy extractors: YES. We construct a lossless computational fuzzy extractor for uniform sources based on the Learning with Errors (LWE) problem.
- To make it work on distributions other than uniform, we extend the hardness of LWE to the case where some dimensions have known error (symbol-fixing error sources).
35
Symbol-Fixing Sources
[Diagram: w0 with some blocks fixed and the rest uniform.]
A source W0 is a symbol-fixing source if each block W0,i of W0 is either:
1) a fixed value, or
2) uniformly distributed.
Let α be the number of blocks that are fixed. (A toy sampler follows.)
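A toy sampler matching this definition, with one Z_q symbol per block (illustrative only; the names are mine):

```python
import random

def sample_symbol_fixing(m, q, fixed):
    """fixed maps block index -> fixed value (alpha = len(fixed));
    every other block is uniform over Z_q."""
    return [fixed[i] if i in fixed else random.randrange(q)
            for i in range(m)]

w0 = sample_symbol_fixing(m=8, q=1009, fixed={0: 5, 3: 42})
```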
36
LWE with Fixed Errors
[Diagram: b = Ax + (w0, e), where e is the fixed part of the error; source w0, key x2, public (A, b).]
Def: α-fixed LWE is standard LWE except that α dimensions have a fixed (and adversarially known) error value.
37
LWE with Fixed Errors
[Diagram: as above.]
Our theorem: security of LWE with matrices of dimension (n, m) implies security of α-fixed LWE with matrices of dimension (n + α, m + α).
38
LWE with Fixed Errors
[Diagram: as above.]
Corollary (applying [AkaviaGoldwasserVaikuntanathan09]): if LWE is secure on n/3 variables, our construction is a computational fuzzy extractor for α-block-fixing sources with α < n/3.
39
Theorem: Security of LWE with matrices of dimension (n, m) implies security of α-fixed LWE with matrices of dimension (n + α, m + α).
40
Assume: D distinguishes (A, Ax + w0) from (A, U), where the last α samples of w0 have no error.
Goal: build D' that distinguishes (A', A'x' + e) from (A', U'), where e is from the error distribution.
[Diagram: D's instance has dimension (n + α, m + α).]
41
[Diagram: D' receives a standard (n, m) LWE challenge (A', b') and must build an (n + α, m + α) instance for D.]
Assume: D distinguishes (A, Ax + w0) from (A, U), where the last α samples of w0 have no error.
Goal: build D' that distinguishes (A', A'x' + e) from (A', U'), where e is from the error distribution.
42
[Diagram: D' extends the challenge (A', b') with α extra rows (R ‖ S) and uniformly random right-hand sides $, introducing α new variables x3.]
- We know the last α error terms are fixed at 0, so we generate the last α samples uniformly at random; the new free variables x3 "explain" those samples: for R, S uniformly random, x3 is the solution to Rx' + Sx3 = $.
- We randomize the matrix and samples using the rows with no error: add a random multiple of each row of (R ‖ S) to each row of A'.
- The main issues are ensuring that we have a valid solution x' ‖ x3 and producing a random matrix.
43
[Diagram: the completed reduction.]
Theorem: If LWE is secure on (A', A'x' + e), then LWE is secure on (A, Ax + w0).
44
Our Construction
[Diagram: Gen and Rep as before.]
Theorem: If dmax = O(log n) and W is uniform or symbol-fixing, our construction
1) is lossless,
2) runs in expected polynomial time, and
3) yields a pseudorandom key, assuming GapSVP and SIVP are hard to approximate to within polynomial factors.