Slide 1: Correcting Errors Without Leaking Partial Information
Yevgeniy Dodis (New York University), Adam Smith (Weizmann Institute)
To appear in STOC 2005. http://theory.csail.mit.edu/~asmith
Slide 2: Reconciling Errors in a Shared String
[Diagram: Alice holds w, Bob holds w′ with w ≈ w′; Alice sends a "sketch" S(w); Bob recovers w.]
- Bob recovers w when w, w′ differ in ≤ τ bits
- How little can S(w) reveal about w?
- Trivial solution: S(w) = w reveals everything
- Previous work: bound the entropy H(W | S(W)); this does not rule out learning parts of w (e.g., its first 5 bits)
- Can S(w) reveal nothing about w?
Slide 3: This Paper
We construct S(W) which hides all functions f(W) when the entropy of W is high.
[Diagram: as before, plus: any predictor that guesses f(w) from S(w) is matched (≈) by a simulator that guesses f(w) without seeing S(w).]
Applications:
1. Noisy passwords in crypto
2. "Bounded storage" model
3. Weak code obfuscation
Slide 4: Outline
- Example of sketch via "code offset": background on codes; "code offset" construction and syndromes
- Definition of secrecy ("entropic" security)
- Connection to randomness extraction
- Main construction
- Applications
Slide 5: Error-Correcting Codes
- Encoding ECC: k bits → n bits (n > k)
- Any two codewords differ in at least d bits
- Balls of radius < d/2 are disjoint ⇒ corrects < d/2 errors
- Poly-time decoding algorithms known only for a few specific codes
[Diagram: original codeword, corrupted codeword, and minimum distance d.]
Slide 6: Example: "Code Offset" Construction [BBR, Cré, JW]
S(w) = w ⊕ ECC(R), where R is a random string.
How does Bob recover w given w′?
[Diagram: w and w′ lie within distance d of the codeword ECC(R); S(w) is the offset between w and ECC(R).]
Slide 7: Example: "Code Offset" Construction [BBR, Cré, JW]
Given S(w) = w ⊕ ECC(R) and w′ close to w:
1. Compute w′ ⊕ S(w)
2. Decode to get ECC(R)
3. Compute w = S(w) ⊕ ECC(R)
This corrects < d/2 errors.
Q: Why bother? Just send w! A: To reveal less information.
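To make the three steps concrete, here is a minimal runnable sketch, with a toy 3-repetition code standing in for a real ECC (so it corrects one flip per 3-bit block); the names `ecc`, `sketch`, and `recover` are ours, not the paper's:

```python
import secrets

K = 4                                 # message bits; codewords have n = 3K bits

def ecc(r):                           # toy ECC: repeat each bit 3 times (d = 3)
    return [b for b in r for _ in range(3)]

def decode(y):                        # nearest codeword: majority vote per block
    return [int(sum(y[i:i + 3]) >= 2) for i in range(0, len(y), 3)]

def xor(a, b):
    return [p ^ q for p, q in zip(a, b)]

def sketch(w):                        # S(w) = w XOR ECC(R), R random
    return xor(w, ecc([secrets.randbelow(2) for _ in range(K)]))

def recover(s, w_prime):
    r = decode(xor(w_prime, s))       # steps 1-2: w' XOR S(w) decodes to R
    return xor(s, ecc(r))             # step 3: w = S(w) XOR ECC(R)

w = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
w_prime = w[:]; w_prime[5] ^= 1       # Bob's copy, one bit flipped
assert recover(sketch(w), w_prime) == w
```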
Slide 8: Example: "Code Offset" Construction [BBR, Cré, JW]
For S(w) = w ⊕ ECC(R):
H∞(W | S(W)) = H∞(W, R | S(W)) ≥ H∞(W) + H∞(R) - |S(W)| = H∞(W) + k - n
Revealing n bits costs ≤ n bits of entropy.
Entropy loss = n - k = redundancy of the code.
Slide 9: Linear Codes
- Encoding ECC: k bits → n bits (n > k); any two codewords differ in at least d bits
- Linear code: C = Im(ECC) is a linear subspace of Z_2^n, so H·ECC(x) = 0 for every message x
- Parity-check matrix H ∈ {0,1}^((n-k)×n): C = ker(H) = {y : Hy = 0^(n-k)}
- Hy = "syndrome of y": if y = ECC(x) ⊕ e, then Hy = He, so Hy determines the error pattern (the e such that y ⊕ e ∈ C)
- Decoding problem: given Hy, find the lightest e such that He = Hy
Slide 10: Example: 7-bit Hamming Code
ECC: 4 bits → 7 bits; distance d = 3 ⇒ corrects 1 error.
The parity-check matrix has the binary representation of j as its j-th column:

H = [ 0 0 0 1 1 1 1
      0 1 1 0 0 1 1
      1 0 1 0 1 0 1 ]

Suppose y is ECC(x) with its i-th bit flipped, i.e., y = ECC(x) ⊕ e_i. Then
Hy = H(ECC(x) ⊕ e_i) = H·e_i = binary representation of i.
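A quick runnable check of this example (a sketch we added; positions are 1-indexed as on the slide):

```python
# Columns of H are the binary representations of 1..7 (MSB in the top row).
H = [[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)]   # 3 x 7

def syndrome(y):                      # Hy over GF(2)
    return [sum(h[j] * y[j] for j in range(7)) % 2 for h in H]

y = [0] * 7                           # start from a codeword (0^7 is one) ...
y[4] ^= 1                             # ... and flip bit i = 5 (1-indexed)
assert syndrome(y) == [1, 0, 1]       # 101 = binary representation of 5
```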
Slide 11: Syndrome Sketch (state of the art, [1970s])
Replace S(w) = w ⊕ ECC(R) with S(w) = syndrome of w.
Claim: For linear codes, S(w) = Hw is equivalent.
Proof: (a) H(w ⊕ ECC(R)) = Hw; (b) y = w ⊕ ECC(R) is a random solution to Hy = Hw. QED
Bob's decoding algorithm:
1. Compute syndrome(w ⊕ w′) = Hw′ ⊕ (received message)
2. Decode to obtain e = w ⊕ w′
3. Output w′ ⊕ e
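Putting the claim and Bob's algorithm together on the Hamming code from the previous slide (our own minimal walkthrough, not the paper's code):

```python
H = [[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)]   # 3 x 7 parity check

def syndrome(y):
    return [sum(h[j] * y[j] for j in range(7)) % 2 for h in H]

def recover(s_w, w_prime):            # Bob, given S(w) = Hw and his copy w'
    s_e = [a ^ b for a, b in zip(syndrome(w_prime), s_w)]      # H(w XOR w')
    e = [0] * 7
    if any(s_e):                      # for Hamming: syndrome = flip position
        e[(s_e[0] << 2 | s_e[1] << 1 | s_e[2]) - 1] = 1
    return [a ^ b for a, b in zip(w_prime, e)]                 # w = w' XOR e

w = [1, 0, 1, 1, 0, 1, 0]
w_prime = w[:]; w_prime[2] ^= 1       # one bit of noise
assert recover(syndrome(w), w_prime) == w
```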
Slide 12: Outline
- Example of sketch via "code offset": background on codes; "code offset" construction and syndromes
- Definition of secrecy ("entropic" security)
- Connection to randomness extraction
- Main construction
- Applications
Slide 13: Defining Secrecy
Suppose Eve sees S(w). How much does she learn?
Previous work measures entropy loss: prove that H∞(W | S(W)) is high. This does not prevent leaking particular functions of W; e.g., the syndrome construction always leaks a fixed linear function of w.
Q: Can S(·) leak no information at all?
A: Depends on the definition of "no info".
[Diagram: Alice sends a randomized "sketch" S(w); Bob recovers w from S(w) and w′.]
Slide 14: Standard Notions of "No Info" Do Not Fit
Two typical definitions:
1. Shannon secrecy: W and S(W) are (close to) independent random variables. Impossible: to correct errors, S(W) must carry mutual information about W.
2. Semantic security: for all values w_0, w_1, the distributions are computationally indistinguishable: S(w_0) ≈_PPT S(w_1). Impossible: if w_0 and w_1 are close, Eve can run Bob's algorithm on S(w_b) and w_0 to recover w_b.
Slide 15: Entropic Security [CMR'98, RW'02]
Standard definitions look at low-entropy distributions.
Entropic security: if Eve is uncertain about W, then for any fixed function f, Eve learns nothing about f(W).
- [CMR'98]: hash functions which hide all functions of the input
- [RW'02]: information-theoretic encryption with a short key
Measure: min-entropy H∞(W) = -log(max_w Pr[W = w]).
If W is uniform over a set of size 2^t, then H∞(W) = t.
H∞(W) ≥ t ⇒ W is a "t-source".
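The min-entropy definition is easy to evaluate directly; a small helper of ours for explicit distributions:

```python
from math import log2

def min_entropy(probs):               # H_inf(W) = -log2(max_w Pr[W = w])
    return -log2(max(probs))

assert min_entropy([1 / 8] * 8) == 3.0    # uniform on 2^3 points: a 3-source
print(min_entropy([0.5, 0.25, 0.25]))     # 1.0: set by the heaviest point only
```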
Slide 16: Entropic Security [CMR'98, RW'02]
Definition: S(·) is (t, ε)-entropically secure if for every t-source W and every predictor there is a simulator such that, for all functions f,
| Pr[Predictor(S(W)) = f(W)] - Pr[Simulator() = f(W)] | ≤ ε.
Main Theorem (Hamming distance on n-bit strings): If t = Ω(n), then there exists S(·) such that
- S(·) corrects τ = Ω(n) errors
- S(·) has leakage ε = 2^(-Ω(n))
Slide 17: Tool: Connection to Randomness Extraction
Lemma [DS'05]: S(·) is (t, ε)-entropically secure ⇔ S(·) is (t-1, ε′)-indistinguishable.
S(·) is (t, ε)-indistinguishable if for all t-sources W_1, W_2: S(W_1) ≈_ε S(W_2) (statistically indistinguishable distributions).
Randomness extractor: the same definition with S(W_1) ≈ S(W_2) ≈ uniform.
So it is sufficient to construct randomness extractors which can correct errors in the input.
Slide 18: Error-Correcting Extractors
Main Theorem (restated): If entropy t = Ω(n), then there exists an extractor S(·) such that
- the output allows correcting τ = Ω(n) errors efficiently
- error ε = 2^(-Ω(n)); in particular, H∞(W | S(W)) = Ω(n)
[Diagram: S(w), computed with random coins, is ≈ uniform; Bob recovers w from S(w) and w′ (noise of ≤ τ flipped bits).]
Slide 19: Intuition: Randomized Code Offset
Pick a random-looking family of codes {ECC_i : i ∈ I} and set
S(w) = (i, w ⊕ ECC_i(R)), equivalently S(w) = (i, syn_i(w)).
For each fixed code ECC_i, some function of w is leaked; but for w ← W of high entropy and every fixed f,
Pr_{i←I}[ syn_i(w) gives info about f(w) ] ≤ ε.
Equivalently: E_{i←I}[ statistical distance of syn_i(W) from uniform ] ≤ ε.
Slide 21: Error-Correcting Extractors
Main Theorem (restated): If entropy t = Ω(n), then there exists an extractor S(·) such that
- the output allows correcting τ = Ω(n) errors efficiently
- error ε = 2^(-Ω(n)); in particular, H∞(W | S(W)) = Ω(n)
[Diagram: as before, but the sketch is now (i, syn_i(w)) for a random code index i.]
Slide 22: Caveat: Cryptographically Weak Definition
"Composition" is delicate:
- Info about w might come from other parts of the protocol
- Example: S_1(w), S_2(w), … eventually reveals w
- This is always true for unbounded adversaries
- Robust construction? Open question
Nevertheless:
- First non-trivial secrecy guarantee
- Secure against unbounded adversaries
- "Composition" can be done if entropy is tracked
- Sufficient for several applications
Slide 23: Outline
- Example of sketch via "code offset"
- Definition of secrecy ("entropic" security)
- Connection to randomness extraction
- Main construction: bias and secrecy; small-bias code families via random binary images
- Applications
Slide 24: Construction Overview
Main idea: twists on the code-offset construction S(w) = w ⊕ ECC(R).
1. "Bias" of random variables: X has small bias ⇒ W ⊕ X ≈ uniform
2. Extend bias to families of random variables: the pair (i, w ⊕ X_i) ≈ uniform if the family {X_i}_{i∈I} has small bias; bias of codes, "evasive" dual codes
3. Construct code families with evasive duals: random binary images of algebraic-geometry codes
Slide 25: Small-Bias Distributions [NN'93]
A useful tool in derandomization, algorithms, …
Say X is distributed over {0,1}^n. How "random-looking" is X? Consider the tests α ⊙ X for α ∈ {0,1}^n, where α ⊙ x = ⊕_j α_j x_j (mod 2):
bias_α(X) = 2(Pr[α ⊙ X = 0] - ½) = E[(-1)^(α ⊙ X)]
X is δ-biased if |bias_α(X)| ≤ δ for all α ≠ 0.
Fact: If X is δ-biased and W is a t-source, then W ⊕ X ≈_ε uniform for ε = δ · 2^((n-t)/2).
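For intuition, the bias can be brute-forced for small n; a sketch of ours, assuming the distribution is given explicitly as (string, probability) pairs:

```python
from itertools import product

def bias(dist, alpha):                # bias_alpha(X) = E[(-1)^(alpha . X)]
    return sum(p * (-1) ** (sum(a & x for a, x in zip(alpha, xs)) % 2)
               for xs, p in dist)

n = 3
uniform = [(xs, 1 / 2 ** n) for xs in product((0, 1), repeat=n)]
for alpha in product((0, 1), repeat=n):
    if any(alpha):                    # the test alpha = 0 is excluded
        assert abs(bias(uniform, alpha)) < 1e-12   # uniform is 0-biased
```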
Slide 26: Using a Code with Small Bias
How can we use bias to make a sketch an extractor? Recall the code offset S(W) = W ⊕ ECC(R), and the
Fact: If X is δ-biased and W is a t-source, then W ⊕ X ≈_ε uniform for ε = δ · 2^((n-t)/2).
If ECC(R) has bias δ = 2^(-Ω(n)), then S(·) is secure!
Random codes have good bias, but no efficient decoding.
(Open) Question: Do there exist sequences of codes with bias 2^(-Ω(n)), minimum distance Ω(n), and poly-time error correction? Since solved by Amir Shpilka. (Parameters?)
Slide 27: Small-Bias Families of Random Variables
Instead, use S(w) = (i, w ⊕ ECC_i(R)).
Extend "bias" to a family {X_i : i ∈ I} of random variables over {0,1}^n. Recall bias_α(X) = E_X[(-1)^(α ⊙ X)].
{X_i : i ∈ I} is δ-biased if E_i[bias_α(X_i)²] ≤ δ² for all α ≠ 0. Typically, the index set is I = {0,1}^poly(n).
Lemma: If {X_i : i ∈ I} is δ-biased and W is a t-source, then (i, W ⊕ X_i) ≈_ε uniform on I × {0,1}^n for ε = δ · 2^((n-t)/2).
Proof: Fourier analysis over {0,1}^n.
Slide 28: Small-Bias Families of Random Variables (cont.)
It is therefore sufficient to construct families of codes {ECC_i : i ∈ I} with exponentially small bias and poly-time decoding.
Slide 29: Small-Bias Families of Linear Codes
Recall bias_α(X) = E_X[(-1)^(α ⊙ X)], and {X_i : i ∈ I} is δ-biased if E_i[bias_α(X_i)²] ≤ δ² for all α ≠ 0.
When is a family of linear codes δ-biased? Let C_i^⊥ = {y : y ⊙ u = 0 for all u ∈ C_i}. For X_i uniform on C_i:
bias_α(C_i) = 1 if α ∈ C_i^⊥ (since α ⊙ u = 0 for all u ∈ C_i), and 0 otherwise (since a non-constant linear function is balanced).
So {C_i : i ∈ I} is δ-biased iff Pr_i[α ∈ C_i^⊥] ≤ δ² for all α ≠ 0.
We need code families with "evasive" duals.
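This dichotomy (bias 1 on the dual, 0 elsewhere) can be verified exhaustively on a tiny code; a sketch with an arbitrary [4,2] generator matrix of our choosing:

```python
from itertools import product

G = [(1, 0, 1, 1), (0, 1, 0, 1)]      # generator matrix of a small linear code
code = {tuple(sum(m[i] * G[i][j] for i in range(2)) % 2 for j in range(4))
        for m in product((0, 1), repeat=2)}

def dot(a, b):                        # inner product over GF(2)
    return sum(x & y for x, y in zip(a, b)) % 2

for alpha in product((0, 1), repeat=4):
    b = sum((-1) ** dot(alpha, c) for c in code) / len(code)
    in_dual = all(dot(alpha, c) == 0 for c in code)
    assert b == (1.0 if in_dual else 0.0)
```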
Slide 30: Construction Overview
Main idea: twists on the code-offset construction S(w) = w ⊕ ECC(R).
1. "Bias" of random variables: X has small bias ⇒ X ⊕ W ≈ uniform (but no known codes with small bias)
2. Extend bias to families of random variables: the pair (i, w ⊕ X_i) ≈ uniform if the family {X_i}_{i∈I} has small bias; bias of codes, "evasive" dual codes
3. Construct code families with evasive duals: random binary images of algebraic-geometry codes
Slide 31: Codes with Evasive Duals
Main Lemma: For all λ < 1, there exists a family of codes {C_i : i ∈ I} such that
- Pr_i[α ∈ C_i^⊥] ≤ ε² = 2^(-λ·n) (for all α ≠ 0)
- each C_i corrects Ω(n) errors in polynomial time
Slide 32: Basic Idea: Permuting a Fixed Code
Fix a known code C_0 with a decoding algorithm. Construct {C_π : π ∈ S_n}: for each permutation π of {1,…,n}, let π(y) = y_π(1),…,y_π(n) for y = y_1,…,y_n ∈ {0,1}^n, and set C_π = {π(c) | c ∈ C_0}.
Each of the codes C_π can decode τ errors: Decode_π(y) = π(Decode_0(π⁻¹(y))).
Q: Are the duals C_π^⊥ evasive?
A: Yes, if C_0^⊥ has no words near 0^n or 1^n.
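A sketch of Decode_π, again with the toy 3-repetition code standing in for the fixed code C_0 (assumption: Decode_0 returns the nearest codeword):

```python
import random

N = 12

def decode0(y):                       # nearest C0-codeword (majority per block)
    m = [int(sum(y[i:i + 3]) >= 2) for i in range(0, len(y), 3)]
    return [b for b in m for _ in range(3)]

def apply_perm(pi, y):                # (pi(y))_j = y_{pi(j)}
    return [y[p] for p in pi]

def decode_pi(pi, y):                 # Decode_pi(y) = pi(Decode_0(pi^{-1}(y)))
    inv = [0] * N
    for j, p in enumerate(pi):
        inv[p] = j
    return apply_perm(pi, decode0(apply_perm(inv, y)))

pi = list(range(N)); random.shuffle(pi)
c = apply_perm(pi, [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0])   # a codeword of C_pi
y = c[:]; y[7] ^= 1                   # one error
assert decode_pi(pi, y) == c
```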
Slide 33: Permuting a Fixed Code: Analysis of Duals
Fix α ∈ {0,1}^n; its "slice" of {0,1}^n is {x : weight(x) = weight(α)}.
What is Pr_π[α ∈ C_π^⊥]? Look at the event "π(α) ∈ C_0^⊥":
Pr[π(α) ∈ C_0^⊥] = #(C_0^⊥ ∩ slice) / #(slice)
This ratio is 2^(-Ω(n)) iff C_0^⊥ has no words near 0^n or 1^n.
[Diagram: the Hamming cube from 0^n to 1^n; α's slice contains no points of C_0^⊥.]
Problem: we don't know explicit codes with these parameters.
Slide 34: Random Binary Images over GF(2^e)
Work over the finite field F = GF(q), q = 2^e, where e is a constant (e.g., 4). Fix an encoding bin: F → {0,1}^e.
Choose a linear code D_0 ⊆ F^m, m = n/e, such that
- D_0 can correct τ = Ω(n) errors in polynomial time
- D_0^⊥ = {v ∈ F^m : v ⊙_F x = 0 for all x ∈ D_0} has minimum distance d′ = Ω(n) in F^m
Algebraic-geometry codes are good choices for D_0.
Big idea: take different binary images of D_0.
Slide 35: Random Binary Images over GF(2^e)
Choose a linear code D_0 ⊆ F^m, m = n/e, such that D_0 corrects τ = Ω(n) errors in polynomial time and D_0^⊥ has minimum distance d′ = Ω(n) in F^m.
Obtain binary codes {C_i : i ∈ (F*)^m}:
- Write i = a_1,…,a_m ∈ F*
- D_i = {(a_1x_1,…,a_mx_m) : x ∈ D_0} ⊆ F^m
- C_i = {bin(y) : y ∈ D_i} ⊆ {0,1}^n
One can also add a permutation on {1,…,m}.
Why does this work?
Slide 36: Final Construction: Random Binary Images
Start with a code D_0 ⊆ F^m, F = GF(2^e), e.g., e = 4. Pick random non-zero scalars a_1,…,a_m. Each coordinate x_j of a codeword x ∈ D_0 is mapped to a_j·x_j (giving a word of D_i ⊆ F^m), and then to bin(a_j·x_j), giving a word of C_i ⊆ {0,1}^n; here bin(a) is the binary representation of a ∈ F.
Since D_0 corrects τ errors (out of m), each C_i corrects τ errors (out of n = m·e) ⇒ C_i corrects a constant fraction of errors, since e = O(1).
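A sketch of just the coordinate-wise scaling and binary-image step, over F = GF(16) with the common irreducible polynomial x⁴ + x + 1; the base code D_0 and its decoder are omitted, and the vector x below is an arbitrary stand-in:

```python
import secrets

def gf16_mul(a, b):                   # multiply in GF(2^4) mod x^4 + x + 1
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0x13                 # 0x13 encodes x^4 + x + 1
    return r

def bin4(a):                          # bin: F -> {0,1}^4, coefficient vector
    return [(a >> b) & 1 for b in (3, 2, 1, 0)]

x = [3, 0, 7, 12]                                  # stand-in codeword of D_0, m = 4
a = [1 + secrets.randbelow(15) for _ in x]         # i = (a_1, ..., a_m), each a_j != 0
c = [bit for aj, xj in zip(a, x) for bit in bin4(gf16_mul(aj, xj))]
print(c)                              # element of C_i in {0,1}^(e*m) = {0,1}^16
```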
Slide 37: Random Binary Images: Analysis of Duals
Fact: There exists bin*: F → {0,1}^e such that C_i^⊥ = bin*(D_i^⊥).
Concretely, a dual codeword x ∈ D_0^⊥ maps coordinate-wise to a_j⁻¹·x_j (giving D_i^⊥ ⊆ F^m) and then to bin*(a_j⁻¹·x_j), giving C_i^⊥ ⊆ {0,1}^n, with the same random non-zero scalars.
The analysis is similar to the case of the permuted binary code.
Slide 38: Random Binary Images: Analysis of Duals
Claim: For all α ∈ {0,1}^n, α ≠ 0: Pr_i[α ∈ C_i^⊥] ≤ (q-1)^-(d′-1).
(Recall q = 2^e = O(1) and d′ = mindist(D_0^⊥) = Ω(n).)
Proof: Let β = (bin*)⁻¹(α) and consider the "slice" S = {u ∈ F^m : support(u) = support(β)} = (*,*,…,*,0,0,…,0).
The pre-image of α is (a_1⁻¹β_1,…,a_m⁻¹β_m), which is uniform in S, so Pr_i[α ∈ C_i^⊥] = #(S ∩ D_0^⊥) / #S.
Break S into blocks by fixing all but d′ entries: block = (*,…,*,b_1,…,b_{wt(β)-d′},0,…,0). Each block contains ≤ q-1 vectors of D_0^⊥ (since the distance is d′), so the fraction of dual codewords is at most (q-1)/(q-1)^d′ = (q-1)^-(d′-1). QED
Slide 39: Codes with Evasive Duals
By choosing D_0 from an appropriate sequence of algebraic-geometry codes, we obtain the
Main Lemma: For all λ < 1, there exists a family of codes {C_i : i ∈ I} such that
- Pr_i[α ∈ C_i^⊥] ≤ ε² = 2^(-λ·n) (for all α ≠ 0)
- each C_i decodes Ω(n) errors in polynomial time
Slide 40: Construction Overview (recap)
Main idea: twists on the code-offset construction S(w) = w ⊕ ECC(R).
1. "Bias" of random variables: X has small bias ⇒ X ⊕ W ≈ uniform (but no known codes with small bias)
2. Extend bias to families of random variables: the pair (i, w ⊕ X_i) ≈ uniform if the family {X_i}_{i∈I} has small bias; bias of codes, "evasive" dual codes
3. Construct code families with evasive duals: random binary images of algebraic-geometry codes
Slide 41: Error-Correcting Extractors (recap)
Main Theorem (restated): If t = Ω(n), then there exists an extractor S(·) such that the output allows correcting τ = Ω(n) errors efficiently, with error ε = 2^(-Ω(n)); in particular, H∞(W | S(W)) = Ω(n).
Slide 42: Error-Correcting Extractors
Same theorem, with the sketch now written as (i, syn_i(w)).
We want to optimize the extraction error, and we want to minimize the output length!
Slide 43: Extracting the Remaining Randomness
Combine with an (ordinary) strong extractor Ext, with its own seed R, to get an extractor with two outputs and error ε = 2^(-Ω(n)):
- the second output, (i, syn_i(w)), allows correcting τ = Ω(n) errors efficiently
- the first output, Ext(w; R), has length Ω(n)
- the outputs are jointly random
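A sketch of the two-output shape only: a single [7,4] Hamming syndrome stands in for the syndrome part (the paper draws a random code from the family), and a seeded random GF(2) linear map plays the strong extractor, which is one standard choice; the sizes are toy values of ours:

```python
import secrets

n, out_len = 7, 3                     # toy sizes; the paper needs Omega(n) output

H = [[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)]   # Hamming [7,4] check

def syn(w):                           # stand-in for syn_i(w) (no random index here)
    return [sum(h[j] * w[j] for j in range(7)) % 2 for h in H]

def ext(w, seed_rows):                # strong extractor: random GF(2) linear map
    return [sum(r & b for r, b in zip(row, w)) % 2 for row in seed_rows]

w = [secrets.randbelow(2) for _ in range(n)]
seed = [[secrets.randbelow(2) for _ in range(n)] for _ in range(out_len)]
print(ext(w, seed), syn(w))           # first output: extracted key; second: syndrome
```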
Slide 44: Outline
- Example of sketch via "code offset"
- Definition of secrecy ("entropic" security)
- Connection to randomness extraction
- Main construction
- Applications (in paper): correcting errors in the "bounded storage" model; "fuzzy extractors" for noisy passwords; weak obfuscation of proximity queries
Slide 45: Noisy Passwords
Correct errors in a noisy cryptographic key; here Bob = (Alice at a later point in time).
Say Alice's key is her iris scan w. Alice stores the sketch S(w) "in the clear"; later, she uses the sketch to correct errors in a fresh scan. Small entropy loss ⇒ she can still derive a secure key.
Problem: parts of w itself may be leaked by the sketch. E.g., what if the iris scan indicates diabetes?
Solution: use an entropically-secure sketch, so that the leaked bits are not "meaningful".
Slide 46: Weak Code Obfuscation
Obfuscation: given a circuit C, find a distribution on circuits C′ such that
- C′(x) = C(x) for all x
- C′ reveals "nothing else" about C: knowing C′ ≈ oracle access to C
[BGIRSV'01]: general obfuscation is impossible.
[CMR'98]: weak obfuscation of "point" queries C(y) = I_w(y).
[This paper]: weak obfuscation of "proximity" queries C_w(y) = w if y is close to w, ⊥ otherwise.
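To see why a secure sketch yields such a circuit, here is a toy construction of ours: store S(w) plus a hash of w; on input y, run Bob's recovery and release w only if the hash verifies. The 3-repetition code and the SHA-256 check are stand-ins for the paper's actual codes and analysis:

```python
import hashlib, secrets

K = 4                                 # toy 3-repetition code; n = 3K = 12
ecc = lambda r: [b for b in r for _ in range(3)]
dec = lambda y: [int(sum(y[i:i + 3]) >= 2) for i in range(0, len(y), 3)]
xor = lambda a, b: [p ^ q for p, q in zip(a, b)]

def obfuscate_proximity(w):
    s = xor(w, ecc([secrets.randbelow(2) for _ in range(K)]))  # code-offset sketch
    tag = hashlib.sha256(bytes(w)).digest()

    def C(y):                         # C_w(y) = w if y is close to w, None otherwise
        cand = xor(s, ecc(dec(xor(y, s))))                     # Bob's recovery on y
        return cand if hashlib.sha256(bytes(cand)).digest() == tag else None
    return C

w = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
C = obfuscate_proximity(w)
y = w[:]; y[2] ^= 1                   # within distance: one flipped bit
assert C(y) == w and C([0] * 12) is None
```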
Slide 47: Conclusion
- Introduced secrecy for reconciliation protocols
- Constructions which are "asymptotically good"
- Applications in several contexts
Questions: Improved parameters? More robust definitions? Are these ideas useful in other contexts?
Slide 48: Open Question: Improved Parameters
The current construction doesn't handle all error rates, and is not optimal for any error rate.
Non-poly-time constructions (random linear codes) handle up to τ < n/2 errors, with
- length of sketch = n·h(τ/n) (h = binary entropy function)
- log(1/ε) = (t - length)/2
Can we match this?
Slide 49: Open Question: Multi-Time Secure Sketch
Consider a sequence S_1(w), S_2(w), …
Our construction reveals w after O(1) uses (when τ = Ω(n)). This is always true for an unbounded adversary.
Robust construction? "Best" definition? Possibly unattainable (code obfuscation).
Short-term goal: S(w) correcting Ω(n) errors such that if W, W_1,…,W_k are uniform in {0,1}^n, then
S_1(W), S_2(W), …, S_k(W) ≈_PPT S_1(W_1), S_2(W_2), …, S_k(W_k).
Slide 50: Questions?