1 Leonid Reyzin, May 23, 2011, 5th International Conference on Information Theoretic Security: Minentropy and its Variations for Cryptography

2 guessability and entropy: There are many ways to measure entropy. If I want to guess your password, which entropy do I care about? This talk: minentropy = −log(Pr[adversary predicts sample]), i.e., H∞(W) = −log max_w Pr[w].
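To make the definition concrete, here is a minimal sketch in Python; the password distribution is invented for illustration:

```python
import math

def min_entropy(dist):
    """H_inf(W) = -log2(max_w Pr[w]): an optimal single guess
    succeeds with probability exactly 2^(-H_inf(W))."""
    return -math.log2(max(dist.values()))

# a toy password distribution: a few popular passwords, the rest uniform
popular = {"123456": 0.05, "password": 0.03, "qwerty": 0.01}
rest = (1 - sum(popular.values())) / 9_000
dist = dict(popular, **{f"pw{i}": rest for i in range(9_000)})

print(min_entropy(dist))  # ~4.32 bits: guess "123456" first
```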

3 what is minentropy good for? Passwords; message authentication.

4 what is minentropy good for? Passwords; message authentication [Wegman-Carter '81]: the key is w = (a, b) with |a| = |b| = n/2, and MAC_{a,b}(m) = σ = am + b, where m, a, b ∈ GF(2^{n/2}).

5 what is minentropy good for? Passwords; message authentication [Maurer-Wolf '03]: let |(a, b)| = n and H∞(a, b) = k, and call g = n − k the "entropy gap". With MAC_{a,b}(m) = σ = am + b, the security is k − n/2 = n/2 − g bits.
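A toy sketch of this MAC in Python, with k = 8 and the AES field polynomial standing in for GF(2^{n/2}); real parameters would use a much larger field, and the key values here are made up:

```python
IRRED = 0x11B  # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)
K = 8          # toy field size; the slide's construction uses k = n/2

def gf_mul(x, y):
    """Multiply x and y in GF(2^K): carry-less product, then reduce."""
    r = 0
    for i in range(K):
        if (y >> i) & 1:
            r ^= x << i
    for i in range(2 * K - 2, K - 1, -1):  # reduce the degree-(2K-2) product
        if (r >> i) & 1:
            r ^= IRRED << (i - K)
    return r

def mac(key, m):
    a, b = key
    return gf_mul(a, m) ^ b  # sigma = a*m + b; addition in GF(2^K) is XOR

key = (0x57, 0xA3)  # the secret w = (a, b), n = 2K bits total
print(hex(mac(key, 0x42)))
# Forging a tag for m' != m from one (m, sigma) pair requires guessing
# a*(m' - m), which succeeds w.p. 2^(-K) for a uniform key; an entropy
# gap of g costs g bits of security [Maurer-Wolf '03].
```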

6 what is minentropy good for? Passwords; message authentication; secret key extraction (→ encryption, etc.) [Bennett-Brassard-Robert '85, Impagliazzo-Levin-Luby '89, Nisan-Zuckerman '93]: from w with minentropy k and a uniform (and reusable) seed i, R = Ext(w; i) is jointly uniform with the seed (ε-close).
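A minimal sketch of such extraction via a 2-universal hash, a standard instantiation justified by the leftover hash lemma; the prime and lengths below are illustrative choices:

```python
import secrets

P = (1 << 61) - 1   # a Mersenne prime, so 60-bit inputs fit in Z_P
L = 16              # output length; the LHL needs L <= k - 2*log2(1/eps)

def ext(w, seed):
    """Ext(w; (a,b)) = ((a*w + b) mod P) mod 2^L, a 2-universal family."""
    a, b = seed
    return ((a * w + b) % P) % (1 << L)

seed = (secrets.randbelow(P - 1) + 1, secrets.randbelow(P))  # public, reusable
w = secrets.randbits(60)   # stand-in for a sample with minentropy k
print(ext(w, seed))        # eps-close to uniform whenever k >= L + 2*log2(1/eps)
```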

7 is it good for privacy amplification? Goal: from a partially secret w, Alice and Bob agree on a uniform secret R [Bennett-Brassard-Robert '85]; Eve knows something about w. Simple solution: use an extractor: Alice sends a seed i and both compute R = Ext(w; i). But wait! What is the right value of H∞(w)? It depends on Eve's knowledge Y. So how do we know what Ext to apply?

8 defining conditional entropy H∞(W | Y). Recall: minentropy = −log(predictability), H∞(W) = −log max_w Pr[w]. So what is the probability of predicting W given Y? E.g., let W be uniform on n bits and Y = HammingWeight(W). For a fixed value: H∞(W | Y = n/2) ≥ n − ½ log n − 1, since Pr[Y = n/2] > 1/(2√n), but H∞(W | Y = 0) = 0, since Y = 0 reveals W. "Average minentropy" [Dodis-Ostrovsky-R-Smith '04]: H̃∞(W | Y) = −log E_y max_w Pr[w | Y = y]. Note that it averages predictabilities, not minentropies: if the minentropy is 0 half the time and 1000 half the time, you get −log((2^0 + 2^{−1000})/2) ≈ −log(1/2) = 1.
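A quick numeric check of these quantities in Python (toy n; for the Hamming-weight example the average works out exactly to n − log2(n+1)):

```python
import math
from math import comb

n = 20  # W uniform on n bits, Y = HammingWeight(W)

# given Y = y, W is uniform over C(n, y) strings: max_w Pr[w | Y=y] = 1/C(n,y)
avg_pred = sum((comb(n, y) / 2**n) * (1 / comb(n, y)) for y in range(n + 1))
print(-math.log2(avg_pred))           # H~_inf(W|Y) = n - log2(n+1), ~15.61
print(math.log2(comb(n, n // 2)))     # H_inf(W | Y = n/2), ~17.5 (typical y)
print(math.log2(comb(n, 0)))          # H_inf(W | Y = 0) = 0 (worst y)

# the 0-or-1000 example: average the predictabilities, not the entropies
print(-math.log2((2**0 + 2**-1000) / 2))  # ~1, not 500
```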

9 what is H̃∞(W | Y) good for? Passwords: the probability of guessing by an adversary who knows Y is 2^{−H̃∞(W | Y)}. Message authentication: if the key is W and the adversary knows Y, security is H̃∞(W | Y) − n/2. Secret key extraction (→ encryption, etc.): all extractors work [Vadhan '11]; if H̃∞(W | Y) = k, then R = Ext(w; i) is jointly uniform given Y.

10 what is H̃∞(W | Y) good for? Passwords: guessing probability 2^{−H̃∞(W | Y)}. Message authentication: security H̃∞(W | Y) − n/2. Secret key extraction: all extractors work [Vadhan '11]. Therefore, privacy amplification: Alice sends a seed i, both compute R = Ext(w; i), and R is close to uniform even given Eve's knowledge of w.

11 what about information reconciliation? Now Bob holds w′, close to but not equal to Alice's w. Alice sends error-correcting information s = S(w) together with the seed i; Bob recovers w = Rec(s, w′), and both compute R = Ext(w; i). How long an R can you extract? It depends on H̃∞(W | Y, S)! Lemma: H̃∞(W | Y, S) ≥ H̃∞(W, S | Y) − bit-length(S).

12 how to build S? Use a code C: {0,1}^m → {0,1}^n, which encodes m-bit messages into n-bit codewords such that any two codewords differ in at least d locations; then fewer than d/2 errors allow unique correct decoding.

13 how to build S? Idea: if w were a codeword in an ECC, decoding would find w from w′. If w is not a codeword, simply shift the ECC: let S(w) be the shift taking w to a random codeword. [Juels-Wattenberg '02]: s = w ⊕ ECC(r) for random r. Recover: w = dec(w′ ⊕ s) ⊕ s.
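A toy sketch of the code-offset construction, with a 3x repetition code standing in for the ECC (it corrects one flipped bit per 3-bit block; any code with the right distance would do):

```python
import secrets

def encode(r):   # repetition ECC: m bits -> n = 3m bits, distance d = 3
    return [b for b in r for _ in range(3)]

def decode(c):   # majority vote per block: corrects < d/2 errors per block
    return [int(sum(c[i:i + 3]) >= 2) for i in range(0, len(c), 3)]

def sketch(w):   # s = w XOR ECC(r) for a fresh random r  [Juels-Wattenberg '02]
    r = [secrets.randbits(1) for _ in range(len(w) // 3)]
    return [wi ^ ci for wi, ci in zip(w, encode(r))]

def recover(w_prime, s):  # dec(w' XOR s) XOR s
    shifted = [a ^ b for a, b in zip(w_prime, s)]
    return [a ^ b for a, b in zip(encode(decode(shifted)), s)]

w       = [1, 0, 1, 1, 1, 0]   # Alice's reading (m = 2, n = 6)
w_prime = [1, 0, 1, 0, 1, 0]   # Bob's reading, one error
s = sketch(w)
assert recover(w_prime, s) == w
# entropy loss: bit-length(s) - |r| = n - m = 4 bits here (see next slide)
```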

14 what about information reconciliation? With s = w ⊕ ECC(r) for uniform r ∈ {0,1}^m and |s| = n, we have H̃∞(W, S | Y) = H̃∞(W | Y) + |r|, so the lemma H̃∞(W | Y, S) ≥ H̃∞(W, S | Y) − bit-length(S) gives H̃∞(W | Y, S) ≥ H̃∞(W | Y) + m − n. The entropy loss for a code from m bits to n bits is thus n − m.

15 active adversary. Now Eve controls the channel, and Alice and Bob each output either a key R or ⊥. Studied starting in Maurer and Maurer-Wolf 1997; interesting even if w = w′. Basic problem: authenticate the extractor seed i. Problem: if H∞(W | Y) < n/2, w can't be used as a MAC key. Idea [Renner-Wolf 2003]: use interaction, authenticating one bit in two rounds.

16 authenticating a bit b [Renner-Wolf '03]. Bob sends a challenge x; Alice responds with t = Ext_x(w) iff b = 1 (else she just sends 0); Bob accepts 1 if Ext_x(w) is correct. Note: Eve, who can substitute x′ for x and t′ for t, can make Bob's view differ from Alice's view. Claim: Eve can't change a 0 into a 1! Lemma [Kanukurthi-R. '09]: H̃∞(Ext(W; X) | X, Y) ≥ min(|t|, log(1/ε)) − 1, as long as H̃∞(W | Y) is high enough for Ext to ensure quality ε; and we can measure it: each bit authenticated reduces it by |t|. (To prevent a change of 1 to 0, make #0s = #1s.)
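A toy sketch of one authentication round; HMAC stands in for Ext_x (an assumption for the demo only; the actual protocol uses an information-theoretic extractor keyed by the challenge):

```python
import hmac, hashlib, secrets

T_LEN = 4  # |t| in bytes; forging succeeds w.p. ~2^(-8*T_LEN), each use costs |t| bits

def Ext(w, x):  # stand-in extractor: a short tag derived from w and challenge x
    return hmac.new(w, x, hashlib.sha256).digest()[:T_LEN]

def alice_respond(w, x, b):
    return Ext(w, x) if b == 1 else b"\x00" * T_LEN  # real t only when b = 1

def bob_verify(w, x, t):
    return 1 if hmac.compare_digest(t, Ext(w, x)) else 0

w = secrets.token_bytes(32)  # the shared (weak) secret
x = secrets.token_bytes(16)  # Bob's fresh challenge
assert bob_verify(w, x, alice_respond(w, x, 1)) == 1
assert bob_verify(w, x, alice_respond(w, x, 0)) == 0
assert bob_verify(w, x, secrets.token_bytes(T_LEN)) == 0  # Eve can't mint a 1
```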

17 improving entropy loss. (The protocol is as before: challenge x, response t = Ext_x(w) iff b = 1.) Problem: for ε security, |t| ≥ log(1/ε), so each round loses entropy. Getting optimal entropy loss [Chandran-Kanukurthi-Ostrovsky-R '10]: make |t| constant; then Eve can change/insert/delete at most a constant fraction of the bits, so encode whatever you are sending in an edit-distance code [Schulman-Zuckerman '99] of constant rate, correcting a constant fraction of errors.

18 improving entropy loss. How to prove it? Can we use the lemma H̃∞(Ext(W; X) | X, Y) ≥ min(|t|, log(1/ε)) − 1? It talks about the unpredictability of a single value, but says nothing about the independence of two.


20 improving entropy loss. The lemma alone is not enough, so instead: Step 1: H̃∞(W | all variables Eve sees) being sufficient is enough (average entropy is still useful here). Step 2: H∞(W | a specific transcript) is sufficient with high probability. Step 3: H∞(W) at every Ext step is sufficient with high probability (plain minentropy now). Lemma: if H∞(W) is sufficient, then H∞(Ext(W; x) | x) ≥ |t| − 1 with high probability.

21 If (conditional) min-entropy is so useful in information-theoretic crypto, what about computational analogues?

22 computational entropy (HILL). Recall H∞(W) = −log max_w Pr[w]. Add two more parameters, specifying what ≈ means: the maximum size s of the distinguishing circuit D, and the maximum advantage ε with which D may distinguish. HILL entropy [Håstad-Impagliazzo-Levin-Luby]: H^HILL_{ε,s}(W) ≥ k if ∃ Z such that H∞(Z) = k and W ≈_{ε,s} Z.

23 what is HILL entropy good for? H^HILL_{ε,s}(W) ≥ k if ∃ Z such that H∞(Z) = k and W ≈_{ε,s} Z. If W has HILL entropy k, then R = Ext(w; i) looks (ε + εExt)-close to uniform to circuits of size ≈ s (where εExt is the extractor's own error). Many uses: indistinguishability is a powerful notion; in the proofs, substitute Z for W, and a bounded adversary won't notice.

24 what about conditional? Very common: an entropic secret together with the observer's correlated information: g^{ab} | observer knows g^a, g^b; SK | observer knows leakage; Sign_SK(m) | observer knows PK; PRG(x) | observer knows Enc(x).

25 conditioning HILL entropy on a fixed event. Recall: how does conditioning reduce minentropy? By the probability of the condition: H∞(W | Y = y) ≥ H∞(W) − log(1/Pr[y]). E.g., for W uniform on n bits and Y = HammingWeight(W): H∞(W | Y = n/2) ≥ n − ½ log n − 1, since Pr[Y = n/2] > 1/(2√n).
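A quick numeric check of this bound on the Hamming-weight example (toy n; the bound is tight here because W | Y = y is uniform):

```python
import math
from math import comb

n = 20  # W uniform on n bits, Y = HammingWeight(W), so H_inf(W) = n
for y in (0, 1, n // 2):
    pr_y = comb(n, y) / 2**n
    h_cond = math.log2(comb(n, y))      # W | Y=y is uniform on C(n,y) strings
    bound = n - math.log2(1 / pr_y)     # H_inf(W) - log(1/Pr[y])
    assert h_cond >= bound - 1e-9
    print(y, round(h_cond, 2), round(bound, 2))
```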

26 conditioning HILL entropy on a fixed event. Theorem [Fuller-R '11]: the same holds for computational entropy: H^metric*_{ε/Pr[y],s}(W | Y = y) ≥ H^metric*_{ε,s}(W) − log(1/Pr[y]) (a variant of the Dense Model Theorem of [Green-Tao '04, Tao-Ziegler '06, Reingold-Trevisan-Tulsiani-Vadhan '08, Dziembowski-Pietrzak '08]). Warning: this is not H^HILL! It is a weaker entropy notion, with a different Z for each distinguisher ("metric*"): H^metric*_{ε,s}(W) ≥ k if for every distinguisher D there exists Z s.t. H∞(Z) = k and W ≈_{ε,D} Z (moreover, D is limited to deterministic distinguishers). It can be converted to H^HILL with a loss in circuit size s [Barak-Shaltiel-Wigderson '03].

27 conditioning HILL entropy on a fixed event. Long story, but simple message: H^metric*_{ε/Pr[y],s}(W | Y = y) ≥ H^metric*_{ε,s}(W) − log(1/Pr[y]), and metric* can be converted to H^HILL with a loss in circuit size s [Barak-Shaltiel-Wigderson '03].

28 what about conditioning on average? The same examples again: g^{ab} | g^a, g^b; SK | leakage; Sign_SK(m) | PK; PRG(x) | Enc(x). We may not want to reason about specific values of Y. [Hsiao-Lu-R '04]: Def: H^HILL_{ε,s}(W | Y) ≥ k if ∃ Z such that H̃∞(Z | Y) = k and (W, Y) ≈_{ε,s} (Z, Y). Note: W is replaced, but Y isn't. What is it good for? The original purpose was a negative result: computational compression (Yao) entropy can be greater than HILL entropy. It hasn't found many uses because it's hard to measure (but it can be extracted from by reconstructive extractors!).

29 conditioning HILL entropy on average. Recall: if Y ranges over b-bit strings, H̃∞(W | Y) ≥ H∞(W) − b. Average-case version of the Dense Model Theorem: H^metric*_{ε·2^b,s}(W | Y) ≥ H^metric*_{ε,s}(W) − b; it follows from H^metric*_{ε/Pr[y],s}(W | Y = y) ≥ H^metric*_{ε,s}(W) − log(1/Pr[y]). One can work with metric* and then convert to HILL when needed (with a loss in s).
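For completeness, a short derivation of the information-theoretic version of this bit counting (assuming Y takes at most 2^b values):

```latex
\begin{align*}
\widetilde{H}_\infty(W \mid Y)
  &= -\log \sum_y \Pr[Y=y] \max_w \Pr[W=w \mid Y=y]
   = -\log \sum_y \max_w \Pr[W=w \wedge Y=y] \\
  &\ge -\log \sum_y \max_w \Pr[W=w]
   \ge -\log\!\big( 2^b \cdot 2^{-H_\infty(W)} \big)
   = H_\infty(W) - b .
\end{align*}
```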

30 conditioning HILL entropy on average: H^metric*_{ε·2^b,s}(W | Y) ≥ H^metric*_{ε,s}(W) − b.

31 conditioning the conditional. The theorem can be applied multiple times, of course: H^metric*_{ε·2^{b1+b2},s}(W | Y1, Y2) ≥ H^metric*_{ε,s}(W) − b1 − b2 (where the support of Yi has size 2^{bi}). But we can't prove H^metric*_{ε·2^{b2},s}(W | Y1, Y2) ≥ H^metric*_{ε,s}(W | Y1) − b2 (a bad case: W = plaintext and Y1 = PK, because for any given y1, W has no entropy!). Note: Gentry-Wichs '11 implies H^HILL-relaxed_{2ε, s/poly(ε,2^{b2})}(W | Y1, Y2) ≥ H^HILL-relaxed_{ε,s}(W | Y1) − b2, where Defn: H^HILL-relaxed_{ε,s}(W | Y) ≥ k if ∃ (Z, T) such that H̃∞(Z | T) = k and (W, Y) ≈_{ε,s} (Z, T).

32 unpredictability entropy. Why should computational minentropy be defined through indistinguishability? Why not model unpredictability directly? Recall H∞(W) = −log max_w Pr[w]. [Hsiao-Lu-R. '04]: H^Unp_s(W | Z) ≥ k if for all A of size s, Pr[A(Z) = W] ≤ 2^{−k}. Lemma: H^Yao(W | Z) ≥ H^Unp(W | Z) ≥ H^HILL(W | Z). Corollary: reconstructive extractors work for H^Unp. Lemma: H^Unp_s(W | Y1, Y2) ≥ H^Unp_s(W | Y1) − b2.

33 unpredictability entropy: H^Unp_s(W | Z) ≥ k if for all A of size s, Pr[A(Z) = W] ≤ 2^{−k}.

34 what is it good for? H^Unp_s(W | Z) = k if for all A of size s, Pr[A(Z) = W] ≤ 2^{−k}. Examples: Diffie-Hellman: g^{ab} | g^a, g^b; one-way functions: x | f(x) (where H^HILL = 0); signatures: Sign_SK(m) | PK. Why bother? Hardcore-bit results (e.g., [Goldreich-Levin, Ta-Shma-Zuckerman]) are typically stated only for one-way functions, but used everywhere; they are actually reconstructive extractors, so H^Unp(X | Z) plus reconstructive extractors gives a simple language for generalizing them. Also useful for leakage-resilient crypto (assuming strong hardness).

35 the last slide. Minentropy is often the right measure. Conditional entropy is a useful, natural extension, easy to use because of simple bit counting. The computational case is trickier: there are several possible extensions; bit counting sometimes works; some definitions (such as H^Unp) only make sense conditionally; and separations and conversions between the definitions exist. Still, they can simplify proofs!

36 Thank you!