
1 Foundations of Privacy, Informal Lecture: Impossibility of Disclosure Prevention, or The Case for Differential Privacy. Lecturer: Moni Naor. Joint work with Cynthia Dwork.

2 Let’s Talk About Sex: Better Privacy Means Better Data

3 Private Data Analysis Simple Counts and Correlations –Was there a significant rise in asthma emergency room cases this month? –What is the correlation between new HIV infections and crystal meth usage? Holistic Statistics –Are the data inherently low-dimensional? Collaborative filtering for movie recommendations Beyond Statistics: Private Data Analysis –How far is the (proprietary) network from bipartite? …while preserving privacy of individuals

4 Different from SFE. Secure Function Evaluation: participants collaboratively compute a function f of their private inputs, e.g., s = sum(a, b, c, …). Each player learns only what can be deduced from s and her own input to f. Miracle of Modern Science! But SFE does not imply privacy: privacy is only ensured “modulo f” – if s and a yield b, so be it.

5 Cryptographic Rigor Applied to Privacy Define a Break of the System –What is a “win” for the adversary? –May settle for partial information Specify the Power of the Adversary –Computational power? “Auxiliary” information? Conservative/Paranoid by Nature –All breaks are forever –Protect against all feasible attacks

6 Dalenius, 1977: Anything that can be learned about a respondent from the statistical database can be learned without access to the database. – Captures the possibility that “I” may be an extrovert. – The database doesn’t leak personal information. – The adversary is a user. Analogous to Semantic Security for crypto [Goldwasser-Micali 1982]: anything that can be learned from the ciphertext can be learned without the ciphertext; there the adversary is an eavesdropper.

7 Outline The Framework A General Impossibility Result –Dalenius’ goal cannot be achieved The Proof –Simplified –General case

8 Two Models. Non-Interactive: data are sanitized and released. [Diagram: Database → San → Sanitized Database, queried by the user.]

9 Two Models. Interactive: multiple queries, adaptively chosen. [Diagram: the user interacts with San, which sits in front of the Database.]

10 Auxiliary Information. Common theme in many privacy horror stories: not taking side information into account. – Netflix challenge: not taking IMDb into account [Narayanan-Shmatikov].

11 Not Learning from DB. Compare an adversary A with access to the database via San (plus auxiliary information) to a simulator A’ without access to the database (auxiliary information only). There is some utility of DB that a legitimate user should be able to learn; there is also a possible breach of privacy. Goal: users learn the utility without the breach.

12 Not Learning from DB. Want: anything that can be learned about an individual from the statistical database can be learned without access to the database. ∀ D ∀ A ∃ A’ such that, with high probability over DB ∈R D and for all auxiliary information z, |Pr[A(z) ↔ DB wins] − Pr[A’(z) wins]| is small.

13 Illustrative Example for Impossibility. Want: anything that can be learned about a respondent from the statistical database can be learned without access to the database. More formally: ∀ D ∀ A ∃ A’ such that, with high probability over DB ∈R D and for all auxiliary information z, |Pr[A(z) ↔ DB wins] − Pr[A’(z) wins]| is small. Example: aux z = “Kobi Oz is 10 cm shorter than the average in DB”. A learns the average height in DB, hence also Kobi’s height; A’ does not. Impossibility requires utility: the mechanism must convey information about DB that is not predictable by someone without access. The “hint generator” and A share a secret, unknown to A’.

14 Defining “Win”: The Compromise Function. Notion of a privacy compromise: the adversary outputs a string y, and a decider takes DB and y and outputs 0/1 (is y a privacy breach of DB?). A privacy compromise should be non-trivial: it should not be possible to find a privacy breach from the auxiliary information alone. A privacy breach should exist: given DB there should be a y that is a privacy breach, and it should be possible to find such a y.

15 Additional Basic Concepts. Distribution D on (finite) databases: something about the database must be unknown; captures knowledge about the domain, e.g., rows of the database correspond to owners of 2 pets. Privacy mechanism San(D, DB): can be interactive or non-interactive; may have access to the distribution D. Auxiliary information generator AuxGen(D, DB): has access to the distribution and to DB; formalizes partial knowledge about DB. Utility vector w: answers to k questions about the DB; (most of) the utility vector can be learned by the user; utility must inherit sufficient min-entropy from the source D.

16 Impossibility Theorem. Fix any useful* privacy mechanism San and any reasonable privacy compromise decider C. Then there is an auxiliary information generator AuxGen and an adversary A such that for “all” distributions D and all adversary simulators A’: Pr[A(D, San(D, DB), AuxGen(D, DB)) wins] − Pr[A’(D, AuxGen(D, DB)) wins] ≥ Δ for a suitably large Δ. The probability spaces are over the choice of DB ∈R D and the coin flips of San, AuxGen, A, and A’. (*Useful: tells us information we did not know; to completely specify this we need an assumption on the entropy of the utility vector.)

17 Strategy. The auxiliary information generator will provide a hint that, together with the utility vector w, yields the privacy breach. Want AuxGen to work without knowing D, just DB: find a privacy breach y and encode it in z; make sure z alone does not give y, only together with w. Complication: is the utility vector w learned completely by the user, or only approximately?

18 Entropy of Random Sources. Source: a probability distribution X on {0,1}^n; contains some “randomness”. Measures of “randomness”: Shannon entropy H(X) = − Σ_x P_X(x) log P_X(x), which represents how much we can compress X on average; but even a high-entropy source may have a point with probability 0.9. Min-entropy H_min(X) = − log max_x P_X(x), which is determined by the most likely value of X.
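A tiny illustration (my own, not from the slides) of the gap between the two measures, using a source where one point has probability 0.9:

    import math

    def shannon_entropy(probs):
        # H(X) = - sum_x P(x) * log2 P(x)
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def min_entropy(probs):
        # H_min(X) = - log2 max_x P(x)
        return -math.log2(max(probs))

    # One point has probability 0.9; the remaining 0.1 is spread over 1023 points.
    probs = [0.9] + [0.1 / 1023] * 1023
    print(shannon_entropy(probs))   # about 1.5 bits of Shannon entropy
    print(min_entropy(probs))       # about 0.15 bits of min-entropy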

19 Min-entropy. Definition: X is a k-source if H_min(X) ≥ k, i.e., Pr[X = x] ≤ 2^(−k) for all x. Examples: bit-fixing sources (some k coordinates of X are uniform, the rest fixed or even depending arbitrarily on the others); unpredictable sources (∀ i ∈ [n] and b_1, …, b_(i−1) ∈ {0,1}: k/n ≤ Pr[X_i = 1 | X_1, …, X_(i−1) = b_1, …, b_(i−1)] ≤ 1 − k/n); flat k-sources (uniform over a set S ⊆ {0,1}^n with |S| = 2^k). Fact: every k-source is a convex combination of flat ones.

20 Extractors. A universal procedure for “purifying” an imperfect source. Definition: a function Ext : {0,1}^n × {0,1}^d → {0,1}^m is a (k, ε)-extractor if for all k-sources X, Ext(X, U_d) is ε-close to U_m. [Diagram: a sample x from a k-source of length n, together with a d-bit random seed s, is fed to Ext, which outputs m almost-uniform bits.]

21 Strong extractors. The output looks random even after seeing the seed. Definition: Ext is a (k, ε) strong extractor if Ext’(x, s) = s ∘ Ext(x, s) is a (k, ε)-extractor; i.e., for all k-sources X and for a 1 − ε’ fraction of seeds s ∈ {0,1}^d, Ext(X, s) is ε-close to U_m.

22 Extractors from Hash Functions. Leftover Hash Lemma [ILL89]: universal (pairwise independent) hash functions yield strong extractors, with output length m = k − O(1) and seed length d = O(n). Example: Ext(x, (a, b)) = first m bits of a·x + b in GF(2^n). Almost pairwise independence [SZ94, GW94] reduces the seed length to d = O(log n + k).
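A minimal sketch (mine, not from the lecture) of the a·x + b extractor above, instantiated over GF(2^8) with the reduction polynomial x^8 + x^4 + x^3 + x + 1 just to keep the arithmetic small; the lemma itself concerns GF(2^n) for n-bit sources:

    import os

    def gf_mult(a, b):
        # Multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B).
        result = 0
        for _ in range(8):
            if b & 1:
                result ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
        return result

    def ext(x, seed, m):
        # Ext(x, (a, b)) = first m (high-order) bits of a*x + b in GF(2^8).
        a, b = seed
        return (gf_mult(a, x) ^ b) >> (8 - m)

    x = 0xC3                                      # one sample from the imperfect source
    seed = (os.urandom(1)[0], os.urandom(1)[0])   # the short, truly random seed (a, b)
    print(ext(x, seed, m=4))                      # 4 nearly uniform output bits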

23 Suppose w Is Learned Completely. AuxGen and A share a secret: w. AuxGen(DB): find a privacy breach y of DB; find w from DB by simulating A; choose s ∈R {0,1}^d and compute Ext(w, s); set z = (s, Ext(w, s) ⊕ y). [Diagram: DB feeds San, which gives A the utility vector w, and AuxGen, which gives A the hint z; A’s output goes to the compromise decider C (0/1).]
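A small end-to-end sketch of this step (my own code; the extractor is replaced by a keyed hash purely for illustration, and the names aux_gen and adversary are mine):

    import os, hashlib

    def ext(w, s, m):
        # Stand-in for a strong extractor: a keyed hash of w under seed s, m bytes out.
        return hashlib.blake2b(w, key=s, digest_size=m).digest()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def aux_gen(w, y):
        # AuxGen: hide the breach y using randomness extracted from w.
        s = os.urandom(16)
        return (s, xor(ext(w, s, len(y)), y))     # z = (s, Ext(w, s) XOR y)

    def adversary(w, z):
        # A: knows w (learned from San), so it can recompute Ext(w, s) and unmask y.
        s, masked = z
        return xor(ext(w, s, len(masked)), masked)

    w = b"utility vector learned from San"
    y = b"privacy breach"
    assert adversary(w, aux_gen(w, y)) == y
    # A' sees only z: without w, (s, Ext(w, s)) looks uniform, so z hides y.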

24 Suppose w Is Learned Completely. AuxGen and A share a secret: w; z = (s, Ext(w, s) ⊕ y). The simulator A’ gets z but has no access to San. Technical conditions: H_min(W | y) ≥ |y| and |y| is “safe”.

25 Why is it a compromise? AuxGen and A share a secret: w; z = (s, Ext(w, s) ⊕ y). Why doesn’t A’ learn y: for each possible value of y, (s, Ext(w, s)) is ε-close to uniform, hence (s, Ext(w, s) ⊕ y) is ε-close to uniform. Technical conditions: H_min(W | y) ≥ |y| and |y| is “safe”.

26 w Need Not Be Learned Completely. Relaxed utility: something close to w is learned. AuxGen(D, DB) does not know exactly what A will learn, so we need something close to w to produce the same extracted randomness as w; ordinary extractors offer no such guarantee. Fuzzy extractors (m, ℓ, t, ε) [Dodis, Reyzin and Smith]: a pair (Gen, Rec). Gen(w) outputs an extracted string r ∈ {0,1}^ℓ and a public string p; for any distribution W of min-entropy at least m, (R, P) ← Gen(W) implies (R, P) and (U_ℓ, P) are within statistical distance ε. Rec(p, w*) reconstructs r given p and any w* sufficiently close to w: (r, p) ← Gen(w) and ||w − w*||_0 ≤ t imply Rec(w*, p) = r.

27 Construction Based on ECC. Error-correcting code ECC: any two codewords differ in at least 2t bits. Gen(w): p = w ⊕ ECC(r’), where r’ is random and r is extracted from r’. Given p and a w’ close to w: compute w’ ⊕ p, decode to recover ECC(r’), and extract r from r’. [Diagram: codewords are at least 2t apart; w and w’, being close, both map under ⊕p into the same decoding ball around ECC(r’).]
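A minimal sketch (my own) of this code-offset construction, using a 5-fold repetition code as the ECC and, for brevity, taking r to be r’ itself rather than extracting further:

    import random

    def ecc_encode(r):
        # 5-fold repetition code: any two distinct codewords differ in at least 5 bits.
        return [b for b in r for _ in range(5)]

    def ecc_decode(c):
        # Majority vote within each block of 5 corrects up to 2 flipped bits per block.
        return [int(sum(c[i:i + 5]) >= 3) for i in range(0, len(c), 5)]

    def xor(a, b):
        return [x ^ y for x, y in zip(a, b)]

    def gen(w):
        r = [random.randint(0, 1) for _ in range(len(w) // 5)]
        p = xor(w, ecc_encode(r))            # public helper string p = w XOR ECC(r')
        return r, p

    def rec(p, w_star):
        return ecc_decode(xor(w_star, p))    # w* XOR p is close to ECC(r'); decode it

    w = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]       # the "true" w (10 bits)
    r, p = gen(w)
    w_star = w[:]; w_star[3] ^= 1; w_star[8] ^= 1   # w* differs from w in 2 positions
    assert rec(p, w_star) == r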

28 w Need Not Be Learned Completely. Idea: (r, p) ← Gen(w); set z = (p, r ⊕ y). A reconstructs r from a w* close to w; r looks almost uniform to A’ even given p. Problem: p itself leaks information about w and might disclose a different privacy breach y’. Solution: AuxGen interacts with San(DB) to learn a “safe” w’, computes (r, p) ← Gen(w’), and sets z = (p, r ⊕ y). The w’’ learned by A and the w’ learned by AuxGen are both sufficiently close to w, hence close to each other, so A(w’’, p) can reconstruct r. By assumption the safe w’ should not itself yield a breach; let γ be the bound on the probability of such a breach.
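An end-to-end sketch (mine, self-contained here but built on the same repetition-code Gen/Rec idea as above) of the relaxed protocol: AuxGen masks y using the r it derives from its view w’, and A recovers y from its own slightly different view w’’:

    import random

    def encode(r):  return [b for b in r for _ in range(5)]
    def decode(c):  return [int(sum(c[i:i + 5]) >= 3) for i in range(0, len(c), 5)]
    def xor(a, b):  return [x ^ y for x, y in zip(a, b)]

    def gen(w):
        r = [random.randint(0, 1) for _ in range(len(w) // 5)]
        return r, xor(w, encode(r))              # (r, p)

    def rec(p, w_star):
        return decode(xor(w_star, p))            # recover r from p and any close w*

    w_true = [random.randint(0, 1) for _ in range(20)]   # the underlying utility vector
    w_aux  = w_true[:]; w_aux[0]  ^= 1           # w' : AuxGen's approximation of w
    w_adv  = w_true[:]; w_adv[11] ^= 1           # w'': A's approximation of w
    y      = [1, 0, 1, 1]                        # the privacy breach found by AuxGen

    r, p = gen(w_aux)                            # AuxGen: (r, p) = Gen(w')
    z = (p, xor(r, y))                           #         z = (p, r XOR y)

    p, masked = z                                # A: r = Rec(p, w''), then unmask y
    assert xor(rec(p, w_adv), masked) == y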

29 w Need Not Be Learned Completely. AuxGen and A share a secret: r. AuxGen interacts with San(DB) to learn w’, computes (p, r) = Gen(w’), and sets z = (p, r ⊕ y). A learns w’’ from San(DB) and computes r = Rec(p, w’’). r is almost uniform even given p, and p should not be disclosive. The simulator A’ gets only z.

30 w Need Not Be Learned Completely. As before: (p, r) = Gen(w’), z = (p, r ⊕ y), and A computes r = Rec(p, w’’); r is almost uniform given p, and p should not be disclosive. Pr[A’(z) wins] ≤ Pr[A ↔ San(D, DB) wins] + ε ≤ γ + ε.

31 w Need Not Be Learned Completely. Need extra min-entropy: H_min(W | y) ≥ ℓ + |p|. Pr[A’(z) wins] ≤ Pr[A ↔ San(D, DB) wins] + ε ≤ γ + ε.

32 Two Remarkable Aspects. (1) Works even if Kobi Oz is not in the database! This motivates a definition based on the increased risk incurred by joining the database: the risk to Kobi if he is in the database vs. the risk to Kobi if he is not, rather than what can be learned about Kobi with vs. without DB access. (2) Dalenius’ goal is impossible, yet semantic security is possible, although the definitions are similar. The difference is resolved by utility: here the adversary is a user. Without auxiliary information, the user must learn something from the mechanism while the simulator learns nothing; an eavesdropper, by contrast, should learn nothing, just like the simulator. (What about SFE? There the comparison is to an ideal party.) This leads to Differential Privacy.

33 Possible answer: Differential Privacy. Is there a noticeable relative shift between K(DB − Me) and K(DB + Me)? If not, then no perceptible risk is incurred by joining DB: anything the adversary can do, it could do without Me. [Figure: Pr[response] over the possible responses, with the bad responses marked XXX.]
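A brief illustration of the “no noticeable relative shift” idea (my own sketch; the Laplace mechanism used here is one standard way to achieve it and is not described on this slide, and all names and numbers are made up):

    import math, random

    def laplace_count(true_count, eps):
        # Answer a count query with Laplace(1/eps) noise (a count has sensitivity 1);
        # the noise is sampled as the difference of two exponentials.
        noise = (random.expovariate(1.0) - random.expovariate(1.0)) / eps
        return true_count + noise

    def density(response, true_count, eps):
        # Density of the noisy response when the true count is true_count.
        return (eps / 2.0) * math.exp(-eps * abs(response - true_count))

    eps = 0.1
    without_me, with_me = 41, 42            # K(DB - Me) vs. K(DB + Me): counts differ by 1
    response = laplace_count(with_me, eps)
    shift = density(response, with_me, eps) / density(response, without_me, eps)
    assert shift <= math.exp(eps) + 1e-9    # every response is at most e^eps more likely with Me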

