Presentation on theme: "Refined privacy models"— Presentation transcript:

1 Refined privacy models
Data Anonymization: Refined privacy models

2 Outline
The k-anonymity specification is not sufficient
Enhancing privacy: l-diversity and t-closeness

3 Linking the dots… Countering the privacy attack
k-anonymity addresses one type of attack: the linking attack. What about other types of attacks?

4 K-anonymity
K-anonymity tries to counter this type of privacy attack:
Individual -> quasi-identifier -> sensitive attributes
Example: in a 4-anonymized table, at least 4 records share the same quasi-identifier values
Quasi-identifier: the attribute set through which an attacker can link records to individuals using other public data
Typical method: domain-specific generalization
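A minimal sketch (not from the slides) of what a k-anonymity check over generalized quasi-identifiers could look like; the toy records, the age/zip generalization rules, and the function names are illustrative assumptions.

from collections import Counter

# Toy records (age, zipcode, disease); values are made up for illustration.
records = [
    (31, "13053", "cancer"),
    (33, "13053", "cancer"),
    (46, "13068", "flu"),
    (48, "13068", "viral infection"),
]

def generalize(age, zipcode):
    # Domain-specific generalization: bucket ages into decades, mask the last two zip digits.
    decade = (age // 10) * 10
    return (f"{decade}-{decade + 9}", zipcode[:3] + "**")

def is_k_anonymous(records, k):
    # k-anonymity holds if every generalized quasi-identifier group has at least k records.
    groups = Counter(generalize(age, zipcode) for age, zipcode, _ in records)
    return all(count >= k for count in groups.values())

print(is_k_anonymous(records, 2))  # True: ("30-39", "130**") and ("40-49", "130**") each hold 2 records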

5 More Privacy Problems
All existing k-anonymity approaches assume that privacy is protected as long as the k-anonymity specification is satisfied. But there are other problems:
Homogeneity in sensitive attributes
Background knowledge about individuals

6 Problem 1: Homogeneity attack
We know Bob is in the table. If Bob lives in zip code 13053 and he is 31 years old -> Bob surely has cancer, because every record in his q*-block has the same sensitive value.

7 Problem 2: Background knowledge attack
Japanese have an extremely low incidence of heart disease! A Japanese woman, Umeko, lives in zip code 13068 and is 21 years old -> Umeko has a viral infection with high probability.

8 The cause of these two problems
The sensitive-attribute values in some blocks do not have sufficient diversity.
Problem 1: no diversity at all.
Problem 2: background knowledge helps to reduce the effective diversity.

9 Major contributions of l-diversity
Formally analyze the privacy of k-anonymity with the Bayes-optimal privacy model.
Basic idea: increase the diversity of sensitive attribute values within each anonymized block.
Instantiations and implementation of the l-diversity concept:
Entropy l-diversity
Recursive l-diversity
More…

10 Modeling the attacks
What is a privacy attack? Guessing the sensitive values (as a probability).
Prior belief: without seeing the table, what can we guess? S: sensitive attribute, Q: quasi-identifier; prior: P(S=s|Q=q). Example: Japanese vs. heart disease.
Observed belief: after observing the table, the belief changes. T*: anonymized table; observed: P(S=s | Q=q and T* is known).
Effective privacy attacks: the table T* should change the belief a lot!
Prior is small, observed belief is large -> positive disclosure
Prior is large, observed belief is small -> negative disclosure
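In symbols (following the slide's wording; alpha and beta are, to the best of my recollection, the names used in the l-diversity paper):

\[
\alpha_{(q,s)} = P(S = s \mid Q = q) \qquad \text{(prior belief)}
\]
\[
\beta_{(q,s,T^*)} = P\bigl(S = s \mid Q = q,\; T^*\bigr) \qquad \text{(observed belief)}
\]

Positive disclosure corresponds to the observed belief approaching 1, negative disclosure to it approaching 0.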

11 The definition of observed belief
Notation used in the definition: a q*-block is a k-anonymized group of n(q*) records sharing the quasi-identifier value q*; n(q*, s) of those records carry the sensitive value S=s, so n(q*, s)/n(q*) is the proportion of the block with value s; f(s|q) is the background knowledge, i.e. the prior p(S=s|Q=q). The observed belief is computed by combining these block proportions with the prior, as sketched below.
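A sketch (assuming the simple combination the slide describes, not necessarily the paper's exact normalization) of how the observed belief can be computed for one q*-block: weight each sensitive value's count n(q*, s) by the attacker's prior f(s|q) and renormalize. The counts and priors below are made-up numbers echoing the Umeko example.

block_counts = {"heart disease": 2, "viral infection": 2}   # n(q*, s) for one q*-block (made up)
prior = {"heart disease": 0.01, "viral infection": 0.99}    # f(s|q), e.g. Umeko's prior (made up)

def observed_belief(block_counts, prior, s):
    # Posterior-style belief: weight each value's count by the prior and normalize.
    weighted = {v: block_counts[v] * prior.get(v, 0.0) for v in block_counts}
    return weighted[s] / sum(weighted.values())

print(observed_belief(block_counts, prior, "viral infection"))  # ~0.99: background knowledge
# eliminates "heart disease", so Umeko's sensitive value is revealed despite 2-anonymity.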

12 Interpreting the privacy problem of k-anonymity
Derived from the relationship between observed belief and privacy disclosure.
Positive disclosure, extreme situation: β(q,s,T*) -> 1.
Possibility 1: n(q*, s') << n(q*, s) for every other value s' => lack of diversity.
Possibility 2: strong background knowledge eliminates the other items. Knowledge: except for one value s, the other values s' are unlikely when Q=q, i.e. f(s'|q) -> 0; this minimizes the contribution of the other items and drives their total weight to 0.
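Written out in the notation above (a reconstruction in my notation, not necessarily the paper's exact expression), the positive-disclosure condition is:

\[
\beta_{(q,s,T^*)} \;=\; \frac{n(q^*,s)\, f(s\mid q)}{n(q^*,s)\, f(s\mid q) \;+\; \sum_{s'\neq s} n(q^*,s')\, f(s'\mid q)} \;\longrightarrow\; 1
\quad\text{whenever}\quad \sum_{s'\neq s} n(q^*,s')\, f(s'\mid q) \;\to\; 0,
\]

which happens either because n(q*, s') is near 0 for all s' ≠ s (possibility 1, lack of diversity) or because f(s'|q) is near 0 for all s' ≠ s (possibility 2, strong background knowledge).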

13 Negative disclosure: β(q,s,T*) -> 0
β(q,s,T*) -> 0 requires either n(q*, s) -> 0 or f(s|q) -> 0. The Umeko example: f(heart disease | Japanese) is near 0, so we can conclude that Umeko does not have heart disease.

14 How to address the problems?
Make sure that n(q*, s') << n(q*, s) does not hold, so the attacker needs more knowledge to rule out the other items: one piece of "damaging instance-level knowledge" for each f(s'|q) -> 0. If there are L distinct sensitive values in the q*-block, the attacker needs L-1 pieces of damaging knowledge to eliminate the other L-1 possible sensitive values. This is the principle of L-diversity.

15 L-diversity: how to evaluate it?
Entropy l-diversity: every q*-block satisfies the condition
\[
-\sum_{s} p(q^*, s) \log p(q^*, s) \;\ge\; \log(L)
\]
where p(q*, s) = n(q*, s)/n(q*) is the fraction of the block carrying sensitive value s. The left-hand side is the entropy of the sensitive values in the q*-block; the right-hand side is the entropy of L uniformly distributed distinct sensitive values.
*We like a uniform distribution of sensitive values over each block!
**This guarantees that every q*-block has at least L distinct sensitive values.
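A minimal sketch of the entropy l-diversity check for a single q*-block, following the condition above; the example block values are made up.

import math
from collections import Counter

def entropy_l_diverse(sensitive_values, l):
    # Entropy l-diversity: the entropy of the block's sensitive values must be at least log(l).
    counts = Counter(sensitive_values)
    total = len(sensitive_values)
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy >= math.log(l)

block = ["cancer", "flu", "viral infection", "cancer"]  # made-up q*-block
print(entropy_l_diverse(block, 3))   # False: counts (2,1,1) give entropy ~1.04 < log(3) ~ 1.10
print(entropy_l_diverse(block, 2))   # True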

16 Other extensions
Entropy l-diversity is too restrictive:
Some positive disclosures are acceptable. Typically, some sensitive values may have very high frequency and are not really sensitive in practice, for example "normal" as a disease symptom. The log(L) entropy condition cannot be satisfied in such cases.
Principle for relaxing the strong condition: a uniform distribution of sensitive values is good; when we cannot achieve this, we make the value frequencies as close as possible, especially for the most frequent value.
Recursive (c,l)-diversity is proposed: control the frequency gap between the most frequent item and the least frequent items. With frequencies sorted in descending order r_1 >= r_2 >= ... >= r_m, require r_1 < c(r_l + r_{l+1} + … + r_m).
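A minimal sketch of the recursive (c, l)-diversity check for a single q*-block, taken directly from the r_1 < c(r_l + … + r_m) condition above; the example block is made up.

from collections import Counter

def recursive_cl_diverse(sensitive_values, c, l):
    # With frequencies sorted descending r_1 >= ... >= r_m, require r_1 < c * (r_l + ... + r_m).
    freqs = sorted(Counter(sensitive_values).values(), reverse=True)
    if len(freqs) < l:
        return False                      # fewer than l distinct values can never qualify
    return freqs[0] < c * sum(freqs[l - 1:])

block = ["flu"] * 6 + ["cancer"] * 2 + ["normal"] * 2   # made-up q*-block
print(recursive_cl_diverse(block, c=3, l=2))  # True: 6 < 3 * (2 + 2)
print(recursive_cl_diverse(block, c=1, l=2))  # False: 6 >= 1 * (2 + 2)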

17 Implementation of l-diversity
Build an algorithm with a structure similar to existing k-anonymity algorithms: search the domain generalization hierarchy and check both l-diversity and k-anonymity for each candidate generalization.
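A sketch (an assumption about how such a check could be organized, not the paper's implementation) of the combined test: group records by their generalized quasi-identifier and require every group to be both k-anonymous and entropy l-diverse. Here generalize is a placeholder for whatever domain generalization hierarchy mapping is being tested.

import math
from collections import Counter, defaultdict

def satisfies_k_and_l(records, generalize, k, l):
    # records: iterable of (quasi_identifier, sensitive_value) pairs.
    blocks = defaultdict(list)
    for qi, s in records:
        blocks[generalize(qi)].append(s)
    for values in blocks.values():
        if len(values) < k:
            return False                                   # k-anonymity violated
        counts = Counter(values)
        entropy = -sum((c / len(values)) * math.log(c / len(values)) for c in counts.values())
        if entropy < math.log(l):
            return False                                   # entropy l-diversity violated
    return True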

18 Discussion: problems not addressed
Skewed data: a common problem for both l-diversity and k-anonymity; it makes l-diversity very inefficient.
Balance between utility and privacy: the entropy l-diversity and (c,l)-diversity methods do not guarantee good data utility.

19 t-closeness
Addresses two types of attacks:
Skewness attack
Similarity attack

20 Skewness attack
The probability of cancer in the original table is low, but in some q*-block of the anonymized table it is much higher than the global probability.

21 Semantic similarity attack
All sensitive values in the block are semantically similar, so the attacker still learns something: e.g., the salary is low, or the person has some kind of stomach disease.

22 The root of these two problems
Sensitive values: the difference between the global distribution and the local distribution within some block.

23 The proposal of t-closeness
Make the global and local distributions of sensitive values as similar as possible (within a threshold t).
To evaluate distribution similarity while accounting for semantic similarity between values, the Earth Mover's Distance is used as the distance measure between distributions.
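A minimal sketch of the Earth Mover's Distance for an ordered (numerical) sensitive attribute, where it reduces to averaging the absolute cumulative differences between the two distributions; the global and block distributions here are made up.

def ordered_emd(p, q):
    # EMD between two distributions over the same ordered set of m values;
    # for ordered values it reduces to averaging |cumulative differences|.
    m = len(p)
    cumulative, total = 0.0, 0.0
    for pi, qi in zip(p, q):
        cumulative += pi - qi
        total += abs(cumulative)
    return total / (m - 1)

global_dist = [0.1, 0.2, 0.4, 0.2, 0.1]   # made-up global salary distribution
block_dist  = [0.4, 0.4, 0.2, 0.0, 0.0]   # made-up low-salary q*-block
print(ordered_emd(global_dist, block_dist))  # ~0.3: a large distance, so the block violates a small t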

24 Further studies
Modeling different types of prior knowledge. Think about the problem: there is no way to enumerate all possible background knowledge. Can we do better than that?

