Data Anonymization: Refined Privacy Models
Outline: the k-anonymity specification is not sufficient; enhancing privacy with l-diversity; t-closeness.
Connecting the dots: countering the privacy attack. k-anonymity addresses one type of attack, the linking attack. Are there other types of attacks?
K-anonymity. k-anonymity tries to counter this type of privacy attack: individual -> quasi-identifier -> sensitive attributes. Example: in a 4-anonymized table, at least 4 records share the same quasi-identifier. Quasi-identifier: the set of attributes through which an attacker can link a record to other public data. Typical method: domain-specific generalization.
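A minimal sketch of the k-anonymity check itself (the table, column names, and k are made up for illustration): group records by their quasi-identifier and require every group to contain at least k records.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifier, k):
    """records: list of dicts; quasi_identifier: list of column names."""
    groups = Counter(tuple(r[a] for a in quasi_identifier) for r in records)
    return all(count >= k for count in groups.values())

# Toy data: zip code and age are (generalized) quasi-identifiers, disease is sensitive.
table = [
    {"zip": "130**", "age": "<30", "disease": "heart disease"},
    {"zip": "130**", "age": "<30", "disease": "viral infection"},
    {"zip": "130**", "age": "<30", "disease": "cancer"},
    {"zip": "130**", "age": "<30", "disease": "cancer"},
]
print(is_k_anonymous(table, ["zip", "age"], k=4))  # True: one block of size 4
```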
More privacy problems. All existing k-anonymity approaches assume that privacy is protected as long as the k-anonymity specification is satisfied. But there are other problems: homogeneity in the sensitive attributes, background knowledge about individuals, …
Problem 1: Homogeneity attack. We know Bob is in the table. If Bob lives in zip code 13053 and he is 31 years old, then every record in his block has cancer, so Bob surely has cancer!
Problem 2: Background knowledge attack. Japanese people have an extremely low incidence of heart disease. Umeko is Japanese, lives in zip code 13068, and is 21 years old, so the attacker can rule out the heart-disease records in her block: Umeko has a viral infection with high probability.
The cause of these two problems: the sensitive-attribute values in some blocks do not have sufficient diversity. Problem 1: no diversity at all. Problem 2: background knowledge helps the attacker reduce the effective diversity.
Major contributions of l-diversity: formally analyze the privacy guarantees of k-anonymity under the Bayes-optimal privacy model. Basic idea: increase the diversity of sensitive-attribute values within each anonymized block. Instantiations and implementations of the l-diversity concept: entropy l-diversity, recursive (c,l)-diversity, and more.
Modeling the attacks. What is a privacy attack? Guessing the sensitive values, expressed as a probability. Prior belief: without seeing the table, what can we guess? With S the sensitive attribute and Q the quasi-identifier, the prior is P(S=s|Q=q); example: Japanese vs. heart disease. Observed belief: once the anonymized table T* is observed, the belief changes to P(S=s | Q=q and T* is known). An effective privacy attack is one where T* changes the belief a lot: prior small but observed belief large -> positive disclosure; prior large but observed belief small -> negative disclosure.
The definition of observed belief. A q*-block is a k-anonymized group whose quasi-identifier has been generalized to q*; it contains n(q*) records, of which n(q*,s) have sensitive value S=s, so f(s|q*) = n(q*,s)/n(q*) is the proportion of the block carrying s. The attacker's background knowledge is the prior f(s|q) = P(S=s|Q=q). The observed belief combines the block counts with this prior:

    beta(q,s,T*) = n(q*,s) f(s|q) / sum over s' of [ n(q*,s') f(s'|q) ]
Interpreting the privacy problem of k-anonymity, derived from the relationship between the observed belief and (positive) privacy disclosure. Extreme situation: beta(q,s,T*) -> 1 => positive disclosure. Possibility 1: n(q*,s') << n(q*,s) for every other value s' => lack of diversity. Possibility 2: strong background knowledge eliminates the other values: the attacker knows that, given Q=q, every value other than s is unlikely, so f(s'|q) -> 0, which drives the contribution of the other values in the denominator to 0.
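A minimal sketch of the observed-belief formula above, with made-up counts and priors, showing how both the homogeneity attack and the background-knowledge attack push the posterior toward 1:

```python
def observed_belief(n, prior, s):
    """beta(q,s,T*) = n(q*,s) f(s|q) / sum_s' n(q*,s') f(s'|q).
    n[v]: count of sensitive value v in the q*-block; prior[v]: f(v|q)."""
    denom = sum(n[v] * prior[v] for v in n)
    return n[s] * prior[s] / denom

# Homogeneity (Problem 1): every record in the block has the same disease,
# so the posterior hits 1 regardless of the prior -> positive disclosure.
n = {"cancer": 4, "viral infection": 0, "heart disease": 0}
prior = {"cancer": 0.05, "viral infection": 0.60, "heart disease": 0.35}
print(observed_belief(n, prior, "cancer"))            # 1.0

# Background knowledge (Problem 2): the prior for heart disease is ~0 for
# Umeko, which wipes out those records and concentrates belief on the rest.
n = {"viral infection": 2, "heart disease": 2}
prior = {"viral infection": 0.5, "heart disease": 1e-6}
print(observed_belief(n, prior, "viral infection"))   # ~1.0
```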
Negative disclosure: beta(q,s,T*) -> 0 when either n(q*,s) = 0 or f(s|q) -> 0 (the Umeko example).
How to address the problems? Ensure that n(q*,s') << n(q*,s) cannot hold, so the attacker needs extra knowledge ("damaging instance-level knowledge" that drives f(s'|q) -> 0) to rule out the other values. If there are L distinct sensitive values in the q*-block, the attacker needs L-1 pieces of damaging knowledge to eliminate the L-1 other possible sensitive values. This is the principle of l-diversity.
l-diversity: how to evaluate it? Entropy l-diversity: every q*-block satisfies the condition

    -sum over s of [ p(q*,s) log p(q*,s) ] >= log(L),   where p(q*,s) = n(q*,s)/n(q*),

i.e., the entropy of the sensitive values in the q*-block is at least the entropy of L uniformly distributed distinct values. We like a uniform distribution of sensitive values over each block, and this condition guarantees that every q*-block has at least L distinct sensitive values.
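A minimal sketch of the entropy l-diversity check for a single q*-block (block contents and l are illustrative):

```python
import math
from collections import Counter

def is_entropy_l_diverse(sensitive_values, l):
    """Entropy of the block's sensitive values must be at least log(l)."""
    counts = Counter(sensitive_values)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy >= math.log(l)

block = ["cancer", "flu", "heart disease"] * 2                  # three equally frequent values
print(is_entropy_l_diverse(block, l=2))                         # True: entropy = log(3) > log(2)
print(is_entropy_l_diverse(["cancer"] * 5 + ["flu"], l=2))      # False: heavily skewed block
```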
Other extensions. Entropy l-diversity is too restrictive: some positive disclosures should be allowed, because in practice some sensitive values have very high frequency without being truly sensitive (for example, "normal" in a disease attribute), and the log(L) bound cannot be satisfied in some cases. Principle for relaxing the strong condition: a uniform distribution of sensitive values is ideal; when we cannot achieve it, we make the value frequencies as close as possible, especially limiting the most frequent value. Recursive (c,l)-diversity is proposed to control the gap between the most frequent value and the least frequent values: with r_1 >= r_2 >= ... >= r_m the sorted frequencies of the sensitive values in a q*-block, require r_1 < c(r_l + r_{l+1} + ... + r_m).
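A minimal sketch of the recursive (c,l)-diversity test for one q*-block (block contents, c, and l are illustrative):

```python
from collections import Counter

def is_recursive_cl_diverse(sensitive_values, c, l):
    """r_1 < c * (r_l + r_{l+1} + ... + r_m) over the sorted value counts."""
    r = sorted(Counter(sensitive_values).values(), reverse=True)
    if len(r) < l:
        return False                     # fewer than l distinct values
    return r[0] < c * sum(r[l - 1:])     # r_1 < c (r_l + ... + r_m)

block = ["normal"] * 6 + ["cancer"] * 2 + ["flu"] * 2
print(is_recursive_cl_diverse(block, c=3, l=2))  # True:  6 < 3 * (2 + 2)
print(is_recursive_cl_diverse(block, c=1, l=2))  # False: 6 >= 1 * (2 + 2)
```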
Implementation of l-diversity: build an algorithm with a structure similar to existing k-anonymity algorithms, using a domain generalization hierarchy and checking both l-diversity and k-anonymity.
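An illustrative sketch of that structure, not the paper's actual algorithm: climb a made-up zip-code generalization hierarchy until every block satisfies both k-anonymity and the simplest "at least l distinct sensitive values" reading of l-diversity.

```python
from collections import defaultdict

def generalize_zip(zipcode, level):
    """Level 0: 13053, level 1: 1305*, level 2: 130**, ..."""
    return zipcode[: len(zipcode) - level] + "*" * level

def anonymize(records, k, l, max_level=5):
    """records: list of (zipcode, sensitive_value) pairs."""
    for level in range(max_level + 1):
        blocks = defaultdict(list)
        for zipcode, disease in records:
            blocks[generalize_zip(zipcode, level)].append(disease)
        # Accept the first (least generalized) level where every block is
        # k-anonymous and contains at least l distinct sensitive values.
        if all(len(b) >= k and len(set(b)) >= l for b in blocks.values()):
            return level, dict(blocks)
    return None

records = [("13053", "cancer"), ("13068", "flu"), ("13053", "cancer"),
           ("13067", "heart disease"), ("13053", "flu"), ("13068", "cancer")]
print(anonymize(records, k=3, l=2))   # generalizes to 1305* / 1306* at level 1
```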
Discussion: problems not addressed. Skewed data, a common problem for both l-diversity and k-anonymity, makes enforcing l-diversity very inefficient. Balance between utility and privacy: entropy l-diversity and (c,l)-diversity do not guarantee good data utility.
t-closeness addresses two further types of attacks: the skewness attack and the similarity attack.
Skewness attack: the probability of cancer in the original table is low, but the probability of cancer within an anonymized block can be much higher than this global probability.
Semantic similarity attack: every salary in a block is low, or every disease in a block is some kind of stomach disease, so the attacker learns sensitive information even though the exact values differ.
The root of these two problems: for the sensitive values, the difference between the global distribution and the local distribution within some block.
The proposal of t-closeness: make the global and local distributions as similar as possible. To evaluate the similarity of distributions while accounting for semantic similarity between values, the Earth Mover's Distance is used as the distance between distributions.
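A minimal sketch of a t-closeness check using the ordered-distance form of the Earth Mover's Distance for a numeric sensitive attribute (the salary domain, table, block, and threshold t are all made up):

```python
from collections import Counter

def distribution(values, domain):
    """Empirical probability of each domain value among the given records."""
    counts = Counter(values)
    total = len(values)
    return [counts[v] / total for v in domain]

def emd_ordered(p, q):
    """EMD for ordered, equally spaced values: mean absolute cumulative difference."""
    m, cum, total = len(p), 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi
        total += abs(cum)
    return total / (m - 1)

def satisfies_t_closeness(block, whole_table, domain, t):
    return emd_ordered(distribution(block, domain),
                       distribution(whole_table, domain)) <= t

domain = [3000, 4000, 5000, 6000, 7000, 8000]            # ordered salary levels
table = [3000, 4000, 5000, 5000, 6000, 7000, 8000, 8000]
low_block = [3000, 3000, 4000]                           # the "all salaries are low" block
print(satisfies_t_closeness(low_block, table, domain, t=0.2))  # False: EMD ~ 0.48
```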
Further studies: modeling different types of prior knowledge. Think about the problem: there is no way to enumerate all possible background knowledge… can we do better than that?