
1 Reconciling Confidentiality Risk Measures from Statistics and Computer Science Jerry Reiter Department of Statistical Science Duke University

2 Background for this talk: I am a proponent of unrestricted data access when possible. I advocate procedures that have provably valid inferential properties. I focus on the non-interactive setting. I am still learning differential privacy.

3 My questions about differential privacy in the non-interactive setting:
- What does ε imply in terms of risks?
- Is it wise to be agnostic to the content of the data and how they were collected?
- Why consider all possible versions of the data when determining protection?

4 Meaning of ε: a simulation study. Data generation: a two-component mixture with means 0 and μ, with values bounded to lie within ±5 of the component means. Data: 9 values from the first component, 1 value from the second. The query is the mean of the 10 observations. Add Laplace noise to the mean as in Dwork (2006), with sensitivity |μ| + 10. (Note: the sensitivity was not divided by the sample size and so is conservative.)
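To make the mechanism concrete, here is a minimal sketch of the slide's setup, assuming the two-component normal mixture read off above; the unit variances and the redraw-based bounding are my assumptions, not stated in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def bounded_normal(mean, size, rng):
    """Draw N(mean, 1) values, redrawing any that fall outside mean +/- 5."""
    out = rng.normal(mean, 1.0, size)
    while np.any(np.abs(out - mean) > 5):
        bad = np.abs(out - mean) > 5
        out[bad] = rng.normal(mean, 1.0, bad.sum())
    return out

def noisy_mean(data, epsilon, sensitivity, rng):
    """Laplace mechanism: the true mean plus Laplace(sensitivity / epsilon) noise."""
    return data.mean() + rng.laplace(scale=sensitivity / epsilon)

mu, epsilon = 10.0, 1.0
data = np.concatenate([bounded_normal(0.0, 9, rng),
                       bounded_normal(mu, 1, rng)])
# Sensitivity |mu| + 10, deliberately not divided by n (conservative, per the slide).
print(noisy_mean(data, epsilon, abs(mu) + 10, rng))
```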

5 Meaning of ε: set μ = 10. The adversary knows the first nine values but not the last one, and knows the marginal distribution of Y. Let S be the reported (noisy) value of the mean. It is straightforward to simulate the posterior distribution of the unknown value given S; the prior for the unknown value is its marginal distribution.
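A sketch of the adversary's posterior computation under one reading of the slides: p is taken to be the posterior probability that the unknown tenth value came from the second mixture component, the mixture weights are assumed to be 0.9/0.1 from the 9-to-1 split on slide 4, and the ±5 truncation is omitted from the prior draws for brevity:

```python
import numpy as np

def posterior_second_component(S, known, mu, epsilon, sensitivity,
                               n_draws=100_000, seed=0):
    """Monte Carlo Pr(unknown value came from the second component | S)."""
    rng = np.random.default_rng(seed)
    # Prior draws of the unknown value from the marginal mixture
    # (0.9 / 0.1 weights assumed; truncation omitted for brevity).
    second = rng.random(n_draws) < 0.1
    y10 = np.where(second, rng.normal(mu, 1.0, n_draws),
                           rng.normal(0.0, 1.0, n_draws))
    # Laplace likelihood of the reported S for each candidate value.
    b = sensitivity / epsilon
    means = (known.sum() + y10) / (len(known) + 1)
    lik = np.exp(-np.abs(S - means) / b) / (2 * b)
    # Importance-weighted posterior probability of the second component.
    return lik[second].sum() / lik.sum()
```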

6 Results from simulation with μ = 10, global sensitivity (1000 runs):
ε = 0.01: p = 0.19
ε = 0.1:  p = 0.19
ε = 1:    p = 0.19
ε = 10:   p = 0.21
ε = 100:  p = 0.82
ε = 1000: p = 1.0

7 Results from simulation with μ = 3, global sensitivity (1000 runs):
ε = 0.01: p = 0.18
ε = 0.1:  p = 0.18
ε = 1:    p = 0.18
ε = 10:   p = 0.18
ε = 100:  p = 0.43
ε = 1000: p = 0.88

8 Results from simulation with μ = 10, local sensitivity (1000 runs). Sensitivity = max(data) − min(data).
ε = 0.01: p = 0.82
ε = 0.1:  p = 0.82
ε = 1:    p = 0.82
ε = 10:   p = 0.89
ε = 100:  p = 1.0
ε = 1000: p = 1.0
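The local-sensitivity variant differs from the earlier sketch only in how the noise scale is set, with the observed range replacing the global bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_mean_local(data, epsilon, rng):
    """Laplace mechanism using local sensitivity max(data) - min(data)."""
    sensitivity = data.max() - data.min()
    return data.mean() + rng.laplace(scale=sensitivity / epsilon)
```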

9 Should the local sensitivity results be dismissed? Suppose the data are a census. The support of Y is finite and known to the agency. The sensitivity parameter cannot be "close to" max(Y) − min(Y). How should the sensitivity be set while preserving accuracy?

10 Hybrid approach: noise added using the global sensitivity; the intruder reasons using the local sensitivity. This has no formal justification, but it can give good predictions in this context.

11 Results from simulation with μ = 10, global sensitivity, local-sensitivity intruder (1000 runs):
ε = 0.01: p = 0.90
ε = 0.1:  p = 0.90
ε = 1:    p = 0.90
ε = 10:   p = 0.90
ε = 100:  p = 0.99
ε = 1000: p = 1.0
More study is needed to see whether this approach is too specific to the simulation design.

12 What about the data? Two independent random samples of 6 people walking the streets of LA. Would you say they represent an equal disclosure risk?

13 What about the data?
Sex  Age  Partners
F    31   4
M    26   2
M    48   10
F    52   3
M    41   2
F    28   7

14 What about the data?
Sex  Age  Partners
F    31   4
M    26   2
M    48   10
F    52   3
M    41   2
F    108  7

15 What about the data? The act of sampling provides protection. Does this fit in differential privacy? Even with sampling, some people are at higher risk than others. Does this fit in differential privacy?

16 Why all versions of the data?
Sex  Age  Partners
F    31   4
M    26   2
M    48   10
F    52   3
M    41   2
F    28   7
DP: consider all ages in the support.

17 Why all versions? A variant of the versions question: are some releases safer than others? A silly example: use a DP algorithm 99.99% of the time and a non-DP algorithm 0.01% of the time. The result is not DP, but 99.99% of releases are fine. Why focus on the 0.01%?

18 Statistical approaches: main types of disclosure risks.
- Identification disclosure: match a record in the released data with a target; learn that someone participated in the study.
- Attribute disclosure: learn the value of a sensitive variable for a target.

19 Measures of identification disclosure risk.
- Number of population uniques (see the sketch below): does not incorporate intruders' knowledge; may not be useful for numerical data; hard to gauge effects of SDL procedures.
- Probability-based methods (direct matching using external databases; indirect matching using the existing data set): require assumptions about intruder behavior; external databases may be costly to obtain.
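Counting population uniques is mechanical once the key variables are fixed. A minimal pandas sketch; the DataFrame and column names are illustrative, not from the talk:

```python
import pandas as pd

def population_uniques(pop: pd.DataFrame, keys: list[str]) -> int:
    """Count records whose combination of key variables occurs exactly once."""
    cell_sizes = pop.groupby(keys).size()
    return int((cell_sizes == 1).sum())

pop = pd.DataFrame({"sex":   ["F", "M", "M", "F"],
                    "age":   [31, 26, 48, 31],
                    "field": ["stat", "cs", "cs", "stat"]})
print(population_uniques(pop, ["sex", "age", "field"]))  # -> 2 unique records
```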

20 Identification disclosure risks in microdata. Context: Survey of Earned Doctorates. The intruder knows the target's characteristics, e.g., age, field, gender, race (?), citizenship (?), available from public records. The intruder searches for people with those characteristics in the released SED files. If a small number of people match, the intruder claims that the target participated in the SED.

21 Assessing identification disclosure risk: the intruder's actions. Released data on n records: Z. Information about the disclosure protection: M. Target: t (man, statistics, 1999, white, citizen). Let J = j when record j in Z matches t, and J = n + 1 when the target is not in Z. For j = 1, …, n + 1, the intruder computes Pr(J = j | t, Z, M) and selects the j with the highest probability.

22 Assessing identification disclosure risk. Let Y_s be the true values of the perturbed data. The identification probability averages over these unknown true values: Pr(J = j | t, Z, M) = ∫ Pr(J = j | t, Z, Y_s, M) p(Y_s | t, Z, M) dY_s.

23 Calculation, Case 1: the target is assumed to be in Z. Records that do not match the target's values have zero probability. For matches, the probability equals 1/n_t, where n_t is the number of matches. If there are no matches, use 1/n_t*, where n_t* is the number of matches in the unperturbed data. The probability equals zero for j = n + 1.

24 Calculation, Case 2: the target is not assumed to be in Z. Units that do not match the target's values have zero probability. For matches, the probability is 1/N_t, where N_t is the number of matches in the population. For j = n + 1, the probability is (N_t − n_t) / N_t.
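Cases 1 and 2 translate directly into code. A sketch under a literal reading of the two slides; the function and argument names are mine:

```python
import numpy as np

def match_probabilities(Z_keys, target, in_sample, N_t=None, unperturbed_keys=None):
    """Pr(J = j) for j = 1..n and Pr(J = n+1), per Cases 1 and 2."""
    n = len(Z_keys)
    match = np.array([k == target for k in Z_keys], dtype=float)
    n_t = match.sum()
    probs = np.zeros(n + 1)
    if in_sample:                          # Case 1: target assumed to be in Z
        if n_t > 0:
            probs[:n] = match / n_t        # 1/n_t on each matching record
        else:                              # no matches: fall back to unperturbed data
            m = np.array([k == target for k in unperturbed_keys], dtype=float)
            probs[:n] = m / m.sum()        # 1/n_t* on each unperturbed match
        # probs[n] stays 0: the target is in the file by assumption
    else:                                  # Case 2: target not assumed to be in Z
        probs[:n] = match / N_t            # 1/N_t on each match
        probs[n] = (N_t - n_t) / N_t       # probability the target is absent
    return probs

# Example: keys are (sex, age); the target matches record 3 of 4.
Z = [("F", 31), ("M", 26), ("M", 48), ("F", 52)]
print(match_probabilities(Z, ("M", 48), in_sample=False, N_t=5))
```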

25 Implications:
1. Clear interpretation of risk for given assumptions (encoded in the prior distribution).
2. Specific to the collected data.
3. Incorporates sampling.
4. Incorporates information released about the protection scheme.
5. Could, in principle, incorporate measurement error and missing data.

26 Implications:
1. Can incorporate strong assumptions like those in CS, e.g., the intruder knows all values but one.
2. Provides risk measures under a variety of assumptions, enabling decision making under uncertainty.
3. Provides record-level risk measures useful for targeted protection.

