Privacy by Learning the Database
Moritz Hardt
DIMACS, October 24, 2012
Isn’t privacy the opposite of learning the database?
Setting: a Curator holds a data set D, a multi-set over a universe U; an Analyst poses a query set Q; the curator releases a privacy-preserving structure S that is accurate on Q.

Data set D as an N-dimensional histogram, where N = |U| and D[i] = # elements in D of type i. The normalized histogram is a distribution over the universe. A statistical query q (aka linear/counting query) is a vector q in [0,1]^N, with q(D) := Σ_i q[i]·D[i] ∈ [0,1] (on the normalized histogram).
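A minimal sketch of this representation in Python; the helper names are mine, not from the talk:

```python
import numpy as np

def normalized_histogram(records, universe_size):
    """Data set as a distribution over the universe {0, ..., N-1}."""
    hist = np.bincount(records, minlength=universe_size).astype(float)
    return hist / hist.sum()

def linear_query(q, D):
    """A statistical (linear/counting) query: q in [0,1]^N, answer in [0,1]."""
    return float(np.dot(q, D))

# Example: universe of size N = 5, data set of 6 records
D = normalized_histogram(np.array([0, 1, 1, 3, 4, 4]), universe_size=5)
q = np.array([1.0, 0.0, 1.0, 0.5, 0.0])   # weight per universe element
print(linear_query(q, D))                  # prints 0.25
```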
Why statistical queries?
– Perceptron, ID3 decision trees, PCA/SVM, k-means clustering [BlumDworkMcSherryNissim’05]
– Any SQ-learning algorithm [Kearns’98], which includes “most” known PAC-learning algorithms
– Lots of data analysis reduces to multiple statistical queries
Curator’s wildest dream (program shown on slide): this seems hard!
Curator’s 2nd attempt: maximize entropy subject to approximating the query answers. Intuition: entropy implies privacy.
Two pleasant surprises:
– Approximately solved by the multiplicative weights update [Littlestone89, ...]
– Can easily be made differentially private
Why did learning theorists care to solve privacy problems 20 years ago? Answer: Entropy implies generalization
Learning setting: a Learner receives an example set Q, labeled by an unknown concept, and outputs a hypothesis h accurate on all examples. Maximizing entropy implies the hypothesis generalizes.
Privacy vs. Learning:

Privacy                               | Learning
Sensitive database                    | Unknown concept
Queries labeled by answer on DB       | Examples labeled by concept
Synopsis approximates DB on query set | Hypothesis approximates target concept on examples
Must preserve privacy                 | Must generalize
How can we solve this? It is a concave maximization subject to linear constraints, so the Ellipsoid method applies. We’ll take a different route.
Start with the uniform distribution D_0. “What’s wrong with it?” A query q violates a constraint! Minimize entropy loss subject to the correction. Closed-form expression for D_{t+1}? Well...
Closed-form expression for D_{t+1}? YES! Relax. Approximate. Think.
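Here is that closed form: the standard multiplicative-weights projection, written with a learning rate η that the slides leave implicit (a sketch, not copied from the talk):

```latex
% Minimizing entropy loss relative to D_t, subject to moving the answer of the
% violated query q toward its true value, yields an exponential update.
% When q(D_t) < q(D):
\[
  D_{t+1}[i] \;=\; \frac{D_t[i]\, e^{\eta\, q[i]}}{\sum_{j=1}^{N} D_t[j]\, e^{\eta\, q[j]}}
\]
% (use e^{-\eta q[i]} when q(D_t) > q(D); the denominator renormalizes D_{t+1}
% to a distribution over the universe)
```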
Multiplicative Weights Update
At step t: the current distribution D_t and the true histogram D over coordinates 1, ..., N. Suppose q(D_t) < q(D). After step t: the update has multiplicatively boosted the coordinates where q is large, moving q(D_{t+1}) toward q(D).
Multiplicative Weights Update. Algorithm:
D_0 uniform
For t = 1 ... T:
  Find bad query q
  D_{t+1} = Update(D_t, q)

How quickly do we run out of bad queries?
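To make the loop concrete, a runnable non-private sketch; the learning rate eta, the accuracy target alpha, and the helper names are my choices:

```python
import numpy as np

def mw_update(D_t, q, direction, eta=0.1):
    """One multiplicative weights step: scale coordinate i by exp(+-eta * q[i])."""
    D_next = D_t * np.exp(direction * eta * q)
    return D_next / D_next.sum()          # renormalize to a distribution

def multiplicative_weights(D, queries, alpha=0.05, max_steps=1000):
    """Repeatedly fix the worst-violated query until all are alpha-accurate."""
    N = len(D)
    D_t = np.full(N, 1.0 / N)             # D_0 uniform
    for _ in range(max_steps):
        errors = np.array([np.dot(q, D) - np.dot(q, D_t) for q in queries])
        worst = int(np.argmax(np.abs(errors)))
        if abs(errors[worst]) <= alpha:   # no bad queries left
            break
        D_t = mw_update(D_t, queries[worst], direction=np.sign(errors[worst]))
    return D_t
```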
Put Ψ_t := RE(D ‖ D_t), the relative entropy between the true histogram and the current approximation. Progress Lemma: if q is bad, then Ψ_t − Ψ_{t+1} ≥ Ω(α²).
Facts: Ψ_0 ≤ log N and Ψ_t ≥ 0. Progress Lemma: if q is bad, the potential drops by Ω(α²). Hence there are at most O(log N / α²) update steps, and the error bound follows.
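The potential argument behind these facts, in its standard form (consistent with the entropy framing above, constants suppressed):

```latex
% Potential: relative entropy between the true histogram D and the current D_t.
\[
  \Psi_t \;=\; \mathrm{RE}(D \,\|\, D_t) \;=\; \sum_{i=1}^{N} D[i]\,\log\frac{D[i]}{D_t[i]}
\]
% Facts: \Psi_0 \le \log N (since D_0 is uniform) and \Psi_t \ge 0 for all t.
% Progress Lemma: each update on a bad query, |q(D) - q(D_t)| > \alpha, gives
\[
  \Psi_t - \Psi_{t+1} \;\ge\; \Omega(\alpha^2)
\]
% so the number of updates is at most \Psi_0 / \Omega(\alpha^2) = O(\log N / \alpha^2).
```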
Algorithm:
D_0 uniform
For t = 1 ... T:
  Find bad query q   (the only step that interacts with D)
  D_{t+1} = Update(D_t, q)

What about privacy?
Differential Privacy [Dwork-McSherry-Nissim-Smith-06]. Two data sets D, D’ are called neighboring if they differ in one element. Definition (Differential Privacy): a randomized algorithm M(D) is called (ε,δ)-differentially private if for any two neighboring data sets D, D’ and all events S: Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D’) ∈ S] + δ.
Laplace Mechanism [DMNS’06]. Given query q: (1) compute q(D); (2) output q(D) + Lap(1/(ε_0 n)). Fact: this satisfies ε_0-differential privacy. Note: the sensitivity of q is 1/n.
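A direct translation into Python, assuming the normalized-histogram representation from the earlier sketch:

```python
import numpy as np

def laplace_mechanism(q, D, n, eps0, rng=None):
    """Answer linear query q on histogram D with Laplace noise of scale 1/(eps0*n).

    The sensitivity of a linear query on a database of size n is 1/n, so this
    noise level gives eps0-differential privacy.
    """
    if rng is None:
        rng = np.random.default_rng()
    true_answer = float(np.dot(q, D))
    return true_answer + rng.laplace(scale=1.0 / (eps0 * n))
```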
Query selection: compute the violations |q_1(D) − q_1(D_t)|, ..., |q_k(D) − q_k(D_t)|, add Lap(1/(ε_0 n)) noise to each, and pick the maximal noisy violation.

Lemma [McSherry-Talwar’07]: the selected index satisfies ε_0-differential privacy, and w.h.p. its violation is within O(log k / (ε_0 n)) of the largest.
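A sketch of this noisy selection step, under the same conventions as the earlier snippets (report-noisy-max with per-index Laplace noise, as on the slide):

```python
import numpy as np

def select_bad_query(queries, D, D_t, n, eps0, rng=None):
    """Pick the query with (noisily) maximal violation |q(D) - q(D_t)|."""
    if rng is None:
        rng = np.random.default_rng()
    violations = np.array([abs(np.dot(q, D) - np.dot(q, D_t)) for q in queries])
    noisy = violations + rng.laplace(scale=1.0 / (eps0 * n), size=len(queries))
    return int(np.argmax(noisy))  # releasing only this index is eps0-DP
```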
Algorithm:
D_0 uniform
For t = 1 ... T:
  Noisy selection of q
  D_{t+1} = Update(D_t, q)   (also use the noisy answer in the update rule)

Now each step satisfies ε_0-differential privacy! What is the total privacy guarantee? The error bound changes to account for the noise.
T-fold composition of ε_0-differential privacy satisfies:
– Answer 1 [DMNS’06]: ε_0·T-differential privacy
– Answer 2 [DRV’10]: (ε, δ)-differential privacy
Note: for small enough ε_0, Answer 2 scales like ε_0·√T rather than ε_0·T.
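For reference, the [DRV’10] bound in its usual form (a standard statement, not copied from the slide):

```latex
% Advanced composition [Dwork-Rothblum-Vadhan'10]: the T-fold composition of
% \varepsilon_0-DP mechanisms is (\varepsilon, \delta)-DP for every \delta > 0 with
\[
  \varepsilon \;=\; \sqrt{2T\,\ln(1/\delta)}\;\varepsilon_0
      \;+\; T\,\varepsilon_0\,\bigl(e^{\varepsilon_0} - 1\bigr)
\]
% For \varepsilon_0 \ll 1/\sqrt{T} the first term dominates, so the loss grows
% like \varepsilon_0\sqrt{T} rather than the \varepsilon_0 T of basic composition.
```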
Combine the composition theorems with the error bound and optimize T, ε_0 for the target ε, δ.
Theorem 1. On databases of size n, MW achieves ε-differential privacy with the error bound shown on the slide.
Theorem 2. MW achieves (ε, δ)-differential privacy with the error bound shown on the slide.
Optimal dependence on |Q| and n.
Two settings. Offline (non-interactive): release a synopsis S answering all of Q at once. ✔ [H-Ligett-McSherry12, Gupta-H-Roth-Ullman11; see also Roth-Roughgarden10, Dwork-Rothblum-Vadhan10, Dwork-Naor-Reingold-Rothblum-Vadhan09, Blum-Ligett-Roth08] Online (interactive): answer queries q_1, q_2, ... with a_1, a_2, ... as they arrive. Can this be done too? [H-Rothblum10]
Private MW Online [H-Rothblum’10]: achieves the same error bounds!

Algorithm, given query q_t:
If |q_t(D_t) − q_t(D)| < α/2 + Lap(1/(ε_0 n)):
  Output q_t(D_t)
Otherwise:
  Output q_t(D) + Lap(1/(ε_0 n))
  D_{t+1} = Update(D_t, q_t)
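A sketch of the interactive mechanism in the same style as the earlier snippets; the class name and all parameter defaults are illustrative:

```python
import numpy as np

class PrivateMWOnline:
    """Online private multiplicative weights, following the outline above."""

    def __init__(self, D, n, alpha=0.1, eps0=0.01, eta=0.1, seed=0):
        self.D, self.n = D, n                      # true histogram, database size
        self.alpha, self.eps0, self.eta = alpha, eps0, eta
        self.D_t = np.full(len(D), 1.0 / len(D))   # D_0 uniform
        self.rng = np.random.default_rng(seed)

    def answer(self, q):
        scale = 1.0 / (self.eps0 * self.n)
        diff = np.dot(q, self.D) - np.dot(q, self.D_t)
        if abs(diff) < self.alpha / 2 + self.rng.laplace(scale=scale):
            return float(np.dot(q, self.D_t))      # lazy round: answer from D_t
        # busy round: release a noisy true answer and update D_t
        noisy = float(np.dot(q, self.D) + self.rng.laplace(scale=scale))
        self.D_t = self.D_t * np.exp(np.sign(diff) * self.eta * q)
        self.D_t = self.D_t / self.D_t.sum()
        return noisy
```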
Overview: Privacy Analysis.
– Offline setting: T << n steps; simple analysis using the composition theorems.
– Online setting: k >> n invocations of Laplace; the composition theorems don’t suggest small error!
Idea: analyze the privacy loss like a lazy random walk (an idea going back to Dinur-Dwork-Nissim’03).
Privacy Loss as a lazy random walk. The privacy loss stays put in lazy rounds and takes a step only in busy rounds, where a busy round is one whose noisy answer is close to forcing an update. W.h.p. the total privacy loss is bounded by O(sqrt(#busy)).
Formalizing the random walk: imagine the output of PMW is a 0/1 indicator vector v, where v_t = 1 if round t is an update and 0 otherwise. Recall: very few updates, so the vector is sparse. Theorem: the vector v is (ε,δ)-differentially private.
Let D, D’ be neighboring DBs, and let P, Q be the corresponding output distributions. Approach: (1) sample v from P; (2) consider X = log(P(v)/Q(v)); (3) argue Pr{ |X| > ε } ≤ δ. Intuition: X is the privacy loss. Lemma: (3) implies (ε,δ)-differential privacy.
Let X_t be the privacy loss in round t. We’ll show: (1) X_t = 0 if t is not busy; (2) |X_t| ≤ ε_0 if t is busy; (3) the number of busy rounds is O(#updates). Total privacy loss: by [DRV’10], E[X_1 + ... + X_k] ≤ O(ε_0² · #updates); by Azuma, strong concentration around the expectation.
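Assembling the three claims, a sketch of the concentration step (constants suppressed):

```latex
% Decompose the privacy loss over rounds: X = \sum_{t=1}^{k} X_t, with
% X_t = 0 in lazy rounds and |X_t| \le \varepsilon_0 in busy rounds.
% If B denotes the number of busy rounds, then by [DRV'10]
\[
  \mathbb{E}[X] \;\le\; O(\varepsilon_0^2\, B),
\]
% and Azuma's inequality gives, with probability at least 1 - \delta,
\[
  |X| \;\le\; O(\varepsilon_0^2\, B) \;+\; O\bigl(\varepsilon_0 \sqrt{B \log(1/\delta)}\bigr).
\]
% Since B = O(\#\text{updates}) is small, the total loss stays below \varepsilon.
```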
Defining the “busy” event: relax the update condition so that any round whose noisy answer comes close to forcing an update counts as busy (exact conditions shown on slide).
Both settings are now checked off: offline (non-interactive, releasing S for Q) ✔ and online (interactive, answering q_1 with a_1, q_2 with a_2, ...) ✔.
What we can do:
– Offline/batch setting: every set of linear queries.
– Online/interactive setting: every sequence of adaptive and adversarial linear queries.
– Theoretical performance: nearly optimal in the worst case. For instance-by-instance guarantees see H-Talwar10 and Nikolov-Talwar (upcoming!), which use different techniques.
– Practical performance: compares favorably to previous work! See Katrina’s talk.
Are we done?
What we would like to do. Running time: linear dependence on |U|, and |U| is exponential in the number of attributes of the data. Can we get poly(n)?
– No in the worst case for synthetic data [DNRRV09], even for simple query classes [Ullman-Vadhan10].
– No in the interactive setting without restricting the query class [Ullman12].
What can we do about it?
Look beyond the worst case! Find meaningful assumptions on data, queries, models, etc. Design better heuristics! In this talk: get more mileage out of learning theory!
The privacy/learning analogy again:

Privacy                               | Learning
Sensitive database                    | Unknown concept
Queries labeled by answer on DB       | Examples labeled by concept
Synopsis approximates DB on query set | Hypothesis approximates target concept on examples

Can we turn this into an efficient reduction? Yes. [H-Rothblum-Servedio’12]
Informal Theorem: there is an efficient differentially private release mechanism for a query class Q provided that there is an efficient PAC-learning algorithm for a related concept class Q’. This interfaces nicely with existing learning algorithms:
– Learning based on polynomial threshold functions [Klivans-Servedio]
– The Harmonic Sieve [Jackson] and its extension [Jackson, Klivans, Servedio]
Database as a function: view the database as the function F with F(q) = q(D), mapping each query to its answer. Observation: it is enough to learn the threshold functions F_t (the indicators 1{F(q) ≥ t}) for t = α, 2α, ..., (1−α) in order to approximate F.
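Why the thresholds suffice, as a small sketch; F_hats is my name for the list of learned threshold indicators:

```python
def approximate_from_thresholds(F_hats, alpha, q):
    """Approximate F(q) from learned threshold indicators.

    F_hats[j] is a hypothesis for the indicator 1{F(q) >= (j+1)*alpha},
    for j = 0, ..., len(F_hats)-1. Counting how many thresholds q clears
    pins F(q) down to within roughly alpha, assuming each indicator is
    accurate on q.
    """
    cleared = sum(h(q) for h in F_hats)
    return cleared * alpha
```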
High-level idea: feed labeled examples to a learning algorithm and receive back a hypothesis h approximating the database function. Observation: if all the labels are privacy-preserving, then so is the hypothesis h.
Main hurdles:
– Privacy requires noise, and noise might defeat the learning algorithm.
– Can only generate |D| examples efficiently before running out of privacy.
The learning algorithm queries a Threshold Oracle (TO) with “F(x) > t?” and receives b ∈ {0, 1, fail}:
Compute a = F(x) + N (noise). If |a − t| is tiny: output “fail”. Else if a > t: output 1. Else if a < t: output 0.
This ensures: (1) privacy; (2) the noise is “removed”; (3) complexity independent of |D|.
Generate samples: (1) pick x_1, x_2, ..., x_m; (2) receive b_1, b_2, ..., b_m from the TO; (3) remove all “failed” examples; (4) pass the remaining labeled examples (y_1, l_1), ..., (y_r, l_r) on to the learner.
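A sketch of such an oracle in Python; the noise distribution and the “tiny” margin are stand-ins for the paper’s exact choices:

```python
import numpy as np

def threshold_oracle(F, x, t, noise_scale, margin, rng=None):
    """Noisy answer to 'F(x) > t?': returns 1, 0, or 'fail' near the boundary."""
    if rng is None:
        rng = np.random.default_rng()
    a = F(x) + rng.laplace(scale=noise_scale)   # a = F(x) + N
    if abs(a - t) < margin:
        return "fail"    # too close to call; don't let noise decide the label
    return 1 if a > t else 0

def generate_samples(F, xs, t, noise_scale, margin):
    """Label candidates with the oracle, then drop the 'failed' examples."""
    labeled = [(x, threshold_oracle(F, x, t, noise_scale, margin)) for x in xs]
    return [(x, b) for x, b in labeled if b != "fail"]
```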
Application: Boolean Conjunctions, an important class of queries in differential privacy [BCDKMT07, KRSU10, GHRU11, HMT12, ...]. Universe U = {0,1}^d, e.g.:

Salary > $50k | Syphilis | Height > 6’1 | Weight < 180 | Male
True          | False    | True         | False        | True
False         | True     | False        | True         | False
...
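What a conjunction query computes over this universe, as a quick sketch (function and variable names mine):

```python
import numpy as np

def conjunction_query(data, attrs):
    """Fraction of records where every attribute in `attrs` is True (1).

    data: n x d binary matrix, one record per row; attrs: column indices.
    """
    return float(np.mean(np.all(data[:, attrs] == 1, axis=1)))

# Example: 4 records over d = 5 binary attributes
data = np.array([[1, 0, 1, 0, 1],
                 [0, 1, 0, 1, 0],
                 [1, 1, 1, 0, 1],
                 [1, 0, 1, 1, 1]])
print(conjunction_query(data, attrs=[0, 2]))  # attr0 AND attr2: prints 0.75
```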
Informal Corollary (subexponential algorithm for conjunctions): there is a differentially private release algorithm with running time poly(|D|) such that, for any distribution over Boolean conjunctions, the algorithm is w.h.p. α-accurate provided that |D| is large enough (bound shown on slide). Previous: 2^{O(d)}.
Informal Corollary (small width): there is a differentially private release algorithm with running time poly(|D|) such that, for any distribution over width-k Boolean conjunctions, the algorithm is w.h.p. α-accurate provided that |D| is large enough (bound shown on slide). Previous: d^{O(k)}.
Follow-up work. Thaler-Ullman-Vadhan12: the distributional relaxation can be removed, giving exp(O(d^{1/2})) complexity for all Boolean conjunctions. Idea: use the polynomial encodings from the learning algorithm directly.
Summary:
– Derived a simple and powerful private data release algorithm from first principles.
– Privacy/learning analogy as a guiding principle; it can be turned into an efficient reduction.
– Can we use these ideas outside theory and in new settings?
Thank you
Open problems:
– Is PMW close to instance optimal?
– Is there a converse to the privacy-to-learning reduction?
– No barriers for cut/spectral analysis of graphs/matrices (universe small).
– Releasing k-way conjunctions in time poly(n) with error poly(d, k).