1
Differential Privacy on Linked Data: Theory and Implementation
Yotam Aron
2
Table of Contents
Introduction
Differential Privacy for Linked Data
SPIM Implementation
Evaluation
3
Contributions Theory: how to apply differential privacy to linked data. Implementation: privacy module for SPARQL queries. Experimental evaluation: differential privacy on linked data.
4
Introduction
5
Overview: Privacy Risk
Statistical data can leak privacy. Mosaic Theory: different data sources become harmful when combined. Examples: the Netflix Prize data set, the GIC medical data set, the AOL query logs. Linked data adds ontologies and meta-data, making it even more vulnerable.
6
Current Solutions
Accountability: Privacy Ontologies, Privacy Policies and Laws.
Problems: Requires agreement among parties. Does not actually prevent breaches, just acts as a deterrent.
7
Current Solutions (Cont’d)
Anonymization
Delete "private" data
K-anonymity (strong privacy guarantee)
Problems:
Deletion provides no strong guarantees
Must be carried out for every data set
What data should be anonymized?
High computational cost (optimal k-anonymity is NP-hard)
8
Differential Privacy
Definition for relational databases (from the PINQ paper): A randomized function K gives ε-differential privacy if for all data sets D₁ and D₂ differing on at most one record, and all S ⊆ Range(K),
Pr[K(D₁) ∈ S] ≤ exp(ε) × Pr[K(D₂) ∈ S]
9
Differential Privacy What does this mean?
Adversaries get roughly the same results from D₁ and D₂, meaning a single individual's data will not greatly affect the knowledge they acquire from each data set.
10
How Is This Achieved? Add noise to the result. Simplest: add Laplace noise.
11
Laplace Noise Parameters
Mean = 0 (so no bias is added). Scale = ΔQ/ε, where the sensitivity ΔQ is defined, over records j, as max_j |F(D) − F(D−j)|.
Theorem: For a query Q with result R, the output R + Laplace(0, ΔQ/ε) is ε-differentially private.
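To make the mechanism concrete, here is a minimal Python sketch assuming the query result, the sensitivity ΔQ, and the client's ε are already known; the function name and the use of numpy are illustrative, not part of the thesis code.

```python
import numpy as np

def laplace_noisy_result(true_result, sensitivity, epsilon):
    """Release a numeric query result with epsilon-differential privacy by
    adding Laplace(0, sensitivity / epsilon) noise, per the theorem above."""
    scale = sensitivity / epsilon
    return true_result + np.random.laplace(loc=0.0, scale=scale)

# Example: a COUNT query returned 42, its sensitivity is 2, and the client
# spends epsilon = 0.5 on this query.
print(laplace_noisy_result(42, sensitivity=2, epsilon=0.5))
```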
12
Other Benefit of Laplace Noise
A set of queries, each run with privacy parameter ε_i, has an overall privacy cost of Σ ε_i (sequential composition). Implementation-wise, one can allocate a "budget" ε for a client; for each query the client specifies an ε_i to spend, and queries are answered until the budget ε is used up.
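A sketch of the budget bookkeeping this implies under sequential composition (the ε_i of answered queries simply add up); the class and its in-memory storage are hypothetical, not the SPIM API.

```python
class EpsilonBudget:
    """Track a client's remaining privacy budget; under sequential
    composition the epsilons of all answered queries add up."""

    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def spend(self, epsilon_i):
        """Charge one query's epsilon, refusing it if the budget is exhausted."""
        if epsilon_i <= 0:
            raise ValueError("epsilon must be positive")
        if epsilon_i > self.remaining:
            raise ValueError("query would exceed the remaining privacy budget")
        self.remaining -= epsilon_i
        return self.remaining

budget = EpsilonBudget(total_epsilon=1.0)
budget.spend(0.3)  # first query
budget.spend(0.5)  # second query; 0.2 remains, so a further 0.5 query is refused
```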
13
Benefits of Differential Privacy
Strong privacy guarantee. Mechanism-based, so the data itself does not have to be modified. Independent of the data set's structure. Works well with statistical analysis algorithms.
14
Problems with Differential Privacy
Potentially poor performance: complexity, noise. Only works with statistical data (though there are fixes). How to calculate the sensitivity of an arbitrary query without brute force?
15
Theory: Differential Privacy for Linked Data
16
Differential Privacy and Linked Data
Want the same privacy guarantees for linked data, but there are no "records." What should the "unit of difference" be? One triple? All URIs related to a person's URI? All links going out from a person's URI?
20
“Records” for Linked Data
Reduce links in the graph to attributes. Idea: identify the contribution from each individual to the total answer, and find the contribution that affects the answer most.
21
“Records” for Linked Data
Reduce links in the graph to attributes, turning them into a record.
[Diagram: the edge P1 –Knows→ P2 becomes the table row (Person: P1, Knows: P2).]
22
“Records” for Linked Data
Repeated attributes and null values are allowed.
[Diagram: an example graph with Knows and Loves edges among P1, P2, P3, P4.]
23
“Records” for Linked Data
Repeated attributes and null values are allowed (not good RDBMS form, but it makes the definitions easier).
[Table with columns Person, Knows, Loves: Person P1 with Knows = P2, P4 and Loves = Null, P3.]
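A small sketch of this reduction: collect every outgoing link of a person's URI into one attribute/value "record", keeping repeated predicates and leaving absent ones as nulls. The tuple representation of triples and the attribute names are assumptions for illustration.

```python
def triples_to_record(person_uri, triples, attributes=("knows", "loves")):
    """Collapse the outgoing links of person_uri into a single record:
    a map from attribute to the (possibly repeated) objects, with None
    standing in for attributes the person never uses."""
    record = {attr: [] for attr in attributes}
    for s, p, o in triples:
        if s == person_uri and p in record:
            record[p].append(o)
    # Represent missing attributes as null rather than an empty list.
    return {attr: objs if objs else None for attr, objs in record.items()}

triples = [("P1", "knows", "P2"), ("P1", "knows", "P4"), ("P1", "loves", "P3")]
print(triples_to_record("P1", triples))
# {'knows': ['P2', 'P4'], 'loves': ['P3']}
```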
24
Query Sensitivity in Practice
Need to find the triples that "belong" to a person. Idea: identify each individual's contribution to the total answer and find the contribution that affects the answer the most. Done using sorting and limiting functions in SPARQL.
25
Example
COUNT of places visited.
[Diagram: persons P1, P2 linked to states S1, S2, MA via Visited and State of Residence edges.]
27
Example
COUNT of places visited.
[Diagram: the same graph with Visited edges to S1, S2, S3 and a State of Residence edge to MA.]
Answer: sensitivity of 2, because the largest number of visited places belonging to any one person is 2.
28
Using SPARQL
Query:
SELECT (COUNT(?s) AS ?num_places_visited)
WHERE { ?p :visited ?s }
29
Using SPARQL Sensitivity Calculation Query (Ideally):
SELECT ?p (COUNT(?s) AS ?num_places_visited)
WHERE { ?p :visited ?s . ?p foaf:name ?n }
GROUP BY ?p
ORDER BY DESC(?num_places_visited)
LIMIT 1
30
In reality… LIMIT, ORDER BY, and GROUP BY do not work together in 4store…
For now: don't use LIMIT and get the top answers manually, i.e. simulate these keywords in Python (see the sketch below). This will affect results, so better testing should be carried out in the future. Ideally this would stay on the SPARQL side so less data is transmitted (e.g. on large data sets).
31
(Side rant) 4store limitations
Many operations are not supported in unison; e.g. one cannot always FILTER and use ORDER BY together. This severely limits the types of queries I could use to test. It may be desirable to work with a different, more up-to-date triplestore (e.g. Jena ARQ). I didn't, because I wanted to keep the code in Python and had already written all the code for 4store.
32
Problems with this Approach
Need to identify "people" in the graph. Assume, for example, that a URI with a foaf:name is a person and use its triples in the privacy calculations. This imposes some constraints on the linked-data format. For future work, investigate whether private data can be identified automatically, maybe by using ontologies. Complexity is tied to the speed of performing the query over a large data set. Still not generalizable to all functions.
33
…and on the Plus Side The model for sensitivity calculation can be expanded to arbitrary statistical functions, e.g. dot products, distance functions, variance, etc. Relatively simple to implement using SPARQL 1.1.
34
Implementation: Design of Privacy System
35
SPARQL Privacy Insurance Module
i.e. SPIM. Uses authentication, AIR, and differential privacy in one system: authentication to manage ε-budgets, AIR to control the flow of information and non-statistical data, and differential privacy for statistics. Goal: provide a module that can integrate into SPARQL 1.1 endpoints and provide privacy.
36
Design
[Architecture diagram: HTTP Server, OpenID Authentication, AIR Reasoner, Differential Privacy Module, SPIM Main Process, Triplestore, User Data, and Privacy Policies.]
37
HTTP Server and Authentication
HTTP Server: a Django server that handles HTTP requests. OpenID Authentication: a Django module.
38
SPIM Main Process Controls flow of information.
It first checks the user's budget, then uses AIR, then performs the final differentially private query (sketched below).
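A schematic of that control flow; the collaborators (the budget object, the AIR reasoner wrapper, and the query runner) are injected as placeholders, so their interfaces here are assumptions rather than the actual SPIM API.

```python
def handle_request(user, query, epsilon, budget, air_reasoner, run_private_query):
    """Illustrative SPIM-style request handling: check the epsilon budget,
    then the AIR policy, then run the differentially private query."""
    # 1. Refuse the query if it would exhaust the user's epsilon budget.
    if epsilon <= 0 or epsilon > budget.remaining:
        return {"error": "insufficient privacy budget"}

    # 2. Let the AIR reasoner check the query against the privacy policies.
    if not air_reasoner.is_compliant(user, query):
        return {"error": "query violates privacy policy"}

    # 3. Run the differentially private query and charge the budget.
    noisy_result = run_private_query(query, epsilon)
    budget.spend(epsilon)
    return {"result": noisy_result}
```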
39
AIR Reasoner Performs access control by translating SPARQL queries to N3 and checking them against policies. Can potentially perform more complicated operations (e.g. checking user credentials).
40
Differential Privacy Protocol
Scenario: the client wishes to make a standard SPARQL 1.1 statistical query. The client has an ε "budget" of overall accuracy for all queries. (Participants: Client, Differential Privacy Module, SPARQL Endpoint.)
41
Differential Privacy Protocol
Step 1: The query and an epsilon value (ε > 0) are sent to the endpoint and intercepted by the enforcement module.
42
Differential Privacy Protocol
Step 2: The sensitivity of the query is calculated using a rewritten, related query.
43
Differential Privacy Protocol
Step 3: The actual query is sent.
44
Differential Privacy Protocol
Step 4: The result with Laplace noise added is sent back to the client.
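The four steps combined into one sketch; query_endpoint stands for whatever function sends a SPARQL string to the endpoint and returns a numeric result, and rewrite_for_sensitivity stands for the query rewriting of step 2 — both are placeholders, not names from the thesis.

```python
import numpy as np

def answer_privately(query, epsilon, query_endpoint, rewrite_for_sensitivity):
    """Steps 1-4 of the protocol: receive (query, epsilon), estimate the
    query's sensitivity with a rewritten query, run the real query, and
    return the result with Laplace noise added."""
    sensitivity = query_endpoint(rewrite_for_sensitivity(query))   # step 2
    true_result = query_endpoint(query)                            # step 3
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_result + noise                                     # step 4
```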
45
Experimental Evaluation
46
Evaluation Three things to evaluate:
Correctness of operation, correctness of differential privacy, and runtime. Used an anonymized clinical database as the test data, with fake names, social security numbers, and addresses added.
47
Correctness of Operation
Can the system do what we want? Authentication provides access control. AIR restricts information and the types of queries. Differential privacy gives strong privacy guarantees. Can we do better?
48
Use Case Used in Thesis Clinical database data protection
HIPAA: federal protection of private information fields, such as name and social security number, for patients.
Three users:
Alice: works at the CDC, needs unhindered access.
Bob: researcher who needs access to private fields (e.g. addresses).
Charlie: amateur researcher to whom HIPAA should apply.
Assumptions: Django is secure enough to handle "clever attacks"; users do not collude, so individual epsilon values can be allocated.
49
Use Case Solution Overview
What should happen: dynamically apply different AIR policies at runtime and give different epsilon budgets.
How allocated:
Alice: no AIR policy, no noise.
Bob: access to addresses, but all other private information fields hidden. Epsilon budget: E1.
Charlie: all private information fields hidden in accordance with HIPAA. Epsilon budget: E2.
51
Example: A Clinical Database
The client accesses the triplestore via the HTTP server. OpenID Authentication verifies that the user has access to the data and finds the epsilon value.
52
Example: A Clinical Database
The AIR reasoner checks incoming queries for HIPAA violations. The privacy policies contain the HIPAA rules.
53
Example: A Clinical Database
Differential privacy is applied to statistical queries. The statistical result plus noise is returned to the client.
54
Correctness of Differential Privacy
Need to test how much noise is added. Too much noise = poor results. Too little noise = no guarantee. Test: Run queries and look at sensitivity calculated vs. actual sensitivity.
55
How to test sensitivity?
Ideally: test that the noise calculation is correct, and test that the data remains useful with the noise (e.g. by applying machine learning algorithms). For this project, only the former was tested: machine learning APIs are not as prevalent for linked data, and what results would we compare to?
56
Test suite 10 queries for each operation (COUNT, SUM, AVG, MIN, MAX)
10 different WHERE clauses. Test: compare the sensitivity calculated from the original query against a brute-force check that removes each personal URI using the MINUS keyword and sees which removal changes the answer most.
57
Example for Sens Test Query:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX mimic: <
SELECT (SUM(?o) AS ?aggr)
WHERE {
  ?s foaf:name ?n .
  ?s mimic:event ?e .
  ?e mimic:m1 "Insulin" .
  ?e mimic:v1 ?o .
  FILTER(isNumeric(?o))
}
58
Example for Sens Test Sensitivity query:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX mimic: <
SELECT (SUM(?o) AS ?aggr)
WHERE {
  ?s foaf:name ?n .
  ?s mimic:event ?e .
  ?e mimic:m1 "Insulin" .
  ?e mimic:v1 ?o .
  FILTER(isNumeric(?o))
  MINUS { ?s foaf:name "%s" }
} % (name)
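A sketch of how the test suite's brute-force check can be driven from Python, using the query above as a template; run_query stands for any helper that sends a SPARQL string to the 4store endpoint and returns the numeric aggregate, and is an assumption rather than code from the thesis.

```python
def empirical_sensitivity(base_query, minus_query_template, names, run_query):
    """Brute-force sensitivity: run the original aggregate query, re-run the
    MINUS-rewritten query once per person (the template carries the '%s'
    placeholder for the foaf:name, as above), and return the largest change."""
    original = run_query(base_query)
    deviations = [abs(original - run_query(minus_query_template % name))
                  for name in names]
    return max(deviations) if deviations else 0.0
```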
59
Results Query 6 - Error
60
Runtime
Queries were also tested for runtime. Bigger WHERE clauses and more keywords mean extra overhead for doing the calculations.
61
Results Query 6 - Runtime
62
Interpretation Sensitivity calculation time on-par with query time
Might not be good for big data; find ways to reduce the sensitivity calculation time? AVG does not do so well: the approximation yields too much noise compared with trying all possibilities, and it runs ~4x slower than simple querying. Solution 1: look at all the data manually (large data transfer). Solution 2: can we use NOISY_SUM / NOISY_COUNT instead?
63
Conclusion
64
Contributions Theory on how to apply differential privacy to linked data. An overall privacy module for SPARQL queries: limited, but a good start. Experimental implementation of differential privacy and verification that it is applied correctly. Other: updated the SPARQL-to-N3 translation to SPARQL 1.1; expanded upon the IARPA project to create policies against statistical queries.
65
Shortcomings and Future Work
Triplestores need some structure for this to work: personal information must be explicitly defined in triples. Is there a way to automatically detect which triples constitute private information?
Complexity: lots of noise for sparse data. Could divide the data into disjoint sets to reduce noise, as PINQ does, or use localized sensitivity measures?
Third-party software problems: would this work better with a different triplestore implementation?
66
Diff. Privacy and an Open Web
How applicable is this to an open web? High sample numbers, but potentially high data variance. The sensitivity calculation might take too long and would need approximation. Disjoint subsets of the web could be used to increase the number of queries possible within ε budgets.
67
Demo air.csail.mit.edu:8800/spim_module/
68
References Differential Privacy Implementations:
“Privacy Integrated Queries (PINQ)” by Frank McSherry.
“Airavat: Security and Privacy for MapReduce” by Roy, Indrajit; Setty, Srinath T. V.; Kilzer, Ann; Shmatikov, Vitaly; and Witchel, Emmett.
“Towards Statistical Queries over Distributed Private User Data” by Chen, Ruichuan; Reznichenko, Alexey; Francis, Paul; and Gehrke, Johannes.
69
References Theoretical Work
“Differential Privacy” by Cynthia Dwork.
“Mechanism Design via Differential Privacy” by McSherry, Frank; and Talwar, Kunal.
“Calibrating Noise to Sensitivity in Private Data Analysis” by Dwork, Cynthia; McSherry, Frank; Nissim, Kobbi; and Smith, Adam.
“Differential Privacy for Clinical Trial Data: Preliminary Evaluations” by Vu, Duy; and Slavković, Aleksandra.
70
References Other
“Privacy Concerns of FOAF-Based Linked Data” by Nasirifard, Peyman; Hausenblas, Michael; and Decker, Stefan.
“The Mosaic Theory, National Security, and the Freedom of Information Act” by David E. Pozen.
“A Privacy Preference Ontology (PPO) for Linked Data” by Sacco, Owen; and Passant, Alexandre.
“k-Anonymity: A Model for Protecting Privacy” by Latanya Sweeney.
71
References Other
“Approximation Algorithms for k-Anonymity” by Aggarwal, Gagan; Feder, Tomas; Kenthapadi, Krishnaram; Motwani, Rajeev; Panigrahy, Rina; Thomas, Dilys; and Zhu, An.
72
Appendix: Results Q1–Q10
[Tables of Error, Query_Time, and Sens_Calc_Time for the COUNT, SUM, AVG, MAX, and MIN operations on each of queries Q1–Q10.]