Presentation transcript:

Integrating Private Databases for Data Analysis
IEEE ISI 2005
Benjamin C. M. Fung, Simon Fraser University, BC, Canada
Ke Wang, Simon Fraser University, BC, Canada
Guozhu Dong, Wright State University, OH, USA

2 Outline
– Problem: Secure Data Integration
– Our solution: Top-Down Specialization for 2 Parties
– Related works
– Experimental results
– Conclusions

3 Data Mining and Privacy
– Government and business have strong motivations for data mining.
– Citizens have growing concerns about protecting their privacy.
– Can we satisfy both the data mining goal and the privacy goal?

4 Scenario
Suppose a bank A and a credit card company B observe different sets of attributes about the same set of individuals, identified by the common key SSN, e.g.,
  T_A(SSN, Age, Sex, Balance)
  T_B(SSN, Job, Salary)
These companies want to integrate their data to support better decision making, such as loan or card limit approval.

5 Scenario
After integrating the two tables (by matching the SSN field), the “female lawyer” becomes unique and is therefore vulnerable to being linked to sensitive information such as Salary.

6 Problem: Secure Data Integration
Given two private tables for the same set of records on different sets of attributes, we want to produce an integrated table on all attributes for release to both parties. The integrated table must satisfy two requirements:
– Classification requirement: the integrated data is as useful as possible for classification analysis.
– Privacy requirements: given a specified subset of attributes called a quasi-identifier (QID), each value of the quasi-identifier must identify at least k records [5]. Moreover, at no time during the integration / generalization may one party learn more detailed information about the other party than what appears in the final integrated table.

7 Example: k-anonymity
QID_1 = {Sex, Job}, k_1 = 4

Sex  Job         Salary  Class  # of Recs.
M    Janitor     30K     0Y3N   3
M    Mover       32K     0Y4N   4
M    Carpenter   35K     2Y3N   5
F    Technician  37K     3Y1N   4
F    Manager     42K     4Y2N   6
F    Manager     44K     3Y0N   3
M    Accountant  44K     3Y0N   3
F    Accountant  44K     3Y0N   3
M    Lawyer      44K     2Y0N   2
F    Lawyer      44K     1Y0N   1
                                Total: 34

a(qid_1) is the number of records sharing each qid_1 value; here the minimum a(qid_1) = 1.
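As a small illustration (a sketch written for this transcript, not code from the paper), the a(qid_1) counts above can be reproduced by grouping the records on the QID attributes and taking the minimum group size; any group smaller than k violates the anonymity requirement:

```python
# Computing a(qid) group sizes for QID_1 = {Sex, Job} over the example table
# and checking the k-anonymity threshold. The row encoding below is mine.
from collections import Counter

# (Sex, Job, Salary, record count) for each row of the slide's table.
rows = [
    ("M", "Janitor",    "30K", 3), ("M", "Mover",      "32K", 4),
    ("M", "Carpenter",  "35K", 5), ("F", "Technician", "37K", 4),
    ("F", "Manager",    "42K", 6), ("F", "Manager",    "44K", 3),
    ("M", "Accountant", "44K", 3), ("F", "Accountant", "44K", 3),
    ("M", "Lawyer",     "44K", 2), ("F", "Lawyer",     "44K", 1),
]

def a_qid(rows, qid_idx):
    """Count the records sharing each combination of QID attribute values."""
    counts = Counter()
    for row in rows:
        key = tuple(row[i] for i in qid_idx)
        counts[key] += row[-1]            # last field is the record count
    return counts

counts = a_qid(rows, qid_idx=(0, 1))      # QID_1 = {Sex, Job}
print(min(counts.values()))               # 1  -> violates k_1 = 4
print(counts[("F", "Manager")])           # 9
```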

8 Generalization
(Salary column omitted on the slide.)

Before:
Sex  Job         Class  a(qid_1)
M    Janitor     0Y3N   3
M    Mover       0Y4N   4
M    Carpenter   2Y3N   5
F    Technician  3Y1N   4
F    Manager     4Y2N   9
F    Manager     3Y0N   9
M    Accountant  3Y0N   3
F    Accountant  3Y0N   3
M    Lawyer      2Y0N   2
F    Lawyer      1Y0N   1

After generalizing Accountant and Lawyer to Professional:
Sex  Job           Class  a(qid_1)
M    Janitor       0Y3N   3
M    Mover         0Y4N   4
M    Carpenter     2Y3N   5
F    Technician    3Y1N   4
F    Manager       4Y2N   9
F    Manager       3Y0N   9
M    Professional  5Y0N   5
F    Professional  4Y0N   4

9 Intuition
1. The classification goal and the privacy goal have no conflicts:
   – Privacy goal: mask sensitive information, usually specific descriptions that identify individuals.
   – Classification goal: extract general structures that capture trends and patterns.
2. A table contains multiple classification structures. Generalization destroys some classification structures, but other structures emerge to help. If generalization is performed “carefully”, identifying information can be masked while still preserving the trends and patterns needed for classification.

10 Two simple but incorrect approaches
1. Generalize-then-integrate: first generalize each table locally and then integrate the generalized tables.
   – Does not work for a QID that spans the two tables.
2. Integrate-then-generalize: first integrate the two tables and then generalize the integrated table using a single-table method, such as Iyengar's genetic algorithm [10] or Fung et al.'s Top-Down Specialization [8].
   – Any party holding the integrated table would immediately know all private information of both parties, which violates our privacy requirement.

11 Algorithm: Top-Down Specialization (TDS) for a Single Party

  Initialize every value in T to the top-most value.
  Initialize Cut_i to include the top-most value.
  while there is some candidate in ∪Cut_i do
      Find the Winner specialization of the highest Score.
      Perform the Winner specialization on T.
      Update Cut_i and Score(x) in ∪Cut_i.
  end while
  return Generalized T and ∪Cut_i.

(Taxonomy tree for Salary shown on the slide: ANY [1-99) specializes into [1-37) and [37-99).)
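A minimal single-attribute rendering of this loop (a simplification written for this transcript, not the authors' implementation; the placeholder Score below simply counts records, whereas the paper's Score is defined on the next slides):

```python
from collections import Counter

# Salary taxonomy from the slide: ANY [1-99) specializes into [1-37) and [37-99).
parent = {"[1-37)": "[1-99)", "[37-99)": "[1-99)"}
children = {"[1-99)": ["[1-37)", "[37-99)"]}

# Toy records: (leaf salary interval, class label).
records = ([("[1-37)", "N")] * 9 + [("[1-37)", "Y")] * 3
           + [("[37-99)", "Y")] * 14 + [("[37-99)", "N")] * 8)

def generalized(leaf, cut):
    """Map a leaf value to its unique ancestor (possibly itself) in the cut."""
    value = leaf
    while value not in cut:
        value = parent[value]
    return value

def score(records, cut, v, k):
    """Placeholder Score: |R_v| if specializing v keeps every group >= k, else None."""
    R_v = [r for r in records if generalized(r[0], cut) == v]
    new_cut = (cut - {v}) | set(children[v])
    groups = Counter(generalized(r[0], new_cut) for r in R_v)
    return len(R_v) if all(n >= k for n in groups.values()) else None

cut, k = {"[1-99)"}, 4                    # initialize to the top-most value
while True:
    candidates = {v: score(records, cut, v, k) for v in cut if v in children}
    candidates = {v: s for v, s in candidates.items() if s is not None}
    if not candidates:                    # no candidate can be specialized further
        break
    winner = max(candidates, key=candidates.get)        # highest Score wins
    cut = (cut - {winner}) | set(children[winner])      # perform the specialization
print(cut)                                # {'[1-37)', '[37-99)'}
```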

12 Algorithm: Top-Down Specialization for 2 Parties (TDS2P)

  Initialize every value in T_A to the top-most value.
  Initialize Cut_i to include the top-most value.
  while there is some candidate in ∪Cut_i do
      Find the local candidate x of the highest Score(x).
      Communicate Score(x) with Party B to find the winner.
      if the winner w is local then
          Specialize w on T_A.
          Instruct Party B to specialize w.
      else
          Wait for the instruction from Party B.
          Specialize w on T_A using the instruction.
      end if
      Update the local copy of Cut_i.
      Update Score(x) in ∪Cut_i.
  end while
  return Generalized T_A and ∪Cut_i.
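For illustration only, here is a much-simplified sketch of the coordination pattern in TDS2P (an assumed structure, not the authors' protocol; it ignores the secure communication, the actual tables, and the refreshing of candidate scores after each specialization):

```python
class Party:
    """One data holder; `candidates` maps each local candidate value to its Score."""
    def __init__(self, name, candidates):
        self.name = name
        self.candidates = dict(candidates)

    def best_local(self):
        if not self.candidates:
            return None, float("-inf")
        value = max(self.candidates, key=self.candidates.get)
        return value, self.candidates[value]

    def specialize(self, value):
        # In the real protocol the owner refines its own table and sends only the
        # resulting partitioning of record IDs; here we just consume the candidate.
        print(f"{self.name} specializes {value}")
        self.candidates.pop(value, None)

    def follow(self, value):
        # The non-owner only updates its local copy of the cut / partitions.
        print(f"{self.name} updates its copy of the cut after {value}")

# Hypothetical candidates and scores: A owns {Sex, Balance}, B owns {Job, Salary}.
A = Party("Party A", {"ANY_Sex": 0.02, "ANY_Balance": 0.10})
B = Party("Party B", {"ANY_Job": 0.25, "ANY_Salary": 0.15})

while A.candidates or B.candidates:
    (va, sa), (vb, sb) = A.best_local(), B.best_local()
    winner, other, value = (A, B, va) if sa >= sb else (B, A, vb)
    winner.specialize(value)              # the winning party performs the specialization
    other.follow(value)                   # the other party follows the instruction
```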

13 Search Criteria: Score
Consider a specialization v → child(v). To heuristically maximize the information of the generalized data while achieving a given anonymity, we favor the specialization on v with the maximum information gain per unit of anonymity loss:

  Score(v) = InfoGain(v) / AnonyLoss(v)

14 Search Criteria: Score (continued)
– R_v denotes the set of records having value v before the specialization.
– R_c denotes the set of records having value c after the specialization, where c ∈ child(v). The information gain is

    InfoGain(v) = I(R_v) − Σ_{c ∈ child(v)} (|R_c| / |R_v|) × I(R_c)

– I(R_x) is the entropy of R_x:

    I(R_x) = − Σ_{cls} ( freq(R_x, cls) / |R_x| ) × log2( freq(R_x, cls) / |R_x| )

  where freq(R_x, cls) is the number of records in R_x having the class cls.
– Intuitively, I(R_x) measures the impurity of classes among the data records in R_x. A good specialization reduces the class impurity.
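A hedged Python sketch of this Score (the function names and the treatment of AnonyLoss below are assumptions; the slide only gives Score = InfoGain / AnonyLoss and the entropy-based InfoGain):

```python
import math
from collections import Counter

def entropy(records):
    """I(R_x): class impurity of a list of (value, class) records."""
    freq = Counter(cls for _, cls in records)
    total = len(records)
    return -sum((n / total) * math.log2(n / total) for n in freq.values())

def info_gain(R_v, children):
    """InfoGain(v) = I(R_v) - sum_c |R_c|/|R_v| * I(R_c)."""
    return entropy(R_v) - sum(len(R_c) / len(R_v) * entropy(R_c)
                              for R_c in children.values())

def score(R_v, children, anonymity_before, anonymity_after):
    """Score(v) = InfoGain(v) / AnonyLoss(v). AnonyLoss is modelled here as the
    drop in the minimum a(qid) caused by the specialization (an assumption)."""
    anony_loss = anonymity_before - anonymity_after
    return info_gain(R_v, children) / max(anony_loss, 1)   # guard against zero loss

# Toy usage: specializing ANY_Job into Blue-collar / White-collar.
R_v = ([("Blue", "N")] * 10 + [("Blue", "Y")] * 2
       + [("White", "Y")] * 15 + [("White", "N")] * 7)
children = {"Blue":  [r for r in R_v if r[0] == "Blue"],
            "White": [r for r in R_v if r[0] == "White"]}
print(round(score(R_v, children, anonymity_before=34, anonymity_after=12), 4))
```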

15 Performing the Winner Specialization
To perform the Winner specialization w → child(w), we need to retrieve R_w, the set of data records containing the value w. Taxonomy Indexed PartitionS (TIPS) is a tree structure in which each node represents a generalized record over ∪QID_j, and each child node represents a specialization of its parent node on exactly one attribute. Stored with each leaf node is the set of data records having the same generalized record.

16 TIPS example (QID_1 = {Sex, Job}, QID_2 = {Job, Salary})
The root node (ANY_Sex, ANY_Job, [1-99)) covers all 34 records. Specializing Salary (ANY [1-99) → [1-37), [37-99)) splits it into (ANY_Sex, ANY_Job, [1-37)) with 12 records (IDs 1-12) and (ANY_Sex, ANY_Job, [37-99)) with 22 records (IDs 13-34). A further specialization of Job produces the leaves (ANY_Sex, Blue-collar, [1-37)) with 12 records, (ANY_Sex, Blue-collar, [37-99)) with 4 records, and (ANY_Sex, White-collar, [37-99)) with 18 records. Link[37-99) and Link_ANY_Sex chain together the leaf partitions containing those values.
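A small sketch of how a TIPS node might be represented (a rendering of the structure described on slide 15, not the authors' data structure):

```python
class TIPSNode:
    """One node of a TIPS-like tree: a generalized record over the union of QIDs,
    children keyed by the specialized value, and raw record IDs at the leaves."""
    def __init__(self, generalized_record, record_ids=None):
        self.generalized_record = generalized_record    # e.g. {"Sex": "ANY_Sex", ...}
        self.children = {}                              # specialized value -> TIPSNode
        self.record_ids = set(record_ids or [])         # non-empty only at leaf nodes

    def specialize(self, attribute, partition):
        """Split this leaf on one attribute; `partition` maps child value -> record IDs."""
        for value, ids in partition.items():
            record = dict(self.generalized_record, **{attribute: value})
            self.children[value] = TIPSNode(record, ids)
        self.record_ids.clear()                         # records now live at the children

# The example's root: every attribute at its top-most value, records 1-34.
root = TIPSNode({"Sex": "ANY_Sex", "Job": "ANY_Job", "Salary": "[1-99)"}, range(1, 35))
root.specialize("Salary", {"[1-37)": range(1, 13), "[37-99)": range(13, 35)})
print(len(root.children["[37-99)"].record_ids))         # 22
```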

17 Practical Features of TDS2P
– Handling multiple QIDs: treating all QIDs as a single QID leads to over-generalization, and QIDs may span the two parties.
– Handling both categorical and continuous attributes: taxonomy trees for continuous attributes are generated dynamically (a sketch follows below).
– Anytime solution: the data holders can determine a desired trade-off between privacy and accuracy, and can stop at any time and obtain a generalized table satisfying the anonymity requirement. A bottom-up approach does not support this feature.
– Scalable computation.
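As a hedged sketch of the "dynamically generated taxonomy" bullet above (an interpretation, not the authors' code), a split point for a continuous attribute can be chosen by maximizing information gain on the class label:

```python
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum(n / total * math.log2(n / total) for n in Counter(labels).values())

def best_split(values, labels):
    """Return the binary split point on `values` with the highest information gain."""
    pairs = sorted(zip(values, labels))
    base, n = entropy(labels), len(pairs)
    best = (None, -1.0)
    for i in range(1, n):
        if pairs[i][0] == pairs[i - 1][0]:
            continue                                    # no split between equal values
        left, right = [c for _, c in pairs[:i]], [c for _, c in pairs[i:]]
        gain = base - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)
        if gain > best[1]:
            best = ((pairs[i - 1][0] + pairs[i][0]) / 2, gain)
    return best

salaries = [30, 32, 35, 37, 42, 44, 44, 44, 44, 44]
classes  = ["N", "N", "N", "Y", "Y", "Y", "Y", "Y", "Y", "N"]
print(best_split(salaries, classes))                    # splits near 36 with high gain
```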

18 Related Works
– Secure Multiparty Computation (SMC) allows sharing of the computed result (e.g., a classifier) but completely prohibits sharing of the data [3].
– Liang and Chawathe [4] and Agrawal et al. [2] considered computing the intersection, intersection size, equijoin, and equijoin size of private databases.
– The concept of anonymity was proposed by Dalenius [5]. Sweeney achieves k-anonymity by generalization [6, 7].
– Fung et al. [8], Wang et al. [9], and Iyengar [10] consider anonymity for classification on a single data source.

19 Experimental Evaluation
– Data quality and efficiency
  – A broad range of anonymity requirements.
  – Classification accuracy measured with the C4.5 classifier [11].
– Adult data set
  – Census data, also used by Iyengar [10].
  – 6 continuous attributes, 8 categorical attributes, two classes.
  – 30,162 records for training, 15,060 records for testing.

20 Data Quality
We include the TopN most important attributes in a single QID (SingleQID), which is more restrictive than breaking them into multiple QIDs.

21 Efficiency and Scalability
All of the previous experiments took at most 20 seconds. For the scalability test, we replicate the Adult data set and substitute some random data.

22 Conclusions
– We studied secure data integration of multiple databases for the purpose of a joint classification analysis.
– We formalized this problem as achieving k-anonymity on the integrated data without revealing more detailed information during the process.
– Quality classification and privacy preservation can coexist.
– The approach allows data sharing instead of only result sharing.
– It has great applicability to both public and private sectors that share information for mutual benefit.

23 References
1. The House of Commons in Canada: The Personal Information Protection and Electronic Documents Act (2000)
2. Agrawal, R., Evfimievski, A., Srikant, R.: Information sharing across private databases. In: Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, San Diego, California (2003)
3. Yao, A.C.: Protocols for secure computations. In: Proceedings of the 23rd Annual IEEE Symposium on Foundations of Computer Science (1982)
4. Liang, G., Chawathe, S.S.: Privacy-preserving inter-database operations. In: Proceedings of the 2nd Symposium on Intelligence and Security Informatics (2004)
5. Dalenius, T.: Finding a needle in a haystack - or identifying anonymous census records. Journal of Official Statistics 2 (1986)
6. Sweeney, L.: Achieving k-anonymity privacy protection using generalization and suppression. International Journal on Uncertainty, Fuzziness, and Knowledge-based Systems 10 (2002)
7. Hundepool, A., Willenborg, L.: μ- and τ-ARGUS: software for statistical disclosure control. In: Third International Seminar on Statistical Confidentiality, Bled (1996)
8. Fung, B.C.M., Wang, K., Yu, P.S.: Top-down specialization for information and privacy preservation. In: Proceedings of the 21st IEEE International Conference on Data Engineering, Tokyo, Japan (2005)

24 References (continued)
9. Wang, K., Yu, P., Chakraborty, S.: Bottom-up generalization: a data mining solution to privacy protection. In: Proceedings of the 4th IEEE International Conference on Data Mining (2004)
10. Iyengar, V.S.: Transforming data to satisfy privacy constraints. In: Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Edmonton, AB, Canada (2002)
11. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann (1993)