
1 Towards Publishing Recommendation Data With Predictive Anonymization
Chih-Cheng Chang†, Brian Thompson†, Hui Wang‡, Danfeng Yao†
ACM Symposium on Information, Computer, and Communications Security (ASIACCS 2010), April 13, 2010

2 Outline
–Introduction
–Privacy in recommender systems
–Predictive Anonymization
–Experimental results
–Conclusions and future work

3 Motivation
Inevitable trend towards data sharing:
–Medical records
–Social networks
–Web search data
–Online shopping, ads
Databases contain sensitive information.
Growing need to protect privacy.

4 Privacy in Relational Databases

Name           Age  Gender  Zip Code  Disease
Joe Smith      52   Male    08901     Cancer
John Doe       24   Male    08904     ---------
Mary Johnson   45   Female  08854     Asthma
Janie McJonno  59   Female  08904     Cancer
Johnny Walker  76   Male    08854     Diabetes

(Name, Age, Gender, and Zip Code are identifiers; Disease is sensitive information.)

5 Privacy in Relational Databases

Name         Age  Gender  Zip Code  Disease
Person 0001  52   Male    08901     Cancer
Person 0002  24   Male    08904     ---------
Person 0003  45   Female  08854     Asthma
Person 0004  59   Female  08904     Cancer
Person 0005  76   Male    08854     Diabetes

Age, Gender, and Zip Code act as "pseudo-identifiers": 87% of the U.S. population can be uniquely identified by DOB, gender, and zip code! [S00]

6 Approaches to Achieving Privacy
1. Statistical databases
Only aggregate queries: What is the average salary?
Differential Privacy [Dinur-Nissim '03, Dwork '06]: adaptively add random noise to the output so the querier cannot determine whether a given user is in the database.
Quality decreases over multiple queries.
2. Publishing of anonymized databases
No restriction on how the data is utilized; good for complex data mining applications.
How to address privacy concerns?
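The "add random noise to the output" idea can be sketched with the standard Laplace mechanism. This is a generic illustration, not the construction from any of the cited papers; the function name and the salary example are made up for the sketch:

```python
import math
import random

def noisy_average(values, epsilon, lo, hi):
    """Answer an average query with epsilon-differential privacy
    by adding Laplace noise scaled to the query's sensitivity.
    For an average over n values bounded in [lo, hi], changing one
    user's record shifts the result by at most (hi - lo) / n."""
    n = len(values)
    true_avg = sum(values) / n
    scale = (hi - lo) / (n * epsilon)
    # Inverse-CDF sampling from the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_avg + noise

# Example aggregate query: "What is the average salary?"
salaries = [52000, 48000, 61000, 55000]
print(noisy_average(salaries, epsilon=0.5, lo=0, hi=100000))
```

Answering many such queries forces either a weaker privacy parameter or more noise per query, which is the quality degradation the slide refers to.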

7 Anonymization of Databases
Techniques: Perturbation

Name          Age (original → perturbed)
Joe Smith     52 → 53
John Doe      24 → 26
Mary Johnson  45 → 42

8 Anonymization of Databases
Techniques: Perturbation, Swapping

Name          Age (after swapping)
Joe Smith     45
John Doe      24
Mary Johnson  52

9 Anonymization of Databases
Techniques: Perturbation, Swapping, Generalization

Def. A database entry is k-anonymous if ≥ k-1 other entries match it identically on the insensitive attributes. [SS98]

Name          Age (original → generalized)
Joe Smith     52 → 50s
John Doe      24 → 20s
Mary Johnson  45 → 40s

10 The Generalization Approach
Ages are generalized to ranges (<50, >50) and zip codes to prefixes (089**, 088**):

Name         Age  Gender  Zip Code  Disease
Person 0001  32   Male    08901     Cancer
Person 0002  24   Male    08904     ---------
Person 0003  59   Female  08854     Asthma
Person 0004  45   Male    08904     Diabetes
Person 0005  76   Female  08854     Cancer
Person 0006  61   Female            AIDS
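The k-anonymity definition above can be checked directly by counting how often each combination of quasi-identifier values occurs. A minimal sketch; the helper name and the sample rows are illustrative, not from the slides:

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True iff every combination of quasi-identifier values occurs in
    at least k rows, i.e. each entry matches >= k-1 other entries
    identically on the insensitive attributes."""
    counts = Counter(tuple(row[a] for a in quasi_identifiers) for row in rows)
    return all(c >= k for c in counts.values())

# Generalized records in the style of the slides (ages bucketed to decades,
# zip codes truncated to prefixes).
rows = [
    {"Age": "50s", "Gender": "Male", "Zip": "089**"},
    {"Age": "50s", "Gender": "Male", "Zip": "089**"},
    {"Age": "20s", "Gender": "Female", "Zip": "088**"},
]
print(is_k_anonymous(rows, ["Age", "Gender", "Zip"], k=2))  # → False
```

The third row is unique on its quasi-identifiers, so the table fails 2-anonymity; dropping it (or generalizing further) would make the check pass.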

11 Outline
–Introduction
–Privacy in recommender systems
–Predictive Anonymization
–Experimental results
–Conclusions and future work

12 Recommender Systems
Users register for the service.
After buying a good, they submit a rating for it.
They get recommendations based on their own and others' ratings.

13 Recommender Systems
The Netflix Challenge: "Anonymized" Netflix data is released to the public, with a $1 million prize for the best movie prediction algorithm. Real names (Joe Smith, John Doe, Mary Johnson, Janie McDonno) are replaced with pseudonyms (User 0001–0004).

NETFLIX: Alien, Batman, Closer, Dogma, Evita, X-rated, Gladiator
(Sparse ratings matrix: User 0001 rated two movies 4 and 2; User 0002 rated three movies 3, 3, 5; User 0003 rated three movies 2, 5, 3; User 0004 rated two movies 3 and 2; "?" marks cells to be predicted.)

Question: Is privacy really protected?

14 Privacy in Recommender Systems
(The same sparse pseudonymized ratings matrix for Users 0001–0004 over Alien, Batman, Closer, Dogma, Evita, X-rated, Gladiator.)

Narayanan and Shmatikov [NS08] exploited external information to re-identify users in the released Netflix Challenge dataset. Privacy breach!

15 News Timeline
Oct. 2006: Netflix Challenge announced
May 2008: N&S publish attack
Aug. 2009: Plans announced for Challenge 2
Dec. 2009: Netflix users file lawsuits
Mar. 2010: NC2 plans canceled due to privacy concerns (and FTC investigation)

How can we enable sharing of recommendation data without compromising users' privacy?

16 Challenges in Anonymization of Recommender Systems
All data may be considered "sensitive" by users.
All data could be used as quasi-identifiers.
Data sparsity helps re-identification attacks and makes anonymization difficult. [NS08]
Scalability: the Netflix matrix has 8.5 billion cells!

17 Attack Models
We represent the recommendation database as a labeled bipartite graph: users (0001–0004) on one side, movies (Star Wars, Godfather, English Patient, Pretty in Pink) on the other, with edges labeled by ratings (e.g., 3, 1, 5, 4).

Structure-based attack: the adversary knows which movies a user rated (Ben is known to have rated Godfather and English Patient).
Label-based attack: the adversary also knows the ratings (Tim is known to have rated Star Wars 5 and English Patient 1).

18 Privacy Models
Node re-identification privacy: it should not be possible to re-identify individuals.
Link existence privacy: it should not be possible to infer whether a user has seen a particular movie.
Our approach, Predictive Anonymization, provides these notions of privacy against both the structure-based and label-based attacks.

19 Outline
–Introduction
–Privacy in recommender systems
–Predictive Anonymization
–Experimental results
–Conclusions and future work

20 Predictive Anonymization
Our solution takes a 3-step approach:
1. Use predictive padding to reduce sparsity.
2. Cluster users into groups of size k.
3. Perform homogenization by assigning users in each group the same ratings.
Achieves k-anonymity!

21 Predictive Anonymization
We want to cluster users, but there is not enough information due to data sparsity.
Solution: fill empty cells with predicted values.
Then cluster users based on similar tastes, not necessarily similar lists of movies rated.
(Example matrix: the sparse ratings of Users 0001–0004 over Alien, Batman, Closer, Dogma, Evita, X-rated, Gladiator, padded with predicted ratings in every empty cell.)
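The padding step can be sketched with a truncated SVD, which is also what the experiments later use for prediction. This is a minimal dense version for small matrices; the function name, the rank choice, and the column-mean warm start are assumptions of the sketch, not details from the paper:

```python
import numpy as np

def svd_pad(ratings, rank):
    """Predictive padding: fill the missing cells (NaN) of a user-by-movie
    ratings matrix with a rank-`rank` SVD reconstruction.
    Observed ratings are left untouched."""
    filled = ratings.copy()
    # Warm start: impute each movie's column mean where ratings are missing.
    col_means = np.nanmean(ratings, axis=0)
    missing = np.where(np.isnan(filled))
    filled[missing] = np.take(col_means, missing[1])
    # A low-rank approximation captures broad taste patterns.
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Use the low-rank estimate only for the padded cells.
    padded = ratings.copy()
    padded[missing] = approx[missing]
    return padded

R = np.array([[4.0, np.nan, 2.0],
              [np.nan, 3.0, 5.0],
              [2.0, 5.0, np.nan]])
print(svd_pad(R, rank=2))
```

At Netflix scale a sparse, iterative SVD would be required; the dense decomposition here is only workable for toy matrices.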

22 Predictive Anonymization
1. Use predictive padding to reduce sparsity.
2. Cluster users into groups of size k.
3. Perform homogenization by assigning users in each group the same ratings.
The final step, homogenization, can be done in one of several ways. We describe two methods, "padded" and "pure" homogenization.

23 Predictive Anonymization: "Padded Homogenization"
All edges are added to the recommendation graph.
Each cluster is averaged using the padded data.
(Figure: Users 0001–0004 connected to Star Wars, Godfather, English Patient, Pretty in Pink; original ratings such as 3, 1, 5, 4 are replaced with cluster averages such as 3.5, 4.5, 2.5, 1.5 on every edge.)

24 Predictive Anonymization: "Pure Homogenization"
Only necessary edges are added to the graph.
Each cluster is averaged using the original data.
(Figure: the same graph, with cluster averages such as 3.5, 1, 5, 4.5 appearing only on edges for movies rated by someone in the cluster.)
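The difference between the two methods comes down to which matrix the per-cluster average is taken over. A small sketch under the assumption that unrated cells are NaN; the function name and the two-user example are illustrative:

```python
import numpy as np

def homogenize(original, padded, clusters, pure):
    """Give every user in a cluster the same rating vector.
    pure=True:  average only the original observed ratings per movie;
                movies nobody in the cluster rated stay NaN.
    pure=False: average the fully padded matrix ("padded homogenization"),
                so every cell gets a value."""
    out = np.full(original.shape, np.nan)
    for cluster in clusters:
        if pure:
            vals = original[cluster]
            counts = np.sum(~np.isnan(vals), axis=0)
            sums = np.nansum(vals, axis=0)
            # Guard the division; unrated-by-all movies become NaN.
            avg = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
        else:
            avg = padded[cluster].mean(axis=0)
        out[cluster] = avg
    return out

orig = np.array([[3.0, np.nan],
                 [1.0, np.nan]])
pad = np.array([[3.0, 4.0],
                [1.0, 2.0]])
print(homogenize(orig, pad, clusters=[[0, 1]], pure=True))   # both users: [2.0, nan]
print(homogenize(orig, pad, clusters=[[0, 1]], pure=False))  # both users: [2.0, 3.0]
```

This mirrors the trade-off in the slides: pure homogenization keeps the sparsity pattern (at the cluster level) but averages fewer values, while padded homogenization produces a dense matrix.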

25 Outline
–Introduction
–Privacy in recommender systems
–Predictive Anonymization
–Experimental results
–Conclusions and future work

26 Experiments
Performed on the Netflix Challenge dataset:
–480,189 users and 17,770 movies
–more than 100 million ratings
Singular value decomposition (SVD) is used for padding and prediction.
We compute the root mean squared error (RMSE) for a test set of 1 million ratings on the original and anonymized data:
RMSE = sqrt( (1/N) Σᵢ (pᵢ − rᵢ)² ), where pᵢ is the predicted rating and rᵢ the actual rating.
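RMSE over a held-out test set follows directly from its definition; a minimal sketch with a made-up toy test set:

```python
import math

def rmse(predicted, actual):
    """Root mean squared error: the square root of the average squared
    difference between predicted and actual ratings."""
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# Toy test set of three (predicted, actual) rating pairs.
print(rmse([3.5, 4.0, 2.0], [4, 4, 1]))  # ≈ 0.645
```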

27 Analysis: Prediction Accuracy

Experiment Series             RMSE
Original Data                 0.95185
Padded Anonymization (k=5)    0.95970
Padded Anonymization (k=50)   0.95871
Pure Anonymization (k=5)      2.36947
Pure Anonymization (k=50)     2.37710

Padded Anonymization preserves prediction accuracy. However, sparsity is eliminated, which affects the utility of the published dataset for data mining applications.

28 Summary
Comparison of the methods on utility (Prediction Accuracy, Supports Complex Data Mining) and privacy (Node Re-Identification Privacy, Link Existence Privacy):
–Naive Anonymization
–Padded Predictive Anonymization
–Pure Predictive Anonymization

29 Outline
–Introduction
–Privacy in recommender systems
–Predictive Anonymization
–Experimental results
–Conclusions and future work

30 Conclusions
We have formalized privacy and attack models for recommender systems.
Our solutions show that privacy-preserving publishing of anonymized recommendation data is feasible.
More work is required to find a practical solution that satisfies real-world privacy and utility goals.

31 Future Work
–Investigate the use of differential privacy-like guarantees for recommendation databases.
–Analyze how to protect against more complex attacks with greater background knowledge.
–Evaluate the utility of anonymized recommendation data for advanced data mining applications.

32 Thank you!

