Mining Privacy Settings to Find Optimal Privacy-Utility Tradeoffs for Social Network Services
Shumin Guo, Keke Chen
Data Intensive Analysis and Computing (DIAC) Lab, Kno.e.sis Center, Wright State University
Outline
Introduction: background, research goals, contributions
Our modeling methods: the IRT model, our research hypothesis, modeling social network privacy and utility, the weighted/personalized utility model, the tradeoff between privacy and utility
The experiments: social network data from Facebook, experimental results
Conclusions
Introduction
Background
Social network services (SNS) are popular.
SNS profiles are filled with private information, which creates privacy risks: online identity theft, insurance discrimination, …
Protecting SNS privacy is complicated:
Many new, young users do not realize the privacy risks and do not know how to protect their privacy.
Privacy settings consist of tens of options and involve an implicit privacy-utility tradeoff.
Can we provide privacy guidance for new, young users?
Some facts
Facebook's privacy settings cover 27 profile items.
Each item is set to one of four levels of exposure: "me only", "friends only", "friends of friends", or "everyone".
By default, most items are set to the highest exposure level, since it is in the SNS provider's best interest to get people exposed and connected to each other.
Research goals
Understand the SNS privacy problem:
the level of "privacy sensitivity" of each personal item, the quantification of privacy, and the balance between privacy and SNS utility.
Enhance SNS privacy:
How can we help users express their privacy concerns? How can we help users automate their privacy configuration with utility preferences in mind?
Our contributions
Develop a privacy quantification framework that considers both privacy and utility:
understand common users' privacy concerns;
help users achieve optimal privacy settings based on their utility preferences.
We study the framework with real data obtained from Facebook.
Modeling SNS Users’ Privacy Concerns
Basic idea
Use the Item Response Theory (IRT) model to understand existing SNS users' privacy settings.
Derive a quantification of privacy concern with the privacy IRT model.
Map a new user's privacy concern to the IRT model to find the best privacy setting.
The Item Response Theory (IRT) model
A classic model used in standardized test evaluation. Example: estimate the ability level of an examinee based on his/her answers to a number of questions.
The two-parametric model
α: level of discrimination of a question; β: level of difficulty of a question; θ: level of a person's latent trait.
The black curve represents Question 1 and the red curve Question 2.
The x-axis represents the quantification of "ability" (or an attitude such as "privacy concern"); the y-axis represents the probability of giving the right answer to that question.
β represents the difficulty level: β2 is larger than β1, which means Q2 is more difficult; at the same ability level, the probability of giving the right answer to Q2 is lower than to Q1.
α represents the discrimination level of the question, i.e., whether it clearly separates low-ability users from high-ability ones; flat curves (small α) have lower discrimination power.
The IRT model is learned from users' answers to a set of questions (or, in our setting, from the privacy settings for a number of profile items).
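For concreteness, here is a minimal sketch of the standard 2PL curve, P(θ) = 1 / (1 + e^(−α(θ − β))); the parameter values are made up for illustration and are not taken from the paper.

```python
import numpy as np

def icc(theta, alpha, beta):
    # Two-parameter logistic (2PL) item characteristic curve:
    # probability of the "positive" response (right answer / hiding the item)
    # for a person at trait level theta.
    return 1.0 / (1.0 + np.exp(-alpha * (theta - beta)))

theta = np.linspace(-4, 4, 9)
# Illustrative parameters only: Q2 has a larger beta ("more difficult")
# and a smaller alpha (flatter curve, less discriminating) than Q1.
p_q1 = icc(theta, alpha=1.5, beta=-0.5)
p_q2 = icc(theta, alpha=0.8, beta=1.0)
for t, p1, p2 in zip(theta, p_q1, p_q2):
    print(f"theta={t:+.1f}  P(Q1)={p1:.2f}  P(Q2)={p2:.2f}")
```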
Mapping to privacy problem
Question ↔ profile item
Answer ↔ item setting (hide/disclose)
Ability ↔ level of privacy concern
β ↔ sensitivity of the profile item
α ↔ contribution to the overall privacy concern
What we get…
[Figure: fitted item characteristic curves for profile items such as "network", "relationships", and "current_city"; x-axis: level of privacy concern, y-axis: probability of hiding the item.]
Our Research Approach
Observation: users disclose some profile items while hiding others.
If a user believes an item is not too sensitive, he/she will disclose this item.
If a user perceives an item as critical to realizing his/her social utility, he/she may also disclose it.
Otherwise, the user will hide this item.
Hypothesis: users make an implicit balancing judgment behind their SNS activities (written symbolically below).
If utility gain > privacy risk: disclose.
If utility gain < privacy risk: hide.
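In symbols (the notation u_ij for user i's perceived utility of disclosing item j and r_ij for the perceived privacy risk is ours, not the slides'):

```latex
s_{ij} =
\begin{cases}
0\ (\text{disclose}) & \text{if } u_{ij} > r_{ij},\\
1\ (\text{hide})     & \text{if } u_{ij} < r_{ij}.
\end{cases}
```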
Modeling SNS privacy Use the two-parametric IRT model
New interpretation of the IRT model parameters:
α: profile-item weight in a user's overall privacy concern
β: sensitivity level of the profile item
θ: level of a user's privacy concern
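Written out (the per-user, per-item indexing is our notation), the privacy IRT model gives the probability that user i, with privacy-concern level θ_i, hides profile item j:

```latex
P(s_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-\alpha_j(\theta_i - \beta_j)}},
\qquad s_{ij} = 1 \text{ meaning user } i \text{ hides item } j.
```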
The complete result looks like…
Finding optimal settings
Theorem: the privacy rating at θ_i is defined in terms of the user's settings for the items (1: hidden, 0: disclosed) and the modeled probability of hiding each item. [Equation shown in the slide is not reproduced here.]
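The exact rating formula appears in the paper and is not reproduced in the slide text; the sketch below assumes a simple form (rating = sum of per-item hide probabilities) and recommends hiding an item when the model says a user at that concern level is more likely to hide it than not. The item names and parameter values are hypothetical.

```python
import math

# Hypothetical fitted item parameters (alpha_j, beta_j); not from the paper.
ITEMS = {
    "current_city":  (1.2, 0.8),
    "relationships": (1.5, 0.2),
    "network":       (0.9, 1.5),
}

def hide_prob(theta, alpha, beta):
    # Privacy IRT model: probability that a user with privacy-concern
    # level theta hides this profile item.
    return 1.0 / (1.0 + math.exp(-alpha * (theta - beta)))

def recommend_setting(theta):
    # Hide an item when its hide probability at theta is at least 0.5;
    # report the sum of hide probabilities as an (assumed) privacy rating.
    setting = {name: int(hide_prob(theta, a, b) >= 0.5) for name, (a, b) in ITEMS.items()}
    rating = sum(hide_prob(theta, a, b) for a, b in ITEMS.values())
    return setting, rating

print(recommend_setting(theta=1.0))
```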
Modeling SNS utility – the same method
λ: profile-item weight in a user's SNS utility
μ: importance level of the profile item
φ: level of a user's utility preference
Exposing an item loses privacy but gains utility, so the exposure indicator of an item is the flip of s_ij.
For the utility model we can derive λ = α and μ = −β (see the sketch below).
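The derivation is one line once the exposure indicator is written as the flip of the hide indicator; the notation e_ij = 1 − s_ij and φ_i = −θ_i is ours:

```latex
P(e_{ij}=1 \mid \theta_i)
= 1 - P(s_{ij}=1 \mid \theta_i)
= \frac{1}{1 + e^{\alpha_j(\theta_i - \beta_j)}}
= \frac{1}{1 + e^{-\alpha_j\left((-\theta_i) - (-\beta_j)\right)}},
```

so with φ_i = −θ_i the exposure (utility) model has the same 2PL form, with λ_j = α_j and μ_j = −β_j.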
An important result
For a specific privacy setting over θ_i: privacy rating + utility rating ≈ a constant, i.e., privacy and utility are approximately linearly related (a sketch of why is given below).
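One way to see this, assuming (as in the sketch above) that the privacy rating is the sum of per-item hide probabilities and the utility rating is the sum of per-item exposure probabilities over the same n items:

```latex
\underbrace{\sum_{j=1}^{n} P(s_{ij}=1 \mid \theta_i)}_{\text{privacy rating}}
+ \underbrace{\sum_{j=1}^{n} \bigl(1 - P(s_{ij}=1 \mid \theta_i)\bigr)}_{\text{utility rating}}
= n .
```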
The weighted/personalized utility model
Users often have a clear intention for using the SNS but less knowledge about privacy.
Users may want to put a higher utility weight on certain groups of profile items than on others.
Users can assign specific weights to profile items to express their preferences.
The utility IRT model can be revised into a weighted model (details skipped here; a hypothetical sketch follows).
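The slides skip the details of the weighted model; the sketch below shows only one plausible form, in which each item's exposure probability is scaled by a user-assigned, normalized weight so that preferred item groups count more toward the utility rating. The names, parameters, and the weighting formula itself are assumptions.

```python
import math

def exposure_prob(phi, lam, mu):
    # Utility IRT model: probability that a user with utility-preference
    # level phi exposes an item with weight lam and importance mu.
    return 1.0 / (1.0 + math.exp(-lam * (phi - mu)))

def weighted_utility_rating(phi, item_params, weights):
    # One plausible weighted rating (not necessarily the paper's formula):
    # normalize the user-assigned weights and take a weighted sum of the
    # per-item exposure probabilities.
    total = sum(weights[name] for name in item_params)
    return sum(
        (weights[name] / total) * exposure_prob(phi, lam, mu)
        for name, (lam, mu) in item_params.items()
    )

# Hypothetical parameters and preference weights for illustration.
params = {"current_city": (1.2, -0.8), "relationships": (1.5, -0.2)}
prefs = {"current_city": 1.0, "relationships": 3.0}
print(weighted_utility_rating(phi=0.5, item_params=params, weights=prefs))
```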
Illustration of Tradeoff between privacy and utility
The Experiments
The Real Data from Facebook
Data crawled from Facebook with two accounts:
Account "normal": a normal Facebook account with a certain number of friends.
Account "fake": a fake account with no friends.
Data crawling steps:
For the friends and "friends of friends" (FoF) of the normal account, crawl the visibility of each user's profile items.
For the same group of users, crawl the visibility of their profile items again from the fake account.
From these two views we derive the following inference rules.
Deriving privacy settings of users
Based on the data crawled from the two accounts' views, we derive each user's privacy setting according to a set of rules (shown as a table in the slide). Notation: E: everyone, FoF: friends of friends, F: friends only, O: the account owner only.
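The rule table itself is not reproduced in this text, so the sketch below is only a plausible reconstruction from the crawling setup described above: the fake account has no friends, so anything it can see must be exposed to everyone; items visible only to the normal account are at most FoF-visible; items visible to neither view are treated as restricted to the owner. Function and variable names are ours.

```python
def infer_setting(visible_to_fake, visible_to_normal, is_direct_friend):
    # Plausible reconstruction of the inference rules (the original rule
    # table is an image in the slides and is not reproduced here).
    if visible_to_fake:
        return "E"      # visible to a friendless account -> everyone
    if visible_to_normal:
        # Visible from the normal account but not from the fake one:
        # for a friend-of-friend this must be the FoF level; for a direct
        # friend it is at most FoF (friends-only cannot be ruled out).
        return "FoF" if not is_direct_friend else "F or FoF"
    return "O"          # visible to neither view -> owner only

print(infer_setting(visible_to_fake=False, visible_to_normal=True, is_direct_friend=False))
```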
Experimental Results “Cleaned # of FoF”: Shared FoFs are removed to reduce the bias in modeling
Note: items that are not filled in by the user are also treated as "hidden".
Some people simply ignore certain items, or the items have no value for them (e.g., "graduate school").
This is consistent with our rationale of "disclosing/hiding" items.
Validated with 5-fold cross-validation (p-value < 0.05).
Privacy rating vs. real setting
For each θ_i (the level of privacy concern) there is one "privacy rating" (defined on a previous slide). The real settings may deviate from this ideal setting.
Results of learning weighted utility model
The weighting scheme is to be studied.
Tradeoff between privacy and utility (unweighted)
[Figure: scatter of utility rating vs. privacy rating.]
Few people have a very high level of privacy concern; more people tend to have lower privacy ratings, or implicitly higher utility ratings.
The relationship is roughly linear, and very few users have a very high privacy rating.
Tradeoff between privacy and weighted utility
Conclusion
A framework to address the tradeoff between privacy and utility.
The latent trait (IRT) model is used for modeling both privacy and utility.
We develop a personalized utility model and a tradeoff method for users to find an optimal configuration based on their utility preferences.
The models are validated with a large dataset crawled from Facebook.