Preventing Email Leaks & A Recommendation System for Email Recipients
Vitor R. Carvalho and William W. Cohen, Carnegie Mellon University
March 2007
On July 6th, 2001, the news agency Bloomberg.com published: “California Governor Gray Davis’s office released data on the state’s purchases in the spot electricity market — information Davis has been trying to keep secret — through a misdirected email. The email, containing data on California’s power purchases yesterday, was intended for members of the governor’s staff, said Davis spokesman Steve Maviglio. It was accidentally sent to some reporters on the office’s press list, he said. Davis is fighting disclosure of state power purchases, saying it would compromise negotiations for future contracts.”
Other examples… just google “email leak”:
–“Leaked email exposes MS charity as PR exercise”
–“Leaked email may be behind Morgan Stanley’s Asia economist’s sudden resignation”
–“Dell leaked email shows channel plans. Direct threat haunts dealers: a leaked email reveals Dell wants to get closer to UK resellers.”
–“California Power-Buying Data Disclosed in Misdirected Email”
Information Leaks via Email
An email leak = a message accidentally sent to “unintended” recipients. Common causes:
1. Similar first/last names, aliases
2. Aggressive auto-completion of addresses
3. Typos
4. Keyboard settings
Email leaks may contain sensitive information, leading to disastrous consequences.
Detecting Leaks: Method Idea
1. Goal: detect emails accidentally sent to the wrong person.
2. Generate artificial leaks: leaks may be simulated by various criteria: a typo, similar last names, identical first names, aggressive auto-completion of addresses, etc.
3. Method: look for outliers.
Look for Outliers
1. Build a model for (message, recipients) pairs: train a classifier on real data to detect simulated outliers (added to the “true” recipient list).
2. Features: textual (subject, body) and network features (frequencies, co-occurrences, etc.).
3. Rank potential outliers: detect the outlier and warn the user based on the classifier’s confidence.
Detecting Leaks: Method
P(rec_t) = probability that recipient t is an outlier, given the message text and the other recipients of the message.
[Figure: recipients Rec_6, Rec_2, …, Rec_K, Rec_5 ranked by P(rec_t), from most likely to least likely outlier.]
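For concreteness, a minimal sketch of this ranking step in Python, assuming an already-trained binary classifier with a scikit-learn-style predict_proba and a featurize helper; both names are illustrative placeholders, not part of the original system:

```python
# Sketch: rank the recipients of one message by the estimated
# probability that each one is an outlier (a leak), and warn the
# user about the top-ranked address if the confidence is high.
# `clf` and `featurize` are hypothetical placeholders.

def rank_outliers(message, recipients, clf, featurize):
    scored = []
    for r in recipients:
        others = [x for x in recipients if x != r]
        x = featurize(message, r, others)          # textual + network features
        p_outlier = clf.predict_proba([x])[0][1]   # P(rec_t is an outlier)
        scored.append((p_outlier, r))
    return sorted(scored, reverse=True)            # most likely outlier first
```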
Leak Criteria: How to Generate (Artificial) Outliers
Several options:
–Frequent typos, same/similar last names, identical/similar first names, aggressive auto-completion of addresses, etc.
We adopted the “3g-address” criterion. On each trial, one of the message recipients is randomly chosen and an outlier is generated as follows (see the sketch below):
–If the chosen recipient’s 3g-address list (Address Book entries similar to its address at the three-character level) is non-empty, randomly select one of them;
–Else: randomly select an Address Book entry.
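A sketch of the leak generation. The matching rule used here (same three-character address prefix) is my assumption standing in for the slide’s “3g-address” criterion, which is not spelled out:

```python
import random

def generate_leak(recipients, address_book):
    """Simulate one leak-recipient for a message. The matching rule
    (shared 3-character address prefix) is an assumption standing in
    for the "3g-address" criterion."""
    seed = random.choice(recipients)               # randomly chosen recipient
    candidates = [a for a in address_book
                  if a[:3] == seed[:3] and a not in recipients]
    if candidates:
        return random.choice(candidates)           # 3g-address match
    # Else: randomly select an Address Book entry
    return random.choice([a for a in address_book if a not in recipients])
```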
Data Preprocessing
–Used the Enron Dataset.
–Set up a realistic temporal split: for each user, the 10% most recent sent messages are used as the test set.
–All users had their Address Books extracted: the list of all recipients in their sent messages.
–Self-addressed messages were disregarded.
Data Preprocessing (continued)
–ISI version of Enron: repeated messages and inconsistencies removed.
–Main Enron addresses disambiguated, using the list provided by Corrada-Emmanuel from UMass.
–Bag-of-words: messages were represented as the union of the BOW of the body and the BOW of the subject (textual features).
–Some stop words removed.
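A minimal sketch of this preprocessing, assuming messages are dicts with date, recipients, subject, and body fields (an illustrative format, not the paper’s):

```python
from collections import Counter

def temporal_split(sent_messages, test_fraction=0.1):
    # Sort a user's sent messages by date and hold out the most
    # recent 10% as the test set.
    msgs = sorted(sent_messages, key=lambda m: m["date"])
    cut = int(len(msgs) * (1 - test_fraction))
    return msgs[:cut], msgs[cut:]

def extract_address_book(train_messages, user_address):
    # Address Book = all recipients seen in the (training) sent
    # messages, disregarding self-addressed mail.
    return {r for m in train_messages for r in m["recipients"]
            if r != user_address}

def bag_of_words(message, stop_words=frozenset()):
    # A message is the union of the BOW of its subject and body.
    tokens = (message["subject"] + " " + message["body"]).lower().split()
    return Counter(t for t in tokens if t not in stop_words)
```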
Experiments: Textual Features Only
Three baseline methods:
–Random: rank recipient addresses randomly.
–Cosine/TfIdf Centroid (Rocchio): create a “TfIdf centroid” for each user in the Address Book. A user1-centroid is the sum of all training messages (in TfIdf vector form) that were addressed to user1. At test time, rank candidates by the cosine similarity between the test message and each centroid.
–Knn-30: given a test message, retrieve the 30 most similar messages in the training set; rank each candidate by the sum of similarities of the retrieved messages addressed to it (see the sketch below).
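A sketch of the Knn-30 baseline over TfIdf vectors stored as sparse dicts; the (vector, recipients) pair format is illustrative:

```python
import math
from collections import defaultdict

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn30_scores(test_vec, train_msgs, k=30):
    # Retrieve the k training messages most similar to the test
    # message, then score each candidate recipient by the sum of
    # similarities of the retrieved messages addressed to it.
    neighbors = sorted(train_msgs, key=lambda m: cosine(test_vec, m[0]),
                       reverse=True)[:k]
    scores = defaultdict(float)
    for vec, recipients in neighbors:
        sim = cosine(test_vec, vec)
        for r in recipients:
            scores[r] += sim
    return scores          # rank candidates by descending score
```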
Experiments: Textual Features Only
Leak prediction results: accuracy (how often the simulated outlier is ranked first), averaged over 10 trials; on each trial, a different set of outliers is generated.
[Figure: accuracy of the three baseline methods.]
Using Network Features
1. Frequency features:
–Number of received messages (from this user)
–Number of sent messages (to this user)
–Number of sent+received messages
2. Co-occurrence features (see the sketch below):
–Number of times a user co-occurred with all other recipients. “Co-occur” means two recipients were addressed in the same message in the training set.
3. Max3g features:
–For each recipient R, find Rm (the address with the maximum score in R’s 3g-address list), then use score(R) − score(Rm) as a feature. Scores come from the CV10 procedure. Leak-recipient scores are likely to be smaller than their highest 3g-address score.
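A sketch of the frequency and co-occurrence counts, over the same illustrative message-dict format used above (the Max3g feature additionally needs the per-recipient 3g-address lists and CV10 scores, omitted here):

```python
from collections import defaultdict
from itertools import combinations

def count_network_features(sent_msgs, received_msgs):
    # Frequency features: messages sent to / received from each user.
    sent, received = defaultdict(int), defaultdict(int)
    # Co-occurrence features: how often two recipients were addressed
    # in the same training message.
    cooc = defaultdict(int)
    for m in sent_msgs:
        for r in m["recipients"]:
            sent[r] += 1
        for a, b in combinations(sorted(m["recipients"]), 2):
            cooc[(a, b)] += 1
    for m in received_msgs:
        received[m["sender"]] += 1
    return sent, received, cooc  # sent[r] + received[r] gives the third feature
```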
Combining Textual and Network Features
10-fold cross-validation scheme. Training (sketched below):
–Use Knn-30 in the 10-fold cross-validation setting to get a “textual score” of each user for every training message.
–Turn each training example into |R| binary examples, where |R| is the number of recipients of the message: |R|−1 positives (the real recipients) and 1 negative (the leak-recipient).
–Augment the “textual score” with the network features.
–Quantize the features.
–Train a classifier: VP5, a classification-based ranking scheme (VP5 = Voted Perceptron with 5 passes over the training set).
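A sketch of the binary-example construction (feature quantization omitted); textual_score, net_feats, and generate_leak are placeholders standing for the Knn-30 CV10 scores, the network features, and the leak simulator sketched earlier:

```python
def make_leak_examples(train_msgs, address_book, textual_score, net_feats,
                       generate_leak):
    # Each training message yields |R| binary examples: |R|-1 positives
    # (the real recipients) and 1 negative (a simulated leak-recipient
    # added to the true recipient list).
    examples = []
    for m in train_msgs:
        leak = generate_leak(m["recipients"], address_book)
        for r in m["recipients"] + [leak]:
            x = [textual_score(m, r)] + net_feats(m, r)
            y = 0 if r == leak else 1     # the leak-recipient is the negative
            examples.append((x, y))
    return examples
```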
Results: Textual+Network Features
Finding Real Leaks in Enron
How can we find them?
–Look for “mistake”, “sorry”, or “accident”. We were looking for sentences like “Sorry. Sent this to you by mistake. Please disregard.”, “I accidentally sent you this reminder”, etc.
How many can we find?
–Dozens of cases. Unfortunately, most of these originated from non-Enron addresses, or from Enron addresses not among the 151 Enron users whose messages were collected; our method requires a collection of sent (and received) messages from a user.
Found 2 real “valid” cases (“valid” = testable):
–Message germanyc/sent/930: the message has 20 recipients, one of which is the leak.
–Message kitchen-l/sent items/497: it has 44 recipients, one of which is the leak.
Finding Real Leaks in Enron
–Very disappointing results!
–Reason: the two leak-recipient addresses were never observed in the training set!
[Table: accuracy and average rank over 100 trials.]
“Smoothing” the Leak Generation
Sample from random unseen recipients with probability α (sketched below):
–With probability α: generate a random address NOT in the Address Book.
–Else (with probability 1 − α): use the 3g-address criterion as before, falling back to a random Address Book entry when no 3g-address match exists.
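A sketch of the smoothed generator; α, the fallback to generate_leak from the earlier sketch, and the fabricated-address format are all illustrative assumptions:

```python
import random
import string

def generate_leak_smoothed(recipients, address_book, alpha):
    # With probability alpha, simulate an *unseen* recipient: a random
    # address NOT in the Address Book. Otherwise fall back to the
    # 3g-address criterion (generate_leak, sketched earlier).
    if random.random() < alpha:
        addr = None
        while addr is None or addr in address_book:
            local = "".join(random.choices(string.ascii_lowercase, k=8))
            addr = local + "@example.com"      # fabricated unseen address
        return addr
    return generate_leak(recipients, address_book)
```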
Some results: kitchen-l has 4 unseen addresses out of its 44 recipients; germany-c has only one, out of 20.
Mixture parameter α:
[Figure: performance as a function of α.]
Back to the simulated leaks:
[Figure: results on the simulated-leak task with the smoothed generation.]
Conclusions
–Privacy and email papers are rare. To the best of our knowledge, this was the first paper on preventing information leaks via email.
–Can prevent HUGE problems.
–Easy to implement in any email client; no change on the server side.
–The Leak paper was accepted at SDM-07.
–“This is a feature I would like to have in the email client I use myself.”
–“Personally, I am eager to use such a tool if its accuracy is good.”
Preventing Email Leaks & A Recommendation System for Email Recipients
Vitor R. Carvalho and William W. Cohen, Carnegie Mellon University
March 2007
Recommending Email Recipients
1. Prevent a user from forgetting to add an important collaborator or manager as a recipient, avoiding costly misunderstandings and communication delays. The cost of errors in task management is high: for instance, deadlines can be missed or opportunities wasted because of such errors.
2. Find people in an organization who are working on a similar topic or project, or who have the appropriate expertise or skills.
A valuable addition to email systems, particularly in large corporations: systems that can suggest who the recipients of a message might be while the message is being composed, given its current contents and its previously-specified recipients.
Two Recommendation Tasks
–TO+CC+BCC prediction
–CC+BCC prediction
Method (sketched below):
1. Extract features: textual and non-textual.
2. Build a model for (message, recipients) pairs: train a classifier to detect “true” missing recipients.
3. Rank all addresses in the Address Book according to the classifier’s confidence.
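A sketch of step 3, reusing the hypothetical clf/featurize placeholders from the leak half of the talk:

```python
def recommend_recipients(message, given_recipients, address_book,
                         clf, featurize, top_n=5):
    # Rank every Address Book entry by the classifier's confidence
    # that it is a (missing) recipient of the message being composed.
    scored = []
    for a in address_book:
        if a in given_recipients:
            continue                            # already on the message
        x = featurize(message, a, given_recipients)
        scored.append((clf.predict_proba([x])[0][1], a))
    return [a for _, a in sorted(scored, reverse=True)[:top_n]]
```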
Methods
–Large-scale multi-class, multi-label classification task: Address Books have hundreds, sometimes thousands, of addresses (classes).
–One-vs-all training is too expensive, even for users with small collections of messages.
–Information retrieval techniques as baselines: Rocchio (TfIdf centroid) and KNN.
–Enron Dataset, with preprocessing steps similar to the leak problem.
Using Network Features
1. Frequency features:
–Number of received messages (from this user)
–Number of sent messages (to this user)
–Number of sent+received messages
2. Co-occurrence features (CC+BCC prediction only):
–Number of times a user co-occurred with all other recipients. “Co-occur” means two recipients were addressed in the same message in the training set.
3. Recency features (see the sketch below):
–How frequent a recipient is in the last 20, 50, and 100 sent messages.
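A sketch of the recency features, assuming train_msgs is date-sorted (most recent last); normalizing the counts by window size is my assumption:

```python
def recency_features(train_msgs, candidate, windows=(20, 50, 100)):
    # How frequently `candidate` appears as a recipient in the last
    # 20/50/100 sent messages. train_msgs must be sorted by date,
    # most recent last.
    feats = []
    for w in windows:
        recent = train_msgs[-w:]
        count = sum(1 for m in recent if candidate in m["recipients"])
        feats.append(count / float(len(recent)) if recent else 0.0)
    return feats
```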
Combining Textual and Network Features
10-fold cross-validation scheme. Training:
–Use Knn-30 in the 10-fold cross-validation setting to get a “textual score” of each user for every training message.
–Turn each training example into |AB| binary examples, where |AB| is the number of addresses in the Address Book: J positives (the real recipients) and |AB|−J negatives (all other Address Book entries).
–Augment the “textual score” with the network features.
–Train a classifier: VP5, a classification-based ranking scheme (VP5 = Voted Perceptron with 5 passes over the training set; sketched below).
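Both halves of the talk rank with VP5. A compact sketch of the Voted Perceptron itself (after Freund & Schapire, 1999), using the signed vote sum as the ranking score; the exact scoring variant used in the paper may differ:

```python
import math

def voted_perceptron(examples, passes=5):
    # examples: list of (feature_vector, label) with label in {0, 1}.
    # Returns a list of (weight_vector, survival_count) pairs.
    dim = len(examples[0][0])
    w, c, survivors = [0.0] * dim, 1, []
    for _ in range(passes):                    # VP5: 5 passes over the data
        for x, y in examples:
            y_s = 1 if y == 1 else -1
            margin = sum(wi * xi for wi, xi in zip(w, x))
            if y_s * margin <= 0:              # mistake: retire current w
                survivors.append((list(w), c))
                w = [wi + y_s * xi for wi, xi in zip(w, x)]
                c = 1
            else:
                c += 1
    survivors.append((list(w), c))
    return survivors

def vp_score(survivors, x):
    # Signed, vote-weighted sum of the surviving perceptrons'
    # predictions; higher means "more likely a true recipient".
    return sum(c * math.copysign(1.0, sum(wi * xi for wi, xi in zip(w, x)))
               for w, c in survivors)
```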
Results: TO+CC+BCC Prediction
Average Recall vs. Rank Curves
Overall Results
Related Work
Privacy enforcement systems
–Boufaden et al. (CEAS-2005) used information extraction techniques and domain knowledge to detect privacy breaches via email in a university environment. Breaches: student names, student grades, and student IDs.
CC prediction
–Pal & McCallum (CEAS-06): the counterpart problem, predicting the most likely intended recipients of a message. One single user, limited evaluation, not public data.
Expert finding in email
–Dom et al. (SIGMOD-03), Campbell et al. (CIKM-03)
–Balog & de Rijke (WWW-06), Balog et al. (SIGIR-06)
–Soboroff, Craswell & de Vries (TREC Enterprise track): expert-finding task on the W3C corpus.
Conclusions
–Submitted to KDD-07.
–The recipient prediction task can be seen as the negative counterpart of the leak prediction task: in the former, we want to find the intended recipients of messages, whereas in the latter we want to find the unintended recipients, or email-leaks.
–A desirable addition to email systems, helping avoid misunderstandings and communication delays.
–Efficient, easy to implement and integrate, particularly in systems where traditional email search is already available.