Unsupervised Detection of Anomalous Text David Guthrie The University of Sheffield
Textual Anomalies
Computers are routinely used to detect differences from what is normal or expected:
–fraud
–network attacks
The principal focus of this research is to similarly detect text that is irregular. We view text that deviates from its context as a type of anomaly.
New Document Collection Anomalous Documents?
Find text that is unusual
New Document Anomalous Segments?
Motivation
Plagiarism
–Writing style of plagiarized passages is anomalous with respect to the rest of the author's work
–Detect such passages because the writing is "odd", not by using external resources (web)
Improving Corpora
–Automatically gathered corpora can contain errors; improve their integrity and homogeneity
Unsolicited
–E.g. spam constructed from sentences
Undesirable Bulletin Board or Wiki Posts
–E.g. rants on Wikipedia
Goals
To develop a general approach which recognizes:
–different dimensions of anomaly
–fairly small segments (50 to 100 words)
–multiple anomalous segments
Unsupervised
For this task we assume there is no training data available to characterize "normal" or "anomalous" language. When we first look at a document we have no idea which segments are "normal" and which are "anomalous". Segments are anomalous with respect to the rest of the document, not to a training corpus.
Outlier Detection
Treat the problem as a type of outlier detection: we aim to find pieces of text in a corpus that differ significantly from the majority of text in that corpus and are thus 'outliers'.
Characterizing Text
166 features computed for every piece of text (many of which have been used successfully for genre classification by Biber, Kessler, Argamon, …):
–Simple Surface Features
–Readability Measures
–POS Distributions (RASP)
–Vocabulary Obscurity
–Emotional Affect (General Inquirer Dictionary)
Readability Measures
Attempt to provide a rough indication of the reading level required for a text. Purported to correspond to how "easily" a text is read. Work well for differentiating certain texts (scores are Flesch Reading Ease):
–Romeo & Juliet: 84
–Plato's Republic: 69
–Comic Books: 92
–Sports Illustrated: 63
–New York Times: 39
–IRS Code: -6
Readability Measures
–Flesch-Kincaid Reading Ease
–Flesch-Kincaid Grade Level
–Gunning-Fog Index
–Coleman-Liau Formula
–Automated Readability Index
–Lix Formula
–SMOG Index
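Two of these measures can be sketched directly from their standard published formulas. The syllable counter below is a crude vowel-group heuristic, an assumption for illustration, not necessarily the counter used in this work.

```python
# Flesch Reading Ease and Flesch-Kincaid Grade Level from raw counts,
# using the standard published constants. The syllable counter is a
# rough vowel-group heuristic (an assumption, not part of the talk).

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Higher scores mean easier text (the IRS Code scores below zero)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Approximate US school grade required to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def count_syllables(word: str) -> int:
    """Crude heuristic: count maximal runs of consecutive vowels."""
    vowels = "aeiouy"
    groups, prev_vowel = 0, False
    for ch in word.lower():
        is_vowel = ch in vowels
        if is_vowel and not prev_vowel:
            groups += 1
        prev_vowel = is_vowel
    return max(groups, 1)

# Example: a 100-word, 5-sentence segment containing 130 syllables.
ease = flesch_reading_ease(100, 5, 130)
grade = flesch_kincaid_grade(100, 5, 130)
```

The other indices in the list follow the same pattern: a linear or root formula over word, sentence, character, and syllable counts.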
Obscurity of Vocabulary
Implemented new features to capture the vocabulary richness of a segment of text. Lists of the most frequent words in the Gigaword corpus are compiled, and we measure the distribution of a segment's words across each group of words:
–Top 1,000 words
–Top 5,000 words
–Top 10,000 words
–Top 50,000 words
–Top 100,000 words
–Top 200,000 words
–Top 300,000 words
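These band features can be sketched as follows. The tiny frequency-ordered vocabulary and band cutoffs below are hypothetical stand-ins for the real Gigaword-derived lists.

```python
# Vocabulary-obscurity features: for each frequency band, the fraction
# of a segment's tokens whose frequency rank falls inside the band.
# FREQ_ORDERED and BAND_SIZES are toy stand-ins for the Gigaword lists.

FREQ_ORDERED = ["the", "of", "and", "to", "a", "in", "text", "anomaly"]
BAND_SIZES = [2, 4, 8]  # stand-ins for the 1,000 / 5,000 / ... cutoffs
RANK = {w: i for i, w in enumerate(FREQ_ORDERED)}

def band_profile(tokens):
    """Fraction of tokens whose frequency rank falls within each band."""
    profile = []
    for size in BAND_SIZES:
        inside = sum(1 for t in tokens
                     if RANK.get(t.lower(), len(RANK)) < size)
        profile.append(inside / len(tokens))
    return profile
```

A segment heavy in rare vocabulary gets low values in the small bands, which is exactly the signal these features carry into the feature vector.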
Part-of-Speech
All segments are passed through the RASP (Robust and Accurate Statistical Parser) part-of-speech tagger. All words are tagged with one of 155 part-of-speech tags from the CLAWS 2 tagset.
Part-of-Speech
–Ratio of adjectives to nouns
–% of sentences that begin with a subordinating or coordinating conjunction (but, so, then, yet, if, because, unless, or, …)
–% articles
–% prepositions
–% pronouns
–% adjectives
–% conjunctions
–Diversity of POS trigrams
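The percentage features above can be sketched from tagged output. The coarse tag names below are illustrative stand-ins, not the exact 155-tag CLAWS 2 inventory.

```python
# POS-distribution features: percentage of tokens carrying each tag of
# interest, computed from (word, tag) pairs such as a tagger produces.
# The tag names here are illustrative, not the full CLAWS 2 tagset.

def pos_proportions(tagged, tags_of_interest):
    """Percentage of tokens carrying each tag of interest."""
    n = len(tagged)
    return {tag: 100.0 * sum(1 for _, t in tagged if t == tag) / n
            for tag in tags_of_interest}

tagged = [("the", "AT"), ("old", "JJ"), ("dog", "NN"),
          ("slept", "VBD"), ("on", "IN"), ("the", "AT"), ("mat", "NN")]
props = pos_proportions(tagged, ["AT", "JJ", "NN"])
```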
Morphological Analysis
Texts are also run through the RASP morphological analyser, which produces word lemmas and inflectional affixes:
–were → be + ed
–made → make + ed
–thinking → think + ing
–apples → apple + s
We gather statistics about the percentage of passive sentences and the amount of nominalization.
Rank Features
Store lists ordered by the frequency of occurrence of certain stylistic phenomena:
–Most frequent POS trigrams list
–Most frequent POS bigrams list
–Most frequent POS list
–Most frequent articles list
–Most frequent prepositions list
–Most frequent conjunctions list
–Most frequent pronouns list
List Rank Similarity
To calculate the similarity between two segments' lists, we use Spearman's rank correlation measure.
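A minimal sketch of comparing two rank lists with Spearman's rank correlation. How items missing from one of the two lists are handled is not specified here; assigning them the worst rank, as below, is an assumption.

```python
# Spearman's rank correlation between two frequency-ordered lists,
# using the standard formula rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)).
# Items absent from one list get the worst rank (an assumption).

def spearman_rho(list_a, list_b):
    items = set(list_a) | set(list_b)
    n = len(items)
    rank_a = {item: i for i, item in enumerate(list_a)}
    rank_b = {item: i for i, item in enumerate(list_b)}
    d_squared = sum((rank_a.get(item, n) - rank_b.get(item, n)) ** 2
                    for item in items)
    return 1 - (6 * d_squared) / (n * (n * n - 1))
```

Identical orderings score 1, exactly reversed orderings score -1, so 1 - rho serves as a distance between two segments' rank lists.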
Sentiment
General Inquirer Dictionary (developed by the social science department at Harvard): 7,800 words tagged with 114 categories:
–Positive / Negative
–Strong / Weak
–Active / Passive
–Overstated / Understated
–Agreement / Disagreement
–Negate
–Casual slang
–Think
–Know
–Compare
–Person Relations
–Need
–Power Gain / Power Loss
–Affection
–Work
and many more …
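The affect features can be sketched as category counts over a tagged lexicon. The four-word lexicon below is a hypothetical stand-in for the real 7,800-word General Inquirer dictionary.

```python
# Emotional-affect features: fraction of a segment's tokens falling in
# each General Inquirer category. LEXICON is a tiny hypothetical
# stand-in for the real dictionary.

LEXICON = {
    "good": {"Positive", "Strong"},
    "bad": {"Negative"},
    "must": {"Strong"},
    "maybe": {"Weak"},
}

def affect_profile(tokens, categories=("Positive", "Negative",
                                       "Strong", "Weak")):
    counts = {c: 0 for c in categories}
    for t in tokens:
        for c in LEXICON.get(t.lower(), ()):
            if c in counts:
                counts[c] += 1
    return {c: counts[c] / len(tokens) for c in categories}
```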
Representation
Characterize each piece of text (document, segment, paragraph, …) in our corpus as a vector of features. Use these vectors to construct a matrix, X, with one row per piece of text in the corpus and one column per feature.
Feature Matrix X
(diagram: matrix with one row per segment, seg 1…n, and one column per feature, f1…fp)
Represent each piece of text as a vector of features
Document or corpus
Feature Matrix X
(diagram: the same segment-by-feature matrix, with an outlying row highlighted)
Identify outlying text
Document or corpus
Approaches
–Mean Distance: compute a segment's average distance from all other segments
–Comp Distance: compute a segment's distance from its complement
–SDE Distance: find the projection of the data where segments appear farthest apart
Mean Distance
Finding Outlying Segments (feature matrix diagram) Calculate the distance from segment 1 to segment 2: Dist = .5
Finding Outlying Segments (feature matrix diagram) Calculate the distance from segment 1 to segment 3: Dist = .3
Finding Outlying Segments (feature matrix diagram) Build a distance matrix
Finding Outlying Segments (feature and distance matrix diagram) Choose the segment that is most different: the outlier
Ranking Segments (diagram: feature matrix → distance matrix → list of segments) Produce a ranking of segments
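The Mean Distance pipeline above can be sketched end to end. City-block distance and the two-dimensional toy vectors below are stand-ins for the full 166-feature representation.

```python
# Mean Distance: build the pairwise distance matrix over segment
# feature vectors, then rank segments by their average distance to all
# other segments (most anomalous first).

def city_block(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def rank_by_mean_distance(segments):
    n = len(segments)
    mean_dist = [sum(city_block(segments[i], segments[j])
                     for j in range(n) if j != i) / (n - 1)
                 for i in range(n)]
    return sorted(range(n), key=lambda i: mean_dist[i], reverse=True)

# Three similar segments and one outlier (toy 2-feature vectors).
vectors = [[0.1, 0.2], [0.1, 0.3], [0.2, 0.2], [0.9, 0.9]]
ranking = rank_by_mean_distance(vectors)
```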
Distance Measures
–Cosine similarity measure (d = 1 − s)
–City block distance
–Euclidean distance
–Pearson correlation coefficient (d = 1 − r)
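The four measures, with the similarity and correlation converted to distances as noted above, can be sketched as plain functions over equal-length feature vectors:

```python
# The four distance measures listed above.
import math
from statistics import mean

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = (math.sqrt(sum(a * a for a in u))
             * math.sqrt(sum(b * b for b in v)))
    return 1 - dot / norms          # d = 1 - s

def city_block_distance(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def euclidean_distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def pearson_distance(u, v):
    mu, mv = mean(u), mean(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return 1 - cov / (su * sv)      # d = 1 - r
```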
Standardizing Variables
It is desirable for all variables to have about the same influence. We can express each as deviations from its mean in units of standard deviations (z-score), or standardize all variables to have a minimum of zero and a maximum of one.
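Both standardization options can be sketched over a single feature column:

```python
# Z-score and min-max standardization of one feature column, so that no
# single feature dominates the distance computations.
from statistics import mean, stdev

def z_scores(values):
    """Deviations from the mean in units of standard deviations."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def min_max(values):
    """Rescale to a minimum of zero and a maximum of one."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```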
Comp Distance
Distance from complement New Document or corpus
Distance from complement Segment the text
Distance from complement (diagram: feature vector f1…fn for one segment) Characterize one segment
Distance from complement Characterize the complement of the segment
Distance from complement Compute the distance between the two vectors D=.4
Distance from complement
Compute the distance between each segment and its complement, for all segments (D=.6, D=.4, …)
Rank by distance from complement
Next, segments are ranked by their distance from the complement. In this scenario we can make good use of the list (rank) features.
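The Comp Distance procedure above can be sketched as follows. Approximating the complement's feature vector by averaging the other segments' vectors is an assumption made for brevity; the approach described here characterizes the complement text itself.

```python
# Comp Distance: rank each segment by its distance from its complement
# (all remaining segments). The complement vector is approximated here
# by averaging the other segments' feature vectors (an assumption).

def city_block(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def rank_by_comp_distance(segments):
    n, dims = len(segments), len(segments[0])
    dists = []
    for i in range(n):
        others = [segments[j] for j in range(n) if j != i]
        complement = [sum(v[d] for v in others) / len(others)
                      for d in range(dims)]
        dists.append(city_block(segments[i], complement))
    return sorted(range(n), key=lambda i: dists[i], reverse=True)

# Same toy vectors as before: three similar segments and one outlier.
vectors = [[0.1, 0.2], [0.1, 0.3], [0.2, 0.2], [0.9, 0.9]]
comp_ranking = rank_by_comp_distance(vectors)
```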
SDE Distance
SDE
Use the Stahel-Donoho Estimator (SDE) to identify outliers. Project the data down to one dimension and measure the outlyingness of each piece of text in that dimension. For every piece of text, the goal is to find a projection of the data that maximizes its robust z-score. Especially suited to data with a large number of dimensions (features).
Outliers are ‘hidden’
Robust z-score of the furthest point is < 3
Robust z-score for the triangles in this projection is > 12 std dev
Outliers are clearly visible
SDE
SD(x_i) = max over unit-length directions a of |x_i · a − median_j(x_j · a)| / mad_j(x_j · a)
where a is a direction (unit-length vector), x_i · a is the projection of row x_i onto direction a, and mad is the median absolute deviation
Anomalies have a large SD
The distances SD(x_i) for each piece of text are then sorted and all pieces of text are ranked.
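The SDE score can be sketched with a common practical approximation: rather than maximizing over all directions, search a fixed number of random unit directions. The use of random search and the number of directions are assumptions for illustration, not details from this work.

```python
# Approximate Stahel-Donoho outlyingness:
#   SD(x_i) = max over directions a of |x_i.a - median_j(x_j.a)| / mad
# using random unit-length directions in place of the full maximization.
import math
import random
from statistics import median

def sde_scores(rows, n_directions=500, seed=0):
    rng = random.Random(seed)
    dims = len(rows[0])
    scores = [0.0] * len(rows)
    for _ in range(n_directions):
        a = [rng.gauss(0, 1) for _ in range(dims)]
        norm = math.sqrt(sum(c * c for c in a))
        a = [c / norm for c in a]                      # unit direction
        proj = [sum(x * c for x, c in zip(row, a)) for row in rows]
        med = median(proj)
        mad = median(abs(p - med) for p in proj)
        if mad == 0:
            continue                                   # degenerate direction
        for i, p in enumerate(proj):
            scores[i] = max(scores[i], abs(p - med) / mad)
    return scores
```

Pieces of text are then ranked by their score, largest (most anomalous) first.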
Experiments
In each experiment we randomly select 50 segments of text from a corpus and insert one piece of text from a different source to act as an 'outlier'. Segments are then ranked. We varied the size of the pieces of text from 100 to 1,000 words.
Creating Test Documents: Normal Population, Anomalous Population
Creating Test Documents
Test Document: now we attempt to spot this anomaly
Creating Test Documents
Test documents are created for every segment in the anomalous text
New Document
System output: a ranking
–Most Anomalous
–2nd Most Anomalous
–3rd Most Anomalous
Author Tests
Compare 8 different authors (56 pairs of authors):
»Bronte
»Carroll
»Doyle
»Eliot
»James
»Kipling
»Tennyson
»Wells
For each author we use 50,000 words. For each pair, at least 50 different paragraphs from one author are inserted into the other author's text, one at a time. Tests are run using different segment sizes:
»100 words
»500 words
»1000 words
Authorship Anomalies - Top 5 Ranking
Fact versus Opinion Tests
Testing whether opinion can be detected in a factual story. Opinion text is editorials from 4 newspapers totalling 28,200 words:
»Guardian
»New Statesman
»New York Times
»Daily Telegraph
Factual text is randomly chosen from the Gigaword and consists of 4 different 78,000-word segments, one from each of the 4 newswire services:
»Agence France Press English Service
»Associated Press Worldstream English Service
»The New York Times Newswire Service
»The Xinhua News Agency English Service
Each opinion-text paragraph is inserted into each newswire service one at a time, for at least 28 insertions on each newswire. Tests are run using different paragraph sizes.
Opinion Anomalies - Top 5 Ranking
News versus Anarchist Cookbook
Very different genre from newswire: the writing is much more procedural (e.g. instructions to build telephone phreaking devices) and also very informal (e.g. ``When the fuse contacts the balloon, watch out!!!''). Randomly insert one segment from the Anarchist Cookbook and attempt to identify outliers.
–This is repeated 200 times for each segment size (100, 500, and 1,000 words)
Anarchist Cookbook Anomalies - Top 5 Ranking
News versus Chinese MT
35 thousand words of Chinese news articles were hand-picked (Wei Liu) and translated into English using Google's Chinese-to-English translation engine. Similar genre to English newswire, but the translations are far from perfect and so the language use is very odd. Test collections are created for each segment size as before.
Chinese MT Anomalies - Top 5 Ranking
Results
Comp Distance anomaly detection produced the best overall results, closely followed by SDE Distance. The city block distance measure always produced good rankings (with all methods).
Conclusions
–Variations in text can be viewed as a type of anomaly or outlier and can be successfully detected using automatic unsupervised techniques.
–Stylistic features and distributions of the rarity of words are good choices for characterizing text and detecting a broad range of anomalies.
–The procedure that measures a piece of text's distance from its textual complement performs best.
–Accuracy for anomaly detection improves considerably as we increase the length of our segments.