Michael L. Nelson CS 432/532 Old Dominion University

Similar presentations
Document Filtering Dr. Frank McCown Intro to Web Science Harding University This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike.

Basic Communication on the Internet: Email.
Naive Bayes Classifiers, an Overview By Roozmehr Safi.
CPSC 502, Lecture 15Slide 1 Introduction to Artificial Intelligence (AI) Computer Science cpsc502, Lecture 15 Nov, 1, 2011 Slide credit: C. Conati, S.
What is Spam? Any unwanted messages that are sent to many users at once. Spam can be sent via email, text message, online chat, blogs or various other.
Google Apps: Google Mail Got Gmail?....Need Help? Mrs. Connor.
Engineering Village ™ Basic Searching.
Machine Learning in Practice Lecture 7 Carolyn Penstein Rosé Language Technologies Institute/ Human-Computer Interaction Institute.
CSC 380 Algorithm Project Presentation Spam Detection Algorithms Kyle McCombs Bridget Kelly.
Presented by: Alex Misstear Spam Filtering An Artificial Intelligence Showcase.
Analysis of frequency counts with Chi square
Engineering Village ™ ® Basic Searching On Compendex ®
CS345 Data Mining Web Spam Detection. Economic considerations  Search has become the default gateway to the web  Very high premium to appear on the.
1 Chapter 12 Probabilistic Reasoning and Bayesian Belief Networks.
1 CS 430 / INFO 430 Information Retrieval Lecture 12 Probabilistic Information Retrieval.
Email Spam Filters. What is Spam? Unsolicited (legally, “no existing relationship”), Automated, Bulk. Not necessarily commercial – “flaming”, political.
1 CS 430 / INFO 430 Information Retrieval Lecture 10 Probabilistic Information Retrieval.
CONTENT-BASED BOOK RECOMMENDING USING LEARNING FOR TEXT CATEGORIZATION TRIVIKRAM BHAT UNIVERSITY OF TEXAS AT ARLINGTON DATA MINING CSE6362 BASED ON PAPER.
CS Bayesian Learning1 Bayesian Learning. CS Bayesian Learning2 States, causes, hypotheses. Observations, effect, data. We need to reconcile.
The DrüG Book: An Intro to Drupal (DrüG: Drupal User's Group - users, not developers) This is an introduction.
Advanced Multimedia Text Classification Tamara Berg.
Classes and Class Libraries Examples and Hints November 9,
Bayesian Networks. Male brain wiring Female brain wiring.
Python & Web Mining Old Dominion University Department of Computer Science Hany SalahEldeen CS495 – Python & Web Mining Fall 2012 Lecture 5 CS 495 Fall.
Testing Michael Ernst CSE 140 University of Washington.
Text Classification, Active/Interactive learning.
The Internet 8th Edition Tutorial 2 Basic Communication on the Internet: Email.
Hunter Valley Amateur Beekeepers Forum User Guide Guide shows sample screenshots with most relevant actions. Website is at
CS Fall 2007 Dr. Barbara Boucher Owens. CS 2 Text –Main, Michael. Data Structures & Other Objects in Java Third Edition Objectives –Master building.
Web Optimization - Review. Web Optimization - Metrics (ROI). What is ROI? Return on Investment (Finance): ROI = (Profit – Costs) / Costs.
Errors And How to Handle Them. GIGO There is a saying in computer science: “Garbage in, garbage out.” Is this true, or is it just an excuse for bad programming?
Evaluating What’s Been Learned. Cross-Validation Foundation is a simple idea – “ holdout ” – holds out a certain amount for testing and uses rest for.
Working on exercises (a few notes first). Comments Sometimes you want to make a comment in the Python code, to remind you what’s going on. Python ignores.
Improving Cloaking Detection Using Search Query Popularity and Monetizability Kumar Chellapilla and David M Chickering Live Labs, Microsoft.
ITCS373: Internet Technology Lecture 5: More HTML.
Computing Science, University of Aberdeen. Reflections on Bayesian Spam Filtering. Tutorial nr. 10 of CS2013 is based on Rosen, 6th Ed., Chapter 6 & exercises.
1 FollowMyLink Individual APT Presentation Third Talk February 2006.
1 CS 430: Information Discovery Sample Midterm Examination Notes on the Solutions.
Slides for “Data Mining” by I. H. Witten and E. Frank.
Data Mining – Algorithms: Naïve Bayes Chapter 4, Section 4.2.
CHAPTER 6 Naive Bayes Models for Classification. QUESTION????
Bayesian Filtering Team Glyph Debbie Bridygham Pravesvuth Uparanukraw Ronald Ko Rihui Luo Thuong Luu Team Glyph Debbie Bridygham Pravesvuth Uparanukraw.
Wikispam, Wikispam, Wikispam PmWiki Patrick R. Michaud, Ph.D. March 4, 2005.
Web Information Retrieval Textbook by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze Notes Revised by X. Meng for SEU May 2014.
Introduction to JavaScript MIS 3502, Spring 2016 Jeremy Shafer Department of MIS Fox School of Business Temple University 2/2/2016.
Feature Assignment LBSC 878 February 22, 1999 Douglas W. Oard and Dagobert Soergel.
Naïve Bayes Classifier, April 25th. Classification Methods (1): Manual classification. Used by Yahoo!, Looksmart, about.com, ODP. Very accurate when.
Machine Learning in Practice Lecture 21 Carolyn Penstein Rosé Language Technologies Institute/ Human-Computer Interaction Institute.
BAYESIAN LEARNING. 2 Bayesian Classifiers Bayesian classifiers are statistical classifiers, and are based on Bayes theorem They can calculate the probability.
3M Partners and Suppliers USER GUIDE: Supplier eInvoicing. The 3M beX environment: Day-to-day use.
1 e-Resources on Social Sciences: Scopus. 2 Why Scopus?  A comprehensive abstract and citation database of peer-reviewed literature and quality web sources.
1 Text Categorization  Assigning documents to a fixed set of categories  Applications:  Web pages  Recommending pages  Yahoo-like classification hierarchies.
Document Filtering Michael L. Nelson CS 495/595 Old Dominion University This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike.
Introduction to Information Retrieval Introduction to Information Retrieval Lecture 15: Text Classification & Naive Bayes 1.
Vertical Search for Courses of UIUC Homepage Classification The aim of the Course Search project is to construct a database of UIUC courses across all.
Collective Intelligence Week 6: Document Filtering Old Dominion University Department of Computer Science CS 795/895 Spring 2009 Michael L. Nelson 2/18/09.
General System Navigation
Create a blog Skills: create, modify and post to a blog
Collective Intelligence Week 11: k-Nearest Neighbors
Document Filtering Social Web 3/17/2010 Jae-wook Ahn.
Michael L. Nelson CS 495/595 Old Dominion University
How to Use Members Area of The Ninety-Nines Website
Text Categorization Assigning documents to a fixed set of categories
Naïve Bayes Classifiers
CS 430: Information Discovery
NAÏVE BAYES CLASSIFICATION
Information Organization: Evaluation of Classification Performance
Evaluation David Kauchak CS 158 – Fall 2019.
Presentation transcript:

Document Filtering Michael L. Nelson CS 432/532 Old Dominion University. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. This course is based on Dr. McCown's class.

Can we classify these documents? Science, Leisure, Programming, etc.

Can we classify these documents? Important, Work, Personal, Spam, etc.

How about very small documents?

Rule-Based Classifiers Are Inadequate

If my email has the word "spam", is the message about:
http://www.youtube.com/watch?v=anwy2MPT5RE
http://en.wikipedia.org/wiki/Spam_%28Monty_Python%29
https://docs.python.org/2/faq/general.html#why-is-it-called-python

Rule-based classifiers don't consider context

Features

Many external features can be used, depending on the type of document:
links pointing in? links pointing out? recipient list? sender's email and IP address?
Many internal features:
use of certain words or phrases, color and sizes of words, document length, grammar analysis
We will focus on internal features to build a classifier

Spam

Unsolicited, unwanted, bulk messages sent via electronic messaging systems
Usually advertising or some economic incentive
Many forms: email, forum posts, blog comments, social networking, web pages for search engines, etc.

http://modernl.com/images/illustrations/how-viagra-spam-works-large.png

Classifiers

A classifier needs features for classifying documents
A feature is anything you can determine that is present or absent in an item
The best features are common enough to appear frequently, but not all the time (cf. stopwords)
Words in a document are a useful feature
For spam detection, certain words like "viagra" usually appear in spam
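A minimal sketch of a word-feature extractor in the spirit of the course's getwords(); the length filter is the one a later slide quotes (if len(s)>2 and len(s)<20), but the other details here are assumptions:

import re

def getwords(doc):
    # Split the document on non-alphanumeric characters
    words = [s.lower() for s in re.split(r'\W+', doc)
             if len(s) > 2 and len(s) < 20]
    # Features are presence/absence, so repeated words collapse to one
    return dict([(w, 1) for w in words])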

Classifying with Supervised Learning

We "teach" the program to learn the difference between spam as unsolicited bulk email, luncheon meat, and comedy troupes by providing examples of each classification
We use an item's features for classification:
item = document
feature = word
classification = {good|bad}

Simple Feature Classifications

>>> import docclass
>>> cl=docclass.classifier(docclass.getwords)
>>> cl.setdb('mln.db')
>>> cl.train('the quick brown fox jumps over the lazy dog','good')
the quick brown fox jumps over the lazy dog
>>> cl.train('make quick money in the online casino','bad')
make quick money in the online casino
>>> cl.fcount('quick','good')
1.0
>>> cl.fcount('quick','bad')
>>> cl.fcount('casino','good')
>>> cl.fcount('casino','bad')

http://imgur.com/gallery/zWuuJ67
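Under the hood, train() just counts feature/category pairs. Below is a minimal in-memory sketch of that bookkeeping; the actual course classifier persists these counts to a SQLite database (mln.db) via setdb(), so its details differ:

class classifier:
    def __init__(self, getfeatures):
        self.getfeatures = getfeatures  # e.g., getwords
        self.fc = {}  # feature counts: {feature: {category: count}}
        self.cc = {}  # category counts: {category: items trained}

    def train(self, item, cat):
        # Increment the count of every feature with this category
        for f in self.getfeatures(item):
            self.fc.setdefault(f, {}).setdefault(cat, 0)
            self.fc[f][cat] += 1
        # Increment the count of items seen in this category
        self.cc.setdefault(cat, 0)
        self.cc[cat] += 1

    def fcount(self, f, cat):
        # How many times has feature f appeared in category cat?
        return float(self.fc.get(f, {}).get(cat, 0))

    def catcount(self, cat):
        return float(self.cc.get(cat, 0))

    def totalcount(self):
        return sum(self.cc.values())

    def categories(self):
        return self.cc.keys()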

training data

def sampletrain(cl):
    cl.train('Nobody owns the water.','good')
    cl.train('the quick rabbit jumps fences','good')
    cl.train('buy pharmaceuticals now','bad')
    cl.train('make quick money at the online casino','bad')
    cl.train('the quick brown fox jumps','good')

Conditional Probabilities

>>> import docclass
>>> cl=docclass.classifier(docclass.getwords)
>>> cl.setdb('mln.db')
>>> docclass.sampletrain(cl)
Nobody owns the water.
the quick rabbit jumps fences
buy pharmaceuticals now
make quick money at the online casino
the quick brown fox jumps
>>> cl.fprob('quick','good')
0.66666666666666663
>>> cl.fprob('quick','bad')
0.5
>>> cl.fprob('casino','good')
0.0
>>> cl.fprob('casino','bad')
>>> cl.fcount('quick','good')
2.0
>>> cl.fcount('quick','bad')
1.0
>>>

Pr(A|B) = "probability of A given B"
fprob(quick|good) = "probability of quick given good" = (quick classified as good) / (total good items) = 2 / 3
fprob(quick|bad) = "probability of quick given bad" = (quick classified as bad) / (total bad items) = 1 / 2

note: we're writing to a database, so your counts might be off if you re-run the examples
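A sketch of fprob(), continuing the in-memory classifier sketched earlier; the course's db-backed version computes the same ratio:

    def fprob(self, f, cat):
        # Pr(feature|category): times the feature appeared in items of
        # this category, divided by the number of items in the category
        if self.catcount(cat) == 0:
            return 0
        return self.fcount(f, cat) / self.catcount(cat)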

Assumed Probabilities

>>> cl.fprob('money','bad')
0.5
>>> cl.fprob('money','good')
0.0

we have data for bad, but should we start with 0 probability for money given good? define an assumed probability of 0.5; then weightedprob() returns the weighted mean of fprob and the assumed probability

>>> cl.weightedprob('money','good',cl.fprob)
0.25
>>> docclass.sampletrain(cl)
Nobody owns the water.
the quick rabbit jumps fences
buy pharmaceuticals now
make quick money at the online casino
the quick brown fox jumps
>>> cl.weightedprob('money','good',cl.fprob)
0.16666666666666666
>>> cl.fcount('money','bad')
3.0
>>> cl.weightedprob('money','bad',cl.fprob)
0.5

weightedprob(money,good) = (weight * assumed + countAllCats * fprob()) / (countAllCats + weight)
= (1*0.5 + 1*0) / (1+1) = 0.5 / 2 = 0.25
(double the training) = (1*0.5 + 2*0) / (2+1) = 0.5 / 3 = 0.166
Pr(money|bad) remains = (0.5 + 3*0.5) / (3+1) = 0.5
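A sketch of weightedprob() that reproduces the arithmetic above; weight=1.0 and assumed probability ap=0.5 are the defaults this slide uses:

    def weightedprob(self, f, cat, prf, weight=1.0, ap=0.5):
        # Probability from the training data so far (e.g., prf=self.fprob)
        basicprob = prf(f, cat)
        # Total occurrences of this feature across all categories
        totals = sum([self.fcount(f, c) for c in self.categories()])
        # Weighted mean of the assumed probability and the observed one;
        # the more training data, the less the assumed 0.5 matters
        return ((weight * ap) + (totals * basicprob)) / (weight + totals)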

Naïve Bayesian Classifier

Move from terms to documents:
Pr(document) = Pr(term1) * Pr(term2) * … * Pr(termn)

Naïve because we assume all terms occur independently
we know this is a simplifying assumption; it is naïve to think all terms are equally likely to complete:
"Shave and a hair cut ___ ____"
"New York _____"
"International Business ______"

Bayesian because we use Bayes' Theorem to invert the conditional probabilities

Probability of Whole Document

A naïve Bayesian classifier determines the probability of an entire document being given a classification:
Pr(Category | Document)

Assume: Pr(python|bad) = 0.2, Pr(casino|bad) = 0.8
So Pr(python & casino|bad) = 0.2 * 0.8 = 0.16
This is Pr(Document|Category). How do we calculate Pr(Category|Document)?
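That product over the document's features is the docprob() method in the book's naivebayes class; a sketch, continuing the classifier sketched earlier:

    def docprob(self, item, cat):
        # Pr(Document|Category): multiply together the (weighted)
        # probabilities of every feature in the document
        p = 1
        for f in self.getfeatures(item):
            p *= self.weightedprob(f, cat, self.fprob)
        return p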

Bayes' Theorem

Given our training data, we know: Pr(feature|classification)
What we really want to know is: Pr(classification|feature)

Bayes' Theorem (http://en.wikipedia.org/wiki/Bayes%27_theorem):
Pr(A|B) = Pr(B|A) Pr(A) / Pr(B)

Pr(good|doc) = Pr(doc|good) Pr(good) / Pr(doc)
Pr(doc|good): we know how to calculate this
Pr(good): #good / #total
Pr(doc): we skip this since it is the same for each classification

see: https://twitter.com/KirkDBorne/status/850073322884927488 and https://www.countbayesie.com/blog/2015/2/18/bayes-theorem-with-lego
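Combining the pieces gives the score computed on the next slide; a sketch (Pr(doc) is dropped because it is the same for every category):

    def prob(self, item, cat):
        # Pr(Category) = items in this category / items total
        catprob = self.catcount(cat) / self.totalcount()
        # Pr(Document|Category) from docprob() above
        docprob = self.docprob(item, cat)
        # Proportional to Pr(Category|Document); Pr(doc) is omitted
        return docprob * catprob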

Our Bayesian Classifier

>>> import docclass
>>> cl=docclass.naivebayes(docclass.getwords)
>>> cl.setdb('mln.db')
>>> docclass.sampletrain(cl)
Nobody owns the water.
the quick rabbit jumps fences
buy pharmaceuticals now
make quick money at the online casino
the quick brown fox jumps
>>> cl.prob('quick rabbit','good')
quick rabbit
0.15624999999999997
>>> cl.prob('quick rabbit','bad')
0.050000000000000003
>>> cl.prob('quick rabbit jumps','good')
quick rabbit jumps
0.095486111111111091
>>> cl.prob('quick rabbit jumps','bad')
0.0083333333333333332

we use these values only for comparison, not as "real" probabilities

Classification Thresholds

>>> cl.prob('quick rabbit','good')
quick rabbit
0.15624999999999997
>>> cl.prob('quick rabbit','bad')
0.050000000000000003
>>> cl.classify('quick rabbit',default='unknown')
u'good'
>>> cl.prob('quick money','good')
quick money
0.09375
>>> cl.prob('quick money','bad')
0.10000000000000001
>>> cl.classify('quick money',default='unknown')
u'bad'
>>> cl.setthreshold('bad',3.0)
>>> cl.classify('quick money',default='unknown')
'unknown'
>>> for i in range(10): docclass.sampletrain(cl)
...
[training data deleted]
>>> cl.prob('quick money','good')
quick money
0.016544117647058824
>>> cl.prob('quick money','bad')
0.10000000000000001
>>> cl.classify('quick money',default='unknown')
u'bad'
>>> cl.prob('quick rabbit','good')
quick rabbit
0.13786764705882351
>>> cl.prob('quick rabbit','bad')
0.0083333333333333332
>>> cl.classify('quick rabbit',default='unknown')
u'good'

only classify something as bad if it is 3X more likely to be bad than good
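A sketch of classify() showing how the threshold is applied, continuing the earlier sketch (getthreshold() is assumed to return 1.0 when no threshold has been set): the winning category is returned only if it beats every rival by its threshold factor.

    def classify(self, item, default=None):
        probs, best, maxprob = {}, None, 0.0
        # Find the category with the highest probability
        for cat in self.categories():
            probs[cat] = self.prob(item, cat)
            if probs[cat] > maxprob:
                maxprob, best = probs[cat], cat
        # e.g., with threshold 3.0 on 'bad', 'bad' must be 3X more
        # likely than 'good'; otherwise fall back to the default
        for cat in probs:
            if cat == best:
                continue
            if probs[cat] * self.getthreshold(best) > probs[best]:
                return default
        return best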

Fisher Method

Normalize the frequencies for each category
e.g., we might have far more "bad" training data than good, so the net cast by the bad data will be "wider" than we'd like
Naïve Bayes = combine feature probabilities to arrive at document probability
Fisher = calculate category probability for each feature, combine the probabilities, then see if the set of probabilities is more or less than the expected value for a random document
Calculate the normalized Bayesian probability, then fit the result to an inverse chi-square function to see what is the probability that a random document of that classification would have those features (i.e., terms)

Fisher Code

class fisherclassifier(classifier):
    def cprob(self,f,cat):
        # The frequency of this feature in this category
        clf=self.fprob(f,cat)
        if clf==0: return 0
        # The frequency of this feature in all the categories
        freqsum=sum([self.fprob(f,c) for c in self.categories()])
        # The probability is the frequency in this category divided by
        # the overall frequency
        p=clf/(freqsum)
        return p

Fisher Code

    def fisherprob(self,item,cat):
        # Multiply all the probabilities together
        p=1
        features=self.getfeatures(item)
        for f in features:
            p*=(self.weightedprob(f,cat,self.cprob))
        # Take the natural log and multiply by -2
        fscore=-2*math.log(p)
        # Use the inverse chi2 function to get the probability
        # of getting the fscore value we got
        return self.invchi2(fscore,len(features)*2)

http://en.wikipedia.org/wiki/Inverse-chi-squared_distribution
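invchi2() is not shown on the slide; a sketch of the usual series-expansion implementation, with degrees of freedom = 2 * number of features as in the call above:

import math

    def invchi2(self, chi, df):
        # Probability that a chi-square value this large or larger
        # would arise by chance with df degrees of freedom
        m = chi / 2.0
        total = term = math.exp(-m)
        for i in range(1, df // 2):
            term *= m / i
            total += term
        return min(total, 1.0)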

Fisher Example

>>> import docclass
>>> cl=docclass.fisherclassifier(docclass.getwords)
>>> cl.setdb('mln.db')
>>> docclass.sampletrain(cl)
Nobody owns the water.
the quick rabbit jumps fences
buy pharmaceuticals now
make quick money at the online casino
the quick brown fox jumps
>>> cl.cprob('quick','good')
0.57142857142857151
>>> cl.fisherprob('quick','good')
quick
0.5535714285714286
>>> cl.fisherprob('quick rabbit','good')
quick rabbit
0.78013986588957995
>>> cl.cprob('rabbit','good')
1.0
>>> cl.fisherprob('rabbit','good')
rabbit
0.75
>>> cl.cprob('quick','bad')
0.4285714285714286
>>> cl.cprob('money','good')
>>> cl.cprob('money','bad')
1.0
>>> cl.cprob('buy','bad')
>>> cl.cprob('buy','good')
>>> cl.fisherprob('money buy','good')
money buy
0.23578679513998632
>>> cl.fisherprob('money buy','bad')
0.8861423315082535
>>> cl.fisherprob('money quick','good')
money quick
0.41208671548422637
>>> cl.fisherprob('money quick','bad')
0.70116895256207468
>>>

Classification with Inverse Chi-Square Result

>>> cl.fisherprob('quick rabbit','good')
quick rabbit
0.78013986588957995
>>> cl.classify('quick rabbit')
u'good'
>>> cl.fisherprob('quick money','good')
quick money
0.41208671548422637
>>> cl.classify('quick money')
u'bad'
>>> cl.setminimum('bad',0.8)
>>> cl.setminimum('good',0.4)
>>> cl.setminimum('good',0.42)
>>>

in practice, we'll tolerate false positives for "good" more than false negatives for "good": we'd rather see a message that is spam than lose a message that is not spam

this version of the classifier does not print "unknown" as a classification

Classifying Entries in the F-Measure Blog

encoding problems with supplied python_search.xml: fixable, but didn't want to work that hard
f-measure.blogspot.com is an Atom-based feed; music is not classified by genre
edits made to feedfilter.py & data:
commented out the "publisher" field
rather than further edit feedfilter.py, I s/content/summary/g in f-measure.xml (a hack, I know…)
changes in read():

# Print the best guess at the current category
#print 'Guess: '+str(classifier.classify(entry))
print 'Guess: '+str(classifier.classify(fulltext))

# Ask the user to specify the correct category and train on that
cl=raw_input('Enter category: ')
classifier.train(fulltext,cl)
#classifier.train(entry,cl)

where fulltext is now title + summary

F-Measure Example

>>> import feedfilter
>>> import docclass
>>> cl=docclass.fisherclassifier(docclass.getwords)
>>> cl.setdb('mln-f-measure.db')
>>> feedfilter.read('f-measure.xml',cl)
[lots of interactive training stuff deleted]
>>> cl.classify('cars')
u'electronic'
>>> cl.classify('uk')
u'80s'
>>> cl.classify('ocasek')
>>> cl.classify('weezer')
u'alt'
>>> cl.classify('cribs')
>>> cl.classify('mtv')
>>> cl.cprob('mtv','alt')
>>> cl.cprob('mtv','80s')
0.51219512195121952
>>> cl.classify('libertines')
u'alt'
>>> cl.classify('wichita')
>>> cl.classify('journey')
u'80s'
>>> cl.classify('venom')
u'metal'
>>> cl.classify('johnny cash')
u'cover'
>>> cl.classify('spooky')
>>> cl.classify('dj spooky')
>>> cl.classify('dj shadow')
u'electronic'
>>> cl.cprob('spooky','metal')
0.60000000000000009
>>> cl.cprob('spooky','electronic')
0.40000000000000002
>>> cl.classify('dj')
>>> cl.cprob('dj','80s')
>>> cl.cprob('dj','electronic')

we have "dj spooky" (electronic) and "spooky intro" (metal); unfortunately, getwords() ignores "dj" with: if len(s)>2 and len(s)<20

Improved Feature Detection

entryfeatures() on p. 137 takes an entry as an argument, not a string (the edits from two slides ago would have to be backed out)
looks for > 30% UPPERCASE words
does not tokenize "publisher" and "creator" fields (actually, the code just does that for "publisher")
for the "summary" field, it preserves 1-grams (as before) but also adds bi-grams

For example, "…best songs ever: "Good Life" and "Pink Triangle"." would be split into:
1-grams: best, songs, ever, good, life, and, pink, triangle
bi-grams: best songs, songs ever, ever good, good life, life and, and pink, pink triangle
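A sketch of what such an entryfeatures() can look like; field access follows feedparser-style entry dicts, and this is a reconstruction of the ideas above, not the book's exact p. 137 code:

import re

def entryfeatures(entry):
    splitter = re.compile(r'\W+')
    f = {}
    # Title words, annotated so they don't collide with summary words
    for s in splitter.split(entry['title']):
        if 2 < len(s) < 20:
            f['Title:' + s.lower()] = 1
    # Summary words: keep 1-grams and adjacent-word bi-grams
    summarywords = [s for s in splitter.split(entry['summary'])
                    if 2 < len(s) < 20]
    uc = 0
    for i in range(len(summarywords)):
        w = summarywords[i]
        f[w.lower()] = 1
        if w.isupper():
            uc += 1
        if i < len(summarywords) - 1:
            f[' '.join(summarywords[i:i + 2]).lower()] = 1
    # Keep the publisher field whole rather than tokenizing it
    f['Publisher:' + entry['publisher']] = 1
    # Virtual UPPERCASE feature flags "shouting" (> 30% uppercase words)
    if summarywords and float(uc) / len(summarywords) > 0.3:
        f['UPPERCASE'] = 1
    return f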

Precision and Recall: Defined the Same, but Expectations are Different

Remember this slide from Week 5? Now we can expect to populate the top right-hand corner.

Precision = TP / (TP+FP)
Recall = TP / (TP+FN)
F-Measure = 2 * P * R / (P+R)

https://en.wikipedia.org/wiki/Precision_and_recall
https://en.wikipedia.org/wiki/Confusion_matrix
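A quick worked sketch of those formulas; the confusion-matrix counts are made up for illustration:

def prf(tp, fp, fn):
    # Precision: of everything we called positive, how much was right?
    precision = tp / float(tp + fp)
    # Recall: of everything actually positive, how much did we find?
    recall = tp / float(tp + fn)
    # F-Measure: harmonic mean of precision and recall
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

# e.g., 80 true positives, 10 false positives, 20 false negatives:
# precision = 80/90 ~ 0.889, recall = 80/100 = 0.800, F ~ 0.842
print(prf(80, 10, 20))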

10-Fold Cross Validation

image from: https://chrisjmccormick.wordpress.com/2013/07/31/k-fold-cross-validation-with-matlab-code/
https://en.wikipedia.org/wiki/Cross-validation_%28statistics%29
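The figure shows the idea: split the labeled data into 10 equal folds, train on 9, test on the held-out fold, and rotate so every fold serves as the test set exactly once. A minimal sketch for this kind of classifier, where docs is a hypothetical list of (text, category) pairs and makeclassifier a factory returning a fresh, untrained classifier for each fold:

def crossvalidate(docs, makeclassifier, k=10):
    # Average accuracy over k train/test splits
    scores = []
    for fold in range(k):
        train = [d for i, d in enumerate(docs) if i % k != fold]
        test = [d for i, d in enumerate(docs) if i % k == fold]
        cl = makeclassifier()
        for text, cat in train:
            cl.train(text, cat)
        correct = sum(1 for text, cat in test
                      if cl.classify(text, default='unknown') == cat)
        scores.append(float(correct) / len(test))
    return sum(scores) / k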