CSCI 5417 Information Retrieval Systems Jim Martin Lecture 11 9/29/2011

Today 9/29
- Classification
- Naïve Bayes classification
- Unigram LM

Where we are...
- Basics of ad hoc retrieval
  - Indexing
  - Term weighting/scoring
  - Cosine
  - Evaluation
- Document classification
- Clustering
- Information extraction
- Sentiment/opinion mining

Is this spam?
From: "" Subject: real estate is the only way... gem oalvgkay
Anyone can buy real estate with no money down Stop paying rent TODAY! There is no need to spend hundreds or even thousands for similar courses I am 22 years old and I have already purchased 6 properties using the methods outlined in this truly INCREDIBLE ebook. Change your life NOW!
=================================================
Click Below to order:
=================================================

Text Categorization Examples
Assign labels to each document or web page:
- Labels are most often topics, such as Yahoo categories: finance, sports, news>world>asia>business
- Labels may be genres: editorials, movie-reviews, news
- Labels may be opinion: like, hate, neutral
- Labels may be domain-specific: "interesting-to-me" : "not-interesting-to-me"; "spam" : "not-spam"; "contains adult content" : "doesn't"; "important to read now" : "not important"

Categorization/Classification
Given:
- A description of an instance, x ∈ X, where X is the instance language or instance space. The issue for us is how to represent text documents.
- A fixed set of categories: C = {c1, c2, ..., cn}
Determine:
- The category of x: c(x) ∈ C, where c(x) is a categorization function whose domain is X and whose range is C. We want to know how to build categorization functions (i.e., "classifiers").

Text Classification Types
Those examples can be further classified by type:
- Binary: spam/not spam, contains adult content/doesn't
- Multiway: business vs. sports vs. gossip
- Hierarchical: News > UK > Wales > Weather
- Mixture model: .8 basketball, .2 business

Document Classification
(Figure: a topic hierarchy with training and test documents. Top-level classes include AI, Programming, HCI; leaf classes include ML, Planning, Semantics, Garb.Coll., Multimedia, GUI.)
Training data, sample terms per class:
- ML: learning, intelligence, algorithm, reinforcement, network, ...
- Planning: planning, temporal, reasoning, plan, language, ...
- Semantics: programming, semantics, language, proof, ...
- Garb.Coll.: garbage, collection, memory, optimization, region, ...
Test data: "planning language proof intelligence"

Bayesian Classifiers
Task: classify a new instance D, described by a tuple of attribute values D = <x1, x2, ..., xn>, into one of the classes cj ∈ C:
cMAP = argmax over cj ∈ C of P(cj | x1, x2, ..., xn)
     = argmax over cj ∈ C of P(x1, x2, ..., xn | cj) P(cj)

Naïve Bayes Classifiers
- P(cj) can be estimated from the frequency of classes in the training examples.
- P(x1, x2, ..., xn | cj) has O(|X|^n |C|) parameters and could only be estimated if a very, very large number of training examples was available.
- Naïve Bayes conditional independence assumption: assume that the probability of observing the conjunction of attributes is equal to the product of the individual probabilities:
  P(x1, x2, ..., xn | cj) = ∏i P(xi | cj)

The Naïve Bayes Classifier (Belief Net)
(Figure: class node Flu with children X1 = fever, X2 = sinus, X3 = cough, X4 = runny nose, X5 = muscle ache.)
Conditional independence assumption: features detect term presence and are independent of each other given the class:
P(X1, ..., X5 | C) = P(X1 | C) P(X2 | C) P(X3 | C) P(X4 | C) P(X5 | C)

Learning the Model
(Figure: class node C with children X1 ... X6.)
First attempt: maximum likelihood estimates, i.e., simply use the frequencies in the data:
P(cj) = N(C = cj) / N
P(xi | cj) = N(Xi = xi, C = cj) / N(C = cj)

Smoothing to Avoid Overfitting
Add-one smoothing:
P(xi | cj) = (N(Xi = xi, C = cj) + 1) / (N(C = cj) + k)
where k is the number of values of Xi.
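A minimal sketch of the add-one estimate in Python (the function name and count structures are illustrative, not from the slides):

    from collections import Counter

    def add_one_estimate(value, counts, k):
        """P(Xi = value | cj) with add-one smoothing.
        counts: Counter of values of Xi among training examples of class cj.
        k: number of possible values of Xi."""
        n = sum(counts.values())
        return (counts[value] + 1) / (n + k)

    # e.g. value seen once among 4 class-cj observations, k = 6 possible values:
    print(add_one_estimate("japan", Counter({"japan": 1, "china": 1,
                                             "soccer": 1, "baseball": 1}), 6))
    # -> (1 + 1) / (4 + 6) = 0.2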

Stochastic Language Models
Model the probability of generating strings (each word in turn) in the language (commonly all strings over ∑). E.g., a unigram model M:

  the    0.2
  a      0.1
  man    0.01
  woman  0.01
  said   0.03
  likes  0.02
  ...

P(s | M) for s = "the man likes the woman": multiply the per-word probabilities:
0.2 × 0.01 × 0.02 × 0.2 × 0.01 = 8 × 10^-8
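As a sketch, the "multiply" step for the unigram model M above looks like this in Python (the dictionary encodes the slide's probabilities):

    model_m = {"the": 0.2, "a": 0.1, "man": 0.01,
               "woman": 0.01, "said": 0.03, "likes": 0.02}

    def p_string(words, model):
        """P(s | M): product of the per-word unigram probabilities."""
        p = 1.0
        for w in words:
            p *= model[w]  # assumes every word of s is in the model
        return p

    print(p_string("the man likes the woman".split(), model_m))
    # 0.2 * 0.01 * 0.02 * 0.2 * 0.01 = 8e-08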

Stochastic Language Models
Model the probability of generating any string. Two models:

             Model M1    Model M2
  the        0.2         0.2
  class      0.01        ...
  sayst      ...         0.03
  pleaseth   ...         0.02
  yon        ...         0.1
  maiden     ...         0.01
  woman      0.01        ...

For s = "the class pleaseth yon maiden": P(s | M2) > P(s | M1)

Unigram and higher-order models
- Unigram language models:
  P(w1 w2 w3 w4) = P(w1) P(w2) P(w3) P(w4)
  Easy. Effective!
- Bigram (generally, n-gram) language models:
  P(w1 w2 w3 w4) = P(w1) P(w2 | w1) P(w3 | w2) P(w4 | w3)
- Other language models: grammar-based models (PCFGs), etc. Probably not the first thing to try in IR.
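To make the contrast concrete, here is a sketch of both decompositions (the probability tables p_uni and p_bi are hypothetical placeholders):

    def unigram_prob(words, p_uni):
        """P(w1 ... wn) = P(w1) P(w2) ... P(wn)"""
        prob = 1.0
        for w in words:
            prob *= p_uni[w]
        return prob

    def bigram_prob(words, p_uni, p_bi):
        """P(w1 ... wn) = P(w1) P(w2|w1) ... P(wn|w(n-1))"""
        prob = p_uni[words[0]]  # assumes a non-empty word list
        for prev, cur in zip(words, words[1:]):
            prob *= p_bi[(prev, cur)]
        return prob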

Naïve Bayes via a class conditional language model = multinomial NB
(Figure: class node Cat generating words w1 ... w6.)
Effectively, the probability of a document under each class is computed with a class-specific unigram language model:
P(c, d) = P(c) ∏i P(wi | c)

Using Multinomial Naive Bayes to Classify Text
- Attributes are text positions, values are words. Still too many possibilities.
- Assume that classification is independent of the positions of the words:
  - Use the same parameters for each position
  - Result is the bag-of-words model (over tokens, not types)
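A one-line sketch of the bag-of-words representation in Python (over tokens, not types: multiplicity is kept, position is discarded):

    from collections import Counter

    tokens = "japan japan exports".split()
    bag = Counter(tokens)  # Counter({'japan': 2, 'exports': 1}); positions gone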

Naïve Bayes: Learning
From the training corpus, extract Vocabulary.
Calculate the required P(cj) and P(xk | cj) terms:
For each cj in C do
  docsj ← subset of documents for which the target class is cj
  P(cj) ← |docsj| / |total number of documents|
  Textj ← single document containing all docsj
  for each word xk in Vocabulary
    nk ← number of occurrences of xk in Textj
    P(xk | cj) ← (nk + 1) / (n + |Vocabulary|)
where n is the total number of word occurrences in Textj.
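A minimal Python sketch of this training loop for the multinomial model with add-one smoothing (the data layout, a list of (class, tokens) pairs, is an assumption made for illustration):

    from collections import Counter, defaultdict

    def train_nb(docs):
        """docs: list of (class_label, token_list) pairs."""
        vocab = {t for _, toks in docs for t in toks}
        classes = {c for c, _ in docs}
        prior, cond = {}, defaultdict(dict)
        for c in classes:
            class_docs = [toks for cc, toks in docs if cc == c]
            prior[c] = len(class_docs) / len(docs)                    # P(cj)
            text_c = Counter(t for toks in class_docs for t in toks)  # Text_j
            n = sum(text_c.values())
            for t in vocab:
                cond[c][t] = (text_c[t] + 1) / (n + len(vocab))       # add-one
        return vocab, prior, cond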

Multinomial Model
Under the multinomial model, a document d of length nd is scored as
P(cj | d) ∝ P(cj) ∏ over k = 1..nd of P(tk | cj)
where tk is the token at position k.

Naïve Bayes: Classifying
positions ← all word positions in the current document which contain tokens found in Vocabulary
Return cNB, where
cNB = argmax over cj ∈ C of P(cj) ∏ over i ∈ positions of P(xi | cj)

Apply Multinomial

Naive Bayes: Time Complexity
- Training time: O(|D| Ld + |C||V|), where Ld is the average length of a document in D.
  - Assumes V and all Di, ni, and nij are pre-computed in O(|D| Ld) time during one pass through all of the data.
  - Generally just O(|D| Ld), since usually |C||V| < |D| Ld.
- Test time: O(|C| Lt), where Lt is the average length of a test document.
- Very efficient overall; linearly proportional to the time needed just to read in all the data.

Underflow Prevention: log space
- Multiplying lots of probabilities, which are between 0 and 1 by definition, can result in floating-point underflow.
- Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities:
  cNB = argmax over cj ∈ C of [log P(cj) + Σ over i ∈ positions of log P(xi | cj)]
- The class with the highest final unnormalized log probability score is still the most probable.
- Note that the model is now just a max of a sum of weights...
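A sketch of the classification step in log space, using the model returned by the train_nb sketch above; tokens outside the vocabulary are skipped, matching the "positions" definition on the classifying slide:

    import math

    def classify_nb(tokens, vocab, prior, cond):
        def score(c):
            s = math.log(prior[c])
            for t in tokens:
                if t in vocab:
                    s += math.log(cond[c][t])  # sum of logs, not a product
            return s
        return max(prior, key=score)  # class with the highest log score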

Naïve Bayes example
Given 4 documents:
- D1 (sports): China soccer
- D2 (sports): Japan baseball
- D3 (politics): China trade
- D4 (politics): Japan Japan exports
Classify:
- D5: soccer
- D6: Japan
Use add-one smoothing with:
- the multinomial model
- the multivariate Bernoulli model

Naïve Bayes example
V is {China, soccer, Japan, baseball, trade, exports}; |V| = 6
Sizes: Sports = 2 docs, 4 tokens; Politics = 2 docs, 5 tokens

  P(Japan | class)    Raw    Smoothed
    Sports            1/4    2/10
    Politics          2/5    3/11

  P(soccer | class)   Raw    Smoothed
    Sports            1/4    2/10
    Politics          0/5    1/11

Naïve Bayes example
Classifying "soccer" (as a doc):
P(soccer | sports) = .2
P(soccer | politics) = .09
Sports > Politics, or, normalized:
.2 / (.2 + .09) = .69
.09 / (.2 + .09) = .31

New example
What about a doc like the following? "Japan soccer"
Sports: P(japan | sports) P(soccer | sports) P(sports) = .2 × .2 × .5 = .02
Politics: P(japan | politics) P(soccer | politics) P(politics) = .27 × .09 × .5 = .01
Or roughly .66 to .33 after normalizing.
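The two sketches above reproduce this worked example (assuming lower-cased tokens):

    docs = [("sports", "china soccer".split()),
            ("sports", "japan baseball".split()),
            ("politics", "china trade".split()),
            ("politics", "japan japan exports".split())]
    vocab, prior, cond = train_nb(docs)
    print(cond["sports"]["soccer"])   # 2/10 = 0.2
    print(cond["politics"]["japan"])  # 3/11 ≈ 0.27
    print(classify_nb("japan soccer".split(), vocab, prior, cond))  # sports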

Evaluating Categorization
- Evaluation must be done on test data that are independent of the training data (usually a disjoint set of instances).
- Classification accuracy: c/n, where n is the total number of test instances and c is the number of test instances correctly classified by the system.
- Average results over multiple training and test sets (splits of the overall data) for the most reliable estimates.
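A small sketch of accuracy over held-out data, averaged over random splits (the split logic and reuse of the train_nb/classify_nb sketches are illustrative only):

    import random

    def accuracy(classify, test):
        """c / n over labeled (class, tokens) test pairs."""
        correct = sum(1 for c, toks in test if classify(toks) == c)
        return correct / len(test)

    def averaged_accuracy(data, n_splits=5, test_frac=0.2):
        scores = []
        for _ in range(n_splits):
            shuffled = random.sample(data, len(data))
            cut = int(len(shuffled) * test_frac)
            test, train = shuffled[:cut], shuffled[cut:]
            vocab, prior, cond = train_nb(train)
            scores.append(accuracy(
                lambda toks: classify_nb(toks, vocab, prior, cond), test))
        return sum(scores) / len(scores)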

Example: AutoYahoo!
Classify 13,589 Yahoo! webpages in the "Science" subtree into 95 different topics (hierarchy depth 2).

WebKB Experiment
- Classify webpages from CS departments into: student, faculty, course, project
- Train on ~5,000 hand-labeled web pages (Cornell, Washington, U. Texas, Wisconsin)
- Crawl and classify a new site (CMU)

NB Model Comparison


SpamAssassin
- Naïve Bayes made a big splash with spam filtering: Paul Graham's "A Plan for Spam" and its offspring...
- Naive Bayes-like classifier with weird parameter estimation
- Widely used in spam filters
  - Classic Naive Bayes is superior when appropriately used (according to David D. Lewis)
- Many filters use NB classifiers, but also many other things: black hole lists, etc.

Naïve Bayes on spam

Naive Bayes is Not So Naive
- Does well in many standard evaluation competitions
- Robust to irrelevant features: irrelevant features cancel each other out without affecting results. Decision trees, in contrast, can suffer heavily from this.
- Very good in domains with many equally important features: decision trees suffer from fragmentation in such cases, especially with little data
- A good, dependable baseline for text classification
- Very fast: learning takes one pass over the data; testing is linear in the number of attributes and document collection size
- Low storage requirements

Next couple of classes
- Other classification issues
  - What about vector spaces?
  - Lucene infrastructure
- Better ML approaches
  - SVMs, etc.