Text Classification – Naive Bayes


Text Classification – Naive Bayes September 11, 2014 Credits for slides: Allan, Arms, Manning, Lund, Noble, Page.

Generative and Discriminative Models: An analogy The task is to determine the language that someone is speaking. Generative approach: learn each language, then determine which language the speech belongs to. Discriminative approach: learn the linguistic differences between languages without fully learning any language, a much easier task!

Taxonomy of ML Models
Generative methods: model class-conditional pdfs and prior probabilities; "generative" because sampling from the model can generate synthetic data points. Popular models: Gaussians, Naïve Bayes, mixtures of multinomials, mixtures of Gaussians, mixtures of experts, Hidden Markov Models (HMM).
Discriminative methods: directly estimate posterior probabilities; no attempt to model the underlying probability distributions; generally better performance. Popular models: logistic regression, SVMs (kernel methods), traditional neural networks, nearest neighbor, Conditional Random Fields (CRF).

Summary of Basic Probability Formulas Product rule: probability of a conjunction of two events A and B. Sum rule: probability of a disjunction of two events A and B. Bayes theorem: the posterior probability of A given B. Theorem of total probability: if events A1, …, An are mutually exclusive with \sum_{i=1}^{n} P(A_i) = 1, the probability of B can be decomposed over them.
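These are the standard identities; written out:

P(A \wedge B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)

P(A \vee B) = P(A) + P(B) - P(A \wedge B)

P(A \mid B) = \dfrac{P(B \mid A)\,P(A)}{P(B)}

P(B) = \sum_{i=1}^{n} P(B \mid A_i)\,P(A_i) \quad \text{when } \sum_{i=1}^{n} P(A_i) = 1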

Generative Probabilistic Models Assume a simple (usually unrealistic) probabilistic method by which the data was generated. For categorization, each category has a different parameterized generative model that characterizes that category. Training: Use the data for each category to estimate the parameters of the generative model for that category. Testing: Use Bayesian analysis to determine the category model that most likely generated a specific test instance.

Bayesian Methods Learning and classification methods based on probability theory. Bayes theorem plays a critical role in probabilistic learning and classification. Build a generative model that approximates how data is produced Use prior probability of each category given no information about an item. Categorization produces a posterior probability distribution over the possible categories given a description of an item.

Bayes Theorem
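In symbols, with H a hypothesis (e.g. a class) and E the observed evidence:

P(H \mid E) = \dfrac{P(E \mid H)\,P(H)}{P(E)}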

Bayes Classifiers for Categorical Data Task: classify a new instance x, described by a tuple of attribute values, into one of the classes cj ∈ C by choosing the most probable class given the attribute values.
(Example table: instances described by the attributes Color (red/blue) and Shape (circle/square), each labeled positive or negative.)

Joint Distribution The joint probability distribution for a set of random variables X1,…,Xn gives the probability of every combination of values: P(X1,…,Xn). The probability of any conjunction of events can be calculated by summing the appropriate subset of values from the joint distribution. Therefore, all conditional probabilities can also be calculated.

positive:
           circle   square
  red       0.20     0.02
  blue      0.02     0.01

negative:
           circle   square
  red       0.05     0.30
  blue      0.20     0.20
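For instance, reading the required entries off the table above:

P(\text{positive} \mid \text{red} \wedge \text{circle}) = \dfrac{P(\text{positive} \wedge \text{red} \wedge \text{circle})}{P(\text{red} \wedge \text{circle})} = \dfrac{0.20}{0.20 + 0.05} = 0.80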

Bayes Classifiers P(cj) can be estimated from the frequency of classes in the training examples. P(x1,x2,…,xn|cj) has O(|X|^n · |C|) parameters and could only be estimated if a very, very large number of training examples were available. We therefore need to make some sort of independence assumption about the features to make learning tractable.
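This is the maximum a posteriori (MAP) decision rule that the slide refers to:

c_{MAP} = \operatorname*{argmax}_{c_j \in C} P(c_j \mid x_1, x_2, \ldots, x_n) = \operatorname*{argmax}_{c_j \in C} P(x_1, x_2, \ldots, x_n \mid c_j)\,P(c_j)

(the denominator P(x_1,\ldots,x_n) is the same for every class and can be dropped).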

The Naïve Bayes Classifier
(Graphical model: class node Flu with child nodes X1…X5 = fever, sinus, cough, runny nose, muscle-ache.)
Conditional independence assumption: attributes are independent of each other given the class. Multi-valued variables: multivariate model. Binary variables: multivariate Bernoulli model.
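Under this assumption the class-conditional likelihood factorizes, so only per-attribute conditionals need to be estimated:

P(x_1, x_2, \ldots, x_n \mid c_j) = \prod_{i=1}^{n} P(x_i \mid c_j)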

Learning the Model
(Graphical model: class node C with child nodes X1…X6.)
First attempt: maximum likelihood estimates, i.e. simply use the frequencies in the data.
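Concretely, with N(·) denoting counts over the N training examples, the maximum likelihood estimates are:

\hat{P}(c_j) = \dfrac{N(C = c_j)}{N}, \qquad \hat{P}(x_i \mid c_j) = \dfrac{N(X_i = x_i,\, C = c_j)}{N(C = c_j)}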

Problem with Max Likelihood
(Same graphical model: Flu with children fever, sinus, cough, runny nose, muscle-ache.)
What if we have seen no training cases where a patient had no flu and muscle aches? Then the estimate P(muscle-ache | no flu) is 0, and zero probabilities cannot be conditioned away, no matter the other evidence!

Smoothing to Improve Generalization on Test Data
Laplace (add-one) smoothing:

\hat{P}(x_{i,k} \mid c_j) = \dfrac{N(X_i = x_{i,k},\, C = c_j) + 1}{N(C = c_j) + k_i}

where k_i is the number of values of X_i.
Somewhat more subtle version:

\hat{P}(x_{i,k} \mid c_j) = \dfrac{N(X_i = x_{i,k},\, C = c_j) + m\,\hat{p}_{i,k}}{N(C = c_j) + m}

where \hat{p}_{i,k} is the overall fraction in the data where X_i = x_{i,k}, and m controls the extent of "smoothing".

Underflow Prevention Multiplying lots of probabilities, which are between 0 and 1 by definition, can result in floating-point underflow. Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities. Class with highest final un-normalized log probability score is still the most probable.
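A minimal sketch of log-space scoring in Python; the class priors and likelihood tables below reuse the toy <medium, red, circle> example from these slides, and the function name is ours:

```python
import math

def naive_bayes_log_score(features, priors, likelihoods):
    """Return the class with the highest un-normalized log-probability.

    priors:      dict class -> P(class)
    likelihoods: dict class -> dict feature_value -> P(feature_value | class)
    """
    best_class, best_score = None, float("-inf")
    for c, prior in priors.items():
        # Sum of logs instead of a product of probabilities avoids underflow.
        score = math.log(prior)
        for f in features:
            score += math.log(likelihoods[c][f])
        if score > best_score:
            best_class, best_score = c, score
    return best_class

priors = {"positive": 0.5, "negative": 0.5}
likelihoods = {
    "positive": {"medium": 0.1, "red": 0.9, "circle": 0.9},
    "negative": {"medium": 0.2, "red": 0.3, "circle": 0.3},
}
print(naive_bayes_log_score(["medium", "red", "circle"], priors, likelihoods))
# -> "positive"
```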

Probability Estimation Example
Training data:
Ex   Size    Color   Shape      Class
1    small   red     circle     positive
2    large   red     circle     positive
3    small   red     triangle   negative
4    large   blue    circle     negative
Maximum likelihood estimates from this data:
                  positive   negative
P(Y)                0.5        0.5
P(small | Y)        0.5        0.5
P(medium | Y)       0.0        0.0
P(large | Y)        0.5        0.5
P(red | Y)          1.0        0.5
P(blue | Y)         0.0        0.5
P(green | Y)        0.0        0.0
P(square | Y)       0.0        0.0
P(triangle | Y)     0.0        0.5
P(circle | Y)       1.0        0.5

Naïve Bayes Example
Assume the following (already estimated) parameters:
                  positive   negative
P(Y)                0.5        0.5
P(small | Y)        0.4
P(medium | Y)       0.1        0.2
P(large | Y)
P(red | Y)          0.9        0.3
P(blue | Y)         0.05
P(green | Y)
P(square | Y)
P(triangle | Y)
P(circle | Y)       0.9        0.3
Test instance: <medium, red, circle>
P(positive | X) = P(positive) · P(medium | positive) · P(red | positive) · P(circle | positive) / P(X)
                = 0.5 · 0.1 · 0.9 · 0.9 / P(X) = 0.0405 / P(X) = 0.0405 / 0.0495 = 0.8182
P(negative | X) = P(negative) · P(medium | negative) · P(red | negative) · P(circle | negative) / P(X)
                = 0.5 · 0.2 · 0.3 · 0.3 / P(X) = 0.009 / P(X) = 0.009 / 0.0495 = 0.1818
Since P(positive | X) + P(negative | X) = 0.0405 / P(X) + 0.009 / P(X) = 1, we get P(X) = 0.0405 + 0.009 = 0.0495.

Question How can we see the multivariate Naïve Bayes model as a generative model? A generative model produces the observed data by means of a probabilistic generation process: first generate a class cj according to the probability P(C); then, for each attribute, generate xi according to P(xi | C = cj).
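A minimal Python sketch of this generation process; the probability tables are hypothetical, chosen only for illustration:

```python
import random

# Hypothetical parameters for a two-class, two-attribute model.
P_class = {"positive": 0.5, "negative": 0.5}
P_attr = {
    "positive": {"Color": {"red": 0.9, "blue": 0.1},
                 "Shape": {"circle": 0.9, "square": 0.1}},
    "negative": {"Color": {"red": 0.3, "blue": 0.7},
                 "Shape": {"circle": 0.3, "square": 0.7}},
}

def sample_instance():
    """Generate one labeled instance: first draw a class, then each attribute."""
    c = random.choices(list(P_class), weights=P_class.values())[0]
    x = {attr: random.choices(list(dist), weights=dist.values())[0]
         for attr, dist in P_attr[c].items()}
    return c, x

print(sample_instance())  # e.g. ('positive', {'Color': 'red', 'Shape': 'circle'})
```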

Naïve Bayes Generative Model
(Figure: labeled training examples are generated by first drawing a class, positive or negative, and then drawing Size, Color, and Shape values from that class's "bags" of attribute values, e.g. lg/med/sm, red/blue/grn, circ/sqr/tri.)

Naïve Bayes Inference Problem
(Figure: a new instance <lg, red, circ> with unknown label. Which class's generative model is most likely to have produced it?)

Naïve Bayes for Text Classification Two models: Multivariate Bernoulli Model Multinomial Model

Model 1: Multivariate Bernoulli One feature Xw for each word in the dictionary; Xw = true (1) in document d if w appears in d. Naive Bayes assumption: given the document's topic, the appearance of one word in the document tells us nothing about the chances that another word appears. Parameter estimation: P(Xw = 1 | cj) is the fraction of documents of topic cj in which word w appears.
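In symbols, with D_{c_j} the training documents of class c_j and V the vocabulary:

\hat{P}(X_w = 1 \mid c_j) = \dfrac{|\{d \in D_{c_j} : w \in d\}|}{|D_{c_j}|}, \qquad P(d \mid c_j) = \prod_{w \in V} P(X_w = x_w(d) \mid c_j)

where x_w(d) = 1 if w appears in d and 0 otherwise; the product runs over every word in the vocabulary, so absent words contribute a factor of 1 - \hat{P}(X_w = 1 \mid c_j).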

Naïve Bayes Generative Model (Bernoulli)
(Figure: for each training document, first draw a class, positive or negative; then, for every vocabulary word w1, w2, w3, flip a class-specific coin to decide whether that word appears (yes) or not (no).)

Model 2: Multinomial
(Graphical model: class node Cat with one child node per word position, w1 … w6.)

Multinomial Distribution "The binomial distribution is the probability distribution of the number of "successes" in n independent Bernoulli trials, with the same probability of "success" on each trial. In a multinomial distribution, each trial results in exactly one of some fixed finite number k of possible outcomes, with probabilities p1, ..., pk (so that pi ≥ 0 for i = 1, ..., k and their sum is 1), and there are n independent trials. Then let the random variables Xi indicate the number of times outcome number i was observed over the n trials. X = (X1, …, Xk) follows a multinomial distribution with parameters n and p, where p = (p1, ..., pk)." (Wikipedia)
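The probability mass function, in the notation of the quote:

P(X_1 = x_1, \ldots, X_k = x_k) = \dfrac{n!}{x_1! \cdots x_k!}\, p_1^{x_1} \cdots p_k^{x_k}, \qquad \sum_{i=1}^{k} x_i = n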

Multinomial Naïve Bayes Class-conditional unigram language model. Attributes are text positions, values are words: one feature Xi for each word position in the document, whose possible values are all the words in the dictionary; the value of Xi is the word in position i. Naïve Bayes assumption: given the document's topic, the word in one position tells us nothing about the words in other positions. Too many possibilities!

Multinomial Naive Bayes Classifiers Second assumption: classification is independent of the positions of the words (word appearance does not depend on position), i.e. P(Xi = w | c) = P(Xj = w | c) for all positions i, j, every word w, and every class c. Using the same parameters for each position gives a bag-of-words model (over tokens).

Multinomial Naïve Bayes for Text Modeled as generating a bag of words for a document in a given category by repeatedly sampling with replacement from a vocabulary V = {w1, w2,…wm} based on the probabilities P(wj | ci). Smooth probability estimates with Laplace m-estimates assuming a uniform distribution over all words (p = 1/|V|) and m = |V|
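With n_{ij} the number of occurrences of word w_j across all training documents of class c_i, and n_i the total number of word occurrences in those documents, the Laplace m-estimate described above (p = 1/|V|, m = |V|) is:

P(w_j \mid c_i) = \dfrac{n_{ij} + 1}{n_i + |V|}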

Multinomial Naïve Bayes as a Generative Model for Text
(Figure: one "bag of words" per category, spam and legit, containing words such as science, bank, pamper, PM, feast, computer, Friday, king, printable, test, homework, coupons, dog, March, score, Food, May, exam, brand; a document is generated by repeatedly drawing words from its category's bag.)

Naïve Bayes Inference Problem
(Figure: a new document containing the words "Food" and "feast" with unknown label. Was it more likely generated from the spam bag or the legit bag?)

Naïve Bayes Classification
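For the multinomial model, the decision rule takes the product over word positions in the test document:

c_{NB} = \operatorname*{argmax}_{c_j \in C}\; P(c_j) \prod_{i \in \text{positions}} P(x_i \mid c_j)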

Parameter Estimation
Multivariate Bernoulli model: P(Xw = 1 | cj) is the fraction of documents of topic cj in which word w appears.
Multinomial model: P(Xi = w | cj) is the fraction of times word w appears across all documents of topic cj. (Equivalently, create a mega-document for topic j by concatenating all documents of that topic and use the frequency of w in the mega-document.)

Classification Multinomial vs Multivariate Bernoulli? Multinomial model is almost always more effective in text applications!
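A minimal sketch of trying both variants with scikit-learn; the toy documents reuse words from the figures above, and the labels are made up purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

# Tiny made-up corpus: 1 = spam, 0 = legit.
docs = ["printable coupons feast brand", "exam score homework March",
        "king pamper bank feast", "computer science test Friday"]
labels = [1, 0, 1, 0]

X = CountVectorizer().fit_transform(docs)   # word-count features

# Multivariate Bernoulli: binarizes counts, models presence/absence per word.
bernoulli = BernoulliNB().fit(X, labels)
# Multinomial: models word frequencies (bag of words over tokens).
multinomial = MultinomialNB().fit(X, labels)

print(bernoulli.predict(X), multinomial.predict(X))
```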

WebKB Experiment (1998) Classify webpages from CS departments into: student, faculty, course, project, etc. Train on ~5,000 hand-labeled web pages Cornell, Washington, U.Texas, Wisconsin Crawl and classify a new site (CMU) Results:

Naïve Bayes - Spam Assassin Naïve Bayes has found a home in spam filtering: Paul Graham's A Plan for Spam (a mutant with more mutant offspring...). Widely used in spam filters; classic Naive Bayes is superior when appropriately used (according to David D. Lewis). But spam filters also use many other things: black hole lists, etc. Many email topic filters also use NB classifiers.

Naïve Bayes on Spam Email
(Graph from: Jefferson Provost, Naive-Bayes vs. Rule-Learning in Classification of Email, UT Austin. http://www.cs.utexas.edu/users/jp/research/email.paper.pdf)

Naive Bayes is Not So Naive
Naïve Bayes took first and second place in the KDD-CUP 97 competition, among 16 (then) state-of-the-art algorithms. Goal: financial services industry direct-mail response prediction, i.e. predict whether the recipient of mail will actually respond to the advertisement (750,000 records).
Robust to irrelevant features: irrelevant features cancel each other out without affecting results. Very good in domains with many equally important features. A good baseline for text classification!
Very fast: learning requires one pass of counting over the data; testing is linear in the number of attributes and in document-collection size. Low storage requirements.