Naïve Bayes Classifier


Naïve Bayes Classifier
Adapted from slides by Ke Chen (University of Manchester) and YangQiu Song (MSRA)

Generative vs. Discriminative Classifiers
Training a classifier involves estimating f: X → Y, or P(Y|X).
Discriminative classifiers:
- Assume some functional form for P(Y|X)
- Estimate the parameters of P(Y|X) directly from the training data
Generative classifiers (also called "informative" by Rubinstein & Hastie):
- Assume some functional form for P(X|Y), P(Y)
- Estimate the parameters of P(X|Y), P(Y) directly from the training data
- Use Bayes rule to calculate P(Y|X = x)

Bayes Formula
$$P(C \mid X) = \frac{P(X \mid C)\,P(C)}{P(X)} \qquad\text{i.e.}\qquad \text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\text{evidence}}$$
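As a quick worked illustration of the roles of the three terms (the numbers are made up): suppose a class C has prior $P(C) = 0.01$, and an observation X has likelihood $P(X \mid C) = 0.9$ and $P(X \mid \neg C) = 0.05$. Then

$$P(C \mid X) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.05 \times 0.99} = \frac{0.009}{0.0585} \approx 0.154,$$

so even a strong likelihood cannot overcome a small prior on its own.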

Generative Model
[Figure: a class variable generating the observed features Color, Size, Texture, Weight, …]

Discriminative Model
[Figure: features Color, Size, Texture, Weight, … feeding into a classifier such as logistic regression that outputs the class directly]

Comparison
Generative models:
- Assume some functional form for P(X|Y), P(Y)
- Estimate the parameters of P(X|Y), P(Y) directly from the training data
- Use Bayes rule to calculate P(Y|X = x)
Discriminative models:
- Directly assume some functional form for P(Y|X)
- Estimate the parameters of P(Y|X) directly from the training data

Probability Basics
Prior, conditional and joint probability for random variables:
- Prior probability: $P(X)$
- Conditional probability: $P(X_1 \mid X_2)$, $P(X_2 \mid X_1)$
- Joint probability: $X = (X_1, X_2)$, $P(X) = P(X_1, X_2)$
- Relationship: $P(X_1, X_2) = P(X_1 \mid X_2)\,P(X_2) = P(X_2 \mid X_1)\,P(X_1)$
- Independence: $P(X_1 \mid X_2) = P(X_1)$, $P(X_2 \mid X_1) = P(X_2)$, hence $P(X_1, X_2) = P(X_1)\,P(X_2)$
- Bayesian rule: $P(C \mid X) = \dfrac{P(X \mid C)\,P(C)}{P(X)}$

Probability Basics
Quiz: We have two six-sided dice. When they are rolled, the following events can occur: (A) die 1 lands on side "3", (B) die 2 lands on side "1", and (C) the two dice sum to eight. Answer the following questions: what are P(A), P(B), and P(C)? What is P(A | C)? Are A and B independent? Are A and C?
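For reference, a worked set of answers, assuming fair and independent dice:

$$P(A) = \tfrac{1}{6}, \qquad P(B) = \tfrac{1}{6}, \qquad P(C) = \tfrac{5}{36},$$

since exactly five outcomes sum to eight: (2,6), (3,5), (4,4), (5,3), (6,2). A and B are independent, so $P(A, B) = P(A)\,P(B) = \tfrac{1}{36}$. A and C are not independent: $P(A \mid C) = \tfrac{1}{5} \neq P(A)$, because only one of the five sum-to-eight outcomes has die 1 showing 3.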

Probabilistic Classification
Establishing a probabilistic model for classification:
Discriminative model: model the posterior $P(C \mid X)$ directly, where $C = c_1, \ldots, c_L$ and $X = (X_1, \ldots, X_n)$.
[Figure: inputs $x_1, \ldots, x_n$ feed a single discriminative probabilistic classifier, which outputs $P(c_1 \mid x), \ldots, P(c_L \mid x)$]

Probabilistic Classification
Establishing a probabilistic model for classification (cont.):
Generative model: model the class-conditional likelihood $P(X \mid C)$, with one generative probabilistic model per class.
[Figure: separate generative models for Class 1, Class 2, …, Class L, each scoring the input $x = (x_1, \ldots, x_n)$ with $P(x \mid c_i)$]

Probabilistic Classification
MAP classification rule (MAP = Maximum A Posteriori):
- Assign x to $c^*$ if $P(C = c^* \mid X = x) > P(C = c \mid X = x)$ for all $c \neq c^*$, where $c, c^* \in \{c_1, \ldots, c_L\}$
Generative classification with the MAP rule:
- Apply the Bayesian rule to convert the likelihoods into posterior probabilities: $P(C = c_i \mid X = x) = \dfrac{P(X = x \mid C = c_i)\,P(C = c_i)}{P(X = x)} \propto P(X = x \mid C = c_i)\,P(C = c_i)$ for $i = 1, \ldots, L$
- Then apply the MAP rule
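In code the MAP rule is a one-line argmax. A minimal sketch in Python, where `priors` and `likelihood` are hypothetical stand-ins for estimates learned elsewhere:

```python
# MAP decision rule: pick the class maximizing likelihood * prior.
# `priors` maps each class label c to P(C=c); `likelihood(x, c)`
# returns P(X=x | C=c). Both names are placeholders, not from the slides.
def map_classify(x, priors, likelihood):
    scores = {c: likelihood(x, c) * priors[c] for c in priors}
    return max(scores, key=scores.get)
```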

Naïve Bayes
Bayes classification: $P(C \mid X) \propto P(X \mid C)\,P(C) = P(X_1, \ldots, X_n \mid C)\,P(C)$.
Difficulty: learning the joint probability $P(X_1, \ldots, X_n \mid C)$ is infeasible, since the number of parameter combinations grows exponentially in n.
Naïve Bayes classification:
- Assumption: all input attributes are conditionally independent given the class, so $P(X_1, \ldots, X_n \mid C) = P(X_1 \mid C) \cdots P(X_n \mid C)$
- MAP classification rule: assign $x' = (x_1', \ldots, x_n')$ to $c^*$ if $[P(x_1' \mid c^*) \cdots P(x_n' \mid c^*)]\,P(c^*) > [P(x_1' \mid c) \cdots P(x_n' \mid c)]\,P(c)$ for all $c \neq c^*$, $c, c^* \in \{c_1, \ldots, c_L\}$
For each class, the previous generative model thus decomposes into n generative models, each over a single input attribute.

Naïve Bayes
Naïve Bayes algorithm (for discrete input attributes):
Learning phase: given a training set S,
- For each target value $c_i$ ($i = 1, \ldots, L$): $\hat{P}(C = c_i) \leftarrow$ fraction of examples in S with label $c_i$
- For each attribute value $x_{jk}$ of each attribute $X_j$: $\hat{P}(X_j = x_{jk} \mid C = c_i) \leftarrow$ fraction of $c_i$-labeled examples with $X_j = x_{jk}$
- Output: conditional probability tables; for each $X_j$, a table of $N_j \times L$ elements, where $N_j$ is the number of values of $X_j$
Test phase: given an unknown instance $x' = (x_1', \ldots, x_n')$, look up the tables to assign the label $c^*$ to $x'$ if $[\hat{P}(x_1' \mid c^*) \cdots \hat{P}(x_n' \mid c^*)]\,\hat{P}(c^*) > [\hat{P}(x_1' \mid c) \cdots \hat{P}(x_n' \mid c)]\,\hat{P}(c)$ for all $c \neq c^*$
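The two phases translate almost directly into code. Below is a minimal Python sketch of the discrete algorithm, with plain maximum-likelihood counts and no smoothing; all names are ours, not from the slides:

```python
from collections import Counter, defaultdict

def train_naive_bayes(examples):
    """Learning phase: estimate P(C=c) and P(Xj=x | C=c) by counting.
    `examples` is a list of (attribute_tuple, label) pairs."""
    label_counts = Counter(label for _, label in examples)
    priors = {c: n / len(examples) for c, n in label_counts.items()}
    # cond[(j, x, c)] = number of c-labeled examples whose j-th attribute is x
    cond = defaultdict(int)
    for attrs, label in examples:
        for j, x in enumerate(attrs):
            cond[(j, x, label)] += 1
    likelihoods = {key: n / label_counts[key[2]] for key, n in cond.items()}
    return priors, likelihoods

def classify(x, priors, likelihoods):
    """Test phase: MAP rule over the lookup tables."""
    def score(c):
        s = priors[c]
        for j, xj in enumerate(x):
            s *= likelihoods.get((j, xj, c), 0.0)  # unseen value -> probability 0
        return s
    return max(priors, key=score)
```

Note the `0.0` fallback for unseen attribute values: it is exactly the zero-probability problem discussed later, which smoothing is meant to fix.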

Example: Play Tennis
[Table: the standard 14-example Play Tennis training set, with attributes Outlook, Temperature, Humidity, Wind and the class label PlayTennis]

Example
Learning Phase

Outlook      Play=Yes  Play=No
Sunny        2/9       3/5
Overcast     4/9       0/5
Rain         3/9       2/5

Temperature  Play=Yes  Play=No
Hot          2/9       2/5
Mild         4/9       2/5
Cool         3/9       1/5

Humidity     Play=Yes  Play=No
High         3/9       4/5
Normal       6/9       1/5

Wind         Play=Yes  Play=No
Strong       3/9       3/5
Weak         6/9       2/5

P(Play=Yes) = 9/14    P(Play=No) = 5/14

Example
Test Phase
Given a new instance x' = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong).
Look up the tables:
- P(Outlook=Sunny|Play=Yes) = 2/9, P(Outlook=Sunny|Play=No) = 3/5
- P(Temperature=Cool|Play=Yes) = 3/9, P(Temperature=Cool|Play=No) = 1/5
- P(Humidity=High|Play=Yes) = 3/9, P(Humidity=High|Play=No) = 4/5
- P(Wind=Strong|Play=Yes) = 3/9, P(Wind=Strong|Play=No) = 3/5
- P(Play=Yes) = 9/14, P(Play=No) = 5/14
Apply the MAP rule:
- P(Yes|x') ∝ [P(Sunny|Yes) P(Cool|Yes) P(High|Yes) P(Strong|Yes)] P(Play=Yes) = 0.0053
- P(No|x') ∝ [P(Sunny|No) P(Cool|No) P(High|No) P(Strong|No)] P(Play=No) = 0.0206
Since P(Yes|x') < P(No|x'), we label x' as "No".
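As a sanity check, the same computation in a few lines of Python, with the fractions read straight off the tables above:

```python
# Unnormalized posterior scores for x' = (Sunny, Cool, High, Strong),
# multiplying the looked-up conditionals by the class prior.
yes = (2/9) * (3/9) * (3/9) * (3/9) * (9/14)   # ~0.0053
no  = (3/5) * (1/5) * (4/5) * (3/5) * (5/14)   # ~0.0206
print("Play=Yes" if yes > no else "Play=No")    # -> Play=No
```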

Relevant Issues
Violation of the independence assumption:
- For many real-world tasks, the attributes are not conditionally independent: $P(X_1, \ldots, X_n \mid C) \neq P(X_1 \mid C) \cdots P(X_n \mid C)$
- Nevertheless, naïve Bayes works surprisingly well anyway!
Zero conditional probability problem:
- If no training example of class $c_i$ contains the attribute value $X_j = a_{jk}$, then $\hat{P}(X_j = a_{jk} \mid C = c_i) = 0$, and during testing the whole product $\hat{P}(x_1' \mid c_i) \cdots \hat{P}(a_{jk} \mid c_i) \cdots \hat{P}(x_n' \mid c_i)$ collapses to 0 regardless of the other attributes
- For a remedy, estimate conditional probabilities with the m-estimate: $\hat{P}(X_j = a_{jk} \mid C = c_i) = \dfrac{n_c + m p}{n + m}$, where $n$ is the number of training examples with $C = c_i$, $n_c$ the number of those with $X_j = a_{jk}$, $p$ a prior estimate (e.g., uniform, $p = 1/N_j$), and $m$ the weight given to the prior (a number of "virtual" examples)
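A one-function sketch of the m-estimate in Python (names are ours). With a uniform prior $p = 1/N_j$ and $m = N_j$, it reduces to the familiar Laplace add-one smoothing $(n_c + 1)/(n + N_j)$:

```python
def smoothed_likelihood(count_x_and_c, count_c, num_values, m=1.0):
    """m-estimate of P(Xj = x | C = c) with a uniform prior p = 1/num_values.
    Setting m = num_values gives Laplace add-one smoothing."""
    p = 1.0 / num_values
    return (count_x_and_c + m * p) / (count_c + m)
```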

Relevant Issues
Continuous-valued input attributes:
- An attribute can take uncountably many values, so conditional probability tables are impossible; instead, model the conditional probability with a normal distribution: $\hat{P}(X_j \mid C = c_i) = \dfrac{1}{\sqrt{2\pi}\,\sigma_{ji}} \exp\!\left(-\dfrac{(X_j - \mu_{ji})^2}{2\sigma_{ji}^2}\right)$, where $\mu_{ji}$ and $\sigma_{ji}$ are the mean and standard deviation of attribute $X_j$ over the training examples of class $c_i$
- Learning phase: for $X = (X_1, \ldots, X_n)$ and $C = c_1, \ldots, c_L$, output $n \times L$ normal distributions and $\hat{P}(C = c_i)$, $i = 1, \ldots, L$
- Test phase: given an unknown instance $x' = (x_1', \ldots, x_n')$, calculate the conditional probabilities with the normal distributions instead of table look-ups, then apply the MAP rule to make a decision
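A minimal Gaussian naïve Bayes sketch along these lines, in plain NumPy; all names are ours. Working in log space avoids the numerical underflow that multiplying many small densities would cause:

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Learning phase: one (mean, std) pair per attribute per class, plus
    class priors. X is an (m, n) float array, y a length-m label array."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),        # prior P(C=c)
                     Xc.mean(axis=0),         # mu_{ji} for each attribute j
                     Xc.std(axis=0) + 1e-9)   # sigma_{ji} (epsilon avoids /0)
    return params

def predict_gaussian_nb(x, params):
    """Test phase: evaluate each normal density and apply the MAP rule."""
    def log_score(prior, mu, sigma):
        log_pdf = -0.5 * ((x - mu) / sigma) ** 2 - np.log(np.sqrt(2 * np.pi) * sigma)
        return np.log(prior) + log_pdf.sum()
    return max(params, key=lambda c: log_score(*params[c]))
```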

Conclusions
Naïve Bayes is based on the conditional independence assumption:
- Training is very easy and fast: it only requires considering each attribute in each class separately
- Testing is straightforward: just look up tables, or calculate conditional probabilities with normal distributions
A popular generative model:
- Performance competitive with most state-of-the-art classifiers, even when the independence assumption is violated
- Many successful applications, e.g., spam mail filtering
- A good candidate for a base learner in ensemble learning
- Apart from classification, naïve Bayes can do more…

Extra Slides

Naïve Bayes (1)
Revisit the posterior: $P(Y = y \mid X = x) = \dfrac{P(X = x, Y = y)}{P(X = x)}$, which is equal to $\dfrac{P(X = x \mid Y = y)\,P(Y = y)}{\sum_{y'} P(X = x \mid Y = y')\,P(Y = y')}$.
Naïve Bayes assumes conditional independence: $P(X = x \mid Y = y) = \prod_i P(X_i = x_i \mid Y = y)$.
Then the inference of the posterior is $P(Y = y \mid X = x) = \dfrac{\left[\prod_i P(X_i = x_i \mid Y = y)\right] P(Y = y)}{\sum_{y'} \left[\prod_i P(X_i = x_i \mid Y = y')\right] P(Y = y')}$.

Naïve Bayes (2)
Training: the observations are multinomial; supervised, with label information.
- Maximum likelihood estimation (MLE): $\hat{P}(X_i = x \mid Y = y) = \dfrac{\#\{X_i = x,\, Y = y\}}{\#\{Y = y\}}$, $\hat{P}(Y = y) = \dfrac{\#\{Y = y\}}{N}$
- Maximum a posteriori (MAP): put a Dirichlet prior on the multinomial parameters, which amounts to adding pseudo-counts to the observed counts before normalizing
Classification: assign $y^* = \arg\max_y \hat{P}(Y = y) \prod_i \hat{P}(X_i = x_i \mid Y = y)$.

Naïve Bayes (3)
What if we have continuous $X_i$?
- Generative training: fit one Gaussian per attribute per class, estimating $\mu_{iy}$ and $\sigma_{iy}$ from the examples of class $y$
- Prediction: plug the Gaussian densities $P(X_i = x_i \mid Y = y) = \mathcal{N}(x_i;\, \mu_{iy}, \sigma_{iy}^2)$ into the same MAP rule

Naïve Bayes (4)
Problems:
- Features may overlap and may not be independent, e.g., the size and weight of a tiger are strongly correlated
- We use a joint distribution estimation (P(X|Y), P(Y)) to solve a conditional problem (P(Y|X = x)), which is solving a harder problem than we need to
Can we train discriminatively instead? Yes, as sketched below:
- Logistic regression
- Regularization
- Gradient ascent
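To make the discriminative alternative concrete, here is a minimal logistic-regression sketch trained by gradient ascent on the L2-regularized log-likelihood, in plain NumPy; all names and hyperparameter values are illustrative, not prescribed by the slides:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, lam=0.01, steps=1000):
    """Maximize sum_i log P(y_i | x_i; w) - (lam/2)||w||^2 by gradient ascent.
    X is (m, n), y has entries in {0, 1}; a bias column models the intercept."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        grad = Xb.T @ (y - sigmoid(Xb @ w)) - lam * w  # gradient of the objective
        w += lr * grad                                  # ascent step
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (sigmoid(Xb @ w) >= 0.5).astype(int)
```

Unlike naïve Bayes, nothing here models P(X|Y): the parameters of P(Y|X) are fit directly, which is exactly the generative-vs-discriminative distinction from the start of the deck.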