Naïve Bayes Classifier
Ke Chen
Modified and extended by Longin Jan Latecki


2 Probability Basics
Prior, conditional and joint probability
– Prior probability: P(X)
– Conditional probability: P(X1|X2), P(X2|X1)
– Joint probability: X = (X1, X2), P(X) = P(X1, X2)
– Relationship: P(X1, X2) = P(X2|X1)P(X1) = P(X1|X2)P(X2)
– Independence: P(X2|X1) = P(X2), P(X1|X2) = P(X1), P(X1, X2) = P(X1)P(X2)
Bayesian Rule: P(C|X) = P(X|C)P(C) / P(X), i.e. posterior = (likelihood × prior) / evidence
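
As a tiny numeric illustration of the Bayesian rule (a minimal sketch in Python; the numbers are invented for this example, not taken from the slides):

prior = {"spam": 0.6, "ham": 0.4}             # P(C)
likelihood = {"spam": 0.8, "ham": 0.1}        # P(X|C) for some observed evidence X
evidence = sum(likelihood[c] * prior[c] for c in prior)               # P(X)
posterior = {c: likelihood[c] * prior[c] / evidence for c in prior}   # P(C|X)
print(posterior)   # {'spam': 0.923..., 'ham': 0.0769...}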

3 Probabilistic Classification
Establishing a probabilistic model for classification
– Discriminative model: model P(C|X) directly, with C = c1, ..., cL and X = (X1, ..., Xn)
– Generative model: model P(X|C) for each class, with C = c1, ..., cL and X = (X1, ..., Xn)
MAP classification rule
– MAP: Maximum A Posteriori
– Assign x to c* if P(C = c*|X = x) > P(C = c|X = x) for all c ≠ c*, c = c1, ..., cL
Generative classification with the MAP rule
– Apply the Bayesian rule to convert: P(C|X) = P(X|C)P(C) / P(X) ∝ P(X|C)P(C)

4 Naïve Bayes
Bayes classification: P(C|X) ∝ P(X|C)P(C) = P(X1, ..., Xn|C)P(C)
– Difficulty: learning the joint probability P(X1, ..., Xn|C) is infeasible
Naïve Bayes classification
– Make the assumption that all input attributes are conditionally independent given the class:
  P(X1, ..., Xn|C) = P(X1|C)P(X2|C) ··· P(Xn|C)
– MAP classification rule: assign x' = (a1, ..., an) to c* if
  [P(a1|c*) ··· P(an|c*)]P(c*) > [P(a1|c) ··· P(an|c)]P(c) for all c ≠ c*, c = c1, ..., cL

5 Naïve Bayes
Naïve Bayes Algorithm (for discrete input attributes)
– Learning Phase: Given a training set S,
  For each target value ci (i = 1, ..., L): estimate P(C = ci) from S;
  For every attribute value xjk of each attribute Xj (j = 1, ..., n; k = 1, ..., Nj): estimate P(Xj = xjk | C = ci) from S;
  Output: conditional probability tables; for each Xj, a table with Nj × L elements
– Test Phase: Given an unknown instance X' = (a'1, ..., a'n),
  Look up the tables to assign the label c* to X' if
  [P(a'1|c*) ··· P(a'n|c*)]P(c*) > [P(a'1|c) ··· P(a'n|c)]P(c) for all c ≠ c*, c = c1, ..., cL
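
A minimal sketch of the two phases above in Python; the dictionary-based data layout and the function names are illustrative choices, not part of the original slides.

from collections import Counter, defaultdict

def learn_naive_bayes(examples, labels):
    """Learning phase: estimate P(C = c) and P(Xj = x | C = c) from a training set.
    examples: list of dicts mapping attribute name -> value; labels: list of class labels."""
    total = len(labels)
    class_counts = Counter(labels)
    priors = {c: class_counts[c] / total for c in class_counts}
    counts = defaultdict(int)                      # counts[(attribute, value, class)]
    for x, c in zip(examples, labels):
        for attr, val in x.items():
            counts[(attr, val, c)] += 1
    likelihoods = {k: v / class_counts[k[2]] for k, v in counts.items()}
    return priors, likelihoods

def classify(x_new, priors, likelihoods):
    """Test phase: return the label c* maximizing P(c) * prod_j P(aj | c)."""
    best_label, best_score = None, -1.0
    for c, prior in priors.items():
        score = prior
        for attr, val in x_new.items():
            # An unseen (attribute, value, class) combination contributes probability 0
            # (see the zero-probability issue on slide 9).
            score *= likelihoods.get((attr, val, c), 0.0)
        if score > best_score:
            best_label, best_score = c, score
    return best_label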

6 Example
Example: Play Tennis

7 Learning Phase

P(Outlook=o|Play=b)
  Outlook    Play=Yes  Play=No
  Sunny      2/9       3/5
  Overcast   4/9       0/5
  Rain       3/9       2/5

P(Temperature=t|Play=b)
  Temperature  Play=Yes  Play=No
  Hot          2/9       2/5
  Mild         4/9       2/5
  Cool         3/9       1/5

P(Humidity=h|Play=b)
  Humidity   Play=Yes  Play=No
  High       3/9       4/5
  Normal     6/9       1/5

P(Wind=w|Play=b)
  Wind       Play=Yes  Play=No
  Strong     3/9       3/5
  Weak       6/9       2/5

P(Play=Yes) = 9/14
P(Play=No) = 5/14

8 Example
Test Phase
– Given a new instance x' = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong), compare P(Play=Yes|x') with P(Play=No|x')
– Look up the tables:
  P(Outlook=Sunny|Play=Yes) = 2/9        P(Outlook=Sunny|Play=No) = 3/5
  P(Temperature=Cool|Play=Yes) = 3/9     P(Temperature=Cool|Play=No) = 1/5
  P(Humidity=High|Play=Yes) = 3/9        P(Humidity=High|Play=No) = 4/5
  P(Wind=Strong|Play=Yes) = 3/9          P(Wind=Strong|Play=No) = 3/5
  P(Play=Yes) = 9/14                     P(Play=No) = 5/14
– MAP rule:
  P(Play=Yes|x') ∝ [P(Sunny|Yes)P(Cool|Yes)P(High|Yes)P(Strong|Yes)]P(Play=Yes) = (2/9)(3/9)(3/9)(3/9)(9/14) ≈ 0.0053
  P(Play=No|x')  ∝ [P(Sunny|No)P(Cool|No)P(High|No)P(Strong|No)]P(Play=No) = (3/5)(1/5)(4/5)(3/5)(5/14) ≈ 0.0206
  Given the fact that P(Play=Yes|x') < P(Play=No|x'), we label x' as "No".
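
The two products on this slide can be checked with a few lines of Python; the fractions come directly from the tables on slide 7.

# Unnormalized posterior scores for x' = (Sunny, Cool, High, Strong)
score_yes = (2/9) * (3/9) * (3/9) * (3/9) * (9/14)   # P(x'|Yes) * P(Yes)
score_no  = (3/5) * (1/5) * (4/5) * (3/5) * (5/14)   # P(x'|No)  * P(No)
print(round(score_yes, 4), round(score_no, 4))        # 0.0053 0.0206 -> predict "No"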

9 Relevant Issues
Violation of the Independence Assumption
– For many real-world tasks, P(X1, ..., Xn|C) ≠ P(X1|C) ··· P(Xn|C)
– Nevertheless, naïve Bayes works surprisingly well anyway!
Zero Conditional Probability Problem
– If no training example of class ci contains the attribute value Xj = ajk, then P(Xj = ajk | C = ci) = 0
– In this circumstance, during test the whole product [P(a1|ci) ··· P(ajk|ci) ··· P(an|ci)]P(ci) = 0, no matter how large the other factors are
– For a remedy, conditional probabilities are estimated with Laplace smoothing (the m-estimate):
  P(Xj = ajk | C = ci) = (nc + m·p) / (n + m)
  where n is the number of training examples with C = ci, nc is the number of those examples with Xj = ajk, p is a prior estimate (usually p = 1/t for t possible values of Xj), and m is the weight given to the prior (the equivalent number of "virtual" examples).
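
A short sketch of the m-estimate above; the argument names follow the definitions in the slide, and the example reuses the Outlook=Overcast, Play=No cell (0 of 5) from the tables on slide 7.

def m_estimate(n_c, n, p, m):
    """Smoothed estimate of P(Xj = ajk | C = ci).
    n_c: class-ci examples with Xj = ajk; n: class-ci examples;
    p: prior estimate (typically 1/t for t possible attribute values);
    m: weight of the prior. m = 0 reduces to the raw relative frequency."""
    return (n_c + m * p) / (n + m)

# Outlook=Overcast never occurs with Play=No (0 of 5 examples; Outlook has 3 values)
print(m_estimate(0, 5, 1/3, 0))   # 0.0    -> wipes out the whole product
print(m_estimate(0, 5, 1/3, 1))   # ~0.056 -> no longer zero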

10 Relevant Issues
Continuous-valued Input Attributes
– There are infinitely many possible values for such an attribute, so frequency tables cannot be used
– The conditional probability is instead modeled with the normal distribution:
  P(Xj = x | C = ci) = 1 / (sqrt(2π) σji) · exp(−(x − μji)² / (2σji²))
  where μji is the mean and σji the standard deviation of attribute Xj over the training examples of class ci
– Learning Phase: for X = (X1, ..., Xn) and C = c1, ..., cL,
  Output: n × L normal distributions and the priors P(C = ci), i = 1, ..., L
– Test Phase: given an unknown instance X' = (a'1, ..., a'n),
  Calculate the conditional probabilities with all the normal distributions, then
  Apply the MAP rule to make a decision
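
A minimal sketch of this Gaussian variant; the function names are illustrative, and the density matches the normal distribution formula above.

import math

def fit_gaussian(values):
    """Learning phase for one (attribute, class) pair: estimate mean and std from the training values."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return mu, math.sqrt(var)

def gaussian_likelihood(x, mu, sigma):
    """P(Xj = x | C = ci) under a normal distribution with mean mu and standard deviation sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)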

11 Conclusions
Naïve Bayes is based on the (conditional) independence assumption
– Training is very easy and fast: it only requires counting each attribute value within each class separately
– Testing is straightforward: just look up the probability tables (or evaluate the normal distributions) and multiply
A popular generative model
– Performance is competitive with most state-of-the-art classifiers, even when the independence assumption is violated
– Many successful applications, e.g., spam mail filtering
– Apart from classification, naïve Bayes can do more...

Homework
1. Compute P(Play=Yes|x') and P(Play=No|x') with m=0 and with m=1 for
   x' = (Outlook=Overcast, Temperature=Cool, Humidity=High, Wind=Strong).
   Does the result change?
2. Your training data contains 100 emails with the following statistics:
   – 60 of those 100 emails (60%) are spam
     – 48 of those 60 emails (80%) that are spam have the word "buy"
     – 42 of those 60 emails (70%) that are spam have the word "win"
   – 40 of those 100 emails (40%) aren't spam
     – 4 of those 40 emails (10%) that aren't spam have the word "buy"
     – 6 of those 40 emails (15%) that aren't spam have the word "win"
   A new email has been received and it has the words "buy" and "win". Classify it and send it either to the inbox or to the spam folder. For this you need to compute P(spam=1 | buy=1, win=1) and P(spam=0 | buy=1, win=1), where we interpret spam, buy, and win as binary random variables such that spam=1 means that the email is spam, spam=0 means that it is not spam, buy=1 means that the word "buy" is present in the email, and similarly for win=1. You need to write the formulas you are using. (Here m=0.)