Generative Models and Naïve Bayes


Generative Models and Naïve Bayes Ke Chen Reading: [14.3, EA], [3.5, KPM], [1.5.4, CMB]

Outline
Background and Probability Basics
Probabilistic Classification Principle
  Probabilistic discriminative models
  Generative models and their application to classification
  MAP and converting generative into discriminative
Naïve Bayes – a generative model
  Principle and algorithms (discrete vs. continuous)
  Example: Play Tennis
  Zero conditional probability and its treatment
Summary

Background
There are three methodologies:
a) Model a classification rule directly. Examples: k-NN, linear classifiers, SVM, neural nets, ...
b) Model the probability of class membership given the input data. Examples: logistic regression, probabilistic neural nets (softmax), ...
c) Build a probabilistic model of the data within each class. Examples: naïve Bayes, model-based classifiers, ...
This suggests an important ML taxonomy for learning models:
  probabilistic models vs. non-probabilistic models
  discriminative models vs. generative models

Background
Based on this taxonomy, we can see the essence of different supervised learning models (classifiers) more clearly.

                  Probabilistic                        Non-Probabilistic
Discriminative    Logistic regression,                 k-NN, linear classifiers, SVM,
                  probabilistic neural nets, ...       neural networks, ...
Generative        Naïve Bayes, model-based             N.A.
                  (e.g., GMM), ...

Probability Basics
Prior, conditional and joint probability for random variables:
Prior probability: P(x)
Conditional probability: P(x1|x2), P(x2|x1)
Joint probability: P(x1, x2)
Relationship: P(x1, x2) = P(x1|x2) P(x2) = P(x2|x1) P(x1)
Independence: P(x1|x2) = P(x1), P(x2|x1) = P(x2), hence P(x1, x2) = P(x1) P(x2)
Bayes' rule: P(c|x) = P(x|c) P(c) / P(x), i.e., posterior = likelihood x prior / evidence.
In Bayes' rule, P(x|c) P(c) is the generative side and P(c|x) the discriminative side.

Probabilistic Classification Principle
Establishing a probabilistic model for classification
Discriminative model: a discriminative probabilistic classifier models P(c|x) directly.
To train a discriminative classifier (regardless of its probabilistic or non-probabilistic nature), all training examples of the different classes must be used jointly to build up a single classifier.
A probabilistic classifier outputs L probabilities, one per class label, whereas a non-probabilistic discriminative classifier outputs only a single label.

Probabilistic Classification Principle
Establishing a probabilistic model for classification (cont.)
Generative model (must be probabilistic): one generative probabilistic model P(x|c) per class, for Class 1 through Class L.
The L probabilistic models have to be trained independently; each is trained only on the examples of its own label.
For a given input, the L models output L probabilities, one per class.
"Generative" means that such a model can produce data following the class-conditional distribution via sampling.
Practical consideration: a generative model allows the classifier to be extended to a new class (unseen in the previous learning) much more efficiently, since only one additional per-class model has to be trained.

Probabilistic Classification Principle
Maximum A Posteriori (MAP) classification rule
For an input x, find the largest of the L probabilities P(c_1|x), ..., P(c_L|x) output by a discriminative probabilistic classifier; assign x to label c* if P(c*|x) is the largest.
Generative classification with the MAP rule:
Apply Bayes' rule to convert the class-conditional probabilities into posterior probabilities,
P(c_i|x) = P(x|c_i) P(c_i) / P(x), for i = 1, ..., L,
where P(x) is a common factor for all L probabilities and can be dropped.
Then apply the MAP rule to assign the label: c* = argmax_i P(x|c_i) P(c_i).
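To make the rule concrete, here is a minimal Python sketch of generative MAP classification; the class names and probability values below are invented for illustration and are not from the slides.

def map_classify(likelihoods, priors):
    # MAP rule: pick the label c* maximising P(x|c) * P(c); the evidence P(x)
    # is a common factor for all classes and is ignored.
    scores = {c: likelihoods[c] * priors[c] for c in priors}
    return max(scores, key=scores.get), scores

# Hypothetical two-class example: assume each per-class generative model has
# already produced P(x|c) for the given input x.
likelihoods = {"c1": 0.02, "c2": 0.05}   # P(x|c1), P(x|c2)
priors = {"c1": 0.7, "c2": 0.3}          # P(c1), P(c2)
label, scores = map_classify(likelihoods, priors)
print(label, scores)                     # "c2" wins: 0.05*0.3 > 0.02*0.7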

Naïve Bayes
Bayes classification: P(c|x) is proportional to P(x|c) P(c) = P(x_1, ..., x_n|c) P(c).
Difficulty: learning the joint probability P(x_1, ..., x_n|c) is often infeasible!
Naïve Bayes classification
Assume all input features are class-conditionally independent:
P(x_1, ..., x_n|c) = P(x_1|c) P(x_2|c) ... P(x_n|c).
Apply the MAP classification rule: assign x' = (a_1, ..., a_n) to c* if
[P(a_1|c*) ... P(a_n|c*)] P(c*) > [P(a_1|c) ... P(a_n|c)] P(c) for every c other than c*.
Under the independence assumption, the generative model for a class decomposes into n generative models, one per single input feature.

Naïve Bayes
Algorithm: Discrete-valued Features
Learning Phase: for each class c_j (j = 1, ..., L), estimate P(c_j) and, for every value a_i of every feature x_i, estimate P(x_i = a_i|c_j) from the training examples (e.g., as relative frequencies); output the conditional probability look-up tables.
Test Phase: given an unknown instance x' = (a'_1, ..., a'_n), look up the tables and assign the label c* that maximises [P(a'_1|c*) ... P(a'_n|c*)] P(c*).
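Below is a minimal Python sketch of this discrete-feature algorithm, assuming examples are given as tuples of feature values; the function names (train_nb, predict_nb) are mine, not from the slides.

from collections import Counter, defaultdict

def train_nb(examples, labels):
    # examples: list of feature-value tuples; labels: list of class labels.
    # Estimates P(c) and P(x_i = a | c) as relative frequencies.
    n = len(labels)
    class_counts = Counter(labels)
    priors = {c: class_counts[c] / n for c in class_counts}
    cond = defaultdict(lambda: defaultdict(Counter))   # cond[c][i][a] = count
    for x, c in zip(examples, labels):
        for i, a in enumerate(x):
            cond[c][i][a] += 1
    def p(a, i, c):
        # P(x_i = a | c); an unseen value yields 0 (the zero-probability problem treated later)
        return cond[c][i][a] / class_counts[c]
    return priors, p

def predict_nb(x, priors, p):
    # MAP rule under the independence assumption: argmax_c P(c) * prod_i P(x_i | c)
    scores = {}
    for c in priors:
        score = priors[c]
        for i, a in enumerate(x):
            score *= p(a, i, c)
        scores[c] = score
    return max(scores, key=scores.get), scores

Trained on the 14 Play Tennis examples used in the following slides, train_nb should reproduce the relative-frequency tables shown there.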

Example: Play Tennis
Training data: 14 examples (9 with PlayTennis=Yes, 5 with PlayTennis=No), each described by four discrete features: Outlook, Temperature, Humidity and Wind.

Example
Learning Phase
P(Play=Yes) = 9/14    P(Play=No) = 5/14

Outlook       Play=Yes   Play=No
Sunny         2/9        3/5
Overcast      4/9        0/5
Rain          3/9        2/5

Temperature   Play=Yes   Play=No
Hot           2/9        2/5
Mild          4/9        2/5
Cool          3/9        1/5

Humidity      Play=Yes   Play=No
High          3/9        4/5
Normal        6/9        1/5

Wind          Play=Yes   Play=No
Strong        3/9        3/5
Weak          6/9        2/5

Example
Test Phase
Given a new instance x' = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong), predict its label.
Look up the tables obtained in the learning phase:
P(Outlook=Sunny|Play=Yes) = 2/9        P(Outlook=Sunny|Play=No) = 3/5
P(Temperature=Cool|Play=Yes) = 3/9     P(Temperature=Cool|Play=No) = 1/5
P(Humidity=High|Play=Yes) = 3/9        P(Humidity=High|Play=No) = 4/5
P(Wind=Strong|Play=Yes) = 3/9          P(Wind=Strong|Play=No) = 3/5
P(Play=Yes) = 9/14                     P(Play=No) = 5/14
Decision making with the MAP rule:
P(Yes|x') ≈ [P(Sunny|Yes)P(Cool|Yes)P(High|Yes)P(Strong|Yes)] P(Play=Yes) = 0.0053
P(No|x') ≈ [P(Sunny|No)P(Cool|No)P(High|No)P(Strong|No)] P(Play=No) = 0.0206
Since P(Yes|x') < P(No|x'), we label x' as "No".
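The two scores can be checked with a couple of lines of Python, plugging in the table entries above (this simply reproduces the slide's arithmetic):

# Unnormalised posteriors for x' = (Sunny, Cool, High, Strong)
p_yes = (2/9) * (3/9) * (3/9) * (3/9) * (9/14)
p_no = (3/5) * (1/5) * (4/5) * (3/5) * (5/14)
print(round(p_yes, 4), round(p_no, 4))   # 0.0053 0.0206 -> predict "No"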

Naïve Bayes
Algorithm: Continuous-valued Features
A continuous-valued feature takes uncountably many values, so its conditional probabilities cannot be tabulated; they are often modelled with the normal distribution:
P(x_i|c_j) = (1 / (sqrt(2*pi) * sigma_ji)) * exp(-(x_i - mu_ji)^2 / (2 * sigma_ji^2)),
where mu_ji and sigma_ji are the mean and standard deviation of feature x_i over the training examples of class c_j.
Learning Phase: estimate one normal distribution per feature per class (n x L distributions for n features and L classes) along with the priors P(c_j); output these distributions and priors.
Test Phase: given an unknown instance x' = (a'_1, ..., a'_n), instead of looking up tables, calculate the conditional probabilities with the normal distributions obtained in the learning phase, then apply the MAP rule to assign a label (the same as in the discrete case).

Naïve Bayes
Example: Continuous-valued Features
Temperature is naturally continuous-valued.
Yes: 25.2, 19.3, 18.5, 21.7, 20.1, 24.3, 22.8, 23.1, 19.8
No: 27.3, 30.1, 17.4, 29.5, 15.1
Estimate the mean and variance for each class:
mu_c = (1/N_c) * sum of x over the examples of class c,  sigma_c^2 = (1/N_c) * sum of (x - mu_c)^2 over the examples of class c.
Learning Phase: output two Gaussian models for P(temp|C), one for C=Yes and one for C=No.
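A small Python sketch of this learning phase under the Gaussian assumption (variable names are mine; whether to use the biased 1/N or unbiased 1/(N-1) variance estimate is a modelling choice, the biased one is used here):

import math

temps = {
    "Yes": [25.2, 19.3, 18.5, 21.7, 20.1, 24.3, 22.8, 23.1, 19.8],
    "No": [27.3, 30.1, 17.4, 29.5, 15.1],
}

params = {}
for c, xs in temps.items():
    mu = sum(xs) / len(xs)                            # class-conditional mean
    var = sum((x - mu) ** 2 for x in xs) / len(xs)    # class-conditional variance
    params[c] = (mu, var)

def gaussian(x, mu, var):
    # Normal density used as the class-conditional probability P(temp = x | c)
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Test phase: plug these densities into the same MAP rule as in the discrete case,
# e.g. compare gaussian(22.0, *params["Yes"]) with gaussian(22.0, *params["No"]).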

Zero Conditional Probability
If no training example of a class contains a particular feature value, its estimated conditional probability is zero, and the whole product P(a_1|c) ... P(a_n|c) becomes zero regardless of the other features: we face a zero conditional probability problem during testing.
As a remedy, class-conditional probabilities are re-estimated with the m-estimate:
P_hat(a_i|c) = (n_c + m*p) / (n + m),
where n is the number of training examples of class c, n_c the number of those examples with x_i = a_i, p a prior estimate (usually uniform over the feature's possible values), and m the weight of the prior (the number of "virtual" examples).
Formally, the m-estimate is a smoothing technique also used in natural language processing; general smoothing techniques include Laplace (additive) smoothing (for details, see https://en.wikipedia.org/wiki/Additive_smoothing).

Zero Conditional Probability
Example: P(Outlook=Overcast|No) = 0 in the Play Tennis dataset.
Remedy: add m "virtual" examples (m is tunable, typically up to 1% of the number of training examples).
In this dataset, the number of training examples for the "No" class is n = 5. Assume we add m = 1 "virtual" example in the m-estimate treatment. The Outlook feature can take only 3 values, so p = 1/3.
Re-estimating P(Outlook|No) with the m-estimate gives
P(Sunny|No) = (3 + 1/3)/(5 + 1) = 10/18,  P(Overcast|No) = (0 + 1/3)/(5 + 1) = 1/18,  P(Rain|No) = (2 + 1/3)/(5 + 1) = 7/18,
which sum to 1 and no longer zero out the product.
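The same re-estimation written as a tiny Python helper (the function name is mine), confirming the three values above:

def m_estimate(n_c, n, m=1, p=1/3):
    # m-estimate: (n_c + m * p) / (n + m)
    return (n_c + m * p) / (n + m)

# Outlook given Play=No: n = 5 "No" examples, m = 1 virtual example, p = 1/3
for value, n_c in [("Sunny", 3), ("Overcast", 0), ("Rain", 2)]:
    print(value, m_estimate(n_c, 5))   # 10/18, 1/18 and 7/18 -- they sum to 1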

Summary
Probabilistic Classification Principle
Discriminative vs. generative models: learning P(c|x) vs. P(x|c)P(c)
Generative models for classification: Bayes' rule and the MAP rule
Naïve Bayes: the conditional independence assumption
Training and testing are very efficient.
The two data types (discrete and continuous) lead to two different learning algorithms.
Naïve Bayes: a popular generative model for classification
Performance is competitive with many state-of-the-art classifiers, even when the conditional independence assumption is violated.
Many successful applications, e.g., spam mail filtering.
A good candidate for a base learner in ensemble learning.