
1 Naïve Bayes A probabilistic ML algorithm

2 Axioms of Probability Theory All probabilities lie between 0 and 1: 0 <= P(A) <= 1. A true proposition has probability 1, a false one has probability 0: P(true) = 1, P(false) = 0. The probability of a disjunction is: P(A ∨ B) = P(A) + P(B) - P(A ∧ B).

3 Conditional Probability P(A | B) is the probability of A given B. It assumes that B is all and only the information known. Defined by: P(A | B) = P(A ∧ B) / P(B).

4 Independence A and B are independent iff: P(A | B) = P(A) and P(B | A) = P(B). Therefore, if A and B are independent: P(A ∧ B) = P(A) P(B). These two constraints are logically equivalent.
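To make the equivalence concrete, here is a minimal Python check on a toy distribution (the numbers are made up for illustration): once P(A ∧ B) = P(A)P(B) holds, the definition of conditional probability gives P(A | B) = P(A).

```python
# Toy joint probabilities for two events A and B (hypothetical values,
# chosen so that A and B are independent).
p_a = 0.4
p_b = 0.5
p_a_and_b = 0.2          # = 0.4 * 0.5

# Constraint 1: P(A ^ B) = P(A) * P(B)
print(abs(p_a_and_b - p_a * p_b) < 1e-12)    # True

# Constraint 2: P(A | B) = P(A ^ B) / P(B) equals P(A)
p_a_given_b = p_a_and_b / p_b
print(abs(p_a_given_b - p_a) < 1e-12)        # True
```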

5 Joint Distribution The joint probability distribution for a set of random variables X_1,…,X_n gives the probability of every combination of values: P(X_1,…,X_n). If all variables are discrete with v values each, this is an n-dimensional array with v^n entries, and all v^n entries must sum to 1. The probability of any conjunction (an assignment of values to some subset of the variables) can be calculated by summing the appropriate subset of values from the joint distribution. Therefore, all conditional probabilities can also be calculated. (Example on the slide: two 2x2 joint-distribution tables over Color {red, blue} x Shape {circle, square}, one for Class=positive and one for Class=negative.)
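As an illustration of how marginals and conditionals fall out of a joint distribution by summing entries, here is a small Python sketch; the eight probabilities are invented for the example and are not the values from the slide.

```python
# Hypothetical joint distribution P(Color, Shape, Class) stored as a dict.
# The eight made-up probabilities sum to 1.
joint = {
    ("red",  "circle", "pos"): 0.20, ("red",  "square", "pos"): 0.02,
    ("blue", "circle", "pos"): 0.02, ("blue", "square", "pos"): 0.01,
    ("red",  "circle", "neg"): 0.05, ("red",  "square", "neg"): 0.30,
    ("blue", "circle", "neg"): 0.20, ("blue", "square", "neg"): 0.20,
}

# Marginal: P(Class=pos) = sum over all (color, shape) combinations.
p_pos = sum(p for (c, s, y), p in joint.items() if y == "pos")

# Conditional: P(Class=pos | Color=red, Shape=circle)
#            = P(red, circle, pos) / P(red, circle)
p_red_circle = sum(p for (c, s, y), p in joint.items()
                   if c == "red" and s == "circle")
p_pos_given_red_circle = joint[("red", "circle", "pos")] / p_red_circle

print(p_pos)                    # 0.25
print(p_pos_given_red_circle)   # 0.8
```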

6 Probabilistic Classification Let Y be the random variable for the class, taking values {y_1, y_2, …, y_m} (m possible classifications for our instances). Let X be the random variable describing an instance as a vector of values for n features; let x_k be a possible value for X and x_ik a possible value for X_i. For classification, we need to compute P(Y=y_i | X=x_k) for i = 1…m, i.e. the objective is to classify a new unseen x_k by estimating the probability of each possible classification y_i given the feature values of the instance x_k.

7 Probabilistic Classification (2) However, given no other assumptions, this requires a table giving the probability of each category for each possible instance in the instance space, which is impossible to estimate accurately from a reasonably-sized training set. –Assuming that Y and all X_i are binary, we need 2^n entries to specify P(Y=pos | X=x_k) for each of the 2^n possible x_k, since P(Y=neg | X=x_k) = 1 - P(Y=pos | X=x_k). –Compare this with the 2^(n+1) - 1 entries needed for the full joint distribution P(Y, X_1, X_2, …, X_n).

8 Bayes Theorem P(A | B) = P(B | A) P(A) / P(B). Simple proof from the definition of conditional probability: P(A | B) = P(A ∧ B) / P(B) and P(B | A) = P(A ∧ B) / P(A) (def. cond. prob.), so P(A ∧ B) = P(B | A) P(A); substituting gives P(A | B) = P(B | A) P(A) / P(B). QED
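A quick numeric sanity check of the theorem (all values are made up): compute P(B) by total probability and then invert the conditioning with Bayes' rule.

```python
# Made-up values: prior P(A), and P(B | A), P(B | not A).
p_a, p_b_given_a, p_b_given_not_a = 0.3, 0.8, 0.1

# Total probability: P(B) = P(B|A) P(A) + P(B|~A) P(~A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes: P(A|B) = P(B|A) P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))   # 0.7742
```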

9 Bayesian Categorization For each classification value y_i we have (applying Bayes): P(Y=y_i | X=x_k) = P(X=x_k | Y=y_i) P(Y=y_i) / P(X=x_k). P(Y=y_i) (the prior) and P(X=x_k) can be estimated from the learning set D, since the categories are complete and disjoint.

10 Complete and Disjoint Complete: Y can only assume values in {y_1, …, y_m}, so Σ_i P(Y=y_i) = 1. Disjoint: Y cannot assume two values at once, so P(Y=y_i ∧ Y=y_j) = 0 for i ≠ j. If a set of categories is complete and disjoint, Z is a random variable, and z is any of its possible values, then: P(Z=z) = Σ_i P(Z=z ∧ Y=y_i) = Σ_i P(Z=z | Y=y_i) P(Y=y_i).

11 Bayesian Categorization (cont.) To know P(Y=y_i | X=x_k) we need to know: –Priors: P(Y=y_i) –Conditionals: P(X=x_k | Y=y_i) The priors P(Y=y_i) are easily estimated from data: –If n_i of the examples in D have value Y=y_i, then P(Y=y_i) = n_i / |D|

12 Bayesian Categorization (cont.) To summarize: we need to estimate P(Y=y_i | X=x_k), which (by Bayes) is equivalent to estimating P(X=x_k | Y=y_i) P(Y=y_i) / P(X=x_k). We know that P(Y=y_i) = n_i / |D|, and, since categories are complete and disjoint, P(X=x_k) = Σ_j P(X=x_k | Y=y_j) P(Y=y_j). So we really need to estimate only P(X=x_k | Y=y_i).

13 Bayesian Categorization (cont.) In other terms, to estimate the probability of a category given an instance, P(Y=y_i | X=x_k), we need to estimate the probability of seeing that instance given the category, P(X=x_k | Y=y_i). IS THIS SIMPLER THAN THE ORIGINAL PROBLEM? There are too many possible instances (e.g. 2^n for n binary features) to estimate P(X=x_k | Y=y_i) for all of them. We still need to make some sort of independence assumption about the features to make learning tractable.

14 Generative Probabilistic Models Assume a simple (usually unrealistic) probabilistic method by which the data was generated. For categorization, each category has a different parameterized generative model that characterizes that category. Training: use the data for each category to estimate the parameters of the generative model for that category. –Maximum Likelihood Estimation (MLE): set the parameters to maximize the probability that the model produced the given training data. –If M_λ denotes a model with parameter values λ and D_i is the training data for the i-th class, find the model parameters for class i that maximize the likelihood of D_i: λ_i = argmax_λ P(D_i | M_λ). Testing: use Bayesian analysis to determine the category model that most likely generated a specific test instance. A generative model is a model for randomly generating observable data, typically given some hidden parameters; it specifies a joint probability distribution over observation and label sequences.
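As a sketch of MLE for such a per-class generative model with a single categorical feature, the maximizing parameters are simply the relative frequencies observed in that class's training data (the data below is hypothetical):

```python
from collections import Counter

# Hypothetical training data D_i for one class: observed Size values.
d_i = ["small", "small", "large", "medium", "small", "large"]

# MLE for a categorical generative model: relative frequencies maximize
# the likelihood of this class's training data.
counts = Counter(d_i)
theta = {value: c / len(d_i) for value, c in counts.items()}
print(theta)   # {'small': 0.5, 'large': 0.333..., 'medium': 0.166...}
```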

15 Model parameters M_λ is estimated on the learning set D. Our (hidden) model parameters are the conditional probabilities, i.e. those generating our observable data.

16 Naïve Bayes Generative Model (Figure: an urn model of the Naïve Bayes generative process. A Category urn is sampled first (pos / neg); then, for the sampled category, a Size value (sm / med / lg), a Color value (red / blue / grn), and a Shape value (circ / sqr / tri) are each drawn independently from that category's urns.)

17 Naïve Bayes Inference Problem (Figure: the same urn model, now with an unlabeled test instance lg, red, circ whose category is unknown.) I estimate on the learning set the probability of extracting lg, red, circ from the red (positive) or blue (negative) urns.

18 HOW?

19 Naïve Bayesian Categorization Assume the features of an instance are independent given the category (conditionally independent): P(X_1,…,X_n | Y) = Π_i P(X_i | Y). Therefore, we only need to know P(X_i | Y) for each possible pair of a feature value and a category. If Y and all X_i are binary, this requires specifying only 2n parameters: –P(X_i=true | Y=true) and P(X_i=true | Y=false) for each X_i –P(X_i=false | Y) = 1 - P(X_i=true | Y) Compare this with the 2^n parameters needed without any independence assumptions.
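A tiny illustration of the savings from the conditional-independence assumption, for a hypothetical n = 10 binary features:

```python
# Parameter counts for binary Y and n binary features (here n = 10).
n = 10
no_assumption = 2 ** n    # one P(Y=pos | X=x_k) entry per possible instance
naive_bayes   = 2 * n     # P(X_i=true | Y=true) and P(X_i=true | Y=false) per feature
print(no_assumption, naive_bayes)   # 1024 20
```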

20 Naïve Bayes Example
Training set estimates (8 positive and 8 negative examples):
Probability        Y=positive   Y=negative
P(Y)               0.5          0.5
P(small | Y)       3/8          3/8
P(medium | Y)      3/8          2/8
P(large | Y)       2/8          3/8
P(red | Y)         5/8          2/8
P(blue | Y)        2/8          3/8
P(green | Y)       1/8          3/8
P(square | Y)      1/8          3/8
P(triangle | Y)    2/8          3/8
P(circle | Y)      5/8          2/8
Test instance: <medium, red, circle>
Example: 3 of the 8 instances in the red (positive) "Size" urn are small, so P(small | positive) = 3/8 = 0.375.

21 Naïve Bayes Example
Test instance X = <medium, red, circle>
Probability        positive   negative
P(Y)               0.5        0.5
P(medium | Y)      3/8        2/8
P(red | Y)         5/8        2/8
P(circle | Y)      5/8        2/8
P(positive | X) = P(positive) * P(medium | positive) * P(red | positive) * P(circle | positive) / P(X) = 0.5 * 3/8 * 5/8 * 5/8 / P(X) ≈ 0.073 / P(X)
P(negative | X) = P(negative) * P(medium | negative) * P(red | negative) * P(circle | negative) / P(X) = 0.5 * 2/8 * 2/8 * 2/8 / P(X) ≈ 0.0078 / P(X)
Since P(positive | X) > P(negative | X), the instance is classified as positive.
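The same arithmetic, reproduced as a short Python sketch using the class-conditional estimates from the table above:

```python
# Class-conditional estimates from the table (as eighths) and the shared prior.
p_pos = {"medium": 3/8, "red": 5/8, "circle": 5/8}
p_neg = {"medium": 2/8, "red": 2/8, "circle": 2/8}
prior = 0.5

x = ["medium", "red", "circle"]   # test instance

# Multiply the prior by each class-conditional feature probability
# (P(X) is a common denominator and can be ignored for the comparison).
score_pos = prior
score_neg = prior
for feature_value in x:
    score_pos *= p_pos[feature_value]
    score_neg *= p_neg[feature_value]

print(round(score_pos, 3), round(score_neg, 4))   # 0.073 0.0078
# score_pos > score_neg, so the instance is classified as positive.
```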

22 Naive summary Classify any new datum instance x_k = (x_1,…,x_n) as: y* = argmax_{y_i} P(y_i) Π_t P(x_t | y_i). To do this based on training examples, estimate the parameters from the training examples in D: –For each target value of the classification variable (hypothesis) y_i: P(y_i) = (number of examples in D with class y_i) / |D| –For each attribute value a_t of each datum instance: P(a_t | y_i) = (number of examples of class y_i with X_t = a_t) / (number of examples of class y_i)

23 Estimating Probabilities Normally, as in the previous example, probabilities are estimated from observed frequencies in the training data. If D contains n_k examples in category y_k, and n_ijk of these n_k examples have the j-th value x_ij for feature X_i, then: P(X_i=x_ij | Y=y_k) = n_ijk / n_k. However, estimating such probabilities from small training sets is error-prone. If, due only to chance, a rare feature X_i is always false in the training data, then ∀y_k: P(X_i=true | Y=y_k) = 0. If X_i=true then occurs in a test example X, the result is that ∀y_k: P(X | Y=y_k) = 0 and ∀y_k: P(Y=y_k | X) = 0.

24 Probability Estimation Example
Training set D:
Ex   Size    Color   Shape      Category
1    small   red     circle     positive
2    large   red     circle     positive
3    small   red     triangle   negative
4    large   blue    circle     negative
Estimated probabilities:
Probability        positive   negative
P(Y)               0.5        0.5
P(small | Y)       0.5        0.5
P(medium | Y)      0.0        0.0
P(large | Y)       0.5        0.5
P(red | Y)         1.0        0.5
P(blue | Y)        0.0        0.5
P(green | Y)       0.0        0.0
P(square | Y)      0.0        0.0
P(triangle | Y)    0.0        0.5
P(circle | Y)      1.0        0.5
Test instance X = <medium, red, circle>:
P(positive | X) = 0.5 * 0.0 * 1.0 * 1.0 / P(X) = 0
P(negative | X) = 0.5 * 0.0 * 0.5 * 0.5 / P(X) = 0

25 Smoothing To account for estimation from small samples, probability estimates are adjusted or smoothed. Laplace smoothing using an m-estimate assumes that each feature value has a prior probability p, assumed to have been observed in a "virtual" sample of size m, giving: P(X_i=x_ij | Y=y_k) = (n_ijk + m·p) / (n_k + m). For binary features, p is simply assumed to be 0.5.

26 Laplace Smoothing Example Assume the training set contains 10 positive examples: –4: small –0: medium –6: large Estimate the parameters as follows (with m=1, p=1/3): –P(small | positive) = (4 + 1/3) / (10 + 1) = 0.394 –P(medium | positive) = (0 + 1/3) / (10 + 1) = 0.03 –P(large | positive) = (6 + 1/3) / (10 + 1) = 0.576 –P(small or medium or large | positive) = 1.0
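The same numbers can be reproduced with a small sketch of the m-estimate from the previous slide (m = 1, p = 1/3):

```python
def m_estimate(n_ijk, n_k, p, m):
    """Smoothed estimate of P(X_i = x_ij | Y = y_k)."""
    return (n_ijk + m * p) / (n_k + m)

counts = {"small": 4, "medium": 0, "large": 6}   # 10 positive examples
n_k, m, p = 10, 1, 1/3

probs = {v: m_estimate(c, n_k, p, m) for v, c in counts.items()}
print({v: round(x, 3) for v, x in probs.items()})
# {'small': 0.394, 'medium': 0.03, 'large': 0.576}
print(round(sum(probs.values()), 10))   # 1.0 -- smoothed estimates still sum to 1
```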

27 Continuous Attributes If X_i is a continuous feature rather than a discrete one, we need another way to calculate P(X_i | Y). Assume that X_i has a Gaussian distribution whose mean and variance depend on Y. During training, for each combination of a continuous feature X_i and a class value y_k of Y, estimate a mean μ_ik and a standard deviation σ_ik from the values of feature X_i among the instances of class y_k in the training data (μ_ik is the mean value of X_i observed over instances for which Y=y_k in D). During testing, estimate P(X_i | Y=y_k) for a given example using the Gaussian density defined by μ_ik and σ_ik.
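A minimal sketch of this recipe for a single continuous feature and one class; the feature values below are hypothetical:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x, used as P(X_i = x | Y = y_k)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical values of a continuous feature X_i among Y = positive examples.
values_pos = [2.1, 2.5, 1.9, 2.3, 2.2]

# Training: estimate mean and standard deviation for this (feature, class) pair.
mu = sum(values_pos) / len(values_pos)
sigma = math.sqrt(sum((v - mu) ** 2 for v in values_pos) / len(values_pos))

# Testing: plug a new feature value into the fitted Gaussian.
print(round(gaussian_pdf(2.0, mu, sigma), 2))   # 1.21 (a density, so it can exceed 1)
```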

28 Comments on Naïve Bayes Tends to work well despite the strong assumption of conditional independence. Experiments show it to be quite competitive with other classification methods on standard UCI datasets. Although it does not produce accurate probability estimates when its independence assumptions are violated, it may still pick the correct maximum-probability class in many cases. –Able to learn conjunctive concepts in any case. Does not perform any search of the hypothesis space; it directly constructs a hypothesis from parameter estimates that are easily calculated from the training data. –Strong bias. Does not guarantee consistency with the training data. Typically handles noise well, since it does not even focus on completely fitting the training data.