
1 Naïve Bayes Classifier

2 Bayes Classifier
- A probabilistic framework for classification problems
- Often appropriate because the world is noisy and some relationships are probabilistic in nature
  - Is predicting who will win a baseball game probabilistic in nature?
- Before getting to the heart of the matter, we will go over some basic probability.
  - Stop me if you have questions!

3 Conditional Probability
- The conditional probability of an event C given that event A has occurred is denoted P(C|A)
- Here are some practice questions that you should be able to answer (e.g., if you took CISC 1100/1400)
  - Given a standard six-sided die:
    - What is P(roll a 1 | roll an odd number)?
    - What is P(roll a 1 | roll an even number)?
    - What is P(roll a number > 2)?
    - What is P(roll a number > 2 | roll a number > 1)?
  - Given a standard deck of cards:
    - What is P(pick an Ace | pick a red card)?
    - What is P(pick a red card | pick an Ace)?
    - What is P(pick the ace of clubs | pick an Ace)?

4 Conditional Probability Continued
- P(A,C) is the probability that A and C both occur
  - What is P(pick an Ace, pick a red card)?
  - Note that this is the same as the probability of picking a red ace
- Two events are independent if one occurring does not impact the occurrence or non-occurrence of the other
- Please give me some examples of independent events
- In the example from above, are A and C independent events?
- If two events A and B are independent, then:
  - P(A, B) = P(A) x P(B)
  - Note that this helps us answer the P(A,C) question above, although most of you probably solved it directly.

5 Conditional Probability Continued
[Venn diagram: circles A and C overlapping in the region A ∩ C]
- The following are true: P(C|A) = P(A,C)/P(A) and P(A|C) = P(A,C)/P(C)
- How does the Venn Diagram show these equations to be true?

6 An Example
- Let's use the example from before, where:
  - A = "pick an Ace" and
  - C = "pick a red card"
- Using the previous equation:
  - P(C|A) = P(A,C)/P(A)
- We now get:
  - P(red card|Ace) = P(red ace)/P(Ace)
  - P(red card|Ace) = (2/52)/(4/52) = 2/4 = 0.5
- Hopefully this makes sense, just like the Venn Diagram should
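A minimal sketch (not from the slides) that checks this value by enumerating a standard 52-card deck:

```python
from itertools import product

# Build a 52-card deck as (rank, suit) pairs.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))

aces = [c for c in deck if c[0] == "A"]
red_aces = [c for c in aces if c[1] in ("hearts", "diamonds")]

# P(red card | Ace) = P(red ace) / P(Ace)
p_ace = len(aces) / len(deck)          # 4/52
p_red_ace = len(red_aces) / len(deck)  # 2/52
print(p_red_ace / p_ace)               # 0.5
```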

7 Bayes Theorem
- Bayes Theorem states that:
  - P(C|A) = [P(A|C) P(C)] / P(A)
- Prove this equation using the two equations we informally showed to be true with the Venn Diagrams:
  - P(C|A) = P(A,C)/P(A) and P(A|C) = P(A,C)/P(C)
- Start with P(C|A) = P(A,C)/P(A)
  - Then notice that we are partway there and only need to replace P(A,C)
  - By rearranging the second equation (after the "and") to isolate P(A,C), we can substitute P(A|C)P(C) for P(A,C), and the proof is done.

8 Some more terminology
- The prior probability is the probability assuming no specific information.
  - Thus we would refer to P(A) as the prior probability of event A occurring
  - We would not say that P(A|C) is the prior probability of A occurring
- The posterior probability is the probability given that we know something
  - We would say that P(A|C) is the posterior probability of A (given that C occurs)

9 Example of Bayes Theorem
- Given:
  - A doctor knows that meningitis causes a stiff neck 50% of the time
  - The prior probability of any patient having meningitis is 1/50,000
  - The prior probability of any patient having a stiff neck is 1/20
- If a patient has a stiff neck, what is the probability he/she has meningitis? (See the worked sketch below.)
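A minimal sketch of the computation the slide asks for, using only the numbers given above: P(M|S) = P(S|M)P(M)/P(S) = 0.5 x (1/50,000) / (1/20) = 0.0002, so even with a stiff neck, meningitis remains very unlikely.

```python
# P(meningitis | stiff neck) = P(stiff neck | meningitis) * P(meningitis) / P(stiff neck)
p_s_given_m = 0.5          # P(stiff neck | meningitis)
p_m = 1 / 50_000           # prior P(meningitis)
p_s = 1 / 20               # prior P(stiff neck)

p_m_given_s = p_s_given_m * p_m / p_s
print(p_m_given_s)         # 0.0002
```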

10 Bayesian Classifiers
- Given a record with attributes (A1, A2, …, An)
  - The goal is to predict class C
  - Actually, we want to find the value of C that maximizes P(C | A1, A2, …, An)
- Can we estimate P(C | A1, A2, …, An) directly (without Bayes)?
  - Yes, we simply need to count up the number of times we see A1, A2, …, An and then see what fraction belongs to each class
  - For example, if n = 3 and the feature vector "4,3,2" occurs 10 times, and 4 of these belong to C1 and 6 to C2, then:
    - What is P(C1 | "4,3,2")?
    - What is P(C2 | "4,3,2")?
- Unfortunately, this is generally not feasible, since not every feature vector will be found in the training set.
  - If every one were, we would not need to generalize, only memorize

11 Bayesian Classifiers
- Indirect approach: use Bayes Theorem
  - Compute the posterior probability P(C | A1, A2, …, An) for all values of C using Bayes Theorem
  - Choose the value of C that maximizes P(C | A1, A2, …, An)
  - Equivalent to choosing the value of C that maximizes P(A1, A2, …, An | C) P(C)
    - Since the denominator is the same for all values of C

12 Naïve Bayes Classifier
- How can we estimate P(A1, A2, …, An | C)?
  - We can measure it directly, but only if the training set samples every feature vector. Not practical!
- So, we must assume independence among the attributes Ai when the class is given:
  - P(A1, A2, …, An | Cj) = P(A1 | Cj) P(A2 | Cj) … P(An | Cj)
  - Then we can estimate P(Ai | Cj) for all Ai and Cj from the training data
    - This is reasonable because now we are looking at only one feature at a time, so we can expect to see each feature value represented in the training data.
- A new point is classified as Cj if P(Cj) Π P(Ai | Cj) is maximal. (See the sketch below.)
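As a concrete illustration of this decision rule, here is a minimal Python sketch (not from the slides) that takes pre-computed priors and per-class conditional probability tables and picks the class maximizing P(Cj) Π P(Ai | Cj):

```python
def naive_bayes_predict(priors, cond_tables, example):
    """Pick the class c maximizing P(c) * product over attributes of P(value | c).

    priors:      {class: P(class)}
    cond_tables: {class: {attribute: {value: P(value | class)}}}
    example:     {attribute: value}
    """
    best_class, best_score = None, -1.0
    for c, prior in priors.items():
        score = prior
        for attribute, value in example.items():
            score *= cond_tables[c][attribute].get(value, 0.0)
        if score > best_score:
            best_class, best_score = c, score
    return best_class, best_score
```

The tax-cheating example on the following slides is an instance of exactly this computation.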

13 How to Estimate Probabilities from Data?
- Class priors: P(C) = Nc / N
  - e.g., P(No) = 7/10, P(Yes) = 3/10
- For discrete attributes: P(Ai | Ck) = |Aik| / Nc
  - where |Aik| is the number of instances having attribute value Ai that belong to class Ck
  - Examples: P(Status=Married|No) = 4/7, P(Refund=Yes|Yes) = 0
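A counting sketch of these estimates. The training table itself does not appear in this transcript, so the records below are a hypothetical table chosen only to be consistent with the counts quoted on these slides:

```python
# Hypothetical training records (Refund, Marital Status, Evade), consistent with
# the counts on the slides; the actual table is not in this transcript.
records = [
    ("Yes", "Single",   "No"),  ("No", "Married",  "No"),  ("No", "Single",  "No"),
    ("Yes", "Married",  "No"),  ("No", "Divorced", "Yes"), ("No", "Married", "No"),
    ("Yes", "Divorced", "No"),  ("No", "Single",   "Yes"), ("No", "Married", "No"),
    ("No",  "Single",   "Yes"),
]

n = len(records)
for c in ("No", "Yes"):
    in_class = [r for r in records if r[2] == c]
    nc = len(in_class)
    print(f"P(Evade={c}) = {nc}/{n}")                       # class prior: Nc / N
    # P(Ai | Ck) = |Aik| / Nc: count matching attribute values within the class.
    married = sum(1 for r in in_class if r[1] == "Married")
    print(f"P(Status=Married | Evade={c}) = {married}/{nc}")
```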

14 How to Estimate Probabilities from Data?
- For continuous attributes:
  - Discretize the range into bins
  - Two-way split: (A < v) or (A ≥ v)
    - Choose only one of the two splits as the new attribute
    - Creates a binary feature
  - Probability density estimation:
    - Assume the attribute follows a normal distribution and use the data to fit this distribution
    - Once the probability distribution is known, we can use it to estimate the conditional probability P(Ai|c)
- We will not deal with continuous values on the HW or exam
  - Just understand the general ideas above
  - For the tax-cheating example, we will assume that "Taxable Income" is discrete
    - Each of the 10 values will therefore have a prior probability of 1/10
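A minimal sketch of the density-estimation option, assuming a normal distribution for a continuous attribute within one class. The income values below are hypothetical and only illustrate the fitting step:

```python
import math

def gaussian_pdf(x, mean, std):
    """Normal density, used to approximate P(Ai = x | c) for a continuous attribute."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

# Hypothetical incomes (in K) for one class, just to illustrate fitting the distribution.
incomes_no = [125, 100, 70, 120, 60, 220, 75]
mean = sum(incomes_no) / len(incomes_no)
var = sum((x - mean) ** 2 for x in incomes_no) / (len(incomes_no) - 1)  # sample variance
std = math.sqrt(var)

print(gaussian_pdf(120, mean, std))  # density-based estimate of P(Income = 120K | class)
```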

15 Example of Naïve Bayes
- We start with a test example and want to know its class. Does this individual evade their taxes: Yes or No?
  - Here is the feature vector:
    - Refund = No, Married, Income = 120K
  - Now what do we do?
    - First try writing out the thing we want to measure

16 Example of Naïve Bayes
- We start with a test example and want to know its class. Does this individual evade their taxes: Yes or No?
  - Here is the feature vector:
    - Refund = No, Married, Income = 120K
  - Now what do we do?
    - First try writing out the thing we want to measure
    - P(Evade | [No, Married, Income=120K])
  - Next, what do we need to maximize?

17 Example of Naïve Bayes
- We start with a test example and want to know its class. Does this individual evade their taxes: Yes or No?
  - Here is the feature vector:
    - Refund = No, Married, Income = 120K
  - Now what do we do?
    - First try writing out the thing we want to measure
    - P(Evade | [No, Married, Income=120K])
  - Next, what do we need to maximize?
    - P(Cj) Π P(Ai | Cj)

18 Example of Naïve Bayes
- Since we want to maximize P(Cj) Π P(Ai | Cj)
  - What quantities do we need to calculate in order to use this equation?
  - Recall that we have three attributes:
    - Refund: Yes, No
    - Marital Status: Single, Married, Divorced
    - Taxable Income: 10 different "discrete" values
  - While we could compute every P(Ai | Cj) for all Ai, we only need to do it for the attribute values in the test example

19 Values to Compute
- Given that we need to compute P(Cj) Π P(Ai | Cj)
- We need to compute the class probabilities
  - P(Evade=No)
  - P(Evade=Yes)
- We need to compute the conditional probabilities
  - P(Refund=No | Evade=No)
  - P(Refund=No | Evade=Yes)
  - P(Marital Status=Married | Evade=No)
  - P(Marital Status=Married | Evade=Yes)
  - P(Income=120K | Evade=No)
  - P(Income=120K | Evade=Yes)

20 Computed Values
- Given that we need to compute P(Cj) Π P(Ai | Cj)
- We need to compute the class probabilities
  - P(Evade=No) = 7/10 = 0.7
  - P(Evade=Yes) = 3/10 = 0.3
- We need to compute the conditional probabilities
  - P(Refund=No | Evade=No) = 4/7
  - P(Refund=No | Evade=Yes) = 3/3 = 1.0
  - P(Marital Status=Married | Evade=No) = 4/7
  - P(Marital Status=Married | Evade=Yes) = 0/3 = 0
  - P(Income=120K | Evade=No) = 1/7
  - P(Income=120K | Evade=Yes) = 0/3 = 0

21 Finding the Class
- Now compute P(Cj) Π P(Ai | Cj) for both classes for the test example [No, Married, Income = 120K]
  - For class Evade=No we get:
    - 0.7 x 4/7 x 4/7 x 1/7 ≈ 0.033
  - For class Evade=Yes we get:
    - 0.3 x 1 x 0 x 0 = 0
  - Which one is best?
    - Clearly we would select "No" for the class value
    - Note that these are not the actual probabilities of each class, since we did not divide by P([No, Married, Income = 120K])
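A quick arithmetic check of the two products, using the values copied from the previous slide:

```python
p_no  = 0.7 * (4/7) * (4/7) * (1/7)   # class Evade = No
p_yes = 0.3 * 1.0 * 0.0 * 0.0         # class Evade = Yes
print(round(p_no, 3), p_yes)          # 0.033 0.0  ->  choose "No"
```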

22 Naïve Bayes Classifier
- If one of the conditional probabilities is zero, then the entire expression becomes zero
  - This is not ideal, especially since probability estimates may not be very precise for rarely occurring values
  - We use the Laplace estimate to improve things. Without a lot of observations, the Laplace estimate moves the probability towards the value it would have if all classes were equally likely
  - Examples (adding one observation per class):
    - 1 class A and 5 class B: P(A) = 1/6, but with Laplace = 2/8
    - 0 class A and 2 class B: P(A) = 0/2 = 0, but with Laplace = 1/4
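A minimal sketch of the add-one ("Laplace") version of this estimate, which adds one observation per class so that no probability is ever exactly zero:

```python
def laplace_estimate(count_in_class, total_count, num_classes):
    """Add-one Laplace estimate: (count + 1) / (total + number of classes)."""
    return (count_in_class + 1) / (total_count + num_classes)

print(laplace_estimate(1, 6, 2))  # 1 of 6 in class A -> 2/8 = 0.25
print(laplace_estimate(0, 2, 2))  # 0 of 2 in class A -> 1/4 = 0.25
```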

23 Naïve Bayes (Summary)
- Robust to isolated noise points
- Robust to irrelevant attributes
- The independence assumption may not hold for some attributes
  - But it works surprisingly well in practice for many problems

24 Play-tennis example: estimate P(xi|C)
- Class priors: P(p) = 9/14, P(n) = 5/14

Outlook:      P(sunny|p) = 2/9      P(sunny|n) = 3/5
              P(overcast|p) = 4/9   P(overcast|n) = 0
              P(rain|p) = 3/9       P(rain|n) = 2/5
Temperature:  P(hot|p) = 2/9        P(hot|n) = 2/5
              P(mild|p) = 4/9       P(mild|n) = 2/5
              P(cool|p) = 3/9       P(cool|n) = 1/5
Humidity:     P(high|p) = 3/9       P(high|n) = 4/5
              P(normal|p) = 6/9     P(normal|n) = 2/5
Windy:        P(true|p) = 3/9       P(true|n) = 3/5
              P(false|p) = 6/9      P(false|n) = 2/5

25 Play-tennis example: classifying X
- An unseen sample X = <rain, hot, high, false>
- P(X|p)·P(p) = P(rain|p)·P(hot|p)·P(high|p)·P(false|p)·P(p) = 3/9 · 2/9 · 3/9 · 6/9 · 9/14 ≈ 0.010582
- P(X|n)·P(n) = P(rain|n)·P(hot|n)·P(high|n)·P(false|n)·P(n) = 2/5 · 2/5 · 4/5 · 2/5 · 5/14 ≈ 0.018286
- Sample X is classified in class n (don't play)
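A minimal sketch reproducing this calculation with the probabilities from the previous slide (only the values needed for X are included):

```python
# Priors and conditional probabilities copied from the play-tennis tables.
priors = {"p": 9/14, "n": 5/14}
cond = {
    "p": {"rain": 3/9, "hot": 2/9, "high": 3/9, "false": 6/9},
    "n": {"rain": 2/5, "hot": 2/5, "high": 4/5, "false": 2/5},
}
x = ["rain", "hot", "high", "false"]

scores = {}
for c in priors:
    score = priors[c]
    for value in x:
        score *= cond[c][value]   # multiply in P(value | class)
    scores[c] = score

print(scores)                       # {'p': ~0.0106, 'n': ~0.0183}
print(max(scores, key=scores.get))  # 'n' -> don't play
```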

26 Example of Naïve Bayes Classifier
- A: attributes, M: mammals, N: non-mammals
- If P(A|M)P(M) > P(A|N)P(N), classify as Mammals

