
1 Naïve Bayes Classifier

2 Red = 2.125, Yellow = 6.143, Mass = 134.32, Volume = 24.21 → Apple. Measurements come from sensors, scales, etc. (8/29/03, Bayesian Classifier)

3 Let’s look at one dimension

4 What if we wanted to ask the question “what is the probability that some fruit with a given redness value is an apple?” Could we just look at how far away it is from the apple peak? Is it the highest PDF above the x-value in question?

5 If a fruit has a redness of 4.05, do we know the probability that it’s an apple? What do we know? If it is a histogram of counts, then it is straightforward and getting the probability is simple: the probability it’s an apple is 28.57% and the probability it’s an orange is 71.43%.
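A minimal sketch of the histogram-of-counts case. The bin counts used here (10 apples, 25 oranges in the bin containing redness = 4.05) are taken from the worked numbers on slide 10; any per-class bin counts work the same way.

```python
def class_probabilities(bin_counts):
    """Given per-class counts in one histogram bin, return P(class | bin)."""
    total = sum(bin_counts.values())
    return {cls: n / total for cls, n in bin_counts.items()}

# Counts in the bin that contains redness = 4.05 (from slide 10)
probs = class_probabilities({"apple": 10, "orange": 25})
print(f"apple:  {probs['apple']:.2%}")   # 28.57%
print(f"orange: {probs['orange']:.2%}")  # 71.43%
```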

6 Probability density function: continuous, giving probability density rather than a count. Might be tempted to use the same approach. Parametric (μ and σ parameters) vs. non-parametric.

7 What if we had a trillion oranges and only 100 apples? Redness 4.05 might be the most common apple value, so the apple PDF could be higher at 4.05 than the orange PDF, even though the universe would contain far more oranges at that value.

8 2506 apples, 2486 oranges. If a fruit has a redness of 4.05, do we know the probability that it’s an apple if we don’t have specific counts at 4.05?

9 Above from the book: h is the hypothesis, D is the training data. Does this make sense?
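The formula from the book did not survive the transcript; given the definitions on the slide (h the hypothesis, D the training data), it is presumably Bayes’ theorem:

```latex
P(h \mid D) = \frac{P(D \mid h)\, P(h)}{P(D)}
```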

10 2506 apples, 2486 oranges.
Probability that redness would be 4.05 if we know it’s an apple? About 10/2506.
P(apple)? 2506/(2506+2486).
P(redness=4.05)? About (10+25)/(2506+2486).
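Plugging the slide’s counts into Bayes’ theorem ties the pieces together; the P(D) terms cancel the totals and the posterior collapses back to the simple 10/35 bin ratio:

```python
# Counts from the slide: class totals and per-class counts in the
# histogram bin containing redness = 4.05.
n_apples, n_oranges = 2506, 2486
bin_apples, bin_oranges = 10, 25
total = n_apples + n_oranges

p_d_given_h = bin_apples / n_apples        # P(redness=4.05 | apple), ~10/2506
p_h = n_apples / total                     # P(apple)
p_d = (bin_apples + bin_oranges) / total   # P(redness=4.05), ~(10+25)/4992

p_h_given_d = p_d_given_h * p_h / p_d      # P(apple | redness=4.05)
print(round(p_h_given_d, 4))               # 0.2857 -- the same 10/35 as before
```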

11 Whether we have counts or a PDF, how do we classify? Simply find the most probable class.

12 I think of the ratio of P(h) to P(D) as an adjustment to the easily determined P(D|h) in order to account for differences in sample size. P(h) is the prior probability (the “prior”); P(h|D) is the posterior probability.

13 Maximum a posteriori hypothesis (MAP), ä-(ˌ)pō-ˌstir-ē-ˈȯr-ē: relating to or derived by reasoning from observed facts; inductive. A priori: relating to or derived by reasoning from self-evident propositions; deductive. Approach: brute-force MAP learning algorithm.
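The MAP formula itself is not in the transcript; in the notation of the earlier slides it is the hypothesis maximizing the posterior, with P(D) dropped because it is constant across hypotheses:

```latex
h_{\mathrm{MAP}} \equiv \underset{h \in H}{\arg\max}\; P(h \mid D)
                 = \underset{h \in H}{\arg\max}\; \frac{P(D \mid h)\, P(h)}{P(D)}
                 = \underset{h \in H}{\arg\max}\; P(D \mid h)\, P(h)
```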

14 [Scatter plot: Red Intensity (normalized, 0–10) vs. Mass (normalized, 0–10)] More dimensions can be helpful. Linearly separable.

15 How do we handle multiple dimensions? Color (red and yellow) says apple but mass and volume say orange? Take a vote?

16 Assume each dimension is independent (doesn’t co-vary with any other dimension). Then we can use the product rule: the probability that a fruit is an apple given a set of measurements (dimensions) is proportional to P(apple) times the product of the per-dimension probabilities P(a_i | apple).

17 Known as a Naïve Bayes Classifier, where v_j is a class and a_i is an attribute. Derivation.
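The classifier rule referenced here did not render in the transcript; in the notation the slide names (class v_j, attributes a_1 … a_n), the standard naïve Bayes decision rule is:

```latex
v_{\mathrm{NB}} = \underset{v_j \in V}{\arg\max}\; P(v_j) \prod_{i} P(a_i \mid v_j)
```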

18 You wish to classify an instance with the following attributes: 1.649917, 5.197862, 134.898820, 16.137695. The first column is redness, then yellowness, followed by mass, then volume. In the redness histogram bin in which the instance falls, the training data has 0 apples, 0 peaches, 9 oranges, and 22 lemons. In the bin for yellowness there are 235, 262, 263, and 239. In the bin for mass there are 106, 176, 143, and 239. In the bin for volume there are 3, 57, 7, and 184. What are each of the probabilities that it is an apple, peach, orange, or lemon?

19
         Red   Yellow  Mass  Vol
apples     0     235    106    3
peaches    0     262    176   57
oranges    9     263    143    7
lemons    22     239    239  184
Total     31     999    664  251

         Red   Yellow  Mass   Vol   Product
apples   0     0.24    0.16   0.01  0
peaches  0     0.26    0.27   0.23  0
oranges  0.29  0.26    0.22   0.03  0.0005
lemons   0.71  0.24    0.36   0.73  0.0044
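A short sketch that rebuilds the slide’s probability table from the raw counts, dividing each class’s bin count by the bin’s column total. Note these ratios are really P(class | attribute bin), not P(attribute | class), which is exactly what the next slide questions; note also that multiplying the lemon row out gives about 0.0448, so the slide’s printed 0.0044 looks like a decimal slip.

```python
# Bin counts per class for each attribute (from the table above).
counts = {
    "apples":  [0, 235, 106, 3],
    "peaches": [0, 262, 176, 57],
    "oranges": [9, 263, 143, 7],
    "lemons":  [22, 239, 239, 184],
}
# Column totals: [31, 999, 664, 251]
totals = [sum(col) for col in zip(*counts.values())]

probs, products = {}, {}
for cls, row in counts.items():
    probs[cls] = [n / t for n, t in zip(row, totals)]
    prod = 1.0
    for p in probs[cls]:
        prod *= p           # multiply the four per-attribute ratios
    products[cls] = prod

print({cls: round(p, 4) for cls, p in products.items()})
# apples and peaches come out 0 because of the empty redness bin
```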

20 Is it really a zero percent chance that it’s an apple? Are these really probabilities (hint: 0.0005 + 0.0044 is not equal to 1)? What of the bin size?

         Red   Yellow  Mass   Vol   Product
apples   0     0.24    0.16   0.01  0
peaches  0     0.26    0.27   0.23  0
oranges  0.29  0.26    0.22   0.03  0.0005
lemons   0.71  0.24    0.36   0.73  0.0044
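The zero-count problem raised here is commonly handled by smoothing the estimates. The slide doesn’t name a remedy, so this is a sketch of one standard choice, Laplace (add-one) smoothing, applied to the empty redness bin:

```python
def smoothed_prob(count, total, k, alpha=1):
    """Laplace (add-alpha) smoothed estimate of count/total over k outcomes."""
    return (count + alpha) / (total + alpha * k)

# The redness bin held 0 apples out of 31 fruits; with 4 classes,
# add-one smoothing yields a small nonzero probability instead of 0.
print(round(smoothed_prob(0, 31, 4), 4))   # 0.0286 instead of 0
print(round(smoothed_prob(22, 31, 4), 4))  # lemons: 0.6571 instead of 0.71
```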

21 [figure-only slide]

22 Do too many dimensions hurt? What if only some dimensions contribute to the ability to classify? What would the other dimensions do to the probabilities?

23 With imagination and innovation, you can learn to classify many things you wouldn’t expect. If you wanted to learn to classify documents, how might you go about it?

24 Learning to classify text: collect all words in the examples, then calculate P(v_j) and P(w_k|v_j). Each instance will be a vector of size |vocabulary|; the classes (v’s) are the categories, and each word (w) is a dimension.
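A minimal sketch of the procedure just described: collect the vocabulary, estimate P(v_j) from document counts and P(w_k|v_j) from word counts (with add-one smoothing so unseen words don’t zero out the product), and classify by the most probable class. The toy documents and labels are made up for illustration.

```python
import math
from collections import Counter

docs = [
    ("buy cheap pills now", "spam"),
    ("meeting agenda attached", "ham"),
    ("cheap pills cheap", "spam"),
    ("lunch meeting tomorrow", "ham"),
]

vocab = {w for text, _ in docs for w in text.split()}
classes = {label for _, label in docs}
class_docs = {c: [t for t, l in docs if l == c] for c in classes}

priors = {c: len(class_docs[c]) / len(docs) for c in classes}   # P(v_j)
word_counts = {c: Counter(w for t in class_docs[c] for w in t.split())
               for c in classes}

def p_word(w, c):
    """P(w_k | v_j) with add-one smoothing over the vocabulary."""
    n = sum(word_counts[c].values())
    return (word_counts[c][w] + 1) / (n + len(vocab))

def classify(text):
    """Most probable class; log-probabilities avoid numeric underflow."""
    scores = {c: math.log(priors[c]) +
                 sum(math.log(p_word(w, c)) for w in text.split() if w in vocab)
              for c in classes}
    return max(scores, key=scores.get)

print(classify("cheap pills"))       # spam
print(classify("meeting tomorrow"))  # ham
```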

25 20 newsgroups, 1000 training documents from each group; the groups were the classes. 89% classification accuracy: 89 out of every 100 times it could tell which newsgroup a document came from.

26 Rift Valley fever virus. Basically RNA (like DNA but with an extra oxygen; the D in DNA is deoxy), encapsulated in a protein sheath. An important protein involved in the encapsulation process: the nucleocapsid.

27 SELEX (Systematic Evolution of Ligands by Exponential Enrichment): identify RNA segments that have a high affinity for the nucleocapsid (aptamer vs. non-aptamer).

28 Each known aptamer was 30 nucleotides long: a 30-character string over 4 nucleotides (ACGU). What would the data look like? How would we “bin” the data?
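One natural “binning” (an assumption on my part; the slide leaves the question open) treats each of the 30 positions as a dimension with four possible values, so P(a_i | class) is just the frequency of each nucleotide at position i. The training sequences below are hypothetical stand-ins for real SELEX data:

```python
from collections import Counter

# Hypothetical 30-nt aptamer training sequences (illustration only).
aptamers = [
    "ACGUACGUACGUACGUACGUACGUACGUAC",
    "ACGAACGUACGUACGUACGUACGUACGUAC",
]

# Each position i is one dimension; tally nucleotide counts per position.
position_counts = [Counter(seq[i] for seq in aptamers) for i in range(30)]

def p_nt(nt, i):
    """P(nucleotide nt at position i | aptamer), from the counts above."""
    return position_counts[i][nt] / len(aptamers)

print(p_nt("A", 0))  # 1.0 -- both training sequences start with A
print(p_nt("U", 3))  # 0.5 -- position 3 is U in one of the two
```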

29 We have seen the fruit example, documents, and RNA (nucleotides). Which is best suited to a Bayesian classifier?

30 [figure-only slide]

31 The brighter the spot, the greater the mRNA concentration

32 Thousands of genes (dimensions); many genes are not affected (the distributions for disease and normal are the same in that dimension).

patient   g1     g2     g3    ...  gn     disease
p1        x1,1   x1,2   x1,3  ...  x1,n   Y
p2        x2,1   x2,2   x2,3  ...  x2,n   N
...
pm        xm,1   xm,2   xm,3  ...  xm,n   ?

33 Perhaps at good growth locations: pH, average temperature, average sunlight exposure, salinity, average length of day. What else? What would the data look like?

34 [figure-only slide]

35 [figure-only slide]

36 [figure-only slide]

37 [figure-only slide]

