
Slide 1: Statistical Inference (by Michael Jordan)
(© Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004)
- Bayesian perspective
  - conditional perspective: inferences should be made conditional on the current data
  - natural in the setting of a long-term project with a domain expert
  - the optimist: let's make the best possible use of our sophisticated inferential tool
- Frequentist perspective
  - unconditional perspective: inferential methods should give good answers in repeated use
  - natural in the setting of writing software that will be used by many people with many data sets
  - the pessimist: let's protect ourselves against bad decisions, given that our inferential procedure is inevitably based on a simplification of reality

Slide 2: Bayes Classifier
- A probabilistic framework for solving classification problems
- Conditional probability: P(C | A) = P(A, C) / P(A) and P(A | C) = P(A, C) / P(C)
- Bayes theorem: P(C | A) = P(A | C) P(C) / P(A)

Slide 3: Example of Bayes Theorem
- Given:
  - a doctor knows that C causes A 50% of the time
  - the prior probability of any patient having C is 1/50,000
  - the prior probability of any patient having A is 1/20
- If a patient has A, what is the probability that he/she has C?
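A worked answer to the question above, plugging the slide's numbers into Bayes' theorem (a minimal sketch; the variable names are only for illustration):

```python
# Bayes' theorem: P(C | A) = P(A | C) * P(C) / P(A)
p_a_given_c = 0.5          # C causes A 50% of the time
p_c = 1 / 50_000           # prior probability of having C
p_a = 1 / 20               # prior probability of having A

p_c_given_a = p_a_given_c * p_c / p_a
print(p_c_given_a)         # 0.0002, i.e. 1 in 5,000
```

So even though C causes A half the time, a patient with A still has only a 0.02% chance of having C, because C itself is so rare.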

Slide 4: Bayesian Classifiers
- Consider each attribute and the class label as random variables
- Given a record with attributes (A1, A2, ..., An):
  - the goal is to predict the class C
  - specifically, we want to find the value of C that maximizes P(C | A1, A2, ..., An)
- Can we estimate P(C | A1, A2, ..., An) directly from data?

Slide 5: Bayesian Classifiers
- Approach:
  - compute the posterior probability P(C | A1, A2, ..., An) for all values of C using Bayes theorem
  - choose the value of C that maximizes P(C | A1, A2, ..., An)
  - this is equivalent to choosing the value of C that maximizes P(A1, A2, ..., An | C) P(C)
- How do we estimate P(A1, A2, ..., An | C)?

Slide 6: Naïve Bayes Classifier
- Assume independence among the attributes Ai when the class is given:
  - P(A1, A2, ..., An | Cj) = P(A1 | Cj) P(A2 | Cj) ... P(An | Cj)
  - P(Ai | Cj) can be estimated for all Ai and Cj from the training data
  - a new point is classified as Cj if P(Cj) Π P(Ai | Cj) is maximal (see the sketch below)
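A minimal sketch of this decision rule for categorical attributes, assuming the priors and per-attribute conditional probabilities have already been estimated; the dictionary layout and function name below are illustrative, not from the slides:

```python
# priors[c] = P(C = c)
# cond[c][i][v] = P(A_i = v | C = c)
def naive_bayes_predict(record, priors, cond):
    best_class, best_score = None, -1.0
    for c, prior in priors.items():
        score = prior
        for i, value in enumerate(record):
            # independence assumption: multiply the per-attribute conditionals
            score *= cond[c][i].get(value, 0.0)
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

The class whose product P(Cj) Π P(Ai | Cj) is largest wins; the denominator P(A1, ..., An) is the same for every class and can be ignored.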

Slide 7: Training Dataset
- Classes:
  - C1: buys_computer = 'yes'
  - C2: buys_computer = 'no'
- Data sample to classify:
  - X = (age <= 30, income = medium, student = yes, credit_rating = fair)

Slide 8: Naïve Bayesian Classifier: Example
- Compute P(X | Ci) for each class:
  - P(age = "<=30" | buys_computer = "yes") = 2/9 = 0.222
  - P(age = "<=30" | buys_computer = "no") = 3/5 = 0.6
  - P(income = "medium" | buys_computer = "yes") = 4/9 = 0.444
  - P(income = "medium" | buys_computer = "no") = 2/5 = 0.4
  - P(student = "yes" | buys_computer = "yes") = 6/9 = 0.667
  - P(student = "yes" | buys_computer = "no") = 1/5 = 0.2
  - P(credit_rating = "fair" | buys_computer = "yes") = 6/9 = 0.667
  - P(credit_rating = "fair" | buys_computer = "no") = 2/5 = 0.4
- X = (age <= 30, income = medium, student = yes, credit_rating = fair)
- P(X | Ci):
  - P(X | buys_computer = "yes") = 0.222 x 0.444 x 0.667 x 0.667 = 0.044
  - P(X | buys_computer = "no") = 0.6 x 0.4 x 0.2 x 0.4 = 0.019
- P(X | Ci) * P(Ci):
  - P(X | buys_computer = "yes") * P(buys_computer = "yes") = 0.028
  - P(X | buys_computer = "no") * P(buys_computer = "no") = 0.007
- X therefore belongs to class "buys_computer = yes" (see the script below)
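A short script reproducing the arithmetic above. The class priors P(buys_computer = "yes") = 9/14 and P(buys_computer = "no") = 5/14 are not shown on the slide; they are assumed here from the 9-versus-5 split implied by the denominators:

```python
# conditional probabilities read off the slide for
# X = (age <= 30, income = medium, student = yes, credit_rating = fair)
p_x_given_yes = (2/9) * (4/9) * (6/9) * (6/9)    # ~= 0.044
p_x_given_no  = (3/5) * (2/5) * (1/5) * (2/5)    # ~= 0.019

p_yes, p_no = 9/14, 5/14                         # class priors (assumed from the 9/5 split)

score_yes = p_x_given_yes * p_yes                # ~= 0.028
score_no  = p_x_given_no  * p_no                 # ~= 0.007

print("yes" if score_yes > score_no else "no")   # -> yes
```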

Slide 9: Naïve Bayes Classifier
- If one of the conditional probabilities is zero, the entire expression becomes zero.
- Probability estimation (Nc = number of training instances of class C, Nic = number of those instances with attribute value Ai):
  - original: P(Ai | C) = Nic / Nc
  - Laplace: P(Ai | C) = (Nic + 1) / (Nc + c)
  - m-estimate: P(Ai | C) = (Nic + m p) / (Nc + m)
  - c: number of classes
  - p: prior probability
  - m: parameter (equivalent sample size)
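A small sketch of the Laplace correction above, showing how it keeps a single unseen attribute value from zeroing out the whole product (the counts in the example are made up):

```python
def laplace_estimate(n_ic, n_c, c):
    # Laplace correction from the slide: (N_ic + 1) / (N_c + c)
    # never returns zero, even when the raw count N_ic is zero
    return (n_ic + 1) / (n_c + c)

# an attribute value never observed with a class that has 5 training records,
# in a two-class problem (c = 2):
print(laplace_estimate(0, 5, 2))   # ~= 0.143 instead of 0.0
```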

Slide 10: Naïve Bayes (Summary)
- Robust to isolated noise points, because such points are averaged out
- Handles missing values by ignoring the instance during probability estimate calculations
- Robust to irrelevant attributes: if Ai is irrelevant, P(Ai | Y) becomes almost uniformly distributed
- The independence assumption may not hold for some attributes
  - use other techniques such as Bayesian Belief Networks (BBN)

Slide 11: Instance-Based Classifiers
- Store the training records
- Use the training records to predict the class labels of unseen cases

Slide 12: Nearest Neighbor Classifiers
- Basic idea: if it walks like a duck and quacks like a duck, then it's probably a duck
- Given a test record, compute its distance to the training records and choose the k "nearest" records

Slide 13: Nearest-Neighbor Classifiers
- Requires three things:
  - the set of stored records
  - a distance metric to compute the distance between records
  - the value of k, the number of nearest neighbors to retrieve
- To classify an unknown record (see the sketch below):
  - compute its distance to the training records
  - identify the k nearest neighbors
  - use the class labels of the nearest neighbors to determine the class label of the unknown record (e.g., by taking a majority vote)
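A minimal sketch of these three steps for numeric attributes, using Euclidean distance and a plain majority vote; the function and variable names are illustrative, not from the slides:

```python
import math
from collections import Counter

def knn_classify(unknown, training_records, k):
    # training_records: list of (attribute_vector, class_label) pairs
    def euclidean(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

    # 1. compute the distance to every training record
    distances = [(euclidean(unknown, x), label) for x, label in training_records]
    # 2. identify the k nearest neighbors
    nearest = sorted(distances)[:k]
    # 3. take the majority vote of the neighbors' class labels
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "yes"), ((1.2, 0.8), "yes"), ((5.0, 5.0), "no")]
print(knn_classify((1.1, 1.0), train, k=3))   # -> yes
```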

Slide 14: Nearest Neighbor Classification
- Compute the distance between two points, e.g. Euclidean distance:
  - d(p, q) = sqrt( sum_i (p_i - q_i)^2 )
- Determine the class from the nearest-neighbor list:
  - take the majority vote of the class labels among the k nearest neighbors
  - or weigh each vote according to distance, with weight factor w = 1/d^2
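A small variation on the sketch above, weighting each neighbor's vote by w = 1/d^2 instead of counting every vote equally (illustrative only):

```python
from collections import defaultdict

def weighted_vote(nearest):
    # nearest: list of (distance, class_label) pairs for the k nearest neighbors
    weights = defaultdict(float)
    for d, label in nearest:
        weights[label] += 1.0 / (d ** 2 + 1e-12)   # w = 1/d^2, guarded against d == 0
    return max(weights, key=weights.get)
```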

Slide 15: Definition of Nearest Neighbor
- The k-nearest neighbors of a record x are the data points that have the k smallest distances to x

Slide 16: Nearest Neighbor Classification...
- Choosing the value of k:
  - if k is too small, the classifier is sensitive to noise points
  - if k is too large, the neighborhood may include points from other classes

Slide 17: K-nearest neighbors

Slide 18: Distance functions
- D_sum(A, B) = D_gender(A, B) + D_age(A, B) + D_salary(A, B)
- D_norm(A, B) = D_sum(A, B) / max(D_sum)
- D_euclid(A, B) = sqrt( D_gender(A, B)^2 + D_age(A, B)^2 + D_salary(A, B)^2 )
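A sketch of how such per-attribute distances might be combined in code. The attribute names (gender, age, salary) come from the slide, but the per-attribute distance definitions below (a 0/1 mismatch distance for gender and absolute differences for age and salary) are assumptions, not something the slide specifies:

```python
import math

def d_gender(a, b):
    return 0.0 if a["gender"] == b["gender"] else 1.0   # assumed 0/1 mismatch distance

def d_age(a, b):
    return abs(a["age"] - b["age"])                      # assumed absolute difference

def d_salary(a, b):
    return abs(a["salary"] - b["salary"])                # assumed absolute difference

def d_sum(a, b):
    return d_gender(a, b) + d_age(a, b) + d_salary(a, b)

def d_euclid(a, b):
    return math.sqrt(d_gender(a, b) ** 2 + d_age(a, b) ** 2 + d_salary(a, b) ** 2)
```

Note that D_norm additionally needs max(D_sum), the largest D_sum value over all record pairs in the data set, so it cannot be computed from two records alone.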


Slide 20: Remarks on Lazy vs. Eager Learning
- Instance-based learning: lazy evaluation
- Decision-tree and Bayesian classification: eager evaluation
- Key differences:
  - a lazy method may consider the query instance xq when deciding how to generalize beyond the training data D
  - an eager method cannot, since it has already chosen its global approximation before seeing the query
- Efficiency: lazy methods spend less time training but more time predicting
- Accuracy:
  - lazy methods effectively use a richer hypothesis space, since they use many local linear functions to form an implicit global approximation to the target function
  - eager methods must commit to a single hypothesis that covers the entire instance space

