Classification with Decision Trees and Rules Evgueni Smirnov.


1 Classification with Decision Trees and Rules Evgueni Smirnov

2 Overview Classification Problem Decision Trees for Classification Decision Rules for Classification

3 Classification Task Given: an instance space X defined by the variables {X_i}, i ∈ 1..N, where each X_i is a discrete or continuous variable; a finite class set Y; training data D ⊆ X × Y. Find: the class y ∈ Y of an instance x ∈ X.

4 Instances, Classes, Instance Spaces A class is a set of objects in a world that are unified by a reason. A reason may be a similar appearance, structure or function. Example: the set {children, photos, cat, diplomas} can be viewed as the class “Most important things to take out of your apartment when it catches fire”. (Figure: instances of the class “friendly robots”.)

5 Instances, Classes, Instance Spaces (Figure: the instance space X and the class of friendly robots; one instance is described by the variable values head = square, body = round, smiling = yes, holding = flag, color = yellow.)

6 Instances, Classes, Instance Spaces (Figure: the instance space X, the target class M of friendly robots, and a hypothesis H given by the rule smiling = yes → friendly robot; the instance head = square, body = round, smiling = yes, holding = flag, color = yellow is covered by H.)

7 Classification Problem (Figure: the instance space X with the target class M and a hypothesis H that only approximates M.)

8 Decision Trees for Classification Classification Problem Definition of Decision Trees Variable Selection: Impurity Reduction, Entropy, and Information Gain Learning Decision Trees Overfitting and Pruning Handling Variables with Many Values Handling Missing Values Handling Large Data: Windowing

9 Decision Trees for Classification A decision tree is a tree where: –Each interior node tests a variable –Each branch corresponds to a variable value –Each leaf node is labelled with a class (class node) (Figure: a decision tree testing variables A1, A2, A3, with branches for values a11, a12, a13, a21, a22, a31, a32 and leaves labelled with classes c1 and c2.)

10 A simple database: playtennis

Day  Outlook   Temperature  Humidity  Wind    Play Tennis
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         Normal    Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         High      Strong  Yes
D8   Sunny     Mild         Normal    Weak    No
D9   Sunny     Hot          Normal    Weak    Yes
D10  Rain      Mild         Normal    Strong  Yes
D11  Sunny     Cool         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No

11 Decision Tree for Playing Tennis
Outlook = sunny → Humidity: high → no, normal → yes
Outlook = overcast → yes
Outlook = rainy → Windy: true → no, false → yes

12 Classification with Decision Trees Classify(x: instance, node: a node of the decision tree) if node is a classification (leaf) node then –return the class of node; else –determine the child of node that matches x; –return Classify(x, child). (Figure: the same example tree over variables A1, A2, A3 as on slide 9.)
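A minimal Python sketch of this recursive procedure (illustrative only: the nested-dictionary tree representation and the attribute names are assumptions, not part of the slides):

```python
# A leaf is just a class label (e.g. "yes"); an interior node is a dict:
# {"variable": <tested attribute>, "branches": {<value>: <subtree>, ...}}

def classify(x, node):
    """Route instance x (a dict attribute -> value) down the tree."""
    if not isinstance(node, dict):         # classification (leaf) node
        return node
    value = x[node["variable"]]            # value of the tested variable in x
    child = node["branches"][value]        # branch that matches x
    return classify(x, child)

# Example: the playtennis tree from slide 11
tree = {"variable": "Outlook",
        "branches": {"sunny": {"variable": "Humidity",
                               "branches": {"high": "no", "normal": "yes"}},
                     "overcast": "yes",
                     "rainy": {"variable": "Windy",
                               "branches": {"true": "no", "false": "yes"}}}}

print(classify({"Outlook": "sunny", "Humidity": "normal", "Windy": "false"}, tree))  # yes
```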

13 Decision Tree Learning Basic Algorithm: 1. X_i ← the “best” decision variable for a node N. 2. Assign X_i as the decision variable for node N. 3. For each value of X_i, create a new descendant of N. 4. Sort the training examples to the leaf nodes. 5. IF the training examples are perfectly classified, THEN stop. ELSE iterate over the new leaf nodes.

14 Variable Quality Measures Splitting on Outlook partitions the training sample into three subsets:

Outlook = Sunny:
  Outlook  Temp  Hum     Wind    Play
  Sunny    Hot   High    Weak    No
  Sunny    Hot   High    Strong  No
  Sunny    Mild  High    Weak    No
  Sunny    Cool  Normal  Weak    Yes
  Sunny    Mild  Normal  Strong  Yes

Outlook = Overcast:
  Outlook   Temp  Hum     Wind    Play
  Overcast  Hot   High    Weak    Yes
  Overcast  Cool  Normal  Strong  Yes

Outlook = Rain:
  Outlook  Temp  Hum     Wind    Play
  Rain     Mild  High    Weak    Yes
  Rain     Cool  Normal  Weak    Yes
  Rain     Cool  Normal  Strong  No
  Rain     Mild  Normal  Weak    Yes
  Rain     Mild  High    Strong  No

15 Variable Quality Measures Let S be a sample of training instances and p_j be the proportion of instances of class j (j = 1,…,J) in S. Define an impurity measure I(S) that satisfies: –I(S) is minimum only when p_i = 1 and p_j = 0 for j ≠ i (all objects are of the same class); –I(S) is maximum only when p_j = 1/J for all j (there is exactly the same number of objects of all classes); –I(S) is symmetric with respect to p_1,…,p_J.

16 Reduction of Impurity: Discrete Variables The “best” variable is the variable X_i that determines a split maximizing the expected reduction of impurity:

ΔI(S, X_i) = I(S) − Σ_j ( |S_xij| / |S| ) · I(S_xij)

where S_xij is the subset of instances from S such that X_i = x_ij.

17 Information Gain: Entropy Let S be a sample of training examples, and let p_+ be the proportion of positive examples in S and p_- the proportion of negative examples in S. The entropy measures the impurity of S: E(S) = − p_+ · log_2 p_+ − p_- · log_2 p_-

18 Entropy Example In the Play Tennis dataset we have two target classes: yes and no. Out of 14 instances, 9 are classified yes and the remaining 5 no.

Outlook   Temp.  Humidity  Windy  Play
Sunny     Hot    High      False  No
Sunny     Hot    High      True   No
Overcast  Hot    High      False  Yes
Rainy     Mild   High      False  Yes
Rainy     Cool   Normal    False  Yes
Rainy     Cool   Normal    True   No
Overcast  Cool   Normal    True   Yes
Sunny     Mild   High      False  No
Sunny     Cool   Normal    False  Yes
Rainy     Mild   Normal    False  Yes
Sunny     Mild   Normal    True   Yes
Overcast  Mild   High      True   Yes
Overcast  Hot    Normal    False  Yes
Rainy     Mild   High      True   No
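Plugging these class counts into the entropy formula of the previous slide gives the impurity of the whole sample:

E(S) = − (9/14) · log_2(9/14) − (5/14) · log_2(5/14) ≈ 0.410 + 0.530 = 0.940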

19 Information Gain Information gain is the expected reduction in entropy caused by partitioning the instances from S according to a given discrete variable X_i:

Gain(S, X_i) = E(S) − Σ_j ( |S_xij| / |S| ) · E(S_xij)

where S_xij is the subset of instances from S such that X_i = x_ij.
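A small Python sketch of these two formulas; representing the sample as a list of (attribute-dictionary, class-label) pairs is an assumption made for illustration:

```python
import math
from collections import Counter

def entropy(examples):
    """E(S) = -sum_j p_j * log2(p_j) over the class proportions in S."""
    counts = Counter(label for _, label in examples)
    total = len(examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, variable):
    """Gain(S, X) = E(S) - sum_v |S_v|/|S| * E(S_v)."""
    total = len(examples)
    partitions = {}
    for x, label in examples:
        partitions.setdefault(x[variable], []).append((x, label))
    remainder = sum(len(s) / total * entropy(s) for s in partitions.values())
    return entropy(examples) - remainder
```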

20 Example The training sample is split by Outlook as on slide 14; consider the Sunny subset:

Outlook  Temp  Hum     Wind    Play
Sunny    Hot   High    Weak    No
Sunny    Hot   High    Strong  No
Sunny    Mild  High    Weak    No
Sunny    Cool  Normal  Weak    Yes
Sunny    Mild  Normal  Strong  Yes

Which attribute should be tested here?
Gain(S_sunny, Humidity) = .970 − (3/5)·0.0 − (2/5)·0.0 = .970
Gain(S_sunny, Temperature) = .970 − (2/5)·0.0 − (2/5)·1.0 − (1/5)·0.0 = .570
Gain(S_sunny, Wind) = .970 − (2/5)·1.0 − (3/5)·.918 = .019

21 Continuous Variables Sort the instances by the value of the continuous variable and consider a candidate split point between each pair of consecutive distinct values:

Unsorted (Temp., Play): (80, No), (85, No), (83, Yes), (75, Yes), (68, Yes), (65, No), (64, Yes), (72, No), (75, Yes), (70, Yes), (69, Yes), (72, Yes), (81, Yes), (71, No)

Sorted (Temp., Play): (64, Yes), (65, No), (68, Yes), (69, Yes), (70, Yes), (71, No), (72, No), (72, Yes), (75, Yes), (75, Yes), (80, No), (81, Yes), (83, Yes), (85, No)

Candidate splits: Temp. < 64.5 → I = 0.048; Temp. < 66.5 → I = 0.010; Temp. < 70.5 → I = 0.045; Temp. < 73.5 → I = 0.001; Temp. < 77.5 → I = 0.025; Temp. < 80.5 → I = 0.000; Temp. < 84 → I = 0.113
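A hedged sketch of how such candidate thresholds can be scored. The candidate points are the midpoints between consecutive distinct sorted values (64.5, 66.5, 70.5, …, 84, as on the slide); the score used here is the entropy-based impurity reduction, which need not reproduce the exact I values shown above:

```python
import math
from collections import Counter

def label_entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def best_threshold(pairs):
    """pairs: list of (numeric value, class label).  Return the midpoint
    threshold with the largest impurity (entropy) reduction."""
    data = sorted(pairs)
    n = len(data)
    base = label_entropy([lbl for _, lbl in data])
    best_t, best_gain = None, -1.0
    for i in range(1, n):
        if data[i - 1][0] == data[i][0]:
            continue                              # no threshold between equal values
        t = (data[i - 1][0] + data[i][0]) / 2.0   # midpoint candidate
        left = [lbl for v, lbl in data if v < t]
        right = [lbl for v, lbl in data if v >= t]
        gain = base - len(left) / n * label_entropy(left) \
                    - len(right) / n * label_entropy(right)
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain

temps = [(80, "No"), (85, "No"), (83, "Yes"), (75, "Yes"), (68, "Yes"), (65, "No"),
         (64, "Yes"), (72, "No"), (75, "Yes"), (70, "Yes"), (69, "Yes"), (72, "Yes"),
         (81, "Yes"), (71, "No")]
print(best_threshold(temps))
```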

22 ID3 Algorithm Informally: –Determine the variable with the highest information gain on the training set. –Use this variable as the root and create a branch for each of the values the variable can have. –For each branch, repeat the process with the subset of the training set that is classified by that branch.
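A compact recursive sketch of ID3 along these lines, reusing the information_gain helper from the sketch after slide 19 (a simplification: no handling of empty branches, continuous variables or missing values):

```python
from collections import Counter

def id3(examples, variables):
    """examples: list of (attribute-dict, class label); variables: attributes still available."""
    labels = [lbl for _, lbl in examples]
    if len(set(labels)) == 1:                   # training examples perfectly classified
        return labels[0]
    if not variables:                           # nothing left to test: majority class
        return Counter(labels).most_common(1)[0][0]
    best = max(variables, key=lambda v: information_gain(examples, v))
    node = {"variable": best, "branches": {}}
    partitions = {}
    for x, lbl in examples:
        partitions.setdefault(x[best], []).append((x, lbl))
    for value, subset in partitions.items():
        node["branches"][value] = id3(subset, [v for v in variables if v != best])
    return node
```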

23 Hypothesis Space Search in ID3 The hypothesis space is the set of all decision trees defined over the given set of variables. ID3’s hypothesis space is a complete space; i.e., the target tree is there! ID3 performs a simple-to-complex, hill-climbing search through this space.

24 Hypothesis Space Search in ID3 The evaluation function is the information gain. ID3 maintains only a single current decision tree. ID3 performs no backtracking in its search. ID3 uses all training instances at each step of the search.

25 Decision Trees are Non-linear Classifiers (Figure: a decision tree over two continuous variables with tests such as A2 < 0.33, A1 < 0.91, A1 < 0.23, A2 < 0.91, A2 < 0.75, A2 < 0.49 and A2 < 0.65; the leaves are labelled good or bad, and the resulting decision boundary is a set of axis-parallel splits.)

26 Posterior Class Probabilities Outlook = Sunny: 2 pos and 3 neg → P_pos = 0.4, P_neg = 0.6. Outlook = Overcast: 3 pos and 0 neg → P_pos = 1.0, P_neg = 0.0. Outlook = Rainy, Windy = False: 2 pos and 0 neg → P_pos = 1.0, P_neg = 0.0. Outlook = Rainy, Windy = True: 0 pos and 2 neg → P_pos = 0.0, P_neg = 1.0.

27 Overfitting Definition: Given a hypothesis space H, a hypothesis h ∈ H is said to overfit the training data if there exists some hypothesis h’ ∈ H such that h has a smaller error than h’ over the training instances, but h’ has a smaller error than h over the entire distribution of instances.

28 Reasons for Overfitting Noisy training instances. Consider a noisy training example: Outlook = Sunny; Temp = Hot; Humidity = Normal; Wind = True; PlayTennis = No This instance affects the training instances: Outlook = Sunny; Temp = Cool; Humidity = Normal; Wind = False; PlayTennis = Yes Outlook = Sunny; Temp = Mild; Humidity = Normal; Wind = True; PlayTennis = Yes (Figure: the playtennis tree from slide 11.)

29 Reasons for Overfitting (Figure: because of the noisy instance, the Humidity = normal branch is grown further with additional Windy and Temp tests instead of remaining a single yes leaf.) Outlook = Sunny; Temp = Hot; Humidity = Normal; Wind = True; PlayTennis = No Outlook = Sunny; Temp = Cool; Humidity = Normal; Wind = False; PlayTennis = Yes Outlook = Sunny; Temp = Mild; Humidity = Normal; Wind = True; PlayTennis = Yes

30 Reasons for Overfitting A small number of instances is associated with leaf nodes. In this case it is possible for coincidental regularities to occur that are unrelated to the actual class borders. (Figure: positive and negative instances in a two-dimensional instance space, with an area near the border marked “area with probably wrong predictions”.)

31 Approaches to Avoiding Overfitting Pre-pruning: stop growing the tree earlier, before it reaches the point where it perfectly classifies the training data Post-pruning: Allow the tree to overfit the data, and then post-prune the tree.

32 Pre-pruning It is difficult to decide when to stop growing the tree. A possible strategy is to stop when the leaf nodes would receive fewer than m training instances. Here is an example for m = 5. (Figure: the playtennis tree is cut back to a single Outlook test with leaves no, yes and ?, because the Humidity and Windy branches would cover only 2 or 3 instances each.)

33 Validation Set A validation set is a set of instances used to evaluate the utility of nodes in decision trees. The validation set has to be chosen so that it is unlikely to suffer from the same errors or fluctuations as the set used for decision-tree training. Usually, before pruning, the training data is split randomly into a growing set and a validation set.

34 Reduced-Error Pruning (Sub-tree replacement) Split data into growing and validation sets. Pruning a decision node d consists of: 1. removing the subtree rooted at d; 2. making d a leaf node; 3. assigning d the most common classification of the training instances associated with d. (Figure: the playtennis tree from slide 11, with 3 instances and 2 instances associated with its two sub-trees. Accuracy of the tree on the validation set is 90%.)

35 Reduced-Error Pruning (Sub-tree replacement) Split data into growing and validation sets. Pruning a decision node d consists of: 1. removing the subtree rooted at d; 2. making d a leaf node; 3. assigning d the most common classification of the training instances associated with d. (Figure: the tree after replacing the Humidity sub-tree with a no leaf: Outlook = sunny → no, overcast → yes, rainy → Windy. Accuracy of the tree on the validation set is 92.4%.)

36 Reduced-Error Pruning (Sub-tree replacement) Split data into growing and validation sets. Pruning a decision node d consists of: 1. removing the subtree rooted at d; 2. making d a leaf node; 3. assigning d the most common classification of the training instances associated with d. Do until further pruning is harmful: 1. Evaluate the impact on the validation set of pruning each possible node (plus those below it). 2. Greedily remove the one that most improves validation-set accuracy. (Figure: the pruned tree; accuracy on the validation set is 92.4%.)
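A rough Python sketch of this greedy procedure for the nested-dictionary trees used in the earlier sketches (illustrative only: unseen attribute values and empty branches are handled crudely):

```python
import copy
from collections import Counter

def classify(x, node):
    """Route x down the tree; return None for branch values unseen while growing."""
    while isinstance(node, dict):
        node = node["branches"].get(x[node["variable"]])
    return node

def accuracy(tree, examples):
    return sum(classify(x, tree) == y for x, y in examples) / len(examples)

def interior_paths(node, path=()):
    """Yield the branch-value path to every interior (decision) node."""
    if isinstance(node, dict):
        yield path
        for value, child in node["branches"].items():
            yield from interior_paths(child, path + (value,))

def majority_at(tree, growing, path):
    """Most common class of the growing-set instances routed along `path`."""
    node = tree
    for value in path:
        growing = [(x, y) for x, y in growing if x[node["variable"]] == value]
        node = node["branches"][value]
    labels = Counter(y for _, y in growing)
    return labels.most_common(1)[0][0] if labels else None   # None: no instances reach node

def prune_at(tree, path, leaf_label):
    """Return a copy of the tree with the node at `path` replaced by a leaf."""
    if not path:
        return leaf_label
    tree = copy.deepcopy(tree)
    node = tree
    for value in path[:-1]:
        node = node["branches"][value]
    node["branches"][path[-1]] = leaf_label
    return tree

def reduced_error_pruning(tree, growing, validation):
    while True:
        best_tree, best_acc = tree, accuracy(tree, validation)
        for path in interior_paths(tree):
            candidate = prune_at(tree, path, majority_at(tree, growing, path))
            acc = accuracy(candidate, validation)
            if acc > best_acc:
                best_tree, best_acc = candidate, acc
        if best_tree is tree:          # no replacement improves validation accuracy
            return tree
        tree = best_tree               # greedily apply the best replacement and repeat
```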

37 Reduced-Error Pruning (Sub-tree replacement) (Figure: a sequence of trees T1–T5 obtained by successive sub-tree replacement, from the full tree testing Outlook, Humidity, Wind and Temp down to a single yes leaf, together with their errors on the growing set (GS) and validation set (VS): Error_GS = 0%, Error_VS = 10%; Error_GS = 6%, Error_VS = 8%; Error_GS = 13%, Error_VS = 15%; Error_GS = 27%, Error_VS = 25%; Error_GS = 33%, Error_VS = 35%.)

38 Reduced Error Pruning Example

39 Reduced-Error Pruning (Sub-tree raising) Split data into growing and validation sets. Raising a sub-tree with root d consists of: 1. removing the sub-tree rooted at the parent of d; 2. placing d in the place of its parent; 3. sorting the training instances associated with the parent of d using the sub-tree with root d. (Figure: the playtennis tree from slide 11, with 3 instances and 2 instances associated with its two sub-trees. Accuracy of the tree on the validation set is 90%.)

40 Reduced-Error Pruning (Sub-tree raising) Split data into growing and validation sets. Raising a sub-tree with root d consists of: 1. removing the sub-tree rooted at the parent of d; 2. placing d in the place of its parent; 3. sorting the training instances associated with the parent of d using the sub-tree with root d. (Figure: the same tree, highlighting the Humidity sub-tree considered for raising. Accuracy of the tree on the validation set is 90%.)

41 Reduced-Error Pruning (Sub-tree raising) Split data into growing and validation sets. Raising a sub-tree with root d consists of: 1. removing the sub-tree rooted at the parent of d; 2. placing d in the place of its parent; 3. sorting the training instances associated with the parent of d using the sub-tree with root d. (Figure: the raised Humidity sub-tree, high → no, normal → yes, used as the whole tree. Accuracy of the tree on the validation set is 73%. So, no!)

42 Rule Post-Pruning 1. Convert the tree to an equivalent set of rules. 2. Prune each rule independently of the others. 3. Sort the final rules by their estimated accuracy, and consider them in this sequence when classifying subsequent instances. IF (Outlook = Sunny) & (Humidity = High) THEN PlayTennis = No IF (Outlook = Sunny) & (Humidity = Normal) THEN PlayTennis = Yes ………. (Figure: the playtennis decision tree from slide 11 that these rules are read off.)

43 Decision Trees are Non-linear. Can We Make Them Linear? (Figure: the same decision tree over continuous variables A1 and A2 as on slide 25.)

44 Oblique Decision Trees (Figure: a single oblique test x + y < 1 separating Class = + from Class = −.) The test condition may involve multiple attributes. More expressive representation. Finding the optimal test condition is computationally expensive!

45 Variables with Many Values Problem: –Such splits are not good: they fragment the data too quickly, leaving insufficient data at the next level. –The reduction of impurity of such a test is often high (example: a split on the object id). Two solutions: –Change the splitting criterion to penalize variables with many values. –Consider only binary splits. (Figure: a split on a variable Letter with one branch per value a, b, c, …, y, z.)

46 Variables with Many Values Example: outlook in the playtennis data –InfoGain(outlook) = 0.246 –SplitInformation(outlook) = 1.577 –GainRatio(outlook) = 0.246/1.577 = 0.156 < 0.246 Problem: the gain ratio favours unbalanced tests.
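A small sketch of this penalized criterion (the gain ratio used by C4.5), reusing the information_gain helper from the sketch after slide 19; the numbers 0.246, 1.577 and 0.156 above come from the slide, not from running this code:

```python
import math
from collections import Counter

def split_information(examples, variable):
    """SplitInformation(S, X) = -sum_v (|S_v|/|S|) * log2(|S_v|/|S|)."""
    total = len(examples)
    value_counts = Counter(x[variable] for x, _ in examples)
    return -sum((c / total) * math.log2(c / total) for c in value_counts.values())

def gain_ratio(examples, variable):
    """GainRatio(S, X) = Gain(S, X) / SplitInformation(S, X)."""
    return information_gain(examples, variable) / split_information(examples, variable)
```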

47 Variables with Many Values

48

49 Missing Values 1. If node n tests variable X_i, assign the most common value of X_i among the other instances sorted to node n. 2. If node n tests variable X_i, assign a probability to each of the possible values of X_i. These probabilities are estimated from the observed frequencies of the values of X_i and are then used in the information gain measure: ΔI(S, X_i) = I(S) − Σ_j ( |S_xij| / |S| ) · I(S_xij)

50 Windowing If the data do not fit in main memory, use windowing: 1. Select randomly n instances from the training data D and put them in window set W. 2. Train a decision tree DT on W. 3. Determine a set M of instances from D misclassified by DT. 4. W = W ∪ M. 5. IF Not(StopCondition) THEN GoTo 2;
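A loose Python sketch of this loop; learn_tree and classify are assumed to be supplied by the caller (for instance the id3 and classify sketches above), and the stop condition here is simply “no newly misclassified instances, or a maximum number of rounds”:

```python
import random

def windowing(train, learn_tree, classify, n, max_rounds=10):
    """Grow the window with the instances the current tree misclassifies."""
    window = random.sample(train, n)          # step 1: random initial window
    tree = learn_tree(window)                 # step 2: train on the window
    for _ in range(max_rounds):               # assumed, simplistic stop condition
        misclassified = [(x, y) for x, y in train
                         if (x, y) not in window and classify(x, tree) != y]  # step 3
        if not misclassified:
            break
        window.extend(misclassified)          # step 4: W = W U M
        tree = learn_tree(window)             # step 5: iterate
    return tree
```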

51 Summary Points 1.Decision tree learning provides a practical method for concept learning. 2.ID3-like algorithms search complete hypothesis space. 3.The inductive bias of decision trees is preference (search) bias. 4.Overfitting the training data is an important issue in decision tree learning. 5.A large number of extensions of the ID3 algorithm have been proposed for overfitting avoidance, handling missing attributes, handling numerical attributes, etc.

52 Learning Decision Rules Decision Rules Basic Sequential Covering Algorithm Learn-One-Rule Procedure Pruning

53 Definition of Decision Rules Example: if you run the Prism algorithm from Weka on the weather data you will get the following set of decision rules:
if outlook = overcast then PlayTennis = yes
if humidity = normal and windy = FALSE then PlayTennis = yes
if temperature = mild and humidity = normal then PlayTennis = yes
if outlook = rainy and windy = FALSE then PlayTennis = yes
if outlook = sunny and humidity = high then PlayTennis = no
if outlook = rainy and windy = TRUE then PlayTennis = no
Definition: decision rules are rules of the following form: if <conditions> then concept C.

54 Why Decision Rules? Decision rules are more compact. Decision rules are more understandable. Example: let X ∈ {0,1}, Y ∈ {0,1}, Z ∈ {0,1}, W ∈ {0,1}. The rules are: if X=1 and Y=1 then 1; if Z=1 and W=1 then 1; otherwise 0. (Figure: the equivalent decision tree has to test X, Y, Z and W and duplicate the Z–W sub-tree, so it is much larger than the two rules.)

55 Why Decision Rules? (Figure: decision boundaries of decision trees vs. decision boundaries of decision rules on the same set of positive and negative instances.)

56 How to Learn Decision Rules? 1.We can convert trees to rules 2.We can use specific rule-learning methods

57 Sequential Covering Algorithms function LearnRuleSet(Target, Attrs, Examples, Threshold): LearnedRules := ∅ Rule := LearnOneRule(Target, Attrs, Examples) while performance(Rule, Examples) > Threshold, do LearnedRules := LearnedRules ∪ {Rule} Examples := Examples \ {examples covered by Rule} Rule := LearnOneRule(Target, Attrs, Examples) sort LearnedRules according to performance return LearnedRules
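A near-literal Python transcription of this pseudocode; the helpers learn_one_rule, performance and covers are assumed to be supplied by the caller:

```python
def learn_rule_set(target, attrs, examples, threshold,
                   learn_one_rule, performance, covers):
    """Sequential covering: learn a rule, remove the examples it covers, repeat."""
    learned = []                                        # (rule, performance) pairs
    rule = learn_one_rule(target, attrs, examples)
    while examples and performance(rule, examples) > threshold:
        learned.append((rule, performance(rule, examples)))
        examples = [e for e in examples if not covers(rule, e)]
        if examples:
            rule = learn_one_rule(target, attrs, examples)
    learned.sort(key=lambda rp: rp[1], reverse=True)    # sort rules by performance
    return [r for r, _ in learned]
```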

58 Illustration IF true THEN pos (Figure: positive and negative instances in the instance space.)

59 Illustration IF A THEN pos (Figure: positive and negative instances.)

60 Illustration IF true THEN pos; IF A THEN pos; IF A & B THEN pos (Figure: positive and negative instances.)

61 Illustration IF true THEN pos; IF A & B THEN pos (Figure: positive and negative instances.)

62 Illustration IF true THEN pos; IF C THEN pos; IF A & B THEN pos (Figure: positive and negative instances.)

63 Illustration IF true THEN pos; IF C THEN pos; IF C & D THEN pos; IF A & B THEN pos (Figure: positive and negative instances.)

64 Learning One Rule To learn one rule we use one of the strategies below: Top-down: –Start with maximally general rule –Add literals one by one Bottom-up: –Start with maximally specific rule –Remove literals one by one
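A rough sketch of the top-down strategy (greedy, no beam search, evaluation by relative frequency); the dataset representation and helper names are assumptions made for illustration:

```python
def accuracy_of_rule(covered, target_class):
    """Relative frequency n_c / n of the target class among the covered instances."""
    return sum(y == target_class for _, y in covered) / len(covered) if covered else 0.0

def learn_one_rule_topdown(examples, attrs, target_class, evaluate=accuracy_of_rule):
    """Start from the maximally general rule (no conditions) and greedily add the
    attribute = value literal that most improves the evaluation of the covered set."""
    conditions = {}                        # attribute -> required value
    covered = examples
    while True:
        current = evaluate(covered, target_class)
        best = None                        # (score, attr, value, new covered set)
        for attr in attrs:
            if attr in conditions:
                continue
            for value in {x[attr] for x, _ in covered}:
                subset = [(x, y) for x, y in covered if x[attr] == value]
                score = evaluate(subset, target_class)
                if best is None or score > best[0]:
                    best = (score, attr, value, subset)
        if best is None or best[0] <= current:
            return conditions              # no literal improves the rule: stop
        _, attr, value, covered = best
        conditions[attr] = value           # add the best literal and continue
```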

65 Bottom-up vs. Top-down Top-down search typically produces more general rules; bottom-up search typically produces more specific rules. (Figure: positive and negative instances with a rule found top-down and a rule found bottom-up.)

66 Learning One Rule Bottom-up: Example-driven (AQ family). Top-down: Generate-then-Test (CN-2).

67 Example of Learning One Rule

68 Heuristics for Learning One Rule –When is a rule “good”? High accuracy; less important: high coverage. –Possible evaluation functions: Relative frequency: n_c/n, where n_c is the number of correctly classified instances and n is the number of instances covered by the rule; m-estimate of accuracy: (n_c + m·p)/(n + m), where n_c is the number of correctly classified instances, n is the number of instances covered by the rule, p is the prior probability of the class predicted by the rule, and m is the weight of p; Entropy.
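The first two evaluation functions as tiny Python helpers, with an invented example (5 of 6 covered instances correct, class prior 0.5) showing how the m-estimate pulls the raw accuracy of a small rule toward the prior:

```python
def relative_frequency(n_c, n):
    """Rule accuracy as relative frequency n_c / n."""
    return n_c / n

def m_estimate(n_c, n, p, m):
    """m-estimate of accuracy: (n_c + m * p) / (n + m)."""
    return (n_c + m * p) / (n + m)

print(relative_frequency(5, 6))       # 0.833...
print(m_estimate(5, 6, p=0.5, m=2))   # (5 + 1) / 8 = 0.75
```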

69 How to Arrange the Rules 1. The rules are ordered according to the order in which they have been learned. This order is used for instance classification. 2. The rules are ordered according to their accuracy. This order is used for instance classification. 3. The rules are not ordered, but there exists a strategy for how to apply them (e.g., an instance covered by conflicting rules gets the classification of the rule that correctly classifies more training instances; if an instance is not covered by any rule, it gets the classification of the majority class in the training data).

70 Approaches to Avoiding Overfitting Pre-pruning: stop learning the decision rules before they reach the point where they perfectly classify the training data. Post-pruning: allow the decision rules to overfit the training data, and then post-prune the rules.

71 Post-Pruning 1.Split instances into Growing Set and Pruning Set; 2.Learn set SR of rules using Growing Set; 3.Find the best simplification BSR of SR. 4.while (Accuracy(BSR, Pruning Set) > Accuracy(SR, Pruning Set) ) do 4.1 SR = BSR; 4.2 Find the best simplification BSR of SR. 5. return BSR;

72 Incremental Reduced Error Pruning (Figure: comparison of the data splits D1, D2, D3, … used by incremental reduced error pruning with the single split used by post-pruning.)

73 Incremental Reduced Error Pruning 1. Split Training Set into Growing Set and Validation Set; 2. Learn rule R using Growing Set; 3. Prune the rule R using Validation Set; 4. if performance(R, Training Set) > Threshold 4.1 Add R to Set of Learned Rules 4.2 Remove from Training Set the instances covered by R; 4.3 go to 1; 5. else return Set of Learned Rules
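A loose Python sketch following these steps; the 2/3–1/3 split and the helper functions (learn_rule, prune_rule, performance, covers) are assumptions:

```python
import random

def incremental_rep(training, learn_rule, prune_rule, performance, covers, threshold):
    """Learn a rule on a growing set, prune it on a validation set,
    keep it only if it performs well enough on the full training set."""
    learned_rules = []
    while training:
        random.shuffle(training)
        split = 2 * len(training) // 3                # assumed 2/3 grow / 1/3 validate split
        growing, validation = training[:split], training[split:]
        rule = prune_rule(learn_rule(growing), validation)   # steps 2 and 3
        if performance(rule, training) <= threshold:
            return learned_rules                      # step 5: stop and return the rule set
        learned_rules.append(rule)                    # step 4.1
        training = [e for e in training if not covers(rule, e)]   # step 4.2
    return learned_rules
```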

74 Summary Points 1.Decision rules are easier for human comprehension than decision trees. 2.Decision rules have simpler decision boundaries than decision trees. 3.Decision rules are learned by sequential covering of the training instances.

75 Lab 1: Some Details

76 Model Evaluation Techniques Evaluation on the training set: too optimistic. (Figure: the same training set is used both to build the classifier and to evaluate it.)

77 Model Evaluation Techniques Hold-out method: the estimate depends on the make-up of the test set. (Figure: the data is split into a training set used to build the classifier and a test set used to evaluate it.) To improve the precision of the hold-out method, it is repeated many times.

78 Model Evaluation Techniques k-fold cross-validation. (Figure: the data is split into k folds; in each round one fold is used for testing and the remaining folds for training the classifier.)
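A short sketch of k-fold cross-validation with a simple interleaved split (the fold assignment and helper names are illustrative; in practice the split is usually randomized and possibly stratified):

```python
def k_fold_cross_validation(data, k, train, evaluate):
    """Each of the k folds serves once as the test set while the other k-1
    folds form the training set; the k accuracies are averaged."""
    folds = [data[i::k] for i in range(k)]        # simple interleaved fold assignment
    scores = []
    for i in range(k):
        train_part = [e for j, fold in enumerate(folds) if j != i for e in fold]
        classifier = train(train_part)
        scores.append(evaluate(classifier, folds[i]))
    return sum(scores) / k
```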

79 Intro to Weka
@relation weather.symbolic
@attribute outlook {sunny, overcast, rainy}
@attribute temperature {hot, mild, cool}
@attribute humidity {high, normal}
@attribute windy {TRUE, FALSE}
@attribute play {TRUE, FALSE}
@data
sunny,hot,high,FALSE,FALSE
sunny,hot,high,TRUE,FALSE
overcast,hot,high,FALSE,TRUE
rainy,mild,high,FALSE,TRUE
rainy,cool,normal,FALSE,TRUE
rainy,cool,normal,TRUE,FALSE
overcast,cool,normal,TRUE,TRUE
………….

80 References Mitchell, T. M. (1997). Machine Learning. New York: McGraw-Hill. Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1(1), 81-106. Russell, S., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). New Jersey: Prentice Hall.

