
1 Università di Milano-Bicocca, Master's Degree in Computer Science (Laurea Magistrale in Informatica), course "APPRENDIMENTO E APPROSSIMAZIONE" (Learning and Approximation), Prof. Giancarlo Mauri. Lecture 3 - Learning Decision Trees

2 Outline
- Decision tree representation
- ID3 learning algorithm
- Entropy, information gain
- Overfitting

3 Decision Tree for PlayTennis
Outlook?
- Sunny → Humidity?  (High → No,  Normal → Yes)
- Overcast → Yes
- Rain → Wind?  (Strong → No,  Weak → Yes)

4 Decision Tree for PlayTennis
(same tree as above; the Sunny branch is tested on Humidity: High → No, Normal → Yes)
- Each internal node tests an attribute
- Each branch corresponds to an attribute value
- Each leaf node assigns a classification

5 Decision Tree for PlayTennis
Classify the new instance: Outlook=Sunny, Temperature=Hot, Humidity=High, Wind=Weak → PlayTennis = ?
Following the tree (Outlook=Sunny, then Humidity=High) gives PlayTennis = No.

6 Decision Tree for Conjunction: Outlook=Sunny ∧ Wind=Weak
Outlook?
- Sunny → Wind?  (Strong → No,  Weak → Yes)
- Overcast → No
- Rain → No

7 Decision Tree for Disjunction: Outlook=Sunny ∨ Wind=Weak
Outlook?
- Sunny → Yes
- Overcast → Wind?  (Strong → No,  Weak → Yes)
- Rain → Wind?  (Strong → No,  Weak → Yes)

8 Decision Tree for XOR: Outlook=Sunny XOR Wind=Weak
Outlook?
- Sunny → Wind?  (Strong → Yes,  Weak → No)
- Overcast → Wind?  (Strong → No,  Weak → Yes)
- Rain → Wind?  (Strong → No,  Weak → Yes)

9 Decision Tree
Outlook?
- Sunny → Humidity?  (High → No,  Normal → Yes)
- Overcast → Yes
- Rain → Wind?  (Strong → No,  Weak → Yes)
Decision trees represent disjunctions of conjunctions:
(Outlook=Sunny ∧ Humidity=Normal) ∨ (Outlook=Overcast) ∨ (Outlook=Rain ∧ Wind=Weak)

10 Learning decision trees
Problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)

11 Attribute-based representations  Examples described by attribute values (Boolean, discrete, continuous)  E.g., situations where I will/won't wait for a table:  Classification of examples is positive (T) or negative (F)

12 Decision trees  One possible representation for hypotheses  E.g., here is the “true” tree for deciding whether to wait:

13 Expressiveness  Decision trees can express any function of the input attributes  E.g., for Boolean functions, truth table row → path to leaf:  Trivially, there is a consistent decision tree for any training set with one path to leaf for each example (unless f nondeterministic in x) but it probably won't generalize to new examples  Prefer to find more compact decision trees (Occam’s razor)

14 Hypothesis spaces
How many distinct decision trees are there with n Boolean attributes?
= number of Boolean functions
= number of distinct truth tables with 2^n rows
= 2^(2^n)
E.g., with 6 Boolean attributes there are 2^64 = 18,446,744,073,709,551,616 trees

15 Hypothesis spaces
How many purely conjunctive hypotheses are there (e.g., Hungry ∧ ¬Rain)?
- Each attribute can be in (positive), in (negative), or out
- 3^n distinct conjunctive hypotheses
A more expressive hypothesis space:
- increases the chance that the target function can be expressed
- increases the number of hypotheses consistent with the training set
- may give worse predictions

16 When to consider Decision Trees  Instances describable by attribute-value pairs  Target function is discrete valued  Disjunctive hypothesis may be required  Possibly noisy training data  Missing attribute values  Examples:  Medical diagnosis  Credit risk analysis  Object classification for robot manipulator (Tan 1993)

17 Decision tree learning  Aim: find a small tree consistent with the training examples  Idea: (recursively) choose "most significant" attribute as root of (sub)tree

18 ID3 - Top-Down Induction of Decision Trees
1. A ← the "best" decision attribute for the next node
2. Assign A as the decision attribute for the node
3. For each value of A, create a new descendant
4. Sort the training examples to the leaf nodes according to the attribute value of the branch
5. If all training examples are perfectly classified (same value of the target attribute) stop; otherwise iterate over the new leaf nodes
(a runnable sketch of this loop appears after slide 33 below)

19 Which Attribute is "best"?
S = [29+, 35-]
A1 = ?  True → [21+, 5-],  False → [8+, 30-]
A2 = ?  True → [18+, 33-],  False → [11+, 2-]

20 Choosing an attribute  Idea: a good attribute splits the examples into subsets that are (ideally) "all positive" or "all negative"  Patrons? is a better choice

21 Using information theory
- Training set S with possible values v_i, i = 1…n
- Def. Information Content (Entropy) of S:
I(P(v_1), …, P(v_n)) = Σ_{i=1..n} −P(v_i) log2 P(v_i)
- In the boolean case, with S containing p positive and n negative examples:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))

22 Entropy
- S is a set of training examples
- p+ is the proportion of positive examples in S
- p− is the proportion of negative examples in S
- Entropy measures the impurity of S:
Entropy(S) = −p+ log2 p+ − p− log2 p−
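A minimal Python sketch of the binary-case formula (the 9/14, 5/14 counts are those of the PlayTennis training set on slide 29):

```python
import math

def entropy(p_pos, p_neg):
    """Entropy of a boolean-labelled sample, in bits; 0*log(0) is treated as 0."""
    return sum(-p * math.log2(p) for p in (p_pos, p_neg) if p > 0)

print(entropy(9/14, 5/14))   # ~0.940, the value used on slides 30-31
```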

23 Entropy
- Entropy(S) = expected number of bits needed to encode the class (+ or −) of a randomly drawn member of S (under the optimal, shortest-length code)
- Information theory: the optimal-length code assigns −log2 p bits to a message having probability p
- So the expected number of bits to encode the class (+ or −) of a random member of S is:
p+ (−log2 p+) + p− (−log2 p−) = −p+ log2 p+ − p− log2 p−

24 Information gain
- A chosen attribute A, with v distinct values, divides the training set E into subsets E_1, …, E_v according to their values of A. The entropy still remaining after the split is:
remainder(A) = Σ_{i=1..v} |E_i|/|E| · Entropy(E_i)
- Information Gain (IG) = expected reduction in entropy due to sorting E on attribute A:
IG(A) = Entropy(E) − remainder(A)
- Choose the attribute with the largest IG

25 Information Gain
IG(S,A) = Entropy(S) − Σ_{v ∈ values(A)} |S_v|/|S| · Entropy(S_v)
S = [29+, 35-]
A1 = ?  True → [21+, 5-],  False → [8+, 30-]
A2 = ?  True → [18+, 33-],  False → [11+, 2-]
Entropy([29+,35-]) = −29/64 log2(29/64) − 35/64 log2(35/64) = 0.99

26 Information gain
For the restaurant training set, p = n = 6, so I(6/12, 6/12) = 1 bit.
Consider the attributes Patrons and Type (and others too): Patrons has the highest IG of all attributes and so is chosen by the DTL algorithm as the root.

27 Example contd.
- Decision tree learned from the 12 examples:
- Substantially simpler than the "true" tree: a more complex hypothesis is not justified by the small amount of data

28 Information Gain
S = [29+, 35-]
A1 = ?  True → [21+, 5-],  False → [8+, 30-]
Entropy([21+,5-]) = 0.71
Entropy([8+,30-]) = 0.74
IG(S,A1) = Entropy(S) − (26/64)·Entropy([21+,5-]) − (38/64)·Entropy([8+,30-]) = 0.27
A2 = ?  True → [18+, 33-],  False → [11+, 2-]
Entropy([18+,33-]) = 0.94
Entropy([11+,2-]) = 0.62
IG(S,A2) = Entropy(S) − (51/64)·Entropy([18+,33-]) − (13/64)·Entropy([11+,2-]) = 0.12
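The numbers above can be checked with a few lines of Python (a sketch; the (positive, negative) counts are taken from the figure):

```python
import math

def H(p, n):
    """Entropy, in bits, of a sample with p positive and n negative examples."""
    return sum(-x / (p + n) * math.log2(x / (p + n)) for x in (p, n) if x > 0)

def gain(total, splits):
    """total: (p, n) for the whole set; splits: one (p, n) pair per attribute value."""
    size = sum(total)
    return H(*total) - sum((p + n) / size * H(p, n) for p, n in splits)

print(gain((29, 35), [(21, 5), (8, 30)]))    # A1: ~0.27
print(gain((29, 35), [(18, 33), (11, 2)]))   # A2: ~0.12
```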

29 Training Examples
Day  Outlook   Temp.  Humidity  Wind    PlayTennis
D1   Sunny     Hot    High      Weak    No
D2   Sunny     Hot    High      Strong  No
D3   Overcast  Hot    High      Weak    Yes
D4   Rain      Mild   High      Weak    Yes
D5   Rain      Cool   Normal    Weak    Yes
D6   Rain      Cool   Normal    Strong  No
D7   Overcast  Cool   Normal    Strong  Yes
D8   Sunny     Mild   High      Weak    No
D9   Sunny     Cool   Normal    Weak    Yes
D10  Rain      Mild   Normal    Weak    Yes
D11  Sunny     Mild   Normal    Strong  Yes
D12  Overcast  Mild   High      Strong  Yes
D13  Overcast  Hot    Normal    Weak    Yes
D14  Rain      Mild   High      Strong  No

30 Selecting the Next Attribute
S = [9+, 5-], E = 0.940
Humidity:  High → [3+, 4-] (E = 0.985),  Normal → [6+, 1-] (E = 0.592)
Gain(S, Humidity) = 0.940 − (7/14)·0.985 − (7/14)·0.592 = 0.151
Wind:  Weak → [6+, 2-] (E = 0.811),  Strong → [3+, 3-] (E = 1.0)
Gain(S, Wind) = 0.940 − (8/14)·0.811 − (6/14)·1.0 = 0.048

31 Selecting the Next Attribute
S = [9+, 5-], E = 0.940
Outlook:  Sunny → [2+, 3-] (E = 0.971),  Overcast → [4+, 0-] (E = 0.0),  Rain → [3+, 2-] (E = 0.971)
Gain(S, Outlook) = 0.940 − (5/14)·0.971 − (4/14)·0.0 − (5/14)·0.971 = 0.247

32 ID3 Algorithm
Root: Outlook, trained on [D1, …, D14] = [9+, 5-]
- Overcast → [D3,D7,D12,D13] = [4+, 0-] → leaf Yes
- Sunny → S_sunny = [D1,D2,D8,D9,D11] = [2+, 3-] → ?
- Rain → [D4,D5,D6,D10,D14] = [3+, 2-] → ?
Gain(S_sunny, Humidity) = 0.970 − (3/5)·0.0 − (2/5)·0.0 = 0.970
Gain(S_sunny, Temp.)    = 0.970 − (2/5)·0.0 − (2/5)·1.0 − (1/5)·0.0 = 0.570
Gain(S_sunny, Wind)     = 0.970 − (2/5)·1.0 − (3/5)·0.918 = 0.019

33 ID3 Algorithm
Outlook?
- Sunny → Humidity?  (High → No [D1,D2,D8],  Normal → Yes [D9,D11])
- Overcast → Yes [D3,D7,D12,D13]
- Rain → Wind?  (Strong → No [D6,D14],  Weak → Yes [D4,D5,D10])
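A compact, self-contained sketch of the ID3 loop of slide 18 in Python; run on the fourteen examples of slide 29 it reproduces the tree above (attribute and value names are spelled as in the table):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return sum(-c / n * math.log2(c / n) for c in Counter(labels).values())

def gain(rows, labels, attr):
    n = len(labels)
    rem = 0.0
    for v in set(r[attr] for r in rows):
        sub = [y for r, y in zip(rows, labels) if r[attr] == v]
        rem += len(sub) / n * entropy(sub)
    return entropy(labels) - rem

def id3(rows, labels, attrs):
    if len(set(labels)) == 1:                 # all examples perfectly classified
        return labels[0]
    if not attrs:                             # no tests left: majority class
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: gain(rows, labels, a))   # "best" attribute
    branches = {}
    for v in set(r[best] for r in rows):      # one descendant per value of best
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        branches[v] = id3([rows[i] for i in idx], [labels[i] for i in idx],
                          [a for a in attrs if a != best])
    return (best, branches)

# PlayTennis data from slide 29: (Outlook, Temp, Humidity, Wind) -> label
data = [
    ("Sunny","Hot","High","Weak","No"),       ("Sunny","Hot","High","Strong","No"),
    ("Overcast","Hot","High","Weak","Yes"),   ("Rain","Mild","High","Weak","Yes"),
    ("Rain","Cool","Normal","Weak","Yes"),    ("Rain","Cool","Normal","Strong","No"),
    ("Overcast","Cool","Normal","Strong","Yes"), ("Sunny","Mild","High","Weak","No"),
    ("Sunny","Cool","Normal","Weak","Yes"),   ("Rain","Mild","Normal","Weak","Yes"),
    ("Sunny","Mild","Normal","Strong","Yes"), ("Overcast","Mild","High","Strong","Yes"),
    ("Overcast","Hot","Normal","Weak","Yes"), ("Rain","Mild","High","Strong","No"),
]
rows = [dict(zip(("Outlook", "Temp", "Humidity", "Wind"), d[:4])) for d in data]
labels = [d[4] for d in data]
print(id3(rows, labels, ["Outlook", "Temp", "Humidity", "Wind"]))
# -> ('Outlook', {'Sunny': ('Humidity', ...), 'Overcast': 'Yes', 'Rain': ('Wind', ...)})
#    (branch order may vary)
```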

34 Hypothesis Space Search in ID3
(figure: ID3 searches the space of decision trees, starting from the empty tree and greedily adding one attribute test, e.g. A1, A2, A3, A4, at a time)

35 Hypothesis Space Search in ID3
- Hypothesis space is complete: the target function is surely in there
- Outputs a single hypothesis
- No backtracking on selected attributes (greedy search): local minima (suboptimal splits) are possible
- Statistically-based search choices: robust to noisy data
- Inductive bias (search bias): prefers shorter trees over longer ones, and places high information gain attributes close to the root

36 Inductive Bias in ID3
- H is the power set of instances X: unbiased?
- Not quite: there is a preference for short trees, and for trees with high information gain attributes near the root
- The bias is a preference for some hypotheses, rather than a restriction of the hypothesis space H
- Occam's razor: prefer the shortest (simplest) hypothesis that fits the data

37 Occam's Razor
Why prefer short hypotheses?
Arguments in favor:
- There are fewer short hypotheses than long hypotheses
- A short hypothesis that fits the data is unlikely to be a coincidence
- A long hypothesis that fits the data might be a coincidence
Arguments against:
- There are many ways to define small sets of hypotheses
- E.g., all trees with a prime number of nodes that use attributes beginning with "Z"
- What is so special about small sets based on the size of the hypothesis?

38 Occam's Razor
Hypothesis A: objects do not instantaneously change their color (what is green stays green, what is blue stays blue over time).
Hypothesis B: on 1.1.2000 at 0:00, objects that were grue turned instantaneously bleen, and objects that were bleen turned instantaneously grue.
Both hypotheses fit the same observations; which one counts as "simpler" depends on the vocabulary (green/blue vs. grue/bleen) used to state them.

39 Overfitting
Consider the error of hypothesis h over
- the training data: error_train(h)
- the entire distribution D of the data: error_D(h)
Hypothesis h ∈ H overfits the training data if there is an alternative hypothesis h' ∈ H such that
error_train(h) < error_train(h')  and  error_D(h) > error_D(h')

40 Avoid Overfitting
How can we avoid overfitting?
- Stop growing the tree when a data split is not statistically significant
- Grow the full tree, then post-prune
- Minimum description length (MDL): minimize size(tree) + size(misclassifications(tree))

41 Reduced-Error Pruning
- Split the data into a training set and a validation set
- Do until further pruning is harmful:
- Evaluate the impact on the validation set of pruning each possible node (together with the subtree below it)
- Greedily remove the node whose pruning most improves validation set accuracy
- Produces the smallest version of the most accurate subtree
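A sketch of this procedure in Python, using a hypothetical nested-dict tree representation (each internal node stores its test attribute, its branches, and the majority class of the training examples that reached it; the representation is not prescribed by the slide). Pruning is also accepted when it leaves the validation accuracy unchanged, which yields a smaller tree:

```python
def classify(tree, example):
    """Follow attribute tests until a leaf label is reached."""
    while isinstance(tree, dict):
        value = example.get(tree["attr"])
        tree = tree["branches"].get(value, tree["majority"])
    return tree

def accuracy(tree, examples, labels):
    return sum(classify(tree, x) == y for x, y in zip(examples, labels)) / len(labels)

def internal_nodes(tree, path=()):
    """Yield the branch-value path to every internal node of the tree."""
    if isinstance(tree, dict):
        yield path
        for value, sub in tree["branches"].items():
            yield from internal_nodes(sub, path + (value,))

def prune_at(tree, path):
    """Copy of tree with the node reached via `path` replaced by its majority label."""
    if not path:
        return tree["majority"]
    copy = {"attr": tree["attr"], "majority": tree["majority"],
            "branches": dict(tree["branches"])}
    copy["branches"][path[0]] = prune_at(copy["branches"][path[0]], path[1:])
    return copy

def reduced_error_pruning(tree, val_x, val_y):
    best_acc = accuracy(tree, val_x, val_y)
    while isinstance(tree, dict):
        candidates = [prune_at(tree, p) for p in internal_nodes(tree)]
        best = max(candidates, key=lambda t: accuracy(t, val_x, val_y))
        if accuracy(best, val_x, val_y) < best_acc:   # further pruning is harmful
            break
        tree, best_acc = best, accuracy(best, val_x, val_y)
    return tree
```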

42 Rule Post-Pruning
- Convert the tree to an equivalent set of rules
- Prune each rule independently of the others
- Sort the final rules into the desired sequence for use
- Method used in C4.5

43 Converting a Tree to Rules
Outlook?
- Sunny → Humidity?  (High → No,  Normal → Yes)
- Overcast → Yes
- Rain → Wind?  (Strong → No,  Weak → Yes)
R1: If (Outlook=Sunny) ∧ (Humidity=High) Then PlayTennis=No
R2: If (Outlook=Sunny) ∧ (Humidity=Normal) Then PlayTennis=Yes
R3: If (Outlook=Overcast) Then PlayTennis=Yes
R4: If (Outlook=Rain) ∧ (Wind=Strong) Then PlayTennis=No
R5: If (Outlook=Rain) ∧ (Wind=Weak) Then PlayTennis=Yes

44 Continuous-Valued Attributes
Create a discrete attribute to test the continuous one:
- Temperature = 24.5 °C
- (Temperature > 20.0 °C) ∈ {true, false}
Where to set the threshold?
Temperature:  15 °C  18 °C  19 °C  22 °C  24 °C  27 °C
PlayTennis:   No     No     Yes    Yes    Yes    No
(see the paper by [Fayyad, Irani 1993])
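One common way to pick the threshold is to consider only midpoints between adjacent sorted values whose labels differ and take the one with the highest information gain. A Python sketch, applied to the temperature example above (the exact label sequence in the original table is assumed to be No No Yes Yes Yes No):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return sum(-c / n * math.log2(c / n) for c in Counter(labels).values())

def best_threshold(values, labels):
    """Return the (threshold, gain) pair with the highest information gain,
    trying only midpoints between adjacent sorted values with differing labels."""
    pairs = sorted(zip(values, labels))
    base = entropy([y for _, y in pairs])
    best = None
    for (v1, y1), (v2, y2) in zip(pairs, pairs[1:]):
        if y1 == y2:
            continue
        t = (v1 + v2) / 2
        left = [y for v, y in pairs if v <= t]
        right = [y for v, y in pairs if v > t]
        g = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if best is None or g > best[1]:
            best = (t, g)
    return best

print(best_threshold([15, 18, 19, 22, 24, 27],
                     ["No", "No", "Yes", "Yes", "Yes", "No"]))   # -> (18.5, ~0.46)
```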

45 Attributes with Many Values
- Problem: if an attribute has many values, maximizing InformationGain will select it.
- E.g., using Date = 12.7.1996 as an attribute perfectly splits the data into subsets of size 1.
- Use GainRatio instead of InformationGain as the selection criterion:
GainRatio(S,A) = Gain(S,A) / SplitInformation(S,A)
SplitInformation(S,A) = − Σ_{i=1..c} |S_i|/|S| · log2(|S_i|/|S|)
where S_i is the subset of S for which attribute A has the value v_i
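A small Python sketch of these two formulas; the last line plugs in the Gain and subset sizes of Outlook from the PlayTennis slides:

```python
import math

def split_information(sizes):
    """sizes: number of examples in each subset S_i produced by attribute A."""
    n = sum(sizes)
    return sum(-s / n * math.log2(s / n) for s in sizes if s > 0)

def gain_ratio(gain, sizes):
    return gain / split_information(sizes)

# A Date-like attribute splitting 14 examples into 14 singletons has a large
# SplitInformation, so even a "perfect" gain of 0.940 yields a modest ratio:
print(split_information([1] * 14))      # log2(14) ~ 3.81
print(gain_ratio(0.940, [1] * 14))      # ~0.25
print(gain_ratio(0.247, [5, 4, 5]))     # Outlook on the PlayTennis data, ~0.16
```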

46 Attributes with Cost
Consider:
- Medical diagnosis: a blood test costs 1000 SEK
- Robotics: width_from_one_feet has a cost of 23 seconds
How do we learn a consistent tree with low expected cost?
Replace Gain by:
- Gain²(S,A) / Cost(A)   [Tan, Schlimmer 1990]
- (2^Gain(S,A) − 1) / (Cost(A)+1)^w, with w ∈ [0,1]   [Nunez 1988]
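A quick sketch of the two cost-sensitive criteria (the gain and cost values below are hypothetical, chosen only to show that a cheap attribute with modest gain can outrank an expensive attribute with higher gain):

```python
def tan_schlimmer(gain, cost):
    """Gain^2 / Cost  (Tan, Schlimmer 1990)."""
    return gain ** 2 / cost

def nunez(gain, cost, w=0.5):
    """(2^Gain - 1) / (Cost + 1)^w, w in [0, 1] trading accuracy against cost (Nunez 1988)."""
    return (2 ** gain - 1) / (cost + 1) ** w

print(tan_schlimmer(0.15, cost=1.0), tan_schlimmer(0.25, cost=10.0))  # cheap attribute wins
print(nunez(0.15, cost=1.0), nunez(0.25, cost=10.0))                  # cheap attribute wins
```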

47 Unknown Attribute Values
What if some examples are missing values of attribute A? Use the training example anyway and sort it through the tree:
- If node n tests A, assign the most common value of A among the other examples sorted to node n
- Or assign the most common value of A among the other examples with the same target value
- Or assign probability p_i to each possible value v_i of A and assign a fraction p_i of the example to each descendant in the tree
Classify new examples in the same fashion
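A toy sketch of the three strategies in Python (the node, attribute, and example values are hypothetical):

```python
from collections import Counter

# Hypothetical examples sorted to a node that tests Humidity; one value is missing.
examples = [
    {"Humidity": "High",   "label": "No"},
    {"Humidity": "High",   "label": "No"},
    {"Humidity": "Normal", "label": "Yes"},
    {"Humidity": "Normal", "label": "Yes"},
    {"Humidity": None,     "label": "Yes"},   # value of A = Humidity is unknown
]
known = [e["Humidity"] for e in examples if e["Humidity"] is not None]

# 1) Most common value of A among the examples sorted to this node:
fill_most_common = Counter(known).most_common(1)[0][0]

# 2) Most common value of A among examples with the same target value ("Yes"):
same_target = [e["Humidity"] for e in examples
               if e["Humidity"] is not None and e["label"] == "Yes"]
fill_same_target = Counter(same_target).most_common(1)[0][0]

# 3) Fractional instances: estimate p_i for each value v_i of A and pass a
#    fraction p_i of the example down the corresponding branch.
fractions = {v: c / len(known) for v, c in Counter(known).items()}

print(fill_most_common, fill_same_target, fractions)
# -> High Normal {'High': 0.5, 'Normal': 0.5}
```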

48 Cross-Validation  Estimate the accuracy of a hypothesis induced by a supervised learning algorithm  Predict the accuracy of a hypothesis over future unseen instances  Select the optimal hypothesis from a given set of alternative hypotheses  Pruning decision trees  Model selection  Feature selection  Combining multiple classifiers (boosting)

49 Holdout Method
- Partition the data set D = {(v_1,y_1), …, (v_n,y_n)} into a training set D_t and a validation (holdout) set D_h = D minus D_t
acc_h = (1/|D_h|) Σ_{(v_i,y_i) ∈ D_h} δ(I(D_t, v_i), y_i)
where I(D_t, v_i) is the output, on instance v_i, of the hypothesis induced by learner I trained on data D_t, and δ(i,j) = 1 if i = j and 0 otherwise
Problems: makes insufficient use of the data; training and validation set are correlated
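A minimal Python sketch of the holdout estimate; the learner interface (a function that takes a training list of (x, y) pairs and returns a hypothesis x → y) is an assumption for illustration:

```python
import random
from collections import Counter

def holdout_accuracy(data, learner, frac=0.7, seed=0):
    """Shuffle, split into training and validation parts, train once, and
    return the accuracy on the held-out validation examples."""
    rnd = random.Random(seed)
    shuffled = data[:]
    rnd.shuffle(shuffled)
    cut = int(len(shuffled) * frac)
    train, validation = shuffled[:cut], shuffled[cut:]
    h = learner(train)
    return sum(h(x) == y for x, y in validation) / len(validation)

def majority_learner(train):
    """A trivial learner: always predict the most common class seen in training."""
    c = Counter(y for _, y in train).most_common(1)[0][0]
    return lambda x: c

print(holdout_accuracy([(i, i % 2) for i in range(100)], majority_learner))
```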

50 Cross-Validation
- k-fold cross-validation splits the data set D into k mutually exclusive subsets D_1, D_2, …, D_k
- Train and test the learning algorithm k times; each time it is trained on D minus D_i and tested on D_i
acc_cv = (1/n) Σ_{(v_i,y_i) ∈ D} δ(I(D minus D_i, v_i), y_i)
(figure: with k = 4, each fold D_1 … D_4 is used once as the test set while the remaining folds are used for training)
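A short sketch of the acc_cv formula in Python, with the same assumed learner interface as in the holdout sketch above:

```python
def k_fold_accuracy(data, learner, k=4):
    """Split data (a list of (x, y) pairs) into k disjoint folds; train on all
    folds except D_i, test on D_i, and average over all n examples."""
    folds = [data[i::k] for i in range(k)]            # k roughly equal, disjoint subsets
    correct = 0
    for i, test_fold in enumerate(folds):
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        h = learner(train)                            # hypothesis induced without fold i
        correct += sum(h(x) == y for x, y in test_fold)
    return correct / len(data)

# Usage, e.g. with the majority_learner sketched for the holdout method:
# print(k_fold_accuracy(examples, majority_learner, k=10))
```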

51 Cross-Validation
- Uses all the data for training and testing
- Complete k-fold cross-validation splits a data set of size m in all (m choose m/k) possible ways (choosing m/k instances out of m)
- Leave-n-out cross-validation sets n instances aside for testing and uses the remaining ones for training (leave-one-out is equivalent to m-fold cross-validation)
- In stratified cross-validation, the folds are stratified so that they contain approximately the same proportion of labels as the original data set

52 Bootstrap
- Sample n instances uniformly from the data set with replacement
- The probability that any given instance is not chosen after n samples is (1 − 1/n)^n ≈ e^(−1) ≈ 0.368, so each instance is included with probability ≈ 0.632
- The bootstrap sample is used for training; the remaining instances are used for testing
acc_boot = (1/b) Σ_{i=1..b} (0.632 · acc0_i + 0.368 · acc_s)
where acc0_i is the accuracy on the test data for the i-th bootstrap sample, acc_s is the accuracy estimate on the training set, and b is the number of bootstrap samples
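A sketch of the .632 bootstrap estimate following the formula above, with acc_s taken as the resubstitution accuracy of a hypothesis trained on the full data set (the learner interface is the same assumed one as in the earlier evaluation sketches):

```python
import random

def accuracy(h, examples):
    return sum(h(x) == y for x, y in examples) / len(examples)

def bootstrap_632(data, learner, b=50, seed=0):
    """Average, over b bootstrap samples, of 0.632*acc0_i + 0.368*acc_s, where
    acc0_i is the accuracy on the instances left out of the i-th sample."""
    rnd = random.Random(seed)
    n = len(data)
    acc_s = accuracy(learner(data), data)                 # training-set accuracy
    estimates = []
    for _ in range(b):
        sample = [rnd.choice(data) for _ in range(n)]     # sample with replacement
        held_out = [ex for ex in data if ex not in sample]  # ~36.8% of data on average
        if held_out:
            acc0 = accuracy(learner(sample), held_out)
            estimates.append(0.632 * acc0 + 0.368 * acc_s)
    return sum(estimates) / len(estimates)
```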

53 Wrapper Model
(figure: input features → feature subset search → feature subset evaluation, performed by running the induction algorithm → selected feature subset)

54 Wrapper Model
- Evaluate the accuracy of the inducer for a given subset of features by means of n-fold cross-validation
- The training data is split into n folds and the induction algorithm is run n times; the accuracy results are averaged to produce the estimated accuracy
- Forward selection: starts with the empty set of features and greedily adds the feature that improves the estimated accuracy the most
- Backward elimination: starts with the set of all features and greedily removes the worst feature

55 Bagging
- For each trial t = 1, 2, …, T create a bootstrap sample of size N
- Generate a classifier C_t from the bootstrap sample
- The final classifier C* assigns to an instance the class that receives the majority of votes among C_1, …, C_T
(figure: T bootstrap training sets, each used to train a classifier C_t; a new instance is classified by the majority vote of the C_t)

56 Bagging
- Bagging requires "unstable" classifiers, such as decision trees or neural networks.
"The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy." (Breiman 1996)
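A compact Python sketch of the bagging procedure described above; the learner is assumed to be a function that maps a training list of (x, y) pairs to a hypothesis x → class:

```python
import random
from collections import Counter

def bagging(data, learner, T=25, seed=0):
    """Train T classifiers on bootstrap samples of size N = len(data) and
    combine them into C* by majority vote."""
    rnd = random.Random(seed)
    n = len(data)
    classifiers = [learner([rnd.choice(data) for _ in range(n)])   # bootstrap sample
                   for _ in range(T)]

    def c_star(x):
        votes = Counter(c(x) for c in classifiers)
        return votes.most_common(1)[0][0]                          # majority vote

    return c_star
```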

57 Performance measurement
- How do we know that h ≈ f?
1. Use theorems of computational/statistical learning theory
2. Try h on a new test set of examples (drawn from the same distribution over the example space as the training set)
Learning curve = % correct on the test set as a function of training set size

