Machine Learning Lecture 2: Decision Tree Learning.

Classification: given instances described by attributes (e.g., A1, A2), learn to predict a discrete target value (e.g., instance x1 -> TRUE, x2 -> FALSE).

Decision Trees Decision trees are powerful and popular tools for classification and prediction. Much of their appeal comes from the fact that, in contrast to neural networks, decision trees represent explicit rules. Such rules can readily be expressed so that humans can understand them, or even be used directly in a database access language such as SQL so that records falling into a particular category can be retrieved.

Decision Tree Representation A decision tree is an arrangement of tests that provides an appropriate classification at every step in an analysis. "In general, decision trees represent a disjunction of conjunctions of constraints on the attribute-values of instances. Each path from the tree root to a leaf corresponds to a conjunction of attribute tests, and the tree itself to a disjunction of these conjunctions" (Mitchell, 1997, p.53). More specifically, decision trees classify instances by sorting them down the tree from the root node to some leaf node, which provides the classification of the instance. Each node in the tree specifies a test of some attribute of the instance, and each branch descending from that node corresponds to one of the possible values for this attribute.

Decision Tree Representation An instance is classified by starting at the root node of the decision tree, testing the attribute specified by this node, and then moving down the tree branch corresponding to the instance's value for that attribute. This process is repeated at the node reached on that branch, and so on, until a leaf node is reached. Viewed as a diagram, each node is associated with a set of possible answers; each non-leaf node is associated with a test that splits its set of possible answers into subsets corresponding to the different test results; and each branch carries a particular test result's subset to another node.
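As a concrete illustration (not part of the original slides), this sorting-down procedure can be sketched in a few lines of Python, with the tree stored as a nested dictionary of the form {attribute: {value: subtree-or-leaf}}; the tree literal below hard-codes the PlayTennis tree introduced later in the lecture, and the function name classify is my own.

```python
# A minimal sketch of classifying an instance by sorting it down a decision tree.
# The tree is a nested dict {attribute: {value: subtree-or-leaf}}; a leaf is a plain label.
play_tennis_tree = {
    "Outlook": {
        "Sunny":    {"Humidity": {"High": "No",   "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain":     {"Wind":     {"Strong": "No", "Weak":   "Yes"}},
    }
}

def classify(tree, instance):
    """Walk from the root to a leaf, following the branch for the instance's attribute value."""
    while isinstance(tree, dict):
        attribute = next(iter(tree))                     # the attribute tested at this node
        tree = tree[attribute][instance[attribute]]      # descend along the matching branch
    return tree                                          # a leaf: the predicted class

print(classify(play_tennis_tree, {"Outlook": "Sunny", "Humidity": "High", "Wind": "Weak"}))  # -> "No"
```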

Decision Tree Representation

Play Tennis Dataset

Day   Outlook   Temp.  Humidity  Wind    PlayTennis
D1    Sunny     Hot    High      Weak    No
D2    Sunny     Hot    High      Strong  No
D3    Overcast  Hot    High      Weak    Yes
D4    Rain      Mild   High      Weak    Yes
D5    Rain      Cool   Normal    Weak    Yes
D6    Rain      Cool   Normal    Strong  No
D7    Overcast  Cool   Normal    Strong  Yes
D8    Sunny     Mild   High      Weak    No
D9    Sunny     Cool   Normal    Weak    Yes
D10   Rain      Mild   Normal    Weak    Yes
D11   Sunny     Mild   Normal    Strong  Yes
D12   Overcast  Mild   High      Strong  Yes
D13   Overcast  Hot    Normal    Weak    Yes
D14   Rain      Mild   High      Strong  No

Play Tennis Dataset This data set has 4 discrete input features: Outlook ∈ {Sunny, Overcast, Rain}, Temperature ∈ {Hot, Mild, Cool}, Humidity ∈ {High, Normal}, Wind ∈ {Strong, Weak}, and 2 classes: PlayTennis ∈ {Yes, No}.

Decision Tree for PlayTennis

Outlook
  Sunny -> Humidity
    High -> No
    Normal -> Yes
  Overcast -> Yes
  Rain -> Wind
    Strong -> No
    Weak -> Yes

Decision Tree for PlayTennis In this tree, each internal node (e.g., Outlook, Humidity) tests an attribute, each branch (e.g., Sunny, High) corresponds to an attribute value, and each leaf node (e.g., No, Yes) assigns a classification.

Decision Tree for PlayTennis To classify the new instance (Outlook=Sunny, Temperature=Hot, Humidity=High, Wind=Weak, PlayTennis=?), sort it down the tree: Outlook=Sunny leads to the Humidity test, and Humidity=High leads to the leaf No, so PlayTennis=No.

Decision Tree for the Conjunction Outlook=Sunny ∧ Wind=Weak

Outlook
  Sunny -> Wind
    Strong -> No
    Weak -> Yes
  Overcast -> No
  Rain -> No

Decision Tree for the Disjunction Outlook=Sunny ∨ Wind=Weak

Outlook
  Sunny -> Yes
  Overcast -> Wind
    Strong -> No
    Weak -> Yes
  Rain -> Wind
    Strong -> No
    Weak -> Yes

Decision Tree for Outlook=Sunny XOR Wind=Weak

Outlook
  Sunny -> Wind
    Strong -> Yes
    Weak -> No
  Overcast -> Wind
    Strong -> No
    Weak -> Yes
  Rain -> Wind
    Strong -> No
    Weak -> Yes

Decision Tree Decision trees represent disjunctions of conjunctions. The PlayTennis tree above, for example, corresponds to:

(Outlook=Sunny ∧ Humidity=Normal) ∨ (Outlook=Overcast) ∨ (Outlook=Rain ∧ Wind=Weak)

When to Consider Decision Trees Decision tree learning is generally best suited to problems with the following characteristics:
- Instances are represented by attribute-value pairs
- The target function is discrete-valued
- A disjunctive hypothesis may be required
- The training data may be noisy
- The training data may contain missing attribute values
Typical examples are medical diagnosis, credit risk analysis, and object classification for a robot manipulator.

Top-Down Induction of Decision Trees (ID3)

Which Attribute is "best"? Consider a set S of 64 examples, [29+, 35-], and two candidate attribute tests: A1 splits S into [21+, 5-] (True) and [8+, 30-] (False), while A2 splits S into [18+, 33-] (True) and [11+, 2-] (False).

Entropy
- S is a sample of training examples
- p+ is the proportion of positive examples in S
- p- is the proportion of negative examples in S
- Entropy measures the impurity of S: Entropy(S) = -p+ log2 p+ - p- log2 p-
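To make the definition concrete, here is a small illustrative Python helper (not from the slides) that computes entropy from class counts; it reproduces the values used in the examples below.

```python
import math

def entropy(pos, neg):
    """Entropy of a sample with `pos` positive and `neg` negative examples."""
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        if count > 0:                      # 0 * log2(0) is treated as 0
            p = count / total
            result -= p * math.log2(p)
    return result

print(round(entropy(29, 35), 2))   # 0.99  (the [29+, 35-] example below)
print(round(entropy(9, 5), 2))     # 0.94  (the PlayTennis root, [9+, 5-])
```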

Conditional Entropy The conditional entropy (or equivocation) H(X | Y) quantifies the amount of information needed to describe the outcome of a random variable X given that the value of another random variable Y is known.

Mutual Information The mutual information I(X; Y) of two random variables is a measure of their mutual dependence, i.e., the amount of information shared between the two random variables.
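For reference, the standard definitions (not spelled out on these slides) and their relationship can be written as:

```latex
H(X \mid Y) = \sum_{y} p(y)\, H(X \mid Y = y)
            = -\sum_{y} p(y) \sum_{x} p(x \mid y)\, \log_2 p(x \mid y)

I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X)
```

In decision tree learning, the information gain Gain(S, A) defined next is exactly this mutual information between the class label and the attribute A, estimated on the sample S.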

Information Gain Gain(S, Ai) is the expected reduction in entropy (impurity) due to splitting region S on a discrete attribute Ai; it equals the mutual information I(S; Ai) = H(S) - H(S | Ai):

Gain(S, Ai) = Entropy(S) - Σ_{v ∈ values(Ai)} |S_v|/|S| * Entropy(S_v)

where |S| is the number of training examples in region S and |S_v| is the number of training examples in S with Ai = v.

Example: S = [29+, 35-], split either by A1 (True: [21+, 5-], False: [8+, 30-]) or by A2 (True: [18+, 33-], False: [11+, 2-]).

Entropy(S) = Entropy([29+, 35-]) = -29/64 log2 29/64 - 35/64 log2 35/64 = 0.99

Information Gain For the A1 split: Entropy([21+, 5-]) = 0.71 and Entropy([8+, 30-]) = 0.74, so

Gain(S, A1) = Entropy(S) - 26/64 * Entropy([21+, 5-]) - 38/64 * Entropy([8+, 30-]) = 0.99 - 26/64 * 0.71 - 38/64 * 0.74 = 0.27

For the A2 split: Entropy([18+, 33-]) = 0.94 and Entropy([11+, 2-]) = 0.62, so

Gain(S, A2) = Entropy(S) - 51/64 * Entropy([18+, 33-]) - 13/64 * Entropy([11+, 2-]) = 0.99 - 51/64 * 0.94 - 13/64 * 0.62 = 0.12

The best attribute is the one with maximum information gain, A* = arg max_i Gain(S, Ai); here A1 is preferred over A2.
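The same numbers can be reproduced with a short, illustrative Python sketch (the split counts are taken from the slide above; the helper names are my own):

```python
import math

def entropy(pos, neg):
    """Entropy of a sample with `pos` positive and `neg` negative examples."""
    total = pos + neg
    return -sum((c / total) * math.log2(c / total) for c in (pos, neg) if c > 0)

def gain(parent, children):
    """Information gain of splitting `parent` (pos, neg) into the given child (pos, neg) counts."""
    n = sum(parent)
    return entropy(*parent) - sum((p + q) / n * entropy(p, q) for p, q in children)

S = (29, 35)
print(round(gain(S, [(21, 5), (8, 30)]), 2))   # Gain(S, A1) ~ 0.27
print(round(gain(S, [(18, 33), (11, 2)]), 2))  # Gain(S, A2) ~ 0.12
```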

Training Examples

Day   Outlook   Temp.  Humidity  Wind    PlayTennis
D1    Sunny     Hot    High      Weak    No
D2    Sunny     Hot    High      Strong  No
D3    Overcast  Hot    High      Weak    Yes
D4    Rain      Mild   High      Weak    Yes
D5    Rain      Cool   Normal    Weak    Yes
D6    Rain      Cool   Normal    Strong  No
D7    Overcast  Cool   Normal    Strong  Yes
D8    Sunny     Mild   High      Weak    No
D9    Sunny     Cool   Normal    Weak    Yes
D10   Rain      Mild   Normal    Weak    Yes
D11   Sunny     Mild   Normal    Strong  Yes
D12   Overcast  Mild   High      Strong  Yes
D13   Overcast  Hot    Normal    Weak    Yes
D14   Rain      Mild   High      Strong  No

Selecting the Next Attribute For the root node, S = [9+, 5-] and Entropy(S) = 0.940.

Splitting on Humidity: High -> [3+, 4-] (E = 0.985), Normal -> [6+, 1-] (E = 0.592)
Gain(S, Humidity) = 0.940 - (7/14)*0.985 - (7/14)*0.592 = 0.151

Splitting on Wind: Weak -> [6+, 2-] (E = 0.811), Strong -> [3+, 3-] (E = 1.0)
Gain(S, Wind) = 0.940 - (8/14)*0.811 - (6/14)*1.0 = 0.048

Selecting the Next Attribute Splitting on Outlook: Sunny -> [2+, 3-] (E = 0.971), Overcast -> [4+, 0-] (E = 0.0), Rain -> [3+, 2-] (E = 0.971)
Gain(S, Outlook) = 0.940 - (5/14)*0.971 - (4/14)*0.0 - (5/14)*0.971 = 0.247

Outlook has the highest gain (0.247, vs. 0.151 for Humidity and 0.048 for Wind), so it is chosen as the root attribute.

ID3 Algorithm After choosing Outlook for the root, the examples [D1, D2, ..., D14] ([9+, 5-]) are split:
  Sunny -> Ssunny = [D1, D2, D8, D9, D11] ([2+, 3-]) -> ?
  Overcast -> [D3, D7, D12, D13] ([4+, 0-]) -> Yes
  Rain -> [D4, D5, D6, D10, D14] ([3+, 2-]) -> ?
The Sunny branch is grown by choosing the attribute with the highest gain on Ssunny:
  Gain(Ssunny, Humidity) = 0.970 - (3/5)*0.0 - (2/5)*0.0 = 0.970
  Gain(Ssunny, Temp.) = 0.970 - (2/5)*0.0 - (2/5)*1.0 - (1/5)*0.0 = 0.570
  Gain(Ssunny, Wind) = 0.970 - (2/5)*1.0 - (3/5)*0.918 = 0.019
so Humidity is selected.

ID3 Algorithm The resulting decision tree, with the training examples sorted to each leaf:

Outlook
  Sunny -> Humidity
    High -> No [D1, D2, D8]
    Normal -> Yes [D9, D11]
  Overcast -> Yes [D3, D7, D12, D13]
  Rain -> Wind
    Strong -> No [D6, D14]
    Weak -> Yes [D4, D5, D10]
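Putting the pieces together, here is a compact, illustrative Python sketch of ID3 for discrete attributes (no pruning; all names are my own, not from the slides). Applied to the 14 PlayTennis examples, represented as dictionaries of attribute values, it should reproduce the tree above.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attribute, target):
    """Expected reduction in entropy from splitting `examples` on `attribute`."""
    base = entropy([e[target] for e in examples])
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e for e in examples if e[attribute] == value]
        remainder += len(subset) / len(examples) * entropy([e[target] for e in subset])
    return base - remainder

def id3(examples, attributes, target):
    """Grow a decision tree as a nested dict {attribute: {value: subtree-or-leaf}}."""
    labels = [e[target] for e in examples]
    if len(set(labels)) == 1:                          # pure node -> leaf
        return labels[0]
    if not attributes:                                 # no attributes left -> majority leaf
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: information_gain(examples, a, target))
    tree = {best: {}}
    for value in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == value]
        remaining = [a for a in attributes if a != best]
        tree[best][value] = id3(subset, remaining, target)
    return tree
```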

Hypothesis Space Search (ID3) [Figure: ID3 searches the space of decision trees by greedily growing a tree, at each step adding the attribute test (A1, A2, A3, A4, ...) that best separates the positive (+) and negative (-) training examples.]

Issues in Decision Tree Learning
- Avoiding overfitting the data
- Incorporating continuous-valued attributes
- Alternative measures for selecting attributes
- Handling training examples with missing attribute values
- Handling attributes with different costs

Overfitting Consider the error of hypothesis h over
- the training data: error_train(h)
- the entire distribution D of data: error_D(h)
A hypothesis h ∈ H overfits the training data if there is an alternative hypothesis h' ∈ H such that error_train(h) < error_train(h') and error_D(h) > error_D(h').

Overfitting in Decision Trees [Figure: error vs. number of nodes; the training-set error keeps decreasing as the tree grows, while the validation-set error reaches a minimum at the optimal number of nodes and then rises again.]

Overfitting in Function Approximation

Generalization
- Generalization is the ability of a learning algorithm to go from the training examples to previously unseen examples and predict their target values correctly.
- The training error is not a reliable indicator of how well the learner will predict future examples.
- Therefore, partition the data set into a training set and a validation set.
- Train the learning algorithm on the training data only, and use the validation set to estimate the accuracy/error of the learner on future unseen instances.

Cross-Validation Cross-validation is used to:
- Estimate the accuracy of a hypothesis induced by a supervised learning algorithm
- Predict the accuracy of a hypothesis over future unseen instances
- Select the optimal hypothesis from a given set of alternative hypotheses
- Prune decision trees
- Perform model selection
- Perform feature selection
- Combine multiple classifiers (boosting)

Cross-Validation
- k-fold cross-validation splits the data set D into k mutually exclusive subsets D1, D2, ..., Dk (e.g., D1 D2 D3 D4 for k = 4).
- The learning algorithm is trained and tested k times; in fold i it is trained on D \ Di and tested on Di.
- The cross-validation accuracy is the fraction of examples classified correctly when held out:
  acc_cv = 1/n * Σ_{(v_i, y_i) ∈ D} δ(I(D \ D_i, v_i), y_i)
  where n = |D|, I(D \ D_i, v_i) is the label predicted for v_i by the classifier induced from D \ D_i, and δ(a, b) = 1 if a = b and 0 otherwise.
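A minimal Python sketch of this procedure (illustrative only; train and predict stand in for any learner, and a trivial majority-class baseline is plugged in as an example):

```python
from collections import Counter

def k_fold_accuracy(data, k, train, predict):
    """Estimate accuracy by k-fold cross-validation.

    `data` is a list of (instance, label) pairs; `train(pairs)` returns a model,
    `predict(model, instance)` returns a label.
    """
    folds = [data[i::k] for i in range(k)]          # k roughly equal, disjoint subsets
    correct = 0
    for i in range(k):
        held_out = folds[i]
        training = [pair for j, fold in enumerate(folds) if j != i for pair in fold]
        model = train(training)
        correct += sum(predict(model, x) == y for x, y in held_out)
    return correct / len(data)

def train(pairs):                                   # majority-class "learner" (placeholder for ID3)
    return Counter(y for _, y in pairs).most_common(1)[0][0]

def predict(model, x):                              # the model is just the majority label
    return model

data = [({"f": i}, "Yes" if i % 3 else "No") for i in range(12)]
print(k_fold_accuracy(data, k=4, train=train, predict=predict))   # 0.666...
```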

Cross-Validation
- Cross-validation uses all the data for both training and testing.
- Complete k-fold cross-validation splits a data set of size m in all (m choose m/k) possible ways (choosing m/k instances out of m).
- Leave-n-out cross-validation sets n instances aside for testing and uses the remaining ones for training (leave-one-out is equivalent to m-fold cross-validation, one fold per example).
- In stratified cross-validation the folds are stratified so that they contain approximately the same proportion of labels as the original data set.

Avoiding Overfitting How can we avoid overfitting?
- Stop growing the tree when a data split is not statistically significant
- Grow the full tree, then post-prune
- Minimum description length (MDL): minimize size(tree) + size(misclassifications(tree))

1. Reduced-Error Pruning Split the data into a training set and a validation set. Then, until further pruning is harmful:
1. Evaluate the impact on the validation set of pruning each possible node (together with the subtree below it)
2. Greedily remove the node whose removal most improves validation set accuracy
This produces the smallest version of the most accurate subtree.
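A rough Python sketch of reduced-error pruning over a nested-dict tree (a simplification, not the slides' exact procedure: the pruned node's leaf label is taken as the majority class of the validation examples that reach it):

```python
import copy
from collections import Counter

def classify(tree, instance, default=None):
    """Sort an instance down a nested-dict tree {attr: {value: subtree-or-leaf}}."""
    while isinstance(tree, dict):
        attr = next(iter(tree))
        branches = tree[attr]
        if instance[attr] not in branches:
            return default
        tree = branches[instance[attr]]
    return tree

def accuracy(tree, examples, target):
    return sum(classify(tree, e) == e[target] for e in examples) / len(examples)

def internal_paths(tree, path=()):
    """Yield the path (sequence of (attr, value) steps) to every internal node."""
    if isinstance(tree, dict):
        yield path
        attr = next(iter(tree))
        for value, subtree in tree[attr].items():
            yield from internal_paths(subtree, path + ((attr, value),))

def prune_at(tree, path, label):
    """Return a copy of `tree` with the node at `path` replaced by the leaf `label`."""
    if not path:
        return label
    pruned = copy.deepcopy(tree)
    node = pruned
    for attr, value in path[:-1]:
        node = node[attr][value]
    attr, value = path[-1]
    node[attr][value] = label
    return pruned

def reduced_error_prune(tree, validation, target):
    """Greedily prune nodes while pruning improves accuracy on the validation set."""
    while True:
        best_tree, best_acc = tree, accuracy(tree, validation, target)
        for path in internal_paths(tree):
            reached = [e for e in validation if all(e[a] == v for a, v in path)]
            if not reached:
                continue
            label = Counter(e[target] for e in reached).most_common(1)[0][0]
            candidate = prune_at(tree, path, label)
            acc = accuracy(candidate, validation, target)
            if acc > best_acc:
                best_tree, best_acc = candidate, acc
        if best_tree is tree:          # no pruning step improved validation accuracy
            return tree
        tree = best_tree
```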

2. Rule Post-Pruning The method used in C4.5.

Converting a Tree to Rules The PlayTennis tree above yields one rule per root-to-leaf path:
R1: If (Outlook=Sunny) ∧ (Humidity=High) Then PlayTennis=No
R2: If (Outlook=Sunny) ∧ (Humidity=Normal) Then PlayTennis=Yes
R3: If (Outlook=Overcast) Then PlayTennis=Yes
R4: If (Outlook=Rain) ∧ (Wind=Strong) Then PlayTennis=No
R5: If (Outlook=Rain) ∧ (Wind=Weak) Then PlayTennis=Yes
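A tree stored as a nested dict can be flattened into such rules with a short recursive helper (illustrative sketch; the tree literal repeats the PlayTennis tree above):

```python
def tree_to_rules(tree, conditions=()):
    """Return one (conditions, class) rule per root-to-leaf path of a nested-dict tree."""
    if not isinstance(tree, dict):                 # leaf: emit the accumulated path as a rule
        return [(conditions, tree)]
    attribute = next(iter(tree))
    rules = []
    for value, subtree in tree[attribute].items():
        rules += tree_to_rules(subtree, conditions + ((attribute, value),))
    return rules

play_tennis_tree = {
    "Outlook": {
        "Sunny":    {"Humidity": {"High": "No",   "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain":     {"Wind":     {"Strong": "No", "Weak":   "Yes"}},
    }
}

for i, (conds, label) in enumerate(tree_to_rules(play_tennis_tree), start=1):
    body = " AND ".join(f"{a}={v}" for a, v in conds) or "TRUE"
    print(f"R{i}: If {body} Then PlayTennis={label}")
```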

Incorporating Continuous-Valued Attributes Create a discrete attribute to test a continuous one: sort the examples by the continuous attribute, identify adjacent examples that differ in their target value, and evaluate each candidate threshold.

Temperature: 40°C  48°C  60°C  72°C  80°C  90°C
PlayTennis:  No    No    Yes   Yes   Yes   No

The new attribute (Temperature > threshold) takes values {true, false}. Where to set the threshold? The candidates here are Temperature > 54 and Temperature > 85, the midpoints between adjacent examples with different labels.
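A small illustrative sketch of the threshold search (it evaluates, by information gain, each midpoint between adjacent examples whose labels differ; the helper names are my own):

```python
import math

def entropy(labels):
    counts = {label: labels.count(label) for label in set(labels)}
    return -sum(c / len(labels) * math.log2(c / len(labels)) for c in counts.values())

def best_threshold(values, labels):
    """Return (threshold, gain) maximizing information gain of the test value > threshold."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best = (None, -1.0)
    for (v1, l1), (v2, l2) in zip(pairs, pairs[1:]):
        if l1 == l2:
            continue                               # only label changes yield candidate thresholds
        threshold = (v1 + v2) / 2
        above = [l for v, l in pairs if v > threshold]
        below = [l for v, l in pairs if v <= threshold]
        gain = (base - len(above) / len(pairs) * entropy(above)
                     - len(below) / len(pairs) * entropy(below))
        if gain > best[1]:
            best = (threshold, gain)
    return best

temps  = [40, 48, 60, 72, 80, 90]
labels = ["No", "No", "Yes", "Yes", "Yes", "No"]
print(best_threshold(temps, labels))   # picks 54.0, the better of the two candidates (54, 85)
```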

Iris Flower Classification

Attributes with Many Values
- Problem: if an attribute has many values, maximizing InformationGain will tend to select it.
- E.g., using Date = 12.7.1996 as an attribute perfectly splits the training data into subsets of size 1.
- Solution: use GainRatio instead of information gain as the selection criterion:
  GainRatio(S, A) = Gain(S, A) / SplitInformation(S, A)
  SplitInformation(S, A) = -Σ_{i=1..c} |Si|/|S| log2 |Si|/|S|
  where Si is the subset of S for which attribute A has the value vi.
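As a quick illustration (numbers chosen to match the PlayTennis example; the helper names are my own), split information strongly penalizes a Date-like attribute that shatters the data into singletons:

```python
import math

def split_information(subset_sizes):
    """SplitInformation(S, A) for a split of S into subsets of the given sizes."""
    total = sum(subset_sizes)
    return -sum(s / total * math.log2(s / total) for s in subset_sizes if s > 0)

def gain_ratio(gain, subset_sizes):
    return gain / split_information(subset_sizes)

# A Date-like attribute splitting 14 examples into 14 singletons has a large split information,
# so even the maximal gain of 0.940 (the full root entropy) is heavily penalized:
print(round(split_information([1] * 14), 2))    # 3.81
print(round(gain_ratio(0.940, [1] * 14), 3))    # 0.247
# A binary split such as Humidity (7 / 7) is penalized far less:
print(round(split_information([7, 7]), 2))      # 1.0
print(round(gain_ratio(0.151, [7, 7]), 3))      # 0.151
```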

Handling Attributes with Different Costs Consider:
- Medical diagnosis: a blood test costs 1000 SEK
- Robotics: width_from_one_feet has a cost of 23 seconds
How can we learn a consistent tree with low expected cost? Replace Gain by a cost-sensitive criterion such as
  Gain^2(S, A) / Cost(A)
or
  (2^Gain(S, A) - 1) / (Cost(A) + 1)^w,  where w ∈ [0, 1].

Handling Examples with Missing Attribute Values What if some examples are missing values of an attribute A? Estimate the missing value from the other examples whose value is known:
- If node n tests A, assign the most common value of A among the other training examples sorted to node n
- Or assign the most common value of A among the other examples at node n that have the same target value
- Or assign a probability pi to each possible value vi of A, and pass a fraction pi of the example down each corresponding branch of the tree
Classify new examples with missing values in the same fashion.

References T. Mitchell, "Decision Tree Learning", in T. Mitchell, Machine Learning, The McGraw-Hill Companies, Inc., 1997, pp. 52-78.