1 Classification with Decision Trees Instructor: Qiang Yang Hong Kong University of Science and Technology Thanks: Eibe Frank and Jiawei Han

2 Continuous Classes
Sometimes classes are continuous, in that they come from a continuous domain, e.g., temperature or stock price. Regression is well suited to this case:
- Linear and multiple regression
- Non-linear regression
We shall focus on categorical classes, e.g., colors or Yes/No binary decisions. We will deal with continuous class values later, in CART.

3 DECISION TREE [Quinlan93]
- An internal node represents a test on an attribute.
- A branch represents an outcome of the test, e.g., Color = red.
- A leaf node represents a class label or class label distribution.
- At each node, one attribute is chosen to split the training examples into classes that are as distinct as possible.
- A new case is classified by following a matching path to a leaf node.

Training Set

5 Example
[Decision tree for the weather data: Outlook = sunny leads to a test on Humidity (high: N, normal: P); Outlook = overcast leads to P; Outlook = rain leads to a test on Windy (true: N, false: P)]

6 Building Decision Tree [Q93]
- Top-down tree construction: at start, all training examples are at the root; partition the examples recursively by choosing one attribute at a time.
- Bottom-up tree pruning: remove subtrees or branches, in a bottom-up manner, to improve the estimated accuracy on new cases.

7 Choosing the Splitting Attribute
At each node, the available attributes are evaluated on the basis of how well they separate the classes of the training examples. A goodness function is used for this purpose. Typical goodness functions:
- information gain (ID3/C4.5)
- information gain ratio
- gini index

8 Which attribute to select?

9 A criterion for attribute selection
- Which is the best attribute? The one that results in the smallest tree.
- Heuristic: choose the attribute that produces the "purest" nodes.
- Popular impurity criterion: information gain. Information gain increases with the average purity of the subsets that an attribute produces.
- Strategy: choose the attribute that yields the greatest information gain.

10 Computing information
Information is measured in bits. Given a probability distribution, the information required to predict an event is the distribution's entropy. Entropy gives the information required in bits (this can involve fractions of bits!). Formula for computing the entropy:
entropy(p1, p2, ..., pn) = -p1 log2(p1) - p2 log2(p2) - ... - pn log2(pn)
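As a quick illustration (a minimal sketch, not part of the original slides), the entropy computation in Python:

```python
import math

def entropy(counts):
    """Entropy in bits of a class distribution given as raw counts."""
    total = sum(counts)
    ent = 0.0
    for c in counts:
        if c > 0:              # the term 0 * log2(0) is taken to be zero
            p = c / total
            ent -= p * math.log2(p)
    return ent

print(entropy([9, 5]))  # whole weather data set: 9 yes, 5 no -> about 0.940 bits
```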

11 Example: attribute "Outlook"
"Outlook" = "Sunny": info([2,3]) = entropy(2/5, 3/5) = 0.971 bits
"Outlook" = "Overcast": info([4,0]) = entropy(1, 0) = 0 bits
"Outlook" = "Rainy": info([3,2]) = entropy(3/5, 2/5) = 0.971 bits
Expected information for the attribute: info([2,3], [4,0], [3,2]) = (5/14) * 0.971 + (4/14) * 0 + (5/14) * 0.971 = 0.693 bits
Note: log2(0) is normally not defined, but the term 0 * log2(0) is taken to be zero.

12 Computing the information gain
Information gain = information before splitting - information after splitting:
gain("Outlook") = info([9,5]) - info([2,3], [4,0], [3,2]) = 0.940 - 0.693 = 0.247 bits
Information gain for the attributes from the weather data:
gain("Outlook") = 0.247 bits
gain("Temperature") = 0.029 bits
gain("Humidity") = 0.152 bits
gain("Windy") = 0.048 bits
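A sketch of the gain computation (illustrative only; the helper names are my own):

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def information_gain(parent_counts, children_counts):
    """Info before splitting minus the size-weighted info after splitting."""
    n = sum(parent_counts)
    after = sum(sum(child) / n * entropy(child) for child in children_counts)
    return entropy(parent_counts) - after

# Outlook splits [9 yes, 5 no] into sunny [2,3], overcast [4,0], rainy [3,2]
print(information_gain([9, 5], [[2, 3], [4, 0], [3, 2]]))  # about 0.247
```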

13 Continuing to split
Within the "Outlook" = "Sunny" branch:
gain("Temperature") = 0.571 bits
gain("Humidity") = 0.971 bits
gain("Windy") = 0.020 bits
so "Humidity" is chosen as the next splitting attribute.

14 The final decision tree
Note: not all leaves need to be pure; sometimes identical instances have different classes. Splitting stops when the data cannot be split any further.

15 Highly-branching attributes
Problematic: attributes with a large number of values (extreme case: an ID code).
Subsets are more likely to be pure if there is a large number of values, so information gain is biased towards choosing attributes with many values. This may result in overfitting (selection of an attribute that is non-optimal for prediction).
Another problem: fragmentation.

16 The gain ratio
Gain ratio: a modification of the information gain that reduces its bias towards high-branch attributes. The gain ratio takes the number and size of branches into account when choosing an attribute: it corrects the information gain by the intrinsic information of the split.
Intrinsic information (also called split information): the entropy of the distribution of instances into branches, i.e., how much information we need to tell which branch an instance belongs to.

17 Gain Ratio
The intrinsic information is:
- large when the data is spread evenly across the branches,
- small when all the data belongs to one branch.
The gain ratio (Quinlan'86) normalizes the information gain by this quantity:
gain_ratio(A) = gain(A) / intrinsic_info(A), where intrinsic_info(A) = -sum over branches i of (|Si|/|S|) log2(|Si|/|S|)

18 Computing the gain ratio
Example: intrinsic information for ID code:
intrinsic_info([1,1,...,1]) = 14 * (-1/14 * log2(1/14)) = 3.807 bits
The importance of an attribute decreases as its intrinsic information gets larger.
Example of gain ratio:
gain_ratio("ID code") = gain("ID code") / intrinsic_info("ID code") = 0.940 / 3.807 = 0.247
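A sketch of the gain ratio using the same helpers (again illustrative, not C4.5's actual code):

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def gain_ratio(parent_counts, children_counts):
    n = sum(parent_counts)
    gain = entropy(parent_counts) - sum(
        sum(child) / n * entropy(child) for child in children_counts)
    intrinsic = entropy([sum(child) for child in children_counts])  # split info
    return gain / intrinsic

# "ID code": 14 pure branches of one instance each
id_children = [[1, 0]] * 9 + [[0, 1]] * 5
print(gain_ratio([9, 5], id_children))  # 0.940 / 3.807 = about 0.247
```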

19 Gain ratios for weather data

Outlook: Info: 0.693; Gain: 0.940 - 0.693 = 0.247; Split info: info([5,4,5]) = 1.577; Gain ratio: 0.247/1.577 = 0.157
Temperature: Info: 0.911; Gain: 0.940 - 0.911 = 0.029; Split info: info([4,6,4]) = 1.557; Gain ratio: 0.029/1.557 = 0.019
Humidity: Info: 0.788; Gain: 0.940 - 0.788 = 0.152; Split info: info([7,7]) = 1.000; Gain ratio: 0.152/1.000 = 0.152
Windy: Info: 0.892; Gain: 0.940 - 0.892 = 0.048; Split info: info([8,6]) = 0.985; Gain ratio: 0.048/0.985 = 0.049

20 More on the gain ratio
"Outlook" still comes out top. However, "ID code" has an even greater gain ratio; the standard fix is an ad hoc test to prevent splitting on that type of attribute.
Problem with the gain ratio: it may overcompensate, choosing an attribute just because its intrinsic information is very low. Standard fix: first consider only attributes with greater-than-average information gain, then compare them on gain ratio.

21 Gini Index
If a data set T contains examples from n classes, the gini index gini(T) is defined as
gini(T) = 1 - sum over j of pj^2
where pj is the relative frequency of class j in T. gini(T) is minimized (zero) when all examples in T belong to one class, i.e., when the class distribution is maximally skewed.
After splitting T into two subsets T1 and T2 with sizes N1 and N2, the gini index of the split data is defined as
gini_split(T) = (N1/N) gini(T1) + (N2/N) gini(T2), where N = N1 + N2.
The attribute providing the smallest gini_split(T) is chosen to split the node.
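A minimal sketch of these two formulas (illustrative):

```python
def gini(counts):
    """1 minus the sum of squared class proportions."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def gini_split(children_counts):
    """Size-weighted gini over the subsets produced by a split."""
    n = sum(sum(child) for child in children_counts)
    return sum(sum(child) / n * gini(child) for child in children_counts)

# Humidity on the weather data: high [3 yes, 4 no], normal [6 yes, 1 no]
print(gini_split([[3, 4], [6, 1]]))  # about 0.367
```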

22 Discussion
Consider the following variations of decision trees.

23 1. Apply KNN to each leaf node
Instead of labeling the leaf with its majority class, run KNN over the training instances in the leaf to choose a class label for each test case.

24 2. Apply Naïve Bayes at each leaf node
For each leaf node, use all the available information we know about the test case to make the decision. Instead of using the majority rule, use probability/likelihood to make the decision.

25 3. Use error rates instead of entropy
If a node has N1 examples of the positive class P and N2 examples of the negative class N:
- If N1 > N2, then choose P; the error rate at this node is N2/(N1+N2).
- The expected error at a parent node is the weighted sum of the error rates at its child nodes, where the weights are the proportions of training data in each child.
For example, two children holding 10 cases (error rate 0.2) and 4 cases (error rate 0.5) give an expected error of (10*0.2 + 4*0.5)/14 = 0.286.

26 4. When there are missing values, allow tests to be done
Attribute selection criterion: minimal total cost (C_total = C_mc + C_test) instead of minimal entropy as in C4.5.
If growing the tree has a smaller total cost, then choose the attribute with minimal total cost; otherwise, stop and form a leaf.
Label the leaf according to minimal total cost as well: suppose the leaf has P positive examples and N negative examples, and let FP denote the cost of a false positive and FN the cost of a false negative. If P*FN >= N*FP, then label = positive, else label = negative.
More in the next lecture slides...

27 Missing Values
Missing values in test data: Humidity is one of {High, Normal}, but which one? Allow the test case to be split down each branch of the decision tree.
Methods: 1. equal proportions (1/2 to each side); 2. unequal proportions (use the training-data proportion of each branch).
Weighted result: the prediction is the weight of each branch times that branch's prediction, summed over the branches, e.g., P(Play = yes) = w_High * P(yes | High) + w_Normal * P(yes | Normal).
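A tiny sketch of the weighted result for the Humidity example (counts taken from the weather data; the variable names are my own):

```python
# Branch weights = training-data proportions (7 of 14 instances per branch),
# leaf distributions = P(play = yes | branch) in the weather data.
weights = {"high": 7 / 14, "normal": 7 / 14}
p_yes = {"high": 3 / 7, "normal": 6 / 7}

prediction = sum(weights[b] * p_yes[b] for b in weights)
print(prediction)  # P(play = yes) for a test case with unknown Humidity: about 0.643
```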

28 Dealing with Continuous Class Values
1. Use the mean of a set as the predicted value.
2. Use a linear regression formula to compute the predicted value; in linear algebra, the least-squares weights are w = (X^T X)^(-1) X^T y.

29 Using Entropy Reduction to Discretize Continuous Variables
Given the following data, sorted by increasing Temperature value, with the associated Play attribute values:
F F F F T T T T T T T T T F
Task: partition the continuous temperature range into the discrete values Cold and Warm.
Hint: decide the boundary by entropy reduction!

30 Entropy-Based Discretization
Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
E(S, T) = (|S1|/|S|) Ent(S1) + (|S2|/|S|) Ent(S2)
The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization. The process is applied recursively to the partitions obtained until some stopping criterion is met, e.g., the entropy reduction Ent(S) - E(S, T) falls below a threshold.
Experiments show that this may reduce data size and improve classification accuracy.
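A minimal sketch of one step of this discretization (illustrative; the recursion and stopping test are omitted):

```python
import math

def entropy(labels):
    n = len(labels)
    return -sum(labels.count(c) / n * math.log2(labels.count(c) / n)
                for c in set(labels))

def best_boundary(values, labels):
    """Return the boundary T minimizing E(S, T) over all candidate midpoints."""
    pairs = sorted(zip(values, labels))
    best_t, best_e = None, float("inf")
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                        # only split between distinct values
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v < t]
        right = [l for v, l in pairs if v >= t]
        e = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e
```

Applied recursively to each side, with a stopping test on the entropy reduction, this yields the discretization intervals.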

31 How to Calculate ent(S)?
Given two classes Yes and No in a set S:
Let p1 be the proportion of Yes and p2 the proportion of No, with p1 + p2 = 100%.
The entropy is ent(S) = -p1 * log2(p1) - p2 * log2(p2).
When p1 = 1 and p2 = 0, ent(S) = 0; when p1 = p2 = 50%, ent(S) is maximal!
See the TA's tutorial notes for an example.

32 Numeric attributes
Standard method: binary splits (e.g., temperature < 45).
Difference to nominal attributes: every numeric attribute offers many possible split points.
The solution is a straightforward extension: evaluate the information gain (or another measure) for every possible split point of the attribute, choose the "best" split point, and take the info gain of that split point as the info gain of the attribute. This is computationally more demanding.

33 An example
Split on the temperature attribute from the weather data, with the Play values sorted by temperature:
Yes No Yes Yes Yes No No Yes Yes Yes No Yes Yes No
E.g., 4 yeses and 2 nos for temperature < 71.5, and 5 yeses and 3 nos for temperature >= 71.5:
Info([4,2], [5,3]) = (6/14) info([4,2]) + (8/14) info([5,3]) = 0.939 bits
Split points are placed halfway between values. All split points can be evaluated in one pass!
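A sketch of that one-pass evaluation (illustrative): sort once, then slide the boundary rightwards while maintaining running class counts on each side, so each candidate split costs O(1) to score.

```python
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def best_numeric_split(values, labels):
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    left = {c: 0 for c in set(labels)}
    right = {c: 0 for c in set(labels)}
    for _, lab in pairs:
        right[lab] += 1
    best_info, best_split = float("inf"), None
    for i in range(n - 1):
        v, lab = pairs[i]
        left[lab] += 1               # move one instance across the boundary
        right[lab] -= 1
        if v == pairs[i + 1][0]:
            continue                 # can only split between distinct values
        info = ((i + 1) / n) * entropy(list(left.values())) \
             + ((n - i - 1) / n) * entropy(list(right.values()))
        if info < best_info:
            best_info, best_split = info, (v + pairs[i + 1][0]) / 2
    return best_split, best_info
```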

34 Missing values
C4.5 splits instances with missing values into pieces (with weights summing to 1). A piece going down a particular branch receives a weight proportional to the popularity of the branch. Info gain etc. can be computed with fractional instances by using sums of weights instead of counts. During classification, the same procedure is used to split instances into pieces, and the resulting probability distributions are merged using the weights.

35 Stopping Criteria
- When all cases have the same class: the leaf node is labeled by this class.
- When there is no available attribute: the leaf node is labeled by the majority class.
- When the number of cases is less than a specified threshold: the leaf node is labeled by the majority class.

36 Pruning
Pruning simplifies a decision tree to prevent overfitting to noise in the data. Two main pruning strategies:
1. Postpruning: takes a fully-grown decision tree and discards unreliable parts.
2. Prepruning: stops growing a branch when information becomes unreliable.
Postpruning is preferred in practice because prepruning suffers from stopping too early.

37 Prepruning
Usually based on a statistical significance test: stop growing the tree when there is no statistically significant association between any attribute and the class at a particular node. The most popular test is the chi-squared test. ID3 used the chi-squared test in addition to information gain: only statistically significant attributes were allowed to be selected by the information gain procedure.

38 The Weather example: Observed Count

Play ->      | Yes | No | Outlook subtotal
Sunny        | 2   | 0  | 2
Cloudy       | 0   | 1  | 1
Play subtotal| 2   | 1  | Total count in table = 3

39 The Weather example: Expected Count
If Outlook and Play were independent, the expected count in each cell would be (row subtotal * column subtotal) / total:

Play ->  | Yes           | No            | Subtotal
Sunny    | 2*2/3 = 1.3   | 2*1/3 = 0.7   | 2
Cloudy   | 1*2/3 = 0.7   | 1*1/3 = 0.3   | 1
Subtotal | 2             | 1             | Total count in table = 3

40 Question: how different are the observed and expected counts?
Chi-squared statistic: chi^2 = sum over cells of (Observed - Expected)^2 / Expected.
If the chi-squared value is very large, then A1 and A2 are not independent, that is, they are dependent!
Degrees of freedom: if the table has n*m items, then freedom = (n-1)*(m-1).
If all attributes in a node are independent of the class attribute, then stop splitting further.
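A sketch of the chi-squared statistic for such a table (illustrative):

```python
def chi_squared(observed):
    """Pearson chi-squared statistic for an n x m contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / total  # expected under independence
            chi2 += (o - e) ** 2 / e
    return chi2  # compare against the chi-squared distribution with (n-1)(m-1) df

# The weather example: rows = Outlook (sunny, cloudy), columns = Play (yes, no)
print(chi_squared([[2, 0], [0, 1]]))  # 3.0 on 1 degree of freedom
```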

41 Postpruning
Builds the full tree first and prunes it afterwards; attribute interactions are visible in the fully-grown tree. Problem: identifying the subtrees and nodes that are due to chance effects. Two main pruning operations:
1. Subtree replacement
2. Subtree raising
Possible strategies: error estimation, significance testing, MDL principle.

42 Subtree replacement
Bottom-up: a tree is considered for replacement once all its subtrees have been considered.

43 Subtree raising
Deletes a node and redistributes its instances. Slower than subtree replacement (worthwhile?).

44 Estimating error rates
A pruning operation is performed if it does not increase the estimated error. Of course, the error on the training data is not a useful estimator (it would result in almost no pruning). One possibility: use a hold-out set for pruning (reduced-error pruning). C4.5's method: use the upper limit of the 25% confidence interval derived from the training data (a standard Bernoulli-process-based method).

Training Set

46 Post-pruning in C4.5
Bottom-up pruning: at each non-leaf node v, if merging the subtree at v into a leaf node improves accuracy, perform the merging.
Method 1: compute accuracy using examples not seen by the algorithm.
Method 2: estimate accuracy using the training examples. Consider classifying E examples incorrectly out of N examples as observing E events in N trials of a binomial distribution. For a given confidence level CF, the upper limit U_CF(E, N) on the error rate over the whole population holds with CF% confidence.

47 Usage in Statistics: Sampling error estimation
Example:
- population: 1,000,000 people, which can be regarded as infinite
- population mean: percentage of left-handed people
- sample: 100 people
- sample mean: 6 left-handed
How do we estimate the REAL population mean?
[Figure: sampling distribution around the sample mean of 6, with the 75% confidence interval running from L0.25(100,6) to U0.25(100,6); the pessimistic estimate takes the upper limit U0.25(100,6)]

48 Usage in Decision Tree (DT): error estimation for a node in the DT
Example:
- unknown testing data: can be regarded as an infinite universe
- population mean: percentage of errors made by this node
- sample: 100 examples from the training data set
- sample mean: 6 errors on the training data set
How do we estimate the REAL average error rate?
[Figure: the same 75% confidence interval, from L0.25(100,6) to U0.25(100,6); the pessimistic estimate takes the upper limit U0.25(100,6)]
A heuristic! But it works well...

49 C4.5's method
The error estimate for a subtree is the weighted sum of the error estimates for all its leaves. The error estimate for a node is
e = (f + z^2/(2N) + z * sqrt(f/N - f^2/N + z^2/(4N^2))) / (1 + z^2/N)
If c = 25% then z = 0.69 (from the normal distribution); f is the error on the training data and N is the number of instances covered by the leaf.
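A sketch of this estimate in Python (the printed values match the example on the next slide):

```python
import math

def pessimistic_error(f, N, z=0.69):
    """Upper limit of the confidence interval on the true error rate
    (z = 0.69 corresponds to C4.5's default confidence c = 25%)."""
    return (f + z * z / (2 * N)
            + z * math.sqrt(f / N - f * f / N + z * z / (4 * N * N))) \
           / (1 + z * z / N)

print(pessimistic_error(0, 6))        # about 0.074
print(pessimistic_error(0, 9))        # about 0.050
print(pessimistic_error(0, 1))        # about 0.323
print(pessimistic_error(1 / 16, 16))  # about 0.118
```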

50 Example for Estimating Error
Consider a subtree rooted at Outlook with 3 leaf nodes:
Sunny: Play = yes (0 errors, 6 instances), e = 0.074
Overcast: Play = yes (0 errors, 9 instances), e = 0.050
Cloudy: Play = no (0 errors, 1 instance), e = 0.323
The estimated error count for this subtree is 6*0.074 + 9*0.050 + 1*0.323 = 1.217.
If the subtree is replaced with the single leaf "yes", it makes 1 error out of 16 instances, and the estimated error count is 16*0.118 = 1.89 > 1.217. So no pruning is performed.

51 Example continued
[Figure: tree rooted at Outlook with branches sunny -> yes, overcast -> yes, cloudy -> no, and the candidate replacement leaf "yes"]

52 Another Example
A subtree has three leaves with training error rates f = 0.33 (e = 0.47), f = 0.5 (e = 0.72), and f = 0.33 (e = 0.47). Combined using the ratios 6:2:6, the leaves give an estimated error of 0.51. The parent node has f = 5/14 and e = 0.46 < 0.51, so the subtree is pruned.

53 Continuous Case: The CART Algorithm

54 Numeric prediction
Counterparts exist for all the schemes we previously discussed: decision trees, rule learners, SVMs, etc. All classification schemes can be applied to regression problems using discretization:
- Prediction: weighted average of the intervals' midpoints (weighted according to class probabilities)
Regression is more difficult than classification (i.e., percent correct vs. mean squared error).

55 Regression trees
Differences to decision trees:
- Splitting criterion: minimizing intra-subset variation
- Pruning criterion: based on a numeric error measure
- Leaf node predicts the average class value of the training instances reaching that node
Can approximate piecewise constant functions; easy to interpret. More sophisticated version: model trees.

56 Model trees
Regression trees with linear regression functions at each node. Linear regression is applied to the instances that reach a node after the full regression tree has been built. Only a subset of the attributes is used for LR: the attributes occurring in the subtree (and possibly the attributes occurring in the path to the root). Fast: the overhead for LR is not large because usually only a small subset of attributes is used in the tree.

57 Smoothing
The naïve method for prediction outputs the value of the LR model for the corresponding leaf node. Performance can be improved by smoothing predictions using the internal LR models: the predicted value is a weighted average of the LR models along the path from the root to the leaf.
Smoothing formula: p' = (np + kq) / (n + k), where p' is the prediction passed up to the next higher node, p is the prediction passed up from below, q is the value predicted by the model at this node, n is the number of training instances that reach the node below, and k is a smoothing constant.
The same effect can be achieved by incorporating the internal models into the leaf nodes.
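A one-function sketch of the smoothing step (k = 15 is the constant usually quoted for M5; treat the default as an assumption):

```python
def smooth(p, q, n, k=15.0):
    """Blend the prediction from below (p, based on n instances)
    with this node's own linear-model prediction q."""
    return (n * p + k * q) / (n + k)

# A leaf prediction of 3.2 built on only 4 instances, smoothed with a
# parent model that predicts 2.5: the result leans towards the parent.
print(smooth(3.2, 2.5, 4))  # about 2.65
```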

58 Building the tree
Splitting criterion: standard deviation reduction,
SDR = sd(T) - sum over subsets i of (|Ti|/|T|) * sd(Ti)
Termination criteria (important when building trees for numeric prediction):
- the standard deviation becomes smaller than a certain fraction of the sd for the full training set (e.g., 5%)
- too few instances remain (e.g., fewer than four)
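A sketch of the SDR splitting criterion (illustrative):

```python
import statistics

def sdr(parent_values, subsets):
    """Standard deviation reduction: sd of the parent's class values minus
    the size-weighted sd of each subset produced by the split."""
    n = len(parent_values)
    weighted = sum(len(s) / n * statistics.pstdev(s) for s in subsets)
    return statistics.pstdev(parent_values) - weighted

parent = [1.0, 1.5, 2.0, 8.0, 9.0, 10.0]
print(sdr(parent, [[1.0, 1.5, 2.0], [8.0, 9.0, 10.0]]))  # about 3.19
```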

59 Model tree for servo data

60 Variations of CART
Applying logistic regression:
- predict the probability of "True" or "False" instead of making a numeric-valued prediction
- i.e., predict a probability value p rather than the outcome itself
The model works through the odds ratio p / (1 - p): logistic regression fits a linear function to its logarithm, the log-odds.

61 Other Trees
[Slide shows, for each scheme, the impurity formula evaluated at the current node and at its children nodes (L, R): classification trees (entropy), decision trees with the GINI index used in CART, and regression trees using the standard deviation (STD)]

62 Scalability: Previous works
Incremental tree construction [Quinlan 1993]:
- uses partial data to build a tree
- the remaining examples are tested, and the misclassified ones are used to rebuild the tree iteratively
- still a main-memory algorithm
Best known algorithms: ID3, C4.5, C5.

63 Efforts on Scalability
Most algorithms assume the data can fit in memory. Recent efforts focus on disk-resident implementations of decision trees:
- Random sampling
- Partitioning
Examples: SLIQ (EDBT'96 [MAR96]), SPRINT (VLDB'96 [SAM96]), PUBLIC (VLDB'98 [RS98]), RainForest (VLDB'98 [GRG98]).