
Review - Decision Trees
Zhangxi Lin, ISQS 7339, Texas Tech University

Outline
- Basic decision tree modeling
- Determining the best split
- Determining when to stop splitting

Basic Decision Tree Modeling

Classification: Definition
- Given a collection of records (the training set), where each record contains a set of attributes, one of which is the class.
- Find a model for the class attribute as a function of the values of the other attributes.
- Goal: previously unseen records should be assigned a class as accurately as possible.
- A test set is used to determine the accuracy of the model. Usually the given data set is divided into a training set, used to build the model, and a test set, used to validate it.
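A minimal sketch of that last step in Python (the function name and the 70/30 fraction are illustrative, not from the course):

```python
import random

def train_test_split(records, test_fraction=0.3, seed=1):
    """Divide the data into a training set (to build the model) and a
    test set (to validate it). The split fraction is an assumption."""
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(list(range(100)))
print(len(train), len(test))  # 70 30
```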

Decision Tree Based Classification
Advantages:
- Inexpensive to construct
- Extremely fast at classifying unknown records
- Easy to interpret for small-sized trees
- Accuracy comparable to other classification techniques for many simple data sets

Decision Tree Induction
Many algorithms:
- Hunt's Algorithm (one of the earliest)
- CHAID (Chi-square Automatic Interaction Detection)
- CRT (Classification and Regression Trees)
- ID3, C4.5
- SLIQ, SPRINT

General Structure of Hunt's Algorithm
Let Dt be the set of training records that reach a node t.
General procedure:
- If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt.
- If Dt is an empty set, then t is a leaf node labeled with the default class, yd.
- If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, then recursively apply the procedure to each subset.
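A rough Python sketch of this recursion; the attribute choice is a deliberate placeholder, since picking the best test is the subject of the following slides:

```python
from collections import Counter

def hunt(records, attributes, default=None):
    """Sketch of Hunt's algorithm (illustrative, not course code).
    Each record is an (attribute_dict, label) pair."""
    if not records:                      # empty D_t: leaf labeled default class y_d
        return ("leaf", default)
    labels = [y for _, y in records]
    if len(set(labels)) == 1:            # pure D_t: leaf labeled y_t
        return ("leaf", labels[0])
    majority = Counter(labels).most_common(1)[0][0]
    if not attributes:                   # nothing left to test: majority-class leaf
        return ("leaf", majority)
    attr, rest = attributes[0], attributes[1:]   # placeholder choice; a real
    children = {}                                # learner picks the best test
    for value in {x[attr] for x, _ in records}:
        subset = [(x, y) for x, y in records if x[attr] == value]
        children[value] = hunt(subset, rest, default=majority)
    return ("split", attr, children)
```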

Hunt's Algorithm
(Figure: the tree grown step by step on the tax-cheat example data: first a single leaf predicting Don't Cheat; then a split on Refund, with Yes -> Don't Cheat; then a split on Marital Status, with Married -> Don't Cheat; finally, under Single, Divorced, a split on Taxable Income into < 80K and >= 80K.)

Tree Induction
- Greedy strategy: split the records based on a variable test that optimizes a certain criterion.
- Issues:
  - Determining how to split the records
    - How to specify the attribute test condition?
    - How to determine the best split?
  - Determining when to stop splitting

How to Specify the Test Condition?
- Depends on the variable type: nominal, ordinal, continuous
- Depends on the number of ways to split: 2-way split, multi-way split

Splitting Based on Ordinal Variables
- Multi-way split: use as many partitions as there are distinct values, e.g., Size -> Small / Medium / Large.
- Binary split: divide the values into two subsets, e.g., Size -> {Small, Medium} vs. {Large}, or Size -> {Small} vs. {Medium, Large}; the optimal partitioning must be found.
- What about the split Size -> {Small, Large} vs. {Medium}? It violates the ordering of the attribute's values.

Splitting Based on Continuous Attributes
Different ways of handling:
- Discretization, to form an ordinal categorical attribute
  - Static: discretize once at the beginning
  - Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering
- Binary decision: (A < v) or (A >= v)
  - Consider all possible splits and find the best cut
  - Can be more compute intensive

Splitting Based on Continuous Attributes
(Figure: a binary split, e.g. Taxable Income > 80K? Yes/No, contrasted with a multi-way split of Taxable Income into several ranges.)

Determining the best split

Tree Induction
- Greedy strategy: split the records based on a variable test that optimizes a certain criterion.
- Issues:
  - Determining how to split the records
    - How to specify the attribute test condition?
    - How to determine the best split?
  - Determining when to stop splitting

How to Determine the Best Split
Before splitting: 10 records of class 0 and 10 records of class 1. Which test condition is the best?
(Figure: several candidate splits on different attributes, producing child nodes of varying class purity.)

How to Determine the Best Split
- Greedy approach: nodes with a homogeneous class distribution are preferred.
- Need a measure of node impurity:
  - Non-homogeneous: high degree of impurity
  - Homogeneous: low degree of impurity

Measures of Node Impurity
- Gini index
- Entropy
- Misclassification error

Measure of Impurity: GINI
Gini index for a given node t:
GINI(t) = 1 - sum_j [p(j | t)]^2
where p(j | t) is the relative frequency of class j at node t.
- Maximum (1 - 1/nc) when records are equally distributed among all nc classes, implying the least interesting information.
- Minimum (0.0) when all records belong to one class, implying the most interesting information.

Examples for Computing GINI
- P(C1) = 0/6 = 0, P(C2) = 6/6 = 1: Gini = 1 - P(C1)^2 - P(C2)^2 = 1 - 0 - 1 = 0
- P(C1) = 1/6, P(C2) = 5/6: Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278
- P(C1) = 2/6, P(C2) = 4/6: Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444
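These numbers can be checked with a few lines of Python (the helper is mine, not course code):

```python
def gini(counts):
    """GINI(t) = 1 - sum_j p(j|t)^2, from the per-class record counts at a node."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

print(gini([0, 6]))  # 0.0       (minimum: pure node)
print(gini([1, 5]))  # 0.2777... ~ 0.278
print(gini([2, 4]))  # 0.4444... ~ 0.444
print(gini([3, 3]))  # 0.5       (maximum for two classes: 1 - 1/nc)
```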

Splitting Based on GINI
- Used in CART, SLIQ, SPRINT.
- When a node p is split into k partitions (children), the quality of the split is computed as
GINI_split = sum_{i=1..k} (n_i / n) GINI(i)
where n_i = number of records at child i and n = number of records at node p.

Binary Variables: Computing the GINI Index
- Splits into two partitions.
- Effect of weighting partitions: larger and purer partitions are sought.
Example (split B? with Yes -> Node N1: C1 = 5, C2 = 2 and No -> Node N2: C1 = 1, C2 = 4):
Gini(N1) = 1 - (5/7)^2 - (2/7)^2 = 0.408
Gini(N2) = 1 - (1/5)^2 - (4/5)^2 = 0.320
Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
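The same computation in Python, reusing the gini() helper from the previous sketch:

```python
def gini_split(partitions):
    """GINI_split = sum_i (n_i / n) * GINI(i) over the k children."""
    n = sum(sum(part) for part in partitions)
    return sum(sum(part) / n * gini(part) for part in partitions)

# N1 = (C1: 5, C2: 2), N2 = (C1: 1, C2: 4), as on the slide
print(gini_split([[5, 2], [1, 4]]))  # 0.3714... ~ 0.371
```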

Categorical Attributes: Computing the Gini Index
- For each distinct value, gather the counts for each class in the dataset.
- Use the count matrix to make decisions.
- Multi-way split, or two-way split (find the best partition of values).

Continuous Variables: Computing the Gini Index
- Use a binary decision based on one value.
- Several choices for the splitting value: the number of possible splitting values equals the number of distinct values.
- Each splitting value v has a count matrix associated with it: the class counts in each of the partitions, A < v and A >= v.
- Simple method to choose the best v: for each v, scan the database to gather the count matrix and compute its Gini index. Computationally inefficient! Repetition of work.

Continuous Variables: Computing the Gini Index...
For efficient computation, for each attribute:
- Sort the attribute on its values.
- Linearly scan these values, each time updating the count matrix and computing the Gini index.
- Choose the split position that has the least Gini index.
(Table: sorted attribute values with the candidate split positions and the running count matrix / Gini index at each position.)
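A sketch of that sorted linear scan, reusing gini() from above; the midpoint threshold convention is an assumption:

```python
from collections import Counter

def best_split(values, labels):
    """Find the threshold minimizing the weighted Gini of A < v vs. A >= v.
    Sort once, then move one record at a time across the boundary,
    updating the two count matrices incrementally (no rescanning)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    left, right = Counter(), Counter(lab for _, lab in pairs)
    best_g, best_v = float("inf"), None
    for i in range(1, n):
        v, lab = pairs[i - 1]
        left[lab] += 1                        # shift one record leftward
        right[lab] -= 1
        if pairs[i][0] == v:
            continue                          # split only between distinct values
        g = sum(sum(side.values()) / n * gini(list(side.values()))
                for side in (left, right))
        if g < best_g:
            best_g, best_v = g, (v + pairs[i][0]) / 2
    return best_v, best_g

# Taxable-income example from the Tan et al. slides: best cut near 97, Gini = 0.300
incomes = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]
cheat = ["No", "No", "No", "Yes", "Yes", "Yes", "No", "No", "No", "No"]
print(best_split(incomes, cheat))  # (97.5, 0.3)
```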

Alternative Splitting Criteria Based on INFO
Entropy at a given node t:
Entropy(t) = - sum_j p(j | t) log2 p(j | t)
where p(j | t) is the relative frequency of class j at node t.
- Measures the homogeneity of a node.
- Maximum (log2 nc) when records are equally distributed among all nc classes, implying the least information.
- Minimum (0.0) when all records belong to one class, implying the most information.
- Entropy-based computations are similar to the GINI index computations.

Examples for Computing Entropy
- P(C1) = 0/6 = 0, P(C2) = 6/6 = 1: Entropy = -0 log2 0 - 1 log2 1 = -0 - 0 = 0
- P(C1) = 1/6, P(C2) = 5/6: Entropy = -(1/6) log2 (1/6) - (5/6) log2 (5/6) = 0.65
- P(C1) = 2/6, P(C2) = 4/6: Entropy = -(2/6) log2 (2/6) - (4/6) log2 (4/6) = 0.92
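In Python, assuming the usual convention 0 log2 0 = 0:

```python
from math import log2

def entropy(counts):
    """Entropy(t) = -sum_j p(j|t) log2 p(j|t), taking 0 log2 0 = 0."""
    n = sum(counts)
    # max() also normalizes the -0.0 that arises for pure nodes
    return max(0.0, -sum(c / n * log2(c / n) for c in counts if c > 0))

print(entropy([0, 6]))  # 0.0
print(entropy([1, 5]))  # 0.6500...
print(entropy([2, 4]))  # 0.9182...
```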

Splitting Based on INFO...
Information gain:
GAIN_split = Entropy(p) - sum_{i=1..k} (n_i / n) Entropy(i)
where parent node p is split into k partitions and n_i is the number of records in partition i.
- Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
- Used in ID3 and C4.5.
- Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.

Splitting Based on INFO...
Gain ratio:
GainRATIO_split = GAIN_split / SplitINFO, where SplitINFO = - sum_{i=1..k} (n_i / n) log2 (n_i / n)
and parent node p is split into k partitions, with n_i records in partition i.
- Adjusts information gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
- Used in C4.5.
- Designed to overcome the disadvantage of information gain.
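Both criteria in Python, reusing entropy() and log2 from the previous sketch:

```python
def information_gain(parent, partitions):
    """GAIN_split = Entropy(parent) - sum_i (n_i / n) Entropy(i)."""
    n = sum(parent)
    return entropy(parent) - sum(sum(p) / n * entropy(p) for p in partitions)

def gain_ratio(parent, partitions):
    """GainRATIO_split = GAIN_split / SplitINFO; penalizes splits
    into a large number of small partitions, as in C4.5."""
    n = sum(parent)
    split_info = -sum(sum(p) / n * log2(sum(p) / n) for p in partitions)
    return information_gain(parent, partitions) / split_info

# Splitting (C1: 3, C2: 3) into pure halves: gain = 1.0 and split_info = 1.0
print(information_gain([3, 3], [[3, 0], [0, 3]]))  # 1.0
print(gain_ratio([3, 3], [[3, 0], [0, 3]]))        # 1.0
```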

Splitting Criteria Based on Classification Error
Classification error at a node t:
Error(t) = 1 - max_j p(j | t)
- Measures the misclassification error made by a node.
- Maximum (1 - 1/nc) when records are equally distributed among all nc classes, implying the least interesting information.
- Minimum (0.0) when all records belong to one class, implying the most interesting information.

Examples for Computing Error
- P(C1) = 0/6 = 0, P(C2) = 6/6 = 1: Error = 1 - max(0, 1) = 1 - 1 = 0
- P(C1) = 1/6, P(C2) = 5/6: Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6
- P(C1) = 2/6, P(C2) = 4/6: Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3
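And the corresponding one-line helper:

```python
def classification_error(counts):
    """Error(t) = 1 - max_j p(j|t)."""
    n = sum(counts)
    return 1 - max(counts) / n

print(classification_error([0, 6]))  # 0.0
print(classification_error([1, 5]))  # 0.1666... = 1/6
print(classification_error([2, 4]))  # 0.3333... = 1/3
```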

Comparison Among Splitting Criteria
For a 2-class problem:
(Figure: Gini index, entropy, and misclassification error plotted against p, the fraction of records in one class; all three peak at p = 0.5 and are zero at p = 0 and p = 1.)

Misclassification Error vs. Gini
Example: a parent node (C1 = 7, C2 = 3) is split into Node N1 (C1 = 3, C2 = 0) and Node N2 (C1 = 4, C2 = 3).
Gini(N1) = 1 - (3/3)^2 - (0/3)^2 = 0
Gini(N2) = 1 - (4/7)^2 - (3/7)^2 = 0.489
Gini(Children) = 3/10 × 0 + 7/10 × 0.489 = 0.342
Gini improves (the parent's Gini is 0.42), even though the misclassification error stays at 3/10 before and after the split.

Determining when to stop splitting

Tree Induction
- Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.
- Issues:
  - Determining how to split the records
    - How to specify the attribute test condition?
    - How to determine the best split?
  - Determining when to stop splitting

Stopping Criteria for Tree Induction
- Stop expanding a node when all the records belong to the same class.
- Stop expanding a node when all the records have similar attribute values.
- Early termination.

Issues in Classification
- Underfitting and overfitting
- Missing values
- Costs of classification

Underfitting and Overfitting (Example)
500 circular and 500 triangular data points.
Circular points: 0.5 <= sqrt(x1^2 + x2^2) <= 1
Triangular points: sqrt(x1^2 + x2^2) > 1 or sqrt(x1^2 + x2^2) < 0.5
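One way to generate such a data set (the square sampling window is an assumption; the slide only fixes the radius conditions):

```python
import math
import random

random.seed(0)

def sample_class(wanted, n):
    """Rejection-sample n points of one class. The class rule follows the
    slide; the window [-1.5, 1.5]^2 is an illustrative assumption."""
    points = []
    while len(points) < n:
        x1 = random.uniform(-1.5, 1.5)
        x2 = random.uniform(-1.5, 1.5)
        r = math.hypot(x1, x2)
        label = "circular" if 0.5 <= r <= 1 else "triangular"
        if label == wanted:
            points.append((x1, x2, label))
    return points

data = sample_class("circular", 500) + sample_class("triangular", 500)
print(len(data))  # 1000
```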

Underfitting and Overfitting
Underfitting: when the model is too simple, both training and test errors are large.
(Figure: training and test error rates versus the number of nodes in the tree.)

Overfitting
(Figure: training-set and test-set decision regions for an overly complex tree.)

Better Fitting
(Figure: training-set and test-set decision regions for a tree of appropriate size.)

How to Address Overfitting
Pre-pruning (early stopping rule):
- Stop the algorithm before it becomes a fully grown tree.
- Typical stopping conditions for a node:
  - Stop if all instances belong to the same class.
  - Stop if all the attribute values are the same.
- More restrictive conditions:
  - Stop if the number of instances is less than some user-specified threshold.
  - Stop if the class distribution of the instances is independent of the available features (e.g., using the chi-square test; see the sketch below).
  - Stop if expanding the current node does not improve the impurity measures (e.g., Gini or information gain).
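A sketch of the chi-square condition using scipy (the function and alpha threshold are assumptions for illustration):

```python
from scipy.stats import chi2_contingency

def stop_by_independence(count_matrix, alpha=0.05):
    """Pre-pruning check: stop expanding the node if the class distribution
    is independent of the candidate split (chi-square p-value above alpha).
    Rows are the children of the split; columns are the class counts."""
    _, p_value, _, _ = chi2_contingency(count_matrix)
    return p_value > alpha

print(stop_by_independence([[8, 4], [3, 4]]))  # True for this small example
```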

Data Splitting
(Figure: the data set partitioned into training, validation, and test sets.)

How to Address Overfitting…
Post-pruning:
- Grow the decision tree to its entirety.
- Trim the nodes of the decision tree in a bottom-up fashion.
- If the generalization error improves after trimming, replace the sub-tree by a leaf node whose class label is determined from the majority class of instances in the sub-tree.
- Can use MDL for post-pruning.

Example of Post-Pruning
Root node: Class = Yes: 20, Class = No: 10 -> training error = 10/30.
Training error (before splitting) = 10/30; pessimistic error = (10 + 0.5)/30 = 10.5/30.
After splitting into four children with class counts (Yes 8, No 4), (Yes 3, No 4), (Yes 4, No 1), (Yes 5, No 1):
Training error (after splitting) = 9/30; pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30 -> PRUNE!
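The slide's arithmetic as a small helper (the 0.5-per-leaf penalty is the slide's):

```python
def pessimistic_error(train_errors, num_leaves, n, penalty=0.5):
    """Pessimistic training error: (errors + penalty per leaf) / n."""
    return (train_errors + penalty * num_leaves) / n

before = pessimistic_error(10, 1, 30)  # (10 + 0.5)/30  = 0.35
after = pessimistic_error(9, 4, 30)    # (9 + 4*0.5)/30 ~ 0.3667
print("PRUNE" if after >= before else "KEEP")  # PRUNE
```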

Examples of Post-Pruning
Case 1: a node split into children with counts C0: 11, C1: 3 and C0: 2, C1: 4.
Case 2: a node split into children with counts C0: 14, C1: 3 and C0: 2, C1: 2.
- Optimistic error? Don't prune for either case.
- Pessimistic error? Don't prune case 1, prune case 2.
- Reduced error pruning? Depends on the validation set.

Questions - Hows
- How do the popular decision tree algorithms embed the techniques we covered so far?
- How do these algorithms differ from each other?
- How can the efficiency of a specific algorithm be improved in the context of the problem being tackled?
- How does SAS adopt these algorithms?
- How do you configure a SAS Decision Tree node?
- How can the existing SAS decision tree algorithms be customized using SAS procedures?

Review – Newly Covered Contents

Courseware Structure (Decision Tree)
Slides / Textbook / SAS Course Notes:
- Slides #1: Textbook Chapter 1; DMDT Chapter 1 (Algorithms)
- Slides #2: Textbook Chapter 2; DMDT Chapter 2 (Partition, Implement)
- Slides #3: Textbook Chapter 3; DMDT Chapter 3 (Pruning)
- Slides #4: Textbook Chapter 4
- Slides #5: Textbook Chapter 5; DMDT Chapter 4
- Slides #6: DMDT Chapter 5; PMADV Chapter 1
- Slides #7: Textbook Chapter 6; PMADV Chapter 2
- Slides #8: Workshop; PMADV Chapter 3
- Slides #9

Descriptive, Predictive, and Explanatory Analysis
- Simpson's paradox
- Interactions between inputs
- Algorithms
- Kass adjustment and Bonferroni correction

Decision Tree Construction – Recursive Partitioning
- Decision tree: the six steps
- Why is clustering utilized in decision tree algorithms? How?
- How is the missing-values problem resolved in decision tree modeling? What is a surrogate split? How does it work?
- What are the differences between CHAID and CRT? Binary vs. multi-way split
- SAS EM decision tree parameters
- P-value adjustments

Pruning
- Why, how, and performance
- What is cross-validation?
- How do you make a model "CART-like" or "CHAID-like"? How do the settings match the features of the CHAID or CART algorithm?

Auxiliary Use of Decision Trees
- Data exploration
- Selecting important inputs
- Collapsing levels
- Discretizing interval inputs
- Interaction detection
- Regression imputation

Improving Input Selections - How
- First, a univariate screening is performed to eliminate those inputs with little promise of target association. This must be done with care to avoid eliminating inputs whose predictive value occurs only in conjunction with other inputs.
- Second, variable clustering techniques are used to group correlated interval inputs and minimize input redundancy.
- Third, enhanced weight-of-evidence methods are used to effectively incorporate categorical inputs into the final model.
- Principal components

Two-Stage Modeling
- What is the problem?
- How does a two-stage model work?
- How to circumscribe the range of the prediction?