Decision Trees: An Introduction


Outline
- Top-Down Decision Tree Construction
- Choosing the Splitting Attribute
- Information Gain and Gain Ratio

Decision Tree
- An internal node is a test on an attribute.
- A branch represents an outcome of the test, e.g., house = Rent.
- A leaf node represents a class label or a class label distribution.
- At each node, one attribute is chosen to split the training examples into subsets that separate the classes as well as possible.
- A new case is classified by following the matching path from the root to a leaf node.
[Figure: an example tree with tests Age < 35 / Age ≥ 35 and Price < 200K / Price ≥ 200K, leaves labelled Rent, Buy, and Other.]

Weather Data: Play tennis or not (14 instances: 9 Yes, 5 No)

Outlook   Temperature  Humidity  Windy  Play?
sunny     hot          high      false  No
sunny     hot          high      true   No
overcast  hot          high      false  Yes
rain      mild         high      false  Yes
rain      cool         normal    false  Yes
rain      cool         normal    true   No
overcast  cool         normal    true   Yes
sunny     mild         high      false  No
sunny     cool         normal    false  Yes
rain      mild         normal    false  Yes
sunny     mild         normal    true   Yes
overcast  mild         high      true   Yes
overcast  hot          normal    false  Yes
rain      mild         high      true   No

Example Tree for “Play tennis”

Building Decision Tree [Q93]
- Top-down tree construction: at the start, all training examples are at the root; partition the examples recursively by choosing one attribute at a time.
- Bottom-up tree pruning: remove subtrees or branches, in a bottom-up manner, to improve the estimated accuracy on new cases. Discussed next week.
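The top-down construction loop fits in a few lines. Below is a minimal Python sketch, assuming the data is a list of dicts with a nominal class attribute; the names (build_tree, info_gain) are illustrative, not from the slides, and the splitting criterion is the information gain defined on the following slides.

    import math
    from collections import Counter

    def entropy(labels):
        # H = - sum of p * log2(p) over the class proportions
        n = len(labels)
        return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

    def info_gain(rows, attr, target):
        # information before the split minus weighted information after it
        before = entropy([r[target] for r in rows])
        n = len(rows)
        after = 0.0
        for value in {r[attr] for r in rows}:
            subset = [r[target] for r in rows if r[attr] == value]
            after += len(subset) / n * entropy(subset)
        return before - after

    def build_tree(rows, attributes, target):
        labels = [r[target] for r in rows]
        # stop when the node is pure or no attributes are left to test
        if len(set(labels)) == 1 or not attributes:
            return Counter(labels).most_common(1)[0][0]  # leaf: majority class
        best = max(attributes, key=lambda a: info_gain(rows, a, target))
        rest = [a for a in attributes if a != best]
        return {best: {value: build_tree([r for r in rows if r[best] == value],
                                         rest, target)
                       for value in {r[best] for r in rows}}}

Calling build_tree on the weather table above with attributes ["Outlook", "Temperature", "Humidity", "Windy"] and target "Play?" should reproduce the tree shown later in this deck.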

Choosing the Splitting Attribute
At each node, the available attributes are evaluated on how well they separate the classes of the training examples. A goodness function is used for this purpose. Typical goodness functions:
- information gain (ID3/C4.5)
- information gain ratio
- gini index (a quick sketch follows below)
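Information gain and gain ratio are developed on the slides that follow; the gini index is only name-dropped here, so as a quick illustration, this is its standard form (not taken from these slides):

    def gini(counts):
        # gini impurity: 1 - sum of squared class proportions
        n = sum(counts)
        return 1 - sum((c / n) ** 2 for c in counts)

    print(gini([9, 5]))  # about 0.459 for the full weather data (9 yes, 5 no)
    print(gini([4, 0]))  # 0.0 for a pure node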

A criterion for attribute selection
- Which is the best attribute? The one that results in the smallest tree.
- Heuristic: choose the attribute that produces the "purest" nodes.
- A popular impurity criterion is information gain: it increases with the average purity of the subsets that an attribute produces, and it is defined in terms of the entropy H(p).
- Strategy: choose the attribute that results in the greatest information gain.

Which attribute to select?

Consider entropy H(p)
[Figure: four example nodes. A pure node (100% yes) needs no further information ("done"); a node that is not pure at all (e.g., 40% yes) requires almost 1 bit of information to distinguish yes from no.]

Entropy
log(p) here denotes the base-2 logarithm of p.
Entropy: H(p) = – p log(p) – (1 – p) log(1 – p)
- H(0) = 0: pure node, the distribution is skewed
- H(1) = 0: pure node, the distribution is skewed
- H(0.5) = 1: mixed node, equal distribution
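The formula translates directly into code; a minimal sketch (the guard implements the 0·log(0) = 0 convention mentioned on the next slide):

    import math

    def H(p):
        # binary entropy: H(p) = -p*log2(p) - (1-p)*log2(1-p)
        if p in (0, 1):
            return 0.0  # 0*log(0) is evaluated as zero
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    print(H(0), H(0.5), H(1))  # 0.0 1.0 0.0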

Example: attribute "Outlook"
"Outlook" = "Sunny": info([2,3]) = entropy(2/5, 3/5) = 0.971 bits
"Outlook" = "Overcast": info([4,0]) = entropy(1, 0) = 0 bits
"Outlook" = "Rainy": info([3,2]) = entropy(3/5, 2/5) = 0.971 bits
Expected information for "Outlook": info([2,3], [4,0], [3,2]) = (5/14) × 0.971 + (4/14) × 0 + (5/14) × 0.971 = 0.693 bits
Note: log(0) is not defined, but we evaluate 0 × log(0) as zero.

Computing the information gain
gain = (information before split) – (information after split)
gain("Outlook") = info([9,5]) – info([2,3], [4,0], [3,2]) = 0.940 – 0.693 = 0.247 bits
Information gain for the attributes from the weather data:
gain("Outlook") = 0.247 bits
gain("Temperature") = 0.029 bits
gain("Humidity") = 0.152 bits
gain("Windy") = 0.048 bits
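These gains can be checked mechanically; a small sketch, where each [yes, no] pair is the class distribution in one subset produced by the attribute, read off the weather table above:

    import math

    def info(counts):
        # entropy, in bits, of a class distribution given as counts
        n = sum(counts)
        return -sum(c / n * math.log2(c / n) for c in counts if c)

    before = info([9, 5])  # 0.940 bits for the full data
    splits = {
        "Outlook":     [[2, 3], [4, 0], [3, 2]],  # sunny, overcast, rain
        "Temperature": [[2, 2], [4, 2], [3, 1]],  # hot, mild, cool
        "Humidity":    [[3, 4], [6, 1]],          # high, normal
        "Windy":       [[6, 2], [3, 3]],          # false, true
    }
    for attr, subsets in splits.items():
        n = sum(map(sum, subsets))
        after = sum(sum(s) / n * info(s) for s in subsets)
        print(attr, round(before - after, 3))
    # Outlook 0.247, Temperature 0.029, Humidity 0.152, Windy 0.048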

Continuing to split: within the "sunny" subset, gain("Temperature") = 0.571 bits, gain("Windy") = 0.020 bits, and gain("Humidity") = 0.971 bits, so "Humidity" is chosen next; both of its branches are pure.

The final decision tree: "Outlook" at the root, "Humidity" below the "sunny" branch, "Windy" below the "rain" branch, and "overcast" a pure "Yes" leaf. Note: not all leaves need to be pure; sometimes identical instances have different classes. Splitting stops when the data can't be split any further.
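Written as nested dicts, this tree is small enough to trace by hand; a sketch of the path-following classification described earlier (the tree literal encodes the structure just summarized):

    tree = {"Outlook": {
        "sunny":    {"Humidity": {"high": "No", "normal": "Yes"}},
        "overcast": "Yes",
        "rain":     {"Windy": {"false": "Yes", "true": "No"}},
    }}

    def classify(node, case):
        # follow the matching path from the root down to a leaf
        while isinstance(node, dict):
            attr = next(iter(node))        # attribute tested at this node
            node = node[attr][case[attr]]  # take the branch matching the case
        return node

    print(classify(tree, {"Outlook": "sunny", "Humidity": "normal",
                          "Windy": "false"}))  # Yes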

Highly-branching attributes
- Problematic: attributes with a large number of values (extreme case: customer ID)
- Subsets are more likely to be pure if there is a large number of values
- Information gain is therefore biased towards choosing attributes with a large number of values
- This may result in overfitting (selection of an attribute that is non-optimal for prediction)

Weather Data with ID

ID  Outlook   Temperature  Humidity  Windy  Play?
A   sunny     hot          high      false  No
B   sunny     hot          high      true   No
C   overcast  hot          high      false  Yes
D   rain      mild         high      false  Yes
E   rain      cool         normal    false  Yes
F   rain      cool         normal    true   No
G   overcast  cool         normal    true   Yes
H   sunny     mild         high      false  No
I   sunny     cool         normal    false  Yes
J   rain      mild         normal    false  Yes
K   sunny     mild         normal    true   Yes
L   overcast  mild         high      true   Yes
M   overcast  hot          normal    false  Yes
N   rain      mild         high      true   No

Split for the ID attribute: the entropy of the split is 0, since each leaf node is "pure", containing only one case. The information gain is therefore maximal for ID.
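A quick check of that claim: the information after the split is 0, so the gain equals the entropy of the full data set (a sketch; info([9,5]) is the 0.940 bits computed earlier):

    import math

    def info(counts):
        n = sum(counts)
        return -sum(c / n * math.log2(c / n) for c in counts if c)

    # ID produces 14 pure one-instance subsets, so information after = 0
    gain_id = info([9, 5]) - 0.0
    print(round(gain_id, 3))  # 0.94 -- the maximum possible gain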

Gain ratio
- Gain ratio: a modification of the information gain that reduces its bias towards highly-branching attributes
- The gain ratio takes the number and size of branches into account when choosing an attribute: it corrects the information gain by the intrinsic information of the split (i.e., how much information we need to tell which branch an instance belongs to)
- The intrinsic information is large when the data is evenly spread over the branches and small when all data belong to one branch, so the correction penalizes heavy branching

Gain Ratio and Intrinsic Info.
Intrinsic information: the entropy of the distribution of instances into branches. For an attribute A that splits a set S into subsets S1, …, Sk:
IntrinsicInfo(S, A) = – Σi (|Si| / |S|) × log2(|Si| / |S|)
Gain ratio (Quinlan '86) normalizes the information gain by it:
GainRatio(S, A) = Gain(S, A) / IntrinsicInfo(S, A)

Computing the gain ratio
Example: intrinsic information for ID: info([1,1,…,1]) = 14 × (– 1/14 × log2(1/14)) = 3.807 bits
The importance of an attribute decreases as its intrinsic information gets larger.
Example: gain ratio for ID: GainRatio("ID") = 0.940 / 3.807 = 0.247
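The same arithmetic in code (a sketch; intrinsic_info is just entropy applied to branch sizes instead of class counts):

    import math

    def intrinsic_info(sizes):
        # entropy of the distribution of instances into branches
        n = sum(sizes)
        return -sum(s / n * math.log2(s / n) for s in sizes if s)

    split_id = intrinsic_info([1] * 14)  # 14 one-instance branches
    print(round(split_id, 3))            # 3.807 bits, i.e. log2(14)
    print(round(0.940 / split_id, 3))    # gain ratio for ID: 0.247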

Gain ratios for weather data

Attribute    Info   Gain                   Split info              Gain ratio
Outlook      0.693  0.940 – 0.693 = 0.247  info([5,4,5]) = 1.577   0.247 / 1.577 = 0.156
Temperature  0.911  0.940 – 0.911 = 0.029  info([4,6,4]) = 1.557   0.029 / 1.557 = 0.019
Humidity     0.788  0.940 – 0.788 = 0.152  info([7,7]) = 1.000     0.152 / 1.000 = 0.152
Windy        0.892  0.940 – 0.892 = 0.048  info([8,6]) = 0.985     0.048 / 0.985 = 0.049
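The split-info column can be reproduced the same way (the helper is repeated here so the snippet runs on its own; the gains are the rounded values from the earlier slide):

    import math

    def info(counts):
        n = sum(counts)
        return -sum(c / n * math.log2(c / n) for c in counts if c)

    for name, sizes, gain in [("Outlook", [5, 4, 5], 0.247),
                              ("Temperature", [4, 6, 4], 0.029),
                              ("Humidity", [7, 7], 0.152),
                              ("Windy", [8, 6], 0.048)]:
        split = info(sizes)
        print(f"{name:12s} split info {split:.3f}   gain ratio {gain / split:.3f}")
    # small rounding differences (e.g. 0.157 vs 0.156) come from the rounded gains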

More on the gain ratio
- "Outlook" still comes out top; however, "ID" has an even greater gain ratio
- Standard fix: an ad hoc test to prevent splitting on that type of attribute
- Problem with the gain ratio: it may overcompensate, i.e., choose an attribute just because its intrinsic information is very low
- Standard fix: first, consider only attributes with greater than average information gain; then compare them on gain ratio