Inferring rudimentary rules


Inferring rudimentary rules
- 1R: learns a 1-level decision tree, i.e. a set of rules that all test one particular attribute
- Basic version:
  - One branch for each attribute value
  - Each branch assigns the most frequent class
  - Error rate: proportion of instances that don't belong to the majority class of their corresponding branch
  - Choose the attribute with the lowest error rate
- (Assumes nominal attributes)

Pseudo-code for 1R

For each attribute,
  For each value of the attribute, make a rule as follows:
    count how often each class appears
    find the most frequent class
    make the rule assign that class to this attribute-value
  Calculate the error rate of the rules
Choose the rules with the smallest error rate

Note: "missing" is treated as a separate attribute value
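
The pseudo-code above translates almost directly into Python. The following is a minimal sketch (the function name, the tuple-based data layout, and the toy rows are illustrative, not part of the original slides); handling of missing values would simply treat a value such as "missing" as one more attribute value.

```python
from collections import Counter, defaultdict

def one_r(instances, attributes, class_index):
    """1R: for each attribute, build one rule per value (majority class),
    then keep the attribute whose rule set makes the fewest errors."""
    best = None
    for a in attributes:
        # count class frequencies for every value of attribute a
        counts = defaultdict(Counter)
        for row in instances:
            counts[row[a]][row[class_index]] += 1
        # majority class per value; errors = instances outside that class
        rules = {v: c.most_common(1)[0][0] for v, c in counts.items()}
        errors = sum(sum(c.values()) - max(c.values()) for c in counts.values())
        if best is None or errors < best[2]:
            best = (a, rules, errors)
    return best  # (attribute index, {value: class}, total errors)

# toy weather rows: outlook, temperature, humidity, windy, play
weather = [
    ("sunny", "hot", "high", False, "no"),
    ("sunny", "hot", "high", True, "no"),
    ("overcast", "hot", "high", False, "yes"),
    ("rainy", "mild", "high", False, "yes"),
]
print(one_r(weather, attributes=range(4), class_index=4))
```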

Evaluating the weather attributes

The weather data (nominal version):

Outlook   Temp  Humidity  Windy  Play
Sunny     Hot   High      False  No
Sunny     Hot   High      True   No
Overcast  Hot   High      False  Yes
Rainy     Mild  High      False  Yes
Rainy     Cool  Normal    False  Yes
Rainy     Cool  Normal    True   No
Overcast  Cool  Normal    True   Yes
Sunny     Mild  High      False  No
Sunny     Cool  Normal    False  Yes
Rainy     Mild  Normal    False  Yes
Sunny     Mild  Normal    True   Yes
Overcast  Mild  High      True   Yes
Overcast  Hot   Normal    False  Yes
Rainy     Mild  High      True   No

1R rules and their error rates:

Attribute  Rules            Errors  Total errors
Outlook    Sunny → No       2/5     4/14
           Overcast → Yes   0/4
           Rainy → Yes      2/5
Temp       Hot → No*        2/4     5/14
           Mild → Yes       2/6
           Cool → Yes       1/4
Humidity   High → No        3/7     4/14
           Normal → Yes     1/7
Windy      False → Yes      2/8     5/14
           True → No*       3/6

* indicates a tie

Dealing with numeric attributes
- Discretize numeric attributes
- Divide each attribute's range into intervals:
  - Sort the instances according to the attribute's values
  - Place breakpoints where the (majority) class changes
  - This minimizes the total error
- Example: temperature from the weather data (numeric version)

Outlook   Temperature  Humidity  Windy  Play
Sunny     85           85        False  No
Sunny     80           90        True   No
Overcast  83           86        False  Yes
Rainy     75           80        False  Yes
…

Sorted temperature values, their classes and the resulting breakpoints:

64  65  68  69  70  71  72  72  75  75  80  81  83  85
Yes | No | Yes Yes Yes | No No Yes | Yes Yes | No | Yes Yes | No

The problem of overfitting
- This procedure is very sensitive to noise:
  - One instance with an incorrect class label will probably produce a separate interval
  - Also: a time-stamp attribute would have zero errors
- Simple solution: enforce a minimum number of instances in the majority class per interval
- Example (with min = 3):

64  65  68  69  70  71  72  72  75  75  80  81  83  85
Yes | No | Yes Yes Yes | No No Yes | Yes Yes | No | Yes Yes | No

64  65  68  69  70  71  72  72  75  75  80  81  83  85
Yes No Yes Yes Yes | No No Yes Yes Yes | No Yes Yes No
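
A rough Python sketch of this discretization step, including the minimum-bucket fix (the function and variable names are illustrative; the real 1R procedure also places the final breakpoints halfway between adjacent values and handles ties more carefully):

```python
from collections import Counter

def partition_numeric(values, classes, min_bucket=3):
    """Greedy 1R-style partitioning of a numeric attribute.
    An interval is closed once its majority class has at least `min_bucket`
    members, the next instance has a different class, and the attribute
    value actually changes (never split between equal values)."""
    pairs = sorted(zip(values, classes))
    intervals, current = [], []
    for i, (v, c) in enumerate(pairs):
        current.append((v, c))
        counts = Counter(cls for _, cls in current)
        majority, majority_n = counts.most_common(1)[0]
        nxt = pairs[i + 1] if i + 1 < len(pairs) else None
        if nxt and majority_n >= min_bucket and nxt[1] != majority and nxt[0] != v:
            intervals.append((v, majority))  # (upper bound, predicted class)
            current = []
    if current:
        counts = Counter(cls for _, cls in current)
        intervals.append((float("inf"), counts.most_common(1)[0][0]))
    # merge adjacent intervals that predict the same class
    merged = [intervals[0]]
    for bound, cls in intervals[1:]:
        if cls == merged[-1][1]:
            merged[-1] = (bound, cls)
        else:
            merged.append((bound, cls))
    return merged  # list of (upper bound, predicted class)

temps = [64, 65, 68, 69, 70, 71, 72, 72, 75, 75, 80, 81, 83, 85]
play  = ["yes", "no", "yes", "yes", "yes", "no", "no", "yes",
         "yes", "yes", "no", "yes", "yes", "no"]
# prints [(75, 'yes'), (inf, 'no')]: temperature <= 75 -> yes, otherwise no
print(partition_numeric(temps, play, min_bucket=3))
```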

With overfitting avoidance

Resulting rule set:

Attribute    Rules                   Errors  Total errors
Outlook      Sunny → No              2/5     4/14
             Overcast → Yes          0/4
             Rainy → Yes             2/5
Temperature  ≤ 77.5 → Yes            3/10    5/14
             > 77.5 → No*            2/4
Humidity     ≤ 82.5 → Yes            1/7     3/14
             > 82.5 and ≤ 95.5 → No  2/6
             > 95.5 → Yes            0/1
Windy        False → Yes             2/8     5/14
             True → No*              3/6

Discussion of 1R
- 1R was described in a paper by Holte (1993):
  - Contains an experimental evaluation on 16 datasets (using cross-validation, so that the results were representative of performance on future data)
  - The minimum number of instances per interval was set to 6 after some experimentation
  - 1R's simple rules performed not much worse than much more complex decision trees
- Simplicity first pays off!

Robert C. Holte, "Very Simple Classification Rules Perform Well on Most Commonly Used Datasets", Computer Science Department, University of Ottawa.

Constructing decision trees
- Strategy: top down, in a recursive divide-and-conquer fashion
  - First: select an attribute for the root node and create a branch for each possible attribute value
  - Then: split the instances into subsets, one for each branch extending from the node
  - Finally: repeat recursively for each branch, using only the instances that reach that branch
- Stop if all instances have the same class
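
In code, this strategy is a short recursive function. The sketch below (names illustrative, not from the slides) leaves the splitting criterion abstract; plugging in the information-gain selector developed on the following slides gives ID3-style behaviour:

```python
from collections import Counter

def build_tree(instances, attributes, class_index, select_attribute):
    """Top-down, recursive divide-and-conquer tree construction.
    `select_attribute` is any splitting criterion, e.g. information gain."""
    classes = [row[class_index] for row in instances]
    # stop if all instances share one class (or no attributes remain to test)
    if len(set(classes)) == 1 or not attributes:
        return Counter(classes).most_common(1)[0][0]  # leaf: majority class
    a = select_attribute(instances, attributes, class_index)
    tree = {"attribute": a, "branches": {}}
    remaining = [x for x in attributes if x != a]
    # one branch per attribute value, built from the instances that reach it
    for value in {row[a] for row in instances}:
        subset = [row for row in instances if row[a] == value]
        tree["branches"][value] = build_tree(subset, remaining,
                                             class_index, select_attribute)
    return tree
```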

Which attribute to select?

Criterion for attribute selection
- Which is the best attribute? We want to end up with the smallest tree
- Heuristic: choose the attribute that produces the "purest" nodes
- Popular impurity criterion: information gain
  - Information gain increases with the average purity of the subsets
- Strategy: choose the attribute that gives the greatest information gain

Computing information
- Information is measured in bits
- Given a probability distribution, the information required to predict an event is the distribution's entropy
- Entropy gives the required information in bits (this can involve fractions of bits!)
- Formula for computing the entropy:
  entropy(p1, p2, …, pn) = -p1 log2 p1 - p2 log2 p2 - … - pn log2 pn

Entropy of an attribute
- There are several definitions of entropy; we will use the so-called Shannon entropy
- Each possible value x of an attribute A has a "self-information" I(x):
  I(x) = log2(1/p(x)) = -log2 p(x)
- The entropy H(A) of the attribute is the expected (probability-weighted average) self-information:
  H(A) = Σx p(x) I(x) = -Σx p(x) log2 p(x)
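
As a quick sanity check of these formulas, here is a small Python helper (illustrative, not from the slides) that computes the entropy of a distribution given as counts; it reproduces the weather-data numbers used on the next slides:

```python
from math import log2

def entropy(counts):
    """Shannon entropy, in bits, of a distribution given as counts.
    0 * log2(0) is taken to be 0 (the 'continuity' convention)."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs)

print(entropy([9, 5]))   # whole weather data:  ~0.940 bits
print(entropy([2, 3]))   # Outlook = Sunny:     ~0.971 bits
print(entropy([4, 0]))   # Outlook = Overcast:   0.0 bits
```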

Claude Shannon, "Father of information theory" (born 30 April 1916, died 23 February 2001)

Claude Shannon, who has died aged 84, perhaps more than anyone laid the groundwork for today's digital revolution. His exposition of information theory, stating that all information could be represented mathematically as a succession of noughts and ones, facilitated the digital manipulation of data without which today's information society would be unthinkable. Shannon's master's thesis, obtained in 1940 at MIT, demonstrated that problem solving could be achieved by manipulating the symbols 0 and 1 in a process that could be carried out automatically with electrical circuitry. That dissertation has been hailed as one of the most significant master's theses of the 20th century. Eight years later, Shannon published another landmark paper, "A Mathematical Theory of Communication", generally taken as his most important scientific contribution. Shannon applied the same radical approach to cryptography research, in which he later became a consultant to the US government. Many of Shannon's pioneering insights were developed before they could be applied in practical form. He was truly a remarkable man, yet unknown to most of the world.

Link with Information Theory
Same concepts, different jargon (machine learning vs. information theory):
- Information gain of an attribute X about the class Y ⇔ the mutual information I(Y; X) = H(Y) - H(Y | X)
- "Information after splitting" on X ⇔ the conditional entropy H(Y | X)
- The intrinsic (split) information used by the gain ratio ⇔ the attribute's own entropy H(X)
See http://www.iiap.res.in/personnel/srik/qc_cit.html

Venn diagrams
- X and Y are random variables; their entropies can be visualized like overlapping set sizes
- Chain rule: H(X, Y) = H(X) + H(Y | X)

Entropy of a binary variable
- If X is binary, taking values 0 or 1:
  H(X) = -p(0) log2 p(0) - p(1) log2 p(1)
- When p(0) = p(1) = 0.5, H(X) = 1 bit (the maximum)
- When p(0) = 1 or p(1) = 1, H(X) = 0 (the minimum)

Example: attribute Outlook
- Outlook = Sunny: info([2,3]) = entropy(2/5, 3/5) = -2/5 log2(2/5) - 3/5 log2(3/5) = 0.971 bits
- Outlook = Overcast: info([4,0]) = entropy(1, 0) = -1 log2(1) - 0 log2(0) = 0 bits
- Outlook = Rainy: info([3,2]) = entropy(3/5, 2/5) = -3/5 log2(3/5) - 2/5 log2(2/5) = 0.971 bits
- Expected information for the attribute:
  info([2,3],[4,0],[3,2]) = (5/14) × 0.971 + (4/14) × 0 + (5/14) × 0.971 = 0.693 bits
- Note: the term 0 × log2(0) is taken to be 0 by "continuity" arguments

Computing information gain
- Information gain = information before splitting - information after splitting
  gain(Outlook) = info([9,5]) - info([2,3],[4,0],[3,2]) = 0.940 - 0.693 = 0.247 bits
- Information gain for the attributes from the weather data:
  gain(Outlook)     = 0.247 bits
  gain(Temperature) = 0.029 bits
  gain(Humidity)    = 0.152 bits
  gain(Windy)       = 0.048 bits
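
The gains above can be reproduced with a few lines of Python. This sketch (the dataset layout and function names are illustrative) re-includes the entropy helper so the block is self-contained, then evaluates the gain of each attribute on the standard 14-instance weather data:

```python
from collections import Counter
from math import log2

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def information_gain(instances, attribute, class_index):
    """gain(A) = info(whole set) - weighted info of the subsets after splitting on A."""
    classes = [row[class_index] for row in instances]
    before = entropy(list(Counter(classes).values()))
    after, n = 0.0, len(instances)
    for value in set(row[attribute] for row in instances):
        subset = [row[class_index] for row in instances if row[attribute] == value]
        after += len(subset) / n * entropy(list(Counter(subset).values()))
    return before - after

# weather data: outlook, temperature, humidity, windy, play
weather = [
    ("sunny", "hot", "high", False, "no"),     ("sunny", "hot", "high", True, "no"),
    ("overcast", "hot", "high", False, "yes"), ("rainy", "mild", "high", False, "yes"),
    ("rainy", "cool", "normal", False, "yes"), ("rainy", "cool", "normal", True, "no"),
    ("overcast", "cool", "normal", True, "yes"), ("sunny", "mild", "high", False, "no"),
    ("sunny", "cool", "normal", False, "yes"), ("rainy", "mild", "normal", False, "yes"),
    ("sunny", "mild", "normal", True, "yes"),  ("overcast", "mild", "high", True, "yes"),
    ("overcast", "hot", "normal", False, "yes"), ("rainy", "mild", "high", True, "no"),
]
for name, col in [("Outlook", 0), ("Temperature", 1), ("Humidity", 2), ("Windy", 3)]:
    print(name, round(information_gain(weather, col, 4), 3))
# Outlook 0.247, Temperature 0.029, Humidity 0.152, Windy 0.048
```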

Continuing to split
Gains computed for the instances that reach the Outlook = Sunny node:
  gain(Temperature) = 0.571 bits
  gain(Humidity)    = 0.971 bits
  gain(Windy)       = 0.020 bits

Final decision tree
- Note: not all leaves need to be pure; sometimes identical instances have different classes
- Splitting stops when the data can't be split any further

Wishlist for a purity measure
Properties we require from a purity measure:
- When a node is pure, the measure should be zero
- When impurity is maximal (i.e. all classes are equally likely), the measure should be maximal
- The measure should obey the multistage property (i.e. decisions can be made in several stages), e.g.:
  measure([2,3,4]) = measure([2,7]) + (7/9) × measure([3,4])
Entropy is the only function that satisfies all three properties!

Properties of the entropy
- The multistage property:
  entropy(p, q, r) = entropy(p, q + r) + (q + r) × entropy(q / (q + r), r / (q + r))
- Simplification of computation:
  info([2,3,4]) = -(2/9) log2(2/9) - (3/9) log2(3/9) - (4/9) log2(4/9)
                = [-2 log2 2 - 3 log2 3 - 4 log2 4 + 9 log2 9] / 9
- Note: instead of maximizing the information gain we could just minimize the information after the split
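
A quick numeric check of the multistage property on the [2,3,4] example (illustrative Python, assuming base-2 logarithms as elsewhere in these slides):

```python
from math import log2

def info(counts):
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

# multistage property: a three-way decision can be made in two stages
lhs = info([2, 3, 4])
rhs = info([2, 7]) + 7 / 9 * info([3, 4])
print(round(lhs, 6), round(rhs, 6))  # both are ~1.530 bits
```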

Highly-branching attributes
- Problematic: attributes with a large number of values (extreme case: an ID code)
- Subsets are more likely to be pure if there is a large number of values
  - Information gain is therefore biased towards choosing attributes with a large number of values
  - This may result in overfitting (selection of an attribute that is non-optimal for prediction)
- Another problem: fragmentation

Weather data with ID code

ID code  Outlook   Temp.  Humidity  Windy  Play
A        Sunny     Hot    High      False  No
B        Sunny     Hot    High      True   No
C        Overcast  Hot    High      False  Yes
D        Rainy     Mild   High      False  Yes
E        Rainy     Cool   Normal    False  Yes
F        Rainy     Cool   Normal    True   No
G        Overcast  Cool   Normal    True   Yes
H        Sunny     Mild   High      False  No
I        Sunny     Cool   Normal    False  Yes
J        Rainy     Mild   Normal    False  Yes
K        Sunny     Mild   Normal    True   Yes
L        Overcast  Mild   High      True   Yes
M        Overcast  Hot    Normal    False  Yes
N        Rainy     Mild   High      True   No

Tree stump for the ID code attribute
- Each of the 14 branches contains a single instance, so every leaf is pure
- Entropy of the split:
  info(ID code) = info([0,1]) + info([0,1]) + … + info([1,0]) = 0 bits
- Information gain is therefore maximal for ID code (namely 0.940 bits)

Gain ratio
- Gain ratio: a modification of the information gain that reduces its bias
- Gain ratio takes the number and size of branches into account when choosing an attribute
  - It corrects the information gain by taking the intrinsic information of a split into account
- Intrinsic information: the entropy of the distribution of instances into branches (i.e. how much information we need to tell which branch an instance belongs to)

Computing the gain ratio
- Example: intrinsic information for ID code
  info([1,1,…,1]) = 14 × (-(1/14) × log2(1/14)) = 3.807 bits
- The value of an attribute decreases as its intrinsic information gets larger
- Definition of gain ratio:
  gain_ratio(attribute) = gain(attribute) / intrinsic_info(attribute)
- Example:
  gain_ratio(ID code) = 0.940 / 3.807 ≈ 0.247
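
A small Python check of these numbers (illustrative helper names; the gains are recomputed from the class counts rather than taken from the rounded figures above):

```python
from math import log2

def info(counts):
    """Entropy, in bits, of a distribution given as counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def gain_ratio(gain, branch_sizes):
    """Gain ratio = information gain / intrinsic (split) information.
    The intrinsic information depends only on how many instances go down
    each branch, not on their class labels."""
    return gain / info(branch_sizes)

# ID code: 14 branches with one instance each, every leaf pure
gain_id = info([9, 5])                            # 0.940 bits; nothing is left after the split
print(round(info([1] * 14), 3))                   # intrinsic information ~3.807 bits
print(round(gain_ratio(gain_id, [1] * 14), 3))    # ~0.247

# Outlook: branches of size 5, 4 and 5
gain_outlook = info([9, 5]) - (5/14 * info([2, 3]) + 4/14 * info([4, 0]) + 5/14 * info([3, 2]))
print(round(gain_ratio(gain_outlook, [5, 4, 5]), 3))  # ~0.156
```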

Gain ratios for the weather data

Attribute    Info   Gain                 Split info             Gain ratio
Outlook      0.693  0.940-0.693 = 0.247  info([5,4,5]) = 1.577  0.247/1.577 = 0.156
Temperature  0.911  0.940-0.911 = 0.029  info([4,6,4]) = 1.557  0.029/1.557 = 0.019
Humidity     0.788  0.940-0.788 = 0.152  info([7,7])   = 1.000  0.152/1.000 = 0.152
Windy        0.892  0.940-0.892 = 0.048  info([8,6])   = 0.985  0.048/0.985 = 0.049

More on the gain ratio
- "Outlook" still comes out top
- However, "ID code" has an even greater gain ratio
  - Standard fix: an ad hoc test to prevent splitting on that type of attribute
- Problem with the gain ratio: it may overcompensate
  - It may choose an attribute just because its intrinsic information is very low
  - Standard fix: only consider attributes with greater-than-average information gain

Discussion
- Top-down induction of decision trees: ID3, an algorithm developed by Ross Quinlan
  - Gain ratio is just one modification of this basic algorithm
  - C4.5: deals with numeric attributes, missing values, and noisy data
- Similar approach: CART
- There are many other attribute selection criteria! (But they make little difference in the accuracy of the result)