CS 540 - Fall 2016 (© Jude Shavlik), Lecture 4
Today's Topics
Read the section of the textbook and the Wikipedia article(s) linked to the class home page
Read Chapter 3 & Section 4.1 (skim Section 3.6 and the rest of Chapter 4) and Sections 5.1, 5.2, 5.3, 5.7, 5.8, & 5.9 (skim the rest of Chapter 5) of the textbook
Sign up for Piazza!
Information Gain Derived (and Generalized to K Output Categories)
Handling Numeric and Hierarchical Features
Advanced Topic: Regression Trees
The Trouble with Too Many Possible Values
What if Measuring Features is Costly?
What if Feature Values are Missing?
Summer Internships Panel tomorrow [last Fri], Rm 1240 CS, 3:30pm
ID3 Info Gain Measure Justified
(Ref: C4.5, J. R. Quinlan, Morgan Kaufmann, 1993, pp. 21-22)
Definition of Information
The info conveyed by a message M depends on its probability: info(M) = -log2[Prob(M)] (due to Claude Shannon)
Note: last lecture we used infoNeeded() as a more informative name for info()
The Supervised Learning Task
Select an example from a set S and announce that it belongs to class C
The probability of this occurring is approximately f_C, the fraction of C's in S
Hence the info in this announcement is, by definition, -log2(f_C)
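A minimal Python sketch (not from the slides) of Shannon's definition above, info(M) = -log2(Prob(M)); the 1/8 probability is just an illustrative number:

```python
import math

def info(prob):
    """Bits of information conveyed by a message with probability `prob`."""
    return -math.log2(prob)

# Announcing that an example drawn from S belongs to class C, where C makes up
# 1/8 of S, conveys -log2(1/8) = 3 bits.
print(info(1 / 8))  # 3.0
```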
ID3 Info Gain Measure (cont.)
Let there be K different classes in set S, namely C1, C2, …, CK
What is the expected info from a message about the class of an example in set S?
info(S) = - Σ f_Ck × log2(f_Ck), summed over k = 1 to K, where f_Ck is the fraction of S belonging to class Ck
info(S) is the average number of bits of information needed (by looking at feature values) to classify a member of set S
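A short sketch of info(S) as defined above; the label lists are made-up examples:

```python
import math
from collections import Counter

def info_of_set(labels):
    """info(S) = -sum_k f_Ck * log2(f_Ck), where f_Ck is the fraction of class Ck in S."""
    total = len(labels)
    return sum(-(n / total) * math.log2(n / total) for n in Counter(labels).values())

print(info_of_set(["+", "+", "-", "-"]))   # 1.0 bit  (classes evenly split)
print(info_of_set(["+", "+", "+", "+"]))   # 0.0 bits (class is already certain)
```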
Handling Hierarchical Features in ID3
Define a new feature for each level in the hierarchy, e.g.,
Shape1 = { Circular, Polygonal }
Shape2 = { … } (the lower-level shape values; not shown here)
Let ID3 choose the appropriate level of abstraction!
[Figure: a hierarchy with Shape at the root and Circular and Polygonal beneath it]
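A hedged illustration of the idea: derive one feature per level of the hierarchy so ID3 can pick whichever level of abstraction splits best. Only Circular and Polygonal appear on the slide; the leaf shapes below are hypothetical:

```python
# Maps each (hypothetical) leaf shape to its ancestor at the level above it.
HIERARCHY = {
    "Circle":   "Circular",
    "Ellipse":  "Circular",
    "Square":   "Polygonal",
    "Triangle": "Polygonal",
}

def per_level_features(leaf_value):
    """Return one feature per hierarchy level: Shape1 (abstract) and Shape2 (specific)."""
    return {"Shape1": HIERARCHY[leaf_value], "Shape2": leaf_value}

print(per_level_features("Square"))  # {'Shape1': 'Polygonal', 'Shape2': 'Square'}
```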
Handling Numeric Features in ID3
On the fly, create binary features and choose the best one
Step 1: Plot the current examples (green = pos, red = neg)
Step 2: Divide midway between every consecutive pair of points with different categories to create new binary features, e.g., feature_new1 = (F < 8) and feature_new2 = (F < 10)
Step 3: Choose the split with the best info gain (it competes with all other features)
[Figure: examples plotted along the feature-value axis, with ticks at 5, 7, 9, 11, and 13]
Note: "on the fly" means in each recursive call to ID3
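A sketch of Steps 2 and 3 above. The points 5, 7, 9, 11, 13 come from the slide's axis; the +/- labels assigned to them here are assumptions for illustration:

```python
import math
from collections import Counter

def _info(labels):
    """Entropy of a label list, as in info(S) from the earlier sketch."""
    n = len(labels)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(labels).values())

def candidate_thresholds(values, labels):
    """Midpoints between consecutive points whose class labels differ (Step 2)."""
    pts = sorted(zip(values, labels))
    return [(pts[i][0] + pts[i + 1][0]) / 2
            for i in range(len(pts) - 1) if pts[i][1] != pts[i + 1][1]]

def best_threshold(values, labels):
    """Choose the threshold t whose binary feature F < t has the highest info gain (Step 3)."""
    def gain(t):
        left = [l for v, l in zip(values, labels) if v < t]
        right = [l for v, l in zip(values, labels) if v >= t]
        remainder = (len(left) * _info(left) + len(right) * _info(right)) / len(labels)
        return _info(labels) - remainder
    return max(candidate_thresholds(values, labels), key=gain)

print(candidate_thresholds([5, 7, 9, 11, 13], ["+", "+", "-", "+", "-"]))  # [8.0, 10.0, 12.0]
print(best_threshold([5, 7, 9, 11, 13], ["+", "+", "-", "+", "-"]))        # 8.0
```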
Handling Numeric Features (cont.)
Technical Note: a numeric feature cannot be discarded after it is used in one portion of the d-tree; it may be split again, at a different threshold, further down the tree
[Figure: a tree that tests F < 10 at the root and F < 5 again along one branch, with + and - leaves on the T/F branches]
Advanced Topic: Regression Trees (assume features are numerically valued)
[Figure: a regression tree. The root tests Age > 25 (No/Yes); one branch leads to a leaf with Output = 4·f3 + 7·f5 − 2·f9, and the other leads to an internal node testing Gender, whose M and F branches end in leaves with Output = 100·f4 − 2·f8 and Output = 7·f6 − 2·f1 − 2·f8 + f7]
Advanced Topic: Scoring “Splits” for Regression (Real-Valued) Problems
We want to return real values at the leaves
- For each feature F, "split" as is done in ID3
- Use the residue remaining, say measured with Linear Least Squares (LLS), instead of info gain to score candidate splits
- Why not use a weighted sum in the total error?
Commonly the models at the leaves are weighted sums of the features (y = mx + b)
Some approaches just place constants at the leaves
[Figure: a scatter plot of Output vs. X with an LLS fit line]
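A minimal sketch of scoring one candidate split for regression. It assumes the simpler option mentioned above (a constant, the mean, at each leaf) and uses the squared-error residue as the score; fitting a weighted sum of features at each leaf would replace the mean with a least-squares fit. All data values are illustrative:

```python
def sse(ys):
    """Sum of squared errors around the mean: the residue remaining at a leaf."""
    if not ys:
        return 0.0
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def split_score(xs, ys, threshold):
    """Lower is better: the residue remaining after splitting on x < threshold."""
    left = [y for x, y in zip(xs, ys) if x < threshold]
    right = [y for x, y in zip(xs, ys) if x >= threshold]
    return sse(left) + sse(right)

print(split_score([1, 2, 8, 9], [1.0, 1.2, 5.0, 5.3], threshold=5))  # small residue: good split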
Unfortunate Characteristic Property of Using Info-Gain Measure
Info gain FAVORS FEATURES WITH HIGH BRANCHING FACTORS (i.e., many possible values)
Extreme case: splitting on something like Student ID puts at most one example in each leaf, so every leaf's Info(., .) score is zero and the feature gets a perfect score!
But it generalizes very poorly (i.e., it memorizes the data)
[Figure: a node splitting on Student ID with branches such as 1, 99, …, 999999, each leading to a leaf holding a single example (1+ 0−, 0+ 1−, …)]
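A small demonstration of the bias on made-up data: the unique-ID feature drives every leaf's info to zero and so gets the maximum possible gain, while an ordinary low-arity feature looks worse even though it may generalize better:

```python
import math
from collections import Counter

def info(labels):
    n = len(labels)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(feature_values, labels):
    groups = {}
    for v, l in zip(feature_values, labels):
        groups.setdefault(v, []).append(l)
    remainder = sum(len(g) / len(labels) * info(g) for g in groups.values())
    return info(labels) - remainder

labels     = ["+", "-", "+", "-"]
student_id = [1, 99, 1234, 999999]   # a distinct value for every example
binary_f   = ["a", "a", "b", "b"]    # an ordinary two-valued feature
print(gain(student_id, labels))      # 1.0 -- "perfect" score, but it memorizes the data
print(gain(binary_f, labels))        # 0.0 -- scores worse despite being a sensible feature
```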
One Fix (used in HW0/HW1)
Convert all features to binary, e.g., Color = { Red, Blue, Green }
From one N-valued feature to N binary-valued features: Color = Red?, Color = Blue?, Color = Green?
This encoding is also used in neural nets and SVMs
D-tree readability is probably reduced, but not necessarily
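A hedged sketch of this fix using the slide's Color example, expanding one N-valued feature into N binary "is it this value?" features:

```python
def to_binary_features(value, domain=("Red", "Blue", "Green")):
    """Map one Color value to N binary features: Color=Red?, Color=Blue?, Color=Green?"""
    return {f"Color={v}?": (value == v) for v in domain}

print(to_binary_features("Blue"))
# {'Color=Red?': False, 'Color=Blue?': True, 'Color=Green?': False}
```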
Considering the Cost of Measuring a Feature
We want trees with high accuracy whose tests are also inexpensive to compute (e.g., take a temperature vs. do a CAT scan)
Common heuristic: score features by InformationGain(F)² / Cost(F)
Used in medical domains as well as robot-sensing tasks
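A tiny sketch of the heuristic above; the gain and cost numbers are illustrative, not from the slide:

```python
def cost_adjusted_score(info_gain, cost):
    """Common heuristic: InformationGain(F)^2 / Cost(F)."""
    return info_gain ** 2 / cost

candidates = {"temperature": (0.30, 1.0),    # (info gain, measurement cost)
              "CAT scan":    (0.45, 50.0)}
best = max(candidates, key=lambda f: cost_adjusted_score(*candidates[f]))
print(best)  # 'temperature' -- the cheap test wins despite its lower raw gain
```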
What about Missing Feature Values?
Quinlan proposed and evaluated some ideas
It might be best to use a Bayes net (covered later) to infer the most likely values of the missing features, given the class and the known feature values
We'll return to d-trees after a digression into
- train/tune/test sets
- k-nearest neighbors
Still to cover on d-trees:
- overfitting reduction
- ensembles (training a set of d-trees)