1 CS 540 - Fall 2015 (© Jude Shavlik), Lecture 7, Week 3
Today's Topics
- Ensembles
- Decision Forests (actually, Random Forests)
- Bagging and Boosting
- Decision Stumps

2 Ensembles (Bagging, Boosting, and all that)
Old view: learn one good model.
New view: learn a good set of models.
Either way, the models can be Naïve Bayes, k-NN, neural nets, d-trees, SVMs, etc.
Ensembles are probably the best example of the interplay between 'theory & practice' in machine learning.

3 Ensembles of Neural Networks (or any supervised learner)
[Diagram: INPUT feeds several Networks; their outputs feed a Combiner, which produces the OUTPUT.]
Ensembles often produce accuracy gains of whole percentage points!
Can combine 'classifiers' of various types, eg, decision trees, rule sets, neural networks, etc.

4 Three Explanations of Why Ensembles Help
- Statistical (sample effects)
- Computational (limited cycles for search)
- Representational (wrong hypothesis space)
From: Dietterich, T. G. (2002). Ensemble learning. In The Handbook of Brain Theory and Neural Networks, Second edition (M. A. Arbib, Ed.), Cambridge, MA: The MIT Press.
[Figure: the concept space considered, with a key marking the true concept, the learned models, and their search paths.]

5 Combining Multiple Models
Three ideas for combining predictions:
- Simple (unweighted) votes - the standard choice
- Weighted votes - eg, weight by tuning-set accuracy (see the sketch below)
- Learn a combining function - prone to overfitting? 'Stacked generalization' (Wolpert)
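To make the second idea concrete, here is a minimal sketch (not from the lecture) of weighted voting; the predictions and the tuning-set accuracies used as weights are made up for illustration, and unweighted voting is the special case where every weight is 1.

```python
import numpy as np

# Hypothetical setup: preds[m, e] is model m's predicted class for test example e,
# and tune_acc[m] is model m's accuracy on a tuning set (used as its vote weight).
preds = np.array([[1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 0, 1, 1]])
tune_acc = np.array([0.80, 0.70, 0.65])
n_classes = 2

weighted_votes = np.zeros((n_classes, preds.shape[1]))
for model_preds, w in zip(preds, tune_acc):
    for example, c in enumerate(model_preds):
        weighted_votes[c, example] += w      # each model adds its weight to the class it predicts

ensemble_pred = weighted_votes.argmax(axis=0)  # per example, the class with the most weighted votes
print(ensemble_pred)
```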

6 Random Forests (Breiman, Machine Learning 2001; related to Ho, 1995)
A variant of something called BAGGING ('multi-sets').
Let N = # of examples, F = # of features, and i = some number << F.
Algorithm: repeat k times
- Draw with replacement N examples, put in train set
- Build d-tree, but in each recursive call:
  - Choose (w/o replacement) i features
  - Choose the best of these i as the root of this (sub)tree
- Do NOT prune
In HW2, we'll give you 101 'bootstrapped' samples of the Thoracic Surgery Dataset. (A code sketch of this loop follows.)
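The loop above can be sketched in a few lines (my sketch, not the course's HW code); it leans on scikit-learn's DecisionTreeClassifier, whose max_features option picks a random subset of i features at each recursive split, and a synthetic dataset stands in for the Thoracic Surgery data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_random_forest(X, y, k=101, i=3, seed=0):
    """Sketch of the slide's loop: k bootstrap samples, one unpruned tree per sample,
    with i randomly chosen features considered at every split."""
    rng = np.random.default_rng(seed)
    N = len(X)
    forest = []
    for _ in range(k):
        boot = rng.integers(0, N, size=N)              # draw N examples with replacement
        tree = DecisionTreeClassifier(max_features=i)  # i random features per recursive call; no pruning
        tree.fit(X[boot], y[boot])
        forest.append(tree)
    return forest

# Tiny synthetic stand-in for the HW2 data (in general: N examples, F features, i << F).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
forest = train_random_forest(X, y)
```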

7 Using Random Forests
After training we have K decision trees. How do we use them on TEST examples?
Some variant of: if at least L of these K trees say 'true', then output 'true'.
How to choose L? Use a tune set to decide (sketched below).
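A minimal sketch, assuming we already have each tree's boolean vote on a set of tuning examples (the votes below are synthetic), of picking the threshold L that maximizes tune-set accuracy:

```python
import numpy as np

# Synthetic stand-in: tree_votes[t, e] is True iff tree t says 'true' on tune example e.
rng = np.random.default_rng(0)
K, n_tune = 101, 50
tune_labels = rng.integers(0, 2, size=n_tune).astype(bool)
tree_votes = tune_labels[None, :] ^ (rng.random((K, n_tune)) < 0.3)  # each tree is a noisy copy of the labels

votes_for_true = tree_votes.sum(axis=0)    # per example: how many of the K trees say 'true'

best_L, best_acc = 0, -1.0
for L in range(K + 1):                     # try every possible threshold
    acc = np.mean((votes_for_true >= L) == tune_labels)
    if acc > best_acc:
        best_L, best_acc = L, acc

print(f"chosen L = {best_L}, tune-set accuracy = {best_acc:.2f}")
# At test time: output 'true' iff at least best_L of the K trees say 'true'.
```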

8 More on Random Forests
Increasing i
- Increases correlation among individual trees (BAD)
- Also increases accuracy of individual trees (GOOD)
Can also use a tuning set to choose a good value for i (see the sketch below).
Overall, random forests
- Are very fast (eg, 50K examples, 10 features, 10 trees/min on a 1 GHz CPU back in 2004)
- Deal well with a large # of features
- Reduce overfitting substantially; NO NEED TO PRUNE!
- Work very well in practice
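That tuning step might look like the following sketch, which trains one forest per candidate i and keeps the value that scores best on a held-out tune set; it uses scikit-learn's RandomForestClassifier (its max_features parameter plays the role of i), and the data and candidate values are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = (X[:, 0] - X[:, 3] > 0).astype(int)
X_train, y_train = X[:200], y[:200]        # training set
X_tune,  y_tune  = X[200:], y[200:]        # tuning set, held out from training

best_i, best_acc = None, -1.0
for i in (1, 2, 3, 5, 8):                  # candidate values of i (here F = 10)
    forest = RandomForestClassifier(n_estimators=101, max_features=i, random_state=0)
    forest.fit(X_train, y_train)
    acc = forest.score(X_tune, y_tune)     # accuracy on the tune set
    if acc > best_acc:
        best_i, best_acc = i, acc

print(f"chosen i = {best_i}, tune-set accuracy = {best_acc:.2f}")
```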

9 A Relevant Early Paper on ENSEMBLES
Hansen & Salamon, PAMI:12, 1990
If (a) the combined predictors have errors that are independent of one another, and (b) the probability that any given model correctly predicts any given test-set example is > 50%, then the probability that a majority vote of the models is wrong drops toward zero as more models are combined.
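To see why, here is a small worked computation (mine, not from the slide): with independent errors, the chance that a majority of K models is wrong is a binomial tail probability, and it shrinks rapidly as K grows.

```python
from math import comb

def majority_error(K, p_correct):
    """P(a majority of K independent models is wrong) when each is right with prob p_correct."""
    p_wrong = 1.0 - p_correct
    # The majority is wrong exactly when more than half the models err (K odd, so no ties).
    return sum(comb(K, m) * p_wrong**m * p_correct**(K - m)
               for m in range(K // 2 + 1, K + 1))

for K in (1, 11, 101):
    print(K, round(majority_error(K, p_correct=0.6), 4))
# With 60%-accurate independent models: about 0.40 for 1 model, 0.25 for 11, 0.02 for 101.
```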

10 Some More Relevant Early Papers
Schapire, Machine Learning:5, 1990 ('Boosting')
If you have an algorithm that gets > 50% on any distribution of examples, you can create an algorithm that gets > (100% - ε), for any ε > 0.
- Needs an infinite (or at least very large) source of examples; later extensions (eg, AdaBoost) address this weakness.
Also see Wolpert, 'Stacked Generalization,' Neural Networks, 1992.

11 Some Methods for Producing ‘Uncorrelated’ Members of an Ensemble
- K times, randomly choose (with replacement) N examples from a training set of size N, and give each training set to a std ML algo - 'Bagging' by Breiman (Machine Learning, 1996). Want unstable algorithms (so the learned models vary).
- Reweight examples each cycle (if wrong, increase weight; else decrease weight) - 'AdaBoosting' by Freund & Schapire (1995, 1996). (A sketch of the reweighting step follows.)
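Here is a minimal sketch of that reweighting rule in the usual AdaBoost form from Freund & Schapire (the weak learner's predictions below are placeholders): wrongly classified examples have their weights multiplied up, correctly classified ones down, and the weights are renormalized each cycle.

```python
import numpy as np

def adaboost_reweight(weights, y_true, y_pred):
    """One AdaBoost cycle: raise the weights of wrongly classified examples, lower the rest.
    Weights sum to 1 on entry and on exit; assumes 0 < weighted error < 0.5."""
    wrong = (y_true != y_pred)
    eps = weights[wrong].sum()                    # weighted error of this cycle's model
    alpha = 0.5 * np.log((1 - eps) / eps)         # this model's vote weight in the final ensemble
    new_w = weights * np.exp(alpha * np.where(wrong, 1.0, -1.0))
    return new_w / new_w.sum(), alpha

# Toy illustration with made-up predictions from some weak learner (labels are +1/-1).
y_true = np.array([+1, +1, -1, -1, +1])
y_pred = np.array([+1, -1, -1, -1, -1])           # the 2nd and 5th examples are misclassified
w = np.full(5, 1 / 5)
w, alpha = adaboost_reweight(w, y_true, y_pred)
print(np.round(w, 3), round(alpha, 3))            # the misclassified examples now carry more weight
```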

12 Empirical Studies
(From Freund & Schapire; reprinted in Dietterich's AI Magazine paper.)
[Scatter plots: error rate of C4.5 (an ID3 successor) vs error rate of bagged (boosted) C4.5, and error rate of AdaBoost vs error rate of Bagging; each point is one data set.]
Boosting and Bagging helped almost always!
On average, Boosting slightly better?

13 Some More Methods for Producing “Uncorrelated” Members of an Ensemble
- Directly optimize accuracy + diversity: Opitz & Shavlik (1995; used genetic algorithms), Melville & Mooney (2004-5)
- Different number of hidden units in a neural network, different k in k-NN, tie-breaking scheme, example ordering, different ML algorithms, etc: various people
See papers from Rich Caruana's group for large-scale empirical studies of ensembles.

14 Boosting/Bagging/etc Wrapup
- An easy-to-use and usually highly effective technique; always consider it (Bagging, at least) when applying ML to practical problems
- Does reduce 'comprehensibility' of models; see work by Craven & Shavlik, though ('rule extraction')
- Increases runtime, but cycles are usually much cheaper than examples (and the work is easily parallelized)

15 Decision “Stumps” (formerly part of HW; try on your own!)
Holte (Machine Learning journal) compared:
- Decision trees with only one decision (decision stumps), vs
- Trees produced by C4.5 (with its pruning algorithm used)
Decision 'stumps' do remarkably well on UC Irvine data sets. Is the archive too easy? Some datasets seem to be.
Decision stumps are a 'quick and dirty' control for comparing against new algorithms, but ID3/C4.5 is easy to use and probably a better control. (A sketch of a one-rule stump appears below.)
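For concreteness, here is a minimal sketch (not Holte's actual 1R implementation) of a one-feature 'stump' for discrete features: for each feature, predict the majority class of each of its values, then keep the single feature with the best training accuracy.

```python
from collections import Counter, defaultdict

def learn_1r(examples, labels):
    """examples: list of dicts mapping feature name -> discrete value; returns the best one-feature rule."""
    best = None
    for feat in examples[0]:
        by_value = defaultdict(Counter)           # class counts for each value of this feature
        for ex, lab in zip(examples, labels):
            by_value[ex[feat]][lab] += 1
        rule = {v: counts.most_common(1)[0][0] for v, counts in by_value.items()}
        acc = sum(rule[ex[feat]] == lab for ex, lab in zip(examples, labels)) / len(labels)
        if best is None or acc > best[2]:
            best = (feat, rule, acc)
    return best   # (feature to split on, value -> predicted class, training accuracy)

# Hypothetical toy data: predict 'play' from two discrete features.
examples = [{"outlook": "sunny", "windy": "no"}, {"outlook": "rain", "windy": "yes"},
            {"outlook": "sunny", "windy": "yes"}, {"outlook": "overcast", "windy": "no"}]
labels = ["no", "no", "no", "yes"]
print(learn_1r(examples, labels))
```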

16 C4.5 Compared to 1R (‘Decision Stumps’)
Testset accuracy, C4.5 compared to 1R ('Decision Stumps'):

Dataset   C4.5     1R
BC        72.0%    68.7%
CH        99.2%    -
GL        63.2%    67.6%
G2        74.3%    53.8%
HD        73.6%    72.9%
HE        81.2%    76.3%
HO        83.6%    81.0%
HY        99.1%    97.2%
IR        93.8%    93.5%
LA        77.2%    71.5%
LY        77.5%    70.7%
MU        100.0%   98.4%
SE        97.7%    95.0%
SO        97.5%    -
VO        95.6%    95.2%
V1        89.4%    86.8%

See the Holte paper in Machine Learning for the dataset key (eg, HD = heart disease).

