1
Are we still talking about diversity in classifier ensembles? Ludmila I Kuncheva School of Computer Science Bangor University, UK
2
Are we still talking about diversity in classifier ensembles? Completely irrelevant to your Workshop...
3
Let’s talk instead about: multi-view and classifier ensembles
4
A classifier ensemble: feature values (the object description) → classifiers → “combiner” → class label
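The “combiner” in such a diagram can be as simple as a plurality vote over the base classifiers’ output labels. A minimal sketch, with an illustrative function name and toy labels that are not from the talk:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most frequent label among the base classifiers' outputs.

    `predictions` holds one label per base classifier; names and labels
    here are illustrative, not from the presentation.
    """
    return Counter(predictions).most_common(1)[0][0]

# Three classifiers look at the same object description and disagree:
print(majority_vote(["cat", "dog", "cat"]))  # -> cat
```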
5
Classifier ensemble? A neural network also maps feature values (object description) to a class label — where is the classifier, where is the combiner?
6
Classifier ensemble? Feature values (object description) → classifiers → a fancy combiner → class label
7
Classifier? Read the other way: the classifiers act as a fancy feature extractor for the “combiner”, mapping feature values (object description) to a class label
8
Why classifier ensembles then?
a. Because we like to complicate entities beyond necessity (anti-Occam’s razor).
b. Because we are lazy and stupid and can’t be bothered to design and train one single sophisticated classifier.
c. Because democracy is so important to our society, it must be important to classification.
9
Many names for the same idea, from the fanciest to the oldest:
- combination of multiple classifiers [Lam95, Woods97, Xu92, Kittler98]
- classifier fusion [Cho95, Gader96, Grabisch92, Keller94, Bloch96]
- mixture of experts [Jacobs91, Jacobs95, Jordan95, Nowlan91]
- committees of neural networks [Bishop95, Drucker94]
- consensus aggregation [Benediktsson92, Ng92, Benediktsson97]
- voting pool of classifiers [Battiti94]
- dynamic classifier selection [Woods97]
- composite classifier systems [Dasarathy78]
- classifier ensembles [Drucker94, Filippi94, Sharkey99]
- bagging, boosting, arcing, wagging [Sharkey99]
- modular systems [Sharkey99]
- collective recognition [Rastrigin81, Barabash83]
- stacked generalization [Wolpert92]
- divide-and-conquer classifiers [Chiang94]
- pandemonium system of reflective agents [Smieja96]
- change-glasses approach to classifier selection [KunchevaPRL93]
- etc.
10
The same list of names again, annotated: many of them are now out of fashion or subsumed by others.
11
Congratulations! The Netflix Prize sought to substantially improve the accuracy of predictions about how much someone is going to enjoy a movie based on their movie preferences. On September 21, 2009 we awarded the $1M Grand Prize to team “BellKor’s Pragmatic Chaos”. Read about their algorithm, check out team scores on the Leaderboard, and join the discussions on the Forum. We applaud all the contributors to this quest, which improves our ability to connect people to the movies they love. The winning solution: a classifier ensemble.
12
Cited 7194 times by 28 July 2013 (Google Scholar). Again: feature values (object description) → classifiers → combiner → class label — a classifier ensemble.
13
Saso Dzeroski, David Hand:
S. Dzeroski and B. Zenko (2004) Is combining classifiers better than selecting the best one? Machine Learning, 54, 255-273.
D. J. Hand (2006) Classifier technology and the illusion of progress, Statistical Science, 21(1), 1-14.
Classifier combination? Hmmmm… We are kidding ourselves; there is no real progress in spite of ensemble methods. Chances are that the single best classifier will be better than the ensemble.
14
Quo Vadis? "combining classifiers" OR "classifier combination" OR "classifier ensembles" OR "ensemble of classifiers" OR "combining multiple classifiers" OR "committee of classifiers" OR "classifier committee" OR "committees of neural networks" OR "consensus aggregation" OR "mixture of experts" OR "bagging predictors" OR adaboost OR (( "random subspace" OR "random forest" OR "rotation forest" OR boosting) AND "machine learning")
15
Gartner’s Hype Cycle: a typical evolution pattern of a new technology. Where are we?...
16
Top cited paper is from… (an application paper)
(6) IEEE TPAMI = IEEE Transactions on Pattern Analysis and Machine Intelligence
IEEE TSMC = IEEE Transactions on Systems, Man and Cybernetics
JASA = Journal of the American Statistical Association
IJCV = International Journal of Computer Vision
JTB = Journal of Theoretical Biology
(2) PPL = Protein and Peptide Letters
JAE = Journal of Animal Ecology
PR = Pattern Recognition
(4) ML = Machine Learning
NN = Neural Networks
CC = Cerebral Cortex
18
International Workshop on Multiple Classifier Systems 2000 – 2013 - continuing
19
Levels of questions (Data set → Features → Classifier 1, Classifier 2, … Classifier L → Combiner):
A. Combination level: selection or fusion? voting or another combination method? trainable or non-trainable combiner?
B. Classifier level: same or different classifiers? decision trees, neural networks or other? how many?
C. Feature level: all features or subsets of features? random or selected subsets?
D. Data level: independent/dependent bootstrap samples? selected data sets?
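The four levels can be seen together in one toy pipeline: the data level supplies bootstrap samples, the classifier level trains deliberately weak one-feature threshold “stumps”, and the combination level applies a non-trainable majority vote (fusion). Everything below — the data, the stump learner, the function names — is an illustrative sketch, not code from the talk:

```python
import random

def bootstrap(data, rng):
    # Data level: a bootstrap replicate (sampling with replacement)
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    # Classifier level: a weak one-feature threshold classifier
    xs = sorted(x for x, _ in sample)
    thr = xs[len(xs) // 2]                       # split at the median
    left = [y for x, y in sample if x <= thr] or [0]
    right = [y for x, y in sample if x > thr] or [1]
    left_lab = max(set(left), key=left.count)    # majority label per side
    right_lab = max(set(right), key=right.count)
    return lambda x: left_lab if x <= thr else right_lab

def ensemble_predict(stumps, x):
    # Combination level: non-trainable majority vote (fusion, not selection)
    votes = [s(x) for s in stumps]
    return max(set(votes), key=votes.count)

# Feature level is trivial here: one feature x; label 0 below 10, 1 above
rng = random.Random(0)
data = [(i, 0) for i in range(10)] + [(i, 1) for i in range(10, 20)]
stumps = [train_stump(bootstrap(data, rng)) for _ in range(11)]
print(ensemble_predict(stumps, 3), ensemble_predict(stumps, 17))
```

Individually the stumps are weak and vary with their bootstrap sample, but the vote of eleven of them recovers the underlying threshold reliably.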
20
Figure: 50 diverse linear classifiers vs 50 non-diverse linear classifiers.
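Diversity, the title subject of the talk, can be quantified. The pairwise disagreement measure below is one of the classic diversity measures; the function names and the toy label vectors are illustrative:

```python
def disagreement(preds_a, preds_b):
    """Pairwise disagreement: the fraction of objects on which two
    classifiers output different labels (a classic diversity measure)."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def average_pairwise_disagreement(all_preds):
    # Ensemble-level diversity: mean disagreement over all classifier pairs
    L = len(all_preds)
    pairs = [(i, j) for i in range(L) for j in range(i + 1, L)]
    return sum(disagreement(all_preds[i], all_preds[j])
               for i, j in pairs) / len(pairs)

identical = [[0, 1, 1, 0]] * 3                       # a non-diverse ensemble
mixed = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]   # a more diverse one
print(average_pairwise_disagreement(identical))  # -> 0.0
print(average_pairwise_disagreement(mixed))      # -> 0.666...
```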
21
Number of classifiers L vs strength of classifiers:
- L = 1: the perfect classifier.
- 3-8 classifiers: heterogeneous, trained combiner (stacked generalisation).
- 30-50 classifiers: same or different models? trained or non-trained combiner? selection or fusion? How about here? IS IT WORTH IT?
- 100+ classifiers: same model, non-trained combiner (bagging, boosting, etc.); must engineer diversity…
Small ensembles of weak classifiers - INSUFFICIENCY. Large ensemble of nearly identical classifiers - REDUNDANCY.
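A trained combiner for the small-ensemble regime can be sketched as follows: weight each base classifier by its accuracy on a validation set and take a weighted vote. This is a deliberately simplified stand-in for stacked generalisation (where a full meta-classifier is trained on the base outputs); all names and numbers are illustrative:

```python
def train_weighted_combiner(base_preds, true_labels):
    """Trained-combiner sketch: weight each base classifier by its
    validation accuracy, then combine labels by weighted vote.
    A simplified stand-in for stacked generalisation."""
    n = len(true_labels)
    weights = [sum(p == t for p, t in zip(preds, true_labels)) / n
               for preds in base_preds]

    def combine(labels):
        scores = {}
        for w, lab in zip(weights, labels):
            scores[lab] = scores.get(lab, 0.0) + w
        return max(scores, key=scores.get)
    return combine

# Hypothetical validation outputs of three base classifiers:
base_preds = [[0, 1, 1, 0],   # always right        -> weight 1.0
              [0, 0, 0, 0],   # right half the time -> weight 0.5
              [1, 0, 1, 1]]   # right once          -> weight 0.25
truth = [0, 1, 1, 0]
combine = train_weighted_combiner(base_preds, truth)
print(combine([1, 0, 0]))  # -> 1: the strong classifier outvotes the two weak ones
```

Note how the trained weights let one accurate classifier override a plain 2-vs-1 majority, which a non-trained combiner could not do.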
22
Number of classifiers L vs strength of classifiers, summarised: small ensembles of weak classifiers suffer INSUFFICIENCY; large ensembles of nearly identical classifiers suffer REDUNDANCY; in between, diversity must be engineered. IS IT WORTH IT?
23
A classifier ensemble, one view: feature values (object description) → classifiers → “combiner” → class label
24
A classifier ensemble, multiple views: several sets of feature values (one object description per view) → classifiers → “combiner” → class label
25
1998
26
“Distinct” is what you call “late fusion”; “shared” is what you call “early fusion”.
27
EXPRESSION OF EMOTION - MODALITIES
- Behavioural: facial expression, posture, speech, gesture, interaction with the computer (pressure on mouse, drag-click speed, eye tracking, dialogue with tutor)
- Physiological:
  - peripheral nervous system: galvanic skin response, blood pressure, skin temperature, respiration, EMG, pulse rate, pulse variation
  - central nervous system: EEG, fMRI, fNIRS
28
Data → Classification Strategies (modality 1, modality 2, modality 3):
(1) Concatenate the features from all modalities - “early fusion”
(2) Feature extraction and concatenation - “mid-fusion”
(3) Straight ensemble classification - “late fusion”
And many combinations thereof...
29
The trade-off between the strategies: with early fusion (concatenating the features from all modalities) we capture all dependencies but can’t handle the complexity; with late fusion (straight ensemble classification) we lose the dependencies but can handle the complexity; mid-fusion (feature extraction and concatenation) lies in between.
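Early and late fusion can be contrasted directly in code: the same toy nearest-mean classifier is applied once to concatenated views (early fusion) and once per view with a vote on the labels (late fusion). The two made-up modalities and all function names are illustrative sketches, not methods from the talk:

```python
def nearest_mean_classifier(train):
    """Toy per-view classifier: predict the label of the nearest class mean.
    `train` is a list of (feature_vector, label) pairs."""
    by_label = {}
    for vec, label in train:
        by_label.setdefault(label, []).append(vec)
    centroids = {lab: [sum(dim) / len(vecs) for dim in zip(*vecs)]
                 for lab, vecs in by_label.items()}

    def predict(vec):
        def sqdist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda lab: sqdist(vec, centroids[lab]))
    return predict

# Two hypothetical modalities describing the same four objects
mod1 = [([0.0, 0.1], "A"), ([0.2, 0.0], "A"), ([1.0, 1.1], "B"), ([0.9, 1.0], "B")]
mod2 = [([5.0], "A"), ([5.2], "A"), ([8.0], "B"), ([7.9], "B")]

# Early fusion: concatenate the two views, train one classifier
fused = [(v1 + v2, lab) for (v1, lab), (v2, _) in zip(mod1, mod2)]
early = nearest_mean_classifier(fused)

# Late fusion: one classifier per view, then a vote over the labels
c1, c2 = nearest_mean_classifier(mod1), nearest_mean_classifier(mod2)
def late(v1, v2):
    votes = [c1(v1), c2(v2)]
    return max(set(votes), key=votes.count)

print(early([0.1, 0.0, 5.1]))   # -> A
print(late([0.1, 0.0], [5.1]))  # -> A
```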
30
Ensemble feature selection:
- By the ensemble (RANKERS): decision tree ensembles; ensembles of different rankers; bootstrap ensembles of rankers.
- For the ensemble:
  - Random approach: uniform (random subspace) or non-uniform (GA).
  - Systematic approach: greedy, incremental or iterative.
(The diagram connects multiview early and mid-fusion, and multiview late fusion, to these feature selection options.)
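The “uniform (random subspace)” branch — each ensemble member trained on a uniformly sampled feature subset — is simple to sketch. The function name, parameters, and values here are illustrative assumptions:

```python
import random

def random_subspaces(n_features, n_classifiers, subspace_size, seed=0):
    """Sketch of the uniform random subspace method: each ensemble member
    gets a feature subset sampled uniformly without replacement."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subspace_size))
            for _ in range(n_classifiers)]

# Five base classifiers, each trained on 4 of 10 features:
subsets = random_subspaces(n_features=10, n_classifiers=5, subspace_size=4)
print(subsets)
```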
31
Multiview early and mid-fusion - feature selection: uniform (random subspace), non-uniform (GA), greedy, incremental or iterative.
32
This is what I think:
1. Deciding which approach to take is rather art than science.
2. This choice is, crucially, CONTEXT-SPECIFIC.
33
Where does diversity come into this? Hmm... Nowhere...