Co-operative Training in Classifier Ensembles
Rozita Dara, PAMI Lab, University of Waterloo
IFusion 2004

Outline
- Introduction
- Sharing Training Resources
  - Sharing Training Patterns
  - Sharing Training Algorithms
  - Sharing Training Information
- Sharing Training Information: An Algorithm
- Experimental Study
- Discussion and Conclusions

Introduction
Multiple Classifier Systems provide:
- Improved performance
- Better reliability and generalization
Motivations for Multiple Classifier Systems include:
- Empirical observation
- The problem decomposes naturally when data come from various sensors
- Avoiding commitments to arbitrary initial conditions or parameters

Introduction (cntd...)
"Combining identical classifiers will not lead to improved performance."
- Importance of creating diverse classifiers
- How does the amount of "sharing" between classifiers affect performance?

Sharing Training Resources
A measure of the degree of co-operation between the classifiers:
- Sharing Training Patterns
- Sharing Training Algorithms
- Sharing Training Information

Sharing Training Patterns
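The following is a minimal sketch (not the paper's code) of one way to share training patterns: each classifier receives its own disjoint shard of the data plus a pool of patterns common to all members, and the size of that pool controls the degree of sharing. All function and parameter names here are illustrative.

```python
import numpy as np

def share_patterns(X, y, n_classifiers=5, share=0.3, seed=0):
    """Return one (X_i, y_i) training set per classifier with controlled overlap."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    shards = np.array_split(idx, n_classifiers)                        # disjoint parts, one per classifier
    common = rng.choice(idx, size=int(share * len(X)), replace=False)  # pool shared by every member
    return [(X[np.concatenate([s, common])], y[np.concatenate([s, common])])
            for s in shards]
```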

Sharing Training Algorithms

Sharing Training Information

Training
Training each component independently:
- Optimizing individual components may not lead to overall improvement
- Collinearity: high correlation between classifiers
- Components may be under-trained or over-trained

Training (cntd...)
Adaptive training is:
- Selective: reduces correlation between components
- Focused: re-training focuses on misclassified patterns
- Efficient: determines the duration of training

Adaptive Training: Main Loop
- Share training information between members of the ensemble
- Incremental learning
- Evaluation of training to determine the re-training set
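A hedged sketch of this main loop follows. The helpers (train_incrementally, evaluate_members, compose_training_set) are placeholder names for the steps detailed on the next slides, not the authors' code, and the convergence test is a simplification.

```python
def adaptive_training(classifiers, train_sets, max_rounds=20):
    for _ in range(max_rounds):
        # incremental learning: each member trains a little further on its current set
        for clf, (X_i, y_i) in zip(classifiers, train_sets):
            train_incrementally(clf, X_i, y_i)
        # evaluation of training: find what each member still gets wrong
        error_masks = evaluate_members(classifiers, train_sets)
        # share training information: re-training sets are rebuilt from these errors
        train_sets = [compose_training_set(X_i, y_i, mask)
                      for (X_i, y_i), mask in zip(train_sets, error_masks)]
        if all(mask.sum() == 0 for mask in error_masks):   # nothing left to correct
            break
    return classifiers
```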

Adaptive Training: Training
- Save a classifier if it performs well on the evaluation set
- Determine when to terminate training for each module
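One simple way to realize this per-module bookkeeping is an early-stopping record that checkpoints a module whenever its evaluation accuracy improves; the sketch below is illustrative, not the original implementation.

```python
import copy

class ModuleState:
    """Tracks the best snapshot of one classifier and decides when to stop training it."""
    def __init__(self, patience=3):
        self.best_acc = 0.0
        self.best_model = None
        self.stalled = 0
        self.patience = patience

    def update(self, model, eval_acc):
        if eval_acc > self.best_acc:              # save the classifier: it performs
            self.best_acc = eval_acc              # well on the evaluation set
            self.best_model = copy.deepcopy(model)
            self.stalled = 0
        else:
            self.stalled += 1
        return self.stalled >= self.patience      # True -> terminate training for this module
```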

Adaptive Training: Evaluation
- Train the aggregation modules
- Evaluate the training sets for each classifier
- Compose new training data
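A rough sketch of the evaluation step, assuming scikit-learn style fit/predict interfaces for both the members and the aggregation module; the names and interfaces are assumptions rather than the original code.

```python
import numpy as np

def evaluate_members(classifiers, train_sets, aggregator):
    # train the aggregation (fusion) module on stacked member outputs
    X_all = np.vstack([X_i for X_i, _ in train_sets])
    y_all = np.concatenate([y_i for _, y_i in train_sets])
    member_outputs = np.column_stack([clf.predict(X_all) for clf in classifiers])
    aggregator.fit(member_outputs, y_all)

    # evaluate each classifier's own training set to find its misclassified entries
    error_masks = [clf.predict(X_i) != y_i
                   for clf, (X_i, y_i) in zip(classifiers, train_sets)]
    return error_masks
```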

Adaptive Training: Data Selection
New training data for classifier i are composed by concatenating:
- Error_i: misclassified entries of the training data for classifier i
- Correct_i: a random choice of R·(P·δ_i) correctly classified entries of the training data for classifier i
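A sketch of this data-selection rule, following the slide's notation for R, P, and δ_i; the exact rounding used in the paper is not recoverable here, so plain rounding is assumed.

```python
import numpy as np

def compose_training_set(X_i, y_i, error_mask, R, P, delta_i, seed=0):
    rng = np.random.default_rng(seed)
    err_idx = np.flatnonzero(error_mask)                      # Error_i: misclassified entries
    ok_idx = np.flatnonzero(~error_mask)
    n_keep = min(len(ok_idx), int(round(R * (P * delta_i))))  # size of the Correct_i sample
    keep = rng.choice(ok_idx, size=n_keep, replace=False)     # Correct_i: random correct entries
    new_idx = np.concatenate([err_idx, keep])                 # concatenate the two parts
    return X_i[new_idx], y_i[new_idx]
```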

Results
- Five one-hidden-layer back-propagation (BP) classifiers
- Training used partially disjoint data sets
- No optimization was performed for the trained networks
- The same network parameters were maintained for all trained classifiers
- Three data sets, including 20 Class Gaussian and Satimages
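The experimental setup can be approximated as below; scikit-learn's MLPClassifier stands in for the original back-propagation networks, and the hidden-layer size and iteration count are illustrative values, not taken from the paper.

```python
from sklearn.neural_network import MLPClassifier

def build_ensemble(train_sets, hidden_units=20, max_iter=200):
    """Five one-hidden-layer networks, identical hyper-parameters, partially disjoint data."""
    ensemble = []
    for X_i, y_i in train_sets:                  # one partially disjoint set per member
        clf = MLPClassifier(hidden_layer_sizes=(hidden_units,),
                            max_iter=max_iter, random_state=0)
        ensemble.append(clf.fit(X_i, y_i))       # same parameters for every member
    return ensemble
```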

Results (cntd...)
Classification results on the 20 Class and Satimages data sets, without and with sharing, for each fusion method: Majority, Maximum, Average, Nash, Borda, Weighted Average, Bayesian, Choquet Integral, Best Classifier, and Oracle. Only a subset of the table values is legible in the transcript: Majority 0.90, Maximum 1.54, Average 0.93, Nash 4.44, Borda 1.03, Weighted Average 0.91, Bayesian 1.03, Choquet Integral 1.43, Best Classifier 1.03, and Oracle 3.74 and 0.23.

Conclusions
- The exchange of information during training allows for an informed fusion process
- Sharing enhances diversity amongst classifiers
- Algorithms that share training information can improve overall classification accuracy

Conclusions (cntd...)
