
1 RESULTS OF THE WCCI 2006 PERFORMANCE PREDICTION CHALLENGE Isabelle Guyon Amir Reza Saffari Azar Alamdari Gideon Dror

2 Part I INTRODUCTION

3 Model selection Selecting models (neural net, decision tree, SVM, …). Selecting hyperparameters (number of hidden units, weight decay/ridge, kernel parameters, …). Selecting variables or features (space dimensionality reduction). Selecting patterns (data cleaning, data reduction, e.g. by clustering).

4 Performance prediction How good are you at predicting how good you are? Practically important in pilot studies. Good performance predictions render model selection trivial.

5 Why a challenge? Stimulate research and push the state-of-the-art. Move towards fair comparisons and give a voice to methods that work but may not be backed up by theory (yet). Find practical solutions to true problems. Have fun…

6 History USPS/NIST. Unipen (with Lambert Schomaker): 40 institutions share 5 million handwritten characters. KDD cup, TREC, CASP, CAMDA, ICDAR, etc. NIPS challenge on unlabeled data. Feature selection challenge (with Steve Gunn): success! ~75 entrants, thousands of entries. Pascal challenges. Performance prediction challenge. [timeline graphic: 1980–2005]

7 Challenge Date started: Friday September 30, 2005. Date ended: March 1, 2006. Duration: 21 weeks. Estimated number of entrants: 145. Number of development entries: 4228. Number of ranked participants: 28. Number of ranked submissions: 117.

8 Datasets

| Dataset | Domain         | Type          | Features | Training examples | Validation examples | Test examples |
| ADA     | Marketing      | Dense         | 48       | 4147              | 415                 | 41471         |
| GINA    | Digits         | Dense         | 970      | 3153              | 315                 | 31532         |
| HIVA    | Drug discovery | Dense         | 1617     | 3845              | 384                 | 38449         |
| NOVA    | Text classif.  | Sparse binary | 16969    | 1754              | 175                 | 17537         |
| SYLVA   | Ecology        | Dense         | 216      | 13086             | 1308                | 130858        |

http://www.modelselect.inf.ethz.ch/

9 BER distribution [figure: distribution of the test BER across submissions]

10 Results Overall winners for ranked entries:
Ave rank: Roman Lutz with LB tree mix cut adapted
Ave score: Gavin Cawley with Final #2
ADA: Marc Boullé with SNB(CMA)+10k F(2D) tv or SNB(CMA)+100k F(2D) tv
GINA: Kari Torkkola & Eugene Tuv with ACE+RLSC
HIVA: Gavin Cawley with Final #3 (corrected)
NOVA: Gavin Cawley with Final #1
SYLVA: Marc Boullé with SNB(CMA)+10k F(3D) tv
Best AUC: Radford Neal with Bayesian Neural Networks

11 Part II PROTOCOL and SCORING

12 Protocol Data split: training/validation/test. Data proportions: 10/1/100. Online feedback on validation data. Validation labels released one month before the end of the challenge. Final ranking on test data using the five last complete submissions of each entrant.
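As a worked example of the 10/1/100 proportions, using the numbers from the dataset table above: ADA has 4147 training, 415 validation, and 41471 test examples, i.e. a validation set about one tenth the size of the training set and a test set about ten times its size.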

13 Performance metrics
Balanced Error Rate (BER): average of the error rates of the positive class and the negative class.
Guess error: ΔBER = |testBER − guessedBER|
Area Under the ROC Curve (AUC).
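A minimal Matlab/Octave sketch of these two quantities (the function and variable names are illustrative assumptions, not the challenge's own code):

% Balanced Error Rate for labels in {-1,+1}: average of the per-class error rates.
function ber = balanced_error_rate(y, yhat)
  err_pos = mean(yhat(y == +1) ~= +1);   % error rate on the positive class
  err_neg = mean(yhat(y == -1) ~= -1);   % error rate on the negative class
  ber = 0.5 * (err_pos + err_neg);
end

% Guess error: absolute difference between the test BER and the guessed BER, e.g.
% deltaBER = abs(balanced_error_rate(ytest, yhat) - guessedBER);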

14 Optimistic guesses ADA GINA HIVA NOVA SYLVA

15 Scoring method
E = testBER + ΔBER × [1 − exp(−ΔBER/σ)]
ΔBER = |testBER − guessedBER|
[figure: challenge score E versus guessed BER and test BER]
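The same score written as a small Matlab/Octave function (a sketch for illustration; the value of σ used by the organizers, which may differ per dataset, is not reproduced here and is passed in as an argument):

% Challenge score: the test BER plus a penalty that grows with the guess error.
function E = challenge_score(testBER, guessedBER, sigma)
  deltaBER = abs(testBER - guessedBER);                  % guess error
  E = testBER + deltaBER * (1 - exp(-deltaBER / sigma));
end

% Example: challenge_score(0.20, 0.18, 0.01) penalizes a guess that is off by 0.02.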

16 [figure: ΔBER/σ versus test BER for ADA, GINA, HIVA, NOVA, SYLVA, with the annotation E ≈ testBER + ΔBER]

17 Score [figure: E = testBER + ΔBER × [1 − exp(−ΔBER/σ)] lies between the curves testBER and testBER + ΔBER]

18 Score (continued) [figure: per-dataset score plots for ADA, GINA, SYLVA, HIVA, NOVA]

19 Part III RESULT ANALYSIS

20 What did we expect? Learn about new competitive machine learning techniques. Identify competitive methods of performance prediction, model selection, and ensemble learning (theory put into practice). Drive research in the direction of refining such methods (ongoing benchmark).

21 Method comparison [figure: ΔBER versus test BER by method]

22 Danger of overfitting [figure: BER versus time (0–160 days) for ADA, GINA, HIVA, NOVA, SYLVA; full line: test BER, dashed line: validation BER]

23 How to estimate the BER? Statistical tests (Stats): compute the BER on training data and compare with a "null hypothesis", e.g. the results obtained with a random permutation of the labels. Cross-validation (CV): split the training data many times into training and validation sets; average the validation results. Guaranteed risk minimization (GRM): use theoretical performance bounds.
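A minimal k-fold cross-validation sketch of the CV estimate (train_fn, predict_fn, and the balanced_error_rate helper from slide 13 are assumed, illustrative names):

% Estimate the BER by k-fold cross-validation: train on k-1 folds,
% evaluate the BER on the held-out fold, and average over the k folds.
function cvBER = cv_ber(X, y, k, train_fn, predict_fn)
  n = size(X, 1);
  perm = randperm(n);                        % shuffle the examples once
  fold_ber = zeros(k, 1);
  for f = 1:k
    val = perm(f:k:n);                       % indices of the held-out fold
    trn = setdiff(perm, val);                % indices of the remaining folds
    model = train_fn(X(trn, :), y(trn));
    yhat  = predict_fn(model, X(val, :));
    fold_ber(f) = balanced_error_rate(y(val), yhat);
  end
  cvBER = mean(fold_ber);
end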

24 Stats / CV / GRM ???

25 Top ranking methods
Performance prediction:
– CV with many splits, 90% train / 10% validation
– Nested CV loops
Model selection:
– Use of a single model family
– Regularized risk / Bayesian priors
– Ensemble methods
– Nested CV loops, made computationally efficient with virtual leave-one-out (VLOO)
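To illustrate the inner loop of such a nested scheme, a hyperparameter can be chosen by minimizing the cross-validated BER; the sketch below reuses the assumed cv_ber helper from slide 23, and candidates / make_train_fn are hypothetical names:

% Pick the hyperparameter value with the lowest cross-validated BER.
% Nesting this selection inside an outer CV loop keeps the outer BER estimate honest.
function best = select_hyperparameter(X, y, candidates, k, make_train_fn, predict_fn)
  scores = zeros(numel(candidates), 1);
  for i = 1:numel(candidates)
    train_fn  = make_train_fn(candidates(i));    % learner configured with this value
    scores(i) = cv_ber(X, y, k, train_fn, predict_fn);
  end
  [~, idx] = min(scores);
  best = candidates(idx);
end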

26 Other methods
Use of training data only:
– Training BER.
– Statistical tests.
Bayesian evidence.
Performance bounds.
Bilevel optimization.

27 Part IV CONCLUSIONS AND FURTHER WORK

28 Open problems Bridge the gap between theory and practice… What are the best estimators of the variance of CV? What should k be in k-fold? Are other cross-validation methods better than k-fold (e.g. bootstrap, 5x2CV)? Are there better "hybrid" methods? What search strategies are best? More than 2 levels of inference?

29 Future work Game of model selection. JMLR special topic on model selection. IJCNN 2007 challenge!

30 Benchmarking model selection? Performance prediction: Participants just need to provide a guess of their test performance. If they can solve that problem, they can perform model selection efficiently. Easy and motivating. Selection of a model from a finite toolbox: In principle a more controlled benchmark, but less attractive to participants.

31 CLOP CLOP=Challenge Learning Object Package. Based on the Spider developed at the Max Planck Institute. Two basic abstractions: –Data object –Model object http://clopinet.com/isabelle/Projects/modelselect/MFAQ.html

32 CLOP tutorial At the Matlab prompt:
>> D = data(X, Y);
>> hyper = {'degree=3', 'shrinkage=0.1'};
>> model = kridge(hyper);
>> [resu, model] = train(model, D);
>> tresu = test(model, testD);
>> model = chain({standardize, kridge(hyper)});

33 Conclusions
Twice the volume of participation of the feature selection challenge.
Top methods as before (in a different order):
– Ensembles of trees
– Kernel methods (RLSC/LS-SVM, SVM)
– Bayesian neural networks
– Naïve Bayes
Danger of overfitting.
Triumph of cross-validation?

