Novel representations and methods in text classification Manuel Montes, Hugo Jair Escalante Instituto Nacional de Astrofísica, Óptica y Electrónica, México. http://ccc.inaoep.mx/~mmontesg/ http://ccc.inaoep.mx/~hugojair/ {mmontesg, hugojair}@inaoep.mx 7th Russian Summer School in Information Retrieval Kazan, Russia, September 2013
Overview of the course
Day 1: Introduction to text classification
Day 2: Bag of concepts representations
Day 3: Representations that incorporate sequential and syntactic information
Day 4: Methods for non-conventional classification
Day 5: Automated construction of classification models
AUTOMATED CONSTRUCTION OF CLASSIFICATION MODELS: FULL MODEL SELECTION
Novel representations and methods in text classification
Outline
- Pattern classification
- Model selection in a broad sense: FMS
- Related works
- PSO for full model selection
- Experimental results and applications
- Conclusions and future work directions
Pattern classification
[diagram: training data feeds a learning machine; the trained machine answers queries, e.g., 1 (is a car) vs. -1 (is not a car)]
Applications
Natural language processing, computer vision, robotics, information technology, medicine, science, entertainment, …
Example: galaxy classification from images
[diagram: the design cycle for data (instances × features): preprocessing → feature selection → classification method → training model → model evaluation (error)]
Some issues with the cycle of design
How to preprocess the data? What method to use?
[diagram: data → preprocessing → feature selection → classification method → training model → model evaluation]
Some issues with the cycle of design
What feature selection method to use? How many features are enough? How to set the hyperparameters of the FS method?
Some issues with the cycle of design
What learning machine to use? How to set the hyperparameters of the chosen method?
Some issues with the cycle of design
What combination of methods works best? How to evaluate the model's performance? How to avoid overfitting?
Some issues with the cycle of design
The above issues are usually settled manually, relying on:
- domain experts' knowledge
- machine learning specialists' knowledge
- trial and error
The design and development of a pattern classification system thus relies on the knowledge and biases of humans, which can be risky, expensive and time-consuming. Automated solutions are available, but only for particular processes (e.g., either feature selection or classifier selection, but not both). Is it possible to automate the whole process?
Full model selection
[diagram: data goes into a full model selection module, which outputs a complete classification model]
H. J. Escalante. Towards a Particle Swarm Model Selection Algorithm. Multi-level Inference Workshop and Model Selection Game, NIPS, Whistler, BC, Canada, 2006.
H. J. Escalante, E. Sucar, M. Montes. Particle Swarm Model Selection. Journal of Machine Learning Research, 10(Feb):405-440, 2009.
Full model selection
Given a set of methods for data preprocessing, feature selection and classification, select the combination of methods (together with their hyperparameters) that minimizes an estimate of classification error.
H. J. Escalante, E. Sucar, M. Montes. Particle Swarm Model Selection. Journal of Machine Learning Research, 10(Feb):405-440, 2009.
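To make the definition concrete, here is a minimal sketch of full model selection as exhaustive search over a small, invented space of scikit-learn pipelines. It is an illustration only: PSMS itself searches the CLOP toolbox with particle swarm optimization, not this grid.

```python
from itertools import product

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# A tiny, made-up search space: 2 preprocessors x 3 selectors x 3 classifiers.
preprocessors = [StandardScaler(), MinMaxScaler()]
selectors = [SelectKBest(f_classif, k=k) for k in (5, 10, 20)]
classifiers = [GaussianNB(), SVC(kernel="linear"), SVC(kernel="rbf", gamma=0.01)]

best_score, best_model = -1.0, None
for prep, sel, clf in product(preprocessors, selectors, classifiers):
    model = Pipeline([("prep", prep), ("fs", sel), ("clf", clf)])
    score = cross_val_score(model, X, y, cv=5).mean()  # CV estimate of performance
    if score > best_score:
        best_score, best_model = score, model

print(f"best CV accuracy: {best_score:.3f}")
print(best_model)
```

Even this toy grid makes the cost of the problem visible: 18 full models, each trained and evaluated 5 times; PSMS replaces the exhaustive loop with a guided stochastic search.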
Full model selection
Full model: a model composed of methods for data preprocessing, feature selection and classification. Examples, in CLOP's chain syntax:

chain {
  1: normalize center=0
  2: relief f_max=Inf w_min=-Inf k_num=4
  3: neural units=10 shrinkage=1e-14 balance=0 maxiter=100
}

chain {
  1: standardize center=1
  2: svcrfe svc kernel linear coef0=0 degree=1 gamma=0 shrinkage=0.001 f_max=Inf
  3: pc_extract f_max=2000
  4: svm kernel linear C=Inf ridge=1e-13 balanced_ridge=0 nu=0 alpha_cutoff=-1 b0=0 nob=0
}
Full model selection
Pros:
- The job of the data analyst is considerably reduced
- Neither knowledge of the application domain nor of machine learning is required
- Different methods for preprocessing, feature selection and classification are considered
- It can be used in any classification problem
Cons:
- It is a combined real-valued and combinatorial optimization problem
- Computationally expensive
- Risk of overfitting
OVERVIEW OF RELATED WORKS
Novel representations and methods in text classification
Model selection via heuristic optimization
A single model is considered and its hyperparameters are optimized via heuristic optimization: swarm optimization, genetic algorithms, pattern search, genetic programming, …
GEMS
GEMS (Gene Expression Model Selection) is a system for the automated development and evaluation of high-quality cancer diagnostic models and for biomarker discovery from microarray gene expression data.
A. Statnikov, I. Tsamardinos, Y. Dosbayev, C.F. Aliferis. GEMS: A System for Automated Cancer Diagnosis and Biomarker Discovery from Microarray Gene Expression Data. International Journal of Medical Informatics, 74(7-8):491-503, 2005.
N. Fananapazir, A. Statnikov, C.F. Aliferis. The Fast-AIMS Clinical Mass Spectrometry Analysis System. Advances in Bioinformatics, 2009, Article ID 598241.
GEMS
The user specifies the models and methods to be considered; GEMS then explores all combinations of methods, using grid search to optimize their hyperparameters.
Model type selection for regression
Genetic algorithms are used to select the model type (learning method, feature selection, preprocessing) and to optimize parameters for regression problems.
D. Gorissen, T. Dhaene, F. de Turck. Evolutionary Model Type Selection for Global Surrogate Modeling. Journal of Machine Learning Research, 10(Jul):2039-2078, 2009.
http://www.sumo.intec.ugent.be/
Meta-learning: learning to learn
Evaluates and compares the application of learning algorithms on (many) previous tasks/domains in order to suggest learning algorithms (combinations, rankings) for new tasks. It focuses on the relation between tasks/domains and learning algorithms, accumulating experience on the performance of multiple applications of learning methods.
Brazdil P., Giraud-Carrier C., Soares C., Vilalta R. Metalearning: Applications to Data Mining. Springer Verlag. ISBN 978-3-540-73262-4, 2008.
Brazdil P., Vilalta R., Giraud-Carrier C., Soares C. Metalearning. Encyclopedia of Machine Learning. Springer, 2010.
Google Prediction API
“Machine learning as a service in the cloud”: upload your data, train a model and perform queries. Nothing is for free!
https://developers.google.com/prediction/
Google Prediction API
Your data becomes the property of Google!
IBM's SPSS Modeler
$$$$$
Who cares?
PSMS: OUR APPROACH TO FULL MODEL SELECTION
Novel representations and methods in text classification
PSMS: our approach to full model selection
Particle swarm model selection: use particle swarm optimization (PSO) to explore the search space of full models in a particular machine-learning toolbox.
Example particles (candidate full models):
- normalize + RBF SVM (γ = 0.01)
- PCA + neural net (10 hidden units)
- Relief (feature selection) + naïve Bayes
- normalize + polynomial SVM (d = 2)
- neural net (3 hidden units)
- s2n (feature selection) + kernel ridge
H. J. Escalante, E. Sucar, M. Montes. Particle Swarm Model Selection. Journal of Machine Learning Research, 10(Feb):405-440, 2009.
H. J. Escalante. Towards a Particle Swarm Model Selection Algorithm. Multi-level Inference Workshop and Model Selection Game, NIPS, Whistler, BC, Canada, 2006.
Particle swarm optimization
A population-based search heuristic inspired by the behavior of biological communities that exhibit both local and social behaviors.
A. P. Engelbrecht. Fundamentals of Computational Swarm Intelligence. Wiley, 2006.
Particle swarm optimization
Each individual (particle) i has:
- a position in the search space, x_i^t, which represents a solution to the problem at hand;
- a velocity vector, v_i^t, which determines how the particle explores the search space.
After random initialization, particles update their positions according to the canonical update rule:
v_i^{t+1} = w v_i^t + c1 r1 (p_i - x_i^t) + c2 r2 (p_g - x_i^t)
x_i^{t+1} = x_i^t + v_i^{t+1}
where p_i and p_g are the personal and global best positions, r1 and r2 are random numbers in [0, 1], w is the inertia weight, and c1, c2 are acceleration constants.
A. P. Engelbrecht. Fundamentals of Computational Swarm Intelligence. Wiley, 2006.
Particle swarm optimization
1. Randomly initialize a population of particles (i.e., the swarm).
2. Repeat the following iterative process until the stop criterion is met:
   a) evaluate the fitness of each particle;
   b) find the personal bests (p_i) and the global best (p_g);
   c) update the particles;
   d) update the best solution found (if needed).
3. Return the best particle (solution).
[figure: fitness of the swarm across iterations, from initialization (t = 1, 2, …) to t = max_it]
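A minimal sketch of the canonical PSO loop above, for a generic real-valued objective. The parameter values (w = 0.7, c1 = c2 = 1.5, the [-1, 1] initialization) are illustrative defaults, not PSMS's settings; PSMS layers the full-model encoding and fitness on top of a loop like this.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `fitness` over R^dim with canonical (inertia-weight) PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # positions = candidate solutions
    v = np.zeros_like(x)                            # velocities
    p, p_fit = x.copy(), np.array([fitness(xi) for xi in x])  # personal bests
    g = p[p_fit.argmin()].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)  # velocity update
        x = x + v                                          # position update
        f = np.array([fitness(xi) for xi in x])
        better = f < p_fit
        p[better], p_fit[better] = x[better], f[better]
        g = p[p_fit.argmin()].copy()
    return g

# Example: minimize the sphere function; the optimum is the zero vector.
best = pso(lambda z: float((z ** 2).sum()), dim=5)
```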
PSMS: PSO for full model selection
The set of considered methods (not restricted to this set) comprises CLOP's preprocessing, feature selection and classification modules.
http://clopinet.com/CLOP
PSMS: PSO for full model selection
Solutions are codified as real-valued vectors specifying (see the sketch below):
- the choice of methods for preprocessing, feature selection and classification;
- the hyperparameters of the selected methods;
- whether preprocessing is applied before feature selection.
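A hypothetical illustration of such a codification: decoding a real-valued particle into a full-model description. The method lists, vector layout and value ranges below are invented for the example; the actual PSMS encoding is tied to CLOP's modules and hyperparameters.

```python
import numpy as np

# Invented method lists and vector layout, for illustration only.
PREPROCESSORS = ["none", "normalize", "standardize"]
SELECTORS = ["none", "relief", "s2n"]
CLASSIFIERS = ["naive_bayes", "neural_net", "svm"]

def decode(particle):
    """particle: real vector [prep, fs, clf, prep_first, n_features, log10_C]."""
    prep = PREPROCESSORS[int(abs(particle[0])) % len(PREPROCESSORS)]
    fs = SELECTORS[int(abs(particle[1])) % len(SELECTORS)]
    clf = CLASSIFIERS[int(abs(particle[2])) % len(CLASSIFIERS)]
    prep_before_fs = particle[3] > 0                   # order of the two stages
    n_features = max(1, int(abs(particle[4]) * 100))   # selector hyperparameter
    C = 10.0 ** float(np.clip(particle[5], -3, 3))     # classifier hyperparameter
    return dict(prep=prep, fs=fs, clf=clf, prep_before_fs=prep_before_fs,
                n_features=n_features, C=C)

print(decode(np.array([1.3, 2.7, 0.4, -0.2, 0.35, 1.1])))
```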
PSMS: PSO for full model selection
Fitness function: the k-fold cross-validation balanced error rate, or the k-fold cross-validation area under the ROC curve.
[figure: the training data is split into k folds; each fold serves once as the test set, and the k fold errors are averaged into the CV estimate]
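A sketch of the first fitness function, k-fold cross-validated balanced error rate, assuming scikit-learn-style models and labels in {0, 1}; BER is the average of the per-class error rates.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import StratifiedKFold

def cv_ber(model, X, y, k=5):
    """k-fold cross-validated balanced error rate for labels in {0, 1}."""
    fold_errors = []
    for tr, te in StratifiedKFold(n_splits=k).split(X, y):
        pred = clone(model).fit(X[tr], y[tr]).predict(X[te])
        fnr = np.mean(pred[y[te] == 1] != 1)  # error rate on the positive class
        fpr = np.mean(pred[y[te] == 0] != 0)  # error rate on the negative class
        fold_errors.append((fpr + fnr) / 2.0)
    return float(np.mean(fold_errors))        # the CV estimate used as fitness
```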
SOME EXPERIMENTAL RESULTS OF PSMS
Novel representations and methods in text classification
PSMS in the ALvsPK challenge
Five data sets for binary classification. Goal: to obtain the best classification model for each data set. Two tracks: prior knowledge (PK) and agnostic learning (AL).
http://www.agnostic.inf.ethz.ch/
PSMS in the ALvsPK challenge
Best configuration of PSMS, compared with other techniques in the ALvsPK challenge (balanced error rate per data set):

Entry              Description  Ada    Gina  Hiva   Nova  Sylva  Overall  Rank
Interim-all-prior  Best PK      17.0   2.33  27.1   4.71  0.59   10.35    1st
psmsx_jmlr_run_I   PSMS         16.86  2.41  28.01  5.27  0.62   10.63    2nd
Logitboost-trees   Best AL      16.6   3.53  30.1   4.69  0.78   11.15    8th

[figure: models selected with PSMS for the different data sets]
http://www.agnostic.inf.ethz.ch/
PSMS in the ALvsPK challenge
[figure: official ranking]
http://www.agnostic.inf.ethz.ch/
Some results in benchmark data
[figures: comparison of PSMS and pattern search]
PSMS: interactive demo
http://clopinet.com/CLOP
Isabelle Guyon, Amir Saffari, Hugo Jair Escalante, Gokhan Bakir, and Gavin Cawley. CLOP: a Matlab Learning Object Package. NIPS 2007 Demonstrations, Vancouver, British Columbia, Canada, 2007.
PSMS: APPLICATIONS AND EXTENSIONS
Novel representations and methods in text classification
Authorship verification
The task of deciding whether or not given text documents were written by a certain author (i.e., a binary classification problem). Applications include fraud detection, spam filtering, computer forensics and plagiarism detection.
Authorship verification
[diagram: sample documents from author “X” and sample documents from other authors go through feature extraction and model construction/testing; the resulting model for author “X” predicts whether an unseen document was or was not written by “X”]
PSMS for authorship verification
Documents are represented by their (binary) bag-of-words. We apply straight PSMS to select the classification model for each author, and report the average performance over several authors.
[figure: a binary bag-of-words vector over a vocabulary ranging from “Iberoamerican”, “accommodation”, “congress”, “pattern”, “recognition”, … to “Zorro”]
Hugo Jair Escalante, Manuel Montes, and Luis Villaseñor. Particle Swarm Model Selection for Authorship Verification. LNCS 5856, pp. 563-570, Springer, 2009.
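A small sketch of the binary bag-of-words representation, using scikit-learn's CountVectorizer with binary=True; the example documents are made up.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["Iberoamerican congress on pattern recognition",   # made-up documents
        "pattern recognition and text classification"]
vectorizer = CountVectorizer(binary=True)   # 0/1 word presence, not term counts
X = vectorizer.fit_transform(docs)          # documents x vocabulary, sparse
print(vectorizer.get_feature_names_out())
print(X.toarray())
```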
Experimental results
Two collections were considered for the experiments:

Collection  Training  Testing  Features  Authors  Language  Domain
MX-PO       281       72       8,970     5        Spanish   Poetry
CCAT        2,500     2,500    3,400     50       English   News

Evaluation measures: balanced error rate (BER) and the F1-measure. We compare PSMS to the results obtained with individual classification models.
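For reference, the standard definitions of the two measures, with TP, FP, TN, FN the confusion-matrix counts and P, R precision and recall:

```latex
\mathrm{BER} = \frac{1}{2}\left(\frac{FP}{FP + TN} + \frac{FN}{FN + TP}\right),
\qquad
F_1 = \frac{2PR}{P + R}, \quad
P = \frac{TP}{TP + FP}, \quad
R = \frac{TP}{TP + FN}
```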
Experimental results
Average (over the authors) of the balanced error rate obtained by each technique on the test sets:

Col. / Class.  zarbi  naïve  lboost  neural  svc    kridge  rf     lssvm  FMS/1  FMS/0  FMS/-1
MX-PO          34.64  30.24  29.08   28.59   30.81  31.90   48.01  33.52  26.18  23.68  26.88
CCAT           14.24  26.21  15.12   41.50   29.18  27.69   47.01  36.64  35.34  13.54  16.39
Experimental results
Full models selected for the MX-PO data set:

Poet                 Preprocessing     Feature selection  Classifier                       BER
Efrain Huerta        standardize(1)    -                  zarbi                            10.28%
Jaime Sabines        -                 -                  zarbi                            26.79%
Octavio Paz          normalize(0)      Zfilter(3070)      zarbi                            25.09%
Rosario Castellanos  normalize(0)      Zfilter(3400)      kridge (γ=0.44; s=0.407)         33.04%
Ruben Bonifaz        shift-scale(log)  -                  neural (u=3; s=1.81; i=15; b=1)  35.71%
Experimental results
Statistics on the selection of methods when using PSMS (FMS/1) on the CCAT collection (W = with, WO = without):

                        zarbi  naïve  neural  svc    kridge  lssvm  FS (W)  FS (WO)  Prep. (W)  Prep. (WO)
Frequency of selection  68%    10%    2%      8%     –       –      76%     24%      88%        12%
Balanced error rate     14.22  9.88   23.4    23.82  44.01   33.51  14.14   22.28    15.64      16.50
Experimental results
Per-author performance of the models selected with FMS/1 on the CCAT data set.
[figure] Best author: F1 = 96.91%, obtained with [normalize + standardize + shift-scale] + [Ftest (104)] + [naïve]. The most relevant features (words) for this author: china, chinese, beijing, official, diplomat.
Experimental results
Per-author performance of the models selected with FMS/1 on the CCAT data set.
[figure] Worst author: F1 = 14.81%, obtained with [normalize] + [] + [lssvm]. With FMS/0, which instead selected [normalize + standardize + shift-scale] + [Zfilter (234)] + [zarbi], we got F1 = 45.71%.
Lessons learned
PSMS selected very effective models for each author, and allowing the user to provide/fix a classifier proved beneficial. However, we were not able to outperform baseline methods on the multiclass task of authorship attribution.
ENSEMBLE PSMS
Novel representations and methods in text classification
Ensemble PSMS
Many models are evaluated during the search process of PSMS, although only a single one is finally selected.
[figure: fitness of the evaluated models across the search]
Ensemble PSMS
Idea: take advantage of the large number of models that are evaluated during the search to build ensemble classifiers.
Problem: how to select partial solutions from PSMS that are both accurate and diverse.
Motivation: the success of ensemble classifiers depends mainly on two key properties of the individual models: accuracy and diversity.
Ensemble PSMS
How to select potential models for building ensembles?
- BS: store the global best model of each iteration
- BI: store the best model of each iteration
- SE: combine the outputs of the final swarm
How to fuse the outputs of the selected models? Simple (un-weighted) voting, as sketched below.
[figure: fitness of the swarm across iterations, from initialization to I = max_it]
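A minimal sketch of the fusion step, simple un-weighted voting over the stored models, assuming each model exposes a scikit-learn-style predict and outputs labels in {-1, +1}.

```python
import numpy as np

def vote(models, X):
    """Simple un-weighted voting; assumes each model predicts labels in {-1, +1}."""
    preds = np.array([m.predict(X) for m in models])  # shape: (n_models, n_examples)
    totals = preds.sum(axis=0)
    return np.sign(totals)  # ties (sum == 0) come out as 0; break them as desired
```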
Experimental results
Data: 9 benchmark machine-learning data sets (binary classification) and 1 object-recognition data set (multiclass, 10 classes).

ID  Data set       Training  Testing  Features
1   Breast-cancer  200       77       9
2   Diabetes       468       300      8
3   Flare solar    666       400      9
4   German         700       300      20
5   Heart          170       100      13
6   Image          1300      1010     20
7   Splice         1000      2175     60
8   Thyroid        140       75       5
9   Titanic        150       2051     3
OR  SCEF           2378      3300     50

Hugo Jair Escalante, Manuel Montes and Enrique Sucar. Ensemble Particle Swarm Model Selection. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2010 / WCCI 2010), pp. 1814-1821, IEEE, 2010.
Experimental results
Evaluation: average area under the ROC curve (performance) and coincident failure diversity, CFD (ensemble diversity). For L ensemble members, with p_i the probability that exactly i members fail on a random example:
CFD = 0, if p_0 = 1
CFD = (1 / (1 - p_0)) · Σ_{i=1..L} ((L - i) / (L - 1)) · p_i, if p_0 < 1
W. Wang. Some fundamental issues in ensemble methods. Proc. of IJCNN 2007, pp. 2244-2251, IEEE, July 2007.
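A sketch of coincident failure diversity as written above, assuming a boolean matrix of per-example failures for the L ensemble members (the matrix layout is our choice for the example).

```python
import numpy as np

def cfd(failures):
    """Coincident failure diversity; `failures` is a boolean (L, n) array,
    True where ensemble member j errs on example i. Assumes L >= 2."""
    L, n = failures.shape
    counts = failures.sum(axis=0)                 # how many members fail per example
    p = np.bincount(counts, minlength=L + 1) / n  # p[i] = P(exactly i members fail)
    if p[0] == 1.0:
        return 0.0                                # no member ever fails
    i = np.arange(1, L + 1)
    return float(((L - i) / (L - 1) * p[1:]).sum() / (1.0 - p[0]))
```

CFD is 1 when failures never coincide (at most one member errs on any example) and 0 when all members fail together, which is why it complements raw accuracy when picking ensemble members.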
Experimental results: performance
Benchmark data sets: better performance is obtained by the ensemble methods.
Average performance (AUC) over 10 trials of PSMS and EPSMS on the benchmark data:

ID    PSMS        EPSMS-BS    EPSMS-SE    EPSMS-BI
1     72.03±2.24  73.40±0.78  74.05±0.91  74.35±0.49
2     82.11±1.29  82.60±1.52  74.07±13.7  83.42±0.46
3     68.81±4.31  69.38±4.53  70.13±7.48  72.16±1.42
4     73.92±1.23  73.84±1.53  74.70±0.72  74.77±0.69
5     85.55±5.48  87.40±2.01  87.07±0.75  88.36±0.88
6     97.21±3.15  98.85±1.45  95.27±3.04  99.58±0.33
7     97.26±0.55  98.02±0.64  96.99±1.21  98.84±0.26
8     96.00±4.75  98.18±0.94  97.29±1.54  99.22±0.45
9     73.24±1.16  73.50±0.95  75.37±1.05  74.40±0.91
Avg.  82.90±2.68  83.91±1.59  82.77±3.38  85.01±0.65

Hugo Jair Escalante, Manuel Montes and Enrique Sucar. Ensemble Particle Swarm Model Selection. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2010 / WCCI 2010), pp. 1814-1821, IEEE, 2010.
Experimental results: diversity of the ensemble
Coincident failure diversity results; EPSMS-SE models are the most diverse:

ID    EPSMS-BS       EPSMS-SE       EPSMS-BI
1     0.2055±0.1498  0.5422±0.0550  0.5017±0.1149
2     0.3547±0.1711  0.6241±0.0619  0.5081±0.0728
3     0.1295±0.1704  0.4208±0.1357  0.4012±0.1071
4     0.3019±0.1732  0.5159±0.0596  0.4296±0.0490
5     0.2733±0.1714  0.5993±0.0925  0.5647±0.0655
6     0.7801±0.0818  0.7555±0.0524  0.8427±0.0408
7     0.5427±0.3230  0.7807±0.0585  0.8050±0.0294
8     0.6933±0.1558  0.8173±0.0626  0.8514±0.0403
9     0.7473±0.0089  –              –
Avg.  0.4476±0.1562  0.6448±0.0603  0.6280±0.0588

Hugo Jair Escalante, Manuel Montes and Enrique Sucar. Ensemble Particle Swarm Model Selection. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2010 / WCCI 2010), pp. 1814-1821, IEEE, 2010.
Experimental results: region labeling
Per-concept improvement of the EPSMS variants over straight PSMS:

Measure  PSMS       EPSMS-BS   EPSMS-SE   EPSMS-BI
AUC      91.53±6.8  93.27±5.6  92.79±7.4  94.05±5.3
MCC      69.58%     76.59%     79.13%     81.49%
Lessons learned
Ensembles generated with EPSMS outperformed individual classifiers, including those selected with PSMS. The models evaluated by PSMS are both accurate and diverse, so the extension of PSMS for generating ensembles is promising.
Summary
Full model selection is a broad view of the model selection problem in supervised learning, and there is increasing interest in this type of method. The search space is huge, multiple minima may exist, and there is no guarantee of avoiding overfitting. Yet, we showed evidence that it is possible to automate the design cycle of pattern classification systems.
Summary
PSMS / EPSMS is an automatic tool for the selection of classification models for any classification task, and it has been successfully applied in several domains. Disclaimer: we do not expect PSMS / EPSMS to perform well in every application in which it is tested, although we recommend it as a first option when building a classifier.
Summary
For multiclass classification, straight one-vs-all (OVA) strategies did not work. Alternative methods that do not use the outputs of the binary classifiers are a good option to explore as future work.
Research opportunities
- Multi-swarm PSMS for building ensembles
- Multiclass PSMS/EPSMS: bi-level optimization (i.e., individual and multiclass performance); learning to fuse the outputs (e.g., using genetic programming)
- Meta-ensembles
Research opportunities
- Other search strategies for full model selection (e.g., genetic programming, GRASP, tabu search)
- Other toolboxes (e.g., Weka)
- Meta-learning + PSMS
- Combination of different full model selection techniques
Applications
Any classification problem where specific classifiers are required for each class.
References
Isabelle Guyon. A Practical Guide to Model Selection. In Jeremie Marie, editor, Machine Learning Summer School 2008, Springer Texts in Statistics, 2011. (slide from I. Guyon)
A. Statnikov, I. Tsamardinos, Y. Dosbayev, C.F. Aliferis. GEMS: A System for Automated Cancer Diagnosis and Biomarker Discovery from Microarray Gene Expression Data. International Journal of Medical Informatics, 74(7-8):491-503, 2005.
D. Gorissen, T. Dhaene, F. de Turck. Evolutionary Model Type Selection for Global Surrogate Modeling. Journal of Machine Learning Research, 10(Jul):2039-2078, 2009.
Brazdil P., Giraud-Carrier C., Soares C., Vilalta R. Metalearning: Applications to Data Mining. Springer Verlag. ISBN 978-3-540-73262-4, 2008.
H. J. Escalante. Towards a Particle Swarm Model Selection Algorithm. Multi-level Inference Workshop and Model Selection Game, NIPS, Whistler, BC, Canada, 2006.
Isabelle Guyon, Amir Saffari, Hugo Jair Escalante, Gokhan Bakir, and Gavin Cawley. CLOP: a Matlab Learning Object Package. NIPS 2007 Demonstrations, Vancouver, British Columbia, Canada, 2007.
H. J. Escalante, E. Sucar, M. Montes. Particle Swarm Model Selection. Journal of Machine Learning Research, 10(Feb):405-440, 2009.
Hugo Jair Escalante, Jesus A. Gonzalez, Manuel Montes-y-Gomez, Pilar Gómez, Carolina Reta, Carlos A. Reyes, Alejandro Rosales. Acute Leukemia Classification with Ensemble Particle Swarm Model Selection. Artificial Intelligence in Medicine, 55(3):163-175, 2012.
Hugo Jair Escalante, Manuel Montes, and Luis Villaseñor. Particle Swarm Model Selection for Authorship Verification. LNCS 5856, pp. 563-570, Springer, 2009.
Questions?
7th Russian Summer School in Information Retrieval, Kazan, Russia, September 2013
EPSMS for acute leukemia classification
Acute leukemia is a malignant disease that affects a considerable portion of the world population. There are different types and subtypes of acute leukemia, requiring different treatments.
EPSMS for acute leukemia classification
Different types/subtypes:
- Acute Lymphocytic Leukemia (ALL): L1, L2, L3
- Acute Myelogenous Leukemia (AML): M0, M1, M2, M3, M4, M5, M6, M7
Considered tasks:
- Binary: ALL vs AML; L1 vs L2; M2 vs {M3, M5}; M3 vs {M2, M5}; M5 vs {M2, M3}
- Multiclass: M2 vs M3 vs M5; L1 vs L2 vs M2 vs M3 vs M5
EPSMS for acute leukemia classification
Although advanced and precise methods exist to identify leukemia types, they are very expensive and unavailable in most hospitals of third-world countries. A cheaper alternative: morphological acute leukemia classification from bone marrow cell images.
EPSMS for acute leukemia classification
Morphological classification:
- image registration
- image segmentation (cell, nucleus, cytoplasm)
- feature extraction (morphological, statistical, texture)
- classifier construction
J. A. Gonzalez et al. Leukemia Identification from Bone Marrow Cells Images using a Machine Vision and Data Mining Strategy. Intelligent Data Analysis, 15(3):443-462, 2011.
EPSMS for acute leukemia classification
In previous work, either the same classification model was used for the different problems, or models were manually selected by trial and error.
Proposal: use EPSMS to automatically select specific classifiers for the different tasks.
Hugo Jair Escalante, et al. Acute Leukemia Classification with Ensemble Particle Swarm Model Selection. Artificial Intelligence in Medicine, 55(3):163-175, 2012.
EPSMS for acute leukemia classification
Experiments were performed with real data collected by a Mexican health institution (IMSS), using 10-fold CV.
EPSMS for acute leukemia classification
In general, the ensembles generated with EPSMS outperformed previous work, and the models selected with EPSMS were more effective than those selected with straight PSMS.
Hugo Jair Escalante, et al. Acute Leukemia Classification with Ensemble Particle Swarm Model Selection. Artificial Intelligence in Medicine, 55(3):163-175, 2012.
EPSMS for acute leukemia classification: binary tasks
[figure: results for the binary tasks]
EPSMS for acute leukemia classification: multiclass tasks

M2 vs M3 vs M5
Measure   Region  Ref.   PSMS   E1     E2     E3
Avg. AUC  C       78.66  92.37  93.94  –      93.82
Accuracy  C       66.13  83.37  81.32  79.76  78.79
Avg. AUC  N&C     92.80  92.36  93.94  93.92  93.28
Accuracy  N&C     84.87  81.84  81.87  82.34  79.34

L1 vs L2 vs M2 vs M3 vs M5
Measure   Region  Ref.   PSMS   E1     E2     E3
Avg. AUC  C       84.03  91.13  93.78  93.76  83.40
Accuracy  C       55.86  72.86  75.83  76.06  73.92
Avg. AUC  N&C     92.33  90.62  94.21  94.09  86.09
Accuracy  N&C     77.48  71.72  74.50  75.65  74.03

Hugo Jair Escalante, et al. Acute Leukemia Classification with Ensemble Particle Swarm Model Selection. Artificial Intelligence in Medicine, 55(3):163-175, 2012.
Models selected with EPSMS
[figures: the full models selected with EPSMS for the different tasks]
Hugo Jair Escalante, et al. Acute Leukemia Classification with Ensemble Particle Swarm Model Selection. Artificial Intelligence in Medicine, 55(3):163-175, 2012.
Lessons learned
Models selected with PSMS / EPSMS outperformed those manually selected after a careful study. Different settings of EPSMS outperformed both PSMS and the baseline, although we did not find a single strategy that was best for all of the tasks. The multiclass classification performance was acceptable, although there is still much work to do.