Feature selection methods Isabelle Guyon IPAM summer school on Mathematics in Brain Imaging. July 2008.

Feature Selection. Thousands to millions of low-level features: select the most relevant ones to build better, faster, and easier-to-understand learning machines. [Figure: data matrix X with m examples and n features, reduced to n' selected features.]

Applications. Bioinformatics, quality control, machine vision, customer knowledge, OCR, HWR, market analysis, text categorization, system diagnosis. [Figure: application domains arranged by number of variables/features versus number of examples.]

Nomenclature Univariate method: considers one variable (feature) at a time. Multivariate method: considers subsets of variables (features) together. Filter method: ranks features or feature subsets independently of the predictor (classifier). Wrapper method: uses a classifier to assess features or feature subsets.

Univariate Filter Methods

Univariate feature ranking. Assume normally distributed classes with equal variance σ², unknown and estimated from the data as σ²_within. Null hypothesis H0: μ+ = μ−. T statistic: if H0 is true, t = (μ+ − μ−) / (σ_within √(1/m+ + 1/m−)) follows a Student distribution with m+ + m− − 2 degrees of freedom. [Figure: the two class-conditional densities P(Xi | Y=−1) and P(Xi | Y=+1) along feature xi, with means μ− and μ+ and standard deviations σ− and σ+.]
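
A minimal sketch of this ranking criterion in Python/NumPy (the function name, the toy data, and the label convention y ∈ {−1, +1} are illustrative choices, not from the slides): it computes the pooled within-class variance and the resulting t-statistic for every feature, then ranks features by |t|.

import numpy as np

def t_statistic_ranking(X, y):
    """Two-sample t-statistic per feature, assuming equal class variances.

    X: (m, n) data matrix; y: (m,) labels in {-1, +1}.
    Returns the t-statistics and the feature indices sorted by |t| (best first).
    """
    Xp, Xn = X[y == 1], X[y == -1]
    mp, mn = len(Xp), len(Xn)
    mu_p, mu_n = Xp.mean(axis=0), Xn.mean(axis=0)
    # Pooled within-class variance (the sigma^2_within of the slide)
    var_within = (Xp.var(axis=0, ddof=1) * (mp - 1) +
                  Xn.var(axis=0, ddof=1) * (mn - 1)) / (mp + mn - 2)
    t = (mu_p - mu_n) / np.sqrt(var_within * (1.0 / mp + 1.0 / mn))
    return t, np.argsort(-np.abs(t))

# Toy usage: 20 features, only feature 0 carries class information
rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=100)
X = rng.normal(size=(100, 20))
X[:, 0] += y                      # shift feature 0 by the class label
t, order = t_statistic_ranking(X, y)
print(order[:5])                  # feature 0 should come out on top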

Statistical tests (chap. 2). H0: X and Y are independent. Relevance index → test statistic; p-value → false positive rate FPR = n_fp / n_irr. Multiple testing problem: use the Bonferroni correction, pval → n · pval. False discovery rate: FDR = n_fp / n_sc ≤ FPR · n / n_sc. Probe method: FPR ≈ n_sp / n_p. [Figure: null distribution of the relevance index, with threshold r0 and observed value r.]
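
The corrections above are easy to apply once per-feature p-values are available. The sketch below is hedged: scipy-based t-tests stand in for whatever relevance index is used, and the helper name is mine. It computes p-values, then applies the Bonferroni correction for the family-wise error rate and the Benjamini-Hochberg procedure for the FDR.

import numpy as np
from scipy import stats

def select_features_by_pvalue(X, y, alpha=0.05):
    """Univariate t-test p-values with Bonferroni (FWER) and Benjamini-Hochberg (FDR) control."""
    pvals = stats.ttest_ind(X[y == 1], X[y == -1], axis=0, equal_var=True).pvalue
    n = len(pvals)
    # Bonferroni: multiply each p-value by the number of tests
    bonferroni_keep = pvals * n <= alpha
    # Benjamini-Hochberg: find the largest k such that p_(k) <= alpha * k / n
    order = np.argsort(pvals)
    passed = pvals[order] <= alpha * np.arange(1, n + 1) / n
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    fdr_keep = np.zeros(n, dtype=bool)
    fdr_keep[order[:k]] = True
    return pvals, bonferroni_keep, fdr_keep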

Univariate Dependence. Independence: P(X, Y) = P(X) P(Y). Measure of dependence, the mutual information: MI(X, Y) = ∫ P(X,Y) log [ P(X,Y) / (P(X)P(Y)) ] dX dY = KL( P(X,Y) || P(X)P(Y) ), the Kullback-Leibler divergence between the joint and the product of the marginals.
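
For discrete variables the mutual information can be estimated directly from the empirical joint distribution, as in this short sketch (the function is illustrative; for continuous features, estimators such as scikit-learn's mutual_info_classif, which relies on a nearest-neighbor approximation, are commonly used instead).

import numpy as np

def mutual_information(x, y):
    """Empirical MI between two discrete variables, i.e. KL(P(X,Y) || P(X)P(Y)), in nats."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1.0)           # contingency table of counts
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)     # marginal P(X)
    py = joint.sum(axis=0, keepdims=True)     # marginal P(Y)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))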

Other criteria (chap. 3). The choice of a feature ranking method depends on the nature of: the variables and the target (binary, categorical, continuous); the problem (dependencies between variables, linear/non-linear relationships between variables and target); the available data (number of examples, number of variables, noise in the data); and the available tabulated statistics.

Multivariate Methods

Univariate selection may fail. Guyon-Elisseeff, JMLR 2004; Springer 2006.

Filters, Wrappers, and Embedded methods. Filter: all features → filter → feature subset → predictor. Wrapper: all features → multiple feature subsets, each assessed by the predictor → best subset → predictor. Embedded method: all features → feature subset selected within the training of the predictor itself.
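
To make the filter/wrapper contrast concrete, here is a hedged scikit-learn sketch (it assumes a reasonably recent scikit-learn; the synthetic dataset, classifier, and subset size are illustrative): the filter scores features with a univariate F-test independently of the predictor, while the wrapper searches feature subsets by cross-validating the predictor itself.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)

# Filter: univariate ranking, the predictor is only used afterwards
filter_pipe = make_pipeline(SelectKBest(f_classif, k=5),
                            LogisticRegression(max_iter=1000))

# Wrapper: the classifier's cross-validated score drives the subset search
wrapper_pipe = make_pipeline(
    SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                              n_features_to_select=5, direction='forward', cv=3),
    LogisticRegression(max_iter=1000))

for name, pipe in [('filter', filter_pipe), ('wrapper', wrapper_pipe)]:
    print(name, cross_val_score(pipe, X, y, cv=5).mean())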

Relief. For each example, find its nearest hit (nearest example of the same class) and nearest miss (nearest example of the other class), with distances D_hit and D_miss. The Relief criterion ranks features by the ratio ⟨D_miss⟩ / ⟨D_hit⟩: relevant features keep hits close while pushing misses far away. Kira and Rendell, 1992.
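
A compact sketch of the Relief idea (hedged: this is the additive-weight formulation of Kira and Rendell rather than the ratio shown on the slide, and the L1 distance and function name are my own choices): features whose values differ more at the nearest miss than at the nearest hit accumulate higher weights.

import numpy as np

def relief_scores(X, y):
    """Basic Relief weights for a binary classification problem (one pass over all examples)."""
    m, n = X.shape
    w = np.zeros(n)
    for k in range(m):
        d = np.abs(X - X[k]).sum(axis=1)             # L1 distances to example k
        d[k] = np.inf                                 # exclude the example itself
        d_hit = np.where(y == y[k], d, np.inf)        # candidates of the same class
        d_miss = np.where(y != y[k], d, np.inf)       # candidates of the other class
        hit, miss = np.argmin(d_hit), np.argmin(d_miss)
        w += np.abs(X[k] - X[miss]) - np.abs(X[k] - X[hit])
    return w / m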

Wrappers for feature selection. N features: 2^N possible feature subsets! Kohavi-John, 1997.

Search Strategies (chap. 4). Exhaustive search. Simulated annealing, genetic algorithms. Beam search: keep the k best paths at each step. Greedy search: forward selection or backward elimination. PTA(l,r): plus l, take away r – at each step, run SFS l times then SBS r times. Floating search (SFFS and SBFS): one step of SFS (resp. SBS), then SBS (resp. SFS) as long as we find better subsets than those of the same size obtained so far; at any time, if a better subset of the same size was already found, switch abruptly.

Forward Selection (wrapper). Start from the empty set and add features one at a time: n candidates at the first step, then n−1, n−2, …, down to 1. Also referred to as SFS: Sequential Forward Selection.
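
A hedged sketch of a forward-selection wrapper (the helper name and the cross-validated accuracy criterion are my choices; any predictor with scikit-learn's fit/predict interface can play the role of the black box): at each step it adds the single feature that most improves the cross-validated score.

import numpy as np
from sklearn.model_selection import cross_val_score

def forward_selection(X, y, estimator, n_select, cv=5):
    """Greedy SFS wrapper: grow the subset one feature at a time using CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select and remaining:
        scores = [(cross_val_score(estimator, X[:, selected + [j]], y, cv=cv).mean(), j)
                  for j in remaining]
        best_score, best_j = max(scores)              # feature giving the best CV score
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# e.g. forward_selection(X, y, LogisticRegression(max_iter=1000), n_select=5)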

Forward Selection (embedded). Same forward path, but guided search: we do not consider alternative paths. Typical examples: Gram-Schmidt orthogonalization and tree classifiers.

Backward Elimination (wrapper). Start from the full set of n features and remove them one at a time: n, then n−1, n−2, …, down to 1. Also referred to as SBS: Sequential Backward Selection.

Backward Elimination (embedded). Same backward path, but guided search: we do not consider alternative paths. Typical example: Recursive Feature Elimination, RFE-SVM.
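
RFE with a linear SVM is available off the shelf in scikit-learn; the sketch below (toy data and hyperparameters are illustrative) trains the SVM, drops the features with the smallest weight magnitudes, and repeats until the requested number of features remains.

from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=200, n_informative=10, random_state=0)

# Eliminate 10% of the remaining features per iteration, down to 10 features
rfe = RFE(SVC(kernel='linear', C=1.0), n_features_to_select=10, step=0.1)
rfe.fit(X, y)
print(rfe.support_.nonzero()[0])   # indices of the surviving features
print(rfe.ranking_[:20])           # 1 = kept; larger numbers were eliminated earlier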

Scaling Factors. Idea: transform a discrete space into a continuous one. Discrete indicators of feature presence: σ_i ∈ {0, 1}. Continuous scaling factors: σ_i ∈ ℝ, σ = [σ1, σ2, σ3, σ4]. Now we can do gradient descent!

Formalism (chap. 5). Many learning algorithms are cast as the minimization of a regularized functional: min_w Σ_k L(f(x_k; w), y_k) + Ω(w), where the first term is the empirical error and the regularizer Ω(w) provides capacity control. This formalism is the justification of RFE and many other embedded methods. Next few slides: André Elisseeff.

Embedded method. Embedded methods are a good inspiration for designing new feature selection techniques for your own algorithms: find a functional that represents your prior knowledge about what a good model is; add the scaling factors σ into the functional and make sure it is either differentiable or that you can perform a sensitivity analysis efficiently; optimize alternately with respect to σ and the model parameters; use early stopping (validation set) or your own stopping criterion to stop and select the subset of features. Embedded methods are therefore not too far from wrapper techniques and can be extended to multiclass problems, regression, etc.

The l1 SVM. A version of the SVM in which Ω(w) = ||w||² is replaced by the l1 norm Ω(w) = Σ_i |w_i|. It can be considered an embedded feature selection method: some weights are driven to zero (which tends to remove redundant features), in contrast with the regular SVM, where redundant features are kept. Bi et al., 2003; Zhu et al., 2003.
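
A quick way to see the sparsity effect (hedged sketch: scikit-learn's LinearSVC with an l1 penalty stands in for the l1 SVM of Bi et al. and Zhu et al.; the data and C value are illustrative): compare the number of non-zero weights under l2 and l1 regularization.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=100, n_informative=5, random_state=0)

l2_svm = LinearSVC(penalty='l2', dual=False, C=0.1, max_iter=5000).fit(X, y)
l1_svm = LinearSVC(penalty='l1', dual=False, C=0.1, max_iter=5000).fit(X, y)

print('non-zero weights with l2:', int(np.sum(np.abs(l2_svm.coef_) > 1e-6)))
print('non-zero weights with l1:', int(np.sum(np.abs(l1_svm.coef_) > 1e-6)))  # far fewer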

The l0 SVM. Replace the regularizer ||w||² by the l0 norm Σ_i 1[w_i ≠ 0], and further approximate it by Σ_i log(ε + |w_i|). This boils down to a multiplicative update algorithm: train a linear SVM on rescaled inputs, fold the resulting weights back into the scaling factors, and iterate (a sketch follows below). Weston et al., 2003.
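
The sketch below illustrates the multiplicative-update idea in the spirit of Weston et al.'s AROM methods (hedged: the use of |w|, the renormalization, and the fixed iteration count are my simplifications, not the published algorithm): useless features see their scaling factors decay toward zero across iterations.

import numpy as np
from sklearn.svm import LinearSVC

def multiplicative_update_selection(X, y, n_iter=20, C=1.0):
    """AROM-style multiplicative updates approximating an l0 penalty."""
    z = np.ones(X.shape[1])                           # per-feature scaling factors
    for _ in range(n_iter):
        svm = LinearSVC(C=C, dual=False, max_iter=5000).fit(X * z, y)
        z *= np.abs(svm.coef_.ravel())                # fold the new weights into z
        z /= z.max() + 1e-12                          # renormalize to avoid underflow
    return z                                          # near-zero entries: discard those features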

Causality

What can go wrong? Guyon-Aliferis-Elisseeff, 2007

What can go wrong?

Guyon-Aliferis-Elisseeff, 2007. [Figure: causal graphs relating X1, X2, and Y.]

Wrapping up

Bilevel optimization. Split the data (M samples, N variables/features) into three sets: training (m1), validation (m2), and test (m3). 1) For each feature subset, train the predictor on the training data. 2) Select the feature subset that performs best on the validation data; repeat and average if you want to reduce variance (cross-validation). 3) Assess the final model on the test data.
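
In scikit-learn terms, the inner train/validation loop can be carried out by cross-validation on the training part, with an untouched test set kept for the final estimate. A hedged sketch follows (the synthetic data, the univariate filter inside the pipeline, and the grid of subset sizes are all illustrative).

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=100, n_informative=8, random_state=0)
X_trval, X_test, y_trval, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([('select', SelectKBest(f_classif)),
                 ('clf', LinearSVC(dual=False, max_iter=5000))])

# Inner loop: cross-validation on the training data chooses the subset size k
search = GridSearchCV(pipe, {'select__k': [2, 4, 8, 16, 32, 64]}, cv=5)
search.fit(X_trval, y_trval)

# Outer loop: the held-out test set gives an unbiased estimate of the selected model
print(search.best_params_, search.score(X_test, y_test))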

Complexity of Feature Selection. With high probability, Generalization_error ≤ Validation_error + ε(C/m2), where m2 is the number of validation examples and C measures the complexity of the search over feature subsets (N: total number of features, n: feature subset size). Try to keep C of the order of m2. [Figure: error as a function of the subset size n.]

CLOP. CLOP = Challenge Learning Object Package, based on the Matlab® Spider package developed at the Max Planck Institute. Two basic abstractions: data objects and model objects. Typical script:
D = data(X, Y);        % Data constructor
M = kridge;            % Model constructor
[R, Mt] = train(M, D); % Train model => Mt
Dt = data(Xt, Yt);     % Test data constructor
Rt = test(Mt, Dt);     % Test the model

NIPS 2003 FS challenge

Conclusion. Feature selection focuses on uncovering subsets of variables X1, X2, … that are predictive of the target Y. Multivariate feature selection is in principle more powerful than univariate feature selection, but not always in practice. Taking a closer look at the type of dependencies, in terms of causal relationships, may help refine the notion of variable relevance.

Acknowledgements and references. 1) Feature Extraction: Foundations and Applications, I. Guyon et al., Eds., Springer, 2006. 2) Causal feature selection, I. Guyon, C. Aliferis, A. Elisseeff, to appear in Computational Methods of Feature Selection, Huan Liu and Hiroshi Motoda, Eds., Chapman and Hall/CRC Press.