Feature selection methods from correlation to causality
Isabelle Guyon (isabelle@clopinet.com)
NIPS 2008 workshop on kernel learning
Acknowledgements and references
1) Feature Extraction, Foundations and Applications. I. Guyon, S. Gunn, et al. Springer, 2006. http://clopinet.com/fextract-book
2) Causal feature selection. I. Guyon, C. Aliferis, A. Elisseeff. To appear in "Computational Methods of Feature Selection", Huan Liu and Hiroshi Motoda, Eds., Chapman and Hall/CRC Press, 2007. http://clopinet.com/causality
http://clopinet.com/causality
Constantin Aliferis, Alexander Statnikov, André Elisseeff, Jean-Philippe Pellet, Gregory F. Cooper, Peter Spirtes
Introduction
Feature Selection
Thousands to millions of low-level features: select the most relevant ones to build better, faster, and easier-to-understand learning machines.
[Diagram: data matrix X with m examples and n features, reduced to n' selected features.]
Applications
[Chart: application domains arranged by number of variables/features and number of examples (scales 10 to 10^6): Bioinformatics, Quality control, Machine vision, Customer knowledge, OCR, HWR, Market analysis, Text categorization, System diagnosis.]
Nomenclature
Univariate method: considers one variable (feature) at a time.
Multivariate method: considers subsets of variables (features) together.
Filter method: ranks features or feature subsets independently of the predictor (classifier).
Wrapper method: uses a classifier to assess features or feature subsets.
Univariate Filter Methods
Univariate feature ranking
Normally distributed classes with equal variance σ², unknown and estimated from the data as σ²_within.
Null hypothesis H0: μ+ = μ−.
T statistic: if H0 is true, t = (μ+ − μ−) / (σ_within √(1/m+ + 1/m−)) follows a Student distribution with m+ + m− − 2 degrees of freedom.
[Figure: class-conditional densities P(Xi | Y=−1) and P(Xi | Y=+1) along feature xi, with means μ− and μ+.]
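For concreteness, here is a minimal numpy sketch of univariate ranking with the two-sample T statistic above; the data matrix X, the label convention y ∈ {−1, +1}, and the function names are illustrative assumptions, not part of the slides (scipy.stats.ttest_ind computes an equivalent statistic).

import numpy as np

def t_statistic_ranking(X, y):
    """Rank features by |t|, with t = (mu+ - mu-) / (s_within * sqrt(1/m+ + 1/m-))."""
    Xp, Xm = X[y == 1], X[y == -1]
    mp, mm = len(Xp), len(Xm)
    mu_p, mu_m = Xp.mean(axis=0), Xm.mean(axis=0)
    # Pooled (within-class) standard deviation, m+ + m- - 2 degrees of freedom.
    s_within = np.sqrt(((mp - 1) * Xp.var(axis=0, ddof=1) +
                        (mm - 1) * Xm.var(axis=0, ddof=1)) / (mp + mm - 2))
    t = (mu_p - mu_m) / (s_within * np.sqrt(1.0 / mp + 1.0 / mm))
    return np.argsort(-np.abs(t)), t   # features sorted from most to least significant

# Toy usage: feature 3 is the only relevant one.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = np.where(X[:, 3] + 0.5 * rng.normal(size=100) > 0, 1, -1)
order, t = t_statistic_ranking(X, y)
print(order[:5])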
Statistical tests (chap. 2) (Guyon, Dreyfus, 2006)
H0: X and Y are independent.
Relevance index → test statistic.
P-value → false positive rate FPR = n_fp / n_irr.
Multiple testing problem: use Bonferroni correction, pval → n · pval.
False discovery rate: FDR = n_fp / n_sc ≤ FPR · n / n_sc.
Probe method: FPR ≈ n_sp / n_p.
[Figure: null distribution of the relevance index, with threshold r0, observed value r, and the corresponding p-value.]
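A small sketch of the multiple-testing corrections mentioned above, assuming an array of per-feature p-values is already available. The Bonferroni correction follows the pval → n·pval rule on the slide; the Benjamini-Hochberg step-up procedure is one common way to control the FDR and is my illustrative choice, not something the slide prescribes.

import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Keep features whose Bonferroni-corrected p-value n*pval stays below alpha."""
    n = len(pvals)
    return np.where(pvals * n < alpha)[0]

def benjamini_hochberg(pvals, alpha=0.05):
    """One common way to control the false discovery rate FDR = n_fp / n_sc."""
    n = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, n + 1) / n
    passed = pvals[order] <= thresholds
    k = np.max(np.where(passed)[0]) + 1 if passed.any() else 0
    return np.sort(order[:k])   # indices of the selected (significant) features

pvals = np.array([1e-5, 0.002, 0.03, 0.2, 0.6])
print(bonferroni(pvals), benjamini_hochberg(pvals))   # BH keeps more features than Bonferroni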
Univariate Dependence
Independence: P(X, Y) = P(X) P(Y)
Measure of dependence (mutual information):
MI(X, Y) = ∫ P(X,Y) log [ P(X,Y) / (P(X)P(Y)) ] dX dY = KL( P(X,Y) || P(X)P(Y) )
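A minimal numpy sketch of this MI(X, Y) measure for two discrete variables, estimated from the empirical joint distribution; the variable names and toy data are illustrative.

import numpy as np

def mutual_information(x, y):
    """MI(X,Y) = sum_{x,y} P(x,y) log [ P(x,y) / (P(x)P(y)) ], in nats."""
    xs, ys = np.unique(x), np.unique(y)
    joint = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px = joint.sum(axis=1, keepdims=True)   # marginal P(X)
    py = joint.sum(axis=0, keepdims=True)   # marginal P(Y)
    nz = joint > 0
    return np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz]))

x = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y = np.array([0, 0, 1, 1, 0, 1, 1, 0])   # mostly follows x, so MI > 0
print(mutual_information(x, y))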
Other criteria (chap. 3) (Wlodzislaw Duch, 2006)
A choice of feature selection ranking methods depending on the nature of:
–the variables and the target (binary, categorical, continuous);
–the problem (dependencies between variables, linear/non-linear relationships between variables and target);
–the available data (number of examples, number of variables, noise in the data);
–the available tabulated statistics.
Multivariate Methods
Univariate selection may fail Guyon-Elisseeff, JMLR 2004; Springer 2006
Filters, Wrappers, and Embedded methods
Filter: All features → Filter → Feature subset → Predictor.
Wrapper: All features → Multiple feature subsets (assessed by the predictor) → Predictor.
Embedded method: All features → Feature subset selected within the predictor itself.
Relief (Kira and Rendell, 1992)
For each example, find its nearest hit (closest example of the same class) and its nearest miss (closest example of the other class).
Relief criterion: Relief = ⟨D_miss⟩ / ⟨D_hit⟩, the average distance to the nearest miss divided by the average distance to the nearest hit.
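A rough sketch of this criterion: each feature is scored by the average per-feature distance to the nearest miss divided by the average per-feature distance to the nearest hit. Euclidean neighbours in the full feature space, binary labels, and all names are assumptions made for illustration, not a definitive implementation of the original algorithm.

import numpy as np

def relief_scores(X, y):
    m, n = X.shape
    d_hit, d_miss = np.zeros(n), np.zeros(n)
    for i in range(m):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                       # exclude the example itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest hit
        miss = np.argmin(np.where(diff, dist, np.inf))   # nearest miss
        d_hit += np.abs(X[i] - X[hit])         # per-feature distance to nearest hit
        d_miss += np.abs(X[i] - X[miss])       # per-feature distance to nearest miss
    return d_miss / (d_hit + 1e-12)            # large score = relevant feature

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = (X[:, 0] > 0).astype(int)                  # only feature 0 is relevant
print(relief_scores(X, y))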
Wrappers for feature selection (Kohavi-John, 1997)
N features, 2^N possible feature subsets!
Search Strategies (chap. 4) (Juha Reunanen, 2006)
Exhaustive search.
Stochastic search (simulated annealing, genetic algorithms).
Beam search: keep the k best paths at each step.
Greedy search: forward selection or backward elimination.
Floating search: alternate forward and backward strategies.
Forward Selection (wrapper)
Also referred to as SFS: Sequential Forward Selection.
[Diagram: greedy search starting from the empty set, choosing among n, then n−1, then n−2, … remaining features at each step.]
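A minimal sketch of SFS used as a wrapper, assuming scikit-learn is available; the logistic-regression classifier, the 3-fold cross-validated accuracy score, and the function name are illustrative choices (scikit-learn also ships a ready-made SequentialFeatureSelector).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_selection(X, y, k, estimator=None):
    """Greedily add the feature that most improves cross-validated accuracy."""
    estimator = estimator or LogisticRegression(max_iter=1000)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = [(cross_val_score(estimator, X[:, selected + [j]], y, cv=3).mean(), j)
                  for j in remaining]
        best_score, best_j = max(scores)       # wrapper step: assess subsets with the classifier
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = (X[:, 2] + X[:, 7] > 0).astype(int)        # features 2 and 7 are relevant
print(forward_selection(X, y, k=2))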
Forward Selection (embedded)
Guided search: we do not consider alternative paths.
Typical examples: Gram-Schmidt orthogonalization and tree classifiers.
Backward Elimination (wrapper)
Also referred to as SBS: Sequential Backward Selection.
[Diagram: greedy search starting from the full set of n features and removing one feature at a time (n, n−1, n−2, …, 1).]
Backward Elimination (embedded)
Guided search: we do not consider alternative paths.
Typical example: "recursive feature elimination" RFE-SVM.
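A rough sketch of recursive feature elimination with a linear SVM: train, drop the feature with the smallest weight magnitude, repeat. It assumes scikit-learn (which also provides sklearn.feature_selection.RFE as a ready-made version); the stopping size and toy data are illustrative.

import numpy as np
from sklearn.svm import LinearSVC

def rfe_svm(X, y, n_keep):
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        w = LinearSVC(max_iter=10000).fit(X[:, remaining], y).coef_.ravel()
        remaining.pop(int(np.argmin(np.abs(w))))   # eliminate the feature with smallest |w|
    return remaining

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 8))
y = (X[:, 1] - X[:, 5] > 0).astype(int)            # features 1 and 5 are relevant
print(rfe_svm(X, y, n_keep=2))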
Scaling Factors
Idea: transform a discrete space into a continuous space.
Discrete indicators of feature presence: σi ∈ {0, 1}.
Continuous scaling factors: σi ∈ ℝ, σ = [σ1, σ2, σ3, σ4].
Now we can do gradient descent!
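To make the gradient-descent idea concrete, here is a small numpy sketch that multiplies each feature by a continuous scaling factor σ and learns σ jointly with the weights of a logistic model; the logistic loss and the L1 penalty on σ are my illustrative assumptions, not prescribed by the slide.

import numpy as np

def fit_scaling_factors(X, y, lr=0.1, lam=0.01, steps=500):
    m, n = X.shape
    w, sigma, b = np.zeros(n), np.ones(n), 0.0
    for _ in range(steps):
        z = (X * sigma) @ w + b
        p = 1.0 / (1.0 + np.exp(-z))             # sigmoid
        err = (p - y) / m
        grad_w = (X * sigma).T @ err             # gradient w.r.t. the weights
        grad_sigma = (X * w).T @ err + lam * np.sign(sigma)   # L1 pushes useless sigma to 0
        w -= lr * grad_w
        sigma -= lr * grad_sigma
        b -= lr * err.sum()
    return sigma, w                               # small |sigma_j| = feature j effectively dropped

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 3] > 0).astype(float)         # features 0 and 3 are relevant
sigma, w = fit_scaling_factors(X, y)
print(np.round(sigma, 2))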
Formalism (chap. 5) (Lal, Chapelle, Weston, Elisseeff, 2006)
Many learning algorithms are cast into a minimization of some regularized functional: empirical error + regularization (capacity control).
Justification of RFE and many other embedded methods.
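As one hedged illustration of this regularized-functional view, an L1 penalty (the lasso) added to the empirical error drives the weights of irrelevant features exactly to zero, so feature selection is embedded in the training itself; the example assumes scikit-learn and synthetic data, and the lasso is my choice of example rather than the slide's.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 4] + 0.1 * rng.normal(size=100)   # only features 0 and 4 matter

model = Lasso(alpha=0.1).fit(X, y)          # alpha sets the regularization strength
selected = np.flatnonzero(model.coef_)      # non-zero weights = selected features
print(selected, np.round(model.coef_, 2))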
Embedded method
Embedded methods are a good inspiration to design new feature selection techniques for your own algorithms:
–Find a functional that represents your prior knowledge about what a good model is.
–Add the scaling factors σ into the functional and make sure it is either differentiable or that you can perform a sensitivity analysis efficiently.
–Optimize alternately with respect to the model parameters α and the scaling factors σ.
–Use early stopping (validation set) or your own stopping criterion to stop, and select the subset of features.
Embedded methods are therefore not too far from wrapper techniques and can be extended to multiclass, regression, etc.
Causality
What can go wrong? Guyon-Aliferis-Elisseeff, 2007
What can go wrong?
Guyon-Aliferis-Elisseeff, 2007
[Diagram: causal graph over variables X1, X2, and the target Y.]
Local causal graph
[Diagram: causal graph around Lung Cancer, with nodes Smoking, Genetics, Coughing, Attention Disorder, Allergy, Anxiety, Peer Pressure, Yellow Fingers, Car Accident, Born an Even Day, Fatigue.]
What works and why?
Bilevel optimization
Split the data (M samples, N variables/features) into 3 sets: training (m1), validation (m2), and test (m3).
1) For each feature subset, train the predictor on the training data.
2) Select the feature subset that performs best on the validation data.
–Repeat and average if you want to reduce variance (cross-validation).
3) Test on the test data.
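A compact sketch of this bilevel procedure, assuming scikit-learn: each candidate feature subset is trained on the training set, compared on the validation set, and only the winner is scored once on the test set. The candidate subsets, the model, and the split proportions are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + X[:, 2] > 0).astype(int)

# Split into training (m1), validation (m2), and test (m3) sets.
X_tmp, X_te, y_tmp, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

candidate_subsets = [[0], [0, 2], [0, 2, 5], [1, 3]]
val_scores = [LogisticRegression(max_iter=1000).fit(X_tr[:, s], y_tr).score(X_va[:, s], y_va)
              for s in candidate_subsets]
best = candidate_subsets[int(np.argmax(val_scores))]        # chosen on validation data only
test_score = LogisticRegression(max_iter=1000).fit(X_tr[:, best], y_tr).score(X_te[:, best], y_te)
print(best, round(test_score, 3))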
Complexity of Feature Selection
With high probability: Generalization_error ≤ Validation_error + ε(C/m2)
m2: number of validation examples, N: total number of features, n: feature subset size.
Try to keep C of the order of m2.
[Plot: error as a function of the feature subset size n.]
Insensitivity to irrelevant features
Simple univariate predictive model, binary target and features; all relevant features correlate perfectly with the target, all irrelevant features are randomly drawn.
With 98% confidence, abs(feature_weight) < w and |Σi wi xi| < v.
n_g: number of "good" (relevant) features; n_b: number of "bad" (irrelevant) features; m: number of training examples.
Conclusion
Feature selection focuses on uncovering subsets of variables X1, X2, … predictive of the target Y.
Multivariate feature selection is in principle more powerful than univariate feature selection, but not always in practice.
Taking a closer look at the type of dependencies, in terms of causal relationships, may help refine the notion of variable relevance.
Feature selection and causal discovery may be more harmful than useful.
Causality can help ML, but ML can also help causality.