Machine Learning: Feature Creation and Selection
Jeff Howbert, Introduction to Machine Learning, Winter 2012

Feature creation

Well-conceived new features can sometimes capture the important information in a dataset much more effectively than the original features.

Three general methodologies:
- Feature extraction
  - typically results in a significant reduction in dimensionality
  - domain-specific
- Map existing features to a new space
- Feature construction
  - combine existing features
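
As a minimal sketch of the feature construction idea (combining existing features into new ones), here is a hypothetical pandas example; the column names and derived ratios are illustrative, not from the slides:

```python
import pandas as pd

# Hypothetical housing-style data; columns are illustrative only.
df = pd.DataFrame({
    "total_price": [250000, 320000, 180000],
    "floor_area":  [120.0, 150.0, 90.0],
    "n_rooms":     [4, 5, 3],
})

# Feature construction: combine existing features into new, more informative ones.
df["price_per_sqm"] = df["total_price"] / df["floor_area"]
df["area_per_room"] = df["floor_area"] / df["n_rooms"]
print(df)
```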

SIFT features

Scale-invariant feature transform (SIFT): image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters.
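
As a hedged illustration of feature extraction with SIFT (not part of the original slides), the sketch below assumes OpenCV 4.4+ where SIFT is available as cv2.SIFT_create, and an image file path that is purely a placeholder:

```python
import cv2

# Load an image in grayscale; the path is a placeholder.
img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute SIFT descriptors.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint yields a 128-dimensional descriptor usable as a local feature vector.
print(len(keypoints), descriptors.shape)
```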

Extraction of power bands from EEG

[Figure: multi-channel EEG recording (time domain) transformed into a multi-channel power spectrum (frequency domain); a time window is selected from the recording]

1. Select a time window.
2. Fourier transform each EEG channel to give the corresponding channel power spectrum.
3. Segment the power spectrum into bands.
4. Create a channel-band feature by summing the values in each band.
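
A minimal NumPy sketch of steps 1-4 for a single simulated channel (the sampling rate, window length, and band definitions are assumptions for illustration):

```python
import numpy as np

fs = 256                                   # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)            # step 1: a 2-second time window
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # 10 Hz rhythm + noise

# Step 2: Fourier transform -> channel power spectrum.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

# Steps 3-4: segment the spectrum into bands and sum the power within each band.
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: power[(freqs >= lo) & (freqs < hi)].sum() for name, (lo, hi) in bands.items()}
print(features)
```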

Map existing features to a new space

- Fourier transform
  - makes periodic structure visible that is obscured by noise in the time domain

[Figure: two sine waves; two sine waves + noise; frequency spectrum]

Attribute transformation

- Simple functions
  - Examples of transform functions: x^k, log(x), e^x, |x|
  - Often used to make the data more like some standard distribution, to better satisfy the assumptions of a particular algorithm.
    - Example: discriminant analysis explicitly models each class distribution as a multivariate Gaussian.

[Figure: effect of a log(x) transform]
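
A small sketch of a log transform applied to a skewed feature (simulated data, not from the slides), showing how the transform moves the distribution toward something more Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=3.0, sigma=1.0, size=1000)   # right-skewed, income-like feature

x_log = np.log(x)        # log(x) transform; np.log1p(x) is a common variant if x can be 0

def skew(v):
    # Sample skewness: third standardized moment.
    return float(((v - v.mean()) ** 3).mean() / v.std() ** 3)

print("skewness before:", round(skew(x), 2))
print("skewness after: ", round(skew(x_log), 2))
```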

Feature subset selection

Reduces the dimensionality of the data without creating new features.

Motivations:
- Redundant features
  - highly correlated features contain duplicate information
  - example: purchase price and sales tax paid
- Irrelevant features
  - contain no information useful for discriminating the outcome
  - example: student ID number does not predict student's GPA
- Noisy features
  - signal-to-noise ratio too low to be useful for discriminating the outcome
  - example: high random measurement error on an instrument

Feature subset selection

Benefits:
- Alleviate the curse of dimensionality
- Enhance generalization
- Speed up the learning process
- Improve model interpretability

Curse of dimensionality

As the number of features increases:
- The volume of the feature space increases exponentially.
- Data becomes increasingly sparse in the space it occupies.
- Sparsity makes it difficult to achieve statistical significance for many methods.
- Definitions of density and distance (critical for clustering and other methods) become less useful.
  - all distances start to converge to a common value

Curse of dimensionality

- Randomly generate 500 points.
- Compute the difference between the maximum and minimum distance between any pair of points.
- As dimensionality grows, this difference shrinks relative to the distances themselves, as the sketch below illustrates.
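
A sketch of the experiment in Python (the list of dimensionalities is an arbitrary choice):

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for d in (2, 10, 50, 200, 1000):
    points = rng.uniform(size=(500, d))        # 500 random points in d dimensions
    dists = pdist(points)                      # all unique pairwise Euclidean distances
    rel_gap = (dists.max() - dists.min()) / dists.min()
    print(f"d = {d:5d}   (max - min) / min = {rel_gap:.3f}")
```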

Approaches to feature subset selection

- Filter approaches:
  - features are selected before the machine learning algorithm is run
- Wrapper approaches:
  - use the machine learning algorithm as a black box to find the best subset of features
- Embedded:
  - feature selection occurs naturally as part of the machine learning algorithm
  - example: L1-regularized linear regression
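
A brief sketch of the embedded example, L1-regularized linear regression (lasso), on simulated data; the regularization strength alpha is an arbitrary illustrative value:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=200)   # only 2 informative features

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_ != 0)    # L1 penalty drives other coefficients to exactly 0
print("selected feature indices:", selected)
```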

Approaches to feature subset selection

Both filter and wrapper approaches require:
- A way to measure the predictive quality of the subset
- A strategy for searching the possible subsets
  - exhaustive search is usually infeasible: the search space is the power set of the d features, i.e. 2^d subsets

Filter approaches

Most common search strategy:
1. Score each feature individually for its ability to discriminate the outcome.
2. Rank features by score.
3. Select the top k ranked features.

Common scoring metrics for individual features:
- t-test or ANOVA (continuous features)
- chi-square test (categorical features)
- Gini index
- etc.
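
A sketch of this filter strategy using scikit-learn's univariate selection (the dataset and k = 10 are illustrative choices):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_breast_cancer(return_X_y=True)

# Score each feature with an ANOVA F-test, rank, and keep the top 10.
selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
X_top = selector.transform(X)

print("original features:", X.shape[1], "-> selected:", X_top.shape[1])
print("selected indices:", selector.get_support(indices=True))
```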

Filter approaches

Other strategies look at interaction among features:
- Eliminate based on correlation between pairs of features.
- Eliminate based on statistical significance of individual coefficients from a linear model fit to the data.
  - example: t-statistics of individual coefficients from linear regression
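
A sketch of correlation-based elimination (the simulated data and the 0.95 threshold are assumptions for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
price = rng.uniform(10, 100, size=200)
df = pd.DataFrame({
    "purchase_price": price,
    "sales_tax_paid": 0.08 * price,               # redundant: perfectly correlated with price
    "store_rating": rng.uniform(1, 5, size=200),
    "customer_age": rng.integers(18, 80, size=200),
})

corr = df.corr().abs()
# Keep only the upper triangle so each pair is considered once.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
print("dropping:", to_drop)
df_reduced = df.drop(columns=to_drop)
```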

Wrapper approaches

Most common search strategies are greedy:
- Random selection
- Forward selection
- Backward elimination

Scoring uses some chosen machine learning algorithm:
- Each feature subset is scored by training the model using only that subset, then assessing accuracy in the usual way (e.g. cross-validation).
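
A sketch of the scoring step for a single candidate subset, using cross-validated accuracy of a logistic regression (the dataset, model, and subset are illustrative assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
subset = [0, 3, 7, 21]                        # a hypothetical candidate feature subset

score = cross_val_score(LogisticRegression(max_iter=5000), X[:, subset], y, cv=5).mean()
print(f"CV accuracy using features {subset}: {score:.3f}")
```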

Forward selection

    Assume d features are available in the dataset: |FUnsel| == d
    Optional: target number of selected features k
    Set of selected features initially empty: FSel = {}
    Best feature set score initially 0: scoreBest = 0
    Do
        Best next feature initially null: FBest = null
        For each feature F in FUnsel
            Form a trial set of features FTrial = FSel + F
            Run wrapper algorithm, using only features FTrial
            If score(FTrial) > scoreBest
                FBest = F; scoreBest = score(FTrial)
        If FBest != null
            FSel = FSel + FBest; FUnsel = FUnsel - FBest
    Until FBest == null or FUnsel == {} or |FSel| == k
    Return FSel
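
A runnable Python sketch of the forward-selection loop above, using 5-fold cross-validated accuracy of a logistic regression as the wrapper score (the dataset, model, and k are illustrative assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

def score(subset):
    return cross_val_score(model, X[:, subset], y, cv=5).mean()

k = 5                                          # target number of selected features
selected, unselected = [], list(range(X.shape[1]))
score_best = 0.0

while unselected and len(selected) < k:
    best_feature = None
    for f in unselected:                       # try adding each remaining feature
        trial_score = score(selected + [f])
        if trial_score > score_best:
            best_feature, score_best = f, trial_score
    if best_feature is None:                   # no single addition improves the score
        break
    selected.append(best_feature)
    unselected.remove(best_feature)
    print(f"added feature {best_feature}, CV accuracy = {score_best:.3f}")

print("selected features:", selected)
```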

Random selection

    Number of features available in the dataset: d
    Target number of selected features: k
    Target number of random trials: T
    Set of selected features initially empty: FSel = {}
    Best feature set score initially 0: scoreBest = 0
    Number of trials conducted initially 0: t = 0
    Do
        Choose a trial subset of features FTrial at random from the full set of d available features, such that |FTrial| == k
        Run wrapper algorithm, using only features FTrial
        If score(FTrial) > scoreBest
            FSel = FTrial; scoreBest = score(FTrial)
        t = t + 1
    Until t == T
    Return FSel
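
The corresponding Python sketch of random selection (T, k, the dataset, and the model are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
k, T = 5, 20

best_subset, best_score = None, 0.0
for _ in range(T):
    trial = rng.choice(X.shape[1], size=k, replace=False)      # random subset of size k
    trial_score = cross_val_score(LogisticRegression(max_iter=5000), X[:, trial], y, cv=5).mean()
    if trial_score > best_score:
        best_subset, best_score = trial, trial_score

print("best subset:", sorted(best_subset.tolist()), "CV accuracy:", round(best_score, 3))
```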

Other wrapper approaches

- If d and k are not too large, it is feasible to check all possible subsets of size k.
  - This is essentially the same as random selection, but done exhaustively.
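
For completeness, a tiny sketch of enumerating all subsets of size k (d and k here are arbitrary small values; each subset would then be scored with the wrapper model as in the sketches above):

```python
from itertools import combinations

d, k = 8, 3
all_subsets = list(combinations(range(d), k))   # C(8, 3) = 56 candidate subsets
print(len(all_subsets), "subsets, e.g.", all_subsets[:3])
```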