Feature Selection 1 Feature Selection for Image Retrieval By Karina Zapién Arreola, January 21st, 2005
Feature Selection 2 Introduction Variable and feature selection have become the focus of much research in application areas where datasets with many variables are available: text processing, gene expression, combinatorial chemistry.
Feature Selection 3 Motivation The objective of feature selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data.
Feature Selection 4 Why use feature selection in CBIR? Different users may need different features for image retrieval. From each selected sample, a specific feature set can be chosen.
Feature Selection 5 Boosting A method for improving the accuracy of any learning algorithm: use of "weak algorithms" for single rules, weighting of the weak algorithms, and combination of the weak rules into a strong learning algorithm.
Feature Selection 6 Adaboost Algorithm Adaboost is an iterative boosting algorithm. Notation: samples (x_1, y_1), …, (x_n, y_n), where y_i ∈ {−1, 1}; there are m positive samples and l negative samples; weak classifiers h_j. For iteration t, the error is defined as ε_t = min_j (1/2) Σ_i ω_i |h_j(x_i) − y_i|, where ω_i is the weight of x_i.
Feature Selection 7 Adaboost Algorithm
Given samples (x_1, y_1), …, (x_n, y_n), where y_i ∈ {−1, 1}
Initialize ω_{1,i} = 1/(2m) for y_i = 1 and ω_{1,i} = 1/(2l) for y_i = −1
For t = 1, …, T:
  Normalize ω_{t,i} = ω_{t,i} / (Σ_j ω_{t,j})
  Train each base learner h_j using the distribution ω_t
  Choose the h_t that minimizes ε_t; let e_i be its per-sample error (e_i = 0 if x_i is classified correctly, 1 otherwise)
  Set β_t = ε_t / (1 − ε_t) and α_t = log(1/β_t)
  Update ω_{t+1,i} = ω_{t,i} β_t^{1 − e_i}
Output the final classifier H(x) = sign(Σ_t α_t h_t(x))
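The update rule above translates directly into code. Below is a minimal NumPy sketch of this training loop, assuming the weak classifiers are callables that map a sample matrix to labels in {−1, +1}; the function and variable names (adaboost, weak_learners) are illustrative, not from the original slides:

```python
import numpy as np

def adaboost(X, y, weak_learners, T=30):
    """X: (n_samples, n_features); y in {-1, +1}.
    weak_learners: list of callables, each mapping X -> predictions in {-1, +1}."""
    m, l = np.sum(y == 1), np.sum(y == -1)               # positive / negative counts
    w = np.where(y == 1, 1.0 / (2 * m), 1.0 / (2 * l))   # initial weights
    alphas, chosen = [], []
    preds = [h(X) for h in weak_learners]                 # cache weak-learner outputs
    for t in range(T):
        w = w / w.sum()                                    # normalize weights
        # weighted error of every weak classifier: (1/2) * sum_i w_i |h(x_i) - y_i|
        errors = [0.5 * np.sum(w * np.abs(p - y)) for p in preds]
        best = int(np.argmin(errors))
        eps = max(float(errors[best]), 1e-10)              # guard against a perfect weak learner
        beta = eps / (1.0 - eps)
        alpha = np.log(1.0 / beta)
        e = (preds[best] != y).astype(float)               # e_i = 0 if correct, 1 otherwise
        w = w * beta ** (1.0 - e)                          # down-weight correctly classified samples
        alphas.append(alpha)
        chosen.append(weak_learners[best])
    def H(Xq):                                             # strong classifier
        return np.sign(sum(a * h(Xq) for a, h in zip(alphas, chosen)))
    return H
```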
Feature Selection 8 Adaboost Application Searching similar groups: a particular image class is chosen, a positive sample of this group is drawn randomly, and a negative sample of the rest of the images is drawn randomly.
Feature Selection 9 Checklist for Feature Selection: Domain knowledge, Commensurate features, Interdependence of features, Pruning of input variables, Assess features individually, Dirty data, Predictor – linear predictor, Comparison, Stable solution
Feature Selection 10 Domain knowledge Features used: colordb_sumRGB_entropy_d1, col_gpd_hsv, col_gpd_lab, col_gpd_rgb, col_hu_hsv2, col_hu_lab2, col_hu_lab, col_hu_rgb2, col_hu_rgb, col_hu_seg2_hsv, col_hu_seg2_lab, col_hu_seg2_rgb, col_hu_seg_hsv, col_hu_seg_lab, col_hu_seg_rgb, col_hu_yiq, col_ngcm_rgb, col_sm_hsv, col_sm_lab, col_sm_rgb, col_sm_yiq, text_gabor, text_tamura, edgeDB, waveletDB, hist_phc_hsv, hist_phc_rgb, Hist_Grad_RGB, haar_RGB, haar_HSV, haar_rgb, haar_hmmd
Feature Selection 11 Checklist for Feature Selection: Domain knowledge, Commensurate features – Normalize features to an appropriate range. Adaboost treats each feature independently, so it is not necessary to normalize them.
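For predictors that do need commensurate features, a minimal min-max scaling sketch is shown below (column-wise scaling to [0, 1]; as the slide notes, this step is not required for Adaboost). The names are illustrative:

```python
import numpy as np

def minmax_scale(X, eps=1e-12):
    """Scale each feature column of X to the range [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo + eps)   # eps guards constant columns
```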
Feature Selection 12 Checklist for Feature Selection: Domain knowledge, Commensurate features, Interdependence of features, Pruning of input variables, Assess features individually, Dirty data, Predictor – linear predictor, Comparison, Stable solution
Feature Selection 13 Feature construction and space dimensionality reduction Clustering Correlation coefficient Supervised feature selection Filters
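As one hedged illustration of the correlation-coefficient and filter ideas listed above, the sketch below greedily drops features that are highly correlated with an already-kept feature. This is only a generic example, not necessarily the procedure used in this work, and it assumes constant columns have already been removed:

```python
import numpy as np

def prune_correlated(X, threshold=0.95):
    """Greedy filter: drop any feature whose absolute correlation with an
    already-kept feature exceeds the threshold."""
    corr = np.nan_to_num(np.abs(np.corrcoef(X, rowvar=False)))  # feature-feature correlations
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept  # indices of the retained features
```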
Feature Selection 14 Checklist for Feature Selection: Domain knowledge, Commensurate features, Interdependence of features, Pruning of input variables – Features with the same value for all samples (variance = 0) were eliminated. From 4912 linear features, 3583 were selected.
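A minimal sketch of this pruning step, removing columns whose variance is zero across all samples (array names are illustrative):

```python
import numpy as np

def drop_constant_features(X):
    """Keep only the columns of X whose variance is non-zero."""
    keep = X.var(axis=0) > 0
    return X[:, keep], np.flatnonzero(keep)

# X_pruned, kept_idx = drop_constant_features(X)
# In the experiments above, this reduced 4912 features to 3583.
```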
Feature Selection 15 Checklist for Feature Selection: Domain knowledge, Commensurate features, Interdependence of features, Pruning of input variables, Assess features individually – When there is no assessment method, use the Variable Ranking method. In Adaboost this is not necessary.
Feature Selection 16 Variable Ranking A preprocessing step, independent of the choice of the predictor. Correlation criteria: they can only detect linear dependencies. Single-variable classifiers.
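A small sketch of one common ranking criterion, the absolute Pearson correlation of each feature with the class label; this is a generic example of variable ranking, not code from the talk:

```python
import numpy as np

def rank_by_correlation(X, y):
    """Return feature indices sorted by |corr(feature, label)|, best first."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    scores = np.nan_to_num(scores)      # constant features score 0
    return np.argsort(-scores), scores
```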
Feature Selection 17 Variable Ranking Noise reduction and better classification may be obtained by adding variables that are presumably redundant. Perfectly correlated variables are truly redundant in the sense that no additional information is gained by adding them. This does not imply the absence of variable complementarity: two variables that are useless by themselves can be useful together.
Feature Selection 18 Checklist for Feature Selection: Domain knowledge, Commensurate features, Interdependence of features, Pruning of input variables, Assess features individually, Dirty data, Predictor – linear predictor, Comparison, Stable solution
Feature Selection 19 Checklist for Feature Selection: Domain knowledge, Commensurate features, Interdependence of features, Pruning of input variables, Assess features individually, Dirty data, Predictor – linear predictor, Comparison, Stable solution
Feature Selection 20 Adaboost Algorithm
Given samples (x_1, y_1), …, (x_n, y_n), where x_i ∈ X and y_i ∈ {−1, 1}
Initialize ω_{1,i} = 1/(2m) for y_i = 1 and ω_{1,i} = 1/(2l) for y_i = −1
For t = 1, …, T:
  Normalize ω_{t,i} = ω_{t,i} / (Σ_j ω_{t,j})
  Train each base learner h_j using the distribution ω_t
  Choose the h_t that minimizes ε_t; let e_i be its per-sample error (e_i = 0 if x_i is classified correctly, 1 otherwise)
  Set β_t = ε_t / (1 − ε_t) and α_t = log(1/β_t)
  Update ω_{t+1,i} = ω_{t,i} β_t^{1 − e_i}
Output the final classifier H(x) = sign(Σ_t α_t h_t(x))
Feature Selection 21 Weak classifier Each weak classifier h_i is defined as follows: h_i.pos_mean – mean value of feature i for the positive samples; h_i.neg_mean – mean value of feature i for the negative samples. A sample is classified as 1 if its value is closer to h_i.pos_mean, and −1 if it is closer to h_i.neg_mean.
Feature Selection 22 Weak classifier h_i.pos_mean – mean value of feature i for the positive samples; h_i.neg_mean – mean value of feature i for the negative samples. A linear classifier was used. [Diagram: a one-dimensional axis with h_i.neg_mean and h_i.pos_mean marked.]
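A minimal sketch of this per-feature weak classifier: a sample is labelled +1 when its value on feature j lies closer to the positive-class mean than to the negative-class mean. The helper name make_mean_classifier is illustrative:

```python
import numpy as np

def make_mean_classifier(X, y, j):
    """Build the nearest-mean weak classifier for feature column j."""
    pos_mean = X[y == 1, j].mean()
    neg_mean = X[y == -1, j].mean()
    def h(Xq):
        # +1 if closer to pos_mean, -1 if closer to neg_mean
        d_pos = np.abs(Xq[:, j] - pos_mean)
        d_neg = np.abs(Xq[:, j] - neg_mean)
        return np.where(d_pos <= d_neg, 1, -1)
    return h

# weak_learners = [make_mean_classifier(X, y, j) for j in range(X.shape[1])]
```

For a single feature, this nearest-mean rule is equivalent to thresholding at the midpoint between the two class means, which matches the linear classifier mentioned on the slide.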
Feature Selection 23 Checklist for Feature Selection: Domain knowledge, Commensurate features, Interdependence of features, Pruning of input variables, Assess features individually, Dirty data, Predictor – linear predictor, Comparison, Stable solution
Feature Selection 24 Adaboost experiments and results [retrieval results shown for 4 positive samples and for 10 positive samples]
Feature Selection 25 Few positive samples Use of 4 positive samples
Feature Selection 26 More positive samples Use of 10 positive samples [a false positive is marked in the results]
Feature Selection 27 Training data vs. test data Use of 10 positive samples [a false negative is marked in the test results]
Feature Selection 28 Changing number of training iterations The number of iterations used ranged from 5 to 50; 30 iterations were chosen.
Feature Selection 29 Changing sample size [results shown for 5, 10, 15, 20, 25, 30, and 35 positive samples]
Feature Selection 30 Few negative samples Use of 15 negative samples
Feature Selection 31 More negative samples Use of 75 negative samples
Feature Selection 32 Checklist for Feature Selection: Domain knowledge, Commensurate features, Interdependence of features, Pruning of input variables, Assess features individually, Dirty data, Predictor – linear predictor, Comparison (ideas, time, computational resources, examples), Stable solution
Feature Selection 33 Stable solution For Adaboost it is important to have a representative sample. Chosen parameters: positive samples: 15, negative samples: 100, iteration number: 30.
Feature Selection 34 Stable solution with more samples and iterations Classes: Beaches, Dinosaurs, Mountains, Elephants, Buildings, Humans, Roses, Buses, Horses, Food
Feature Selection 35 Stable solution for Dinosaurs Use of: 15 Positive samples 100 Negative samples 30 Iterations
Feature Selection 36 Stable solution for Roses Use of: 15 Positive samples 100 Negative samples 30 Iterations
Feature Selection 37 Stable solution for Buses Use of: 15 Positive samples 100 Negative samples 30 Iterations
Feature Selection 38 Stable solution for Beaches Use of: 15 Positive samples 100 Negative samples 30 Iterations
Feature Selection 39 Stable solution for Food Use of: 15 Positive samples 100 Negative samples 30 Iterations
Feature Selection 40 Unstable Solution
Feature Selection 41 Unstable solution for Roses Use of: 5 Positive samples 10 Negative samples 30 Iterations
Feature Selection 42 Best features for classification (classes: Humans, Beaches, Buildings, Buses, Dinosaurs, Elephants, Roses, Horses, Mountains, Food)
Feature Selection 43 And the winner is…
Feature Selection 44 Feature frequency
Feature Selection 45 Extensions Searching similar images: pairs of images are built and the difference for each feature is calculated. Each difference vector is classified as 1 if both images belong to the same class and −1 if they belong to different classes. Multiclass Adaboost.
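A sketch of how such pairs could be built; the choice of absolute per-feature difference and the helper name build_pairs are assumptions for illustration, and labels[i] is assumed to hold the class of image i:

```python
import numpy as np
from itertools import combinations

def build_pairs(X, labels):
    """For every image pair, compute the per-feature absolute difference and a
    label: +1 if both images share a class, -1 otherwise."""
    diffs, pair_y = [], []
    for i, j in combinations(range(len(labels)), 2):
        diffs.append(np.abs(X[i] - X[j]))
        pair_y.append(1 if labels[i] == labels[j] else -1)
    return np.array(diffs), np.array(pair_y)
```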
Feature Selection 46 Extensions Use of another weak classifier: design weak classifiers that use multiple features → classifier fusion; use a different weak classifier such as SVM, NN, a threshold function, etc. Different feature selection methods: SVM.
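As one example of swapping in a different weak classifier, the sketch below builds a simple threshold function (decision stump) on a single feature that could replace the nearest-mean rule in the earlier sketch; SVM or NN base learners would plug into the same boosting loop. Again, this is only an illustrative sketch:

```python
import numpy as np

def make_stump(X, y, j, n_thresholds=20):
    """Pick the threshold and polarity on feature j with the lowest (unweighted)
    training error; return a callable usable as a weak learner."""
    values = X[:, j]
    best = (None, 1, np.inf)                      # (threshold, polarity, error)
    for thr in np.linspace(values.min(), values.max(), n_thresholds):
        for polarity in (1, -1):
            pred = np.where(polarity * (values - thr) >= 0, 1, -1)
            err = np.mean(pred != y)
            if err < best[2]:
                best = (thr, polarity, err)
    thr, polarity, _ = best
    return lambda Xq: np.where(polarity * (Xq[:, j] - thr) >= 0, 1, -1)
```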
Feature Selection 47 Discussion It is important to add feature selection for image retrieval. A good methodology for selecting features should be used. Adaboost is a learning algorithm → data dependent. It is important to have representative samples. Adaboost can help to improve the classification potential of simple algorithms.
Feature Selection 48 Thank you!