High resolution product by SVM: L’Aquila experience and prospects for the validation site
R. Anniballe, DIET, Sapienza University of Rome
Outline
- SVM theory: main concepts
- Data fusion by SVMs
- Data fusion by SVMs: the L’Aquila test case (methodologies and results)
- Prospects for the validation site
SVM for data classification
A Support Vector Machine (SVM) is a supervised machine learning algorithm developed for solving binary classification problems. During the training phase, an SVM maps the input patterns into a higher-dimensional feature space and finds an optimal hyperplane separating the patterns belonging to different classes in that space. In the test phase, unknown samples are mapped into the same feature space and classified according to their position with respect to the optimal hyperplane.
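A minimal sketch of this train/test flow, assuming scikit-learn's SVC as the implementation (the slides do not name one); the data here are illustrative stand-ins for the real feature vectors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy two-class training data standing in for the real feature vectors.
X_train = rng.normal(size=(100, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# An RBF kernel implicitly performs the mapping into a higher-dimensional space.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)

X_test = rng.normal(size=(10, 4))
print(clf.predict(X_test))            # predicted class labels
print(clf.decision_function(X_test))  # signed distance from the hyperplane
```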
The Optimal Separating Hyperplane
A hyperplane in the feature space is described by the following equation:

    w ∙ Φ(x) + b = 0

where Φ: ℝ^d → ℝ^H is a vector function which maps the d-dimensional input vector x into an H-dimensional (H > d) feature space, w ∈ ℝ^H is a vector perpendicular to the hyperplane, and b is a bias term.

When the training samples are linearly separable in the feature space, the SVM algorithm determines, among all the hyperplanes separating them without error, the one that maximizes the margin, i.e. the distance between the hyperplane and the closest training vectors of each class. When the training samples are not linearly separable in the feature space, the SVM algorithm determines the hyperplane that minimizes the training error (measured through the sum of the slack variables ξᵢ) while separating the rest of the elements with the maximum margin.

Given a set of N labeled training samples (xᵢ, yᵢ), with yᵢ ∈ {−1, +1}, the SVM algorithm determines the optimal hyperplane as the solution of the constrained optimization problem:

    min over w, b, ξ:   ½ ‖w‖² + C Σᵢ ξᵢ
    subject to:         yᵢ (w ∙ Φ(xᵢ) + b) ≥ 1 − ξᵢ,   ξᵢ ≥ 0,   i = 1, …, N

where the constant C > 0 controls the trade-off between margin maximization and training error.
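Although not written out on the slide, the mapping Φ is never evaluated explicitly in practice: a trained SVM classifies a sample through the standard kernelized decision function below, where the kernel K replaces inner products in the feature space and the αᵢ are the Lagrange multipliers of the optimization problem above (nonzero only for the support vectors).

```latex
f(\mathbf{x}) = \operatorname{sign}\!\left( \sum_{i=1}^{N} \alpha_i \, y_i \, K(\mathbf{x}_i, \mathbf{x}) + b \right),
\qquad K(\mathbf{x}_i, \mathbf{x}) = \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x})
```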
Data Fusion by SVMs
In order to integrate information coming from different data sources by SVMs, mainly two approaches can be used:
1. Feature-level data fusion
2. Decision-level data fusion

1. Feature-level data fusion
A single SVM classifies the data based on an input space generated by combining the features extracted from the different data sources into a unique feature vector.
(Diagram: Data Source 1 … Data Source N → feature extraction → combined SVM input space → SVM → final class.)
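A minimal sketch of feature-level fusion under the scheme above: per-source feature arrays are concatenated and fed to one SVM. Array names, shapes, and the random data are illustrative assumptions, not from the slides.

```python
import numpy as np
from sklearn.svm import SVC

n = 200
rng = np.random.default_rng(1)
optical = rng.normal(size=(n, 13))    # e.g. 13 optical features per building
geotech = rng.normal(size=(n, 1))     # e.g. 1 geotechnical feature
structural = rng.normal(size=(n, 3))  # e.g. 3 structural features
y = rng.integers(0, 2, size=n)        # collapsed / uncollapsed labels

# Concatenate per-source features into one input space for a single SVM.
X = np.hstack([optical, geotech, structural])
clf = SVC(kernel="rbf").fit(X, y)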
Data Fusion by SVMs
2. Decision-level data fusion
Distinct SVMs are used to independently classify each dataset. The resulting rule images, each representing the signed distance of the sample from the corresponding optimal hyperplane, are then combined according to a decision fusion strategy, either:
- using an additional SVM, or
- taking the final decision from the SVM that provides the rule image with the maximum absolute value.
(Diagram: Data Source 1 … Data Source N → feature extraction → SVM 1 … SVM N producing rule images f₁(x) … f_N(x) → decision fusion → final class.)
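A minimal sketch of the second fusion strategy (maximum absolute rule value), assuming scikit-learn SVMs already trained per source; the alternative on the slide would instead train an additional SVC on the stacked rule images.

```python
import numpy as np

def max_abs_fusion(svms, feature_sets):
    """Fuse per-source SVM decisions by keeping the most confident one.

    svms: list of trained binary SVCs, one per data source.
    feature_sets: list of (n_samples, n_features_k) arrays, one per source.
    """
    # rules[k, i] = signed distance of sample i from SVM k's hyperplane.
    rules = np.stack([s.decision_function(X_k)
                      for s, X_k in zip(svms, feature_sets)])
    winner = np.abs(rules).argmax(axis=0)             # most confident SVM per sample
    fused = rules[winner, np.arange(rules.shape[1])]  # its rule value
    return (fused > 0).astype(int)                    # sign gives the class
```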
Data Fusion by SVMs: L’Aquila test case
Input data: features from the Optical, Geotechnical and Structural Modules.

Optical features (13): MIpan, KLDpan, MIpsh, KLDpsh, ∆Contrast, ∆Correlation, ∆Energy, ∆Homogeneity, ∆Entropy, ∆Hue, ∆Saturation, ∆Intensity, Difference
Geotechnical features (1): soil resonant period at the building site (Tsoil)
Structural features (3): building height, Earthquake Resistant Design (ERD), EMS98 vulnerability class
Data Fusion by SVMs: L’Aquila test case
Data integration approaches
1. Feature level: the features from the Optical (13), Geotechnical (1) and Structural (3) Modules are combined into a unique feature vector used as the input of a single SVM.
2. Decision level: two SVMs independently classify the optical data and the information from the Structural and Geotechnical modules; an additional SVM is used to integrate the resulting rule images.
For both integration strategies, buildings having Structural and/or Geotechnical features missing are classified using an SVM trained with only the optical features (see the sketch below).
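A hedged sketch of that fallback rule, assuming missing structural/geotechnical features are encoded as NaN and that both SVMs are already trained; all names are illustrative.

```python
import numpy as np

def classify_with_fallback(svm_fused, svm_optical, X_optical, X_struct_geo):
    # Buildings with any missing structural/geotechnical feature (NaN here,
    # by assumption) are routed to the optical-only SVM.
    missing = np.isnan(X_struct_geo).any(axis=1)
    labels = np.empty(len(X_optical), dtype=int)
    X_full = np.hstack([X_optical, X_struct_geo])
    labels[~missing] = svm_fused.predict(X_full[~missing])
    labels[missing] = svm_optical.predict(X_optical[missing])
    return labels
```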
Data Fusion by SVMs: L’Aquila test case
Results
Classification performances were assessed by a K-fold cross-validation approach with K = 10.

Confusion matrices and performance measures:

                               Optical features   Multisource feature   Rule fusion
                               SVM                vector SVM            SVM
  (a) Detected collapsed             30                  28                  33
  (b) False alarms                   57                  35                  56
  (c) Misdetections                  45                  47                  42
  (d) Detected uncollapsed         1545                1567                1546
  Cohen's kappa                   33.86%              38.05%              37.20%
  Normalized Cohen's kappa        36.44%              35.15%              40.50%
  Overall accuracy                93.92%              95.11%              94.2%

Feature-level data integration: the decrease of the false alarm rate is achieved at the expense of a slight reduction of the sensitivity to damage.
Decision-level data integration: the results are not very different from those obtained using only the optical features. However, both the kappa coefficient and its normalized version increase, due to a better sensitivity to damage (more detections) and a slight reduction of the false alarm rate.
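A quick sanity check of the reported metrics from the 2x2 confusion matrix of the optical-only SVM (a = 30, b = 57, c = 45, d = 1545); it reproduces OA = 93.92% and Cohen's kappa = 33.86%.

```python
def kappa_and_oa(a, b, c, d):
    """Cohen's kappa and overall accuracy from a 2x2 confusion matrix."""
    n = a + b + c + d
    po = (a + d) / n                                     # observed agreement (OA)
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    return (po - pe) / (1 - pe), po

k, oa = kappa_and_oa(30, 57, 45, 1545)
print(f"kappa = {k:.2%}, OA = {oa:.2%}")  # kappa = 33.86%, OA = 93.92%
```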
Prospects for the validation site
- SVMs trained on the L’Aquila data set could be tested on the validation site.
- From an operational point of view, it could be interesting to investigate the use of unsupervised or semi-supervised approaches, such as support vector based clustering algorithms and semi-supervised Support Vector Machines.