
A New Subspace Approach for Supervised Hyperspectral Image Classification. Jun Li 1,2, José M. Bioucas-Dias 2 and Antonio Plaza 1. 1 Hyperspectral Computing Laboratory, University of Extremadura, Cáceres, Spain. 2 Instituto de Telecomunicações, Instituto Superior Técnico, TULisbon, Portugal. Contact e-mails: {junli,

Talk Outline:
1. Challenges in hyperspectral image classification
2. Subspace projection
2.1. Subspace projection-based framework
2.2. Considered subspace projection techniques: PCA versus HySime
2.3. Integration with different classifiers (LDA, SVM, MLR)
3. Experimental results
3.1. Experiments with AVIRIS Indian Pines hyperspectral data
3.2. Experiments with ROSIS Pavia University hyperspectral data
4. Conclusions and future research lines

A New Subspace Approach for Hyperspectral Classification. IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2011), Vancouver, Canada, July 24–29, 2011.

Challenges in Hyperspectral Image Classification

[Figure: concept of hyperspectral imaging using NASA Jet Propulsion Laboratory's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS).]

Challenges in Hyperspectral Image Classification

[Figure: spectral resolution grows from panchromatic, to multispectral (10's of bands), to hyperspectral (100's of bands), to ultraspectral (1000's of bands).]

Key challenges: the imbalance between data dimensionality and the number of training samples, and the presence of mixed pixels.

Challenges in hyperspectral image classification

The special characteristics of hyperspectral data pose several processing problems:
1. The high-dimensional nature of hyperspectral data imposes important limitations on supervised classifiers, such as the limited availability of training samples relative to the number of spectral bands and the inherently complex structure of the data.
2. The presence of mixed pixels, resulting from insufficient spatial resolution and other phenomena, must be addressed in order to properly model the hyperspectral data.
3. Computationally efficient algorithms are needed, able to provide a response in reasonable time and thus meet the requirements of time-critical remote sensing applications.

In this work, we evaluate the impact of using subspace projection techniques prior to supervised classification of hyperspectral image data while analyzing each of the aforementioned items.


Subspace Projection-Based Framework

Hyperspectral image data generally live in a lower-dimensional subspace compared with the input feature dimensionality. This can be exploited to address the ill-posed problems caused by limited training samples. Projection onto such subspaces also reduces the spectral confusion introduced by mixed pixels, limiting their impact on the subsequent classification process.

J. Li, J. M. Bioucas-Dias and A. Plaza, "Spectral-spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields," IEEE Transactions on Geoscience and Remote Sensing, in press, 2011.
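To make the projection step concrete, here is a minimal sketch (our illustration in NumPy, not the authors' code; the function name and the assumption of an already-estimated orthonormal basis are ours):

import numpy as np

def project_to_subspace(X, E):
    """Map pixel spectra onto a signal subspace.

    X: (n_pixels, d) matrix of d-band spectra.
    E: (d, k) matrix whose orthonormal columns span the signal subspace,
       e.g. estimated by PCA or HySime as discussed below.
    Returns the (n_pixels, k) subspace coordinates fed to the classifier.
    """
    return X @ E

With k much smaller than d, each training sample constrains far fewer parameters, which is how the ill-posedness caused by limited training data is mitigated.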

Considered Subspace Projection Techniques: PCA versus HySime

Principal Component Analysis (PCA): transforms high-dimensional data according to its distribution in feature space, e.g. by finding the most important directions (axes) and establishing those axes as the reference of a new coordinate system that takes the data distribution into account. The resulting components are ordered in decreasing order of variance.
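For illustration, a variance-ordered PCA basis can be obtained from the eigendecomposition of the sample covariance matrix; this is a generic textbook sketch, not the implementation used in the paper:

import numpy as np

def pca_basis(X, k):
    """Return the top-k principal directions of X (n_pixels, d bands)."""
    Xc = X - X.mean(axis=0)                # center the data
    C = (Xc.T @ Xc) / (Xc.shape[0] - 1)    # d x d sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]      # re-sort by decreasing variance
    return eigvecs[:, order[:k]]           # (d, k) variance-ordered basis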


Hyperspectral Signal Identification by Minimum Error (HySime): a recently developed method for subspace identification in remotely sensed hyperspectral data, which offers several additional features with regard to principal component analysis and other subspace projection techniques.

Principal Component Analysis:
- Seeks the projection that best represents the original hyperspectral data in the least-squares sense.
- Reduces the original signal to a subset of eigenvectors without computing any noise statistics.
- The difficulty in obtaining reliable noise estimates from the resulting eigenvalues is that these eigenvalues still represent mixtures of signal sources and noise.

HySime:
- Finds the subset of eigenvectors, and the corresponding eigenvalues, that minimizes the mean squared error between the original signal and its projection onto the eigenvector subspace.
- Uses multiple regression to estimate the noise and signal covariance matrices.
- Optimally represents the original signal with minimum error.

J. M. Bioucas-Dias and J. M. P. Nascimento, "Hyperspectral subspace identification," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 8, 2008.
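The two HySime ingredients named above, regression-based noise estimation and minimum-error eigenvector selection, can be sketched as follows. This is a simplified, illustrative reading of the method (function names are ours, and the selection rule is a condensed stand-in for the published criterion), not a faithful reimplementation:

import numpy as np

def estimate_noise(X):
    """Approximate the noise by regressing each band on all the other bands
    (multiple regression); the least-squares residuals estimate the noise."""
    n, d = X.shape
    noise = np.zeros_like(X)
    for i in range(d):
        others = np.delete(X, i, axis=1)
        beta, *_ = np.linalg.lstsq(others, X[:, i], rcond=None)
        noise[:, i] = X[:, i] - others @ beta
    return noise

def hysime_like_basis(X):
    """Keep eigenvectors of the signal correlation matrix whose total power
    exceeds twice their noise power (a minimum-MSE-style selection rule)."""
    n, _ = X.shape
    N = estimate_noise(X)
    S = X - N                                   # signal estimate
    Rn = (N.T @ N) / n                          # noise correlation matrix
    Rs = (S.T @ S) / n                          # signal correlation matrix
    eigvals, E = np.linalg.eigh(Rs)
    E = E[:, np.argsort(eigvals)[::-1]]         # decreasing signal power
    power = np.sum(E * ((Rs + Rn) @ E), axis=0)  # e_i^T (Rs + Rn) e_i
    sigma2 = np.sum(E * (Rn @ E), axis=0)        # e_i^T Rn e_i
    return E[:, power > 2.0 * sigma2]           # retained subspace basis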

Supervised Classification Framework Tested in This Work

The framework couples subspace projection with supervised classification based on training samples: the labeled pixels are randomly split into training and test samples; the data are projected onto a subspace (PCA or HySime); a supervised classifier is trained on the projected training samples; and classification accuracy is measured on the test samples.
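An end-to-end sketch of this framework using scikit-learn (our choice of library; PCA stands in for the projection step and randomly generated spectra stand in for a real scene, so the printed accuracy is meaningless):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 202))        # stand-in for labeled pixel spectra
y = rng.integers(0, 16, size=2000)      # stand-in for 16 class labels

# Randomly selected, class-balanced training samples; the rest are test samples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=320, stratify=y, random_state=0)

# Subspace projection followed by a supervised classifier.
clf = make_pipeline(PCA(n_components=30), SVC(kernel='rbf', gamma='scale'))
clf.fit(X_train, y_train)
print('Test overall accuracy:', clf.score(X_test, y_test))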

Integration with Different Classifiers (LDA, SVM, MLR)

Three different supervised classifiers are tested in this work (a comparison sketch follows the list):
1. Linear discriminant analysis (LDA): finds a linear combination of features that separates two or more classes; the resulting combination may be used as a linear classifier (only linearly separable classes will remain separable after applying LDA).
2. Support vector machine (SVM): constructs a set of hyperplanes in a high-dimensional space; a good separation is achieved by the hyperplane with the largest distance to the nearest training points of any class.
3. Multinomial logistic regression (MLR): models the posterior class distributions in a Bayesian framework, thus supplying, in addition to the boundaries between the classes, a degree of plausibility for each class.
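All three classifiers can be trained on the same projected features; a minimal scikit-learn sketch (synthetic data again, so the scores only illustrate the mechanics):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 202))      # stand-in spectra
y = rng.integers(0, 16, size=2000)    # stand-in labels

Z = PCA(n_components=30).fit_transform(X)     # subspace features
Z_train, Z_test, y_train, y_test = train_test_split(
    Z, y, train_size=320, stratify=y, random_state=1)

classifiers = {
    'LDA': LinearDiscriminantAnalysis(),
    'SVM': SVC(kernel='rbf', gamma='scale'),
    'MLR': LogisticRegression(max_iter=1000),  # multinomial over 16 classes
}
for name, model in classifiers.items():
    print(name, model.fit(Z_train, y_train).score(Z_test, y_test))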


Experimental Results Using Real Hyperspectral Data Sets

AVIRIS Indian Pines data set: a challenging classification scenario due to spectrally similar classes and the early growth stage of the agricultural features (only around 5% coverage of soil). The scene comprises 145x145 pixels, 202 spectral bands, and 16 ground-truth classes; random training subsets, evenly distributed among the classes, are drawn from the labeled pixels.

[Figures: false color composition and ground-truth map.]

AVIRIS Indian Pines data set: classification results using 160 training samples (10 training samples per class). For the SVM classifier we used the Gaussian RBF kernel, selected after testing other kernels. The mean accuracies (over 10 Monte Carlo runs) and processing times are reported.
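The evaluation protocol (random, class-balanced training subsets, with accuracies averaged over 10 Monte Carlo runs) can be sketched as follows; the library choice and sizes mirror the setup described above, but the data are synthetic placeholders:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 202))      # stand-in for labeled spectra
y = rng.integers(0, 16, size=2000)    # stand-in for class labels

scores = []
for run in range(10):                 # 10 Monte Carlo runs
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=160, stratify=y, random_state=run)
    pca = PCA(n_components=30).fit(X_tr)       # subspace from training data
    svm = SVC(kernel='rbf', gamma='scale').fit(pca.transform(X_tr), y_tr)
    scores.append(svm.score(pca.transform(X_te), y_te))
print(f'mean OA over runs: {np.mean(scores):.4f} +/- {np.std(scores):.4f}')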

AVIRIS Indian Pines data set: classification results using 240 training samples (15 training samples per class). For the SVM classifier we used the Gaussian RBF kernel, selected after testing other kernels. The mean accuracies (over 10 Monte Carlo runs) and processing times are reported.

AVIRIS Indian Pines data set: classification results using 320 training samples (20 training samples per class). For the SVM classifier we used the Gaussian RBF kernel, selected after testing other kernels. The mean accuracies (over 10 Monte Carlo runs) and processing times are reported.

AVIRIS Indian Pines data set: classification maps obtained with 320 training samples (20 training samples per class), compared against the ground-truth. Overall accuracies: SVM 65.36% vs. subspace SVM 70.33%; LDA 50.74% vs. subspace LDA 54.90%; linear MLR 60.38% vs. subspace MLR 67.53%.

ROSIS Pavia University data set: overall classification accuracies and kappa coefficients (in parentheses) using different training sets.

Conclusions and Hints at Plausible Future Research

We have evaluated the impact of subspace projection on supervised classification of remotely sensed hyperspectral image data sets. Two dimensionality reduction methods were used, PCA and HySime, although many others are available and could be adopted (MNF, OSP, VD). Three different supervised classifiers were considered: LDA, SVM, and MLR. Experimental results indicate that different approaches to hyperspectral image classification can benefit from subspace projection, particularly when very limited training samples are available. Subspace projection integrates naturally with multinomial logistic regression (MLR) classifiers, which greatly benefit from dimensionality reduction. Future work will focus on the evaluation of other subspace projection approaches and additional hyperspectral data sets.

IEEE J-STARS Special Issue on Hyperspectral Image and Signal Processing.
