Orthogonal Subspace Projection - Matched Filter


Orthogonal Subspace Projection - Matched Filter (Chapter 19)
Reduce dimensionality by projecting the pixel vectors onto the subspace orthogonal to the undesired features. Once the undesired features are removed, projecting the residuals onto the signature of interest maximizes the S/N ratio and yields a single-component image that serves as a class map for that signature. The method extends to more than one signature.
Harsanyi, J. C., & Chang, C.-I. (1994). Hyperspectral image classification and dimensionality reduction: An orthogonal subspace projection approach. IEEE Transactions on Geoscience and Remote Sensing, Vol. 32, No. 4, pp. 779-785.

Orthogonal Subspace Projection
Definition of terms (N.B. all terms are a function of pixel location (x, y)):

$$\mathbf{r} = \mathbf{M}\boldsymbol{\alpha} + \mathbf{n} \qquad (1)$$

where $\mathbf{r}$ is an $\ell \times 1$ vector representing the (mixed) pixel in an $\ell$-band image, and $\mathbf{M}$ is an $\ell \times p$ matrix whose columns (assumed linearly independent) are the $p$ end member vectors included in the analysis; $p-1$ of the end members are background and $\mathbf{d}$ is the target.

Orthogonal Subspace Projection
$\boldsymbol{\alpha}$ is a $p \times 1$ vector of end member fractions, and $\mathbf{n}$ is an $\ell \times 1$ vector representing random noise, which is assumed to be independent, identically distributed (i.i.d.) Gaussian with zero mean and covariance $\sigma^{2}\mathbf{I}$ (N.B. this may be forced by preprocessing to orthogonalize and whiten the noise).
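A minimal NumPy sketch of the mixing model in equation (1). The band count, end member spectra, fractions, and noise level below are made-up values for illustration, not values from the slides; later snippets continue this same running example.

```python
import numpy as np

rng = np.random.default_rng(0)

ell, p = 200, 4                           # ell bands, p end members (illustrative values)
M = rng.uniform(0.0, 1.0, size=(ell, p))  # columns: end member spectra (assumed linearly independent)
alpha = np.array([0.5, 0.2, 0.2, 0.1])    # p x 1 vector of end member fractions
n = 0.01 * rng.standard_normal(ell)       # i.i.d. zero-mean Gaussian noise

r = M @ alpha + n                         # mixed pixel vector, equation (1)
```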

Orthogonal Subspace Projection
We can rewrite this as

$$\mathbf{r} = \mathbf{d}\,\alpha_p + \mathbf{U}\boldsymbol{\gamma} + \mathbf{n} \qquad (2)$$

where $\mathbf{d}$ is the $\ell \times 1$ target end member vector, $\alpha_p$ is the fraction of the target in the pixel, $\mathbf{U}$ is the $\ell \times (p-1)$ matrix containing the end members other than $\mathbf{d}$, and $\boldsymbol{\gamma}$ is the $(p-1) \times 1$ vector of fractions for the background end members.
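Continuing the sketch above, equation (2) is just a partition of $\mathbf{M}$ into the target column $\mathbf{d}$ and the background matrix $\mathbf{U}$; taking the target as the last column is an arbitrary choice for illustration.

```python
# Partition M into the target signature d and the background matrix U (equation 2).
d = M[:, -1]            # ell x 1 target end member (taken as the last column here)
U = M[:, :-1]           # ell x (p-1) background end members
alpha_p = alpha[-1]     # target fraction
gamma = alpha[:-1]      # (p-1) x 1 background fractions

r_check = d * alpha_p + U @ gamma + n
assert np.allclose(r_check, r)   # same pixel as equation (1)
```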

Orthogonal Subspace Projection
We desire an operator that will minimize the effects of the background signatures, which are represented by the columns of $\mathbf{U}$. This is accomplished by projecting $\mathbf{r}$ onto a subspace that is orthogonal to the columns of $\mathbf{U}$. The resulting vector should only contain energy (variance) associated with the target signature and random noise.

Orthogonal Subspace Projection
From least squares theory, the projection that minimizes energy (variance) from the signatures in the matrix $\mathbf{U}$ is achieved with the operator

$$\mathbf{P} = \mathbf{I} - \mathbf{U}\mathbf{U}^{\#} \qquad (3)$$

where $\mathbf{U}^{\#} = (\mathbf{U}^{T}\mathbf{U})^{-1}\mathbf{U}^{T}$ is the pseudo-inverse of $\mathbf{U}$. Operating on the image with $\mathbf{P}$ yields

$$\mathbf{P}\mathbf{r} = \mathbf{P}\mathbf{d}\,\alpha_p + \mathbf{P}\mathbf{U}\boldsymbol{\gamma} + \mathbf{P}\mathbf{n} = \mathbf{P}\mathbf{d}\,\alpha_p + \mathbf{P}\mathbf{n} \qquad (4)$$

reducing the contribution of $\mathbf{U}$ to zero in the new projection space (i.e., we have minimized the interference).
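Continuing the sketch, a hedged implementation of the background-suppression operator in equations (3)-(4); `np.linalg.pinv` is one convenient way to form $\mathbf{U}^{\#}$.

```python
# Orthogonal background-suppression operator (equation 3).
P = np.eye(ell) - U @ np.linalg.pinv(U)   # P = I - U U#, with U# = (U^T U)^-1 U^T

# Applying P removes the background contribution (equation 4):
# P r = P d alpha_p + P n, since P U = 0 (up to round-off).
print(np.abs(P @ U).max())                # ~0: columns of U are annihilated
r_suppressed = P @ r
```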

Orthogonal Subspace Projection
In addition, we seek to maximize the signal, or more precisely, the S/N energy in the scene. We seek a $1 \times \ell$ operator $\mathbf{x}^{T}$ which, when applied to our background-suppressed vectors,

$$\mathbf{x}^{T}\mathbf{P}\mathbf{r} = \mathbf{x}^{T}\mathbf{P}\mathbf{d}\,\alpha_p + \mathbf{x}^{T}\mathbf{P}\mathbf{n} \qquad (5)$$

will maximize the S/N ratio (energy) $\lambda$ expressed as

$$\lambda = \frac{\alpha_p^{2}\,\mathbf{x}^{T}\mathbf{P}\mathbf{d}\,\mathbf{d}^{T}\mathbf{P}^{T}\mathbf{x}}{\mathbf{x}^{T}\mathbf{P}\,E\{\mathbf{n}\mathbf{n}^{T}\}\,\mathbf{P}^{T}\mathbf{x}} \qquad (6)$$
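For concreteness, the ratio in equation (6) can be evaluated numerically for any candidate operator $\mathbf{x}$; under the i.i.d. noise assumption $E\{\mathbf{n}\mathbf{n}^{T}\} = \sigma^{2}\mathbf{I}$. This check is illustrative only and continues the running sketch.

```python
# Evaluate the S/N ratio of equation (6) for a candidate operator x,
# assuming E{n n^T} = sigma^2 I (i.i.d. noise).
sigma2 = 0.01 ** 2

def snr(x, P=P, d=d, alpha_p=alpha_p, sigma2=sigma2):
    signal = alpha_p ** 2 * (x @ P @ d) ** 2
    noise = sigma2 * (x @ P @ P.T @ x)
    return signal / noise

print(snr(d))              # x = d, the matched-filter choice of equation (7)
print(snr(np.ones(ell)))   # an arbitrary operator gives a lower ratio
```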

Orthogonal Subspace Projection
where $E\{\cdot\}$ is the expected value and $\lambda$ is a scalar. Maximization of $\lambda$ with respect to the operator $\mathbf{x}^{T}$ is a classic eigenvector problem, which in this case yields the convenient result

$$\mathbf{x}^{T} = k\,\mathbf{d}^{T} \qquad (7)$$

where $k$ is an arbitrary scalar. Thus, the overall operator is the $1 \times \ell$ vector having the form

$$\mathbf{q}^{T} = \mathbf{d}^{T}\mathbf{P} = \mathbf{d}^{T}(\mathbf{I} - \mathbf{U}\mathbf{U}^{\#}) \qquad (8)$$

i.e., we first null the background with $\mathbf{P}$ and then apply a matched filter $\mathbf{d}^{T}$ to maximize the SNR.
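Putting equations (3), (7), and (8) together, a sketch of applying the OSP operator $\mathbf{q}^{T} = \mathbf{d}^{T}\mathbf{P}$ to every pixel of an image cube. The synthetic (rows x cols x bands) cube and its Dirichlet-drawn fractions are assumptions made for the example.

```python
# Apply the overall OSP operator q^T = d^T P (equation 8) to an image cube.
rows, cols = 64, 64
fractions = rng.dirichlet(np.ones(p), size=(rows, cols))          # per-pixel end member fractions
cube = fractions @ M.T + 0.01 * rng.standard_normal((rows, cols, ell))

q = d @ P            # 1 x ell operator, d^T P
score = cube @ q     # rows x cols "class map" for the target signature
```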

Orthogonal Subspace Projection
This approach can be extended to multiple target vectors by generating the $k \times \ell$ matrix operator

$$\mathbf{Q} = \begin{bmatrix} \mathbf{q}_1^{T} \\ \mathbf{q}_2^{T} \\ \vdots \\ \mathbf{q}_k^{T} \end{bmatrix} \qquad (9)$$

where each vector $\mathbf{q}_i^{T} = \mathbf{d}_i^{T}\mathbf{P}_i$ is formed from the $i$th desired signature and its corresponding undesired signature matrix.
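A sketch of the multi-target extension in equation (9), continuing the example above: for each target, the remaining end members form that target's own $\mathbf{U}$ matrix, and the resulting row operators are stacked into $\mathbf{Q}$. Treating every end member as a target is an illustrative choice.

```python
# Build the k x ell operator Q of equation (9), one row per target signature.
def osp_operator(M, target_index):
    """q_i^T = d_i^T (I - U_i U_i^#), with U_i = all end members except column target_index."""
    d_i = M[:, target_index]
    U_i = np.delete(M, target_index, axis=1)
    P_i = np.eye(M.shape[0]) - U_i @ np.linalg.pinv(U_i)
    return d_i @ P_i

Q = np.vstack([osp_operator(M, i) for i in range(p)])   # here every end member is treated as a target
score_maps = cube @ Q.T                                  # rows x cols x k stack of class maps
```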

Orthogonal Subspace Projection
Some limitations of this approach are pointed out in Farrand and Harsanyi (1995), Journal of Geophysical Research, Vol. 100, No. E1, pp. 1565-1578:
1. You need to know the end member vectors that make up the U matrix.
2. If your target looks like an end member, you may have suppressed it in the background-suppression step.

Orthogonal Subspace Projection
3. Since the noise is assumed Gaussian i.i.d., you have to work in a space where this is approximately true, e.g., image digital count (DC) or radiance space. They suggest that transforming to reflectance space may distort the noise and invalidate this approach. (Is this a valid concern? Will gains and biases remove the identically distributed assumption? Is data more likely to be Gaussian i.i.d. in reflectance, radiance, or DC space?)
Comments: The end members don't need to be identified in advance; they can be image derived (e.g., using the PPI algorithm). This still leaves us with limitation #2 if our target represents a large portion of the image.

Orthogonal Subspace Projection
Notes on Harsanyi: the pixel vector $\mathbf{r}$ could be in DN, radiance, or reflectance, depending on how the end members and noise are defined. In the mixing model, $f_i$ denotes the fraction for the $i$th end member.
