A novel supervised feature extraction and classification framework for land cover recognition of the off-land scenario
Yan Cui
2013.1.16



1. The related work
2. The integration algorithm framework
3. Experiments

The related work
• Locally linear embedding
• Sparse representation-based classifier
• K-SVD dictionary learning

Locally linear embedding
LLE is an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs.

Specifically, we expect each data point $x_i$ and its neighbors to lie on or close to a locally linear patch of the manifold, and the local reconstruction errors of these patches are measured by

$$\varepsilon(W) = \sum_i \Big\| x_i - \sum_j w_{ij}\, x_j \Big\|_2^2,$$

where the weight $w_{ij}$ summarizes the contribution of the $j$-th data point to the reconstruction of $x_i$, with $w_{ij} = 0$ unless $x_j$ is a neighbor of $x_i$, and $\sum_j w_{ij} = 1$.
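For concreteness, here is a minimal LLE sketch (not from the slides) using scikit-learn's implementation; the synthetic data, neighborhood size, and target dimension are illustrative assumptions.

```python
# Minimal LLE sketch: embed high-dimensional points into 2-D while
# preserving each point's local linear reconstruction from its neighbors.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))  # 200 samples in 50 dimensions (synthetic)

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
Y = lle.fit_transform(X)        # neighborhood-preserving 2-D embedding
print(Y.shape)                  # (200, 2)
```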

Sparse representation-based classifier
The sparse representation-based classifier (SRC) can be considered a generalization of nearest neighbor (NN) and nearest subspace (NS): it adaptively chooses the minimal number of training samples needed to represent each test sample.

The test sample $y$ is coded over the training matrix $X$ by solving the $\ell_1$-minimization problem

$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad \|y - X\alpha\|_2 \le \varepsilon, \qquad (4)$$

and is assigned to the class with the smallest class-wise reconstruction residual

$$\mathrm{identity}(y) = \arg\min_i r_i(y), \qquad r_i(y) = \|y - X\,\delta_i(\hat{\alpha})\|_2, \qquad (5)$$

where $\delta_i(\hat{\alpha})$ keeps only the coefficients associated with the $i$-th class.
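A hedged sketch of Eqs. (4)-(5) (my illustration, not the authors' code): it uses Lasso as a stand-in for the $\ell_1$ problem in Eq. (4), then applies the class-wise residual rule of Eq. (5). The name `src_classify` and its parameters are assumptions.

```python
# SRC sketch: sparse-code a test sample over the column-stacked training
# matrix, then pick the class whose coefficients reconstruct it best.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(X, labels, y, lam=0.01):
    """X: (d, n) training samples as columns; labels: (n,); y: (d,) test sample."""
    # Lagrangian stand-in for Eq. (4): min ||y - X a||^2 + lam * ||a||_1
    alpha = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(X, y).coef_
    classes = np.unique(labels)
    # Eq. (5): residual r_i(y) = ||y - X delta_i(alpha)||_2 for each class i
    residuals = [np.linalg.norm(y - X[:, labels == c] @ alpha[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```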

K-SVD dictionary learning
• The original training samples contain redundancy as well as noise and trivial information that can hurt recognition.
• If the training set is large, computing the sparse representation is time-consuming, so an optimal dictionary is needed for sparse representation and classification.

The K-SVD algorithm alternates between a sparse coding stage (coding all training signals over the current dictionary) and a dictionary update stage.

The dictionary update stage: atoms are updated one at a time. For the $k$-th atom, form the residual matrix $E_k = Y - \sum_{j \neq k} d_j x_T^j$, restrict it to the signals that actually use atom $k$, and take its best rank-one approximation via SVD; the leading left singular vector becomes the updated atom and the scaled right singular vector gives the updated coefficients.
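A compact sketch of one K-SVD iteration (illustrative; `ksvd_iteration` is an assumed name, and OMP is assumed for the sparse-coding stage):

```python
# One K-SVD iteration: sparse-code all signals, then update each atom via a
# rank-one SVD of the residual restricted to the signals that use it.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd_iteration(Y, D, n_nonzero=5):
    """Y: (d, n) training signals; D: (d, K) dictionary with unit-norm atoms."""
    X = orthogonal_mp(D, Y, n_nonzero_coefs=n_nonzero)  # codes, shape (K, n)
    for k in range(D.shape[1]):
        users = np.flatnonzero(X[k, :])                 # signals using atom k
        if users.size == 0:
            continue
        # E_k = Y - sum_{j != k} d_j x^j, restricted to the using signals
        E_k = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
        D[:, k] = U[:, 0]                               # updated atom
        X[k, users] = s[0] * Vt[0, :]                   # updated coefficients
    return D, X
```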

The integration algorithm for supervised learning
Let $X = [X_1, X_2, \ldots, X_c]$ be the training data matrix, where $X_i$ is the $i$-th class training sample matrix; a test sample $y$ can be well approximated by a linear combination of the training data, i.e. $y = X\alpha$.

Let $\alpha_i$ be the representation coefficient vector with respect to the $i$-th class. For SRC to achieve good performance on all training samples, we want the within-class residual to be minimized and the between-class residual to be maximized simultaneously. We therefore redefine the optimization problem as in Eq. (15).

Restrict the training matrix by choosing only the columns corresponding to the $i$-th class, obtaining the restricted problem in Eq. (16).

With $\alpha_i$ the representation coefficient vector with respect to the $i$-th class, the optimization problem in Eq. (16) is turned into Eq. (17).

In order to obtain the sparse representation coefficients, we want to learn an embedding map that reduces the dimensionality of the data while preserving the sparse reconstruction. The optimization problem in Eq. (17) is then reformulated in the reduced space.
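To make the idea tangible, here is a toy sketch of a linear map that preserves sparse reconstructions, in the spirit of sparsity-preserving projections; this objective is my assumption, not the slides' exact Eq. (17), and `sparsity_preserving_map` is an invented name.

```python
# Toy sparsity-preserving embedding: compute sparse reconstruction weights in
# the input space, then keep the directions along which those reconstructions
# are best preserved (smallest residual scatter).
import numpy as np
from sklearn.linear_model import Lasso

def sparsity_preserving_map(X, dim=10, lam=0.01):
    """X: (d, n) data as columns. Returns a projection P: (dim, d)."""
    n = X.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        coef = Lasso(alpha=lam, fit_intercept=False,
                     max_iter=5000).fit(X[:, others], X[:, i]).coef_
        W[others, i] = coef
    R = np.eye(n) - W
    M = X @ R @ R.T @ X.T            # scatter of the reconstruction residuals
    _, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return eigvecs[:, :dim].T        # directions minimizing residual scatter
```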

For a given test set, we can adaptively learn the embedding map, the optimal dictionary, and the sparse reconstruction coefficients by solving a joint optimization problem.

The feature extraction and classification algorithm
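Putting the pieces together, a hedged end-to-end sketch of the framework's flavor (all names are illustrative, and PCA is a stand-in for the learned embedding map; the actual framework learns the map, dictionary, and codes jointly):

```python
# End-to-end flavor of the framework: project the data, use the projected
# training samples as the dictionary (optionally refined with ksvd_iteration
# above), and classify test samples by class-wise reconstruction residuals.
import numpy as np
from sklearn.decomposition import PCA

def train_and_classify(X_train, labels, X_test, dim=30):
    """X_train: (n, d) rows are samples; labels: (n,); X_test: (m, d)."""
    pca = PCA(n_components=dim).fit(X_train)          # stand-in embedding map
    D = pca.transform(X_train).T                      # projected samples as columns
    D = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
    # src_classify is the SRC sketch given earlier
    return np.array([src_classify(D, labels, y) for y in pca.transform(X_test)])
```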

Experiments for unsupervised learning
• The effect of dictionary selection
• Compare with pure feature extraction

Database descriptions
UCI databases: the Gas Sensor Array Drift Data Set and the Synthetic Control Chart Time Series Data Set.

The effect of dictionary selection

Compare with pure feature extraction

Experiments
• The effect of dictionary selection
• Compare with pure classification
• Compare with pure feature extraction

Database descriptions

The effect of dictionary selection

Compare with pure classification

Compare with pure feature extraction

Thanks!

Questions & suggestions?