Confident Kernel Sparse Coding and Dictionary Learning
Babak Hosseini, Prof. Dr. Barbara Hammer
Cognitive Interaction Technology Centre of Excellence (CITEC), Bielefeld University, Germany
bhosseini@techfak.uni-bielefeld.de
Singapore, 20 Nov. 2018

Outline: Introduction, Confident Dictionary Learning, Experiments, Conclusion.

Section: Introduction.

Introduction: Dictionary learning and sparse coding. X: input signals; U: dictionary matrix; Γ: sparse codes. The aim is to reconstruct X using only a few resources (atoms) from U.

Introduction: Dictionary learning and sparse coding (continued). (Figure: a single signal x approximated by the dictionary U weighted by a sparse code γ.)

Introduction: Dictionary learning and sparse coding. Estimating Γ is the sparse coding step; estimating U is the dictionary learning step.
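
As a reference point, the standard linear dictionary learning and sparse coding problem can be written as below; this is a generic textbook formulation with a sparsity level T, not necessarily the exact constraint set used in this talk:

\[
\min_{U,\,\Gamma}\ \big\| X - U\Gamma \big\|_F^2
\quad \text{s.t.} \quad \|\gamma_i\|_0 \le T \ \ \forall i,
\]

where \(\gamma_i\) is the i-th column of \(\Gamma\). The usual optimization scheme alternates between the two subproblems: solve for \(\Gamma\) with U fixed (sparse coding), then update U with \(\Gamma\) fixed (dictionary learning).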

Introduction: Kernel dictionary learning and sparse coding. For non-vectorial data such as time series, the representation is a kernel matrix of pairwise similarities or distances k(x1, x2).

Introduction: Kernel dictionary learning and sparse coding. Using an implicit mapping into the kernel feature space, the dictionary is expressed as a linear combination of the training data.
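
A sketch of how the kernelization works, assuming the common construction \(U = \Phi(X)A\), i.e., the dictionary is a linear combination of the mapped training data with a coefficient matrix A (the formulation used in this talk may differ in its constraints):

\[
\big\| \Phi(X) - \Phi(X)A\Gamma \big\|_F^2
= \operatorname{tr}\!\big( (I - A\Gamma)^{\top} K\, (I - A\Gamma) \big),
\qquad K = \Phi(X)^{\top}\Phi(X),
\]

so the objective depends on the data only through the kernel matrix of pairwise similarities k(x_i, x_j), which is exactly what is available for non-vectorial data such as time series.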

Introduction: Discriminative dictionary learning (DDL). Classification setting with a label matrix L. Goal: learn a dictionary U that reconstructs X via Γ, where Γ also provides a mapping from X to L.

Introduction: Discriminative dictionary learning (DDL). Goal: learn a dictionary U that reconstructs X via Γ, with Γ mapping X to L. A discriminative term in the objective enforces this mapping for the training data; it has access to the label information L.
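
One common way to write such a discriminative objective is the label-consistent form below, which adds a term mapping the sparse codes to the labels; this is a generic sketch rather than the exact term used in CKSC:

\[
\min_{U,\,W,\,\Gamma}\ \|X - U\Gamma\|_F^2 \;+\; \lambda\,\|L - W\Gamma\|_F^2
\quad \text{s.t.} \quad \|\gamma_i\|_0 \le T,
\]

where L is the label matrix and W a linear map from codes to labels. The second term enforces the discriminative mapping \(\Gamma: X \to L\), but it can only be evaluated during training, because it needs L.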

Introduction: What is the problem?

Introduction: What is the problem? The training and test models are not consistent.

Introduction: What is the problem? The training and test models are not consistent: the reconstruction of test data does not follow the discriminative mapping, since the discriminative term relies on labels that are unavailable at test time.

Section: Confident Dictionary Learning.

Confident Dictionary Learning: A new discriminant objective. What is the point of it?

Confident Dictionary Learning: A new discriminant objective. Reconstruction model: a per-class vector whose entries show the share of each class in the reconstruction of x.

Confident Dictionary Learning: A new discriminant objective. The reconstruction model therefore induces a mapping from the sparse codes to the label space.

Confident Dictionary Learning: A new discriminant objective. Minimizing this term means each x is reconstructed mostly by atoms of its own class. The term is flexible: x can still use other classes, but only with a minor share.
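
To make the class shares explicit, one can collect for each training sample x (with sparse code \(\gamma_x\) and label c(x)) a per-class contribution vector, where \(\mathcal{I}_q\) indexes the atoms assigned to class q, and penalize the contributions coming from other classes. A plausible sketch of this idea, assuming non-negative contributions; the concrete weighting used in CKSC may differ:

\[
h_q(x) \;=\; \sum_{i \in \mathcal{I}_q} [\gamma_x]_i ,
\qquad
\text{discriminant term:}\ \ \sum_{x}\ \sum_{q \neq c(x)} h_q(x),
\]

so that most of the reconstruction of x comes from atoms of its own class c(x), while a minor share from other classes is still permitted rather than being forced to zero.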

Confident Dictionary Learning: Consistency?

Confident Dictionary Learning: Classification of test data. For a test sample z, the predicted label is the class j with the highest contribution in the reconstruction.

Confident Dictionary Learning: Classification of test data. Minimizing the confidence term forces z to be reconstructed using fewer classes. It remains flexible: a small share of other classes can still be used if required, while the code stays confident toward one class.
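
The test-time recall can then be sketched in the same spirit: sparse-code the test sample z with an additional confidence penalty \(\Omega\) that discourages spreading the reconstruction over many classes, and label z by the class with the largest share. Notation as in the sketches above; the exact penalty in CKSC may differ:

\[
\hat\gamma \;=\; \arg\min_{\gamma}\ \big\|\phi(z) - \Phi(X)A\gamma\big\|_2^2 \;+\; \mu\,\Omega(\gamma),
\qquad
\hat\ell(z) \;=\; \arg\max_{q}\ h_q(z),
\]

where \(h_q(z)\) is the class-q share of the reconstruction of z computed from \(\hat\gamma\). Because \(\Omega\) plays the same role at test time as the discriminant term does at training time, the two phases are kept consistent.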

Confident Dictionary Learning: Convexity? The objective is not convex.

Confident Dictionary Learning: Convexity? The objective is not convex as written; it is convexified by setting β to the negative of the most negative eigenvalue of V. β is computed once, before training.
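
The stated fix is the usual diagonal-shift argument: if the non-convex part of the objective is a quadratic form \(\gamma^{\top} V \gamma\) with an indefinite matrix V (most negative eigenvalue \(\lambda_{\min}(V) < 0\)), then choosing

\[
\beta \;=\; -\lambda_{\min}(V)
\quad\Longrightarrow\quad
V + \beta I \;\succeq\; 0
\]

makes \(\gamma^{\top}(V + \beta I)\gamma\) convex in \(\gamma\). Since V is fixed, β only has to be computed once, before training starts. (Identifying V with a specific matrix in the CKSC objective is an assumption of this sketch.)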

Confident Dictionary Learning: Is it consistent? The recall phase also relies on the label information L (through the class structure of the dictionary), and its objective term is similar to the training one: flexible contributions plus a confidence criterion. This makes training and recall more consistent.

Section: Experiments.

Experiments: Datasets (multi-dimensional time series): Cricket Umpire [1], Articulatory Words [2], Schunk Dexterous [3, 4], UTKinect Actions [5], DynTex++ [6].
[1] M. H. Ko, G. W. West, S. Venkatesh, and M. Kumar, “Online context recognition in multisensor systems using dynamic time warping,” in ISSNIP’05. IEEE, 2005, pp. 283–288.
[2] J. Wang, A. Samal, and J. Green, “Preliminary test of a real-time, interactive silent speech interface based on electromagnetic articulograph,” in SLPAT’14, 2014, pp. 38–45.
[3] A. Drimus, G. Kootstra, A. Bilberg, and D. Kragic, “Design of a flexible tactile sensor for classification of rigid and deformable objects,” Robotics and Autonomous Systems, vol. 62, no. 1, pp. 3–15, 2014.
[4] M. Madry, L. Bo, D. Kragic, and D. Fox, “ST-HMP: Unsupervised spatiotemporal feature learning for tactile data,” in ICRA’14. IEEE, 2014, pp. 2262–2269.
[5] L. Xia, C.-C. Chen, and J. Aggarwal, “View invariant human action recognition using histograms of 3D joints,” in CVPRW’12. IEEE, 2012, pp. 20–27.
[6] B. Ghanem and N. Ahuja, “Maximum margin distance learning for dynamic texture recognition,” in ECCV’10. Springer, 2010, pp. 223–236.

Experiments: Baselines: K-KSVD (kernel K-SVD), LC-NNKSC (the predecessor of CKSC), EKDL (equiangular kernel dictionary learning), KGDL (dictionary learning on Grassmann manifolds), and LP-KSVD (locality-preserving K-SVD).

Experiments: Comparison against the above baselines in terms of classification accuracy (%), as a measure of the dictionary's discrimination power.

Experiments: Interpretability of dictionary atoms (IP). For each atom i, IP lies in the range [1/c, 1]: it is 1 if atom i belongs to only one class, and 1/c if atom i is related to all c classes equally.

Experiments: Interpretability of atoms (IP), in the range [1/c, 1]: 1 if an atom belongs to only one class, 1/c if it is related to all classes equally. The results show good discrimination together with good interpretation.
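
One formulation consistent with the stated range: for atom i, let \(s_{q,i} \ge 0\) be the normalized share of class q among the training coefficients that use atom i, with \(\sum_q s_{q,i} = 1\). Then the maximal share

\[
\mathrm{IP}_i \;=\; \max_{q \in \{1,\dots,c\}} s_{q,i} \;\in\; \Big[\tfrac{1}{c},\, 1\Big]
\]

equals 1 when atom i is used by only one class and 1/c when all c classes use it equally. The interpretability score reported in the talk may be a different function of the \(s_{q,i}\), but it has the same extremes.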

Conclusion: Consistency is important for DDL models. We proposed a flexible discriminant term for DDL and a more consistent training-recall framework, which increases both discriminative performance and the interpretability of the dictionary atoms.

Thank you very much! Questions?

Confident Kernel Sparse Coding and Dictionary Learning
Babak Hosseini, Prof. Dr. Barbara Hammer
Cognitive Interaction Technology (CITEC), Bielefeld University, Germany
bhosseini@techfak.uni-bielefeld.de
Singapore, 20 Nov. 2018