Presentation transcript: "Confident Kernel Sparse Coding and Dictionary Learning" (Bielefeld University, Germany)

1 Confident Kernel Sparse Coding and Dictionary Learning
Babak Hosseini, Prof. Dr. Barbara Hammer
Singapore, 20 Nov. 2018
Cognitive Interaction Technology Centre of Excellence (CITEC), Bielefeld University, Germany

2 Outline
Introduction · Confident Dictionary Learning · Experiments · Conclusion

3 Introduction
Introduction · Confident Dictionary Learning · Experiments · Conclusion

4 Introduction: Dictionary learning and sparse coding
X: input signals
U: dictionary matrix
Γ: sparse codes
Reconstructing X using sparse resources from U
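As a reference point, a standard way to write this reconstruction problem (a generic sketch; the constraints used in this talk, e.g. kernelization or non-negativity, may differ):

```latex
\min_{U,\,\Gamma}\; \|X - U\Gamma\|_F^2
\quad \text{s.t.} \quad \|\gamma_i\|_0 \le T \;\;\forall i,
```

where each column γ_i of Γ is the sparse code of the corresponding signal x_i, and T bounds the number of dictionary atoms each signal may use.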

5 Introduction: Dictionary learning and sparse coding
X: input signals
U: dictionary matrix
Γ: sparse codes
Reconstructing X using sparse resources from U
[Figure: a signal x reconstructed as the dictionary U times a sparse code γ]

6 Introduction: Dictionary learning and sparse coding
X: input signals
U: dictionary matrix
Γ: sparse codes
Estimating Γ → sparse coding
Estimating U → dictionary learning
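A minimal numerical sketch of this alternation (illustrative only; the MOD-style dictionary update and the helper names are assumptions, not the method of the talk):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def dictionary_learning(X, n_atoms=32, n_nonzero=5, n_iter=20, seed=0):
    """Alternate sparse coding (fix U, estimate Gamma) and dictionary update (fix Gamma, estimate U).

    X: (d, n) matrix of input signals, one signal per column.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    U = rng.standard_normal((d, n_atoms))
    U /= np.linalg.norm(U, axis=0)                       # unit-norm atoms
    for _ in range(n_iter):
        # Sparse coding: Gamma = argmin ||X - U Gamma|| with at most n_nonzero atoms per signal
        Gamma = orthogonal_mp(U, X, n_nonzero_coefs=n_nonzero)
        # Dictionary update (MOD-style least squares), then re-seed unused atoms and re-normalize
        U = X @ np.linalg.pinv(Gamma)
        dead = np.linalg.norm(U, axis=0) < 1e-10
        if dead.any():
            U[:, dead] = rng.standard_normal((d, dead.sum()))
        U /= np.linalg.norm(U, axis=0)
    return U, Gamma
```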

7 Introduction: Kernel dictionary learning / sparse coding
Non-vectorial data, e.g. time series.
Representation: a kernel matrix of pairwise similarities/distances of samples (x1, x2).

8 Introduction: Kernel dictionary learning / sparse coding
Implicit feature mapping → the dictionary is a linear combination of the (mapped) training data.
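A common way to make this concrete (a sketch of the standard kernel dictionary-learning construction; the combination-weight matrix A is conventional notation and may differ from the talk's symbols):

```latex
U = \Phi(X)\,A, \qquad
\|\Phi(X) - U\Gamma\|_F^2
= \|\Phi(X)\,(I - A\Gamma)\|_F^2
= \mathrm{tr}\!\big((I - A\Gamma)^{\top} K \,(I - A\Gamma)\big),
```

with K = Φ(X)ᵀΦ(X) the kernel matrix, so the objective can be evaluated from K alone, without ever forming Φ(X).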

9 Introduction: Discriminative dictionary learning (DDL)
Classification setting with a label matrix L.
Goal: learn a dictionary U that reconstructs X via Γ, together with a mapping Γ: X → L.

10 Introduction: Discriminative dictionary learning (DDL)
Goal: learn a dictionary U that reconstructs X via Γ, together with a mapping Γ: X → L.
A discriminative term (equation on slide) ensures this mapping for the training data; it has access to the label information L.
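A generic template for such objectives (a sketch in the spirit of label-consistent DDL methods such as LC-KSVD; it is not the exact formulation used in this work):

```latex
\min_{U,\,W,\,\Gamma}\;
\|X - U\Gamma\|_F^2
\;+\; \lambda\,\|L - W\Gamma\|_F^2
\;+\; \text{sparsity constraints on } \Gamma,
```

where W is a linear map learned jointly with U so that the sparse codes Γ carry the training data onto the label matrix L.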

11 Introduction: What is the problem?

12 Introduction: What is the problem?
The test and train models are not consistent!

13 Introduction: What is the problem?
The test and train models are not consistent!
The reconstruction of test data does not follow the discriminative mapping.

14 Outline
Introduction · Confident Dictionary Learning · Experiments · Conclusion

15 Confident Dictionary Learning
A new discriminant objective. What is the point of that?!

16 Confident Dictionary Learning
A new discriminant objective.
Reconstruction model (equation on slide): the entries of the resulting class-share vector show the share of each class in the reconstruction of x.

17 Confident Dictionary Learning
A new discriminant objective.
Reconstruction model (equation on slide); this yields a mapping to the label space.
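The slide equations are not recoverable from this transcript; purely as an illustration, one way such a class-share vector and label-space mapping can be written (an assumption, not necessarily the talk's formulation) is:

```latex
s_q(x) \;=\; \sum_{i\,\in\,\text{class } q} \gamma_i,
\qquad
\hat{\ell}(x) \;=\; \frac{\big(s_1(x), \dots, s_c(x)\big)}{\sum_{q} s_q(x)},
```

i.e. with non-negative codes γ, the share of class q is the summed weight of its atoms in the reconstruction of x, and normalizing these shares maps x softly into the c-dimensional label space.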

18 Confident Dictionary Learning
A new discriminant objective.
Minimizing this term → each x is reconstructed mostly by its own class.
Flexible term: x can still use other classes (with a minor share).

19 Confident Dictionary Learning
Consistency?

20 Confident Dictionary Learning
Classification of test data: z is a test sample.
Label of z: the class j with the highest contribution in its reconstruction.

21 Confident Dictionary Learning
Classification of test data: z is a test sample; its label is the class j with the highest contribution in its reconstruction.
Minimizing this term → z is forced to be reconstructed with fewer classes.
Flexible: a small share of other classes can still be used (if required).
Confident toward one class.
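A minimal sketch of this recall phase in Python. The kernel sparse-coding step is approximated here by a non-negative least-squares solve through a Cholesky factor of the kernel matrix, and the class-share rule follows the slide; this is illustrative, not the exact CKSC solver:

```python
import numpy as np
from scipy.optimize import nnls

def classify_test_sample(K_train, k_z, A, atom_labels, n_classes):
    """Assign a test sample z to the class with the largest share in its reconstruction.

    K_train:     (n, n) kernel matrix of the training data.
    k_z:         (n,)  kernel vector [k(x_1, z), ..., k(x_n, z)].
    A:           (n, m) combination weights, i.e. dictionary U = Phi(X) A.
    atom_labels: (m,)  class index of each dictionary atom.
    """
    atom_labels = np.asarray(atom_labels)
    # Non-negative coding of z against the kernelized dictionary:
    # minimizing ||Phi(z) - Phi(X) A gamma||^2 over gamma needs only K_train and k_z.
    C = np.linalg.cholesky(K_train + 1e-8 * np.eye(K_train.shape[0]))
    D = C.T @ A                       # surrogate dictionary: D^T D = A^T K A
    t = np.linalg.solve(C, k_z)       # surrogate embedding of z: D^T t = A^T k_z
    gamma, _ = nnls(D, t)
    # Per-class contribution shares; the label is the class with the largest share.
    shares = np.array([gamma[atom_labels == q].sum() for q in range(n_classes)])
    return int(np.argmax(shares)), shares / (shares.sum() + 1e-12)
```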

22 Confident Dictionary Learning
Convexity? Not convex!

23 Confident Dictionary Learning
Convexity? Not convex!
β = -(most negative eigenvalue of V), computed only once, before the training.
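A sketch of that one-off convexification shift, assuming V is the symmetric matrix appearing in the quadratic discriminant term (names are illustrative):

```python
import numpy as np

def convexity_shift(V):
    """Return beta = -(most negative eigenvalue of V), or 0 if V is already PSD.

    Adding beta * I to V makes the quadratic term positive semi-definite, so the
    per-sample subproblem becomes convex. Computed once, before training starts.
    """
    eigvals = np.linalg.eigvalsh((V + V.T) / 2.0)   # symmetrize for numerical safety
    return max(0.0, float(-eigvals.min()))
```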

24 Confident Dictionary Learning
Consistent? The recall phase uses L too, and its objective term is similar to the training one.
Flexible contributions + confidence criterion → more consistent.

25 Outline
Introduction · Confident Dictionary Learning · Experiments · Conclusion

26 Experiments
Datasets: multi-dimensional time series
Cricket Umpire [1], Articulatory Words [2], Schunk Dexterous [3][4], UTKinect Actions [5], DynTex++ [6]
[1] M. H. Ko, G. W. West, S. Venkatesh, and M. Kumar, "Online context recognition in multisensor systems using dynamic time warping," in ISSNIP'05. IEEE, 2005, pp. 283–288.
[2] J. Wang, A. Samal, and J. Green, "Preliminary test of a real-time, interactive silent speech interface based on electromagnetic articulograph," in SLPAT'14, 2014, pp. 38–45.
[3] A. Drimus, G. Kootstra, A. Bilberg, and D. Kragic, "Design of a flexible tactile sensor for classification of rigid and deformable objects," Robotics and Autonomous Systems, vol. 62, no. 1, pp. 3–15, 2014.
[4] M. Madry, L. Bo, D. Kragic, and D. Fox, "ST-HMP: Unsupervised spatiotemporal feature learning for tactile data," in ICRA'14. IEEE, 2014, pp. 2262–2269.
[5] L. Xia, C.-C. Chen, and J. Aggarwal, "View invariant human action recognition using histograms of 3D joints," in CVPRW'12. IEEE, 2012, pp. 20–27.
[6] B. Ghanem and N. Ahuja, "Maximum margin distance learning for dynamic texture recognition," in ECCV'10. Springer, 2010, pp. 223–236.

27 Experiments
Baselines:
K-KSVD: kernel K-SVD
LC-NNKSC: predecessor of CKSC
EKDL: equiangular kernel dictionary learning
KGDL: dictionary learning on Grassmann manifolds
LP-KSVD: locality-preserving K-SVD

28 Experiments
Baselines (as above): K-KSVD, LC-NNKSC, EKDL, KGDL, LP-KSVD.
Classification accuracy (%) measures the "dictionary discrimination power".

29 Experiments
Interpretability of atoms (IP), computed from the dictionary.
IP of atom i lies in the range [1/c, 1]:
1 if atom i belongs to only one class; 1/c if atom i is related to all classes.

30 Experiments
Interpretability of atoms (IP), in the range [1/c, 1]:
1 if atom i belongs to only one class; 1/c if atom i is related to all classes.
Good discrimination together with good interpretation.
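The IP formula itself did not survive in this transcript; as an illustration only, one measure consistent with the stated range [1/c, 1] is the largest normalized class share of an atom (an assumption, not necessarily the paper's definition):

```python
import numpy as np

def interpretability(class_shares):
    """class_shares: (c,) non-negative contributions of each of the c classes to one atom.

    Returns a value in [1/c, 1]: 1 if the atom is used by a single class only,
    1/c if it is used equally by all c classes.
    """
    p = np.asarray(class_shares, dtype=float)
    p = p / (p.sum() + 1e-12)         # normalize class shares to a distribution
    return float(p.max())
```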

31 Conclusion
Consistency is important for DDL models.
Proposed flexible discriminant terms for DDL.
Proposed a more consistent training-recall framework.
Increase in discriminative performance and in the interpretability of dictionary atoms.

32 Thank you very much! Questions?

33 Confident Kernel Sparse Coding and Dictionary Learning
Babak Hosseini, Prof. Dr. Barbara Hammer
Singapore, 20 Nov. 2018
Cognitive Interaction Technology (CITEC), Bielefeld University, Germany

