
1 Sparse Learning Based on L2,1-norm
Xiaohong Chen

2 Outline
Review of sparse learning
Efficient and robust feature selection via joint l2,1-norm minimization
Exploiting the entire feature space with sparsity for automatic image annotation
Further works

3 Outline
Review of sparse learning
Efficient and robust feature selection via joint l2,1-norm minimization
Exploiting the entire feature space with sparsity for automatic image annotation
Further works

4 Review of Sparse Learning

5

6 Some examples: LeastR, LeastC, glLeastR
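These appear to be solver names from the SLEP sparse-learning package. Assuming so, a sketch of the standard problems they correspond to (my rendering; the slide's own formulas are not in the transcript):

```latex
\text{LeastR:}\quad \min_x \tfrac{1}{2}\|Ax-y\|_2^2 + \lambda\|x\|_1
\qquad
\text{LeastC:}\quad \min_x \tfrac{1}{2}\|Ax-y\|_2^2 \;\;\text{s.t.}\;\; \|x\|_1 \le t
\qquad
\text{glLeastR:}\quad \min_x \tfrac{1}{2}\|Ax-y\|_2^2 + \lambda\sum_{g}\|x_g\|_2
```

LeastR is the lasso, LeastC its constrained form, and glLeastR the group lasso, whose row-wise grouping already hints at what the l2,1-norm does for matrices.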

7 Shortcoming of Sparse Learning
The columns of the projection matrix W are optimized one by one, and their sparsity patterns are independent, so the zero entries are scattered across columns and cannot indicate which of the original features to discard. For example:
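A minimal numpy illustration of the difference (matrix values invented for the example): element-wise sparsity leaves no all-zero rows, while l2,1-style row sparsity directly marks removable features.

```python
import numpy as np

# Hypothetical projection matrices for 5 features and 3 classes.
# Element-wise sparse: zeros are scattered, no row is entirely zero,
# so no original feature can be dropped.
W_elementwise = np.array([[0.9, 0.0, 0.0],
                          [0.0, 0.7, 0.0],
                          [0.0, 0.0, 0.4],
                          [0.2, 0.0, 0.0],
                          [0.0, 0.3, 0.0]])

# Row sparse (the l2,1 pattern): rows 2 and 4 are all zero,
# so those features can be discarded for every class at once.
W_row_sparse = np.array([[0.9, 0.1, 0.0],
                         [0.5, 0.7, 0.2],
                         [0.0, 0.0, 0.0],
                         [0.2, 0.6, 0.3],
                         [0.0, 0.0, 0.0]])

for name, W in [("element-wise", W_elementwise), ("row-sparse", W_row_sparse)]:
    removable = np.where(~W.any(axis=1))[0]
    print(f"{name}: removable features = {removable}")
# element-wise: removable features = []
# row-sparse: removable features = [2 4]
```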

8 Matrix norm
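The slide's formulas did not survive the transcript; for completeness, the standard definitions for a matrix W in R^{d x c} with rows w^i:

```latex
\|W\|_F = \Big(\sum_{i=1}^{d}\sum_{j=1}^{c} W_{ij}^2\Big)^{1/2},
\qquad
\|W\|_1 = \sum_{i=1}^{d}\sum_{j=1}^{c} |W_{ij}|,
\qquad
\|W\|_{2,1} = \sum_{i=1}^{d} \|w^i\|_2 = \sum_{i=1}^{d}\Big(\sum_{j=1}^{c} W_{ij}^2\Big)^{1/2}
```

The l2,1-norm is an l1-norm over the l2-norms of the rows: it drives entire rows of W to zero, which is exactly feature selection.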

9 Outline
Review of sparse learning
Efficient and robust feature selection via joint l2,1-norm minimization
Exploiting the entire feature space with sparsity for automatic image annotation
Further works

10 Efficient and robust feature selection via joint l2,1-norm minimization

11 Robust Feature Selection Based on l2,1-norm
Given training data {x1, x2, …, xn} and the associated class labels {y1, y2, …, yn}, least-squares regression solves the following optimization problem to obtain the projection matrix W. A regularization term R(W) is then added to the robust version of LS:
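Written out (following the formulation of [1], with X = [x1, …, xn] in R^{d x n} and Y in R^{n x c} stacking the labels):

```latex
\min_W \;\|X^\top W - Y\|_F^2
\quad\longrightarrow\quad
\min_W \;\|X^\top W - Y\|_{2,1} + \gamma\, R(W)
```

Using the l2,1-norm on the residual makes each outlier sample contribute linearly rather than quadratically, which is the source of the robustness.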

12 Robust Feature Selection Based on l2,1-norm
Possible regularizations:
Ridge regularization
Lasso regularization
l2,1-norm regularization, which penalizes all c regression coefficients corresponding to a single feature as a whole
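In formulas, using the matrix-norm definitions above:

```latex
R_{\text{ridge}}(W) = \|W\|_F^2,
\qquad
R_{\text{lasso}}(W) = \|W\|_1,
\qquad
R_{2,1}(W) = \|W\|_{2,1} = \sum_{i=1}^{d}\|w^i\|_2
```

Only the last choice couples the c coefficients of a feature, so a feature is kept or discarded as a unit.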

13 Robust Feature Selection Based on l2,1-norm

14 Robust Feature Selection Based on l2,1-norm
Denote the joint l2,1-norm minimization problem
\min_W \; \|X^\top W - Y\|_{2,1} + \gamma \|W\|_{2,1} \qquad (14)

15 Robust Feature Selection Based on l2,1-norm
Then, introducing the diagonal reweighting matrix D with D_{ii} = 1/(2\|w^i\|_2), we have the update equation (19).

16 The iterative algorithm to solve problem (14)
Theorem 1: The algorithm monotonically decreases the objective of the problem in Eq. (14) in each iteration and converges to the global optimum of the problem.
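A compact numpy sketch of this reweighted iteration (my own illustrative implementation, not the paper's code, which uses an equivalent augmented formulation): it alternates between updating diagonal weights from the current row norms and solving the resulting weighted least-squares system, with a small eps guarding against zero rows.

```python
import numpy as np

def rfs(X, Y, gamma=1.0, n_iter=50, eps=1e-8):
    """Iteratively reweighted solver for
        min_W ||X^T W - Y||_{2,1} + gamma * ||W||_{2,1}
    X: (d, n) data matrix, Y: (n, c) label matrix."""
    d, n = X.shape
    # Ridge solution as a reasonable starting point.
    W = np.linalg.solve(X @ X.T + gamma * np.eye(d), X @ Y)
    for _ in range(n_iter):
        E = X.T @ W - Y                                  # residual, rows e^i
        d_E = 1.0 / np.maximum(np.linalg.norm(E, axis=1), eps)
        d_W = 1.0 / np.maximum(np.linalg.norm(W, axis=1), eps)
        # Fixed point of the (sub)gradient equation
        #   X D_E (X^T W - Y) + gamma D_W W = 0:
        A = (X * d_E) @ X.T + gamma * np.diag(d_W)       # X D_E X^T + gamma D_W
        W = np.linalg.solve(A, (X * d_E) @ Y)
    return W

# Features are then ranked by the row norms ||w^i||_2 of the returned W.
```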

17 Proof of Theorem 1

18 Proof of Theorem 1

19 Proof of Theorem 1 (cont.): inequalities (1) and (2), summed as (1)+(2), give the monotone decrease of the objective.
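Both inequalities are instances of the standard lemma behind such reweighting proofs (stated here for completeness, in my notation): for any nonzero vectors u and u_t,

```latex
\|u\|_2 - \frac{\|u\|_2^2}{2\|u_t\|_2} \;\le\; \|u_t\|_2 - \frac{\|u_t\|_2^2}{2\|u_t\|_2}
```

which follows from (||u||_2 - ||u_t||_2)^2 >= 0. Applied row-wise to the residual E and to W and then summed, it turns the decrease of the reweighted surrogate into a decrease of the l2,1 objective in (14).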

20 Experimental results-1

21 Experimental results-2

22 Experimental results-3

23 Outline
Review of sparse learning
Efficient and robust feature selection via joint l2,1-norm minimization
Exploiting the entire feature space with sparsity for automatic image annotation
Further works

24 Exploiting the entire feature space with sparsity for automatic image annotation

25 The illustration of image annotation

26 The illustration of image annotation

27 The illustration of image annotation

28 Formulation
The algorithm can be generalized as the following problem. Applying manifold learning and semi-supervised learning to define the loss function, we obtain the optimization problem:

29 Formulation
The definitions of A and B are given in the paper. With the Lagrangian technique, we have:
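The step that makes the Lagrangian tractable is the standard l2,1 (sub)gradient identity (my notation; the paper may define D slightly differently): for W with nonzero rows w^i,

```latex
\frac{\partial \|W\|_{2,1}}{\partial W} = 2DW,
\qquad
D = \operatorname{diag}\Big(\frac{1}{2\|w^1\|_2}, \dots, \frac{1}{2\|w^d\|_2}\Big)
```

Setting the gradient of the Lagrangian to zero then yields a linear system in W that is solved repeatedly while D is refreshed, the same pattern as the RFS iteration above.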

30 The SFSS Algorithm

31 Because W_{t+1} is the minimizer of the reweighted objective at iteration t, we have:

32 Owing to the fact above and the preceding inequality, incorporating (19) into (18) gives the monotone decrease of the objective. The objective of the framework is convex, so the proposed approach converges to the global optimum.

33 Experimental results-1
MAP: mean average precision
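For reference, the standard definition (not spelled out on the slide): average precision for a single query or label, then the mean over all of them:

```latex
\mathrm{AP}(q) = \frac{1}{|R_q|} \sum_{k=1}^{n} P(k)\,\mathrm{rel}(k),
\qquad
\mathrm{MAP} = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{AP}(q)
```

where P(k) is the precision at rank k, rel(k) is 1 if the item at rank k is relevant and 0 otherwise, and R_q is the set of relevant items for q.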

34 Experimental results-2

35 Experimental results-3

36 Experimental results-4

37 Outline
Review of sparse learning
Efficient and robust feature selection via joint l2,1-norm minimization
Exploiting the entire feature space with sparsity for automatic image annotation
Further works

38 Future works-1
(1) Incorporate sparse learning based on the L2,1-norm into multi-view dimensionality reduction. A risk: a degenerate solution! How can it be avoided?

39 Future works-2
(2) Incorporate the spatial structural information of the features to preserve the continuity of the features.

40 References
[1] F. Nie, D. Xu, X. Cai, and C. Ding. Efficient and robust feature selection via joint l2,1-norm minimization. NIPS 2010.
[2] Z. Ma, Y. Yang, F. Nie, J. Uijlings, and N. Sebe. Exploiting the entire feature space with sparsity for automatic image annotation. Proceedings of the 19th ACM International Conference on Multimedia (MM 2011).
[3] Y. Yang, H. Shen, Z. Ma, Z. Huang, and X. Zhou. L2,1-norm regularized discriminative feature selection for unsupervised learning. IJCAI 2011.
[4] DBLP: Feiping Nie.

41 Thanks! Q&A

