Progress Report #2 Alvaro Velasquez

Project Selection I chose to work with Nasim Souly on the project titled "Subspace Clustering via Graph Regularized Sparse Coding". I chose this topic because its mathematical aspect interested me, and because I believe sparse coding is broadly useful across computer science and possibly graph theory.

Papers Read
- Sparse Subspace Clustering via Group Sparse Coding - Saha et al.
- Graph Regularized Sparse Coding for Image Representation - Zheng et al.
- Least Squares Optimization with L1-Norm Regularization - Mark Schmidt
- Robust Face Recognition via Sparse Representation - Wright et al.

Papers Read
- A Discrete Chain Graph Model for 3d+t Cell Tracking with High Misdetection Robustness - Kausler et al.
- Evaluation of Super-Voxel Methods for Early Video Processing - Xu et al.
- Spectral Clustering of Linear Subspaces for Motion Segmentation - Lauer et al.
- Graph Regularized Nonnegative Matrix Factorization for Data Representation - Cai et al.

Topics Learned
- L0, L1, L2, and Lp norms as constraints
- Conjugate and Laplacian matrices
- Clustering methods
- Sparse coding principles
- Basic spectral graph theory (eigenvalues and eigen-subspaces of the adjacency matrix for image classification)
- Convex minimization (gradient descent, subgradient method, etc.)
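As a small illustration of the spectral graph theory listed above, the sketch below builds the Laplacian of a toy graph with two tightly connected triangles joined by one bridge edge, and uses the sign pattern of the second-smallest eigenvector (the Fiedler vector) to split the nodes into two clusters. The graph itself is invented for illustration; this is not project code.

```python
import numpy as np

# Adjacency matrix of a small graph: triangle {0,1,2}, triangle {3,4,5},
# and a single bridge edge between nodes 2 and 3.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

degrees = A.sum(axis=1)
L = np.diag(degrees) - A            # unnormalized graph Laplacian

# eigh returns eigenvalues in ascending order; the smallest is 0 for a
# connected graph, and the second eigenvector is the Fiedler vector.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]
labels = (fiedler > 0).astype(int)  # sign pattern gives the 2-way cut
print("labels:", labels)
```

Thresholding the Fiedler vector at zero recovers the two triangles as clusters, which is the same principle spectral clustering applies to affinity matrices built from sparse codes.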

Work for This Week I will implement the first steps of video segmentation using sparse coding (no graph regularization yet). To achieve this, I will solve the minimization problem min_X ||Y - DX||_2^2 + lambda*||X||_1, where Y is an image patch, D is the dictionary, and X is the coefficient matrix to be made sparse via L1 minimization. I will test my solution on the SegTrack data set.
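The objective above is the standard lasso/sparse-coding problem, and one simple solver for it is iterative soft-thresholding (ISTA). The sketch below is a minimal, self-contained version on a random toy dictionary (not the SegTrack data or the actual project implementation); the dictionary size and lambda value are arbitrary choices for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, n_iter=500):
    """Minimize ||y - D x||_2^2 + lam * ||x||_1 by iterative soft-thresholding."""
    # Step size 1/L, where L is the Lipschitz constant of the smooth term's gradient.
    L = 2.0 * np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ x - y)           # gradient of ||y - Dx||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example: a signal built from 2 atoms of a random 20x50 dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = ista(D, y, lam=0.05)
print("nonzeros:", np.count_nonzero(np.abs(x_hat) > 1e-6))
```

For the full patch-matrix formulation, the same update applies column by column; in practice one would also consider FISTA or coordinate descent for faster convergence before scaling up to video frames.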