Sparse Recovery for Earth Mover Distance
Eric Price (MIT), Piotr Indyk (MIT), Rishi Gupta (MIT)
MADALGO – Center for Massive Data Algorithmics, a Center of the Danish National Research Foundation

Sparse Recovery
- Original signal x = x_k + u, where x_k has k large coefficients and u is noise.
- Acquire measurements Ax = y. If |x| = n, then A is an m x n matrix, and usually m = O(k log n) << n.
- Recover a vector x* from y. The usual guarantee is ||x* - x||_1 ≤ C ||u||_1 = C min_{k-sparse x'} ||x' - x||_1 for some constant C. In other words, x* is close to the best possible k-sparse approximation of x.

Earth Mover's Distance (EMD)
- x represents a 2-dimensional image.
- If x_1 (green) and x_2 (purple) are binary images with ||x_1||_1 = ||x_2||_1, then ||x_1 - x_2||_EMD is the cost of the minimum-cost matching between their pixels, shown in the poster figure to the right.
- There are natural generalizations to non-binary vectors with unequal L_1 mass.

Problem Statement
- We want to perform sparse recovery as above, but we would like to recover an x* that is close to x under EMD rather than under L_1.
- EMD corresponds well to our perceptual notion of image similarity and is used in computer vision algorithms. By contrast, even small translations of an image produce an almost maximal L_1 difference [4].

Algorithm

High Level Overview
- Use a distance-preserving embedding to map the problem under EMD to a problem under the L_1 norm.
- Solve the result as a standard sparse recovery problem in the L_1 norm.

The Pyramid Mapping
- For each j = 0, 1, 2, ..., form a grid with cell side length d_j = 2^j and a mapping P_j, where each entry of P_j x is the total mass of one cell of the grid.
- The mapping Px is the concatenation [d_0 P_0 x, d_1 P_1 x, d_2 P_2 x, ...].
- If the grids are shifted by a random vector, then E[||Px_1 - Px_2||_1] ≤ O(log n) · ||x_1 - x_2||_EMD and ||Px_1 - Px_2||_1 ≥ ||x_1 - x_2||_EMD for images x_1 and x_2 [3].
- Hence P is a linear, distance-preserving embedding of EMD into the L_1 norm. Moreover, if x is k-sparse, then Px is O(k log n)-sparse.
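The pyramid mapping is simple to state in code. Below is a minimal NumPy sketch, not taken from the poster or from [2]: it assumes a square w x w image with w a power of 2, it omits the random grid shift used in the analysis, and the name pyramid_map is ours.

```python
import numpy as np

def pyramid_map(img):
    """Pyramid vector Px of a w x w image (w a power of 2).

    For each level j = 0, 1, ..., log2(w): partition the image into cells of
    side d_j = 2**j, sum the mass inside each cell (this is P_j x), scale the
    block by d_j, and concatenate all blocks.  The random shift of the grids
    used in the embedding analysis is omitted for simplicity.
    """
    w = img.shape[0]
    assert img.shape == (w, w) and (w & (w - 1)) == 0, "expects a square power-of-2 image"
    blocks = []
    c = 1                                  # cell side length d_j, starting at 2**0
    while c <= w:
        # (w/c) x (w/c) array of per-cell mass sums
        cells = img.reshape(w // c, c, w // c, c).sum(axis=(1, 3))
        blocks.append(c * cells.ravel())   # weight level j by d_j
        c *= 2
    return np.concatenate(blocks)

# Toy example: two single-pixel images whose pixels are one step apart (EMD = 1).
x1 = np.zeros((8, 8)); x1[2, 3] = 1.0
x2 = np.zeros((8, 8)); x2[2, 4] = 1.0
print(np.abs(pyramid_map(x1) - pyramid_map(x2)).sum())   # L1 gap between the pyramid vectors
```

Note that level 0 uses cells of side d_0 = 1, so the finest block of Px reproduces x exactly; the end-to-end sketch after the references uses this as a simple stand-in for the "inverse" pyramid mapping.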
Full Algorithm
- Let A be any matrix enabling O(k log n)-sparse recovery, e.g. from [1].
- Take measurements y = APx, where P is the pyramid mapping above.
- Recovery: compute "A^{-1} y" using a sparse recovery algorithm, and then compute x* = P^{-1} A^{-1} y using an "inverse" pyramid mapping. (A rough end-to-end sketch appears after the references.)
- In our experiments, we use the following constraints during the sparse-recovery portion of the algorithm to reduce the number of measurements |y|:
  - x is non-negative for real-life images.
  - The large coefficients of Px form a "tree".

Results

Theoretical
- We find a set of m x n matrices B, with m = O(k log(n/k)), such that for any x, given Bx, we can compute an x* satisfying ||x* - x||_EMD ≤ C min_{k-sparse x'} ||x' - x||_EMD with high probability, in O(k log n) time, for a constant C.

Experimental
- We use Sequential Sparse Matching Pursuit (SSMP) [1] for the sparse-recovery portion of the algorithm.
- We test the algorithm on images such as the one shown below left on the poster. After recovery, we attempt to locate five star-like objects in x*.
- Each point on the graph on the right of the poster is the median, over 15 trials, of the average distance from the recovered stars to the actual stars. If fewer than 5 stars are found, the distance is taken to be infinite and the point is not displayed.
- The EMD algorithm recovers the stars using substantially fewer measurements than standard SSMP.

References
[1] Berinde, Indyk. Sequential sparse matching pursuit. Allerton.
[2] Gupta, Indyk, Price. Sparse recovery for earth mover distance. Allerton.
[3] Indyk, Thaper. Fast image retrieval via embeddings. ICCV.
[4] Rubner, Tomasi, Guibas. The earth mover's distance as a metric for image retrieval. IJCV.
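As a rough end-to-end illustration of the Full Algorithm section above, here is a hedged sketch; every choice in it is our simplification rather than the poster's method. It uses a dense Gaussian measurement matrix instead of a sparse one, a generic orthogonal matching pursuit routine (omp) standing in for SSMP [1], no non-negativity or tree-structure constraints, and a crude "inverse" pyramid mapping that just reads off the finest-scale block of the recovered pyramid vector. It assumes pyramid_map from the earlier sketch is in scope; the function names and parameters are hypothetical.

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily find an s-sparse z with Az ~ y.

    A simple, generic stand-in for the SSMP recovery routine [1].
    """
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))       # column most correlated with the residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    z = np.zeros(A.shape[1])
    z[support] = coef
    return z

def emd_sparse_recovery_demo(img, k, m, seed=0):
    """Measure y = A P x with m linear measurements, then recover the image."""
    rng = np.random.default_rng(seed)
    w = img.shape[0]
    px = pyramid_map(img)                        # Px is roughly (k log n)-sparse when x is k-sparse
    A = rng.normal(size=(m, px.size)) / np.sqrt(m)   # generic Gaussian measurement matrix
    y = A @ px                                   # the compressed measurements
    s = k * (int(np.log2(w * w)) + 1)            # sparsity budget ~ k log n for the pyramid vector
    px_hat = omp(A, y, s)                        # "A^{-1} y": sparse recovery in the pyramid domain
    return px_hat[: w * w].reshape(w, w)         # finest level (d_0 = 1) as a crude P^{-1}

# Example: a 32 x 32 image with k = 3 bright pixels and m = 300 measurements.
img = np.zeros((32, 32))
img[2, 3] = img[9, 12] = img[28, 20] = 1.0
rec = emd_sparse_recovery_demo(img, k=3, m=300)
print(np.unravel_index(np.argsort(rec.ravel())[-3:], img.shape))   # three brightest recovered pixels
```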