Presentation transcript:

Sparse and Redundant Representations and Their Applications in Signal and Image Processing (236862) Section 4: From Exact to Approximate Sparse Solutions Winter Semester, 2018/2019 Michael (Miki) Elad

Meeting Plan
- Quick review of the material covered
- Addressing issues raised by other learners
- Answering questions from the students and getting their feedback
- Discussing new material: sub-gradients and LARS
- Administrative issues

Overview of the Material: From Exact to Approximate Sparse Solutions
- General Motivation – Why Approximate?
- Pursuit Algorithms: OMP and BP Extensions
- IRLS Solution of the Basis Pursuit
- IRLS Solution of the Basis Pursuit – A Demo
- The Unitary Case – A Source of Inspiration (Part 1)
- The Unitary Case – A Source of Inspiration (Part 2)
- ADMM Solution of the Basis Pursuit

Issues Raised by Other Learners: OMP for P0

Question: The lecturer mentions that the OMP needs some adjustment to solve P0-epsilon, but I don't see any change – the original algorithm already has a stopping condition of the form norm(residual) ≤ epsilon.

Answer: He is correct – the very same algorithm is used in both cases, and the delicate difference is only in the stopping criterion.
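To make this concrete, here is a minimal OMP sketch (Python with NumPy; the function name and arguments are illustrative, not the course code), in which the only thing distinguishing the exact-recovery use from the P0-epsilon use is the value of the stopping threshold eps:

import numpy as np

def omp(A, b, eps=1e-6, max_nonzeros=None):
    """Orthogonal Matching Pursuit: greedily build a sparse x with A @ x ~= b.
    The same loop serves P0 (run until the residual is essentially zero) and
    P0-epsilon (stop once ||residual|| <= eps)."""
    n, m = A.shape
    if max_nonzeros is None:
        max_nonzeros = n
    residual = b.copy()
    support = []
    x = np.zeros(m)
    while np.linalg.norm(residual) > eps and len(support) < max_nonzeros:
        # Pick the column most correlated with the current residual
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0          # do not re-pick chosen atoms
        support.append(int(np.argmax(correlations)))
        # Least-squares fit on the current support, then update the residual
        coeffs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(m)
        x[support] = coeffs
        residual = b - A @ x
    return x, support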

Issues Raised by Other Learners: Equivalence – Analyzing the THR Algorithm (experiment analysis)

Question: The diagram shown in the video presents the L2-error of the experiment, so the lower the graph, the smaller the estimation error. If my understanding is correct, the diagram says that the higher the contrast in the true solution, the better the performance of the algorithm. This means the performance of THR improves a little at high contrast, while the performance of OMP gets slightly worse at high contrast but is still much better than that of THR. It looks like my understanding contradicts the analysis?

Your Questions and Feedback

New Material? The LARS Algorithm

Among the various problems presented in this section, we also met the one called (Q1): min_x λ||x||_1 + (1/2)||Ax − b||_2^2. This is a convex problem, and there are various ways to solve it. We present here an algorithm for obtaining the full path of solutions, i.e., solving this problem for all values of λ. It sounds like too much to ask! Think about it – we do something close to this in OMP, when we add one non-zero at a time to the solution. We stress: the proposed solution is an exact solver of (Q1).

Step 1: Have You Heard of Sub-Gradients? Example: the absolute value f(x) = |x|, discussed below.
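For reference, the standard definition this step relies on, together with the absolute-value example (standard textbook material, restated here):

% A vector g is a sub-gradient of a convex f at x_0 if it defines a global under-estimator:
\[
\partial f(x_0) \;=\; \left\{\, g \;:\; f(x) \ge f(x_0) + g^{T}(x - x_0) \ \ \forall x \,\right\}
\]
% Example: f(x) = |x| (scalar). Away from the origin there is a single slope,
% while at x_0 = 0 every slope between -1 and +1 stays below |x|:
\[
\partial |x_0| \;=\;
\begin{cases}
\{\operatorname{sign}(x_0)\} & x_0 \neq 0,\\[2pt]
[-1,\,1] & x_0 = 0.
\end{cases}
\]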

Step 2: What to Do with Sub-Gradients?

Theorem: Given a convex function f(x) which is to be minimized, the optimal solution x* must satisfy 0 ∈ ∂f(x*), i.e., the zero vector belongs to the set of sub-gradients at x*.

Think about this: For smooth functions, where the sub-gradient at every location x is a single vector (the gradient), this is just like requiring the gradient to be zero. In the more general case, where some points have many "slopes", the sub-gradient at these locations is a set, and now we require this set to include the null vector.

Sub-Gradients: A Small Exercise

Find the optimal value of a simple non-smooth scalar problem; we should consider three options (see the worked example below).
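A representative scalar exercise of this type (assuming the usual soft-thresholding example; the exact problem used in the lecture may differ) is minimizing f(x) = (1/2)(x − b)^2 + λ|x|, worked through the three options x > 0, x < 0, and x = 0:

% Case 1: x > 0.  0 \in \partial f(x) gives x - b + \lambda = 0, so x = b - \lambda
%         (valid only when b > \lambda).
% Case 2: x < 0.  x - b - \lambda = 0, so x = b + \lambda (valid only when b < -\lambda).
% Case 3: x = 0.  We need 0 \in (0 - b) + \lambda[-1, 1] = [-b-\lambda,\, -b+\lambda],
%         i.e. |b| \le \lambda.
% Together, the three cases give the soft-thresholding (shrinkage) rule:
\[
x^{\star} \;=\; \mathcal{S}_{\lambda}(b) \;=\; \operatorname{sign}(b)\,\max\!\left(|b| - \lambda,\, 0\right).
\]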

Step 3: Back to Our Problem

Thus, we characterize the optimal solution of (Q1) as the one for which zero belongs to the set of sub-gradients: A^T(Ax − b) + λz = 0, where z is a sub-gradient of ||x||_1, i.e., z_i = sign(x_i) wherever x_i ≠ 0 and z_i ∈ [−1, +1] wherever x_i = 0. This will become very clear as we start using this condition.
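A small Python helper (a sketch; the function name and tolerance are made up for illustration) that checks this condition numerically for a candidate solution:

import numpy as np

def q1_optimality_gap(A, b, x, lam, tol=1e-8):
    """Check the sub-gradient condition A^T(Ax - b) + lam*z = 0 for a candidate x.
    On the support, z_i must equal sign(x_i); off the support, the implied
    z_i = -a_i^T(Ax - b)/lam only needs to lie in [-1, 1]."""
    g = A.T @ (A @ x - b)
    on = np.abs(x) > tol
    on_gap = np.max(np.abs(g[on] + lam * np.sign(x[on]))) if on.any() else 0.0
    off_gap = max(0.0, np.max(np.abs(g[~on])) / lam - 1.0) if (~on).any() else 0.0
    return on_gap, off_gap   # both are ~0 at the optimum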

Step 4: Start from the Zero Solution

For a large enough λ, the solution is simply zero: x*(λ) = 0. Our question: as we decrease the value of λ, when will the solution start changing?

Answer: Let's look at the sub-gradient and the optimality condition. With x = 0 it reads λz = A^T b, i.e., z = A^T b / λ. As long as ||z||_∞ ≤ 1, that is λ ≥ λ_0 = ||A^T b||_∞, the optimal solution is the zero vector, and we get the value of z as a by-product.
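A quick numerical illustration of this starting point (random data, illustrative only): compute λ_0 = ||A^T b||_∞ and observe that z = A^T b / λ stays inside [−1, 1] exactly as long as λ ≥ λ_0:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)

# With x = 0 the condition A^T(Ax - b) + lam*z = 0 reduces to z = A^T b / lam.
lam0 = np.max(np.abs(A.T @ b))        # the largest lambda at which some |z_i| reaches 1

for lam in (2 * lam0, lam0, 0.5 * lam0):
    z = A.T @ b / lam
    print(f"lambda = {lam:8.3f}   max|z_i| = {np.max(np.abs(z)):.3f}   "
          f"zero still optimal: {np.max(np.abs(z)) <= 1 + 1e-12}")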

Step 5: Awakening

As we get to λ = λ_0, one of the entries in z touches the boundary (either +1 or −1). Let's assume that only one entry in z does that: z_i = +1 or −1. If we plan to decrease λ further, the solution MUST CHANGE, since otherwise z will take values outside [−1, 1]. The change: x_i is awakened, while all the rest remain zero, and the sign of x_i is chosen to match z_i. Let's focus now on the following questions:
- What will be the value of x_i as a function of λ?
- When will the solution's support have to change again?

Step 5: Awakening (cont.)

The optimality condition still reads A^T(Ax − b) + λz = 0, where x now has a single non-zero entry x_i and z_i = sign(x_i).

Step 5: Awakening (cont.)

Let's look at the i-th row of this system: a_i^T(a_i x_i − b) + λ z_i = 0. Recall: x_i ≠ 0, and thus z_i = sign(x_i) is fixed. Solving gives x_i(λ) = (a_i^T b − λ z_i)/||a_i||_2^2 – observe the linearity of x_i w.r.t. λ.

Now let's look at all the other equations: plug in the value of x_i(λ), and identify the next λ value that leads to a boundary value (±1) in one of the off-support entries z_c.

Step 6: Proceeding

We are now into the process, and have several non-zeros, whose locations are denoted as the support S. In our notation, x_S holds the non-zeros and all the rest are zeros; thus z_S = sign(x_S). Following the same steps as before, we break the sub-gradient expression into two parts – on-support and off-support:
- On the support: A_S^T(A_S x_S − b) + λ z_S = 0
- Off the support: A_{S^c}^T(A_S x_S − b) + λ z_{S^c} = 0, with |z_{S^c}| ≤ 1 element-wise

Step 7: Next Break-Point

What is the next value λ_{k+1} where a change takes place? We seek the largest possible value of λ (satisfying λ < λ_k) that brings one of the off-support elements of z to +1 or −1. This can be tested for each off-support entry, as spelled out below.
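Spelling the test out, using the on/off-support equations above (x_{LS} denotes the plain least-squares solution on S, and c_j, d_j are shorthands introduced here for convenience):

% On the support, the condition gives a solution that is affine in lambda:
\[
x_S(\lambda) \;=\; (A_S^T A_S)^{-1}\!\left(A_S^T b - \lambda z_S\right)
          \;=\; x_{LS} - \lambda\,(A_S^T A_S)^{-1} z_S .
\]
% Substituting into the off-support entries (column a_j, j \notin S):
\[
z_j(\lambda) \;=\; \frac{a_j^T\!\left(b - A_S x_S(\lambda)\right)}{\lambda}
           \;=\; \underbrace{\frac{a_j^T\!\left(b - A_S x_{LS}\right)}{\lambda}}_{c_j/\lambda}
             \;+\; \underbrace{a_j^T A_S (A_S^T A_S)^{-1} z_S}_{d_j} .
\]
% Setting z_j(\lambda) = \pm 1 and solving for lambda gives the candidate breakpoints:
\[
\lambda_j^{\pm} \;=\; \frac{c_j}{\pm 1 - d_j},
\qquad
\lambda_{k+1} \;=\; \max\left\{\lambda_j^{\pm} \;:\; j \notin S,\ 0 < \lambda_j^{\pm} < \lambda_k \right\}.
\]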

LARS: Comments

The process we have described should remind you very much of the OMP, due to two main features:
- We introduce one non-zero at a time to the solution as λ gets smaller
- The intermediate solution looks almost like a least-squares solution on the support: x_S(λ) = (A_S^T A_S)^{-1}(A_S^T b − λ z_S), i.e., the LS solution with a λ-dependent correction

We relied on two simplifying assumptions in our description:
- When the solution changes, only one non-zero is introduced. This can be easily fixed to manage any number of them
- We did not consider the 'death' of non-zeros (entries leaving the support), which is a possibility

LARS: Comments

If A is of size n×m, then LARS stops when the number of non-zeros reaches n – why? When λ → 0 the solution is such that Ax = b, and then the problem we really solve is: min ||x||_1 s.t. Ax = b. What do we know about solutions of such problems? There MUST be an optimal solution of this problem with at most n non-zeros (it can be posed as a linear program, which always admits a basic optimal solution), which is exactly the claim made here.

LARS: The Practice
Run LARS-DEMO.m
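LARS-DEMO.m is the course's MATLAB demo and is not reproduced here. The following is a rough Python sketch of the same homotopy idea under the simplifying assumptions stated above (exactly one entry enters the support per breakpoint, and no entry ever leaves); names and tolerances are illustrative:

import numpy as np

def lars_homotopy_sketch(A, b, lam_min=1e-6):
    """Trace the (Q1) solution path as lambda decreases, returning a list of
    (lambda, x) breakpoints. Simplified: one entry enters per breakpoint,
    no 'death' of non-zeros is handled."""
    n, m = A.shape
    path = []
    # Step 4: for lambda >= ||A^T b||_inf the solution is the zero vector.
    corr = A.T @ b
    lam = np.max(np.abs(corr))
    path.append((lam, np.zeros(m)))
    support = [int(np.argmax(np.abs(corr)))]
    signs = [float(np.sign(corr[support[0]]))]
    while lam > lam_min:
        As, zs = A[:, support], np.array(signs)
        G_inv = np.linalg.inv(As.T @ As)
        x_ls = G_inv @ (As.T @ b)     # least-squares part of x_S(lambda)
        dx = G_inv @ zs               # x_S(lambda) = x_ls - lambda * dx
        # Off-support sub-gradients have the form z_j(lambda) = c_j/lambda + d_j
        others = [j for j in range(m) if j not in support]
        lam_next, j_next, s_next = lam_min, None, None
        if others:
            c = A[:, others].T @ (b - As @ x_ls)
            d = A[:, others].T @ (As @ dx)
            for idx, j in enumerate(others):
                for s in (+1.0, -1.0):        # z_j hits +1 or -1
                    denom = s - d[idx]
                    if abs(denom) > 1e-12:
                        cand = c[idx] / denom
                        if lam_next < cand < lam - 1e-12:
                            lam_next, j_next, s_next = cand, j, s
        lam = lam_next
        x = np.zeros(m)
        x[support] = x_ls - lam * dx
        path.append((lam, x))
        if j_next is None or len(support) >= n:
            break
        support.append(j_next)
        signs.append(s_next)
    return path

Each returned pair (λ, x) is a breakpoint of the solution path; between consecutive breakpoints, x_S(λ) varies linearly in λ, exactly as derived in Steps 5-7 above.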

Administrative Issues
- Registration to the second course – this is urgent
- You are required to conclude the projects in course 1:
  - The first project is due on Nov. 25th
  - The second project is due on Dec. 16th
- Moving to the second course:
  - Next week (29/11) we discuss Section 5
  - We meet again on 20/12 to resume with the second half, with the assumption that you have completed the first section of that course
- A reminder: the due date for the research project is April 30th, 2019 – no delays are allowed