Sparse and Redundant Representations and Their Applications in Signal and Image Processing (236862)
Section 4: From Exact to Approximate Sparse Solutions
Winter Semester, 2017/2018
Michael (Miki) Elad

Meeting Plan
- Quick review of the material covered
- Answering questions from the students and getting their feedback
- Addressing issues raised by other learners
- Discussing new material – LARS
- Administrative issues

Overview of the Material
From Exact to Approximate Sparse Solutions:
- General Motivation – Why Approximate?
- Pursuit Algorithms: OMP and BP Extensions
- IRLS Solution of the Basis Pursuit
- IRLS Solution of the Basis Pursuit – A Demo
- The Unitary Case – A Source of Inspiration (Part 1)
- The Unitary Case – A Source of Inspiration (Part 2)
- ADMM Solution of the Basis Pursuit

Your Questions and Feedback

Issues Raised by Other Learners

OMP for P0:
The lecturer mentions that the OMP needs some adjustment to solve P0-epsilon, but I don't see any change – the original algorithm already has a stopping condition of the form norm(residual) ≤ epsilon.

Equivalence: Analyzing the THR Algorithm – Experiment Analysis:
The diagram shown in the video presents the L2-error of the experiment, meaning that the lower the graph, the smaller the estimation error. If my understanding is correct, the diagram says that the higher the contrast in the true solution, the better the performance of the algorithm: the performance of THR improves a little at high contrast, while the performance of OMP degrades a little at high contrast, yet remains much better than that of THR. It seems my understanding contradicts the analysis of Mr. Michael Elad – please correct me. Where is my misunderstanding?
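To make the stopping-condition discussion concrete, here is a minimal OMP sketch in Python/numpy (the function name `omp` and the column-normalization assumption are mine, not the course's reference implementation). Solving P0-epsilon with OMP amounts to running the usual greedy loop with the residual test ||r||_2 ≤ ε as the halting rule:

```python
import numpy as np

def omp(A, b, eps):
    """Greedy OMP for the P0-epsilon problem:
    add one atom at a time until ||b - A x||_2 <= eps.
    Assumes the columns of A are L2-normalized."""
    n, m = A.shape
    support = []
    x = np.zeros(m)
    r = b.copy()
    while np.linalg.norm(r) > eps and len(support) < n:
        i = int(np.argmax(np.abs(A.T @ r)))          # atom most correlated with residual
        support.append(i)
        As = A[:, support]
        xs, *_ = np.linalg.lstsq(As, b, rcond=None)  # least-squares over current support
        r = b - As @ xs                              # update the residual
        x[support] = xs
    return x
```

In this sketch, setting eps = 0 (up to numerical precision) recovers the exact-P0 variant, so the two versions differ only in when they stop.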

New Material? The LARS Algorithm

Among the various problems presented in this section, we also met the one called (Q1):

(Q1):  \min_x \; \lambda \|x\|_1 + \frac{1}{2} \|b - Ax\|_2^2

This is a convex problem, and there are various ways to solve it. We present here an algorithm for getting the full path of solutions, i.e., solving this problem for all values of λ. It sounds like too much to ask, but think about it – we do something similar in OMP, where we add one non-zero at a time to the solution; something similar will happen here. We stress: the proposed solution is an exact solver of (Q1).

Step 1: Have you Heard of Sub-Gradients?

A vector g is a sub-gradient of a convex function f at x if f(y) ≥ f(x) + g^T (y - x) for all y; the set of all sub-gradients at x is the sub-differential, denoted ∂f(x).

Example: for the scalar function f(x) = |x|,

\partial |x| = \{ \operatorname{sign}(x) \} \ \text{for } x \neq 0, \qquad \partial |x| = [-1, +1] \ \text{at } x = 0.
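The vector case needed below behaves entry-wise; in LaTeX (my rendering, consistent with the z notation and the ±1 boundary values used from Step 3 onward):

```latex
\partial \|x\|_1 = \Big\{ z \in \mathbb{R}^m \;:\;
    z_i = \operatorname{sign}(x_i) \ \text{if } x_i \neq 0, \quad
    z_i \in [-1, +1] \ \text{if } x_i = 0 \Big\}
```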

Step 2: What to do with Sub-Gradients?

Theorem: Given a convex function f(x) which is to be minimized, the optimal solution x* must satisfy

0 \in \partial f(x^*)

Think about this: for smooth functions, where the sub-gradient at every location x is a single vector (the gradient), this is just the requirement that the gradient be zero. In the more general case, where some points have many "slopes", the sub-gradient at these locations is a set, and now we require this set to contain the null vector.
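As a quick worked example (mine, not from the slides), apply this condition to the scalar problem min_x λ|x| + ½(x − b)²; the sub-gradient condition alone yields the familiar soft-thresholding rule, the same shrinkage that appears in the unitary-case videos of this section:

```latex
0 \in \lambda\,\partial|x| + (x - b)
\quad\Longrightarrow\quad
x^\star =
\begin{cases}
b - \lambda, & b > \lambda, \\
0,           & |b| \le \lambda, \\
b + \lambda, & b < -\lambda,
\end{cases}
\quad\text{i.e.}\quad
x^\star = \operatorname{sign}(b)\,\max\!\big(|b| - \lambda,\, 0\big).
```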

Step 3: Back to Our Problem

Thus, we characterize the optimal solution of (Q1) as the one that has the zero in the set of sub-gradients:

0 \in \partial \Big[ \lambda \|x\|_1 + \tfrac{1}{2}\|b - Ax\|_2^2 \Big] = \lambda z + A^T (Ax - b),

where z ∈ ∂‖x‖₁, i.e., z_i = sign(x_i) whenever x_i ≠ 0 and z_i ∈ [-1, +1] whenever x_i = 0. This will become very clear as we start using this condition.
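A small numerical verifier for this characterization (my own sketch; the helper name `check_q1_optimality` is made up): on the support the correlations must equal λ·sign(x), and off the support their magnitude must not exceed λ.

```python
import numpy as np

def check_q1_optimality(A, b, x, lam, tol=1e-8):
    """Check 0 in lam * d||x||_1 + A^T (A x - b), the (Q1) condition."""
    c = A.T @ (b - A @ x)              # the condition reads: c = lam * z
    on = np.abs(x) > tol               # support of x
    ok_on = np.allclose(c[on], lam * np.sign(x[on]), atol=1e-6)
    ok_off = np.all(np.abs(c[~on]) <= lam + 1e-6)
    return bool(ok_on and ok_off)
```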

Step 4: Start from the Zero Solution

For large enough λ, the solution is simply zero: x*(λ) = 0. Our question: as we decrease the value of λ, when will the solution start changing?

Answer: let's look at the sub-gradient and the optimality condition at x = 0:

\lambda z - A^T b = 0 \quad\Longrightarrow\quad z = \frac{1}{\lambda} A^T b

As long as ‖z‖_∞ ≤ 1, i.e., λ ≥ λ_0 = ‖A^T b‖_∞, the optimal solution is the zero vector, and we get the value of z as a by-product.
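In code, the threshold and the by-product take one line each (a numpy sketch with illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))      # example dictionary (sizes are arbitrary)
b = rng.standard_normal(20)

lam_0 = np.max(np.abs(A.T @ b))        # zero is optimal for all lam >= lam_0
z = (A.T @ b) / lam_0                  # by-product z; ||z||_inf == 1 exactly
i = int(np.argmax(np.abs(A.T @ b)))    # the entry that will awaken first (Step 5)
```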

Step 5: Awakening

As we get down to λ = λ_0, one of the entries in z touches the boundary (either +1 or -1). Let's assume that only one entry in z does that: z_i = +1 or -1.

If we plan to decrease λ further, the solution MUST CHANGE, since otherwise z would get values outside [-1, +1]. The change: x_i is awakened, while all the rest remain zero, and x_i's sign is chosen by the sign of z_i.

Let's focus now on the following questions:
- What will be the value of x_i as a function of λ?
- When will the solution's support have to change again?

Step 5: Awakening (cont.)

With x_i as the only non-zero, the optimality condition becomes the system

A^T (a_i x_i - b) + \lambda z = 0,

where a_i denotes the i-th column of A.

Step 5: Awakening (cont.)

Let's look at the i-th row of this system. Recall: x_i ≠ 0, and thus z_i = sign(x_i) is fixed:

a_i^T (a_i x_i - b) + \lambda z_i = 0 \quad\Longrightarrow\quad x_i(\lambda) = \frac{a_i^T b - \lambda z_i}{\|a_i\|_2^2}

Observe the linearity of x_i w.r.t. λ. Now let's look at all the other equations: plug in the value of x_i(λ), and identify the next value of λ that leads to a boundary value in z_c (the off-support entries of z).
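Written out (my rendering of the slide's recipe): each off-support row j makes z_j an explicit function of λ, and the next breakpoint is the largest λ below λ_0 at which some |z_j| reaches 1:

```latex
z_j(\lambda) = \frac{1}{\lambda}\, a_j^T \big( b - a_i\, x_i(\lambda) \big), \qquad j \neq i,
\qquad
\lambda_{\text{next}} = \max \big\{\, \lambda' < \lambda_0 \;:\; |z_j(\lambda')| = 1 \ \text{for some } j \neq i \,\big\}.
```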

Step 6: Proceeding

We are now well into the process, and have several non-zeros, whose locations we denote as the support S. In our notation, x_S holds the non-zeros and all the rest are zeros; thus z_S = sign(x_S).

Following the same steps as before, we break the sub-gradient expression into two parts – on-support and off-support:

A_S^T (A_S x_S - b) + \lambda z_S = 0 \qquad \text{(on support)}

\big| A_{S^c}^T (A_S x_S - b) \big| \le \lambda \qquad \text{(off support, entry-wise)}

LARS: Comments

The process we have described should remind you very much of the OMP, due to two main features:
- We introduce one non-zero at a time into the solution as λ gets smaller
- The intermediate solution looks almost like an LS solution:

x_S(\lambda) = \left( A_S^T A_S \right)^{-1} \left( A_S^T b - \lambda z_S \right)

We relied on two simplifying assumptions in our description:
- When the solution changes, only one non-zero is introduced. This can easily be fixed to handle any number of them
- We did not consider the 'death' of non-zeros, which is a possibility

If A is of size n×m, then LARS stops when the number of non-zeros is n – why?

LARS: The Practice

Run LARS_DEMO.m
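LARS_DEMO.m itself is not reproduced here, so below is a compact homotopy sketch in Python/numpy that follows the steps above under the same two simplifying assumptions (one awakening per breakpoint, no 'death' of non-zeros). The function name `lars_path` and all parameter choices are mine, not the course's reference code:

```python
import numpy as np

def lars_path(A, b, lam_min=1e-6, max_iters=None):
    """Homotopy sketch for min_x lam*||x||_1 + 0.5*||b - A x||_2^2,
    traced for all lam from ||A^T b||_inf down to lam_min.
    Assumes one entry joins the support per breakpoint and none leaves.
    Returns the list of (lam, x) pairs at the breakpoints."""
    n, m = A.shape
    max_iters = max_iters or n
    c = A.T @ b
    lam = np.max(np.abs(c))                 # lam_0: zero is optimal above this
    S = [int(np.argmax(np.abs(c)))]         # first atom to awaken
    zS = np.array([np.sign(c[S[0]])])       # its fixed sign
    path = [(lam, np.zeros(m))]

    while lam > lam_min and len(S) <= max_iters:
        AS = A[:, S]
        G = AS.T @ AS
        u = np.linalg.solve(G, AS.T @ b)    # x_S(lam) = u - lam * v, linear in lam
        v = np.linalg.solve(G, zS)

        # Off-support correlations c_j(lam) = p_j + lam*q_j must stay in [-lam, lam]
        off = [j for j in range(m) if j not in S]
        P = A[:, off].T @ (b - AS @ u)
        Q = A[:, off].T @ (AS @ v)

        lam_next, j_next, s_next = lam_min, None, 0.0
        for p, q, j in zip(P, Q, off):
            for s in (+1.0, -1.0):          # which boundary (+lam or -lam) is hit
                denom = s - q
                if abs(denom) > 1e-12:
                    cand = p / denom        # solves p + lam*q = s*lam
                    if lam_min < cand < lam - 1e-12 and cand > lam_next:
                        lam_next, j_next, s_next = cand, j, s

        lam = lam_next
        x = np.zeros(m)
        x[S] = u - lam * v                  # solution at the new breakpoint
        path.append((lam, x))
        if j_next is None:                  # no further boundary hit above lam_min
            break
        S.append(j_next)                    # awaken the next atom
        zS = np.append(zS, s_next)          # with its boundary sign
    return path
```

Calling `lars_path(A, b)` and plotting each coordinate of x against λ should trace the piecewise-linear coefficient paths of the kind the demo presumably displays; each returned breakpoint pair (λ, x) can also be fed to the `check_q1_optimality` verifier from Step 3 as a sanity test.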

Administrative Issues
- The Final-Project due date is December 20th
- Moving to the Second Course:
  - Next week (30/11) – we discuss Section 5
  - We meet again on 28/12 to resume the course with the second half
- Your Research-Project Assignment