+ Quadratic Programming and Duality Sivaraman Balakrishnan

+ Outline
Quadratic Programs
General Lagrangian Duality
Lagrangian Duality in QPs

+ Norm approximation
Problem: minimize ||Ax - b|| over x.
Interpretations:
Geometric: find the projection of b onto ran(A).
Statistical: find the best solution to b = Ax + v, where v is measurement noise (choose the norm so that v is small in that norm).
Several others.
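For the 2-norm this is ordinary least squares, and the geometric interpretation can be checked directly: the optimal residual is orthogonal to ran(A). A minimal sketch with made-up data (not from the slides):

```python
import numpy as np

# Norm approximation with the 2-norm: minimize ||Ax - b||_2 over x.
# Illustrative random data, not from the slides.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))   # tall matrix: more measurements than unknowns
b = rng.standard_normal(20)

# Least-squares solution; A @ x_hat is the projection of b onto ran(A).
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

residual = b - A @ x_hat
# The residual is orthogonal to ran(A), confirming the projection view.
print(np.allclose(A.T @ residual, 0))
```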

+ Examples
Least-squares regression (l2 norm)
Chebyshev approximation (l_infinity norm)
Least median regression
More generally, *any* convex penalty function of the residual can be used.
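The Chebyshev case is instructive because it reduces to a linear program: minimize t subject to -t <= (Ax - b)_i <= t. A sketch using `scipy.optimize.linprog` on illustrative data (not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

# Chebyshev approximation: minimize ||Ax - b||_inf over x, as the LP
#   min t   s.t.  Ax - b <= t*1  and  -(Ax - b) <= t*1.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2))
b = rng.standard_normal(30)
m, n = A.shape

# Decision variables: [x (n entries), t (1 entry)]; objective is t.
c = np.zeros(n + 1)
c[-1] = 1.0
ones = np.ones((m, 1))
A_ub = np.vstack([np.hstack([A, -ones]),     #  Ax - b <= t
                  np.hstack([-A, -ones])])   # -(Ax - b) <= t
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])

x_cheb = res.x[:n]
# By optimality, the max-residual never exceeds that of the least-squares fit.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.max(np.abs(A @ x_cheb - b)) <= np.max(np.abs(A @ x_ls - b)) + 1e-9)
```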

+ Picture from Boyd and Vandenberghe (BV), Convex Optimization

+ Least norm
Perfect measurements Ax = b, but not enough of them: A is wide, so the system is underdetermined.
Pick the smallest solution: minimize ||x|| subject to Ax = b.
With the l1 norm this is the heart of compressed sensing; it is related to regularized regression in the noisy case.
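For the 2-norm the least-norm solution has a closed form via the pseudoinverse, x = A^T (A A^T)^{-1} b. A sketch on illustrative data (not from the slides):

```python
import numpy as np

# Minimum 2-norm solution of an underdetermined system Ax = b:
# minimize ||x||_2 subject to Ax = b.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 8))   # wide matrix: fewer measurements than unknowns
b = rng.standard_normal(3)

# The pseudoinverse gives the least-norm solution x = A^T (A A^T)^{-1} b.
x_ln = np.linalg.pinv(A) @ b
print(np.allclose(A @ x_ln, b))

# Any other solution (least-norm plus a null-space component) is longer.
null_dir = np.linalg.svd(A)[2][-1]   # a right-singular vector in null(A)
x_other = x_ln + null_dir
print(np.linalg.norm(x_ln) <= np.linalg.norm(x_other))
```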

+ Smooth signal reconstruction
minimize ||x_hat - x_corrupted||^2 + lambda * S(x_hat), where S(x) is a smoothness penalty.
Least-squares penalty S(x) = sum_i (x_{i+1} - x_i)^2: smooths out noise and sharp transitions.
Total variation S(x) = sum_i |x_{i+1} - x_i| (peak-to-valley intuition): smooths out noise but preserves sharp transitions.
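With the least-squares penalty the reconstruction has a closed form: (I + lambda * D^T D) x_hat = y, where D is the first-difference matrix. A sketch on a synthetic signal (assumptions: the signal, noise level, and lambda below are illustrative, not from the slides):

```python
import numpy as np

# Quadratic smoothing: x_hat = argmin ||x - y||^2 + lam * sum (x_{i+1}-x_i)^2,
# solved in closed form as (I + lam * D^T D) x_hat = y.
rng = np.random.default_rng(3)
n, lam = 200, 10.0
clean = np.sin(np.linspace(0, 4 * np.pi, n))
y = clean + 0.3 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)          # (n-1) x n first-difference operator
x_hat = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

def roughness(x):
    return np.sum(np.diff(x) ** 2)

# The reconstruction is smoother than the noisy signal.
print(roughness(x_hat) < roughness(y))
```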

+ Euclidean Projection
A fundamental idea in constrained minimization.
Efficient algorithms exist to project onto many convex sets (norm balls, special polyhedra, etc.).
More generally, finding the minimum distance between two polyhedra is a QP.
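Two of the simplest cases have closed-form projections: the l2 ball (rescale) and a box (clip). A sketch of illustrative helpers (not code from the slides):

```python
import numpy as np

# Euclidean projection onto two simple convex sets.

def project_ball(y, radius=1.0):
    """Project y onto the l2 ball of the given radius centered at 0."""
    norm = np.linalg.norm(y)
    return y if norm <= radius else (radius / norm) * y

def project_box(y, lo, hi):
    """Project y onto the box {x : lo <= x <= hi} (componentwise clip)."""
    return np.clip(y, lo, hi)

y = np.array([3.0, 4.0])
p = project_ball(y)                     # -> [0.6, 0.8], unit norm
q = project_box(y, 0.0, 2.0)            # -> [2.0, 2.0]

# Projection onto a convex set never increases distance to points in the set.
z = np.array([0.1, 0.2])                # a point inside both sets
print(np.linalg.norm(p - z) <= np.linalg.norm(y - z))
print(np.linalg.norm(q - z) <= np.linalg.norm(y - z))
```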

+ Quadratic Programming Duality

+ General recipe
Form the Lagrangian: for min f(x) s.t. g_i(x) <= 0 and h_j(x) = 0,
L(x, lambda, nu) = f(x) + sum_i lambda_i g_i(x) + sum_j nu_j h_j(x).
How to figure out signs? Require lambda_i >= 0, so that for any feasible x we have L(x, lambda, nu) <= f(x) and the Lagrangian lower-bounds the objective.

+ Primal & Dual Functions
Primal: p(x) = max over lambda >= 0, nu of L(x, lambda, nu); this equals f(x) when x is feasible and +infinity otherwise.
Dual: g(lambda, nu) = min over x of L(x, lambda, nu).

+ Primal & Dual Programs
Primal program: min over x of max over lambda >= 0, nu of L(x, lambda, nu). The constraints are now implicit in the primal.
Dual program: max over lambda >= 0, nu of g(lambda, nu).

+ Lagrangian Properties
Can extract the primal and dual problems from the Lagrangian.
The dual problem is always concave. Proof: g(lambda, nu) is a pointwise infimum of functions affine in (lambda, nu), hence concave.
The dual is always a lower bound on the primal (weak duality). Proof: for any feasible x and any lambda >= 0, g(lambda, nu) <= L(x, lambda, nu) <= f(x).
Strong duality gives complementary slackness. Proof: when f(x*) = g(lambda*, nu*), the chain above holds with equality, forcing lambda_i* g_i(x*) = 0 for every i.
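These properties can be checked numerically on a tiny example. The QP below (min x^2 s.t. x >= 1) is my own illustrative instance, not one from the slides; its dual works out in closed form to g(lam) = lam - lam^2/4:

```python
import numpy as np

# Weak and strong duality on a tiny QP:
#   minimize x^2  subject to  x >= 1   (i.e. g(x) = 1 - x <= 0).
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x).
# Minimizing over x (set dL/dx = 0 -> x = lam/2) gives the dual
#   g(lam) = lam - lam^2 / 4.
p_star = 1.0                     # primal optimum, attained at x = 1

def dual(lam):
    return lam - lam ** 2 / 4.0

lams = np.linspace(0, 10, 1001)
vals = dual(lams)

# Weak duality: every dual value lower-bounds the primal optimum.
print(np.all(vals <= p_star + 1e-12))

# Strong duality: the bound is tight, attained here at lam* = 2.
print(np.isclose(np.max(vals), p_star))
```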

+ Some examples of QP duality
Consider the example from class.
Let's derive its dual using the Lagrangian.

+ General PSD QP
Primal: minimize (1/2) x^T Q x + c^T x subject to Ax <= b, with Q positive semidefinite.
Dual (for Q positive definite, so the inner minimization over x has a closed form):
maximize over lambda >= 0 of -(1/2) (c + A^T lambda)^T Q^{-1} (c + A^T lambda) - b^T lambda.
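The dual formula can be sanity-checked on a small strictly convex instance. The data below (Q, c, A, b) is an illustrative example I chose so the single constraint is active at the optimum; it is not the example from class:

```python
import numpy as np

# Check the PSD QP dual numerically:
#   primal: min (1/2) x^T Q x + c^T x  s.t.  A x <= b
#   dual:   max_{lam >= 0} -(1/2)(c + A^T lam)^T Q^{-1} (c + A^T lam) - b^T lam
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])       # single constraint: x1 + x2 <= 1
b = np.array([1.0])
Qinv = np.linalg.inv(Q)

def dual(lam):
    v = c + A.T @ lam
    return -0.5 * v @ Qinv @ v - b @ lam

# Maximize the (concave) one-dimensional dual by grid search over lam >= 0.
grid = np.linspace(0, 10, 10001)
d_star = max(dual(np.array([l])) for l in grid)

# Primal optimum: the unconstrained minimizer -Qinv @ c = (1, 2) violates
# x1 + x2 <= 1, so the constraint is active. Solve the KKT system
#   Q x + c + A^T lam = 0,  A x = b.
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x_star, lam_star = sol[:2], sol[2]
p_star = 0.5 * x_star @ Q @ x_star + c @ x_star

print(lam_star >= 0)                          # active-set guess is consistent
print(np.isclose(d_star, p_star, atol=1e-6))  # strong duality holds
```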

+ SVM – Lagrange Dual
Primal SVM (separable case): minimize (1/2)||w||^2 subject to y_i (w^T x_i + b) >= 1 for all i.
Dual SVM: maximize sum_i alpha_i - (1/2) sum_{i,j} alpha_i alpha_j y_i y_j x_i^T x_j subject to alpha_i >= 0 and sum_i alpha_i y_i = 0.
Recovering primal variables: w = sum_i alpha_i y_i x_i; b from any support vector via y_i (w^T x_i + b) = 1.
Complementary slackness: alpha_i [y_i (w^T x_i + b) - 1] = 0, so alpha_i > 0 only for support vectors (points on the margin).
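On a two-point toy problem the dual collapses to one variable and everything can be checked by hand. The dataset below is an illustrative example of mine, not from the slides: x1 = (0, 0) with y1 = -1 and x2 = (2, 0) with y2 = +1. The constraint sum_i alpha_i y_i = 0 forces alpha1 = alpha2 = a, and the dual objective reduces to W(a) = 2a - 2a^2:

```python
import numpy as np

# Hard-margin SVM dual on a two-point toy problem.
X = np.array([[0.0, 0.0], [2.0, 0.0]])
y = np.array([-1.0, 1.0])

# With alpha1 = alpha2 = a, the dual is W(a) = 2a - (1/2) a^2 ||x2||^2.
a_grid = np.linspace(0, 2, 20001)
W = 2 * a_grid - 2 * a_grid ** 2
a_star = a_grid[np.argmax(W)]          # maximized at a = 1/2

alpha = np.array([a_star, a_star])
w = (alpha * y) @ X                    # w = sum_i alpha_i y_i x_i
# b from the support vector x2: y2 (w^T x2 + b) = 1.
b = 1.0 / y[1] - w @ X[1]

print(w)   # max-margin hyperplane normal, expected [1, 0]
print(b)   # expected -1
# Both points sit exactly on the margin (complementary slackness).
print(np.allclose(y * (X @ w + b), 1.0))
```

Both training points are support vectors here, which is why both multipliers are strictly positive and both margin constraints hold with equality.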