C&O 355 Mathematical Programming Fall 2010 Lecture 9

N. Harvey

Topics
- Semi-Definite Programs (SDP)
- Solving SDPs by the Ellipsoid Method
- Finding vectors with constrained distances

LP is great, but…
Some problems cannot be handled by LPs. Example: find vectors v_1, …, v_10 ∈ R^10 such that
- all vectors have unit length: ‖v_i‖ = 1 for all i;
- the distance between vectors satisfies ‖v_i − v_j‖ ∈ [1/3, 5/3] for all i ≠ j;
- the sum of squared distances is maximized.
It is not obvious how to solve this, and not obvious that LP can help. This problem is child’s play with SDPs.

How can we make LPs more general?
An (equational form) LP looks like:
    max { c^T x : Ax = b, x ≥ 0 }
In English: find a non-negative vector x, subject to some linear constraints on its entries, while maximizing a linear function of its entries.
What object is “more general” than vectors? How about matrices?
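For concreteness, a tiny equational-form LP solved numerically (a sketch using SciPy, which is not part of the lecture; scipy.optimize.linprog minimizes, so we negate c):

```python
import numpy as np
from scipy.optimize import linprog

# max x1 + x2  s.t.  x1 + 2*x2 = 4, x >= 0   (equational form)
c = np.array([1.0, 1.0])
A = np.array([[1.0, 2.0]])
b = np.array([4.0])

# linprog minimizes, so negate the objective; bounds=(0, None) gives x >= 0.
res = linprog(c=-c, A_eq=A, b_eq=b, bounds=(0, None))
print(res.x, -res.fun)   # optimal solution (4, 0) with value 4
```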

Generalizing LPs: Attempt #1
How about this?
    max { c^T X : AX = b, X “≥” 0 }
In English: find a “non-negative matrix” X, subject to some linear constraints on its entries, while maximizing a linear function of its entries.
Does this make sense? Not quite…
- What is a “non-negative matrix”?
- The objective function c^T X is not a scalar.
- AX is not a vector.

What is a “non-negative matrix”?
Let’s define “non-negative matrix” to mean “symmetric, positive semi-definite matrix”. So our “variables” are the entries of a symmetric matrix, and the constraint “X ≥ 0” is replaced by “X is PSD”.
Note: the constraint “X is PSD” is quite weird. We’ll get back to this issue.

Vectorizing the Matrix
A d×d matrix can be viewed as a vector in R^{d²}. (Just write down the entries in some order.) A d×d symmetric matrix can be viewed as a vector in R^{d(d+1)/2}.
Our notation: X is a d×d symmetric matrix, and x is the corresponding vector. For example, with d = 2:
    X = [ 1  2 ]      x = (1, 2, 4)
        [ 2  4 ]
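A minimal sketch of this correspondence in Python (the upper-triangle ordering below is one arbitrary but fixed choice, not necessarily the one used in the lecture):

```python
import numpy as np

def sym_to_vec(X):
    """Stack the upper triangle of a symmetric d x d matrix into a
    vector of length d(d+1)/2."""
    return X[np.triu_indices(X.shape[0])]

def vec_to_sym(x, d):
    """Inverse map: rebuild the symmetric matrix from the vector."""
    X = np.zeros((d, d))
    X[np.triu_indices(d)] = x
    return X + np.triu(X, 1).T  # mirror the strict upper triangle

X = np.array([[1.0, 2.0], [2.0, 4.0]])
x = sym_to_vec(X)                        # [1, 2, 4], a vector in R^3
assert np.allclose(vec_to_sym(x, 2), X)
```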

Semi-Definite Programs
    max { c^T x : Ax = b, X is PSD }    ← this constraint looks suspiciously non-linear
where x ∈ R^n is a vector, A is an m×n matrix, c ∈ R^n and b ∈ R^m, X is a d×d symmetric matrix with n = d(d+1)/2, and x is the vector corresponding to X.
In English: find a symmetric, positive semidefinite matrix X, subject to some linear constraints on its entries, while maximizing a linear function of its entries.

Review of Eigenvalues
A complex d×d matrix M is diagonalizable if M = U D U^{-1}, where D and U have size d×d and D is diagonal. (This expression for M is called a “spectral decomposition”.) The diagonal entries of D are called the eigenvalues of M, and the columns of U are corresponding eigenvectors. An eigenvector of M is a vector y ≠ 0 such that My = λy for some λ ∈ C.
Fact: every real symmetric matrix M is diagonalizable. In fact, we can write M = U D U^T, where D and U are real d×d matrices, D is diagonal, and U^T = U^{-1} (U is “orthogonal”).
Fact: for real symmetric matrices, it is easy to compute the matrices U and D. (Compare the “Cholesky factorization” M = L L^T of a PSD matrix, which is computed much like Gaussian elimination.)
Summary: real symmetric matrices have real eigenvalues and eigenvectors, and they are easily computed.
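A quick numerical illustration of these facts (np.linalg.eigh is NumPy's symmetric eigensolver; it returns real eigenvalues and an orthogonal U):

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # real symmetric
eigvals, U = np.linalg.eigh(M)           # spectral decomposition M = U D U^T
D = np.diag(eigvals)

assert np.allclose(U @ D @ U.T, M)       # M = U D U^T
assert np.allclose(U.T @ U, np.eye(2))   # U is orthogonal: U^T = U^{-1}
print(eigvals)                           # real eigenvalues: [1. 3.]
```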

Positive Semidefinite Matrices (again)
Assume M is a symmetric d×d matrix.
Definition 1: M is PSD iff there exists V such that M = V^T V.
Definition 2: M is PSD iff y^T M y ≥ 0 for all y ∈ R^d.
Definition 3: M is PSD iff all eigenvalues of M are ≥ 0.
Claim: Definition 3 ⇒ Definition 1.
Proof: Since M is symmetric, M = U D U^T, where D is diagonal and its diagonal entries are the eigenvalues. Let W be the diagonal matrix W = D^{1/2}, i.e., W_{i,i} = √(D_{i,i}); this is real because all eigenvalues are ≥ 0. Then M = U W W U^T = U W^T W U^T = (W U^T)^T (W U^T) = V^T V, where V = W U^T. ∎
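The proof's construction, checked numerically (a sketch mirroring V = W U^T):

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric, eigenvalues 1 and 3, both >= 0
eigvals, U = np.linalg.eigh(M)           # M = U D U^T
W = np.diag(np.sqrt(eigvals))            # W = D^{1/2}, real since eigenvalues >= 0
V = W @ U.T                              # the V from the proof
assert np.allclose(V.T @ V, M)           # Definition 1 holds: M = V^T V
```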

Positive Semidefinite Matrices (again)
Assume M is a symmetric d×d matrix.
Definition 1: M is PSD iff there exists V such that M = V^T V.
Definition 2: M is PSD iff y^T M y ≥ 0 for all y ∈ R^d.
Definition 3: M is PSD iff all eigenvalues of M are ≥ 0.
Notation: let M[S,T] denote the submatrix of M with row-set S and column-set T.
Definition 4: M is PSD iff det( M[S,S] ) ≥ 0 for all S.
Definitions 1–3 are very useful. Definition 4 is less useful.
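A brute-force check of Definition 4, mainly to illustrate why it is less useful: it inspects exponentially many subsets S (sketch):

```python
import numpy as np
from itertools import combinations

def psd_by_minors(M, tol=1e-9):
    """Definition 4: every principal minor det(M[S,S]) is >= 0."""
    d = M.shape[0]
    for k in range(1, d + 1):
        for S in combinations(range(d), k):   # 2^d - 1 subsets in total
            if np.linalg.det(M[np.ix_(S, S)]) < -tol:
                return False
    return True

print(psd_by_minors(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True
print(psd_by_minors(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False: det = -3
```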

The PSD Constraint is Convex
Claim: the set C = { x : X is PSD } is a convex set.
Proof: By Definition 2, X is PSD iff y^T X y ≥ 0 for all y ∈ R^d. Note that y^T X y = Σ_{i,j} y_i y_j X_{i,j}, so for each fixed y this is a linear inequality in the entries of X. Each such inequality defines a half-space, which is convex. So C is the intersection of an (infinite!) collection of convex sets, and hence is convex. (See Asst 0.) ∎
Remark: this argument also shows that C is closed.
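A quick numerical sanity check of the claim (a sketch: convex combinations of two PSD matrices remain PSD):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A.T @ A   # PSD by Definition 1
B = rng.standard_normal((4, 4)); B = B.T @ B
for t in np.linspace(0, 1, 11):
    # min eigenvalue of the convex combination stays (numerically) >= 0
    assert np.linalg.eigvalsh(t * A + (1 - t) * B).min() >= -1e-9
```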

What does the PSD set look like?
Consider 2×2 symmetric matrices
    M = [ α  β ]
        [ β  γ ]
and let C = { (α, β, γ) : M is PSD }. By Definition 4, M is PSD iff det( M[S,S] ) ≥ 0 for all S. So C = { (α, β, γ) : α ≥ 0, γ ≥ 0, αγ − β² ≥ 0 }.
Note: definitely not a polyhedral set! (Image from Jon Dattorro, “Convex Optimization & Euclidean Distance Geometry”.)
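A small sketch comparing these determinant inequalities with the eigenvalue test (Definition 3) on random 2×2 symmetric matrices:

```python
import numpy as np

def in_cone(a, b, g):
    """[[a, b], [b, g]] is PSD iff a >= 0, g >= 0 and a*g - b^2 >= 0."""
    return a >= 0 and g >= 0 and a * b * 0 + a * g - b * b >= 0

rng = np.random.default_rng(1)
for _ in range(1000):
    a, b, g = rng.uniform(-2, 2, size=3)
    M = np.array([[a, b], [b, g]])
    assert in_cone(a, b, g) == bool(np.linalg.eigvalsh(M).min() >= 0)
```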

Semi-Definite Programs
Replace the suspicious constraint “X is PSD” with Definition 2:
    max { c^T x : Ax = b, y^T X y ≥ 0 for all y ∈ R^d }
where x ∈ R^n is a vector, A is an m×n matrix, c ∈ R^n and b ∈ R^m, X is a d×d symmetric matrix with n = d(d+1)/2, and x is the vector corresponding to X.
This is a convex program with infinitely many constraints!

History of SDPs
SDPs implicitly appear in this paper: [screenshot of the paper]. We met its author in Lecture 8.

History of SDPs
Scroll down a bit… [screenshot continued]. SDPs were discovered in the U. Waterloo C&O Department!

Solving Semi-Definite Programs
There are infinitely many constraints! But having many constraints doesn’t scare us: we know the Ellipsoid Method. To solve the SDP, we:
- replace the objective function by the constraint c^T x ≥ α, and binary search to find a (nearly) optimal α;
- design a separation oracle.
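A minimal skeleton of this reduction (a sketch; the `feasible` callback is a hypothetical placeholder standing in for one Ellipsoid Method run on { x ∈ P : c^T x ≥ α }):

```python
def binary_search_opt(feasible, lo, hi, tol=1e-6):
    """Reduce optimization to feasibility: feasible(alpha) decides whether
    {x in P : c^T x >= alpha} is nonempty. Assumes the optimum lies in [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid   # optimum is at least mid
        else:
            hi = mid   # optimum is below mid
    return lo

# Toy usage: pretend the true optimum value is 3.7.
print(binary_search_opt(lambda alpha: alpha <= 3.7, 0.0, 10.0))  # ~3.7
```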

Separation Oracle for SDPs
The oracle’s task: given z, is z ∈ P? If not, find a vector a such that a^T x < a^T z for all x ∈ P.
It is easy to test whether Az = b and whether c^T z ≥ α. If one of these fails, then one of the rows a_i, or −a_i, or −c gives the desired vector a.
But how can we test whether y^T Z y ≥ 0 for all y?

Separation Oracle for SDPs
How can we test whether y^T Z y ≥ 0 for all y? Key trick: compute eigenvalues and eigenvectors!
- If all eigenvalues are ≥ 0, then Z is PSD, and y^T Z y ≥ 0 for all y.
- If y is a non-zero eigenvector with eigenvalue λ < 0, then Zy = λy, so y^T Z y = y^T λ y = λ y^T y = λ ‖y‖² < 0. Thus the constraint y^T X y ≥ 0 is violated by Z!
Suppose the constraint y^T X y ≥ 0 is violated by Z. Then what is the vector a? Let w = [ y_1², 2y_1y_2, 2y_1y_3, …, y_2², 2y_2y_3, 2y_2y_4, … ], so that w^T x = y^T X y. Then w^T x ≥ 0 for every x ∈ P, but w^T z < 0. So we can take a = −w.
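A sketch of this oracle in NumPy, assuming the same upper-triangle vectorization as earlier (the lecture's exact coordinate ordering is not pinned down, so that ordering is an assumption):

```python
import numpy as np

def sym_to_vec(X):
    """Upper-triangle vectorization of a symmetric matrix."""
    return X[np.triu_indices(X.shape[0])]

def psd_separation_oracle(Z):
    """Return None if Z is PSD; otherwise a vector a (in the vectorized
    coordinates) with a^T x < a^T z for every x in the PSD set."""
    eigvals, U = np.linalg.eigh(Z)
    if eigvals[0] >= 0:
        return None                       # all eigenvalues >= 0: Z is PSD
    y = U[:, 0]                           # eigenvector of the most negative eigenvalue
    W = np.outer(y, y)
    W = W + W.T - np.diag(np.diag(W))     # w^T x = y^T X y: off-diagonals get weight 2
    return -sym_to_vec(W)                 # a = -w

Z = np.array([[1.0, 2.0], [2.0, 1.0]])    # eigenvalues 3 and -1: not PSD
print(psd_separation_oracle(Z))           # a^T z > 0 >= a^T x for all PSD x
```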

Separation Oracle for SDPs
Summary: SDPs can be solved (approximately) by the Ellipsoid Method in polynomial time. (There are some technical issues relating to irrational numbers and to the radii of balls containing, and contained in, the feasible region.) In practice, SDPs can also be solved efficiently (approximately) by Interior Point Methods.

Example: find vectors v_1, …, v_10 ∈ R^10 of unit length (‖v_i‖ = 1 for all i), with ‖v_i − v_j‖ ∈ [1/3, 5/3] for all i ≠ j, maximizing the sum of squared distances.
Why does this example relate to SDPs? Key observation: PSD matrices correspond directly to vectors and their dot-products. Given vectors v_1, …, v_d in R^d, let V be the d×d matrix whose i-th column is v_i, and let X = V^T V. Then X is PSD and X_{i,j} = v_i^T v_j for all i, j.

Conversely, given a d×d PSD matrix X, find a spectral decomposition X = U D U^T and let V = D^{1/2} U^T. To get vectors in R^d, let v_i be the i-th column of V. Then X = V^T V, so X_{i,j} = v_i^T v_j for all i, j.
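Both directions of the correspondence, checked numerically (a sketch; the vectors recovered from X need not equal the original ones, they only realize the same dot-products):

```python
import numpy as np

rng = np.random.default_rng(2)

# Vectors -> PSD matrix: columns of V are v_1, ..., v_d, and X = V^T V
# is their Gram matrix: X[i, j] = v_i . v_j.
V = rng.standard_normal((4, 4))
X = V.T @ V

# PSD matrix -> vectors: X = U D U^T, then V2 = D^{1/2} U^T gives V2^T V2 = X.
eigvals, U = np.linalg.eigh(X)
V2 = np.diag(np.sqrt(np.clip(eigvals, 0, None))) @ U.T  # clip guards tiny negatives
assert np.allclose(V2.T @ V2, X)
```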

Also, distances and lengths reduce to dot-products: ‖u‖² = u^T u and ‖u − v‖² = (u − v)^T (u − v) = u^T u − 2v^T u + v^T v. So our example is solved by the SDP:
    max  Σ_{i,j} ( X_{i,i} − 2X_{i,j} + X_{j,j} )       (i.e., Σ_{i,j} ‖v_i − v_j‖²)
    s.t. X_{i,i} = 1                                    (i.e., ‖v_i‖ = 1)
         X_{i,i} − 2X_{i,j} + X_{j,j} ∈ [1/9, 25/9]     (i.e., ‖v_i − v_j‖ ∈ [1/3, 5/3])
         X is PSD

Note that the objective function is a linear function of X’s entries, and the constraints are all linear inequalities on X’s entries.
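The lecture solves SDPs via the Ellipsoid Method; purely to see this formulation end-to-end, here is a sketch handing the same program to an off-the-shelf modeling package (this assumes the cvxpy package with its bundled SDP solver, which is not part of the lecture):

```python
import cvxpy as cp

d = 10
X = cp.Variable((d, d), symmetric=True)

constraints = [X >> 0]                            # X is PSD
constraints += [X[i, i] == 1 for i in range(d)]   # ||v_i|| = 1
for i in range(d):
    for j in range(i + 1, d):                     # unordered pairs i < j
        dist2 = X[i, i] - 2 * X[i, j] + X[j, j]   # ||v_i - v_j||^2
        constraints += [dist2 >= 1 / 9, dist2 <= 25 / 9]

objective = cp.Maximize(sum(X[i, i] - 2 * X[i, j] + X[j, j]
                            for i in range(d) for j in range(i + 1, d)))
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value)   # vectors can then be recovered from X.value as above
```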

SDP Summary
- Matrices can be viewed as vectors.
- A matrix can be forced to be PSD using infinitely many linear inequalities.
- We can test whether a matrix is PSD using eigenvalues; this also gives a separation oracle.
- PSD matrices correspond to vectors and their dot-products (and hence to their distances).
- So we can solve many optimization problems relating to vectors with certain distances.
- In applications of SDP, it is often not obvious why the problem relates to finding certain vectors…