CS5321 Numerical Optimization

CS5321 Numerical Optimization
16 Quadratic Programming

Quadratic programming

A quadratic objective function with linear constraints:

    \min_x \; q(x) = \tfrac{1}{2} x^T G x + c^T x
    \text{s.t. } a_i^T x = b_i, \; i \in \mathcal{E}; \qquad a_i^T x \ge b_i, \; i \in \mathcal{I}

If G is positive semidefinite, the problem is a convex QP. If G is positive definite, it is a strictly convex QP. If G is indefinite, it is a nonconvex QP.
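As a quick illustration of this classification, here is a minimal NumPy sketch; the matrix G is made-up example data, and the definiteness is read off the eigenvalues of the symmetric Hessian.

```python
import numpy as np

# Illustrative Hessian (not from the lecture): classify the QP by the
# definiteness of G via the eigenvalues of the symmetric matrix.
G = np.array([[4.0, 1.0],
              [1.0, 2.0]])

eigvals = np.linalg.eigvalsh(G)
if np.all(eigvals > 0):
    print("strictly convex QP")      # unique global minimizer
elif np.all(eigvals >= 0):
    print("convex QP")
else:
    print("nonconvex QP")            # possibly several local minimizers
```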

Equality constraints

The first-order optimality condition (Chapter 12) is the KKT system

    \begin{pmatrix} G & A^T \\ A & 0 \end{pmatrix}
    \begin{pmatrix} -p \\ \lambda^* \end{pmatrix}
    = \begin{pmatrix} g \\ h \end{pmatrix}

where h = Ax − b, g = c + Gx, and p = x* − x. The pair (x*, λ*) is the optimal solution together with its Lagrange multipliers.

    K = \begin{pmatrix} G & A^T \\ A & 0 \end{pmatrix}

is called the KKT matrix. It is indefinite, and it is nonsingular if the projected Hessian Z^T G Z is positive definite (Chapter 12).
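For small dense problems the KKT system can be formed and solved directly. A minimal NumPy sketch, with illustrative G, c, A, b that are not from the slides:

```python
import numpy as np

# Illustrative equality-constrained QP: min (1/2) x^T G x + c^T x, Ax = b.
G = np.array([[6.0, 2.0], [2.0, 5.0]])   # positive definite Hessian
c = np.array([-8.0, -3.0])
A = np.array([[1.0, 1.0]])               # one equality constraint
b = np.array([3.0])

x = np.zeros(2)                          # current iterate
g = c + G @ x
h = A @ x - b

n, m = G.shape[0], A.shape[0]
K = np.block([[G, A.T],
              [A, np.zeros((m, m))]])    # indefinite KKT matrix
sol = np.linalg.solve(K, np.concatenate([g, h]))
p, lam = -sol[:n], sol[n:]               # unknowns are (-p, lambda*)
x_star = x + p
print(x_star, lam)                       # A @ x_star equals b
```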

Solving the KKT system

Direct methods:
- Symmetric indefinite factorization P^T K P = L B L^T
- Schur-complement method (1)
- Null-space method (2)

Iterative methods:
- CG applied to the reduced system (3)
- The projected CG method (4)
- Newton-Krylov methods
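As a sketch of the first direct option, SciPy's `ldl` computes a symmetric indefinite (Bunch-Kaufman type) factorization; the data below are illustrative. SciPy absorbs the permutation into the returned triangular factor, so the product reconstructs K directly.

```python
import numpy as np
from scipy.linalg import ldl

# Illustrative indefinite KKT matrix K = [[G, A^T], [A, 0]].
G = np.array([[6.0, 2.0], [2.0, 5.0]])
A = np.array([[1.0, 1.0]])
K = np.block([[G, A.T],
              [A, np.zeros((1, 1))]])

L, B, perm = ldl(K)                   # B has 1x1 / 2x2 blocks; perm
                                      # triangularizes L if applied
print(np.allclose(L @ B @ L.T, K))    # True: factorization reconstructs K
print(np.linalg.eigvalsh(K))          # mixed signs: K is indefinite
```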

1. Schur-complement method

Assume G is invertible. Block elimination of the KKT system gives two equations:

    -G p + A^T \lambda^* = g \qquad (1)
    (A G^{-1} A^T) \lambda^* = A G^{-1} g - h \qquad (2)

Solve (2) for λ* first; then substitute λ* into (1) to solve for p. The matrix A G^{-1} A^T is called the Schur complement.
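A minimal NumPy sketch of the Schur-complement solve on illustrative data (G must be invertible):

```python
import numpy as np

# Same illustrative QP as before; solve for lambda* first, then p.
G = np.array([[6.0, 2.0], [2.0, 5.0]])
c = np.array([-8.0, -3.0])
A = np.array([[1.0, 1.0]])
b = np.array([3.0])

x = np.zeros(2)
g = c + G @ x
h = A @ x - b

Ginv_g  = np.linalg.solve(G, g)
Ginv_AT = np.linalg.solve(G, A.T)
S = A @ Ginv_AT                           # Schur complement A G^{-1} A^T
lam = np.linalg.solve(S, A @ Ginv_g - h)  # equation (2)
p = Ginv_AT @ lam - Ginv_g                # back-substitute into (1)
print(x + p, lam)
```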

2. Null-space method

Requirements: (1) A has full row rank; (2) Z^T G Z is positive definite. (G itself can be singular.)

Take the QR decomposition of A^T and partition p accordingly (Q_2 = Z):

    A^T = \begin{pmatrix} Q_1 & Q_2 \end{pmatrix} \begin{pmatrix} R \\ 0 \end{pmatrix},
    \qquad p = Q_1 p_1 + Q_2 p_2

Substituting into the KKT system yields

    R^T p_1 = -h
    (Q_2^T G Q_2) p_2 = -Q_2^T (g + G Q_1 p_1)
    R \lambda^* = Q_1^T (g + G p)
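A minimal NumPy sketch of the null-space method on the same illustrative data, using a complete QR factorization of A^T:

```python
import numpy as np

G = np.array([[6.0, 2.0], [2.0, 5.0]])
c = np.array([-8.0, -3.0])
A = np.array([[1.0, 1.0]])
b = np.array([3.0])
x = np.zeros(2); g = c + G @ x; h = A @ x - b

Q, R = np.linalg.qr(A.T, mode="complete")   # A^T = [Q1 Q2] [R; 0]
m = A.shape[0]
Q1, Q2, R = Q[:, :m], Q[:, m:], R[:m, :]

p1 = np.linalg.solve(-R.T, h)                    # R^T p1 = -h
rhs = -Q2.T @ (g + G @ Q1 @ p1)
p2 = np.linalg.solve(Q2.T @ G @ Q2, rhs)         # reduced-Hessian solve
p = Q1 @ p1 + Q2 @ p2
lam = np.linalg.solve(R, Q1.T @ (g + G @ p))     # multipliers
print(x + p, lam)
```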

3. Conjugate gradient for Z^T G Z

Assume A^T = Q_1 R and Q_2 = Z (see the previous slide). The solution can be expressed as x = Q_1 x_1 + Q_2 x_2, where x_1 = R^{-T} b is fixed by the constraints. This reduces the original problem to the unconstrained one (see Chapter 15)

    \min_{x_2} \; \tfrac{1}{2} x_2^T (Q_2^T G Q_2) x_2 + (c + G Q_1 x_1)^T Q_2 \, x_2

whose minimizer is the solution of (see Chapter 2)

    (Q_2^T G Q_2) x_2 = -Q_2^T (c + G Q_1 x_1)

If Q_2^T G Q_2 is symmetric positive definite, CG can be used as the solver (see Chapter 5).
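A sketch of this reduction with SciPy's CG as the solver; the data are illustrative, with three variables and one constraint so the reduced system is a genuine 2x2 s.p.d. system:

```python
import numpy as np
from scipy.sparse.linalg import cg

G = np.array([[6.0, 2.0, 1.0],
              [2.0, 5.0, 2.0],
              [1.0, 2.0, 4.0]])              # s.p.d. Hessian (illustrative)
c = np.array([-8.0, -3.0, -3.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])

Q, R = np.linalg.qr(A.T, mode="complete")    # A^T = [Q1 Z] [R; 0]
m = A.shape[0]
Q1, Z, R = Q[:, :m], Q[:, m:], R[:m, :]

x1 = np.linalg.solve(R.T, b)                 # fixed by the constraints
H = Z.T @ G @ Z                              # reduced Hessian, s.p.d. here
rhs = -Z.T @ (c + G @ Q1 @ x1)
x2, info = cg(H, rhs)                        # info == 0 means converged
x = Q1 @ x1 + Z @ x2
print(x, A @ x)                              # A @ x recovers b
```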

4. The projected CG method

Use the same assumptions as the previous slide. Define P = Q_2 Q_2^T, an orthogonal projector, for which Px ∈ span(Q_2) for all x. The projected CG method uses the projected residual (see Chapter 5)

    r^p_{k+1} = P r_{k+1}

instead of r_{k+1}. The matrix P is equivalent to P = I − A^T (A A^T)^{-1} A, but this form is numerically unstable; use iterative refinement to improve the accuracy.
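A small numerical check, on illustrative data, that the two forms of the projector agree and that a projected residual lies in the null space of A:

```python
import numpy as np

# Illustrative full-row-rank constraint matrix.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

Q, _ = np.linalg.qr(A.T, mode="complete")
Q2 = Q[:, A.shape[0]:]                   # Z: orthonormal basis of null(A)
P_qr  = Q2 @ Q2.T                        # P = Q2 Q2^T
P_inv = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)  # explicit form
print(np.allclose(P_qr, P_inv))          # True in exact arithmetic

r = np.array([1.0, -2.0, 0.5])           # some residual vector
print(A @ (P_qr @ r))                    # ~0: projected residual in null(A)
```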

Inequality constraints

Review of the optimality conditions (Chapter 12):

    G x^* + c - \sum_{i \in \mathcal{A}(x^*)} \lambda_i^* a_i = 0
    a_i^T x^* = b_i \quad \text{for all } i \in \mathcal{A}(x^*)
    a_i^T x^* \ge b_i \quad \text{for all } i \in \mathcal{I} \setminus \mathcal{A}(x^*)
    \lambda_i^* \ge 0 \quad \text{for all } i \in \mathcal{I} \cap \mathcal{A}(x^*)

Two difficulties (Figures 16.1, 16.2):
- Nonconvexity: more than one minimizer.
- Degeneracy: the gradients of the active constraints are linearly dependent.

Methods for QP

- Active set method for convex QP: only for small or medium sized problems.
- Interior point method: good for large problems.
- Gradient projection method: good when the constraints are bounds on the variables.

1. Active set method for QP

Basic strategy (for convex QP):
- Find an active set (called the working set, denoted W_k).
- Check whether x_k is optimal. If not, find p_k by solving the equality-constrained QP

      \min_p \; \tfrac{1}{2} p^T G p + g^T p \quad \text{s.t. } a_i^T p = 0, \; i \in W_k

- If p_k = 0, use the Lagrange multipliers to check optimality.
- Find α_k ∈ [0,1] such that x_{k+1} = x_k + α_k p_k satisfies all constraints a_i, i ∉ W_k (next slide).
- If α_k < 1, add the blocking constraints to W_{k+1}.

Step length Need findk[0,1] as large as possible such that aiT(xk+kpk)bi, iWk If aiTpk0, then for allk0, it satisfies anyway If aiTpk<0, k need be  (bi− aiTxk)/aiTpk. ® k = m i n µ 1 ; 2 W a T p < b ¡ x ¶ 1/17/2019

2. Interior point method for QP

Please review the IPM for LP (Chapter 14). The problem is

    \min_x \; q(x) = \tfrac{1}{2} x^T G x + c^T x \quad \text{s.t. } Ax \ge b

KKT conditions:

    G x - A^T \lambda + c = 0
    y = A x - b
    y_i \lambda_i = 0, \quad i = 1, 2, \ldots, m
    (y, \lambda) \ge 0

Complementarity measure: \mu = y^T \lambda / m.

Nonlinear equation

Use Newton's method to solve F(x, y, λ; σμ) = 0, where

    F(x, y, \lambda; \sigma\mu) =
    \begin{pmatrix} G x - A^T \lambda + c \\ A x - y - b \\ Y \Lambda e - \sigma \mu e \end{pmatrix}

The Newton step solves

    \begin{pmatrix} G & 0 & -A^T \\ A & -I & 0 \\ 0 & \Lambda & Y \end{pmatrix}
    \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta \lambda \end{pmatrix}
    = \begin{pmatrix} -r_d \\ -r_p \\ -Y \Lambda e + \sigma \mu e \end{pmatrix}

where r_d = G x − A^T λ + c and r_p = A x − y − b. The next step is

    (x_{k+1}, y_{k+1}, \lambda_{k+1}) = (x_k, y_k, \lambda_k) + \alpha (\Delta x, \Delta y, \Delta \lambda)
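A sketch of one such Newton step on an illustrative QP (x >= 0 written as Ax >= b), including a fraction-to-boundary step length to keep (y, λ) strictly positive; this is a single iteration, not a full IPM solver.

```python
import numpy as np

# Illustrative data: min x^T x - 2 x1 - 5 x2 s.t. x >= 0.
G = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.eye(2)
b = np.zeros(2)
n, m = 2, 2

x   = np.array([2.0, 2.0])                # strictly feasible start
y   = A @ x - b                           # slacks, > 0
lam = np.ones(m)                          # multipliers, > 0
sigma = 0.2
mu = y @ lam / m                          # complementarity measure

rd = G @ x - A.T @ lam + c
rp = A @ x - y - b
J = np.block([[G,                np.zeros((n, m)), -A.T             ],
              [A,               -np.eye(m),         np.zeros((m, m))],
              [np.zeros((m, n)), np.diag(lam),      np.diag(y)      ]])
rhs = np.concatenate([-rd, -rp, -y * lam + sigma * mu])
step = np.linalg.solve(J, rhs)
dx, dy, dlam = step[:n], step[n:n + m], step[n + m:]

# Fraction-to-boundary rule: keep (y, lam) strictly positive.
cur = np.concatenate([y, lam])
d   = np.concatenate([dy, dlam])
mask = d < 0
alpha = min(1.0, 0.995 * np.min(cur[mask] / -d[mask])) if mask.any() else 1.0

x, y, lam = x + alpha * dx, y + alpha * dy, lam + alpha * dlam
print(x, y, lam, alpha)
```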

3. Gradient projection method

Consider the bound-constrained problem

    \min_x \; q(x) = \tfrac{1}{2} x^T G x + c^T x \quad \text{s.t. } l \le x \le u

The gradient of q(x) is g = Gx + c. The bounded projection is, componentwise,

    P(x; l, u)_i = \begin{cases} l_i & \text{if } x_i < l_i \\ x_i & \text{if } l_i \le x_i \le u_i \\ u_i & \text{if } x_i > u_i \end{cases}

The gradient projection path x(t) = P(x − t g; l, u) is piecewise linear.
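Since P(x; l, u) is just componentwise clipping, the path is easy to trace numerically; a minimal sketch with illustrative bounds:

```python
import numpy as np

# Illustrative bound-constrained QP data.
G = np.array([[2.0, 0.0], [0.0, 4.0]])
c = np.array([-4.0, -8.0])
l = np.zeros(2)
u = np.array([1.5, 1.0])

x = np.array([1.0, 0.5])
g = G @ x + c                             # gradient of q at x

for t in (0.0, 0.25, 0.5, 1.0):
    print(t, np.clip(x - t * g, l, u))    # points on the projected path
```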

Piecewise linear path

For each dimension i = 1, 2, ..., n, the breakpoint is

    \bar{t}_i = \begin{cases} (x_i - u_i)/g_i & \text{if } g_i < 0 \text{ and } u_i < +\infty \\ (x_i - l_i)/g_i & \text{if } g_i > 0 \text{ and } l_i > -\infty \\ \infty & \text{otherwise} \end{cases}

Along the path,

    x_i(t) = \begin{cases} x_i - t g_i & \text{if } t \le \bar{t}_i \\ x_i - \bar{t}_i g_i & \text{otherwise} \end{cases}
    \qquad
    p_i(t) = \begin{cases} -g_i & \text{if } t \le \bar{t}_i \\ 0 & \text{otherwise} \end{cases}
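A minimal sketch computing the breakpoints for the same illustrative data; the sorted distinct values of t̄_i delimit the linear pieces of x(t).

```python
import numpy as np

# Illustrative point, gradient, and finite bounds (as in the previous sketch).
x = np.array([1.0, 0.5])
g = np.array([-2.0, -6.0])                # g = Gx + c at x
l = np.zeros(2)
u = np.array([1.5, 1.0])

t_bar = np.full_like(x, np.inf)
for i in range(x.size):
    if g[i] < 0 and np.isfinite(u[i]):
        t_bar[i] = (x[i] - u[i]) / g[i]
    elif g[i] > 0 and np.isfinite(l[i]):
        t_bar[i] = (x[i] - l[i]) / g[i]
print(np.sort(np.unique(t_bar)))          # breakpoints of the projected path
```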