CS5321 Numerical Optimization

1 CS5321 Numerical Optimization
16 Quadratic Programming

2 Quadratic programming
Quadratic objective function + linear constraints:

    min_x q(x) = (1/2) x^T G x + x^T c
    s.t.  a_i^T x = b_i,  i ∈ E
          a_i^T x ≥ b_i,  i ∈ I

If G is positive semidefinite, it is called a convex QP. If G is positive definite, it is called a strictly convex QP. If G is indefinite, it is called a nonconvex QP.
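The three cases above can be checked numerically from the eigenvalues of the symmetric matrix G. A minimal sketch (not from the slides), assuming NumPy; the helper name `classify_qp` and the tolerance are illustrative choices:

```python
import numpy as np

def classify_qp(G, tol=1e-10):
    """Classify a QP by the definiteness of its symmetric Hessian G."""
    eigs = np.linalg.eigvalsh(G)       # eigenvalues in ascending order
    if eigs[0] > tol:
        return "strictly convex"       # G positive definite
    if eigs[0] >= -tol:
        return "convex"                # G positive semidefinite
    return "nonconvex"                 # G indefinite (some eigenvalue < 0)

print(classify_qp(np.array([[2.0, 0.0], [0.0, 1.0]])))   # strictly convex
print(classify_qp(np.array([[1.0, 0.0], [0.0, 0.0]])))   # convex
print(classify_qp(np.array([[1.0, 0.0], [0.0, -1.0]])))  # nonconvex
```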

3 Equality constraints
First-order optimality condition (chap 12):

    [ G  A^T ] [ −p ]   [ g ]
    [ A   0  ] [ λ* ] = [ h ]

where h = Ax − b, g = c + Gx, and p = x* − x. The pair (x*, λ*) is the optimal solution and its Lagrange multipliers.

    K = [ G  A^T ]
        [ A   0  ]

is called the KKT matrix. It is indefinite, and nonsingular if the projected Hessian Z^T G Z is positive definite (chap 12).
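The KKT system above can be solved directly with a dense solver. A minimal sketch, assuming NumPy; the helper name `kkt_step` and the toy problem are illustrative, not from the slides:

```python
import numpy as np

# Solve the equality-constrained QP step via the full KKT system,
# using the slide's notation: h = A x - b, g = c + G x, p = x* - x.
def kkt_step(G, A, c, b, x):
    n, m = G.shape[0], A.shape[0]
    g = c + G @ x
    h = A @ x - b
    K = np.block([[G, A.T],
                  [A, np.zeros((m, m))]])   # indefinite KKT matrix
    sol = np.linalg.solve(K, np.concatenate([g, h]))  # [-p; lambda*]
    p, lam = -sol[:n], sol[n:]
    return x + p, lam                       # x* and the multipliers

# Toy problem: min 1/2 ||x||^2 - x1  s.t.  x1 + x2 = 1, starting at x = 0
G = np.eye(2); c = np.array([-1.0, 0.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x_star, lam = kkt_step(G, A, c, b, np.zeros(2))
print(x_star)   # -> [1. 0.]
```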

4 Solving the KKT system
Direct methods
- Symmetric indefinite factorization P^T K P = L B L^T
- Schur-complement method (1)
- Null-space method (2)
Iterative methods
- CG applied to the reduced system (3)
- The projected CG method (4)
- Newton–Krylov methods

5 1. Schur-complement method
Assume G is invertible. Block elimination of the KKT system gives two equations:

    (A G^{-1} A^T) λ* = A G^{-1} g − h
    G p = A^T λ* − g

Solve for λ* first; then use λ* to solve for p. The matrix A G^{-1} A^T is called the Schur complement of G in the KKT matrix.
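The two-step elimination can be sketched as follows, assuming NumPy; the helper name `schur_step` and the toy problem are illustrative, not from the slides:

```python
import numpy as np

# Sketch of the Schur-complement method (assumes G invertible):
# solve (A G^{-1} A^T) lambda* = A G^{-1} g - h first, then recover p.
def schur_step(G, A, g, h):
    Ginv_g = np.linalg.solve(G, g)
    Ginv_At = np.linalg.solve(G, A.T)
    S = A @ Ginv_At                          # Schur complement A G^{-1} A^T
    lam = np.linalg.solve(S, A @ Ginv_g - h)
    p = Ginv_At @ lam - Ginv_g               # from G p = A^T lambda* - g
    return p, lam

# Toy problem: min 1/2 ||x||^2 - x1  s.t.  x1 + x2 = 1, from x = 0
G = np.eye(2); c = np.array([-1.0, 0.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x = np.zeros(2)
p, lam = schur_step(G, A, c + G @ x, A @ x - b)
print(x + p)   # -> [1. 0.]
```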

6 2. Null-space method
Requires (1) A to have full row rank and (2) Z^T G Z to be positive definite. (G itself can be singular.) Take the QR decomposition of A^T and partition p accordingly (Q2 = Z):

    A^T = [Q1 Q2] [ R ]
                  [ 0 ]  ,    p = Q1 p1 + Q2 p2

From A p = −h:  R^T p1 = −h, so p1 = −R^{-T} h.
From the first block row of the KKT system:  (Q2^T G Q2) p2 = −Q2^T (G Q1 p1 + g).
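The two smaller solves can be sketched as follows, assuming NumPy; the helper name `nullspace_step` and the toy problem are illustrative, not from the slides:

```python
import numpy as np

# Sketch of the null-space method: QR-factor A^T = [Q1 Q2][R; 0],
# split p = Q1 p1 + Q2 p2, and solve two smaller systems.
def nullspace_step(G, A, g, h):
    m = A.shape[0]
    Q, R = np.linalg.qr(A.T, mode='complete')   # A^T = Q [R; 0]
    Q1, Q2 = Q[:, :m], Q[:, m:]                 # Q2 spans null(A)
    R = R[:m, :]
    p1 = np.linalg.solve(R.T, -h)               # from A p = -h
    Gz = Q2.T @ G @ Q2                          # projected Hessian Z^T G Z
    p2 = np.linalg.solve(Gz, -Q2.T @ (G @ (Q1 @ p1) + g))
    return Q1 @ p1 + Q2 @ p2

# Toy problem: min 1/2 ||x||^2 - x1  s.t.  x1 + x2 = 1, from x = 0
G = np.eye(2); c = np.array([-1.0, 0.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x = np.zeros(2)
p = nullspace_step(G, A, c + G @ x, A @ x - b)
print(x + p)
```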

7 3. Conjugate gradient for Z^T G Z
Assume A^T = Q1 R, Q2 = Z (see previous slide). The solution can be expressed as x = Q1 x1 + Q2 x2, where x1 = R^{-T} b is fixed by the constraints. This reduces the original problem to the unconstrained one (see chap 15):

    min_{x2} (1/2) x2^T (Q2^T G Q2) x2 + (c + G Q1 x1)^T Q2 x2

The minimizer is the solution of (see chap 2)

    (Q2^T G Q2) x2 = −Q2^T (c + G Q1 x1)

If Q2^T G Q2 is s.p.d., CG can be used as the solver (see chap 5).

8 4. The projected CG method
Use the same assumptions as the previous slide. Define P = Q2 Q2^T, an orthogonal projector, for which Px ∈ span(Q2) for all x. The projected CG method uses the projected residual

    r^p_{k+1} = P r_{k+1}

instead of r_{k+1} (see chap 5). The matrix P is equivalent to P = I − A^T (A A^T)^{-1} A. This form is numerically unstable; use iterative refinement to improve the accuracy.
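The equivalence of the two forms of the projector can be checked numerically. A minimal sketch, assuming NumPy; the random test matrix is illustrative:

```python
import numpy as np

# The two forms of the projector onto null(A) from the slide,
# P = Q2 Q2^T (from QR of A^T) and P = I - A^T (A A^T)^{-1} A, agree.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))        # generically full row rank

Q = np.linalg.qr(A.T, mode='complete')[0]
Q2 = Q[:, 2:]                          # orthonormal basis of null(A)
P_qr = Q2 @ Q2.T

P_normal = np.eye(5) - A.T @ np.linalg.solve(A @ A.T, A)

print(np.allclose(P_qr, P_normal))     # True: same projector
print(np.allclose(A @ P_qr, 0))        # True: P maps into null(A)
```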

9 Inequality constraints
Review the optimality conditions (chap 12):

    G x* + c − Σ_{i ∈ A(x*)} λ_i* a_i = 0
    a_i^T x* = b_i,  i ∈ A(x*)
    a_i^T x* ≥ b_i,  i ∈ I \ A(x*)
    λ_i* ≥ 0,        i ∈ I ∩ A(x*)

Two difficulties (Figures 16.1, 16.2):
- Nonconvexity: more than one minimizer
- Degeneracy: the gradients of the active constraints are linearly dependent

10 Methods for QP
Active set method for convex QP
- Only for small or medium sized problems
Interior point method
- Good for large problems
Gradient projection method
- Good when the constraints are bounds on the variables

11 1. Active set method for QP
Basic strategy (for convex QP):
- Find an active set (called the working set, denoted W_k).
- Check if x_k is optimal. If not, find p_k by solving the equality-constrained QP

      min_p (1/2) p^T G p + g^T p   s.t.  a_i^T p = 0,  i ∈ W_k

- If p_k = 0, use the Lagrange multipliers to check optimality.
- Find α_k ∈ [0,1] such that x_{k+1} = x_k + α_k p_k satisfies all constraints a_i, i ∉ W_k (next slide).
- If α_k < 1, add a blocking constraint to W_{k+1}.

12 Step length
Need to find α_k ∈ [0,1] as large as possible such that a_i^T (x_k + α_k p_k) ≥ b_i for all i ∉ W_k.
- If a_i^T p_k ≥ 0, the constraint is satisfied for any α_k ≥ 0.
- If a_i^T p_k < 0, α_k needs to be ≤ (b_i − a_i^T x_k) / (a_i^T p_k).

    α_k = min( 1,  min_{i ∉ W_k, a_i^T p_k < 0} (b_i − a_i^T x_k) / (a_i^T p_k) )

13 2. Interior point method for QP
Please review the IPM for LP (chap 14). The problem:

    min_x q(x) = (1/2) x^T G x + x^T c   s.t.  A x ≥ b

KKT conditions (with slack y = Ax − b):

    G x − A^T λ + c = 0
    A x − y − b = 0
    y_i λ_i = 0,  y ≥ 0,  λ ≥ 0,  i = 1, 2, …, m

Complementarity measure: μ = y^T λ / m

14 Using Newton's method to solve F(x, y, λ) = 0
Nonlinear equation: apply Newton's method to F(x, y, λ) = 0, where

    F(x, y, λ) = [ G x − A^T λ + c ]
                 [ A x − y − b     ]
                 [ Y Λ e           ]

with Y = diag(y), Λ = diag(λ), e = (1, …, 1)^T. The Newton step solves

    [ G   0   −A^T ] [ Δx ]   [ −r_d          ]
    [ A  −I    0   ] [ Δy ] = [ −r_p          ]
    [ 0   Λ    Y   ] [ Δλ ]   [ −Y Λ e + σμ e ]

where r_d = G x − A^T λ + c and r_p = A x − y − b. Next step:

    (x_{k+1}, y_{k+1}, λ_{k+1}) = (x_k, y_k, λ_k) + α (Δx, Δy, Δλ)

15 3. Gradient projection method
Consider the bound-constrained problem

    min_x q(x) = (1/2) x^T G x + x^T c   s.t.  l ≤ x ≤ u

The gradient of q(x) is g = G x + c. The bound projection is

    P(x; l, u)_i = { l_i   if x_i < l_i
                   { x_i   if l_i ≤ x_i ≤ u_i
                   { u_i   if x_i > u_i

The gradient projection path x(t) = P(x − t g, l, u) is piecewise linear.
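The bound projection and the projected path can be sketched as follows, assuming NumPy; the helper name `project` and the example data are illustrative, not from the slides:

```python
import numpy as np

# Bound projection P(x; l, u): clip each component to [l_i, u_i].
def project(x, l, u):
    return np.minimum(np.maximum(x, l), u)   # componentwise median(l, x, u)

l = np.array([0.0, 0.0]); u = np.array([1.0, 1.0])
x = np.array([0.8, 0.2])
g = np.array([2.0, -1.0])                    # gradient g = G x + c at x

# Trace the piecewise-linear path x(t) = P(x - t g, l, u)
for t in (0.0, 0.3, 0.6):
    print(project(x - t * g, l, u))
```

At t = 0.6 the first component has hit its lower bound and stays there, which is exactly the kink in the piecewise-linear path.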

16 Piecewise linear path
For each dimension i = 1, 2, …, n, the breakpoint is

    t̄_i = { (x_i − u_i) / g_i   if g_i < 0 and u_i < ∞
          { (x_i − l_i) / g_i   if g_i > 0 and l_i > −∞
          { ∞                   otherwise

Then

    x_i(t) = { x_i − t g_i     if t ≤ t̄_i
             { x_i − t̄_i g_i   otherwise

    p_i(t) = { −g_i   if t ≤ t̄_i
             { 0      otherwise


Download ppt "CS5321 Numerical Optimization"

Similar presentations


Ads by Google