CS5321 Numerical Optimization

CS5321 Numerical Optimization, Lecture 18: Sequential Quadratic Programming (Active-Set Methods) 12/6/2018

Local SQP model
The problem min_x f(x) subject to c(x) = 0 can be modeled as a quadratic program at x = x_k:
  min_p  f_k + ∇f_k^T p + (1/2) p^T ∇²_xx L_k p
  s.t.   A_k p + c_k = 0.
Assumptions:
- A(x), the constraint Jacobian, has full row rank.
- ∇²_xx L(x, λ) is positive definite on the tangent space of the constraints, that is, d^T ∇²_xx L(x, λ) d > 0 for all d ≠ 0 with A d = 0.
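The quadratic model above can be solved directly from its KKT system. The sketch below (names `g`, `H`, `A`, `c` are this example's own, not the course's notation for code) forms the KKT matrix and solves it with a dense factorization, assuming A has full row rank and H is positive definite on the null space of A:

```python
import numpy as np

def solve_eq_qp(g, H, A, c):
    """Solve the local SQP subproblem
         min_p  g^T p + (1/2) p^T H p   s.t.  A p + c = 0
    by solving its KKT system.  Stationarity is written as
    H p + A^T lam = -g, so the multiplier sign follows L = f + lam^T c."""
    n, m = H.shape[0], A.shape[0]
    # KKT matrix: [H  A^T; A  0] [p; lam] = [-g; -c]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]
```

For instance, with H = I, g = (1, 1), and the single constraint p₁ = 0, the solver returns p = (0, −1), i.e., the unconstrained minimizer in the feasible directions.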

Inequality constraints
For the problem
  min_x f(x)  s.t.  c_i(x) = 0, i ∈ E;  c_i(x) ≥ 0, i ∈ I,
the local quadratic model is
  min_p  f_k + ∇f_k^T p + (1/2) p^T ∇²_xx L_k p
  s.t.   ∇c_i(x_k)^T p + c_i(x_k) = 0, i ∈ E;
         ∇c_i(x_k)^T p + c_i(x_k) ≥ 0, i ∈ I.

Theorem of active-set methods
Theorem 18.1 (Robinson 1974). Suppose x* is a local solution of the original problem with multipliers λ*, and the pair (x*, λ*) satisfies the KKT conditions, the LICQ condition, and the second-order sufficient conditions. Then for (x_k, λ_k) sufficiently close to (x*, λ*), there is a local solution of the quadratic model whose active set is the same as that of the original problem at x*.

Sequential QP method
1. Choose an initial guess x_0, λ_0.
2. For k = 0, 1, 2, …
   (a) Evaluate f_k, ∇f_k, ∇²_xx L_k, c_k, and ∇c_k (= A_k).
   (b) Solve the local quadratic program for (p_k, λ_{k+1}).
   (c) Set x_{k+1} = x_k + p_k.
How do we choose the active set? How do we solve 2(b)? Haven't we solved that in Chapter 16? (Yes and no.)
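The loop above can be sketched for a small equality-constrained problem. Everything here (the objective f(x) = x₁² + x₂², the constraint x₁ + x₂ = 1, and the exact constant Hessian) is a toy example chosen for illustration, not the course's reference implementation:

```python
import numpy as np

# Minimal SQP loop for  min f(x) = x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0.
def sqp(x, lam, iters=10):
    for _ in range(iters):
        g = 2.0 * x                      # gradient of f
        H = 2.0 * np.eye(2)              # exact Hessian of the Lagrangian
        A = np.array([[1.0, 1.0]])       # constraint Jacobian A_k
        c = np.array([x[0] + x[1] - 1.0])
        # Step 2(b): solve the KKT system of the local QP
        K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
        sol = np.linalg.solve(K, np.concatenate([-g, -c]))
        p, lam = sol[:2], sol[2:]
        x = x + p                        # step 2(c): full Newton step
        if np.linalg.norm(p) < 1e-12:
            break
    return x, lam

x_star, lam_star = sqp(np.array([3.0, -1.0]), np.zeros(1))
```

Because the problem is itself a QP, the iteration lands on the solution x* = (0.5, 0.5) after a single step.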

Algorithms
Two types of algorithms for choosing the active set:
- Inequality-constrained QP (IQP): solve QPs with inequality constraints and take the subproblem's optimal active set as the estimate.
- Equality-constrained QP (EQP): select a working set of constraints as the active set and solve equality-constrained QPs.
Basic algorithms for solving 2(b):
- Line search methods
- Trust region methods
- Nonlinear gradient projection

Solving SQP
All techniques in Chapter 16 can be applied, but there are additional problems that need to be solved:
- The linearized constraints may not be consistent.
- Convergence must be guaranteed even when the Hessian is not positive definite.
And some useful properties can be exploited:
- The Hessian can be updated by a quasi-Newton method.
- Solutions of previous subproblems can be used as initial guesses (warm start).
- The exact solution of each subproblem is not required.

Inconsistent linearizations
The constraints 1 − x ≥ 0 and x² − 4 ≥ 0, linearized at x_k = 1, give −p ≥ 0 and 2p − 3 ≥ 0, which no p satisfies. The linearized constraints cannot be enforced since they may not be consistent. Use a penalty function: relax the constraints with elastic variables and an ℓ1 penalty,
  min_{x,v,w,t}  f(x) + μ Σ_{i∈E} (v_i + w_i) + μ Σ_{i∈I} t_i
  s.t.  c_i(x) = v_i − w_i, i ∈ E;
        c_i(x) ≥ −t_i, i ∈ I;
        v, w, t ≥ 0.

Quasi-Newton approximations
Let s_k = x_{k+1} − x_k and y_k = ∇_x L(x_{k+1}, λ_{k+1}) − ∇_x L(x_k, λ_{k+1}).
Recall the BFGS update of the Hessian (Chapter 6):
  B_{k+1} = B_k − (B_k s_k s_k^T B_k)/(s_k^T B_k s_k) + (y_k y_k^T)/(s_k^T y_k).
If the true Hessian is indefinite, the updated B_{k+1} may fail to be positive definite: the curvature condition s_k^T y_k > 0 can fail. Damped BFGS: define r_k = θ y_k + (1 − θ) B_k s_k for some θ ∈ (0, 1), and update with r_k in place of y_k:
  B_{k+1} = B_k − (B_k s_k s_k^T B_k)/(s_k^T B_k s_k) + (r_k r_k^T)/(s_k^T r_k).
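The damped update can be written compactly. A minimal sketch follows, using the common threshold s^T y ≥ 0.2 s^T B s for choosing θ (the 0.2 factor follows Powell's damping rule; the function name is this example's own):

```python
import numpy as np

def damped_bfgs(B, s, y):
    """Damped BFGS update: replace y by r = theta*y + (1-theta)*B s
    so that s^T r stays sufficiently positive even when s^T y <= 0,
    keeping the updated matrix positive definite."""
    sBs = s @ B @ s
    sy = s @ y
    if sy >= 0.2 * sBs:
        theta = 1.0                       # plain BFGS is safe
    else:
        theta = 0.8 * sBs / (sBs - sy)    # damping factor in (0, 1)
    r = theta * y + (1.0 - theta) * (B @ s)
    Bs = B @ s
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)
```

Even with negative curvature, e.g. B = I, s = (1, 0), y = (−1, 0), the update stays symmetric positive definite, whereas the undamped formula would break down.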

BFGS for the reduced Hessian
Let A_k^T = Q [R; 0] = [Q₁ Q₂] [R; 0], and write p = Q₁ p₁ + Q₂ p₂.
Need to solve
  R^T p₁ = −c_k,
  (Q₂^T G Q₂) p₂ = −Q₂^T g − (Q₂^T G Q₁) p₁.
Solve λ* first (from R λ* = Q₁^T g) to obtain an active set. Ignore the cross term (Q₂^T G Q₁) p₁ and solve
  (Q₂^T G Q₂) p₂ = −Q₂^T g.
The reduced secant formula is
  (Q₂^T G Q₂)(α_k p₂) ≈ Q₂^T [∇_x L(x_{k+1}, λ) − ∇_x L(x_k, λ)].
Use BFGS on this equation.
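The null-space decomposition above can be computed with a QR factorization of A^T. The sketch below assumes A has full row rank and that the reduced Hessian Q₂^T G Q₂ is positive definite; it drops the cross term, as on the slide (names are this example's own):

```python
import numpy as np

def reduced_space_step(G, g, A, c):
    """Reduced-Hessian SQP step: factor A^T = [Q1 Q2][R; 0] and
    compute p = Q1 p1 + Q2 p2 with
        R^T p1 = -c                   (range-space / constraint part)
        (Q2^T G Q2) p2 = -Q2^T g      (null-space part, cross term ignored)
    plus the multiplier estimate from  R lam = Q1^T g."""
    m = A.shape[0]
    Q, Rfull = np.linalg.qr(A.T, mode="complete")
    Q1, Q2, R = Q[:, :m], Q[:, m:], Rfull[:m, :]
    p1 = np.linalg.solve(R.T, -c)
    p2 = np.linalg.solve(Q2.T @ G @ Q2, -Q2.T @ g)
    lam = np.linalg.solve(R, Q1.T @ g)
    return Q1 @ p1 + Q2 @ p2, lam
```

By construction A p = R^T Q₁^T (Q₁ p₁ + Q₂ p₂) = R^T p₁ = −c, so the step satisfies the linearized constraints exactly even though the cross term is ignored.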

1. Line search SQP method
Set the step length α_k so that the merit function
  φ₁(x; μ) = f(x) + μ ‖c(x)‖₁
is sufficiently decreased, using one of the Wolfe conditions (Chapter 3):
  φ₁(x_k + α_k p_k; μ) ≤ φ₁(x_k; μ) + η α_k D(φ₁(x_k; μ); p_k),
where D(φ₁(x_k; μ); p_k) is the directional derivative of φ₁ in the direction p_k (Theorem 18.2):
  D(φ₁(x_k; μ); p_k) = ∇f_k^T p_k − μ ‖c_k‖₁.
Let α_k = 1 and decrease it until the condition is satisfied.
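The backtracking procedure can be sketched directly from the two formulas above. Here `f` returns the pair (value, gradient) and `c` returns the constraint vector; these calling conventions, and the defaults η = 0.3 and halving factor, are this sketch's own assumptions:

```python
import numpy as np

def backtrack_l1(f, c, x, p, mu, eta=0.3, rho=0.5):
    """Backtracking line search on phi1(x) = f(x) + mu*||c(x)||_1.
    Uses D(phi1; p) = grad_f^T p - mu*||c||_1, which holds when the
    step satisfies the linearized constraints A_k p + c_k = 0."""
    fx, gx = f(x)
    c_norm = np.abs(c(x)).sum()
    phi0 = fx + mu * c_norm
    D = gx @ p - mu * c_norm          # directional derivative of phi1
    alpha = 1.0
    for _ in range(30):               # cap the number of backtracks
        x_trial = x + alpha * p
        f_trial, _ = f(x_trial)
        phi_trial = f_trial + mu * np.abs(c(x_trial)).sum()
        if phi_trial <= phi0 + eta * alpha * D:
            break
        alpha *= rho
    return alpha
```

On the toy problem f(x) = ‖x‖² with constraint x₁ + x₂ = 1 and the full SQP step from x_k = (3, −1), the unit step already gives sufficient decrease, so α_k = 1 is accepted.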

2. Trust region SQP method
Problem modification:
- Relax the constraints (r_k = 0 in the original algorithm).
- Add the trust-region radius as a constraint:
  min_p  f_k + ∇f_k^T p + (1/2) p^T ∇²_xx L_k p
  s.t.   A_k p + c_k = r_k,  ‖p‖ ≤ Δ_k.
There are smart ways to choose r_k. For example, solve
  min_v ‖A_k v + c_k‖₂²  s.t.  ‖v‖₂ ≤ 0.8 Δ_k
and set r_k = A_k v_k + c_k.

3. Nonlinear gradient projection
Let B_k be an s.p.d. approximation to ∇²f(x_k). Solve the bound-constrained model
  min_x  q_k(x) = f_k + ∇f_k^T (x − x_k) + (1/2)(x − x_k)^T B_k (x − x_k)
  s.t.   l ≤ x ≤ u.
The step direction is p_k = x − x_k.
- Combined with line search: x_{k+1} = x_k + α_k p_k, choosing α_k s.t. f(x_{k+1}) ≤ f(x_k) + η α_k ∇f_k^T p_k.
- Combined with trust-region bounds ‖p_k‖ ≤ Δ_k, the model becomes
  min_x  q_k(x)  s.t.  max(l, x_k − Δ_k e) ≤ x ≤ min(u, x_k + Δ_k e).
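The key primitive here is the projection onto the box [l, u], which is a componentwise clip. The following is a simplified sketch of a single projected step (a scaled steepest-descent trial with a diagonal B_k), not the full piecewise Cauchy-point computation; all names are this example's own:

```python
import numpy as np

def projected_step(x, grad, B_diag, l, u, alpha):
    """One gradient-projection style step for a bound-constrained
    quadratic model: move along the scaled negative gradient, then
    project back onto the box [l, u].  B_diag plays the role of a
    diagonal positive-definite B_k."""
    trial = x - alpha * grad / B_diag     # scaled steepest-descent trial
    x_proj = np.clip(trial, l, u)         # projection onto the box
    return x_proj, x_proj - x             # new point and step p_k
```

For a trust-region variant, one would simply tighten the box to [max(l, x_k − Δ_k e), min(u, x_k + Δ_k e)] before clipping.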