CS5321 Numerical Optimization

Lecture 19: Interior-Point Methods for Nonlinear Programming (Barrier Methods)

Problem formulation

The original problem is

$$\min_x \; f(x) \quad \text{s.t.} \quad c_E(x) = 0, \quad c_I(x) - s = 0, \quad s \ge 0,$$

where $c_E$ is the vector of $c_i$, $i \in E$; $c_I$ is the vector of $c_i$, $i \in I$; and $s$ is the vector of slack variables. Add barrier functions for the inequality constraints:

$$\min_{x,s} \; f(x) - \mu \sum_{i=1}^{m} \ln s_i \quad \text{s.t.} \quad c_E(x) = 0, \quad c_I(x) - s = 0.$$

Here $\mu > 0$ is the barrier parameter, which may be reduced iteratively.
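
As a concrete illustration, here is a minimal Python sketch of the barrier reformulation on a toy problem; the problem data, helper names, and the schedule of $\mu$ values are illustrative assumptions, not from the slides.

```python
import numpy as np

# Toy problem (illustrative, not from the slides):
#   min  x1 + x2   s.t.  4 - x1^2 - x2^2 >= 0,
# written in slack form as c_I(x) - s = 0, s >= 0.

def f(x):
    return x[0] + x[1]

def c_I(x):
    return np.array([4.0 - x[0]**2 - x[1]**2])

def barrier_objective(x, s, mu):
    """phi(x, s) = f(x) - mu * sum_i ln(s_i); requires s > 0."""
    return f(x) - mu * np.sum(np.log(s))

x = np.array([-1.0, -1.0])
s = c_I(x)                       # a strictly feasible slack: s > 0 here
for mu in (1.0, 0.1, 0.01):      # the barrier parameter is reduced iteratively
    print(mu, barrier_objective(x, s, mu))
```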

KKT conditions

The Lagrangian of the barrier problem is

$$L(x, s, y, z) = f(x) - \mu \sum_{i=1}^{m} \ln s_i - y^T c_E(x) - z^T \big(c_I(x) - s\big),$$

where $y$ and $z$ are the Lagrange multipliers. Let $A_E$ and $A_I$ denote the Jacobians of $c_E$ and $c_I$. The KKT conditions of the barrier problem are

$$\nabla f(x) - A_E^T y - A_I^T z = 0, \qquad Sz - \mu e = 0, \qquad c_E(x) = 0, \qquad c_I(x) - s = 0,$$

where $S = \mathrm{diag}(s)$ and $e = (1, \dots, 1)^T$.
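
A small sketch of evaluating the residual of these KKT conditions with NumPy; the function name and the convention of passing pre-evaluated arrays are assumptions made for illustration.

```python
import numpy as np

def kkt_residual(grad_f, A_E, A_I, c_E, c_I, s, y, z, mu):
    """Stacked residual of the barrier problem's KKT conditions.
    All arguments are numpy arrays already evaluated at (x, s, y, z)."""
    r_dual = grad_f - A_E.T @ y - A_I.T @ z   # stationarity
    r_comp = s * z - mu * np.ones_like(s)     # perturbed complementarity Sz = mu*e
    return np.concatenate([r_dual, r_comp, c_E, c_I - s])
```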

Line-search algorithm

Solve the KKT conditions by Newton's method. The primal-dual system is

$$\begin{bmatrix} \nabla_{xx}^2 L & 0 & -A_E^T & -A_I^T \\ 0 & Z & 0 & S \\ A_E & 0 & 0 & 0 \\ A_I & -I & 0 & 0 \end{bmatrix} \begin{bmatrix} p_x \\ p_s \\ p_y \\ p_z \end{bmatrix} = - \begin{bmatrix} \nabla f - A_E^T y - A_I^T z \\ Sz - \mu e \\ c_E \\ c_I - s \end{bmatrix}.$$

Update $(x, s, y, z) \mathrel{+}= (\alpha_x p_x, \alpha_s p_s, \alpha_y p_y, \alpha_z p_z)$ and $\mu_k \to \mu_{k+1}$. The fraction-to-the-boundary rule: for $\tau \in (0,1)$,

$$\alpha_s^{\max} = \max\{\alpha \in (0,1] : s + \alpha p_s \ge (1-\tau)s\}, \qquad \alpha_z^{\max} = \max\{\alpha \in (0,1] : z + \alpha p_z \ge (1-\tau)z\}.$$
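
The fraction-to-the-boundary rule has a simple closed form; below is a sketch (the value $\tau = 0.995$ is a typical choice, not taken from the slides).

```python
import numpy as np

def fraction_to_boundary(v, p, tau=0.995):
    """Largest alpha in (0, 1] with v + alpha * p >= (1 - tau) * v,
    for a strictly positive vector v (the slacks s or multipliers z)."""
    neg = p < 0                    # only components that shrink v can bind
    if not np.any(neg):
        return 1.0
    return min(1.0, float(np.min(-tau * v[neg] / p[neg])))

# Example: the first slack limits the step to tau * 1/4.
print(fraction_to_boundary(np.array([1.0, 2.0]), np.array([-4.0, 1.0])))
```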

Solving the primal-dual system

Let $\Sigma = S^{-1}Z$ and rewrite the primal-dual system (dividing the second block row by $S$ and flipping the signs of $p_y$ and $p_z$) as

$$\begin{bmatrix} \nabla_{xx}^2 L & 0 & A_E^T & A_I^T \\ 0 & \Sigma & 0 & -I \\ A_E & 0 & 0 & 0 \\ A_I & -I & 0 & 0 \end{bmatrix} \begin{bmatrix} p_x \\ p_s \\ -p_y \\ -p_z \end{bmatrix} = - \begin{bmatrix} \nabla f - A_E^T y - A_I^T z \\ z - \mu S^{-1} e \\ c_E \\ c_I - s \end{bmatrix}.$$

The matrix becomes symmetric. Eliminating $p_s$ (and $p_z$) reformulates the problem as a smaller condensed system in $(p_x, p_y)$ with the block $\nabla_{xx}^2 L + A_I^T \Sigma A_I$ in the upper-left corner.
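
A dense-matrix sketch of forming $\Sigma$ and the condensed system; this is my reading of the elimination step, and a production solver would instead factor the symmetric indefinite matrix sparsely.

```python
import numpy as np

def condensed_kkt(H, A_E, A_I, s, z):
    """Condensed matrix in (p_x, -p_y) after eliminating p_s and p_z:
        [ H + A_I^T Sigma A_I   A_E^T ]
        [ A_E                   0     ]
    with Sigma = S^{-1} Z (a dense sketch for small problems)."""
    Sigma = np.diag(z / s)
    G = H + A_I.T @ Sigma @ A_I
    mE = A_E.shape[0]
    return np.block([[G, A_E.T],
                     [A_E, np.zeros((mE, mE))]])
```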

Avoid singularity

Singularity may be caused by $\Sigma$ (some elements go to $0$ or $\infty$) or by ill-conditioning of $\nabla_{xx}^2 f$ and $A_I$. Two remedies:

- Use a projected Hessian and a modified $\Sigma$.
- Add a diagonal shift, e.g. (with small $\delta, \gamma > 0$, writing $G$ for the condensed Hessian block)

$$\begin{bmatrix} G + \delta I & A_E^T \\ A_E & -\gamma I \end{bmatrix}.$$
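
A crude sketch of the diagonal-shift safeguard: try to solve, and grow $\delta$ until the shifted matrix is usable. Real implementations instead test the matrix inertia during an $LDL^T$ factorization; the retry loop and initial shift value here are illustrative assumptions.

```python
import numpy as np

def solve_with_shift(G, A_E, rhs, gamma=1e-8):
    """Solve the shifted system [[G + delta*I, A_E^T], [A_E, -gamma*I]] w = rhs,
    increasing delta whenever the factorization fails (a crude safeguard)."""
    n, mE = G.shape[0], A_E.shape[0]
    delta = 0.0
    while True:
        K = np.block([[G + delta * np.eye(n), A_E.T],
                      [A_E, -gamma * np.eye(mE)]])
        try:
            return np.linalg.solve(K, rhs)
        except np.linalg.LinAlgError:
            delta = 1e-4 if delta == 0.0 else 10.0 * delta  # grow the shift
```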

Barrier parameter and step length

The barrier parameters $\{\mu_k\}$ need to converge to zero.

- Static rule: $\mu_{k+1} = \sigma_k \mu_k$ for $\sigma_k \in (0,1)$.
- Adaptive methods: in linear programming (Chapter 14), $\mu_{k+1} = \sigma \, s^T z / m$.

Use a merit function to decide whether a step is productive and should be accepted (Chapter 18):

$$\phi_\nu(x, s) = f(x) - \mu \sum_{i=1}^{m} \ln s_i + \nu \, \big\| \big(c_E(x), \; c_I(x) - s\big) \big\|.$$
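
Both rules are one-liners; the sketch below uses illustrative default values for $\sigma$ and $\nu$.

```python
import numpy as np

def mu_adaptive(s, z, sigma=0.2):
    """LP-style adaptive update: mu_{k+1} = sigma * s^T z / m."""
    return sigma * float(s @ z) / len(s)

def merit(f_val, c_E, c_I, s, mu, nu=10.0):
    """phi_nu(x, s): barrier objective plus nu times the constraint violation."""
    violation = np.linalg.norm(np.concatenate([c_E, c_I - s]))
    return f_val - mu * np.sum(np.log(s)) + nu * violation
```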

Trust-region SQP method

Two differences from the line-search approach:

- Solve the primal problem in $(x, s)$ and the dual problem in $(y, z)$ alternately.
- Use the scaling $S^{-1}$ on $p_s$.

The primal problem is a trust-region subproblem of the form

$$\min_{p_x, p_s} \; \nabla f^T p_x + \tfrac{1}{2} p_x^T \nabla_{xx}^2 L \, p_x - \mu e^T S^{-1} p_s + \tfrac{1}{2} p_s^T \Sigma \, p_s$$
$$\text{s.t.} \quad A_E p_x + c_E = r_E, \qquad A_I p_x - p_s + (c_I - s) = r_I, \qquad \|(p_x, S^{-1} p_s)\| \le \Delta,$$

with relaxation residuals $r_E$, $r_I$ defined on the Scaling slide.

Solving the dual problem

Define $\hat{A} = \begin{bmatrix} A_E & 0 \\ A_I & -S \end{bmatrix}$. The dual problem is to solve

$$\hat{A}^T \begin{bmatrix} y \\ z \end{bmatrix} = \begin{bmatrix} \nabla f(x) \\ -\mu e \end{bmatrix}$$

using the least-squares method:

$$\begin{bmatrix} y \\ z \end{bmatrix} = (\hat{A} \hat{A}^T)^{-1} \hat{A} \begin{bmatrix} \nabla f(x) \\ -\mu e \end{bmatrix}.$$

The solution cannot guarantee that $z$ is positive, so $z_i$ is replaced by a small positive number whenever $z_i \le 0$:

$$z_i \leftarrow \min(10^{-3}, \; \mu / s_i).$$
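
A sketch of the least-squares multiplier computation with the positivity safeguard; `np.linalg.lstsq` handles the normal equations implicitly, and the function name is an assumption.

```python
import numpy as np

def multipliers_least_squares(grad_f, A_E, A_I, s, mu):
    """Least-squares (y, z) from A_hat^T [y; z] ~= [grad_f; -mu*e],
    with A_hat = [[A_E, 0], [A_I, -S]], then safeguard z > 0."""
    mE, n = A_E.shape
    mI = A_I.shape[0]
    A_hat = np.block([[A_E, np.zeros((mE, mI))],
                      [A_I, -np.diag(s)]])
    rhs = np.concatenate([grad_f, -mu * np.ones(mI)])
    w, *_ = np.linalg.lstsq(A_hat.T, rhs, rcond=None)
    y, z = w[:mE], w[mE:]
    z = np.where(z <= 0, np.minimum(1e-3, mu / s), z)   # z_i <- min(1e-3, mu/s_i)
    return y, z
```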

Scaling

Let $\tilde{p}_s = S^{-1} p_s$. The primal problem can be rewritten in the scaled variables $(p_x, \tilde{p}_s)$. The relaxation parameters $r_E$ and $r_I$,

$$r_E = A_E(x) v_x + c_E, \qquad r_I = A_I(x) v_x - S \tilde{v}_s + (c_I - s),$$

are computed by solving the normal subproblem

$$\min_{v = (v_x, \tilde{v}_s)} \; \left\| \begin{pmatrix} A_E v_x + c_E \\ A_I v_x - S \tilde{v}_s + c_I - s \end{pmatrix} \right\|_2^2 \quad \text{s.t.} \quad \|v\|_2 \le 0.8\,\Delta, \quad \tilde{v}_s \ge -\tfrac{\tau}{2} e.$$
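
Under my reconstruction of the normal subproblem above, a crude sketch is an unconstrained least-squares step pulled back into the trust region and bound; a real implementation would use a dogleg or CG-based trust-region solver.

```python
import numpy as np

def normal_step(A_E, A_I, s, c_E, c_I_minus_s, Delta, tau=0.995):
    """Approximate normal step v = (v_x, vs_tilde): least-squares minimizer
    of the linearized constraints, truncated to ||v|| <= 0.8*Delta and
    vs_tilde >= -tau/2 (crude pull-back, not the exact solution)."""
    mE, n = A_E.shape
    mI = A_I.shape[0]
    A = np.block([[A_E, np.zeros((mE, mI))],
                  [A_I, -np.diag(s)]])
    r = np.concatenate([c_E, c_I_minus_s])
    v, *_ = np.linalg.lstsq(A, -r, rcond=None)       # unconstrained step
    nv = np.linalg.norm(v)
    if nv > 0.8 * Delta:
        v *= 0.8 * Delta / nv                        # into the trust region
    vs = v[n:]
    bad = vs < -tau / 2.0
    if np.any(bad):
        v *= float(np.min((-tau / 2.0) / vs[bad]))   # enforce the bound
    return v[:n], v[n:]                              # then form r_E, r_I
```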

Convergence theory

Theorem 19.1. Let $\{x_k\}$ be the iterates generated by the basic algorithm with $\{\mu_k\} \to 0$, and suppose $f$ and $c$ are continuously differentiable. Then all limit points $\hat{x}$ of $\{x_k\}$ are feasible. Moreover, if any limit point $\hat{x}$ satisfies the LICQ, then the KKT conditions of the original problem hold at $\hat{x}$.