General Nonlinear Programming (NLP) Software

General Nonlinear Programming (NLP) Software
CAS 737 / CES 735
Kristin Davies, Hamid Ghaffari, Alberto Olvera-Salazar, Voicu Chis
January 12, 2006

Outline
- Intro to NLP
- Examination of: IPOPT, PENNON, CONOPT, LOQO, KNITRO
- Comparison of computational results
- Conclusions

Intro to NLP
The general problem is stated below: either the objective function or some of the constraints may be nonlinear.
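
The slide's formula was an image and did not survive the transcript; a standard statement of the general NLP is:

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & c(x) = 0, \\
                  & g(x) \le 0,
\end{aligned}
```

where f, c, and g are smooth and at least one of them is nonlinear.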

Intro to NLP (cont'd)
Recall: the feasible region of any LP is a convex set, and if the LP has an optimal solution, there is an extreme point of the feasible set that is optimal.
However: even if the feasible region of an NLP is a convex set, the optimal solution might not be an extreme point of the feasible region.
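
A one-variable example makes the contrast concrete:

```latex
\min_{x} \; \left(x - \tfrac{1}{2}\right)^2 \quad \text{s.t.} \quad 0 \le x \le 1 .
```

The feasible set [0, 1] is convex, yet the optimum x* = 1/2 lies in the interior, not at an extreme point (x = 0 or x = 1).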

Intro to NLP (cont'd)
Some major approaches for NLP:
- Interior point methods: use a log-barrier function (a minimal sketch follows this list).
- Penalty and augmented Lagrangian methods: use the idea of a penalty to transform a constrained problem into a sequence of unconstrained problems.
- Generalized reduced gradient (GRG): uses a basic descent algorithm.
- Successive quadratic programming (SQP): solves a quadratic approximation at every iteration.
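
As a flavor of the first approach, here is a minimal log-barrier sketch in Python. This is a toy illustration, not any solver's actual implementation; the objective and constraint in the demo at the bottom are invented for this example:

```python
import numpy as np
from scipy.optimize import minimize

def barrier_solve(f, g_list, x0, mu=1.0, shrink=0.1, tol=1e-8):
    """Toy log-barrier method for: min f(x)  s.t.  g_i(x) <= 0.

    Solves a sequence of unconstrained problems
        min f(x) - mu * sum_i log(-g_i(x))
    for decreasing barrier parameter mu, warm-starting each solve
    from the previous solution. x0 must be strictly feasible.
    """
    x = np.asarray(x0, dtype=float)
    while mu > tol:
        def phi(z):
            gz = np.array([g(z) for g in g_list])
            if np.any(gz >= 0):              # outside the barrier's domain
                return np.inf
            return f(z) - mu * np.sum(np.log(-gz))
        x = minimize(phi, x, method="Nelder-Mead").x  # derivative-free for simplicity
        mu *= shrink                          # tighten the barrier
    return x

# Demo: min (x0-1)^2 + (x1-2)^2  s.t.  x0 + x1 <= 2  (optimum near (0.5, 1.5))
x_opt = barrier_solve(lambda x: (x[0] - 1)**2 + (x[1] - 2)**2,
                      [lambda x: x[0] + x[1] - 2.0],
                      x0=[0.0, 0.0])
print(x_opt)
```

Warm-starting each unconstrained solve from the previous solution mirrors how real interior-point codes trace the central path as the barrier parameter decreases.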

Summary of NLP Solvers

IPOPT SOLVER (Interior Point OPTimizer)
Creators: Andreas Wächter and L. T. Biegler at CMU (~2002)
Aims: solver for large-scale nonlinear optimization problems
Applications: general nonlinear optimization; process engineering, DAE/PDE systems, process design and operations, nonlinear model predictive control, design under uncertainty

IPOPT SOLVER (Interior Point OPTimizer)
Input format: can be linked to Fortran and C code; interfaces for MATLAB and AMPL.
Language / OS: Fortran 77, C++ (recent version IPOPT 3.x); Linux/UNIX platforms and Windows.
Commercial/Free: released as open-source code under the Common Public License (CPL); available from the COIN-OR repository.

IPOPT SOLVER (Interior Point OPTimizer)
Key claims:
- Global convergence by using a line search; finds a KKT point or a point that locally minimizes infeasibility.
- Exploits exact second derivatives, e.g. via AMPL (automatic differentiation); if these are not available, uses a quasi-Newton (BFGS) approximation.
- Exploits sparsity of the KKT matrix.
- IPOPT has a version to solve problems with MPEC constraints (IPOPT-C).

IPOPT SOLVER (Interior Point OPTimizer)
Algorithm: interior point method with a novel line-search filter. In the optimization problem, the bounds are replaced by a logarithmic barrier term; the method solves a sequence of barrier problems (the outer loop) for decreasing values of the barrier parameter μ.
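
In standard notation (a reconstruction; the slide's formula was an image), the barrier problem for a fixed μ is:

```latex
\min_{x} \; \varphi_\mu(x) = f(x) - \mu \sum_{i=1}^{n} \ln x_i
\qquad \text{s.t.} \quad c(x) = 0 ,
```

solved repeatedly as μ is driven toward zero.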

IPOPT SOLVER (Interior Point OPTimizer)
Algorithm (for a fixed value of μ): solve the barrier problem.
- Search direction (primal-dual IP): use a Newton method to solve the primal-dual equations.
- Hessian approximation (BFGS update).
- Line search (filter method).
- Feasibility restoration phase.

IPOPT SOLVER (Interior Point OPTimizer)
Barrier NLP (inner loop): the optimality conditions are linearized at a Newton iterate (x_k, λ_k, v_k). The algorithm's core is the solution of this linear system, reconstructed below.
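
In the notation of Wächter & Biegler, with A_k = ∇c(x_k), X_k = diag(x_k), V_k = diag(v_k), e = (1, ..., 1)ᵀ, and W_k the Hessian of the Lagrangian, the primal-dual Newton system is (a reconstruction; the slide's equations were images):

```latex
\begin{pmatrix}
W_k & A_k & -I \\
A_k^{T} & 0 & 0 \\
V_k & 0 & X_k
\end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta \lambda \\ \Delta v \end{pmatrix}
= -
\begin{pmatrix}
\nabla f(x_k) + A_k \lambda_k - v_k \\
c(x_k) \\
X_k V_k e - \mu e
\end{pmatrix} .
```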

IPOPT SOLVER (Interior Point OPTimizer)
Algorithm (for a fixed value of μ): line search (filter method). A trial point is accepted if it improves feasibility or if it improves the barrier function. The method assumes the Newton directions are "good", especially when using exact second derivatives.
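
Writing θ(x) = ‖c(x)‖ for the constraint violation and φ_μ for the barrier objective, the acceptance test has the form (a sketch of the Wächter & Biegler condition; γ_θ and γ_φ are small positive constants):

```latex
\theta(x^{+}) \le (1 - \gamma_\theta)\, \theta(x_k)
\qquad \text{or} \qquad
\varphi_\mu(x^{+}) \le \varphi_\mu(x_k) - \gamma_\varphi\, \theta(x_k) .
```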

IPOPT SOLVER (Interior Point OPTimizer)
Line search, feasibility restoration phase: invoked when a new trial point does not provide sufficient improvement.
- Restore feasibility: minimize the constraint violation.
- Force a unique solution: find the closest feasible point by adding a penalty term.
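
Schematically (a sketch; IPOPT's actual restoration problem uses a scaled norm), the restoration phase solves:

```latex
\min_{x} \; \| c(x) \|_1 + \frac{\zeta}{2}\, \| x - x_R \|_2^2 ,
```

where x_R is the iterate at which restoration was triggered and ζ > 0 weights the proximity (penalty) term that forces a unique solution.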

IPOPT SOLVER (Interior Point OPTimizer)
The complexity of the problem increases when complementarity constraints are introduced. The interior point method drives the barrier parameter μ to zero as part of the solution; hence the complementarity constraints are recovered in the limit. The interior point method for NLPs has been extended to handle complementarity problems (Raghunathan et al., 2003), in which each complementarity condition is relaxed as shown below.
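
For a complementarity condition 0 ≤ x ⟂ y ≥ 0, the relaxation takes the form (in the spirit of Raghunathan et al.; δ is a relaxation constant):

```latex
x_i\, y_i \le \delta \mu, \qquad x \ge 0, \quad y \ge 0 ,
```

so that as μ is driven to zero the products x_i y_i vanish and complementarity is recovered in the limit.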

IPOPT SOLVER (Interior Point OPTimizer)
Additional notes: IPOPT 3.x is now programmed in C++ and is the primary NLP solver in an ongoing MINLP project with IBM.
References:
- IPOPT homepage: http://www.coin-or.org/Ipopt/ipopt-fortran.html
- A. Wächter and L. T. Biegler, On the Implementation of a Primal-Dual Interior Point Filter Line Search Algorithm for Large-Scale Nonlinear Programming, Research Report, IBM T. J. Watson Research Center, Yorktown, USA, March 2004 (accepted for publication in Mathematical Programming).

PENNON (PENalty method for NONlinear & semidefinite programming)
Creators: Michal Kocvara & Michael Stingl (~2001)
Aims: NLP, semidefinite programming (SDP), linear & bilinear matrix inequalities (LMI & BMI), second-order conic programming (SOCP)
Applications: general-purpose nonlinear optimization, systems of equations, control theory, economics & finance, structural optimization, engineering

SDP (SemiDefinite Programming)
Minimization of a linear function subject to the constraint that an affine combination of symmetric matrices is positive semidefinite. Such a linear matrix inequality (LMI) defines a convex constraint on x.
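
In symbols, the standard LMI form of an SDP is:

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & c^{T} x \\
\text{s.t.} \quad & F(x) = F_0 + x_1 F_1 + \dots + x_n F_n \succeq 0 ,
\end{aligned}
```

where the F_i are symmetric matrices and F(x) ⪰ 0 is the LMI.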

SDP (SemiDefinite Programming) -always an optimal point on the boundary -boundary consists of piecewise algebraic surfaces

SOCP (Second-Order Conic Programming)
Minimization of a linear function subject to second-order cone constraints. These are called second-order cone constraints because they require affine images of x to lie in the unit second-order cone of dimension k, defined below, which is also called the quadratic, ice-cream, or Lorentz cone.
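
Reconstructed from the standard definitions (the slide's formulas were images):

```latex
\min_{x} \; f^{T} x
\quad \text{s.t.} \quad \| A_i x + b_i \|_2 \le c_i^{T} x + d_i, \quad i = 1, \dots, m ,
```

where each constraint requires (A_i x + b_i, c_iᵀ x + d_i) to lie in a second-order cone; the unit second-order cone of dimension k is

```latex
\mathcal{C}_k = \left\{ \begin{pmatrix} u \\ t \end{pmatrix} : u \in \mathbb{R}^{k-1},\; t \in \mathbb{R},\; \| u \|_2 \le t \right\} .
```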

PENNON (PENalty method for NONlinear & semidefinite programming)
Input format: MATLAB function, routine called from C or Fortran, or stand-alone program with AMPL.
Language: Fortran 77.
Commercial/Free: variety of licenses, ranging from Academic (single user, $460 CDN) to Commercial (company, $40,500 CDN).

PENNON (PENalty method for NONlinear & semidefinite programming)
Key claims:
- First available code for combined NLP, LMI, and BMI constraints.
- Aimed at (very) large-scale problems.
- Efficient treatment of different sparsity patterns in problem data.
- Robust with respect to feasibility of the initial guess.
- Particularly efficient for large convex problems.

PENNON (PENalty method for NONlinear & semidefinite programming)
Algorithm: a generalized version of the augmented Lagrangian method (originally by Ben-Tal & Zibulevsky), built around an augmented problem and the corresponding augmented Lagrangian, both defined on the next slides.

PENNON (PENalty method for NONlinear & semidefinite programming)
The algorithm considers only the inequality constraints from (NLP) and is based on the choice of a penalty function φ_g that penalizes them. The penalty function must satisfy multiple properties so that the original (NLP) has the same solution as the "augmented" problem (NLP_φ) below. [3] Kocvara & Stingl
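
Following [3], with penalty parameters p_i > 0 the augmented problem reads:

```latex
(\mathrm{NLP}_\varphi): \qquad
\min_{x \in \mathbb{R}^n} \; f(x)
\quad \text{s.t.} \quad p_i\, \varphi_g\!\left( \frac{g_i(x)}{p_i} \right) \le 0, \quad i = 1, \dots, m .
```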

PENNON (PENalty method for NONlinear & semidefinite programming)
The algorithm (cont'd): the Lagrangian of (NLP_φ) can be viewed as a (generalized) augmented Lagrangian of (NLP), combining the inequality constraints g_i, the penalty parameters p_i, the Lagrange multipliers u_i, and the penalty function φ_g. [3] Kocvara & Stingl
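
Putting the pieces together (following [3]):

```latex
F(x, u, p) = f(x) + \sum_{i=1}^{m} u_i\, p_i\, \varphi_g\!\left( \frac{g_i(x)}{p_i} \right) .
```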

PENNON (PENalty method for NONlinear & semidefinite programming)
The algorithm steps, following [3] Kocvara & Stingl, are detailed on the next slides: initialization, (approximate) unconstrained minimization, update of the multipliers, and update of the penalty parameter.

PENNON (PENalty method for NONlinear & semidefinite programming)
Algorithm step: initialization. The method can start from an arbitrary primal variable x⁰; initial multiplier values u⁰ are then calculated, and the initial penalty parameter p⁰ typically lies between 10 and 10000. [3] Kocvara & Stingl

PENNON (PENalty method for NONlinear & semidefinite programming)
Algorithm step: (approximate) unconstrained minimization of the augmented Lagrangian, performed either by Newton with line search or by a trust-region method, and stopped when the gradient norm ‖∇_x F(x, u, p)‖ is sufficiently small. [3] Kocvara & Stingl

PENNON (PENalty method for NONlinear & semidefinite programming)
Algorithm step: update of the multipliers. The update is restricted so that the ratio of new to old multipliers stays within [κ, 1/κ] for some positive constant κ < 1: if the lower bound is violated, the new multiplier is set to κ times the old one; if the upper bound is violated, it is set to the old one divided by κ. [3] Kocvara & Stingl
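
The underlying update is the penalty/barrier multiplier formula of [4]; before the safeguard above is applied, the new multipliers are:

```latex
u_i^{k+1} = u_i^{k}\, \varphi_g'\!\left( \frac{g_i(x^{k+1})}{p_i^{k}} \right), \quad i = 1, \dots, m .
```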

PENNON (PENalty method for NONlinear & semidefinite programming)
Algorithm step: update of the penalty parameter. No update during the first 3 iterations; afterwards, the parameter is updated by a constant factor dependent on the initial penalty parameter. The penalty update is stopped once p_eps (10⁻⁶) is reached. [3] Kocvara & Stingl
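
Assembled, the steps above give an augmented-Lagrangian loop of the following shape. This is a toy Python sketch, not PENNON itself: it uses a single penalty parameter p rather than per-constraint p_i, a derivative-free inner solver, illustrative constants, and the quadratic-logarithmic penalty function discussed on the next slide; the demo problem at the bottom is invented for this example:

```python
import numpy as np
from scipy.optimize import minimize

def phi(t):
    """Quadratic-logarithmic penalty function (one form from Ben-Tal & Zibulevsky)."""
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    quad = t >= -0.5
    out[quad] = 0.5 * t[quad] ** 2 + t[quad]
    out[~quad] = -0.25 * np.log(-2.0 * t[~quad]) - 0.375
    return out

def dphi(t):
    """Derivative of phi (continuous at the branch point t = -1/2)."""
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    quad = t >= -0.5
    out[quad] = t[quad] + 1.0
    out[~quad] = -1.0 / (4.0 * t[~quad])
    return out

def pennon_style_solve(f, g_list, x0, p=10.0, iters=25):
    """Toy generalized augmented-Lagrangian loop for: min f(x) s.t. g_i(x) <= 0."""
    x = np.asarray(x0, dtype=float)
    u = np.ones(len(g_list))                    # initial multipliers
    for k in range(iters):
        def F(z):                               # augmented Lagrangian F(z, u, p)
            gz = np.array([g(z) for g in g_list])
            return f(z) + np.sum(u * p * phi(gz / p))
        x = minimize(F, x, method="Nelder-Mead").x  # approximate inner minimization
        g_x = np.array([g(x) for g in g_list])
        u = u * dphi(g_x / p)                   # multiplier update: u_i <- u_i * phi'(g_i/p)
        if k >= 3:                              # no penalty update in the first iterations
            p *= 0.7                            # then shrink by a constant factor
    return x, u

# Demo: min (x0-1)^2 + (x1-2)^2  s.t.  x0 + x1 <= 2  (optimum near (0.5, 1.5))
x_opt, u_opt = pennon_style_solve(lambda x: (x[0] - 1)**2 + (x[1] - 2)**2,
                                  [lambda x: x[0] + x[1] - 2.0],
                                  x0=[0.0, 0.0])
print(x_opt, u_opt)
```

At a solution with an active constraint, g_i(x*)/p ≈ 0 and φ'(0) = 1, so the multipliers stabilize at the KKT multipliers; this is the mechanism that lets the penalty parameter stay bounded away from zero, unlike a pure penalty method.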

PENNON (PENalty method for NONlinear & semidefinite programming)
Choice of penalty function: the most efficient penalty function for convex NLP is the quadratic-logarithmic function shown below. [4] Ben-Tal & Zibulevsky
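
One standard form of this function from [4], chosen so that φ_g(0) = 0, φ_g'(0) = 1, and the two branches join smoothly at t = -1/2:

```latex
\varphi_g(t) =
\begin{cases}
\dfrac{1}{2} t^2 + t, & t \ge -\dfrac{1}{2}, \\[1ex]
-\dfrac{1}{4} \ln(-2t) - \dfrac{3}{8}, & t < -\dfrac{1}{2}.
\end{cases}
```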

PENNON (PENalty method for NONlinear & semidefinite programming)
The algorithm's overall stopping criteria are given below. [3] Kocvara & Stingl
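
The criteria compare the objective with the augmented Lagrangian and with the previous iterate; they have roughly the following form (a reconstruction from [3]; ε is a user tolerance):

```latex
\frac{\left| f(x^{k}) - F(x^{k}, u^{k}, p) \right|}{1 + \left| f(x^{k}) \right|} \le \varepsilon
\qquad \text{and} \qquad
\frac{\left| f(x^{k}) - f(x^{k-1}) \right|}{1 + \left| f(x^{k}) \right|} \le \varepsilon .
```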

PENNON (PENalty method for NONlinear & semidefinite programming)
Assumptions / warnings:
- More tuning for nonconvex problems is still required.
- Slower at solving linear SDP problems, since the algorithm is generalized rather than specialized to linear SDP.

PENNON (PENalty method for NONlinear & semidefinite programming)
References:
- Kocvara, Michal & Michael Stingl. PENNON: A Code for Convex Nonlinear and Semidefinite Programming. Optimization Methods and Software, 18(3):317-333, 2003.
- Kocvara, Michal & Michael Stingl. PENNON-AMPL User's Guide. www.penopt.com, August 2003.
- Ben-Tal, Aharon & Michael Zibulevsky. Penalty/Barrier Multiplier Methods for Convex Programming Problems. SIAM J. Optim., 7(2):347-366, 1997.
- PENNON homepage: www.penopt.com/pennon.html. Available online January 2007.