Nonlinear Programming Theory. MIT and James Orlin © 2003.

Difficulties of NLP Models. Nonlinear programs contrasted with a linear program. [figure: the two formulations; not preserved in the transcript]

Graphical Analysis of Nonlinear Programs in Two Dimensions: An Example.
Minimize [objective not preserved in the transcript]
subject to (x − 8)² + (y − 9)² ≤ 49
x ≥ 2
x ≤ 13
x + y ≤ 24

Where is the optimal solution? [figure: the feasible region and objective isocontours] Note: the optimal solution is not at a corner point. It is where the isocontour first hits the feasible region.

Another example: Minimize (x − 8)² + (y − 8)². Then the global unconstrained minimum is also feasible. The optimal solution is not on the boundary of the feasible region.
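
This second example is easy to check numerically. Below is a minimal sketch, assuming SciPy is available; the starting point x0 and the choice of the SLSQP method are illustrative, not part of the original slides:

```python
from scipy.optimize import minimize

# Minimize (x-8)^2 + (y-8)^2 over the feasible region from the
# previous slide; SLSQP 'ineq' constraints are written as g(v) >= 0.
obj = lambda v: (v[0] - 8)**2 + (v[1] - 8)**2
cons = [
    {"type": "ineq", "fun": lambda v: 49 - (v[0] - 8)**2 - (v[1] - 9)**2},
    {"type": "ineq", "fun": lambda v: v[0] - 2},          # x >= 2
    {"type": "ineq", "fun": lambda v: 13 - v[0]},         # x <= 13
    {"type": "ineq", "fun": lambda v: 24 - v[0] - v[1]},  # x + y <= 24
]
res = minimize(obj, x0=[5.0, 5.0], method="SLSQP", constraints=cons)
print(res.x)  # ~ (8, 8): the unconstrained minimum is already feasible
```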

Local vs. Global Optima. There may be several locally optimal solutions.
[figure: z = f(x) on 0 ≤ x ≤ 1, with local maxima at points A, B, and C]
max f(x) s.t. 0 ≤ x ≤ 1
Def'n: Let x be a feasible solution. Then:
– x is a global max if f(x) ≥ f(y) for every feasible y.
– x is a local max if f(x) ≥ f(y) for every feasible y sufficiently close to x (i.e., x_j − ε ≤ y_j ≤ x_j + ε for all j and some small ε > 0).

When is a locally optimal solution also globally optimal? For minimization problems:
– The objective function is convex.
– The feasible region is convex.

Convexity and Extreme Points. We say that a set S is convex if, for every two points x and y in S and for every real number λ in [0, 1], λx + (1 − λ)y ∈ S. The feasible region of a linear program is convex. We say that an element w ∈ S is an extreme point (vertex, corner point) if w is not the midpoint of any line segment contained in S.
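
The definition can be illustrated with a quick numerical spot-check. A sketch, where the unit-disk set, the sample points, and the helper name are all hypothetical examples rather than content from the slides:

```python
import random

# Sample set S = unit disk {(x, y) : x^2 + y^2 <= 1}, which is convex.
in_S = lambda p: p[0]**2 + p[1]**2 <= 1.0

def convexity_spot_check(in_S, points_in_S, trials=1000):
    """Sample pairs x, y in S and lambdas in [0, 1]; report whether every
    convex combination lambda*x + (1-lambda)*y stayed inside S."""
    for _ in range(trials):
        x, y = random.sample(points_in_S, 2)
        lam = random.random()
        z = (lam*x[0] + (1-lam)*y[0], lam*x[1] + (1-lam)*y[1])
        if not in_S(z):
            return False
    return True

pts = [(0.5, 0.5), (-0.6, 0.2), (0.0, -0.9), (0.3, -0.4)]
print(convexity_spot_check(in_S, pts))  # True: no violation found
```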

On convex feasible regions. If all constraints are linear, then the feasible region is convex.

On Convex Feasible Regions. The intersection of convex regions is convex.

MIT and James Orlin © S Recognizing convex sets Rule of thumb: suppose for all x, y  S the midpoint of x and y is in S. Then S is convex. x y It is convex if the entire line segment is always in S. (x+y)/2

Which are convex? [figure: candidate regions A, B, C, and D, together with combinations of B and C]

Joining two points on a curve. Consider the line segment joining two points (y, f(y)) and (z, f(z)) on the graph of a function f, and let g denote the linear interpolation along it:
g(λy + (1 − λ)z) = λf(y) + (1 − λ)f(z) for 0 ≤ λ ≤ 1.
Then g(y) = f(y), g(z) = f(z), and, e.g., g(y/2 + z/2) = f(y)/2 + f(z)/2.
[figure: the chord over f between y and z, with the midpoint (y + z)/2 at height f(y)/2 + f(z)/2]

Convex Functions: f(λy + (1 − λ)z) ≤ λf(y) + (1 − λ)f(z) for every y and z and for 0 ≤ λ ≤ 1.
E.g., λ = 1/2: f(y/2 + z/2) ≤ f(y)/2 + f(z)/2.
The line segment joining any two points on the curve lies above the curve.
[figure: a convex f(x) with the chord from (y, f(y)) to (z, f(z))]
We say f is strictly convex if the inequality holds with "<" for 0 < λ < 1 (and y ≠ z).
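
The defining inequality can be spot-checked numerically. A minimal sketch; the helper name looks_convex and the test interval are illustrative choices:

```python
import random

def looks_convex(f, lo, hi, trials=1000, tol=1e-12):
    """Spot-check the inequality
    f(lam*y + (1-lam)*z) <= lam*f(y) + (1-lam)*f(z)
    at random y, z in [lo, hi] and random lam in [0, 1]."""
    for _ in range(trials):
        y, z = random.uniform(lo, hi), random.uniform(lo, hi)
        lam = random.random()
        if f(lam*y + (1-lam)*z) > lam*f(y) + (1-lam)*f(z) + tol:
            return False
    return True

print(looks_convex(lambda x: x*x, -10, 10))   # True
print(looks_convex(lambda x: x**3, -10, 10))  # False: a violation is found
```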

Concave Functions: f(λy + (1 − λ)z) ≥ λf(y) + (1 − λ)f(z) for every y and z and for 0 ≤ λ ≤ 1.
E.g., λ = 1/2: f(y/2 + z/2) ≥ f(y)/2 + f(z)/2.
The line segment joining any two points on the curve lies below the curve.
[figure: a concave f(x) with the chord from (y, f(y)) to (z, f(z))]
We say f is strictly concave if the inequality holds with ">" for 0 < λ < 1 (and y ≠ z).

Classify each function as convex, concave, both, or neither. [figure: several example graphs]

More on convex functions. If f(x) is convex, then f(−x) is convex. [figure: graphs of f(x) and its reflection f(−x)]

More on convex functions. If f(x) is convex, then K − f(x) is concave. [figure: graphs of f(x) and K − f(x)]

More on convex functions. If f(x) is a twice differentiable function of one variable and f″(x) > 0 for all x, then f(x) is convex.
f(x) = x²: f′(x) = 2x, f″(x) = 2.
f(x) = −ln(x) for x > 0: f′(x) = −1/x, f″(x) = 1/x².
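
This second-derivative test is easy to apply numerically with a central finite difference. A sketch; the step size h is a standard but arbitrary choice:

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central finite-difference estimate of f''(x)."""
    return (f(x + h) - 2*f(x) + f(x - h)) / h**2

# f(x) = x^2    : f''(x) = 2     > 0 everywhere    -> convex
# f(x) = -ln(x) : f''(x) = 1/x^2 > 0 for all x > 0 -> convex
print(second_derivative(lambda x: x * x, 3.0))         # ~ 2.0
print(second_derivative(lambda x: -math.log(x), 0.5))  # ~ 4.0
```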

Even more on convex functions. If f(x) is convex and g(x) is convex, then so is f(x) + g(x). [figure: f(x), g(x), and their sum]

Even more on convex functions. If f(x) is convex and g(x) is convex, then so is max[f(x), g(x)]. [figure: f(x), g(x), and their pointwise maximum]
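
As a sanity check, the looks_convex helper sketched earlier agrees with both closure rules; the specific f and g below are arbitrary convex examples, not from the slides:

```python
f = lambda x: (x - 1)**2  # convex quadratic
g = lambda x: abs(x)      # convex absolute value
print(looks_convex(lambda x: f(x) + g(x), -10, 10))      # True
print(looks_convex(lambda x: max(f(x), g(x)), -10, 10))  # True
```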

What functions are convex?
f(x) = 4x + 7 (all linear functions)
f(x) = 4x² − 13 (some quadratic functions)
f(x) = e^x
f(x) = 1/x for x > 0
f(x) = |x|
f(x) = −ln(x) for x > 0

Convex functions vs. convex sets. If y = f(x) is convex, then {(x, y) : f(x) ≤ y} (the region on or above the graph) is a convex set. [figure: the region above a convex f(x)]

Local Minimum Property. A local min of a convex function on a convex feasible region is also a global min. Strict convexity implies that the global minimum is unique.
Given this, the following NLPs can be solved:
– minimization problems with a convex objective function and linear constraints.

Local minimum property. There is a unique local minimum for the function shown. [figure: a strictly convex f(x)] The local minimum is a global minimum.

Local Maximum Property. A local max of a concave function on a convex feasible region is also a global max. Strict concavity implies that the global maximum is unique.
Given this, the following NLPs can be solved:
– maximization problems with a concave objective function and linear constraints.

Local maximum property. There is a unique local maximum for the function shown. [figure: a strictly concave f(x)] The local maximum is a global maximum.

More on local optimality. The techniques for nonlinear optimization usually find local optima. This is useful when a locally optimal solution is also globally optimal; it is not so useful in many other situations. Conclusion: if you solve an NLP, try to find out how good the locally optimal solutions are.

Finding a local optimum for a single-variable NLP. Solving NLPs with one variable:
max f(θ) s.t. a ≤ θ ≤ b.
The optimal solution θ* is either a boundary point or satisfies f′(θ*) = 0 and f″(θ*) < 0.
[figures: three cases, with the maximum at an interior stationary point θ*, at the boundary a, and at the boundary b]
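
Concretely, one can enumerate the candidates, the two endpoints plus the interior stationary points, and keep the best. A sketch on a hypothetical instance (the cubic below is not from the slides):

```python
import numpy as np

# Hypothetical instance: max f(t) = t^3 - 6t^2 + 9t on [0, 4].
f  = lambda t: t**3 - 6*t**2 + 9*t
# f'(t) = 3t^2 - 12t + 9 = 3(t - 1)(t - 3)
stationary = np.roots([3, -12, 9])  # t = 1 and t = 3

candidates = [0.0, 4.0] + [t.real for t in stationary if 0 <= t.real <= 4]
best = max(candidates, key=f)
print(best, f(best))  # both t = 1 (interior, f'' < 0) and t = 4 attain f = 4
```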

Unimodal Functions. A single-variable function f is unimodal if there is at most one local maximum (or at most one local minimum).

Solving Single-Variable NLPs (contd.). If f(θ) is concave (or simply unimodal) and differentiable:
max f(θ) s.t. a ≤ θ ≤ b.
Bisection (or Bolzano) Search:
Step 1. Begin with the region of uncertainty for θ as [a, b]. Evaluate f′(θ¹) at the midpoint θ¹ = (a + b)/2.
Step 2. If f′(θ¹) > 0, then eliminate the interval up to θ¹. If f′(θ¹) < 0, then eliminate the interval beyond θ¹.
Step 3. Evaluate f′ at the midpoint of the new interval. Return to Step 2 until the interval of uncertainty is sufficiently small.
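
A minimal sketch of this bisection search in Python; the example function and the tolerance are illustrative, and the caller supplies the derivative f′:

```python
def bisection_max(fprime, a, b, tol=1e-6):
    """Maximize a differentiable unimodal f on [a, b] by halving the
    interval of uncertainty using the sign of f' at the midpoint."""
    while b - a > tol:
        mid = (a + b) / 2
        if fprime(mid) > 0:    # f increasing: the max lies to the right
            a = mid
        elif fprime(mid) < 0:  # f decreasing: the max lies to the left
            b = mid
        else:                  # f'(mid) == 0: stationary point found
            return mid
    return (a + b) / 2

# Example: f(t) = -(t - 3)^2 is concave with f'(t) = -2(t - 3).
print(bisection_max(lambda t: -2 * (t - 3), 0, 10))  # ~ 3.0
```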

Interval Bisection (Bolzano) Search. Determine, by taking a derivative at the midpoint, whether a local maximum lies to the right or to the left. [figure: the interval of uncertainty halved at each step]

Other Search Techniques. Instead of taking derivatives (which may be computationally intensive), use two function evaluations to determine the updated interval.
Fibonacci Search:
Step 1. Begin with the region of uncertainty for θ as [a, b]. Evaluate f(θ¹) and f(θ²) at two symmetric points θ¹ < θ².
Step 2. If f(θ¹) ≤ f(θ²), then eliminate the interval up to θ¹. If f(θ¹) ≥ f(θ²), then eliminate the interval beyond θ².
Step 3. Select a new point symmetric to the point already inside the new interval, rename the two points θ¹ and θ² so that θ¹ < θ², and evaluate f(θ¹) and f(θ²). Return to Step 2 until the interval is sufficiently small.
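
A runnable sketch of this derivative-free idea. Rather than exact Fibonacci ratios, the version below uses golden-section placement, the limiting case of Fibonacci search: the two interior points stay symmetric, so only one new function evaluation is needed per iteration:

```python
import math

def golden_section_max(f, a, b, tol=1e-6):
    """Maximize a unimodal f on [a, b] with two symmetric interior points."""
    invphi = (math.sqrt(5) - 1) / 2   # 1/phi ~ 0.618
    x1 = b - invphi * (b - a)         # left interior point
    x2 = a + invphi * (b - a)         # right interior point
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                   # the max cannot lie in [a, x1]
            a, x1, f1 = x1, x2, f2    # old right point becomes new left point
            x2 = a + invphi * (b - a)
            f2 = f(x2)
        else:                         # the max cannot lie in [x2, b]
            b, x2, f2 = x2, x1, f1    # old left point becomes new right point
            x1 = b - invphi * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# Example: maximize the concave f(t) = -(t - 3)^2 on [0, 10].
print(golden_section_max(lambda t: -(t - 3)**2, 0, 10))  # ~ 3.0
```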

On Fibonacci search. 1, 1, 2, 3, 5, 8, 13, 21, 34, …
At iteration 1, the length of the search interval is the k-th Fibonacci number, for some k.
At iteration j, the length of the search interval is the (k − j + 1)-st Fibonacci number.
The technique converges to the optimum when the function is unimodal.

Finding a local maximum using Fibonacci Search. [figure: at each iteration, the region "where the maximum may be" shrinks; its size is the current length of the search interval]

The search finds a local maximum, but not necessarily a global maximum. [figure: a multimodal function on which the search settles at a non-global local maximum]

Number of function evaluations in Fibonacci Search. As each new point is chosen symmetrically, the length l_k of successive search intervals satisfies l_k = l_{k+1} + l_{k+2}. Solving for these lengths given a final interval length of l_n = 1 gives the Fibonacci numbers: 1, 2, 3, 5, 8, 13, 21, 34, …
Thus, if the initial interval has length 34, it takes 8 function evaluations to reduce the interval length to 1.
Remark: if the function is concave or unimodal, then Fibonacci search converges to the global maximum.
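
The interval-length recursion makes the evaluation count easy to compute. A small sketch reproducing the slide's count; the function name is illustrative:

```python
def fib_evaluations(initial_length, final_length=1.0):
    """Number of function evaluations Fibonacci search needs to shrink the
    interval of uncertainty by the ratio initial/final, following the
    slide's interval lengths 1, 2, 3, 5, 8, 13, 21, 34, ..."""
    lengths = [1.0, 2.0]
    while lengths[-1] < initial_length / final_length:
        lengths.append(lengths[-1] + lengths[-2])
    return len(lengths)

print(fib_evaluations(34))  # 8, matching the slide's example
```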

NLP Summary. Convex and concave functions, as well as convex sets, are important properties. Bolzano and Fibonacci search techniques are used to optimize single-variable unimodal functions.