Lecture 9 – Nonlinear Programming Models

Topics
- Convex sets and convex programming
- First-order optimality conditions
- Examples
- Problem classes

General NLP
Minimize f(x)
s.t. gi(x) (≤, ≥, or =) bi, i = 1,…,m
where x = (x1,…,xn)T is the n-dimensional vector of decision variables, f(x) is the objective function, the gi(x) are the constraint functions, and the bi are fixed, known constants.
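As a point of reference (not part of the original slides), here is a minimal sketch of how a small instance of this general form can be handed to an off-the-shelf NLP solver. The objective, constraints, and the choice of SciPy's SLSQP method are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance of the general form:
#   minimize   f(x) = (x1 - 3)^2 + (x2 - 2)^2
#   subject to g1(x) = x1^2 + x2^2 <= 5
#              g2(x) = x1 + 2*x2   <= 4
def f(x):
    return (x[0] - 3)**2 + (x[1] - 2)**2

# SLSQP expects inequalities in the form c(x) >= 0,
# so each g_i(x) <= b_i is rewritten as b_i - g_i(x) >= 0.
constraints = [
    {"type": "ineq", "fun": lambda x: 5 - (x[0]**2 + x[1]**2)},
    {"type": "ineq", "fun": lambda x: 4 - (x[0] + 2*x[1])},
]

res = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(res.x, res.fun)   # a local solution; guaranteed global only for convex programs
```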

Convex Sets
Definition: A set S ⊆ ℝⁿ is convex if every point on the line segment connecting any two points x1, x2 ∈ S is also in S. Mathematically, this is equivalent to
x0 = λx1 + (1 – λ)x2 ∈ S for all λ such that 0 ≤ λ ≤ 1.
[Figure: examples of convex and nonconvex sets]

(Nonconvex) Feasible Region
S = {(x1, x2) : (0.5x1 – 0.6)x2 ≤ 1; 2x1² + 3x2² ≥ 27; x1 ≥ 0, x2 ≥ 0}
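A quick numerical probe (a sketch added here, not from the slides) confirms the nonconvexity: the two endpoints below were picked for illustration; both lie in S, yet their midpoint does not.

```python
import numpy as np

def in_S(x1, x2):
    """Membership test for the feasible region S defined on this slide."""
    return bool((0.5*x1 - 0.6)*x2 <= 1
                and 2*x1**2 + 3*x2**2 >= 27
                and x1 >= 0 and x2 >= 0)

a = np.array([4.0, 0.0])      # a point in S
b = np.array([0.0, 3.2])      # another point in S
mid = 0.5*a + 0.5*b           # midpoint of the segment (lambda = 0.5)

print(in_S(*a), in_S(*b), in_S(*mid))   # True True False  =>  S is not convex
```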

Convex Sets and Optimization
Let S = { x ∈ ℝⁿ : gi(x) ≤ bi, i = 1,…,m }.
Fact: If gi(x) is a convex function for each i = 1,…,m, then S is a convex set.
Convex Programming Theorem: Let x ∈ ℝⁿ and let f(x) be a convex function defined over a convex constraint set S. If a finite solution exists to the problem Minimize { f(x) : x ∈ S }, then all local optima are global optima. If f(x) is strictly convex, the optimum is unique.

Convex Programming
Min f(x1,…,xn) s.t. gi(x1,…,xn) ≤ bi, i = 1,…,m
is a convex program if f is convex and each gi is convex.
Max f(x1,…,xn) s.t. gi(x1,…,xn) ≤ bi, i = 1,…,m, x1 ≥ 0,…,xn ≥ 0
is a convex program if f is concave and each gi is convex.

Linearly Constrained Convex Function with Unique Global Maximum
Maximize f(x) = (x1 – 2)² + (x2 – 2)²
subject to –3x1 – 2x2 ≤ –6
–x1 + x2 ≤ 3
x1 + x2 ≤ 7
2x1 – 3x2 ≤ 4

(Nonconvex) Optimization Problem

First-Order Optimality Conditions
Minimize { f(x) : gi(x) ≤ bi, i = 1,…,m }
Lagrangian: L(x, μ) = f(x) + ∑i=1,m μi(gi(x) – bi)
Optimality conditions:
Stationarity: ∇f(x) + ∑i=1,m μi∇gi(x) = 0
Complementarity: μi(gi(x) – bi) = 0, i = 1,…,m
Feasibility: gi(x) ≤ bi, i = 1,…,m
Nonnegativity: μi ≥ 0, i = 1,…,m

Importance of Convex Programs
Commercial optimization software cannot guarantee that a solution to a nonconvex program is globally optimal. NLP algorithms try to find a point where the gradient of the Lagrangian function is zero – a stationary point – and complementary slackness holds.
Given L(x, μ) = f(x) + μ(g(x) – b), we want
∇L(x, μ) = ∇f(x) + μ∇g(x) = 0
μ(g(x) – b) = 0
g(x) – b ≤ 0, μ ≥ 0
For a convex program, all local solutions are global optima.
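A tiny worked check of these conditions, using a hypothetical one-variable program min (x – 2)² s.t. x ≤ 1 whose optimum is x* = 1 with multiplier μ* = 2 (an example added here, not from the slides):

```python
# Hypothetical one-variable program:  minimize (x - 2)^2  subject to  x <= 1
# Candidate solution and multiplier:  x* = 1, mu* = 2
x_star, mu_star = 1.0, 2.0

grad_f = 2*(x_star - 2)      # gradient of the objective at x*
grad_g = 1.0                 # gradient of g(x) = x (the constraint x <= 1)

stationarity    = abs(grad_f + mu_star*grad_g) < 1e-9    # grad L(x*, mu*) = 0
complementarity = abs(mu_star*(x_star - 1)) < 1e-9       # mu*(g(x*) - b) = 0
feasibility     = x_star - 1 <= 1e-9                     # g(x*) <= b
nonnegativity   = mu_star >= 0                           # mu* >= 0

print(stationarity, complementarity, feasibility, nonnegativity)   # all True
```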

Example: Cylinder Design
We want to build a cylinder (with a top and a bottom) of maximum volume such that its surface area is no more than s units.
Max V(r, h) = πr²h
s.t. 2πr² + 2πrh = s, r ≥ 0, h ≥ 0
[Figure: cylinder of radius r and height h]
There are a number of ways to approach this problem. One way is to solve the surface area constraint for h and substitute the result into the objective function.

Solution by Substitution
Solving the constraint for h gives h = (s – 2πr²)/(2πr).
Substituting into the objective: Volume = V = πr²h = rs/2 – πr³.
Setting dV/dr = s/2 – 3πr² = 0 gives r = (s/(6π))^(1/2) and h = s/(2πr) – r = 2(s/(6π))^(1/2) = 2r.
The resulting volume is V = πr²h = 2π(s/(6π))^(3/2).
Is this a global optimal solution?

Test for Convexity
V(r) = rs/2 – πr³, so dV(r)/dr = s/2 – 3πr² and d²V(r)/dr² = –6πr.
Since d²V/dr² ≤ 0 for all r ≥ 0, V(r) is concave on r ≥ 0, so the solution is a global maximum.
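The same algebra can be reproduced symbolically; the sketch below assumes SymPy is available and simply re-derives the stationary point and the sign of the second derivative.

```python
import sympy as sp

r, s = sp.symbols('r s', positive=True)

h = (s - 2*sp.pi*r**2) / (2*sp.pi*r)       # height from the surface-area constraint
V = sp.simplify(sp.pi*r**2*h)              # volume after substitution: r*s/2 - pi*r**3

r_star = sp.solve(sp.diff(V, r), r)[0]     # stationary point, equal to sqrt(s/(6*pi))
print(r_star)
print(sp.simplify(h.subs(r, r_star) / r_star))   # 2, i.e. h = 2r at the optimum
print(sp.simplify(sp.diff(V, r, 2)))             # -6*pi*r, nonpositive for r >= 0
```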

Advertising (with Diminishing Returns)
A company wants to advertise in two regions. The marketing department says that if $x1 is spent in region 1, sales volume will be 6(x1)^(1/2). If $x2 is spent in region 2, sales volume will be 4(x2)^(1/2). The advertising budget is $100.
Model: Max f(x) = 6(x1)^(1/2) + 4(x2)^(1/2)
s.t. x1 + x2 ≤ 100, x1 ≥ 0, x2 ≥ 0
Solution: x1* = 69.2, x2* = 30.8, f(x*) = 72.1
Is this a global optimum?
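A numerical check of this model (a sketch using SciPy's SLSQP solver rather than the Excel add-in shown on the next slide; the starting point is arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

# Maximize 6*sqrt(x1) + 4*sqrt(x2)  s.t.  x1 + x2 <= 100, x1, x2 >= 0.
# SciPy minimizes, so the (concave) objective is negated.
def neg_f(x):
    x = np.clip(x, 0.0, None)          # guard against tiny negative trial points
    return -(6*np.sqrt(x[0]) + 4*np.sqrt(x[1]))

res = minimize(neg_f, x0=[50.0, 50.0], method="SLSQP",
               bounds=[(0, None), (0, None)],
               constraints=[{"type": "ineq", "fun": lambda x: 100 - x[0] - x[1]}])

print(np.round(res.x, 1), round(-res.fun, 1))   # approx. [69.2, 30.8] and 72.1
```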

Excel Add-in Solution

Portfolio Selection with Risky Assets (Markowitz)
Suppose that we may invest in (up to) n stocks. Investors worry about (1) expected gain and (2) risk.
Let rj = random variable associated with the return on stock j
μj = expected return on stock j
σjj = variance of the return on stock j
We are also concerned with the covariance terms σij = cov(ri, rj).
If σij > 0, then the returns on i and j are positively correlated; if σij < 0, the returns are negatively correlated.

Decision variables: xj = number of shares of stock j purchased
Expected return of the portfolio: R(x) = ∑j=1,n μjxj
Variance (measure of risk): V(x) = ∑i=1,n ∑j=1,n σijxixj
Example: With two stocks, σ11 = σ22 = 1, σ12 = σ21 = –1, and x1 = x2 = 1, we get
V(x) = σ11x1x1 + σ12x1x2 + σ21x2x1 + σ22x2x2 = 1 + (–1) + (–1) + 1 = 0
Thus we can construct a "risk-free" portfolio (from the variance point of view) if we can find stocks that are "fully" negatively correlated.
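In matrix form V(x) = xTΣx, so the example above is a two-line check (a sketch; Σ holds the covariance values implied by the computation):

```python
import numpy as np

Sigma = np.array([[ 1.0, -1.0],      # sigma11 = sigma22 = 1
                  [-1.0,  1.0]])     # sigma12 = sigma21 = -1
x = np.array([1.0, 1.0])             # one share of each stock

print(x @ Sigma @ x)                 # 0.0  ->  zero portfolio variance
```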

Nonlinear optimization models …
If the returns on stocks 1 and 2 are perfectly positively correlated, then buying stock 2 is just like buying additional shares of stock 1.
Let pj = price of stock j
b = our total budget
β = risk-aversion factor (when β = 0, risk is not a factor)
Consider 3 different models:
1) Max f(x) = R(x) – βV(x)
s.t. ∑j=1,n pjxj ≤ b, xj ≥ 0, j = 1,…,n
where β ≥ 0 is determined by the decision maker.

2) Max f(x) = R(x)
s.t. V(x) ≤ α, ∑j=1,n pjxj ≤ b, xj ≥ 0, j = 1,…,n
where α ≥ 0 is determined by the investor. Smaller values of α represent greater risk aversion.
3) Min f(x) = V(x)
s.t. R(x) ≥ γ, ∑j=1,n pjxj ≤ b, xj ≥ 0, j = 1,…,n
where γ ≥ 0, the desired rate of return (minimum expectation), is selected by the investor.
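A sketch of model 1 on made-up data; the prices, expected returns, covariance matrix, budget, and β below are hypothetical since the slides give no numbers, and the SciPy solver call is one of several reasonable choices:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data for n = 3 stocks
mu    = np.array([0.10, 0.12, 0.07])           # expected return per share
Sigma = np.array([[0.040, 0.006, 0.000],       # covariance matrix of returns
                  [0.006, 0.090, 0.010],
                  [0.000, 0.010, 0.020]])
p     = np.array([1.0, 1.0, 1.0])              # price per share
b     = 100.0                                  # budget
beta  = 0.005                                  # risk-aversion factor

# Model 1: maximize R(x) - beta*V(x), i.e. minimize beta*V(x) - R(x)
def obj(x):
    return beta * (x @ Sigma @ x) - mu @ x

res = minimize(obj, x0=np.full(3, b / 3), method="SLSQP",
               bounds=[(0, None)] * 3,
               constraints=[{"type": "ineq", "fun": lambda x: b - p @ x}])

x = res.x
print(np.round(x, 2), mu @ x, x @ Sigma @ x)   # holdings, R(x*), V(x*)
```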

Hanging Chain with Rigid Links
[Figure: a chain of n rigid links, each 1 ft long, spanning 10 ft horizontally]
What is the equilibrium shape of the chain?
Decision variables: Let (xj, yj), j = 1,…,n, be the incremental horizontal and vertical displacement of each link, where n ≥ 10.
Constraints:
xj² + yj² = 1, j = 1,…,n (each link has length 1)
x1 + x2 + ••• + xn = 10 (net horizontal displacement)
y1 + y2 + ••• + yn = 0 (net vertical displacement)

Objective: Minimize the chain's potential energy. Assume that the center of mass of each link is at the center of the link. Minimizing the potential energy is then equivalent to minimizing
½y1 + (y1 + ½y2) + (y1 + y2 + ½y3) + ••• + (y1 + ••• + yn–1 + ½yn)
= (n – 1 + ½)y1 + (n – 2 + ½)y2 + (n – 3 + ½)y3 + ••• + (3/2)yn–1 + ½yn
Summary:
Min ∑j=1,n (n – j + ½)yj
s.t. xj² + yj² = 1, j = 1,…,n
x1 + x2 + ••• + xn = 10
y1 + y2 + ••• + yn = 0
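A sketch of solving this model numerically; n = 12 links spanning 10 ft is an arbitrary choice (any n ≥ 10 works), and the starting shape is only a guess to give the local solver something to improve, since (as the next slide notes) the solver may return any one of several local optima.

```python
import numpy as np
from scipy.optimize import minimize

n = 12                                   # number of 1-ft links (assumed; any n >= 10 works)
j = np.arange(1, n + 1)
c = n - j + 0.5                          # objective coefficients (n - j + 1/2)

def energy(z):
    # z packs the decision variables as [x1..xn, y1..yn];
    # the potential energy depends only on the vertical displacements y
    return c @ z[n:]

cons = [
    # each link has length 1: xj^2 + yj^2 = 1
    {"type": "eq", "fun": lambda z: z[:n]**2 + z[n:]**2 - 1.0},
    # net horizontal displacement = 10 ft, net vertical displacement = 0
    {"type": "eq", "fun": lambda z: np.array([z[:n].sum() - 10.0, z[n:].sum()])},
]

# start from a shallow "V" shape so the local solver has something to improve
y0 = np.where(j <= n // 2, -0.4, 0.4)
x0 = np.sqrt(1.0 - y0**2)
res = minimize(energy, np.concatenate([x0, y0]), method="SLSQP",
               constraints=cons, options={"maxiter": 500})

y = res.x[n:]
print(res.success, np.round(np.cumsum(y), 3))   # joint heights along the chain
```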

Is a local optimum guaranteed to be a global optimum? No!
Consider a chain with 4 links: [Figure: two equilibrium configurations of the 4-link chain]
Both of these solutions are local minima. The constraints xj² + yj² = 1 for all j yield a nonconvex feasible region, so there may be several local optima.

Direct Current Network
Problem: Determine the current flows I1, I2,…,I7 so that the total content is minimized.
Content of a branch with voltage function v: G(I) = ∫₀^I v(i) di for I ≥ 0, and G(I) = –∫_I^0 v(i) di for I < 0.

Solution Approach
Electrical engineering: Use Kirchhoff's laws to find the currents when the power source is given.
Operations research: Optimize a performance measure over the network, taking flow balance into account.
Linear resistor: voltage v(I) = IR; content function G(I) = I²R/2
Battery: voltage v(I) = –E; content function G(I) = –EI

Network Flow Model
[Network diagram]
Minimize Z = –100I1 + 5I2² + 5I3² + 10I4² + 10I5²
subject to I1 – I2 = 0, I2 – I3 – I4 = 0, I4 – I5 = 0, I5 + I7 = 0, I3 + I6 – I7 = 0, –I1 – I6 = 0
Solution: I1 = I2 = 50/9, I3 = 40/9, I4 = I5 = 10/9, I6 = –50/9, I7 = –10/9
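A sketch that reproduces the stated currents with a general-purpose solver (the matrix below encodes the six balance equations; SciPy's SLSQP is an illustrative choice, not the method used in the lecture):

```python
import numpy as np
from scipy.optimize import minimize

# Total content: Z = -100*I1 + 5*I2^2 + 5*I3^2 + 10*I4^2 + 10*I5^2
def content(I):
    return -100*I[0] + 5*I[1]**2 + 5*I[2]**2 + 10*I[3]**2 + 10*I[4]**2

# Node balance equations (columns are I1,...,I7)
A = np.array([
    [ 1, -1,  0,  0,  0,  0,  0],   #  I1 - I2 = 0
    [ 0,  1, -1, -1,  0,  0,  0],   #  I2 - I3 - I4 = 0
    [ 0,  0,  0,  1, -1,  0,  0],   #  I4 - I5 = 0
    [ 0,  0,  0,  0,  1,  0,  1],   #  I5 + I7 = 0
    [ 0,  0,  1,  0,  0,  1, -1],   #  I3 + I6 - I7 = 0
    [-1,  0,  0,  0,  0, -1,  0],   # -I1 - I6 = 0
], dtype=float)

res = minimize(content, x0=np.zeros(7), method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda I: A @ I}])
print(np.round(res.x * 9))   # [50, 50, 40, 10, 10, -50, -10], i.e. I1 = 50/9, etc.
```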

NLP Problem Classes
- Constrained vs. unconstrained problems
- Convex programming problems
- Quadratic programming problems: f(x) = a + cTx + ½xTQx, with Q positive semidefinite
- Separable programming problems: f(x) = ∑j=1,n fj(xj)
- Geometric programming problems: g(x) = ∑t=1,T ctPt(x), where Pt(x) = (x1^at1)···(xn^atn), xj > 0
- Equality constrained problems
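For the quadratic class, convexity of f hinges on Q being positive semidefinite, which is easy to check numerically; the matrix below is a made-up example:

```python
import numpy as np

# f(x) = a + c^T x + 0.5 x^T Q x is convex if and only if Q is positive semidefinite
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])            # made-up symmetric Q

eigenvalues = np.linalg.eigvalsh(Q)   # eigvalsh: eigenvalues of a symmetric matrix
print(eigenvalues, bool(np.all(eigenvalues >= 0)))   # all >= 0  ->  Q is PSD
```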

What You Should Know About Nonlinear Programming
- How to identify a convex program.
- How to write out the first-order optimality conditions.
- The difference between a local and a global solution.
- How to classify problems.