OR II GSLM 52800: separable programming and quadratic programming


1 OR II GSLM 52800

2 Outline
- separable programming
- quadratic programming

3 Separable Programs
- an NLP is separable if f and all the g_j are separable functions, i.e., sums of single-variable terms
- 0 ≤ x_i ≤ δ_i, with each δ_i a finite number

4 Idea of Separable Programming
- min f(x), s.t. g_j(x) ≤ 0 for j = 1, …, m
- hard as an NLP, but simple as an LP
- approximate a separable NL program by an LP
- i.e., approximate each non-linear function by a piecewise linear one

5 A Fact About Convex Functions
- f: a convex function
- for any ε > 0, it is possible to find a piecewise linear convex function f_n such that |f − f_n| ≤ ε, i.e., a sequence of piecewise linear convex functions f_n converging to f
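As an illustrative sketch of this fact (the function f(x) = x² on [0, 3] and the break-point grids are my own assumptions, not from the slides), the approximation error of a piecewise linear interpolant shrinks as break points are added:

```python
import numpy as np

def piecewise_linear(f, breakpoints):
    """Return the piecewise linear interpolant of f through the break points."""
    xs = np.asarray(breakpoints, dtype=float)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)

f = lambda x: x**2                      # illustrative convex function
grid = np.linspace(0.0, 3.0, 1001)      # evaluation grid for the error

for n in (4, 8, 16):                    # number of linear segments
    f_n = piecewise_linear(f, np.linspace(0.0, 3.0, n + 1))
    err = np.max(np.abs(f(grid) - f_n(grid)))
    print(f"{n:2d} segments: max |f - f_n| = {err:.4f}")
```

Since f is convex, each interpolant is itself convex and lies on or above f, and refining the grid drives the error toward 0.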

6 Example 6.1: a separable program

7 Example 6.1
- approximating by a piecewise linear function
- two representations: the λ-form and the δ-form
[figure: piecewise linear approximation of f through break points O, A, B, C, plotted as y against x_1]

8 λ-Form
- a piecewise linear function described by its (segment) break points
- any point = a convex combination of the two break points of its linear segment
- λ_i (≥ 0) = the weight of break point i
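A minimal sketch of this bookkeeping (the break points and the function f(x) = x² are illustrative assumptions): only the two break points bracketing x receive non-zero weights, and those weights reproduce both x and the value on the linear segment.

```python
import numpy as np

def lambda_form(x, breakpoints):
    """Weights lam (>= 0, summing to 1) with at most two adjacent non-zeros."""
    lam = np.zeros(len(breakpoints))
    i = np.searchsorted(breakpoints, x) - 1        # segment containing x
    i = max(0, min(i, len(breakpoints) - 2))
    t = (x - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    lam[i], lam[i + 1] = 1.0 - t, t
    return lam

bp = np.array([0.0, 1.0, 2.0, 3.0])    # break points O, A, B, C (illustrative)
vals = bp**2                            # f(x) = x^2 evaluated at the break points
lam = lambda_form(1.4, bp)
print(lam)                              # non-zero weights on A and B only
print(lam @ bp, lam @ vals)             # ~ 1.4 and ~ 2.2 (the segment value)
```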

9 Example 6.1
- the program restated in λ-form (shown as a figure)
- the last-but-one type of constraints is non-linear

10 Fact
- the non-linear constraint: at most two λ_i take non-zero values, and they must be adjacent
- it is possible to have only one λ_i = 1
- for convex f and g_j: no need to impose this non-linear constraint
- because it is non-optimal to have more than two non-zero λ_i, or two non-zero λ_i that are not adjacent

11 Fact
- non-optimal to have more than two non-zero λ_i, or two non-adjacent λ_i
- e.g., f being the objective function
- any convex combination of two non-adjacent break points lies above the piecewise linear function
- similarly, a point formed from three or more non-zero λ_i's lies above the piecewise linear function
- think about λ_A = 0.3, λ_B = 0.4, and λ_C = 0.3

12 Fact
- non-optimal to have more than two non-zero λ_i, or two non-adjacent λ_i
- e.g., g_j being a constraint: g_j(0.3A + 0.7B) ≤ g_j(0.3A + 0.7C) ≤ b_j
- the feasible set allowed by {0.3A + 0.7B} is larger than that allowed by {0.3A + 0.7C}
- so the solution from {0.3A + 0.7C} cannot be the minimum
- a similar argument applies for three or more non-zero λ_i's
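The geometric claim can be checked numerically; in this sketch the convex function f(x) = x² and break points at 0, 1, 2, 3 (labelled O, A, B, C) are my own illustrative choices, with the 0.3/0.7 weights echoing the slide:

```python
import numpy as np

bp = np.array([0.0, 1.0, 2.0, 3.0])   # break points O, A, B, C (illustrative)
vals = bp**2                           # convex f evaluated at the break points
A, B, C = 1, 2, 3                      # indices of break points A, B, C

def combo(i, j, w):
    """x-value and lambda-form value for weights w on point i, 1-w on point j."""
    x = w * bp[i] + (1 - w) * bp[j]
    y = w * vals[i] + (1 - w) * vals[j]
    return x, y

x_adj, y_adj = combo(A, B, 0.3)        # adjacent pair A, B
x_non, y_non = combo(A, C, 0.3)        # non-adjacent pair A, C
# The non-adjacent combination lies strictly above the piecewise linear function:
print(y_non, np.interp(x_non, bp, vals))   # ~ 6.6 vs ~ 6.0
```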

13 Example 6.1: the program becomes a linear program
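The LP of Example 6.1 itself is only shown as a figure, so as a stand-in here is a small separable convex program of my own (min x_1² + x_2² s.t. x_1 + x_2 ≥ 2, 0 ≤ x_i ≤ 3), solved through the λ-form with scipy.optimize.linprog; because the program is convex, no adjacency restriction needs to be enforced (slide 10):

```python
import numpy as np
from scipy.optimize import linprog

bp = np.array([0.0, 1.0, 2.0, 3.0])       # break points for each variable
fv = bp**2                                 # separable objective terms x_i^2

# Variables: lambda weights for x1, then for x2 (8 in total).
c = np.concatenate([fv, fv])

# Each variable's weights form a convex combination (sum to 1).
A_eq = np.zeros((2, 8)); A_eq[0, :4] = 1; A_eq[1, 4:] = 1
b_eq = [1.0, 1.0]

# Constraint x1 + x2 >= 2, rewritten in the lambda variables.
A_ub = -np.concatenate([bp, bp]).reshape(1, 8)
b_ub = [-2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
lam = res.x
x1, x2 = lam[:4] @ bp, lam[4:] @ bp
print(x1, x2, res.fun)
```

The LP returns x_1 = x_2 = 1 with objective 2, which here is also the exact optimum of the original program because the break-point grid passes through it.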

14 Example 6.2: Non-Convex Problem
- min f(x), s.t. 1 ≤ x ≤ 3
- approximating f(x) by a piecewise linear function
- y = λ_A + 6λ_B, x = λ_A + 3λ_B

15 Example 6.2: Non-Convex Problem
- adding a slack variable s, a surplus variable u, and artificial variables a_1 and a_2

16–17 Example 6.2: Non-Convex Problem (simplex tableaux, shown as figures)

18 Example 6.2: Non-Convex Problem
- λ_O has the most negative reduced cost
- but with λ_B in the basis, only λ_A is qualified to enter, not λ_O (restricted basis entry)

19–21 Example 6.2: Non-Convex Problem (simplex iterations, shown as figures)

22 δ-Form
- again, the last constraint is unnecessary for a convex program

23 Quadratic Programming

24 Quadratic Objective Function & Linear Constraints
- min f(x) = c^T x + (1/2) x^T Q x, s.t. Ax ≤ b, x ≥ 0
- Lagrangian function: L(x, μ) = c^T x + (1/2) x^T Q x + μ^T (Ax − b)

25 KKT Conditions
- positive definite Q ⇒ a convex program ⇒ a unique global minimum ⇒ the KKT conditions are sufficient
- otherwise, the KKT conditions are only necessary
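A quick way to check which case applies (the Q matrices below are illustrative assumptions):

```python
import numpy as np

def is_positive_definite(Q):
    """Cholesky factorization succeeds iff the symmetric matrix Q is positive definite."""
    try:
        np.linalg.cholesky(Q)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2.0, 0.0], [0.0, 8.0]])))  # True: convex QP
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False: indefinite
```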

26 KKT Conditions
- c^T + x^T Q + μ^T A ≥ 0, i.e., Qx + A^T μ − y = −c with y ≥ 0
- Ax − b ≤ 0, i.e., Ax + v = b with v ≥ 0
- x^T (c + Qx + A^T μ) = 0, i.e., x^T y = 0
- μ^T (Ax − b) = 0, i.e., μ^T v = 0
- x ≥ 0, μ ≥ 0, y ≥ 0, v ≥ 0
- solve this set of equations by phase-1 of a linear program
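The equation system lends itself to a matrix assembly; a sketch, with the numerical data an assumption chosen to be consistent with the Example 7.1 KKT system on the following slides:

```python
import numpy as np

def kkt_system(Q, c, A, b):
    """Assemble  Qx + A^T mu - y = -c  and  Ax + v = b  as one linear system
    in z = (x, mu, y, v) >= 0; the complementarity conditions x^T y = 0 and
    mu^T v = 0 are handled separately (e.g. by restricted basis entry in phase-1)."""
    n, m = Q.shape[0], A.shape[0]
    M = np.block([
        [Q, A.T,              -np.eye(n),       np.zeros((n, m))],
        [A, np.zeros((m, m)),  np.zeros((m, n)), np.eye(m)],
    ])
    rhs = np.concatenate([-c, b])
    return M, rhs

# Assumed data consistent with Example 7.1 below:
Q = np.array([[2.0, 0.0], [0.0, 8.0]])
c = np.array([-8.0, -16.0])
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([5.0, 3.0])
M, rhs = kkt_system(Q, c, A, b)

z = np.array([3.0, 2.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0])  # (x, mu, y, v) at the optimum
print(np.allclose(M @ z, rhs))   # the optimal point satisfies the equations
```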

27 Example 7.1 (Example of JB)
- min x_1^2 + 4x_2^2 − 8x_1 − 16x_2
- s.t. x_1 + x_2 ≤ 5, x_1 ≤ 3, x_1, x_2 ≥ 0

28 Example 7.1 (Example of JB)
- KKT conditions:
- 2x_1 + μ_1 + μ_2 − y_1 = 8
- 8x_2 + μ_1 − y_2 = 16
- x_1 + x_2 + v_1 = 5
- x_1 + v_2 = 3
- x_1 y_1 = x_2 y_2 = μ_1 v_1 = μ_2 v_2 = 0
- x_1, y_1, x_2, y_2, μ_1, v_1, μ_2, v_2 ≥ 0
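These conditions can be cross-checked numerically. The sketch below assumes the quadratic program min x_1² + 4x_2² − 8x_1 − 16x_2 s.t. x_1 + x_2 ≤ 5, x_1 ≤ 3, x ≥ 0, which is term-by-term consistent with the system above; it solves the QP with scipy.optimize.minimize rather than phase-1 simplex, then backs out the multipliers from stationarity:

```python
from scipy.optimize import minimize

# QP whose KKT conditions are the system above (assumed reconstruction):
#   min x1^2 + 4 x2^2 - 8 x1 - 16 x2
#   s.t. x1 + x2 <= 5, x1 <= 3, x1, x2 >= 0
obj = lambda x: x[0]**2 + 4 * x[1]**2 - 8 * x[0] - 16 * x[1]
cons = [{"type": "ineq", "fun": lambda x: 5 - x[0] - x[1]},
        {"type": "ineq", "fun": lambda x: 3 - x[0]}]

res = minimize(obj, x0=[0.0, 0.0], method="SLSQP",
               constraints=cons, bounds=[(0, None), (0, None)])
x1, x2 = res.x                 # analytically, the optimum is x1 = 3, x2 = 2

# Stationarity: 2 x1 + mu1 + mu2 - y1 = 8 and 8 x2 + mu1 - y2 = 16,
# with y1 = y2 = 0 because x1, x2 > 0 (complementarity x^T y = 0).
mu1 = 16 - 8 * x2              # multiplier of x1 + x2 <= 5 (comes out 0)
mu2 = 8 - 2 * x1 - mu1         # multiplier of x1 <= 3 (comes out 2)
print(x1, x2, mu1, mu2)
```

Both multipliers are non-negative, and both constraints are active (v_1 = v_2 = 0), so complementary slackness holds and the point satisfies the full KKT system.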

29–30 Example 7.1 (Example of JB): solution steps, shown as figures