EE 458 Introduction to Optimization J. McCalley

Electricity markets and tools

Day-ahead market: SCUC and SCED. Real-time market: SCED. Both take the same form:

Minimize f(x)
subject to h(x) = c
           g(x) ≤ b

SCUC: x contains discrete & continuous variables. SCED: x contains only continuous variables.

Optimization Terminology

An optimization problem, also called a mathematical program or a mathematical programming problem:

Minimize f(x)
subject to h(x) = c
           g(x) ≤ b

f(x): objective function
x: decision variables
h(x) = c: equality constraints
g(x) ≤ b: inequality constraints
x*: the solution

Classification of Optimization Problems

Continuous Optimization: Unconstrained Optimization; Bound Constrained Optimization; Derivative-Free Optimization; Global Optimization; Linear Programming; Network Flow Problems; Nondifferentiable Optimization; Nonlinear Programming; Optimization of Dynamic Systems; Quadratically Constrained Quadratic Programming; Quadratic Programming; Second Order Cone Programming; Semidefinite Programming; Semiinfinite Programming.

Discrete and Integer Optimization: Combinatorial Optimization (e.g., the Traveling Salesman Problem); Integer Programming; Mixed Integer Linear Programming; Mixed Integer Nonlinear Programming.

Optimization Under Uncertainty: Robust Optimization; Stochastic Programming; Simulation/Noisy Optimization; Stochastic Algorithms.

Complementarity Constraints and Variational Inequalities: Complementarity Constraints; Game Theory; Linear Complementarity Problems; Mathematical Programs with Complementarity Constraints; Nonlinear Complementarity Problems.

Systems of Equations: Data Fitting/Robust Estimation; Nonlinear Equations; Nonlinear Least Squares; Systems of Inequalities.

Multiobjective Optimization.

Source: http://www.neos-guide.org/NEOS/index.php/Optimization_Tree

Convex functions

Definition #1: A function f(x) is convex on an interval if its second derivative is positive on that interval.

Example: f(x) = x^2 is convex, since f'(x) = 2x and f''(x) = 2 > 0.

Convex functions

The second derivative test is sufficient but not necessary.

Definition #2: A function f(x) is convex if a line drawn between any two points on the function remains on or above the function in the interval between the two points. (See www.ebyte.it/library/docs/math09/AConvexInequality.html.)

Convex functions

Definition #2: A function f(x) is convex if a line drawn between any two points on the function remains on or above the function in the interval between the two points.

Is a linear function convex? The answer is "yes", since a line drawn between any two points on the function remains on the function.
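A quick numerical spot-check of Definition #2 (an illustrative sketch, not part of the original slides): sample random point pairs and weights and verify the chord inequality f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b) for the convex function f(x) = x^2.

f = @(x) x.^2;             % test function (convex)
rng(0);                    % reproducible random draws
ok = true;
for k = 1:1000
    a = 4*rand - 2;        % random point in [-2, 2]
    b = 4*rand - 2;
    t = rand;              % random weight in [0, 1]
    lhs = f(t*a + (1-t)*b);
    rhs = t*f(a) + (1-t)*f(b);
    ok = ok && (lhs <= rhs + 1e-12);   % small tolerance for roundoff
end
fprintf('Chord inequality held in all trials: %d\n', ok);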

Convex Sets

Definition #3: A set C is convex if a line segment between any two points in C lies in C.

Example: Which of the sets below are convex? The set on the left is convex; the set on the right is not.

Convex Sets

Definition #3: A set C is convex if a line segment between any two points in C lies in C.

S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
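Definition #3 can also be spot-checked by sampling, as in the sketch below (the unit disk and an annulus are stand-in sets, not the sets pictured in the slide): for a convex set, the midpoint of any segment between two member points stays in the set; for the annulus, a violating segment is quickly found.

inDisk    = @(p) norm(p) <= 1;                    % convex set
inAnnulus = @(p) norm(p) >= 0.5 && norm(p) <= 1;  % non-convex set
rng(1);
violation = false;
for k = 1:5000
    a = 2*rand(2,1) - 1;                          % random points in [-1,1]^2
    b = 2*rand(2,1) - 1;
    m = (a + b)/2;                                % midpoint of the segment
    if inAnnulus(a) && inAnnulus(b) && ~inAnnulus(m)
        violation = true;                         % the segment leaves the set
        break
    end
end
fprintf('Found a segment leaving the annulus: %d\n', violation);
% Replacing inAnnulus with inDisk never finds a violation: the disk is convex.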

Global vs. local optima

Example: Minimize f(x) = x^2.
Solution: f'(x) = 2x = 0 gives x* = 0. This solution is a local optimum; it is also the global optimum.

Example: Minimize f(x) = x^3 - 17x^2 + 80x - 100.
Solution: f'(x) = 3x^2 - 34x + 80 = 0. Solving gives x = 3.33 and x = 8.
Issue #1: Which is the better solution?
Issue #2: Is the better solution the global solution?

Global vs. local optima

Example: Minimize f(x) = x^3 - 17x^2 + 80x - 100.
Solution: f'(x) = 3x^2 - 34x + 80 = 0. Solving gives x = 3.33 and x = 8.
Issue #1: Which is the better solution? x = 8, since f''(x) = 6x - 34 is positive there (a local minimum) and negative at x = 3.33 (a local maximum).
Issue #2: Is the best solution the global solution? No! The function is unbounded below as x decreases, so the local minimum at x = 8 is not a global minimum.
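We can check this numerically; the sketch below (illustrative, not from the slides) finds the critical points of the cubic with MATLAB's polynomial utilities and classifies each with the second-derivative test.

p   = [1 -17 80 -100];       % coefficients of f(x) = x^3 - 17x^2 + 80x - 100
dp  = polyder(p);            % f'(x)  = 3x^2 - 34x + 80
d2p = polyder(dp);           % f''(x) = 6x - 34
xc  = roots(dp);             % critical points: 8 and 3.3333
for k = 1:numel(xc)
    if polyval(d2p, xc(k)) > 0
        kind = 'local minimum';
    else
        kind = 'local maximum';
    end
    fprintf('x = %.4f: f(x) = %.4f (%s)\n', xc(k), polyval(p, xc(k)), kind);
end
% f(x) -> -Inf as x -> -Inf (e.g., polyval(p, -100) is hugely negative),
% so the local minimum at x = 8 is not a global minimum.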

Convexity & global vs. local optima

When minimizing a function, if we want to be sure that differentiation leads us to a global solution, we need to impose some requirements on our objective function. We also need to impose some requirements on the feasible set S: the set of values the solution x* may take while satisfying the constraints.

Minimize f(x)
subject to h(x) = c
           g(x) ≤ b

Definition: If f(x) is a convex function and S is a convex set, then the above problem is a convex programming problem.

Definition: If f(x) is not a convex function, or S is not a convex set, then the above problem is a non-convex programming problem.

Convex vs. non-convex programming problems

Convex: The desirable quality of a convex programming problem is that any locally optimal solution is also a globally optimal solution. If we have a method of finding a locally optimal solution, that method also finds the globally optimal solution. We address convex programming problems for the remainder of these notes, and beyond.

Non-convex: The undesirable quality of a non-convex programming problem is that a method which finds a locally optimal solution does not necessarily find the globally optimal solution. We will later address a special form of non-convex programming problem.

A convex programming problem

Three cases of increasing generality:
1. Two variables with one equality constraint.
2. Multiple variables with one equality constraint.
3. Multiple variables with multiple equality constraints.

We focus on the first case, but the conclusions we derive also apply to the other two. The benefit of focusing on the first case is that we can visualize it.

Contour maps

Definition: A contour map is a 2-dimensional plane, i.e., a coordinate system in two variables, say x1 and x2, that illustrates curves (contours) of constant functional value f(x1, x2).

Example: Draw the contour map for f(x1, x2) = x1^2 + x2^2:

[X,Y] = meshgrid(-2.0:.2:2.0, -2.0:.2:2.0);
Z = X.^2 + Y.^2;
[c,h] = contour(X,Y,Z);
clabel(c,h);
grid;
xlabel('x1');
ylabel('x2');

Contour maps and 3-D illustrations

Example: Draw the 3-D surface for f(x1, x2) = x1^2 + x2^2:

[X,Y] = meshgrid(-2.0:.2:2.0, -2.0:.2:2.0);
Z = X.^2 + Y.^2;
surfc(X,Y,Z);
xlabel('x1');
ylabel('x2');
zlabel('f(x1,x2)');

Height is f(x). Each contour of fixed value f is the projection onto the x1-x2 plane of a horizontal slice of the 3-D figure taken at a value f above the x1-x2 plane.

Solving a convex program: graphical analysis

Example: Solve this convex program graphically by superimposing the equality constraint on top of the contour plot for f(x1, x2):
1. f(x1, x2) must be minimized, so we would like the solution to be as close to the origin as possible;
2. the solution must lie on the thick line in the right-hand corner of the plot, since this line represents the equality constraint.
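A minimal sketch of this superposition, assuming the objective f(x1, x2) = x1^2 + x2^2 from the earlier contour example and a hypothetical linear constraint x1 + x2 = sqrt(6) (chosen so that the optimal contour works out to f = 3, consistent with the discussion below):

[X,Y] = meshgrid(-2.0:.2:2.0, -2.0:.2:2.0);
Z = X.^2 + Y.^2;
[c,h] = contour(X,Y,Z);
clabel(c,h); grid; hold on;
x1 = -2:.1:2;
plot(x1, sqrt(6) - x1, 'k', 'LineWidth', 2);  % hypothetical constraint line
xlabel('x1'); ylabel('x2'); hold off;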

Solving a convex program: graphical analysis

Solution: Any contour f < 3 does not intersect the equality constraint; any contour f > 3 intersects the equality constraint at two points. The contour f = 3 and the equality constraint just touch each other at the point x*. "Just touch" means that the two curves are tangent to one another at the solution point.

Solving a convex program: graphical analysis

The two curves are tangent to one another at the solution point. Therefore the normal (gradient) vectors of the two curves, at the solution (tangent) point, are parallel. This means the following two vectors are parallel:

∇f(x1, x2) and ∇h(x1, x2)

"Parallel" means that the two vectors have the same direction. We do not know that they have the same magnitude. To account for this, we equate them with a "multiplier" λ:

∇f(x1, x2) = λ∇h(x1, x2)

Solving a convex program: graphical analysis

Moving everything to the left:

∇f(x1, x2) - λ∇h(x1, x2) = 0

Alternately:

∇[f(x1, x2) - λh(x1, x2)] = 0

Performing the gradient operation (taking derivatives with respect to x1 and x2):

∂f/∂x1 - λ ∂h/∂x1 = 0
∂f/∂x2 - λ ∂h/∂x2 = 0

In this problem, we already know the solution, but what if we did not? Could we then use the above equations to find the solution?
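To make these conditions concrete, continue the assumed example f(x1, x2) = x1^2 + x2^2 with the hypothetical constraint h(x1, x2) = x1 + x2 = sqrt(6). Then ∇f = (2x1, 2x2) and ∇h = (1, 1), so the two equations become 2x1 - λ = 0 and 2x2 - λ = 0, forcing x1 = x2 = λ/2. Note that these two equations alone cannot pin down x1, x2, and λ.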

Solving a convex program: analytical analysis

In this problem, we already know the solution, but what if we did not? Could we then use the above equations to find the solution? NO! Because we only have 2 equations, yet 3 unknowns: x1, x2, λ. So we need another equation. Where do we get that equation? Recall our equality constraint, which must be satisfied:

h(x1, x2) - c = 0

Therefore:

∂f/∂x1 - λ ∂h/∂x1 = 0
∂f/∂x2 - λ ∂h/∂x2 = 0
h(x1, x2) - c = 0

Three equations, three unknowns: we can solve.
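As a sketch of how the three equations can be solved in software (an illustration using the assumed f and hypothetical h above, not part of the original slides), MATLAB's Symbolic Math Toolbox handles the system directly:

syms x1 x2 lambda
f = x1^2 + x2^2;              % assumed objective (matches the contour example)
h = x1 + x2;                  % hypothetical equality constraint
c = sqrt(sym(6));             % hypothetical right-hand side, exact sqrt(6)
eq1 = diff(f,x1) - lambda*diff(h,x1) == 0;   % 2*x1 - lambda = 0
eq2 = diff(f,x2) - lambda*diff(h,x2) == 0;   % 2*x2 - lambda = 0
eq3 = h - c == 0;                            % x1 + x2 - sqrt(6) = 0
S = solve([eq1, eq2, eq3], [x1, x2, lambda]);
disp([S.x1, S.x2, S.lambda])  % x1 = x2 = sqrt(6)/2, lambda = sqrt(6)

The resulting optimal value is f(x*) = 6/4 + 6/4 = 3, matching the tangent contour f = 3 described earlier.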

Solving a convex program: analytical analysis

Observation: The three equations are simply the partial derivatives of the function

F(x1, x2, λ) = f(x1, x2) - λ[h(x1, x2) - c]

This is obviously true for the first two equations (the partials with respect to x1 and x2), but it is not so obviously true for the last one. To see it, observe that

∂F/∂λ = -[h(x1, x2) - c] = 0,

which is exactly the equality constraint.

Formal approach to solving our problem

Define the Lagrangian function:

L(x1, x2, λ) = f(x1, x2) - λ[h(x1, x2) - c]

In a convex programming problem, the "first-order conditions" for finding the solution are given by

∂L/∂x1 = 0
∂L/∂x2 = 0
∂L/∂λ = 0

or, more compactly,

∇L(x, λ) = 0, where we have used x = (x1, x2).

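The same first-order conditions can also be solved numerically. Below is a minimal sketch using MATLAB's fsolve (Optimization Toolbox), again with the assumed f(x1, x2) = x1^2 + x2^2 and the hypothetical constraint x1 + x2 = sqrt(6); the vector z packs the unknowns as z = [x1; x2; lambda]:

% gradL(z) = 0 encodes dL/dx1, dL/dx2, and dL/dlambda for this example.
gradL = @(z) [ 2*z(1) - z(3);                 % dL/dx1 = 2*x1 - lambda
               2*z(2) - z(3);                 % dL/dx2 = 2*x2 - lambda
               -(z(1) + z(2) - sqrt(6)) ];    % dL/dlambda = -(h(x) - c)
z0 = [0; 0; 0];                               % starting guess
z  = fsolve(gradL, z0);                       % approx [1.2247; 1.2247; 2.4495]

For this example the system is linear, so fsolve is overkill (z = A\b would do), but the same pattern extends unchanged to nonlinear f and h.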