Classification of optimization problems


Classification of optimization problems

Classification based on constraints:
- Constrained optimization problems
- Unconstrained optimization problems

Classification based on the nature of the design variables:
- Static optimization problems
- Dynamic optimization problems

In static optimization problems the design variables are constants, whereas in dynamic optimization problems each design variable is a function of one or more parameters.

Classification of optimization problems (cont'd)

Classification based on the physical structure of the problem:
- Optimal control problems
- Non-optimal control problems

Classification based on the nature of the equations involved:
- Nonlinear programming problems
- Geometric programming problems
- Quadratic programming problems
- Linear programming problems

Classification of optimization problems (cont'd)

Classification based on the permissible values of the design variables:
- Integer programming problems
- Real-valued programming problems

Classification based on the deterministic nature of the variables:
- Stochastic programming problems
- Deterministic programming problems

Classification of optimization problems (cont'd)

Classification based on the separability of the functions:
- Separable programming problems
- Non-separable programming problems

Classification based on the number of objective functions:
- Single-objective programming problems
- Multiobjective programming problems

Geometric Programming Problem

A geometric programming problem (GMP) is one in which the objective function and the constraints are expressed as posynomials in X.
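As a concrete illustration (my own sketch, not from the slides), here is a minimal evaluator for a posynomial, i.e. a sum of terms ci * x1^a1i * ... * xn^ani with positive coefficients ci and positive variables xj; the coefficient and exponent values below are arbitrary:

```python
import numpy as np

def posynomial(x, coeffs, exponents):
    """Evaluate a posynomial f(X) = sum_i c_i * prod_j x_j**a_ji.

    x         : positive variable values, shape (n,)
    coeffs    : positive coefficients c_i, shape (m,)
    exponents : real exponent matrix a_ji, shape (m, n)
    """
    x = np.asarray(x, dtype=float)
    terms = coeffs * np.prod(x ** exponents, axis=1)
    return terms.sum()

# Example: f(x1, x2) = 2*x1*x2**2 + 3*(x1**-1)*x2, evaluated at (1, 2).
print(posynomial([1.0, 2.0],
                 np.array([2.0, 3.0]),
                 np.array([[1.0, 2.0], [-1.0, 1.0]])))   # 2*4 + 3*2 = 14.0
```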

Geometric Programming Problem (cont'd)

A posynomial in X = (x1, x2, ..., xn) is a function of the form f(X) = c1 x1^a11 x2^a21 ... xn^an1 + ... + cm x1^a1m x2^a2m ... xn^anm, where the coefficients ci are positive, the variables xj take only positive values, and the exponents aji may be any real numbers.

Quadratic Programming Problem

A quadratic programming problem is a nonlinear programming problem with a quadratic objective function and linear constraints. It is usually formulated as follows:

Find X which minimizes f(X) = c + Σi qi xi + Σi Σj Qij xi xj

subject to Σi aij xi = bj, j = 1, 2, ..., m, and xi ≥ 0, i = 1, 2, ..., n,

where c, qi, Qij, aij, and bj are constants.
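A minimal numerical sketch (my own illustration; it uses scipy's general-purpose SLSQP solver rather than a dedicated QP code, and the objective and constraint below are arbitrary examples):

```python
import numpy as np
from scipy.optimize import minimize

# Quadratic objective f(X) = (x1 - 1)**2 + (x2 - 2)**2
# with the linear constraint x1 + x2 = 2 and bounds xi >= 0.
def f(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

cons = ({'type': 'eq', 'fun': lambda x: x[0] + x[1] - 2.0},)
bounds = [(0.0, None), (0.0, None)]            # xi >= 0

res = minimize(f, x0=np.array([0.0, 0.0]), method='SLSQP',
               bounds=bounds, constraints=cons)
print(res.x)                                   # approximately [0.5, 1.5]
```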

Optimal Control Problem

An optimal control (OC) problem is a mathematical programming problem involving a number of stages, where each stage evolves from the preceding stage in a prescribed manner.

It is usually described by two types of variables: the control (design) variables and the state variables. The control variables define the system and govern the evolution of the system from one stage to the next, and the state variables describe the behaviour or status of the system at any stage.

Optimal Control Problem (cont'd)

The problem is to find a set of control or design variables such that the total objective function (also known as the performance index) over all stages is minimized, subject to a set of constraints on the control and state variables. An OC problem can be stated as follows:

Find X which minimizes f(X) = Σi fi(xi, yi), i = 1, 2, ..., l,

subject to the constraints

qi(xi, yi) + yi = yi+1 (the state of stage i + 1), i = 1, 2, ..., l
gj(xj) ≤ 0, j = 1, 2, ..., l
hk(yk) ≤ 0, k = 1, 2, ..., l

where xi is the ith control variable, yi is the ith state variable, and fi is the contribution of the ith stage to the total objective function; gj, hk, and qi are functions of xj, yk, and (xi, yi), respectively, and l is the total number of stages.
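To make the stage structure concrete, here is a small simulation sketch of mine (not from the slides): given controls x1, ..., xl, the state evolves as y(i+1) = yi + qi(xi, yi) while the performance index accumulates fi(xi, yi). The stage-cost and update functions below are arbitrary placeholders:

```python
def evaluate_policy(controls, y0, f_stage, q_stage):
    """Total objective of a multistage (optimal-control-style) system.

    controls : control values x_1 .. x_l, one per stage
    y0       : initial state y_1
    f_stage  : f_i(x, y) -> cost contributed by a stage
    q_stage  : q_i(x, y) -> state increment, so y_{i+1} = y_i + q_i(x_i, y_i)
    """
    y, total = y0, 0.0
    for x in controls:
        total += f_stage(x, y)     # accumulate the performance index
        y = y + q_stage(x, y)      # evolve the state to the next stage
    return total, y

# Example: quadratic stage cost, linear state update.
cost, final_state = evaluate_policy(
    controls=[0.5, 0.2, 0.1],
    y0=1.0,
    f_stage=lambda x, y: x**2 + y**2,
    q_stage=lambda x, y: -0.5 * y + x,
)
print(cost, final_state)
```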

Integer Programming Problem

If some or all of the design variables x1, x2, ..., xn of an optimization problem are restricted to take only integer (or discrete) values, the problem is called an integer programming problem. If all the design variables are permitted to take any real value, the optimization problem is called a real-valued programming problem.
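For very small instances, the integer restriction can be handled by explicit enumeration; a toy sketch of mine with a made-up objective and constraint (practical problems use branch-and-bound or dedicated integer-programming solvers):

```python
from itertools import product

# Maximize 3*x1 + 2*x2 subject to x1 + x2 <= 4, with x1, x2 integers in 0..4.
best_val, best_x = float('-inf'), None
for x1, x2 in product(range(5), repeat=2):
    if x1 + x2 <= 4:                       # feasibility check
        val = 3 * x1 + 2 * x2
        if val > best_val:
            best_val, best_x = val, (x1, x2)

print(best_x, best_val)                    # (4, 0) with value 12
```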

Stochastic Programming Problem

A stochastic programming problem is an optimization problem in which some or all of the parameters (design variables and/or preassigned parameters) are probabilistic (nondeterministic or stochastic). In other words, stochastic programming deals with the solution of optimization problems in which some of the variables are described by probability distributions.
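One common computational device (my illustration of a standard technique, not something stated in the slides) is sample-average approximation: replace an expected-value objective E[f(x, ξ)] by an average over random samples of ξ. The newsvendor-style cost and its parameters below are invented for the example:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
demand = rng.normal(loc=10.0, scale=2.0, size=1000)   # samples of the random parameter

def expected_cost(x):
    # Newsvendor-style cost: overage costs 1 per unit, shortage costs 3 per unit.
    over = np.maximum(x - demand, 0.0)
    short = np.maximum(demand - x, 0.0)
    return np.mean(1.0 * over + 3.0 * short)

res = minimize_scalar(expected_cost, bounds=(0.0, 20.0), method='bounded')
print(res.x)   # order quantity near the 75th percentile of the demand samples
```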

Separable Programming Problem

A function f(X) is said to be separable if it can be expressed as the sum of n single-variable functions f1(x1), f2(x2), ..., fn(xn), that is,

f(X) = Σi fi(xi), i = 1, 2, ..., n

A separable programming problem is one in which the objective function and the constraints are separable, and it can be expressed in standard form as:

Find X which minimizes f(X) = Σi fi(xi)

subject to gj(X) = Σi gij(xi) ≤ bj, j = 1, 2, ..., m,

where bj is a constant.
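A small sketch (mine, with arbitrary example functions) showing how a separable objective decomposes into independent single-variable pieces, each of which can be minimized on its own when there are no coupling constraints:

```python
from scipy.optimize import minimize_scalar

# f(X) = f1(x1) + f2(x2) with f1(x) = (x - 3)**2 and f2(x) = (x + 1)**2.
pieces = [lambda x: (x - 3.0)**2, lambda x: (x + 1.0)**2]

# With no coupling constraints, minimize each single-variable piece separately.
x_star = [minimize_scalar(fi).x for fi in pieces]
print(x_star)   # approximately [3.0, -1.0]
```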

Multiobjective Programming Problem

A multiobjective programming problem can be stated as follows:

Find X which minimizes f1(X), f2(X), ..., fk(X)

subject to gj(X) ≤ 0, j = 1, 2, ..., m,

where f1, f2, ..., fk denote the objective functions to be minimized simultaneously.
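One standard way to trade the objectives off against each other (a sketch of a common technique, not necessarily the approach intended in these slides) is weighted-sum scalarization, which collapses f1, ..., fk into a single objective; the two quadratic objectives below are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0)**2 + x[1]**2       # first objective
f2 = lambda x: x[0]**2 + (x[1] - 1.0)**2       # second objective

def scalarized(x, w):
    # Weighted sum: different weights w trace out different compromise solutions.
    return w * f1(x) + (1.0 - w) * f2(x)

for w in (0.25, 0.5, 0.75):
    res = minimize(scalarized, x0=np.zeros(2), args=(w,))
    print(w, res.x)
```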

CH. 2: Classical Optimization Techniques

Single-variable optimization

These techniques are useful for finding the optimum solutions of continuous and differentiable functions. They are analytical and make use of the techniques of differential calculus to locate the optimum points. Since some practical problems involve objective functions that are not continuous and/or not differentiable, the classical optimization techniques have limited scope in practical applications.

Single-variable optimization

A function of one variable f(x) has a relative or local minimum at x = x* if f(x*) ≤ f(x* + h) for all sufficiently small positive and negative values of h. A point x* is called a relative or local maximum if f(x*) ≥ f(x* + h) for all values of h sufficiently close to zero.

(Figure: a curve with several local minima and maxima, one of the minima being the global minimum.)
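A quick numerical illustration of the distinction (my own sketch, on an arbitrary test function): scan a grid, flag interior points lower than both neighbours as approximate local minima, then take the lowest of these as the global minimum on the interval:

```python
import numpy as np

f = lambda x: x**4 - 3 * x**2 + x          # arbitrary test function with two valleys
xs = np.linspace(-3.0, 3.0, 10001)
ys = f(xs)

# Interior grid points lower than both neighbours are (approximate) local minima.
is_local_min = (ys[1:-1] < ys[:-2]) & (ys[1:-1] < ys[2:])
local_minima = xs[1:-1][is_local_min]
global_minimum = local_minima[np.argmin(f(local_minima))]
print(local_minima, global_minimum)        # two local minima; the global one is near x = -1.3
```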

Classical optimization techniques: single-variable optimization

A function f(x) is said to have a global or absolute minimum at x* if f(x*) ≤ f(x) for all x in the domain over which f(x) is defined, and not just for all x close to x*. Similarly, a point x* is a global maximum of f(x) if f(x*) ≥ f(x) for all x in the domain.

Necessary condition

If a function f(x) is defined in the interval a ≤ x ≤ b and has a relative minimum at x = x*, where a < x* < b, and if the derivative df(x)/dx = f'(x) exists as a finite number at x = x*, then f'(x*) = 0.
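The condition f'(x*) = 0 can be checked symbolically; a small sketch of mine using sympy on an arbitrary cubic:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3 * x**2 + 1            # arbitrary differentiable function

# Solve f'(x) = 0 for the stationary points (candidates for minima/maxima).
stationary = sp.solve(sp.Eq(sp.diff(f, x), 0), x)
print(stationary)                  # [0, 2]
```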

Necessary condition (cont'd)

The theorem does not say that the function necessarily has a minimum or maximum at every point where the derivative is zero. For example, f'(x) = 0 at x = 0 for a function such as f(x) = x^3, yet this point is neither a minimum nor a maximum. In general, a point x* at which f'(x*) = 0 is called a stationary point.

Necessary condition (cont'd)

The theorem does not say what happens if a minimum or a maximum occurs at a point x* where the derivative fails to exist. For example, the one-sided limits of [f(x* + h) − f(x*)]/h take the values m+ or m−, depending on whether h approaches zero through positive or negative values, respectively. Unless the numbers m+ and m− are equal, the derivative f'(x*) does not exist, and the theorem is not applicable.

(Figure 2.2: a function with a kink at x*, where the one-sided slopes m+ and m− differ.)

Sufficient condition

Let f'(x*) = f''(x*) = ... = f(n-1)(x*) = 0, but f(n)(x*) ≠ 0. Then f(x*) is:
- a minimum value of f(x) if f(n)(x*) > 0 and n is even;
- a maximum value of f(x) if f(n)(x*) < 0 and n is even;
- neither a minimum nor a maximum if n is odd.
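A direct sympy sketch (mine) of this higher-order derivative test: differentiate until the first nonvanishing derivative at the stationary point x*, then inspect its order and sign:

```python
import sympy as sp

def classify_stationary_point(f, x, x_star, max_order=10):
    """Higher-order derivative test at a stationary point x_star."""
    for n in range(2, max_order + 1):
        dn = sp.diff(f, x, n).subs(x, x_star)   # f^(n)(x*)
        if dn != 0:
            if n % 2 == 1:
                return 'neither (inflection point)'   # first nonzero derivative has odd order
            return 'minimum' if dn > 0 else 'maximum'
    return 'inconclusive'

x = sp.symbols('x')
print(classify_stationary_point(x**4, x, 0))      # minimum  (n = 4, f(4)(0) = 24 > 0)
print(classify_stationary_point(x**3, x, 0))      # neither (inflection point), n = 3
print(classify_stationary_point(-(x**6), x, 0))   # maximum  (n = 6, f(6)(0) < 0)
```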

Example

Determine the maximum and minimum values of the function f(x) = 12x^5 − 45x^4 + 40x^3 + 5.

Solution: Since f'(x) = 60(x^4 − 3x^3 + 2x^2) = 60x^2(x − 1)(x − 2), we have f'(x) = 0 at x = 0, x = 1, and x = 2. The second derivative is f''(x) = 60(4x^3 − 9x^2 + 4x). At x = 1, f''(x) = −60, and hence x = 1 is a relative maximum; therefore fmax = f(1) = 12. At x = 2, f''(x) = 240, and hence x = 2 is a relative minimum; therefore fmin = f(2) = −11.

Example (cont'd)

Solution, continued: At x = 0, f''(x) = 0, so we must investigate the next derivative: f'''(x) = 60(12x^2 − 18x + 4), which equals 240 at x = 0. Since f'''(x) ≠ 0 at x = 0 and the order n = 3 is odd, x = 0 is neither a maximum nor a minimum; it is an inflection point.
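The worked example can be verified symbolically; a short sketch of mine, reusing sympy, that reproduces the stationary points and their classification:

```python
import sympy as sp

x = sp.symbols('x')
f = 12 * x**5 - 45 * x**4 + 40 * x**3 + 5

fp = sp.diff(f, x)
print(sp.factor(fp))                      # 60*x**2*(x - 1)*(x - 2), up to factor order
print(sp.solve(fp, x))                    # [0, 1, 2]

for x_star in (0, 1, 2):
    f2 = sp.diff(f, x, 2).subs(x, x_star)
    print(x_star, f2, f.subs(x, x_star))  # second derivative and function value
# x = 1: f'' = -60 < 0, relative maximum, f = 12
# x = 2: f'' = 240 > 0, relative minimum, f = -11
# x = 0: f'' = 0; f''' = 240 != 0 with odd order n = 3 -> inflection point
```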