
Classification of optimization problems




1 Classification of optimization problems
Classification based on:
- Constraints: constrained optimization problems; unconstrained optimization problems
- Nature of the design variables: static optimization problems; dynamic optimization problems
In static optimization problems the design variables are constants, whereas in dynamic optimization problems each design variable is a function of one or more parameters. Prof. Sweta Shah

2 Classification of optimization problems
Classification based on:
- Physical structure of the problem: optimal control problems; non-optimal control problems
- Nature of the equations involved: linear programming problems; nonlinear programming problems; geometric programming problems; quadratic programming problems

3 Classification of optimization problems
Classification based on:
- Permissible values of the design variables: integer programming problems; real-valued programming problems
- Deterministic nature of the variables: deterministic programming problems; stochastic programming problems

4 Classification of optimization problems
Classification based on:
- Separability of the functions: separable programming problems; non-separable programming problems
- Number of objective functions: single-objective programming problems; multiobjective programming problems

5 Geometric Programming
A geometric programming problem (GP) is one in which the objective function and constraints are expressed as posynomials in X.
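A posynomial of this kind can be evaluated directly. The sketch below uses illustrative coefficients and exponents (not from the slides); the defining requirement is that every coefficient cᵢ is positive and every variable xⱼ is positive.

```python
# Evaluating a posynomial: a sum of terms c_i * prod_j x[j] ** a[i][j]
# with positive coefficients c_i and positive variables x_j.
# The particular coefficients and exponents below are illustrative only.

def posynomial(x, coeffs, exponents):
    """Return sum_i coeffs[i] * prod_j x[j] ** exponents[i][j]."""
    total = 0.0
    for c, a in zip(coeffs, exponents):
        term = c
        for xj, aij in zip(x, a):
            term *= xj ** aij
        total += term
    return total

# f(x1, x2) = 2*x1*x2**2 + 3/x1 evaluated at (1, 2): 2*4 + 3 = 11
print(posynomial([1.0, 2.0], coeffs=[2.0, 3.0], exponents=[[1, 2], [-1, 0]]))
```

Note that negative exponents are allowed (a posynomial need not be a polynomial); only the coefficients must be strictly positive.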

6 Posynomial
A posynomial is a function of the form
   f(X) = Σᵢ cᵢ x₁^(aᵢ₁) x₂^(aᵢ₂) ⋯ xₙ^(aᵢₙ),   cᵢ > 0, xⱼ > 0
that is, a sum of power-product terms with strictly positive coefficients, where the exponents aᵢⱼ may be any real numbers.

7 Quadratic Programming Problem
A quadratic programming problem is a nonlinear programming problem with a quadratic objective function and linear constraints. It is usually formulated as follows:
   Minimize F(X) = c + Σᵢ₌₁ⁿ qᵢxᵢ + Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ Qᵢⱼxᵢxⱼ
subject to
   Σᵢ₌₁ⁿ aᵢⱼxᵢ = bⱼ,   j = 1, 2, …, m
   xᵢ ≥ 0,   i = 1, 2, …, n
where c, qᵢ, Qᵢⱼ, aᵢⱼ, and bⱼ are constants.
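A quadratic objective of this form can be written down directly in code. A minimal sketch (the constants c, q, Q below are illustrative, and the linear constraints are omitted for brevity):

```python
# Evaluating the QP objective F(X) = c + sum_i q[i]*x[i]
#                                      + sum_i sum_j Q[i][j]*x[i]*x[j].
# The constants c, q, Q below are illustrative, not from the slides.

def qp_objective(x, c, q, Q):
    n = len(x)
    val = c
    for i in range(n):
        val += q[i] * x[i]
        for j in range(n):
            val += Q[i][j] * x[i] * x[j]
    return val

# one-variable example: F(x) = x**2 - 2*x has its minimum at x = 1;
# a coarse grid search over [0, 3) recovers it
best_k = min(range(300), key=lambda k: qp_objective([k / 100], 0.0, [-2.0], [[1.0]]))
print(best_k / 100)  # 1.0
```

Real quadratic programs are solved with dedicated methods (e.g. active-set or interior-point algorithms); the grid search here only illustrates the shape of the objective.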

8 Optimal Control Problem
An optimal control (OC) problem is a mathematical programming problem involving a number of stages, where each stage evolves from the preceding stage in a prescribed manner.

9 It is usually described by two types of variables: the control (design) variables and the state variables. The control variables define the system and govern the evolution of the system from one stage to the next, and the state variables describe the behaviour or status of the system at any stage.

10 Optimal Control Problem
The problem is to find a set of control or design variables such that the total objective function (also known as the performance index) over all stages is minimized, subject to a set of constraints on the control and state variables. An OC problem can be stated as follows:
   Find X which minimizes f(X) = Σᵢ₌₁ˡ fᵢ(xᵢ, yᵢ)
subject to the constraints
   qᵢ(xᵢ, yᵢ) + yᵢ = yᵢ₊₁,   i = 1, 2, …, l
   gⱼ(xⱼ) ≤ 0,   j = 1, 2, …, l
   hₖ(yₖ) ≤ 0,   k = 1, 2, …, l

11 where xᵢ is the ith control variable, yᵢ is the ith state variable, and fᵢ is the contribution of the ith stage to the total objective function; gⱼ, hₖ, and qᵢ are functions of xⱼ, yₖ, and (xᵢ, yᵢ), respectively, and l is the total number of stages.
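The stagewise structure can be made concrete with a toy problem. The dynamics y_{i+1} = y_i + x_i and the stage cost x_i² + y_i² below are illustrative assumptions, and the problem is solved by brute-force enumeration of the control sequence rather than by a real optimal-control method.

```python
from itertools import product

# Toy multi-stage problem: at each stage i we choose a control x_i from a
# small discrete set; the state evolves as y_{i+1} = y_i + x_i and each
# stage contributes f_i = x_i**2 + y_i**2 to the performance index.
# Dynamics, costs, and the control set are illustrative assumptions.

CONTROLS = (-1, 0, 1)
STAGES = 3

def performance_index(controls, y0=2):
    y, cost = y0, 0
    for x in controls:
        cost += x * x + y * y  # stage contribution f_i(x_i, y_i)
        y = y + x              # state equation y_{i+1} = y_i + x_i
    return cost

best = min(product(CONTROLS, repeat=STAGES), key=performance_index)
print(best, performance_index(best))
```

The enumeration grows exponentially with the number of stages; dynamic programming exploits the stagewise structure to avoid that blow-up.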

12 Integer Programming Problem
If some or all of the design variables x₁, x₂, …, xₙ of an optimization problem are restricted to take on only integer (or discrete) values, the problem is called an integer programming problem. If all the design variables are permitted to take any real value, the optimization problem is called a real-valued programming problem.
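The effect of the integer restriction can be seen by exhaustively searching the integer points in a box. This is only practical for tiny problems (real integer programs use branch-and-bound or cutting planes), but it makes the definition concrete; the objective and bounds below are illustrative.

```python
from itertools import product

# Brute-force integer programming over a small box of integer points.
# Illustrative only: real IP solvers use branch-and-bound, not enumeration.

def solve_integer_program(objective, bounds, feasible=lambda x: True):
    best_x, best_val = None, float("inf")
    for x in product(*(range(lo, hi + 1) for lo, hi in bounds)):
        if feasible(x):
            val = objective(x)
            if val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

# the continuous minimizer is (2.6, 1.2); restricting to integers moves it to (3, 1)
x_int, _ = solve_integer_program(
    lambda x: (x[0] - 2.6) ** 2 + (x[1] - 1.2) ** 2,
    bounds=[(0, 5), (0, 5)],
)
print(x_int)  # (3, 1)
```

Note that the integer optimum is not always the rounding of the continuous optimum once constraints are present, which is what makes integer programming hard.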

13 Stochastic Programming Problem
A stochastic programming problem is an optimization problem in which some or all of the parameters (design variables and/or preassigned parameters) are probabilistic (nondeterministic or stochastic). In other words, stochastic programming deals with the solution of optimization problems in which some of the variables are described by probability distributions.
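One common way to handle such probabilistic parameters is sample-average approximation (SAA): replace the expected cost with an average over sampled scenarios and minimize that. A minimal sketch under illustrative assumptions (quadratic cost and Gaussian uncertain demand, neither taken from the slides):

```python
import random

# Sample-average approximation: the uncertain demand D is described by a
# probability distribution; we minimize the average cost over sampled
# scenarios. Cost function and distribution are illustrative assumptions.

def sample_average_cost(x, scenarios):
    """Average of (x - d)**2 over the sampled demand scenarios."""
    return sum((x - d) ** 2 for d in scenarios) / len(scenarios)

random.seed(0)
scenarios = [random.gauss(5.0, 1.0) for _ in range(10_000)]

# the SAA minimizer of this quadratic cost is the sample mean; a coarse
# grid search over [3, 7] recovers it to within the grid step
best_x = min((k / 100 for k in range(300, 701)),
             key=lambda x: sample_average_cost(x, scenarios))
print(best_x)
```

For this quadratic cost the minimizer is exactly the sample mean, so the grid search lands on the grid point nearest the mean of the drawn scenarios.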

14 Separable Programming Problem
A function f(X) is said to be separable if it can be expressed as the sum of n single-variable functions f₁(x₁), f₂(x₂), …, fₙ(xₙ), that is,
   f(X) = Σᵢ₌₁ⁿ fᵢ(xᵢ)
A separable programming problem is one in which the objective function and the constraints are separable, and can be expressed in standard form as:
   Find X which minimizes f(X) = Σᵢ₌₁ⁿ fᵢ(xᵢ)
subject to
   gⱼ(X) = Σᵢ₌₁ⁿ gᵢⱼ(xᵢ) ≤ bⱼ,   j = 1, 2, …, m
where bⱼ is a constant.
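Separability is what lets each term be handled on its own: absent coupling constraints, minimizing the sum reduces to minimizing each single-variable term independently. A sketch with illustrative quadratic terms:

```python
# f(X) = f1(x1) + f2(x2): separable, so each term can be minimized on its
# own when there are no coupling constraints. The terms are illustrative.

def separable_value(x, parts):
    """Evaluate f(X) = sum_i parts[i](x[i])."""
    return sum(f(xi) for f, xi in zip(parts, x))

parts = [lambda t: (t - 1.0) ** 2, lambda t: (t - 3.0) ** 2]

# minimize each term independently over a grid, then assemble the minimizer
minimizers = [min((k / 10 for k in range(51)), key=f) for f in parts]
print(minimizers, separable_value(minimizers, parts))  # [1.0, 3.0] 0.0
```

With separable constraints present, the same structure is exploited by piecewise-linear approximation of each single-variable term.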

15 Multiobjective Programming Problem
A multiobjective programming problem can be stated as follows:
   Find X which minimizes f₁(X), f₂(X), …, fₖ(X)
subject to
   gⱼ(X) ≤ 0,   j = 1, 2, …, m
where f₁, f₂, …, fₖ denote the objective functions to be minimized simultaneously.
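A standard (though not the only) way to attack such a problem is weighted-sum scalarization: combine the k objectives into one with nonnegative weights and solve the resulting single-objective problem. The two quadratic objectives and the equal weights below are illustrative assumptions:

```python
# Weighted-sum scalarization of a multiobjective problem.
# The objectives and weights below are illustrative assumptions.

def weighted_sum(x, objectives, weights):
    """Scalarize f1, ..., fk into a single objective sum_i w_i * f_i(x)."""
    return sum(w * f(x) for f, w in zip(objectives, weights))

# two conflicting objectives with minima at x = 0 and x = 2
objectives = [lambda x: x ** 2, lambda x: (x - 2.0) ** 2]

# with equal weights the scalarized optimum is the compromise x = 1
best_x = min((k / 100 for k in range(201)),
             key=lambda x: weighted_sum(x, objectives, [0.5, 0.5]))
print(best_x)  # 1.0
```

Varying the weights traces out different compromise (Pareto-optimal) solutions, which is why the weights are usually treated as a modelling choice rather than fixed constants.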

16 CH.2. Classical optimization techniques
Single variable optimization
These methods are analytical and make use of the techniques of differential calculus in locating the optimum points; they are useful in finding the optimum solutions of continuous and differentiable functions. Since some practical problems involve objective functions that are not continuous and/or differentiable, the classical optimization techniques have limited scope in practical applications.

17 Single variable optimization
A function of one variable f(x) has a relative or local minimum at x = x* if f(x*) ≤ f(x* + h) for all sufficiently small positive and negative values of h. A point x* is called a relative or local maximum if f(x*) ≥ f(x* + h) for all values of h sufficiently close to zero.
[Figure: a curve with local minima and maxima and the global minimum marked]

18 Classical optimization techniques
Single variable optimization
A function f(x) is said to have a global or absolute minimum at x* if f(x*) ≤ f(x) for all x, and not just for all x close to x*, in the domain over which f(x) is defined. Similarly, a point x* will be a global maximum of f(x) if f(x*) ≥ f(x) for all x in the domain.

19 Necessary condition
If a function f(x) is defined in the interval a ≤ x ≤ b and has a relative minimum at x = x*, where a < x* < b, and if the derivative df(x)/dx = f'(x) exists as a finite number at x = x*, then f'(x*) = 0.

20 Necessary condition
The theorem does not say that the function necessarily will have a minimum or maximum at every point where the derivative is zero. For example, f'(x) = 0 at x = 0 for the function shown in the figure, yet this point is neither a minimum nor a maximum. In general, a point x* at which f'(x*) = 0 is called a stationary point.

21 Necessary condition
The theorem does not say what happens if a minimum or a maximum occurs at a point x* where the derivative fails to exist. For example, in Figure 2.2,
   lim(h→0⁺) [f(x* + h) − f(x*)]/h = m⁺   and   lim(h→0⁻) [f(x* + h) − f(x*)]/h = m⁻
depending on whether h approaches zero through positive or negative values, respectively. Unless the numbers m⁺ and m⁻ are equal, the derivative f'(x*) does not exist. If f'(x*) does not exist, the theorem is not applicable.
[Figure 2.2: a point where the two one-sided derivatives differ]

22 Sufficient condition
Let f'(x*) = f''(x*) = ⋯ = f⁽ⁿ⁻¹⁾(x*) = 0, but f⁽ⁿ⁾(x*) ≠ 0. Then f(x*) is:
- a minimum value of f(x) if f⁽ⁿ⁾(x*) > 0 and n is even;
- a maximum value of f(x) if f⁽ⁿ⁾(x*) < 0 and n is even;
- neither a minimum nor a maximum if n is odd.
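For polynomials this higher-order derivative test is easy to automate, since derivatives can be taken exactly from the coefficient list. A sketch (the helper names are my own, not from the slides):

```python
# Higher-order derivative test at a stationary point of a polynomial
# given by its coefficients [a0, a1, a2, ...] for a0 + a1*x + a2*x**2 + ...
# Assumes the function is not locally constant at x_star.

def poly_deriv(coeffs):
    """Coefficients of the derivative polynomial."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def poly_eval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def classify_stationary_point(coeffs, x_star, tol=1e-9):
    d = poly_deriv(coeffs)  # first derivative; zero at a stationary point
    n = 1
    while d and abs(poly_eval(d, x_star)) < tol:
        d = poly_deriv(d)   # move on to the next derivative
        n += 1
    # the nth derivative is the first nonzero one at x_star
    if n % 2 == 1:
        return "inflection"
    return "minimum" if poly_eval(d, x_star) > 0 else "maximum"

print(classify_stationary_point([0, 0, 1], 0.0))     # f = x^2  -> minimum
print(classify_stationary_point([0, 0, 0, 1], 0.0))  # f = x^3  -> inflection
```

The even/odd branch mirrors the condition on the slide: an even-order first nonzero derivative gives an extremum, an odd-order one gives neither.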

23 Example
Determine the maximum and minimum values of the function
   f(x) = 12x⁵ − 45x⁴ + 40x³ + 5
Solution: Since f'(x) = 60(x⁴ − 3x³ + 2x²) = 60x²(x − 1)(x − 2), f'(x) = 0 at x = 0, x = 1, and x = 2. The second derivative is
   f''(x) = 60(4x³ − 9x² + 4x)
At x = 1, f''(x) = −60, and hence x = 1 is a relative maximum. Therefore, f_max = f(x = 1) = 12. At x = 2, f''(x) = 240, and hence x = 2 is a relative minimum. Therefore, f_min = f(x = 2) = −11.

24 Example
Solution (continued): At x = 0, f''(x) = 0, and hence we must investigate the next derivative:
   f'''(x) = 60(12x² − 18x + 4) = 240 at x = 0
Since f'''(x) ≠ 0 at x = 0, the first nonzero derivative is of odd order, so x = 0 is neither a maximum nor a minimum; it is an inflection point.

