Optimization and Some Traditional Methods


Optimization and Some Traditional Methods
Optimization is the process of finding the best solution out of all feasible solutions.
Example: Optimal Gear-Box → optimization is merged with the traditional principles of machine design, and can provide a more efficient and cost-effective design.
Mathematically, if y = f(x) and f'(x) = 0 at a point x = x*, then either an optimum (minimum or maximum) or an inflection point exists at that point.
An inflection/saddle point is a point that is neither a maximum nor a minimum.

For further investigation of the nature of the point, let n be the order of the first non-zero higher-order derivative at x*. Cases:
(i) If n is odd, x* is an inflection point.
(ii) If n is even: if the value of the n-th derivative is positive → x* is a local minimum point; if the value of the n-th derivative is negative → x* is a local maximum point.
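As a quick numerical illustration of this rule, here is a minimal sketch in Python using SymPy; the test function x**4 is an arbitrary choice for illustration, not taken from the slides. It locates the stationary points of f and classifies each one by the order n of the first non-vanishing derivative:

```python
import sympy as sp

x = sp.symbols('x')
f = x**4                       # try x**3 to see the inflection-point case

# Classify each stationary point by the first non-zero higher-order derivative
for x_star in sp.solve(sp.diff(f, x), x):
    n = 2
    d_n = sp.diff(f, x, n)
    while d_n.subs(x, x_star) == 0:
        n += 1
        d_n = sp.diff(f, x, n)
    value = d_n.subs(x, x_star)
    if n % 2 == 1:
        kind = "inflection point"
    else:
        kind = "local minimum" if value > 0 else "local maximum"
    print(f"x* = {x_star}: n = {n}, value = {value} -> {kind}")
```

For f(x) = x⁴ this reports x* = 0 with n = 4 and a positive fourth derivative, i.e. a local minimum, in agreement with the rule above.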

A Practical Example: Wooden Pointer
Conditions: light in weight; no mechanical breakage; deflection of the pointing end is negligible.
Here d (diameter) and L (length) are the design/decision variables; ρ is a pre-assigned parameter.
(Figure: conical wooden pointer of base diameter d and length L.)

Mathematical Formulation
Minimize mass M = (π d² L ρ)/12 → objective function
subject to
deflection δ ≤ δ_allowable and strength s ≥ s_required → functional or behavior constraints
d_min ≤ d ≤ d_max and L_min ≤ L ≤ L_max → geometric or side constraints (variable ranges)
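A sketch of how this formulation could be handed to a numerical solver is shown below (Python with SciPy's SLSQP). The density, allowable deflection, required strength, variable bounds, and the deflection/strength models are all placeholder assumptions for illustration, not values from the slides:

```python
import numpy as np
from scipy.optimize import minimize

RHO = 800.0           # assumed wood density, kg/m^3 (pre-assigned parameter rho)
DELTA_ALLOW = 1e-3    # assumed allowable tip deflection, m
S_REQUIRED = 1e6      # assumed required strength, Pa

def mass(v):
    d, L = v
    return np.pi * d**2 * L * RHO / 12.0          # objective: mass of the pointer

# Hypothetical behaviour models; replace with the real deflection/strength expressions
def deflection(v):
    d, L = v
    return 4e-10 * L**3 / d**4

def strength(v):
    d, L = v
    return 5e7 * d / L

constraints = [
    {"type": "ineq", "fun": lambda v: DELTA_ALLOW - deflection(v)},   # delta <= delta_allowable
    {"type": "ineq", "fun": lambda v: strength(v) - S_REQUIRED},      # s >= s_required
]
bounds = [(0.005, 0.05), (0.1, 1.0)]   # side constraints: d_min <= d <= d_max, L_min <= L <= L_max

result = minimize(mass, x0=[0.02, 0.5], bounds=bounds,
                  constraints=constraints, method="SLSQP")
print(result.x, result.fun)
```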

Classification of Optimization Problems
1. Depending on the nature of the equations involved → linear or non-linear optimization problems
Linear optimization example: Maximize y = f(x1, x2) = 2x1 + x2 subject to x1 + x2 ≤ 3, 5x1 + 2x2 ≤ 10 and x1, x2 ≥ 0
Non-linear optimization: either the objective function or any of the functional constraints is non-linear
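The linear example above can be solved directly with an LP solver. A minimal sketch using SciPy's linprog (which minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# Maximize 2*x1 + x2  <=>  minimize -(2*x1 + x2)
result = linprog(c=[-2, -1],
                 A_ub=[[1, 1], [5, 2]],      # x1 + x2 <= 3,  5*x1 + 2*x2 <= 10
                 b_ub=[3, 10],
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)                 # optimum x = (4/3, 5/3), y = 13/3
```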

2. Based on the existence of any functional constraint
Unconstrained optimization problem → no functional constraint
Constrained optimization problem → at least one functional constraint is present
3. Depending on the nature of the design variables
Integer programming problem → all design variables take integer values
Real-valued programming problem → all design variables take real values
Mixed-integer programming problem → some of the variables take integer values and the remaining variables take real values

4. Static vs. Dynamic Optimization Problems
Static optimization problem → the design variables a and b do not vary along the length L (a member of length L under load P with a fixed a × b cross-section)
Dynamic optimization problem → a and b vary along the length, i.e., a(x) and b(x) are functions of the position x

Principle of Optimization
Constrained optimization problem:
Minimize y = f(x1, x2) = (x1 - a)² + (x2 - b)²
subject to gi(x1, x2) ≤ Ci, i = 1, 2, 3, …, n and x1 ≥ x1_min, x2 ≥ x2_min

Principle of Optimization
Free points: points lying inside the feasible zone are called free points.
Bound points: points lying on the boundary of the feasible zone are called bound points.

Duality Principle
Minimize y = f(x) subject to x ≥ 0.0 is equivalent to maximize y = -f(x) subject to x ≥ 0.0.
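For example, minimizing f(x) = (x - 2)² subject to x ≥ 0.0 and maximizing -f(x) = -(x - 2)² subject to x ≥ 0.0 both give the same optimum point x* = 2, where f(x*) = 0.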

Conventional Optimization Methods
Specialized algorithms: Integer programming, Geometric programming, Dynamic programming
Linear programming methods: Graphical method, Simplex method
Non-linear programming methods (classified below)

Single-variable problems: Analytical method; Numerical methods (Exhaustive search, Dichotomous search, Fibonacci method, Golden section method)
Multi-variable problems: Gradient-based methods (e.g., Steepest descent method, etc.); Direct search methods (Random search methods, Pattern search methods)

Exhaustive Search Method
Let us consider an optimization problem as given below: Maximize y = f(x) subject to x_min ≤ x ≤ x_max.
Let the range of x, that is, (x_max - x_min), be divided into n equal parts, so that the small change in x is Δx = (x_max - x_min)/n.

Step 1: Set x1 = x_min, x2 = x1 + Δx, x3 = x2 + Δx.
Step 2: Calculate the function values f(x1), f(x2), f(x3) and check for the maximum point. If f(x1) ≤ f(x2) ≥ f(x3), the maximum point lies in the range (x1, x3) and we terminate the program; else set x1 = x2 (previous), x2 = x3 (previous), x3 = x2 (present) + Δx.
Step 3: Check whether x3 exceeds x_max. If x3 does not exceed x_max, go to Step 2; else conclude that the maximum does not lie in the range (x_min, x_max).
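A minimal Python sketch of this procedure is shown below (the test function and the number of parts n are illustrative assumptions). It returns the bracket (x1, x3) that contains the maximum, or None if no interior maximum is found:

```python
def exhaustive_search_max(f, x_min, x_max, n=1000):
    """Bracket the maximum of f on [x_min, x_max] using n equal subdivisions."""
    dx = (x_max - x_min) / n
    x1, x2, x3 = x_min, x_min + dx, x_min + 2 * dx
    while x3 <= x_max:
        if f(x1) <= f(x2) >= f(x3):       # maximum bracketed in (x1, x3)
            return x1, x3
        x1, x2, x3 = x2, x3, x3 + dx      # shift the three-point pattern by dx
    return None                           # maximum not in (x_min, x_max)

# Example: f(x) = -(x - 2)^2 has its maximum at x = 2
print(exhaustive_search_max(lambda x: -(x - 2.0)**2, 0.0, 5.0))
```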

Random Walk Method
A direct search method, where the search is carried out using only the objective function values; no derivative information is required.
The new solution X_{i+1} is determined from the previous solution X_i as follows:
X_{i+1} = X_i + λ u_i
where X = (x1, x2, ……, xm)^T, λ is the step length, and u_i is a random unit direction vector obtained by normalizing a vector of random numbers (r1, r2, ….., rn) lying between -1.0 and 1.0. Note: n = m.

Step 1: Set initial values of λ, ε (permissible minimum value of λ) and N (maximum number of iterations to be tried). Start with an initial solution X1, created at random, and determine the function value f1 = f(X1).
Step 2: Generate a set of n random numbers lying between -1.0 and 1.0 and calculate the unit vector u1.
Step 3: Determine the function value f2 = f(X2) = f(X1 + λ u1).
Step 4: If f2 < f1, set X1 = X1 + λ u1 and f1 = f2, and repeat Steps 2 through 4; else repeat Steps 2 through 4, up to the maximum number of iterations N.
Step 5: If a better point X_{i+1} is not obtained after running the program for N iterations, reduce λ to 0.5 λ.
Step 6: Is the modified (new) λ < ε? If no, go to Step 2; else declare X* = X1, f* = f1, and terminate the program.
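The steps above can be sketched in Python as follows (the quadratic test function, starting point, and parameter values are illustrative assumptions; the logic follows the step-length-halving scheme of Steps 1 to 6):

```python
import numpy as np

def random_walk_min(f, x_start, lam=1.0, eps=1e-4, N=100, seed=0):
    """Random walk minimization with step-length reduction."""
    rng = np.random.default_rng(seed)
    x1 = np.asarray(x_start, dtype=float)
    f1 = f(x1)
    while lam >= eps:                                 # Step 6: stop when lambda < eps
        improved = False
        for _ in range(N):                            # at most N trials per step length
            r = rng.uniform(-1.0, 1.0, size=x1.size)  # Step 2: random numbers in [-1, 1]
            u = r / np.linalg.norm(r)                 # unit random direction u
            x2 = x1 + lam * u
            f2 = f(x2)                                # Step 3
            if f2 < f1:                               # Step 4: accept the better point
                x1, f1, improved = x2, f2, True
        if not improved:                              # Step 5: no better point after N trials
            lam *= 0.5
    return x1, f1

print(random_walk_min(lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2, [5.0, 5.0]))
```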

Steepest Descent Method
It is a gradient-based method and is not applicable to a discontinuous function.
Let us consider a function y = f(X) = f(x1, x2, ……., xm). The gradient of the function is ∇f = (∂f/∂x1, ∂f/∂x2, ……, ∂f/∂xm)^T.
Note: the gradient direction is the direction of steepest ascent, so the search for a minimum proceeds along the negative gradient.

Principle of the Method
Start with an initial random solution X1 and move along the search direction according to the rule Xi+1 = Xi + λi* Si, where the search direction is Si = -∇f(Xi) and λi* is the optimal step length along Si.
Termination criteria: the algorithm is stopped when the rate of change of the function value between successive iterations (or the magnitude of the gradient) becomes negligibly small.
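A minimal sketch of the method in Python is given below. It approximates the gradient by finite differences and, for simplicity, uses a fixed step length instead of the optimal step length λi* mentioned above (a line search could be substituted); the test function is an illustrative assumption:

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x)) / h
    return g

def steepest_descent(f, x, lam=0.1, eps=1e-8, max_iter=10_000):
    """Steepest descent with a fixed step length lam."""
    for _ in range(max_iter):
        s = -grad(f, x)                         # search direction S_i = -grad f(X_i)
        x_new = x + lam * s
        if abs(f(x_new) - f(x)) <= eps:         # terminate on a negligible change in f
            return x_new
        x = x_new
    return x

print(steepest_descent(lambda v: (v[0] - 1.0)**2 + 2.0 * (v[1] + 3.0)**2,
                       np.array([0.0, 0.0])))
```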

Advantages: the algorithm is simple, easy to understand and implement, and converges faster than direct search methods.
Limitation of the algorithm: there is a chance of its solutions being trapped in a local minimum.

Drawbacks of Traditional Optimization Methods
The final solution of an optimization problem depends on the randomly chosen initial solution; if the initial solution lies in a local basin, the final solution gets stuck at a local optimum.
Gradient-based methods cannot be used for discontinuous objective functions, and their solutions may be trapped in local minima.
These methods may not be suitable for parallel computing.

Drawbacks of Traditional Optimization Methods (contd.)
Discrete/integer variables are difficult to handle using the traditional methods of optimization.
A particular traditional method of optimization may not be suitable for solving a wide variety of problems.