
By Liyun Zhang, Aug. 9th, 2012

* Methods for Single-Variable Unconstrained Optimization:
1. Quadratic Interpolation Method
2. Golden Section Method
* Methods for Multivariable Constrained Optimization:
1. Lagrange Multiplier Method
2. Exterior-Point Penalty Method

* Solves: unconstrained optimization problems with a single variable and a unimodal function
* Advantage and disadvantage: converges fast but is UNRELIABLE (as a later example shows)

Target function: f(x). Select three points x0, x1, x2 on f(x) such that x0 < x1 < x2 and f(x1) = min(f(x0), f(x1), f(x2)). There must exist a quadratic function q(x) = a0 + a1*x + a2*x^2 that interpolates these points:
q(x0) = a0 + a1*x0 + a2*x0^2 = f(x0)
q(x1) = a0 + a1*x1 + a2*x1^2 = f(x1)
q(x2) = a0 + a1*x2 + a2*x2^2 = f(x2)
The minimum of q(x) lies at its vertex x3 = -a1/(2*a2); solving the three equations for a1 and a2 gives
x3 = [f(x0)(x1^2-x2^2) + f(x1)(x2^2-x0^2) + f(x2)(x0^2-x1^2)] / (2[f(x0)(x1-x2) + f(x1)(x2-x0) + f(x2)(x0-x1)])
which is the trial point used in the iteration below.

Initialization: Determine x0, x1 and x2 which are known to contain the minimum of the function f(x), and compute the first trial point x3 (the parabola vertex given above).
Step 1: If |x3 - x1| < ε (a sufficiently small number), then the minimum occurs at xo = x3; stop iterating, else go to Step 2.
Step 2: Compare x3 and x1. If x3 < x1, go to Step 3, else go to Step 4.
Step 3: Evaluate f(x3) and f(x1).
If f(x3) <= f(x1), then determine the new x0, x1, x2 as follows, and go to Step 5:
x0 = x0, x2 = x1, x1 = x3
If f(x3) > f(x1), then determine the new x0, x1, x2 as follows, and go to Step 5:
x0 = x3, x1 = x1, x2 = x2

Step 4: Evaluate f(x3) and f(x1).
If f(x3) <= f(x1), then determine the new x0, x1, x2 as follows, and go to Step 5:
x0 = x1, x1 = x3, x2 = x2
If f(x3) > f(x1), then determine the new x0, x1, x2 as follows, and go to Step 5:
x0 = x0, x1 = x1, x2 = x3
Step 5: Finish shrinking the old interval, start a new round of iteration, and compute the new x3. A code sketch follows.
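The original slide linked to a code illustration that is not preserved in this transcript. Below is a minimal MATLAB sketch of what an Opt_Quadratic routine with the interface used in Example 1 might look like; the function name and argument list come from the example call, but the body is an assumption.

```matlab
function [xo, fo] = Opt_Quadratic(f, ab, TolX, TolFun, MaxIter)
% Sketch of successive quadratic (parabolic) interpolation search.
% f: function handle; ab = [a b]: initial interval; assumes the bracket
% satisfies f(x1) <= min(f(x0), f(x2)) so the parabola opens upward.
x0 = ab(1); x2 = ab(2); x1 = (x0 + x2)/2;        % three starting points
f0 = f(x0); f1 = f(x1); f2 = f(x2);
for k = 1:MaxIter
    % vertex of the parabola through (x0,f0), (x1,f1), (x2,f2)
    num = f0*(x1^2 - x2^2) + f1*(x2^2 - x0^2) + f2*(x0^2 - x1^2);
    den = 2*(f0*(x1 - x2) + f1*(x2 - x0) + f2*(x0 - x1));
    x3 = num/den;
    f3 = f(x3);
    if abs(x3 - x1) < TolX || abs(f3 - f1) < TolFun, break; end
    if x3 < x1                                   % Step 3: shrink bracket
        if f3 <= f1, x2 = x1; f2 = f1; x1 = x3; f1 = f3;
        else,        x0 = x3; f0 = f3; end
    else                                         % Step 4
        if f3 <= f1, x0 = x1; f0 = f1; x1 = x3; f1 = f3;
        else,        x2 = x3; f2 = f3; end
    end
end
xo = x1; fo = f1;                                % best point found
end
```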

Example 1. Find the minimum point and value of the function f(x) = (x^2-2)^2/2 - 1.
Solution:
f1001 = inline('(x.*x-2).^2/2-1','x');
a = 0; b = 5; TolX = 1e-5; TolFun = 1e-8; MaxIter = 100;
[xoq, foq] = Opt_Quadratic(f1001, [a,b], TolX, TolFun, MaxIter)
Analytically, the minimum on [0, 5] is at x = sqrt(2) ≈ 1.4142 with f = -1, which xoq and foq should approximate.

The fminbnd function attempts to find the optimum of a one-variable function within a fixed interval. The most common syntaxes are:
[x, f] = fminbnd(fun, x1, x2)
[x, f] = fminbnd(fun, x1, x2, options)
In Ex. 1 we have x1 = 0, x2 = 5, and fminbnd returns essentially the same answer (x ≈ 1.4142, f ≈ -1); a runnable version of the call follows.
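For completeness, here is the Example 1 call written with an anonymous function instead of the older inline syntax:

```matlab
f1001 = @(x) (x.*x - 2).^2/2 - 1;   % objective from Example 1
[x, f] = fminbnd(f1001, 0, 5)       % expect x ≈ sqrt(2), f ≈ -1
```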

* Solves: unconstrained optimization problems with a single variable and a unimodal function
* Advantages:
1. more reliable than the quadratic interpolation method
2. still converges fast, since:
a. it relies on interval reduction instead of derivative calculations.
b. the rate of convergence is fixed, which avoids retaining the wider subinterval many times in a row and slowing convergence.
c. the interior points are selected to reduce the bounds as quickly as possible.

Select the intermediate points x3 and x4 so that the subinterval lengths satisfy
c/a = a/b and c/(b-c) = a/b
which gives k = a/b = (1+√5)/2 (the golden ratio), and
x3 = x1 + 1/(k+1)*(x2-x1)
x4 = x1 + k/(k+1)*(x2-x1)
A quick numeric check follows.
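As a sanity check of the point placement on a unit interval (notation taken from the slide above):

```matlab
k  = (1 + sqrt(5))/2;            % golden ratio ≈ 1.6180
x1 = 0; x2 = 1;                  % unit interval for illustration
x3 = x1 + 1/(k+1)*(x2 - x1)      % ≈ 0.3820
x4 = x1 + k/(k+1)*(x2 - x1)      % ≈ 0.6180
```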

Initialization: Determine x1 and x2 which are known to contain the minimum of the function f(x).
Step 1: Determine two intermediate points x3 and x4 such that
x3 = x1 + (1-r)*h
x4 = x1 + r*h
where r = k/(k+1) = (√5-1)/2 and h = x2 - x1.
Step 2: If x4 - x3 < ε (a sufficiently small number), then take the minimum at xo = x3 (if f(x3) < f(x4)) or xo = x4 (otherwise) and stop iterating, else go to Step 3.
Step 3: Evaluate f(x3) and f(x4).
If f(x3) < f(x4), then determine the new x1 and x2 as follows, and go to Step 4:
x1 = x1, x2 = x4
If f(x3) > f(x4), then determine the new x1 and x2 as follows, and go to Step 4:
x1 = x3, x2 = x2
Step 4: Finish shrinking the old interval, start a new round of iteration, and compute the new x3, x4. A code sketch follows.
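As with the quadratic method, the linked code illustration is not preserved; this is a minimal MATLAB sketch of an Opt_Golden routine matching the interface used in Example 2 (the body is an assumption):

```matlab
function [xo, fo] = Opt_Golden(f, a, b, TolX, TolFun, MaxIter)
% Sketch of golden section search for the minimum of f on [a, b].
r  = (sqrt(5) - 1)/2;                  % ≈ 0.618, so r^2 = 1 - r
x3 = a + (1 - r)*(b - a);  f3 = f(x3); % interior points
x4 = a + r*(b - a);        f4 = f(x4);
for k = 1:MaxIter
    if abs(x4 - x3) < TolX || abs(f4 - f3) < TolFun, break; end
    if f3 < f4                         % minimum lies in [a, x4]
        b  = x4;  x4 = x3;  f4 = f3;   % old x3 becomes the new x4
        x3 = a + (1 - r)*(b - a);  f3 = f(x3);
    else                               % minimum lies in [x3, b]
        a  = x3;  x3 = x4;  f3 = f4;   % old x4 becomes the new x3
        x4 = a + r*(b - a);  f4 = f(x4);
    end
end
if f3 < f4, xo = x3; fo = f3; else, xo = x4; fo = f4; end
end
```

Note the design choice that makes golden section cheap: because r^2 = 1 - r, one of the two interior points of the shrunken interval is always the surviving point from the previous iteration, so each iteration costs only one new function evaluation.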

Example 2. Find the minimum point and value of the function f(x) = x - (x^2-2)^3/2, x ∈ [0,4].
Solution:
f1002 = inline('x-(x.*x-2).^3/2','x');
a = 0; b = 4; TolX = 1e-4; TolFun = 1e-4; MaxIter = 100;
[xog, fog] = Opt_Golden(f1002, a, b, TolX, TolFun, MaxIter)
[xoq, foq] = Opt_Quadratic(f1002, [a,b], TolX, TolFun, MaxIter)
The minimum of f on [0,4] is at the right endpoint x = 4, where f(4) = 4 - 14^3/2 = -1368 ≈ -1.368e+003; golden section finds xog ≈ 4, while quadratic interpolation may settle on a quite different interior point here, which is the unreliability mentioned earlier.

* Applies primarily to: optimization problems with multivariable functions and equality constraints
* Mathematical background: Consider the function z = f(x,y) subject to ϕ(x,y) = 0 (or y = g(x)), and suppose x0 is the minimum point of z = f(x, g(x)); then x0 is a solution of the following equation system:
f'x + λϕ'x = 0
f'y + λϕ'y = 0
ϕ(x,y) = 0
More generally, we can introduce the transformed function (the Lagrangian)
L(x1, x2, …, xm; λ1, λ2, …, λn) = f(x1, x2, …, xm) + Σk λk·ϕk(x1, x2, …, xm)
and find all x satisfying the equation system
f'xi + Σk λk·ϕk'xi = 0 (i = 1, 2, …, m)
ϕk(x1, x2, …, xm) = 0 (k = 1, 2, …, n)
then substitute each solution {x} back into f(x1, x2, …, xm) and compare the values.

Example 3. Find the minimum point and value of the function f(x1,x2) = x1 + x2, s.t. x1^2 + x2^2 = 2.
Solution (see the code sketch below):
>> [xo,yo,fo]=Lag()
xo = -1, 1
yo = -1, 1
fo = -2, 2
so the minimum is -2 at (-1,-1).
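The Lag() code was linked rather than shown in the original; a sketch of what a symbolic implementation of this example might look like, assuming the Symbolic Math Toolbox:

```matlab
function [xo, yo, fo] = Lag()
% Hypothetical sketch of Lag() from Example 3: solve the Lagrange
% conditions for f = x + y subject to x^2 + y^2 - 2 = 0.
syms x y lam
f   = x + y;
phi = x^2 + y^2 - 2;
L   = f + lam*phi;                           % Lagrangian
S   = solve([diff(L,x)==0, diff(L,y)==0, phi==0], [x, y, lam]);
xo  = double(S.x);                           % candidate x-coordinates
yo  = double(S.y);                           % candidate y-coordinates
fo  = double(subs(f, {x, y}, {S.x, S.y}));   % f at each candidate
end
```

The solver returns both stationary points, (1,1) and (-1,-1); comparing fo picks out the minimum, as the method on the previous slide prescribes.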

* Applies primarily to: optimization problems with multivariable functions and equality or inequality constraints, where the initial guess point lies outside the feasible domain.
* Advantage: works well when the initial guess point is outside the domain, so the computation is subject to fewer restrictions compared with the interior-point penalty method.

Consider the function f(x) subject to a series of equality or inequality constraints Ci(x) >= 0 (i = 1, 2, …, m). To find the minimum point of f(x), we combine these conditions into a single function:
F(x) = f(x) + c·Σ g(Ci(x))
penalty coefficients: ck = c1·p^(k-1), where p >= 2
penalty function: g(Ci(x)) = min(0, Ci(x))^2
Note that the optimum point of the unconstrained function will eventually converge to the original constrained one once c·Σ g(Ci(x)) <= ε. One way to search for the optimum of the unconstrained function is Newton's method, which relies on derivative calculations.

Initialization: Set the initial guess point x0, c1, p (p >= 2), and accuracy ε > 0, and let k = 1.
Step 1: Combine the given conditions into a new function
F(x) = f(x) + ck·(Σ hp(x)^2 + Σ min(0, gs(x))^2)
where hp(x) = 0 (p = 1, 2, …, m) are the equality constraints and gs(x) >= 0 (s = 1, 2, …, n) the inequality constraints.
Step 2: Search for the minimum of F(x), starting from the point x(k-1), with unconstrained optimization methods; call the result xk.
Step 3: Let S(xk) = ck·(Σ hp(xk)^2 + Σ min(0, gs(xk))^2). If S(xk) <= ε, stop iterating; else let k = k+1, increase the penalty coefficient (ck = c1·p^(k-1)), and go to Step 2. A code sketch follows.
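The minEPF code was only linked in the original. Below is a hypothetical sketch matching the seven-argument call in Example 4; the interface is inferred from that call, the inner solver (fminsearch) and iteration limits are assumptions:

```matlab
function [x, minf] = minEPF(f, x0, g, h, c1, p, vars)
% Hypothetical exterior-point penalty routine (interface from Example 4).
% f: symbolic objective; g: inequality constraints g(x) >= 0 (column);
% h: equality constraints h(x) = 0; c1, p: initial penalty and growth.
epsilon = 1e-6;
fh = matlabFunction(f, 'Vars', {vars});      % numeric handles taking a
gh = matlabFunction(g, 'Vars', {vars});      % row vector like x0
hh = matlabFunction(h, 'Vars', {vars});
ck = c1;  xk = x0;
for k = 1:100
    % penalized objective: equalities and violated inequalities squared
    F  = @(z) fh(z) + ck*(sum(hh(z).^2) + sum(min(0, gh(z)).^2));
    xk = fminsearch(F, xk);                  % inner unconstrained search
    S  = ck*(sum(hh(xk).^2) + sum(min(0, gh(xk)).^2));
    if S <= epsilon, break; end              % nearly feasible: stop
    ck = ck*p;                               % stiffen the penalty
end
x = xk;  minf = fh(xk);
end
```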

Example 4. Find the minimum point and value of the function f(x1,x2) = x1^2 - x1·x2 + x2 - x1 + 1, s.t.
2·x1 + 3·x2 - 9 = 0
x1^2 + x2^2 - 6 >= 0
x1 >= 0
x2 >= 0
Solution: let c1 = 0.05, p = 2, (x1,x2) = (2,2).
>> syms x1 x2;
>> f=x1^2-x1*x2+x2-x1+1;
>> g=[x1^2+x2^2-6;x1;x2]; h=[2*x1+3*x2-9];
>> [x,minf]=minEPF(f,[2 2],g,h,0.05,2,[x1 x2])

The fmincon function attempts to solve constrained optimization problems of the form:
min over x of f(x) subject to: A*x <= b, Aeq*x = beq (linear constraints)
c(x) <= 0, ceq(x) = 0 (nonlinear constraints)
lb <= x <= ub (bounds)
The common syntax for fmincon is
[x,fval]=fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
where x0 is the initial guess. In Ex. 4, we have
>> A=[-1 0; 0 -1];
>> b=[0; 0];
>> Aeq=[2 3];
>> beq=[9];
and fmincon returns the constrained minimum (a full, runnable call is sketched below).
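The slide above omits the definition of the nonlinear constraint handle; a possible complete call for Ex. 4 (the handle names here are illustrative, not from the slides) is:

```matlab
% Objective and constraints of Example 4 in fmincon's standard form;
% the inequality x1^2 + x2^2 - 6 >= 0 must be rewritten as c(x) <= 0.
fun     = @(x) x(1)^2 - x(1)*x(2) + x(2) - x(1) + 1;
nonlcon = @(x) deal(6 - x(1)^2 - x(2)^2, []);  % [c, ceq]; no ceq here
A = [-1 0; 0 -1];  b = [0; 0];                 % encode x1 >= 0, x2 >= 0
Aeq = [2 3];  beq = 9;                         % 2*x1 + 3*x2 = 9
x0 = [2 2];                                    % same start as Example 4
[x, fval] = fmincon(fun, x0, A, b, Aeq, beq, [], [], nonlcon)
```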