Optimisation

The general problem: we want to minimise some function F(x) subject to constraints

a_i(x) = 0, i = 1, 2, …, m_1
b_i(x) ≥ 0, i = 1, 2, …, m_2

where x is a vector of length n. F( ) is called the objective function; the a_i( ) and b_i( ) are called the constraint functions.

Special Cases

– If n = 1 there is just one variable: the univariate case (as opposed to the multivariate case).
– If the a_i(x) and b_i(x) are linear functions then we have linear constraints (as opposed to nonlinear constraints).
– If m_2 = 0 we have equality constraints only.
– If m_1 = 0 we have inequality constraints only.
– If m_1 = m_2 = 0 we have the unconstrained case.

Techniques

The techniques used to solve an optimisation problem depend on the properties of the functions F, a_i, and b_i. Important factors include:
– Univariate or multivariate case?
– Constrained or unconstrained problem?
– Do we know the derivatives of F?

Example Linear Problem

An oil refinery can buy light crude at £35/barrel and heavy crude at £30/barrel. Refining one barrel of crude produces petrol, heating oil, and jet fuel as follows (barrels per barrel of crude):

              Petrol   Heating oil   Jet fuel
Light crude    0.3        0.2          0.3
Heavy crude    0.3        0.4          0.2

The refinery has contracts for 0.9M barrels of petrol, 0.8M barrels of heating oil and 0.5M barrels of jet fuel. How much light and heavy crude should the refinery buy to satisfy the contracts at least cost?

Problem Specification

Let x_1 and x_2 be the number of barrels (in millions) of light and heavy crude that the refinery purchases.

Cost (in millions of £): F(x) = 35x_1 + 30x_2

Constraints:
0.3x_1 + 0.3x_2 ≥ 0.9 (petrol)
0.2x_1 + 0.4x_2 ≥ 0.8 (heating oil)
0.3x_1 + 0.2x_2 ≥ 0.5 (jet fuel)
x_1 ≥ 0, x_2 ≥ 0 (non-negativity)

This is called a "linear program".

Graphical Solution

Since F is linear, its minimum lies on the boundary of the feasible region, and F varies linearly on each section of the boundary. So we can find the solution by looking at the intersection points of the constraints forming the boundary.

[Figure: the feasible region in the (x_1, x_2) plane, bounded by the three constraint lines and the axes.]

Solution

Recall that F(x) = 35x_1 + 30x_2. Evaluating F at the intersection points on the boundary:

(x_1, x_2)   F(x)
(0, 3)        90
(2, 1)       100
(4, 0)       140

So the minimum cost is 90 for x_1 = 0 and x_2 = 3.
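For comparison, the same linear program can be solved numerically. A minimal sketch, assuming MATLAB's linprog (Optimization Toolbox) is available; linprog minimises f'*x subject to A*x <= b, so our >= constraints are negated:

f  = [35; 30];                        % cost coefficients
A  = -[0.3 0.3; 0.2 0.4; 0.3 0.2];    % negate to turn >= into <=
b  = -[0.9; 0.8; 0.5];
lb = [0; 0];                          % non-negativity
x  = linprog(f, A, b, [], [], lb)     % expect x = [0; 3], cost 90

Running this should reproduce the graphical answer x_1 = 0, x_2 = 3.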

Unconstrained Univariate Case

We seek to minimise f(x). If x* minimises f(x) then:
i. f'(x*) = 0 (first order condition)
ii. f''(x*) ≥ 0 (second order condition)

[Figure: plot of f(x) = (x-1)(x-1) + 2, which is minimised at x* = 1.]

Example

Minimise f(x) = x^2 + 4cos(x). We solve:

f'(x) = 2x - 4sin(x) = 0

[Figure: plots of y = x^2 + 4cos(x) and y = 2x - 4sin(x).]

Solving numerically gives x ≈ ±1.8955; the third root of f'(x), x = 0, is a local maximum, since f''(0) = 2 - 4cos(0) < 0.
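A quick check in MATLAB; a sketch using only built-in functions (fzero finds a root of f', and fminbnd minimises f directly on an interval):

f  = @(x) x.^2 + 4*cos(x);     % objective
fp = @(x) 2*x - 4*sin(x);      % its derivative
xstar = fzero(fp, 1.5)         % root of f': xstar ≈ 1.8955
xmin  = fminbnd(f, 1, 2)       % direct minimisation gives the same point

The two approaches agree, illustrating the two viewpoints used below: root finding applied to f'(x) = 0, and direct minimisation of f.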

Bisection Method

Suppose we have already bracketed the zero in the interval [a,b]. Then:
1. Evaluate f at the mid-point c = (a+b)/2.
2. If f(c) is zero then quit.
3. If f(a) and f(c) have the same sign then set a = c; else set b = c.
4. Go to Step 1.

MATLAB Example

>> f = @(x) 2*x - 4*sin(x);   % from the previous example; zero bracketed in [1,2]
>> a=1; fa=f(a);
>> b=2; fb=f(b);
>> c=(a+b)/2; fc=f(c); if fa*fc>0, a=c; fa=fc; else b=c; fb=fc; end; c

Using the up arrow to repeat the last line we get values of c that converge to the solution of f(x) = 0.

Convergence

At each iteration the zero x* lies within the current interval from a to b, so the error |x* - x| is less than the interval size. But the interval size is reduced by a factor of 2 at each iteration. So if a and b are the original values bracketing the zero, and x_n is the estimate of x* at iteration n, then:

|x* - x_n| < (b - a)/2^n
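Wrapped up as a function, this bound also tells us in advance how many iterations are needed for a given tolerance. A minimal sketch (the function name and tolerance are illustrative):

function c = bisect(f, a, b, tol)
% Bisection: assumes f(a)*f(b) < 0.
% The bound (b-a)/2^n < tol gives the iteration count directly.
n = ceil(log2((b - a)/tol));
for k = 1:n
    c = (a + b)/2;
    if f(a)*f(c) > 0, a = c; else b = c; end
end
c = (a + b)/2;
end

For example, bisect(@(x) 2*x - 4*sin(x), 1, 2, 1e-10) needs ceil(log2(1e10)) = 34 iterations.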

Newton's Method

Given an estimate x_k of the zero, a better estimate is obtained by approximating the function by the tangent line at x_k. The slope of the tangent gives

f'(x_k) = f(x_k)/(x_k - x_{k+1})

and rearranging:

x_{k+1} = x_k - f(x_k)/f'(x_k)

[Figure: the tangent to f at x_k crossing the axis at x_{k+1}.]

Convergence of Newton's Method

The error can be shown to be quadratic if the initial estimate of the zero is sufficiently close to x*:

|x* - x_{k+1}| < M|x* - x_k|^2 for some constant M.

(Proof: Taylor series expansion of f(x*) about x_k.)
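To see the quadratic rate numerically, we can print the ratio |e_{k+1}|/|e_k|^2, which settles towards the constant M. A sketch using the function from the next slide, f(x) = x^3 + 4x^2 - 10:

f  = @(x) x.^3 + 4*x.^2 - 10;
fd = @(x) 3*x.^2 + 8*x;
xstar = fzero(f, 1.5);          % reference value of the zero
x = 1; eold = abs(xstar - x);
for k = 1:6
    x = x - f(x)/fd(x);         % Newton step
    e = abs(xstar - x);
    fprintf('%d  e = %.3e  e/eold^2 = %.3f\n', k, e, e/eold^2);
    eold = e;
end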

Example

Find the real root of f(x) = x^3 + 4x^2 - 10 = 0.

>> format long
>> r=roots([1 4 0 -10]'); y=r(3); x=1;
>> for i=1:8
     fx=-10+x*x*(4+x); fxd=x*(8+3*x); err=y-x;
     a(i,1)=i; a(i,2)=x; a(i,3)=fx; a(i,4)=fxd; a(i,5)=err;
     x=x-fx/fxd;
   end;
>> a

Problems with Newton's Method

Problems may arise if the initial estimate is not "sufficiently close" to the zero. Consider f(x) = ln(x), whose Newton iteration is x_{k+1} = x_k(1 - ln x_k). If 0 < x_1 < e then Newton's method will converge; if x_1 ≥ e it will fail, since the first step lands at or below zero, where ln is undefined.
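A quick sketch of both behaviours (the starting points are chosen for illustration):

% Newton for f(x) = ln(x): x_{k+1} = x_k*(1 - log(x_k))
x = 2.5;                        % 0 < x_1 < e: converges to the zero at 1
for k = 1:8, x = x*(1 - log(x)); end
x                               % close to 1
x = 3;                          % x_1 > e: first step is already negative
x = x*(1 - log(x))              % method fails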

Linear Interpolation Methods

Newton's method requires the first derivative at each iteration, and the bisection method doesn't use the magnitudes of f at each end of the interval. Suppose instead we use f(a_n) and f(b_n), and find a new estimate of the zero by approximating the function between a_n and b_n by a straight line.

[Figure: the chord from (a_n, f(a_n)) to (b_n, f(b_n)) crossing the axis at x_n.]

Secant Method

The secant method is a linear interpolation method that generates approximations to the zero, starting from x_0 and x_1, according to:

x_{n+1} = x_n - f(x_n)(x_n - x_{n-1})/(f(x_n) - f(x_{n-1}))

Unlike bisection, the iterates are not guaranteed to bracket the zero, so there is a potential problem with divergence.
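A minimal sketch of the iteration (the tolerance and iteration cap are illustrative):

f = @(x) 2*x - 4*sin(x);
x0 = 1; x1 = 2;
for k = 1:20
    x2 = x1 - f(x1)*(x1 - x0)/(f(x1) - f(x0));   % secant step
    if abs(x2 - x1) < 1e-12, break; end
    x0 = x1; x1 = x2;
end
x2                                % ~1.8955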

Method of False Position

To avoid the possible divergence problem with the secant method, we keep the zero bracketed in an interval (a,b), as in the bisection method, but choose the new point c by linear interpolation:

c = b - f(b)(b - a)/(f(b) - f(a))

If f(c) = 0 we are finished. If f(a) and f(c) have the same sign we replace a by c; otherwise, we replace b by c.
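The corresponding sketch mirrors the bisection loop, but with the interpolated point:

f = @(x) 2*x - 4*sin(x);
a = 1; fa = f(a); b = 2; fb = f(b);
for k = 1:30
    c = b - fb*(b - a)/(fb - fa);   % interpolated point, stays in (a,b)
    fc = f(c);
    if fc == 0, break; end
    if fa*fc > 0, a = c; fa = fc; else b = c; fb = fc; end
end
c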

Golden Section Method

A function is unimodal on an interval [a,b] if it has a single local minimum on [a,b]. The Golden Section method can be used to find the minimum of a function F on [a,b], where F is unimodal on [a,b]. This method is not based on solving F'(x) = 0, and it seeks to avoid unnecessary function evaluations.

Golden Section Method

Divide the interval [a,b] at x and y as follows:

x = a + τ(b-a); u = F(x)
y = a + τ^2(b-a); v = F(y)

where τ = (√5 - 1)/2 ≈ 0.618 is the golden ratio. If u > v then x* must lie in [a,x], and if u ≤ v then x* must lie in [y,b].

Case 1: If u > v then the new interval is [a,x], of length x - a = τ(b-a). At the next step we need to know F at:

a + τ(x-a) = a + τ^2(b-a)
a + τ^2(x-a) = a + τ^3(b-a)

But we already know F at a + τ^2(b-a) (it is the point y) from the previous step, so we can avoid this function evaluation.

Golden Section Method

Case 2: If u ≤ v then the new interval is [y,b], of length b - y = τ(b-a). At the next step we need to know F at:

y + τ(b-y) = a + 2τ^2(b-a)
y + τ^2(b-y) = a + τ^2(1+τ)(b-a) = a + τ(b-a)

But we already know F at a + τ(b-a) (it is the point x) from the previous step, so we can avoid this function evaluation.

In both cases we get a new interval that is τ times the length of the current interval, and each iteration requires only one new function evaluation. After n iterations the error is bounded by (b-a)τ^n/2.

Note: τ^2 + τ - 1 = 0

MATLAB Code for Golden Section

>> f = @(x) x.^2 + 4*cos(x);   % from the earlier example; unimodal on [1,2]
>> a=1; fa=f(a); b=2; fb=f(b); t=(sqrt(5)-1)/2;
>> x=a+t*(b-a); y=a+t*t*(b-a); u=f(x); v=f(y); if u>v, b=x; fb=u; else a=y; fa=v; end; c=(b+a)/2

Using the up arrow to repeat the last line we get values of c that converge to the minimum of F on [1,2].
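Note that the command-line version above re-evaluates F at both interior points on every repeat. A sketch of a function (the name is illustrative) that reuses one evaluation per iteration, as described on the previous slides:

function xmin = golden(F, a, b, n)
% Golden section search for the minimum of a unimodal F on [a,b],
% performing one new function evaluation per iteration.
t = (sqrt(5) - 1)/2;
x = a + t*(b - a);   u = F(x);
y = a + t*t*(b - a); v = F(y);
for k = 1:n
    if u > v                     % minimum in [a,x]
        b = x; x = y; u = v;     % old y becomes the new x
        y = a + t*t*(b - a); v = F(y);
    else                         % minimum in [y,b]
        a = y; y = x; v = u;     % old x becomes the new y
        x = a + t*(b - a); u = F(x);
    end
end
xmin = (a + b)/2;
end

For example, golden(@(x) x.^2 + 4*cos(x), 1, 2, 40) returns the minimiser x* ≈ 1.8955, with error within the bound (b-a)τ^n/2.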