Line Search.

Line Search

Line search techniques are, in essence, optimization algorithms for one-dimensional minimization problems. They are often regarded as the backbone of nonlinear optimization algorithms. Typically, these techniques search a bracketed interval [a, b] that contains the minimizer x*, and unimodality of the objective is often assumed.

[Figure: a bracketed interval with a < x* < b]

Exhaustive search requires N = (b - a)/ε + 1 function evaluations to search such an interval, where ε is the resolution.
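As a concrete illustration, here is a minimal Python sketch of exhaustive search; the function name exhaustive_search and the test objective are placeholders, not from the slides:

    def exhaustive_search(f, a, b, eps):
        # Evaluate f at N = (b - a)/eps + 1 equally spaced points
        # and keep the point with the smallest value.
        n = int((b - a) / eps) + 1
        best_x, best_f = a, f(a)
        for i in range(1, n):
            x = a + i * eps
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        return best_x

    # Example: minimize (x - 1)^2 on [0, 3] with resolution 0.01
    print(exhaustive_search(lambda x: (x - 1)**2, 0.0, 3.0, 0.01))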

Basic bracketing algorithm

[Figure: interval with a < x1 < x2 < b]

Two-point search (dichotomous search) for minimizing ƒ(x):
0) Assume an interval [a, b].
1) Find x1 = a + (b-a)/2 - ε/2 and x2 = a + (b-a)/2 + ε/2, where ε is the resolution.
2) Compare ƒ(x1) and ƒ(x2).
3) If ƒ(x1) < ƒ(x2), eliminate x > x2 and set b = x2.
   If ƒ(x1) > ƒ(x2), eliminate x < x1 and set a = x1.
   If ƒ(x1) = ƒ(x2), pick another pair of points.
4) Continue placing point pairs until the interval is shorter than 2ε.
A sketch in code follows below.
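A minimal Python sketch of the dichotomous search, assuming ƒ is unimodal on [a, b]; for simplicity, ties ƒ(x1) = ƒ(x2) are handled by discarding the left portion rather than re-sampling:

    def dichotomous_search(f, a, b, eps):
        # Compare f at two points straddling the midpoint, eps apart,
        # and shrink [a, b] until it is shorter than 2*eps.
        while (b - a) >= 2 * eps:
            mid = a + (b - a) / 2
            x1, x2 = mid - eps / 2, mid + eps / 2
            if f(x1) < f(x2):
                b = x2      # minimum lies to the left of x2
            else:
                a = x1      # minimum lies to the right of x1
        return (a + b) / 2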

Fibonacci Search

The Fibonacci numbers are 1, 1, 2, 3, 5, 8, 13, 21, 34, ...; that is, each number is the sum of the previous two: Fn = Fn-1 + Fn-2.

[Figure: interval a < x1 < x2 < b, with segment lengths L1, L2, L3 satisfying L1 = L2 + L3]

Successive interval lengths satisfy L1 = L2 + L3, and it can be derived that the final interval length after n evaluations is Ln = (L1 + Fn-2 ε) / Fn, where ε is the resolution.
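A compact Python sketch of Fibonacci search; the indexing convention (fib[k-2]/fib[k] and fib[k-1]/fib[k] place the two interior points) and the parameter n, which yields roughly n - 1 function evaluations, are my own choices:

    def fibonacci_search(f, a, b, n):
        # Build enough Fibonacci numbers (fib[0] = fib[1] = 1).
        fib = [1, 1]
        while len(fib) <= n:
            fib.append(fib[-1] + fib[-2])
        k = n
        x1 = a + (fib[k - 2] / fib[k]) * (b - a)
        x2 = a + (fib[k - 1] / fib[k]) * (b - a)
        f1, f2 = f(x1), f(x2)
        while k > 3:
            k -= 1
            if f1 > f2:              # minimum lies in [x1, b]
                a, x1, f1 = x1, x2, f2
                x2 = a + (fib[k - 1] / fib[k]) * (b - a)
                f2 = f(x2)
            else:                    # minimum lies in [a, x2]
                b, x2, f2 = x2, x1, f1
                x1 = a + (fib[k - 2] / fib[k]) * (b - a)
                f1 = f(x1)
        return (a + b) / 2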

Golden Section

[Figure: an interval of length a split into segments of lengths b and a - b; the segment of length a - b is discarded]

In golden section search, you try to have b/(a-b) = a/b, which implies b² = a² - ab. Solving this quadratic gives a = (b ± b√5)/2, so a/b = -0.618 or 1.618 (the golden section ratio). See also 36 in your book for the derivation. Note that 1/1.618 = 0.618.

Bracketing a Minimum using Golden Section

[Figure: a < x1 < x2 < b]

Initialize:
x1 = a + 0.382(b-a); f1 = ƒ(x1)
x2 = a + 0.618(b-a); f2 = ƒ(x2)

Loop (repeat until b - a is smaller than the required tolerance; a code sketch follows below):
if f1 > f2 then
    a = x1; x1 = x2; f1 = f2
    x2 = a + 0.618(b-a); f2 = ƒ(x2)
else
    b = x2; x2 = x1; f2 = f1
    x1 = a + 0.382(b-a); f1 = ƒ(x1)
endif
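A runnable Python version of the loop above, using the exact ratio 1/φ = 0.6180339887... instead of the rounded 0.618; tol is an assumed stopping tolerance:

    def golden_section(f, a, b, tol=1e-6):
        # Interior points sit at the golden ratios 0.382 and 0.618 of the
        # interval, so each iteration needs only one new function evaluation.
        inv_phi = (5 ** 0.5 - 1) / 2      # 0.6180339887...
        x1 = a + (1 - inv_phi) * (b - a)
        x2 = a + inv_phi * (b - a)
        f1, f2 = f(x1), f(x2)
        while (b - a) > tol:
            if f1 > f2:                   # minimum lies in [x1, b]
                a, x1, f1 = x1, x2, f2
                x2 = a + inv_phi * (b - a)
                f2 = f(x2)
            else:                         # minimum lies in [a, x2]
                b, x2, f2 = x2, x1, f1
                x1 = a + (1 - inv_phi) * (b - a)
                f1 = f(x1)
        return (a + b) / 2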

Newton's Method

If your function is differentiable, you do not need to evaluate two points to determine the region to discard: compute the slope, and its sign indicates which region to discard. The basic premise of the Newton-Raphson method is that finding a root of the first derivative is equivalent to finding an optimum (if the function is differentiable). The method is sometimes referred to as a line search by curve fit, because it approximates the real (unknown) objective function to be minimized.

Newton-Raphson Method

Applied to minimization, Newton-Raphson performs root finding on the first derivative:

xk+1 = xk - y'(xk) / y''(xk)

Question: How many iterations are necessary to solve an optimization problem with a quadratic objective function?
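A minimal sketch in Python, assuming the first and second derivatives are available as functions (the names newton_minimize, df, and d2f are placeholders):

    def newton_minimize(df, d2f, x0, tol=1e-8, max_iter=50):
        # Newton-Raphson on the first derivative:
        # x_{k+1} = x_k - y'(x_k) / y''(x_k)
        x = x0
        for _ in range(max_iter):
            step = df(x) / d2f(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # For the quadratic y = (x - 1)^2: y' = 2(x - 1), y'' = 2.
    # The derivative is linear, so a single iteration lands exactly on x = 1.
    print(newton_minimize(lambda x: 2 * (x - 1), lambda x: 2.0, x0=10.0))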

False Position Method or Secant Method

Second-order information is expensive to calculate (for multi-variable problems), so we approximate the second derivative instead. Replace y''(xk) in Newton-Raphson with the finite-difference estimate

y''(xk) ≈ (y'(xk) - y'(xk-1)) / (xk - xk-1)

Hence, Newton-Raphson becomes

xk+1 = xk - y'(xk) (xk - xk-1) / (y'(xk) - y'(xk-1))

The main advantage is that no second derivative is required.
Question: Why is this an advantage?
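A short Python sketch of the secant update on the first derivative; two starting points are needed, and df is an assumed function returning y'(x):

    def secant_minimize(df, x0, x1, tol=1e-8, max_iter=100):
        # Newton-Raphson with y'' replaced by the finite-difference
        # estimate (y'(xk) - y'(xk-1)) / (xk - xk-1).
        d0, d1 = df(x0), df(x1)
        for _ in range(max_iter):
            if d1 == d0:          # flat secant: cannot take a step
                break
            x2 = x1 - d1 * (x1 - x0) / (d1 - d0)
            x0, d0 = x1, d1
            x1, d1 = x2, df(x2)
            if abs(x1 - x0) < tol:
                break
        return x1

    # Example: minimize (x - 1)^2 using only its first derivative.
    print(secant_minimize(lambda x: 2 * (x - 1), x0=0.0, x1=3.0))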