Newton’s method for finding a local minimum

Simplest derivation

If F(x) has a minimum at x_m, then F'(x_m) = 0. So we can find the minimum by solving F'(x) = 0 with Newton's method for root finding. Recall that for solving Z(x) = 0, Newton's iterations are x_{n+1} = x_n + h, where h = -Z(x_n)/Z'(x_n). Now substitute F'(x) for Z(x) to get h = -F'(x_n)/F''(x_n). So Newton's iterations for minimization are:

x_{n+1} = x_n - F'(x_n)/F''(x_n)
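A minimal Python sketch of this iteration (the function name newton_minimize, its arguments, and the test function are illustrative choices, not from the slides):

```python
def newton_minimize(f_prime, f_double_prime, x0, tol=1e-8, max_iter=50):
    """Minimize F by Newton's method, i.e. solve F'(x) = 0.

    f_prime, f_double_prime -- callables returning F'(x) and F''(x)
    x0 -- initial guess; must be close enough to the minimum
    """
    x = x0
    for _ in range(max_iter):
        h = -f_prime(x) / f_double_prime(x)   # Newton step h = -F'(x)/F''(x)
        x += h
        if abs(h) < tol * (1.0 + abs(x)):     # stop when the step is tiny
            return x
    raise RuntimeError("no convergence in max_iter steps")

# Illustrative test: F(x) = x**2 - 2*ln(x) has its minimum at x = 1
print(newton_minimize(lambda x: 2*x - 2/x, lambda x: 2 + 2/x**2, x0=2.0))
```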

Rationalizing Newton's method

Let's Taylor expand F(x) around its minimum, x_m:

F(x) = F(x_m) + F'(x_m)(x - x_m) + (1/2)F''(x_m)(x - x_m)^2 + (1/3!)F'''(x_m)(x - x_m)^3 + ...

Neglect the higher-order terms for |x - x_m| << 1 (you must start with a good initial guess anyway):

F(x) ≈ F(x_m) + F'(x_m)(x - x_m) + (1/2)F''(x_m)(x - x_m)^2

But recall that we are expanding around a minimum, so F'(x_m) = 0. Then

F(x) ≈ F(x_m) + (1/2)F''(x_m)(x - x_m)^2

So most functions look like a parabola around their local minima!
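A quick numerical check of this claim, using an example function of my choosing (not from the slides): F(x) = e^x - 2x has its minimum at x_m = ln 2, and near x_m it is matched closely by its quadratic Taylor approximation.

```python
import math

x_m = math.log(2.0)                     # minimum of F(x) = exp(x) - 2x
F = lambda x: math.exp(x) - 2*x
# Quadratic approximation F(x_m) + (1/2)F''(x_m)(x - x_m)**2, with F''(x) = exp(x)
quad = lambda x: F(x_m) + 0.5 * math.exp(x_m) * (x - x_m)**2

for dx in (0.5, 0.1, 0.01):
    x = x_m + dx
    print(dx, F(x), quad(x), abs(F(x) - quad(x)))
# The gap shrinks like dx**3, as the neglected (1/3!)F'''(x_m)(x - x_m)**3 term predicts.
```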

Newton's method = approximate the function with a parabola, jump to the parabola's minimum, iterate. [Figure: parabolic approximation of F(x), with the minimum labeled x_m]

Newton's method: a more informative derivation

Newton's method for finding a minimum has a quadratic (fast) convergence rate for many (but not all!) functions, but it must be started close enough to the solution to converge.
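To make the "not all!" caveat concrete, here is an example of my own (not from the slides): for F(x) = x^4 the second derivative vanishes at the minimum, F''(0) = 0, and the Newton step becomes x_{n+1} = x_n - 4x_n^3/(12x_n^2) = (2/3)x_n. The error shrinks only by a constant factor per step, i.e. convergence degrades from quadratic to linear.

```python
x = 1.0
for n in range(10):
    x = x - (4*x**3) / (12*x**2)   # Newton step for F(x) = x**4
    print(n, x)                    # error shrinks by 2/3 per step: linear, not quadratic
```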

Convergence

If x_0 is chosen well (close to the minimum) and the function is close to quadratic near the minimum, convergence is fast: quadratic. E.g. if |x_0 - x_m| < 0.1, then after the next iteration |x_1 - x_m| < (0.1)^2 = 10^{-2}, then |x_2 - x_m| < (10^{-2})^2 = 10^{-4}, then 10^{-8}, etc.

Convergence is NOT guaranteed; the same pitfalls arise as when locating a root (run-away, cycles, convergence to a point other than x_m, etc.). Sometimes convergence may be very poor even for "nice looking", smooth functions! Combining Newton's method with slower but more robust methods such as golden-section search often helps. No numerical method will work well for truly ill-conditioned (pathological) cases: at best expect significant errors, at worst a completely wrong answer!
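The digit-doubling behavior is easy to observe on the illustrative test function used earlier, F(x) = x^2 - 2 ln x with minimum at x_m = 1:

```python
x, x_m = 2.0, 1.0
for n in range(6):
    x = x - (2*x - 2/x) / (2 + 2/x**2)   # Newton step for F(x) = x**2 - 2*ln(x)
    print(n, abs(x - x_m))               # the error is roughly squared each iteration
```

This hybrid idea is what production routines do: Brent's minimization method, for instance, combines fast parabolic interpolation steps with safe golden-section steps.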

Choosing stopping criteria

The usual: |x_{n+1} - x_n| < TOL, or better, |x_{n+1} - x_n| < TOL*|x_{n+1} + x_n|/2, to ensure that the given TOL is achieved for the relative accuracy. Generally TOL ~ sqrt(ε_mach), or larger if full theoretical precision is not needed, but not smaller. TOL cannot be smaller because, in general, F(x) is quadratic around the minimum x_m; so once you are within h < sqrt(ε_mach) of x_m, |F(x_m) - F(x_m + h)| < ε_mach*|F(x_m)| and you can no longer distinguish between values of F(x) so close to the minimum.
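A sketch of this relative stopping test in Python (the helper name converged is mine; it would replace the simpler step test in the newton_minimize sketch above):

```python
import math
import sys

TOL = math.sqrt(sys.float_info.epsilon)   # ~1.5e-8 in double precision; no point going smaller

def converged(x_new, x_old, tol=TOL):
    # Relative step-size test: |x_{n+1} - x_n| < TOL * |x_{n+1} + x_n| / 2
    return abs(x_new - x_old) < tol * abs(x_new + x_old) / 2
```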

The stopping criterion: more details

To be more specific: we must stop iterating when |F(x_n) - F(x_m)| ~ ε_mach*|F(x_n)|, since beyond that point we can no longer tell F(x_n) from F(x_m) and further iterations are meaningless. Since |F(x_n) - F(x_m)| ≈ (1/2)|F''(x_m)|(x_n - x_m)^2, we get for this smallest attainable |x_n - x_m|:

|x_n - x_m| ~ sqrt(ε_mach)*sqrt(|2F(x_n)/F''(x_m)|) ≈ sqrt(ε_mach)*sqrt(|2F(x_n)/F''(x_n)|) = sqrt(ε_mach)*|x_n|*sqrt(|2F(x_n)/(x_n^2 F''(x_n))|)

For many functions the quantity under the last square root is ~1, which gives us the rule from the previous slide: stop when |x_{n+1} - x_n| < sqrt(ε_mach)*|x_{n+1} + x_n|/2.
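A two-line demonstration of why pushing below sqrt(ε_mach) is futile, on an illustrative function of my choosing: for F(x) = 1 + (x - 1)^2, a point only 10^{-9} away from the minimum at x = 1 already gives a value indistinguishable from F(1) in double precision, since (10^{-9})^2 = 10^{-18} < ε_mach ≈ 2.2e-16.

```python
F = lambda x: 1 + (x - 1)**2
print(F(1 + 1e-9) == F(1.0))   # True: the 1e-18 difference is lost to rounding
print(1e-18 < 2**-52)          # True: below machine epsilon for float64
```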