Presentation transcript:

Numerical Analysis
EE, NCKU
Tien-Hao Chang (Darby Chang)

In the previous slide
Rootfinding
- bisection method
- false position
- multiplicity
Bisection method
- Intermediate Value Theorem
- convergence measures
False position
- yet another simple enclosure method
- advantage and disadvantage in comparison with the bisection method

In this slide
Fixed point iteration scheme
- what is a fixed point?
- iteration function
- convergence
Newton's method
- tangent line approximation
Secant method

Rootfinding
Simple enclosure
- Intermediate Value Theorem
- guaranteed to converge
- convergence rate is slow
- bisection and false position
Fixed point iteration
- Mean Value Theorem
- rapid convergence
- loss of guaranteed convergence

2.3 Fixed Point Iteration Schemes

There is at least one point on the graph at which the tangent line is parallel to the secant line

Mean Value Theorem
We use a slightly different formulation
An example of using this theorem: proving an inequality
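The statement itself is an image in the original slides; the standard Mean Value Theorem, for reference, reads: if f is continuous on [a, b] and differentiable on (a, b), then there exists c in (a, b) with

$$ f(b) - f(a) = f'(c)\,(b - a). $$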

Fixed Points

Fixed points
Consider the function sin x
- it can be thought of as moving the input value π/6 to the output value 1/2
- the sine function maps zero to zero
- that is, the sine function fixes the location of 0
- x = 0 is said to be a fixed point of the function sin x
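The formal definition does not survive in the transcript; the standard one, consistent with the sin x example above, is

$$ p \text{ is a fixed point of } g \quad\Longleftrightarrow\quad g(p) = p. $$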

Number of fixed points
According to the previous figure, a natural question is: how many fixed points does a given function have?

Only sufficient conditions
Namely, not necessary conditions
- it is possible for a function to violate one or more of the hypotheses, yet still have a (possibly unique) fixed point
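The hypotheses referred to are on an image slide; they are presumably the standard sufficient conditions for existence and uniqueness of a fixed point: if g is continuous on [a, b] and g([a, b]) is contained in [a, b], then g has at least one fixed point in [a, b]; if, in addition,

$$ |g'(x)| \le k < 1 \quad \text{for all } x \in (a, b), $$

then the fixed point is unique.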

Fixed Point Iteration

Fixed point iteration
If it is known that a function g has a fixed point, one way to approximate the value of that fixed point is a 'fixed point iteration scheme', which can be defined as follows:
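The defining formula is an image in the original slides; the standard scheme being described is: choose a starting approximation p_0 and compute

$$ p_{n+1} = g(p_n), \qquad n = 0, 1, 2, \ldots $$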

http://thomashawk.com/hello/209/1017/1024/Jackson%20Running.jpg In action

About fixed point iteration

Relation to rootfinding
Now we know what fixed point iteration is, but how do we apply it to rootfinding?
More precisely, given a rootfinding equation, f(x) = x^3 + x^2 - 3x - 3 = 0, what is its iteration function g(x)?

Iteration function
Algebraically transform to the form x = ...
f(x) = x^3 + x^2 - 3x - 3
- x = x^3 + x^2 - 2x - 3
- x = (x^3 + x^2 - 3) / 3
- ...
Every rootfinding problem can be transformed into any number of fixed point problems (fortunately or unfortunately?)
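A minimal sketch, in Python, of trying the two rearrangements above as iteration functions. The starting point p0 = 1.5, the tolerance, and the iteration cap are illustrative choices, not values taken from the slides.

def fixed_point_iteration(g, p0, tol=1e-8, max_iter=50):
    """Iterate p_{n+1} = g(p_n) until two successive values agree to within tol."""
    p = p0
    for n in range(1, max_iter + 1):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next, n          # converged
        p = p_next
    return p, max_iter                # no convergence within max_iter

# The two rearrangements of x^3 + x^2 - 3x - 3 = 0 listed on the slide:
g_a = lambda x: x**3 + x**2 - 2*x - 3      # from x = x^3 + x^2 - 2x - 3
g_b = lambda x: (x**3 + x**2 - 3) / 3      # from x = (x^3 + x^2 - 3)/3

for name, g in [("g_a", g_a), ("g_b", g_b)]:
    try:
        p, n = fixed_point_iteration(g, p0=1.5)
        print(f"{name}: p = {p:.10f} after {n} iterations")
    except OverflowError:
        print(f"{name}: iterates blew up (diverged)")

With these choices, g_a blows up, while g_b converges, but to the root x = -1 rather than the root near 1.732 inside (1, 2), illustrating how strongly the behaviour depends on the chosen iteration function.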

http://thomashawk.com/hello/209/1017/1024/Jackson%20Running.jpg In action

Analysis
#1 converges, but to a fixed point outside the interval (1,2)
#2 fails to converge, despite attaining values quite close to the fixed point
#3 and #5 converge rapidly
- #3 adds one correct decimal every iteration
- #5 doubles the number of correct decimals every iteration
#4 converges, but very slowly

Convergence
This analysis suggests a natural question: when is fixed point iteration guaranteed to converge?
The existence of the fixed point of g is justified by our previous theorem
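The convergence theorem itself is an image in the original slides; the standard result for this setting, consistent with the discussion of the parameter k on the next slide, is: if g maps [a, b] into itself and |g'(x)| \le k < 1 on (a, b), then the iteration p_{n+1} = g(p_n) converges to the unique fixed point p for any p_0 in [a, b], with the error bound

$$ |p_n - p| \le k^n \, |p_0 - p|. $$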

k
This demonstrates the importance of the parameter k
- when k → 0, convergence is rapid
- when k → 1, convergence is dramatically slow
- k = 1/2 is roughly the same as the bisection method

Fixed Point Iteration Schemes: Order of Convergence
All about the derivatives, g^(k)(p)
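The precise statement is an image in the original slides; the standard order-of-convergence result it alludes to is

$$ g'(p) = g''(p) = \cdots = g^{(\alpha-1)}(p) = 0, \;\; g^{(\alpha)}(p) \ne 0 \;\;\Longrightarrow\;\; \text{convergence of order } \alpha, $$

while 0 < |g'(p)| < 1 gives linear convergence with asymptotic rate |g'(p)|.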

Stopping condition
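The slide's content is an image; a commonly used stopping criterion for fixed point iteration (an assumption here, not taken from the slides) is to stop when successive iterates agree to within a tolerance,

$$ |p_{n+1} - p_n| < \varepsilon, $$

which, when |g'(x)| \le k < 1, bounds the true error via |p_{n+1} - p| \le \frac{k}{1-k} |p_{n+1} - p_n|.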

Two steps

The first step

The second step

2.3 Fixed Point Iteration Schemes

2.4 Newton’s Method

Newton's Method: Definition
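The defining formula is an image in the original slides; the standard Newton (Newton-Raphson) iteration, obtained from the tangent line approximation at the current iterate, is

$$ p_{n+1} = p_n - \frac{f(p_n)}{f'(p_n)}, \qquad n = 0, 1, 2, \ldots, $$

starting from an initial guess p_0 and requiring f'(p_n) \ne 0 at every step.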

http://thomashawk.com/hello/209/1017/1024/Jackson%20Running.jpg In action

In the previous example
- Newton's method used 8 function evaluations
- the bisection method requires 36 evaluations starting from (1,2)
- false position requires 31 evaluations starting from (1,2)

Initial guess
Are these comparisons fair?
- p0 = 0.48 converges to 0.4510472613 after 5 iterations
- p0 = 0.4 fails to converge after 5000 iterations
- p0 = 0 converges to 697.4995475 after 42 iterations

p0 in Newton's method
Not guaranteed to converge
- p0 = 0.4 fails to converge
May converge to a value very far from p0
- p0 = 0 converges to 697.4995475
Heavily dependent on the choice of p0

Convergence Analysis for Newton’s Method

The simplest plan of attack is to apply the general fixed point iteration convergence theorem

Analysis strategy
To do this, it must be shown that there exists an interval I, containing the root p, for which:
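The condition itself is an image in the original slides; for Newton's method viewed as fixed point iteration with iteration function g(x) = x - f(x)/f'(x), the standard requirement is

$$ |g'(x)| \le k < 1 \quad \text{for all } x \in I, $$

together with g mapping I into itself.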

Newton's Method: Guaranteed to Converge?
Why does Newton's method sometimes fail to converge?
- this theorem guarantees that δ exists
- but it may be very small

http://img2.timeinc.net/people/i/2007/startracks/071008/brad_pitt300.jpg Oh no! After these annoying analyses, Newton's method is still not guaranteed to converge!?

Don't worry
Actually, there is an intuitive remedy: combine Newton's method with the bisection method
- try Newton's method first
- if an approximation falls outside the current interval, apply the bisection method to obtain a better guess
(Can you write an algorithm for this method?)
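In response to the slide's question, here is a minimal Python sketch of one such Newton-bisection hybrid. The safeguarding strategy (fall back to the bracket midpoint whenever the Newton step fails or leaves the current interval) is one reasonable interpretation of the slide, not necessarily the lecturer's exact algorithm; the tolerance and iteration cap are illustrative.

def safeguarded_newton(f, df, a, b, tol=1e-10, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    p = 0.5 * (a + b)                       # initial guess: midpoint of the bracket
    for _ in range(max_iter):
        fp, dfp = f(p), df(p)
        # Newton step, unless the derivative is (numerically) zero
        p_new = p - fp / dfp if dfp != 0 else None
        # Fall back to bisection if the Newton step fails or leaves [a, b]
        if p_new is None or not (a <= p_new <= b):
            p_new = 0.5 * (a + b)
        f_new = f(p_new)
        # Shrink the bracket so it still contains a sign change
        if fa * f_new <= 0:
            b, fb = p_new, f_new
        else:
            a, fa = p_new, f_new
        if abs(p_new - p) < tol or f_new == 0:
            return p_new
        p = p_new
    return p

# Example: the root of x^3 + x^2 - 3x - 3 in (1, 2)
print(safeguarded_newton(lambda x: x**3 + x**2 - 3*x - 3,
                         lambda x: 3*x**2 + 2*x - 3, 1.0, 2.0))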

Newton's Method: Convergence Analysis
At least quadratic
- g'(p) = 0, since f(p) = 0
Stopping condition
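The algebra behind g'(p) = 0 is on an image slide; the standard derivation, viewing Newton's method as fixed point iteration, is

$$ g(x) = x - \frac{f(x)}{f'(x)} \quad\Longrightarrow\quad g'(x) = \frac{f(x)\, f''(x)}{[f'(x)]^2}, $$

so f(p) = 0 and f'(p) \ne 0 give g'(p) = 0, and the order-of-convergence result then yields (at least) quadratic convergence near a simple root.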

Recall that http://www.dianadepasquale.com/ThinkingMonkey.jpg

Is Newton’s method always faster?

http://thomashawk.com/hello/209/1017/1024/Jackson%20Running.jpg In action

2.4 Newton’s Method

2.5 Secant Method

Secant method
Newton's method
- 2 function evaluations per iteration
- requires the derivative
The secant method is a variation on either false position or Newton's method
- only 1 new function evaluation per iteration
- does not require the derivative
Let's see the figure first

Secant method
The secant method is a variation on either false position or Newton's method
- only 1 new function evaluation per iteration
- does not require the derivative
- does not maintain an interval
- p_{n+1} is calculated from p_n and p_{n-1}
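The update formula is an image in the original slides; the standard secant iteration, obtained from Newton's method by replacing f'(p_n) with the difference quotient through the last two iterates, is

$$ p_{n+1} = p_n - f(p_n)\,\frac{p_n - p_{n-1}}{f(p_n) - f(p_{n-1})}, \qquad n = 1, 2, 3, \ldots, $$

starting from two initial approximations p_0 and p_1.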

2.5 Secant Method