Roots of Equations Chapter 3

Roots of Equations
Also called “zeroes” of the equation
– A value x such that f(x) = 0
Extremely important in applications
– Can represent optimal shapes for structures, equilibrium points for the economy, etc.
Polynomials up to degree 4 can be solved “exactly”
– But we’ve already seen the care you need to exercise with even a quadratic equation!

Solution Methods
Two categories:
Iterative (“open”) methods
– Fixed-point iteration
– Newton’s method
– Secant method
Bracketing methods
– Bisection
– False position
We’ll do a hybrid of bisection and false position for Program 4

Fixed-point Methods
Rewrite f(x) = 0 as x = g(x)
Choose a starting value, x_0
Calculate the sequence x_{i+1} = g(x_i)
Maybe it will converge, maybe it won’t :-)
– We’ll investigate convergence criteria

Example
Consider f(x) = x^2 – 5x + 4
– The roots are 4 and 1
Rewrite as x = (x^2 + 4) / 5
Try initial guesses of 2, then 5
– One converges to 1, the other diverges!
– See iterate.cpp (a sketch of the same idea appears below)
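The course’s iterate.cpp isn’t reproduced here. The following is a minimal sketch of fixed-point iteration for this example; the tolerance and iteration cap are choices made for this sketch, not taken from the original program:

    #include <cmath>
    #include <cstdio>

    // g(x) for f(x) = x^2 - 5x + 4, rewritten as x = (x^2 + 4) / 5
    double g(double x) { return (x * x + 4.0) / 5.0; }

    int main() {
        const double guesses[] = {2.0, 5.0};      // the two initial guesses from the slide
        for (double x0 : guesses) {
            double x = x0;
            for (int i = 0; i < 50; ++i) {        // iteration cap (an assumption)
                double next = g(x);
                if (std::fabs(next - x) < 1e-10) { x = next; break; }  // |x_{i+1} - x_i| is tiny
                x = next;
            }
            std::printf("starting at %g: x = %g\n", x0, x);
        }
        return 0;
    }

Starting at 2 this settles on the root 1; starting at 5 the iterates grow without bound, matching the slide.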

Convergence Criterion
If the x_i converge, then their differences diminish
– In other words, |x_{i+1} – x_i| decreases
By the Mean Value Theorem, for some ξ between a and b:

    g(b) – g(a) = g'(ξ)(b – a)

Convergence Criterion (continued)
Let a = x_{i-1} and b = x_i in the MVT
Remember that x_{i+1} = g(x_i), so the left side becomes x_{i+1} – x_i:

    x_{i+1} – x_i = g'(ξ)(x_i – x_{i-1})

Convergence Criterion (continued)
Suppose that the derivative of g(x) is bounded in the region of interest, say |g'(x)| <= M
Then each difference is at most M times the one before it:

    |x_{i+1} – x_i| <= M |x_i – x_{i-1}| <= M^2 |x_{i-1} – x_{i-2}| <= … <= M^i |x_1 – x_0|

If M < 1, the differences shrink geometrically to 0, so |g'(x)| < 1 guarantees convergence

Newton’s Method
An iterative method with a quadratic order of convergence (g'(r) = 0)
Uses g(x) = x – f(x)/f'(x)
Two derivations:
– Geometric
– Taylor series
See newton.cpp (a sketch appears below)
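newton.cpp isn’t reproduced here either; this is a minimal sketch of the iteration. The example function, tolerance, and iteration cap are assumptions made for illustration:

    #include <cmath>
    #include <cstdio>

    // Example function and its derivative (the quadratic from the earlier slide)
    double f(double x)      { return x * x - 5.0 * x + 4.0; }
    double fprime(double x) { return 2.0 * x - 5.0; }

    int main() {
        double x = 2.0;                           // initial guess (arbitrary)
        for (int i = 0; i < 50; ++i) {
            double dfx = fprime(x);
            if (dfx == 0.0) {                     // the "flat tangent" failure mode
                std::printf("zero derivative at x = %g\n", x);
                return 1;
            }
            double next = x - f(x) / dfx;         // x_{i+1} = x_i - f(x_i)/f'(x_i)
            if (std::fabs(next - x) < 1e-12) { x = next; break; }
            x = next;
        }
        std::printf("root ~ %g\n", x);
        return 0;
    }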

Newton’s Method: Geometric Approach
Given a guess x_0, x_1 is obtained by finding where the tangent line at (x_0, f(x_0)) intersects the x-axis
The tangent line has slope f'(x_0), so set y_1 to 0 and solve the following for x_1:

    f'(x_0) = (y_1 – f(x_0)) / (x_1 – x_0)  =>  x_1 = x_0 – f(x_0) / f'(x_0)

Newton’s Method: Taylor Series Approach
Expand f(x) about x_i, evaluate at x_{i+1}, and drop the terms after the second:

    f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} – x_i)

We want f at the iterates to approach zero, so substitute 0 for f(x_{i+1}) on the left and solve:

    x_{i+1} = x_i – f(x_i) / f'(x_i)

Newton’s Method: Problems
The obvious problem is the divisor f'(x)
– If it’s zero, bad news!
– Happens when you have a double root (like the vertex of a parabola sitting on the x-axis, y = x^2)
  Because the first derivative is zero there (a horizontal tangent)
The closer the derivative gets to zero, the worse Newton’s Method behaves
– Flat tangents send you all over the place
– And it can spin forever if there’s no real root (like x = 0)

Order of Convergence
The smaller the first derivative, the faster the iteration converges
– If the first derivative is zero at the root, it converges an order faster
– We will show this by looking at the Taylor series
Definition:
– The order of convergence of an iterative method is the order of the lowest non-zero derivative of g at the root
– Simple iteration, as we just saw, is linear
  Because the first derivative is not necessarily 0
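To preview the Taylor-series argument (standard material, filling in for a slide equation): expand g about the root r, where g(r) = r, and let e_i = x_i – r be the error:

    e_{i+1} = x_{i+1} – r = g(x_i) – g(r)
            = g'(r) e_i + (1/2) g''(r) e_i^2 + …

If g'(r) ≠ 0, then e_{i+1} ≈ g'(r) e_i (linear convergence); if g'(r) = 0, then e_{i+1} ≈ (1/2) g''(r) e_i^2 (quadratic).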

Newton’s Method: Order of Convergence
Newton’s method is quadratic because g'(r) = 0 (remember that f(r) = 0):

    g'(x) = 1 – [f'(x)^2 – f(x) f''(x)] / f'(x)^2 = f(x) f''(x) / f'(x)^2
    g'(r) = f(r) f''(r) / f'(r)^2 = 0

Complex Roots
Can just use Newton’s method with complex numbers
– Must start with a non-zero imaginary part!
– In C++ we use the std::complex class template
– See cnewton.cpp (a sketch appears below)
Can also solve an equivalent system of real equations
– But we’ll skip that (it’s mathy)
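cnewton.cpp isn’t reproduced here; this is a minimal sketch using std::complex. The polynomial z^2 + 1 (which has no real roots) and the tolerance are assumptions for illustration:

    #include <complex>
    #include <cstdio>

    using cd = std::complex<double>;

    cd f(cd z)      { return z * z + 1.0; }   // roots are +i and -i
    cd fprime(cd z) { return 2.0 * z; }

    int main() {
        cd z(1.0, 0.5);                        // the non-zero imaginary part is essential
        for (int i = 0; i < 50; ++i) {
            cd step = f(z) / fprime(z);
            z -= step;                         // z_{i+1} = z_i - f(z_i)/f'(z_i)
            if (std::abs(step) < 1e-12) break;
        }
        std::printf("root ~ %g + %gi\n", z.real(), z.imag());
        return 0;
    }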

Secant Method
Like Newton’s Method, but uses a difference approximation to f'(x)
– A linear interpolation technique: the line through (a, f(a)) and (b, f(b))
– Order of convergence ≈ (1 + √5)/2, the golden ratio (Fibonacci!)
– Only requires 1 function evaluation per iteration
  Newton’s requires two (f and f')
  Secant avoids evaluating a costly derivative
See secant.cpp (the formula on the next slide comes first; a sketch appears after it)

Secant Method
Replace f'(x_i) in Newton’s formula with the slope of the secant through the two most recent iterates:

    x_{i+1} = x_i – f(x_i) (x_i – x_{i-1}) / (f(x_i) – f(x_{i-1}))
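secant.cpp isn’t reproduced here; this is a minimal sketch of the formula above. The starting guesses, tolerance, and example function are assumptions:

    #include <cmath>
    #include <cstdio>

    double f(double x) { return x * x - 5.0 * x + 4.0; }  // the chapter's example quadratic

    int main() {
        double x0 = 0.0, x1 = 2.0;              // two initial guesses are required
        double f0 = f(x0), f1 = f(x1);
        for (int i = 0; i < 50; ++i) {
            if (f1 == f0) {                     // the divide-by-zero hazard
                std::printf("flat secant at x = %g\n", x1);
                return 1;
            }
            double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
            x0 = x1; f0 = f1;                   // keep only the two newest points
            x1 = x2; f1 = f(x1);                // one new f evaluation per iteration
            if (std::fabs(x1 - x0) < 1e-12) break;
        }
        std::printf("root ~ %g\n", x1);
        return 0;
    }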

Secant Method: Problems
Requires 2 initial guesses
Might still divide by 0 (when f(x_i) = f(x_{i-1}))
Two places where cancellation can occur: x_i – x_{i-1} and f(x_i) – f(x_{i-1})

Bracketing Methods
Begin with the endpoints of an interval [a, b] that “brackets” a root
– The signs of f(a) and f(b) differ
– A root is guaranteed to be found (for continuous f, by the Intermediate Value Theorem)
Bisection (bisect.cpp; a sketch appears after this list)
– Halves the interval, like binary search
– Sure-fire, but slow (linear)
False Position (false.cpp)
– Like secant, but maintains the bracketing behavior
– Can perform poorly (one endpoint can get stuck)
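bisect.cpp isn’t reproduced here; this is a minimal bisection sketch. The bracket and the stopping test are assumptions:

    #include <cmath>
    #include <cstdio>

    double f(double x) { return x * x - 5.0 * x + 4.0; }  // f(0) > 0, f(2) < 0

    int main() {
        double a = 0.0, b = 2.0;                // assumed bracket around the root at 1
        double fa = f(a);
        while (b - a > 1e-12 * (std::fabs(a) + std::fabs(b))) {
            double c = a + (b - a) / 2.0;       // midpoint, written to stay inside [a, b]
            double fc = f(c);
            if (fc == 0.0) { a = c; b = c; break; }
            if ((fa < 0) == (fc < 0)) {         // f(a) and f(c) share a sign: root is in [c, b]
                a = c; fa = fc;
            } else {                            // otherwise the root is in [a, c]
                b = c;
            }
        }
        std::printf("root ~ %g\n", a + (b - a) / 2.0);
        return 0;
    }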

Hybrid Methods
Combine the safety of bracketing methods with the speed of iterative methods
Program 4
– Will use false position
  Maintains bracketing
– Will also use a “secondary secant”
  To reduce the interval at both ends
– Reverts to bisection if the secants don’t “sufficiently reduce” the interval

Secondary Secants
Connect (a, f(a)) to (c, f(c)); call the x-intercept of that secant d
Replace [a, b] by [c, d]

Secondary Secants (continued)
Or, replace [a, b] by [d, c]
– Governed by whichever ordering maintains a sign change
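For concreteness, the x-intercept d of the line through (a, f(a)) and (c, f(c)) has the same form as any secant step. A minimal sketch; the function name and signature are assumptions, not the assignment’s code:

    // x-intercept d of the line through (a, fa) and (c, fc)
    double secondary_secant(double a, double c, double fa, double fc) {
        return c - fc * (c - a) / (fc - fa);   // same shape as the primary secant step
    }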

Program 4
Using false position, compute c
– If c = b, bisect
– (After each bisection, return to attempting false position)
Compute d (which depends on sign(f(c)))
– If d = b, bisect
– If |d – c| > |b – a| / 2, bisect
Exit when f evaluates to 0 at c or d, or when bisection narrows the interval to 1 ulp
– Check for f(c) == 0 or f(d) == 0 immediately
– Never evaluate f at the same x-value twice
– Should need no more than 3 function evaluations per iteration

Optimizations
- After computing c, insert the following code:

    if (c <= a)
        c = a + eps * abs(a);   // 1-2 ulps past a
    else if (c >= b)
        c = b - eps * abs(b);   // 1-2 ulps before b

- Then test for c = b again as before… (It makes a tremendous difference!)
- Always use the smallest current interval whenever you fall back to bisection ([a, c], [c, b], [c, d], or [d, c])
  - Not the original [a, b]
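A sketch of how the clamped false-position step might look in context. The function name and the use of machine epsilon for eps are assumptions; this is not the assignment’s solution:

    #include <cmath>
    #include <limits>

    // One clamped false-position step on a bracket [a, b], with fa = f(a), fb = f(b)
    double false_position_step(double a, double b, double fa, double fb) {
        const double eps = 2.0 * std::numeric_limits<double>::epsilon();
        double c = b - fb * (b - a) / (fb - fa);   // x-intercept of the endpoint secant
        if (c <= a)
            c = a + eps * std::fabs(a);            // 1-2 ulps past a
        else if (c >= b)
            c = b - eps * std::fabs(b);            // 1-2 ulps before b
        return c;                                   // caller bisects if c still equals b
    }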

Root Finding in Matlab: the fzero function
fzero(f, x0)
– Searches near x0 for a sign change
fzero(f, [a b])
– [a b] must contain a sign change
fzero uses a hybrid method similar to our Program 4
Trace options:
– options = optimset('display','iter');
– [x,fx] = fzero('x^10-1',[ ],options)