Sec:5.2 The Bisection Method.

Sec:5.2 The Bisection Method

Sec:5.2 The Bisection Method
The root-finding problem is the process of finding a root, or solution, of an equation of the form f(x) = 0 for a given function f. A root of this equation is also called a zero of the function f. Graphically, a root (or zero) of a function is an x-intercept of its graph.

We study three numerical methods for root-finding:
Sec(2.1): The Bisection Method
Sec(2.2): Fixed-point iteration
Sec(2.3): The Newton-Raphson Method

Sec:5.2 The Bisection Method
This technique is based on the Intermediate Value Theorem: suppose f is a continuous function defined on the interval [a, b], with f(a) and f(b) of opposite sign. Then the Intermediate Value Theorem implies that a number p exists in (a, b) with f(p) = 0.

Example: Show that
$f(x) = \dfrac{10x^6 - 149x^5 + 10x - 149}{10(x^4 + 1)}$
has a root in [12, 16].

Sol: f(12) = −34.8 and f(16) = 17.6 are of opposite sign, and f is continuous on [12, 16], so the Intermediate Value Theorem guarantees a root in (12, 16).
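A quick numerical check of the bracketing condition (a minimal MATLAB sketch; the anonymous function simply restates the f(x) above):

% Verify that f changes sign on [12, 16], so a root is bracketed
f = @(x) (10*x.^6 - 149*x.^5 + 10*x - 149) ./ (10*(x.^4 + 1));
a = 12;  b = 16;
fprintf('f(a) = %.1f, f(b) = %.1f\n', f(a), f(b));   % prints -34.8 and 17.6
if f(a)*f(b) < 0
    disp('Sign change: a root lies in (a, b) by the Intermediate Value Theorem.')
end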

Sec:5.2 The Bisection Method
Example: Use the Bisection method to find the root of the function
$f(x) = \dfrac{10x^6 - 149x^5 + 10x - 149}{10(x^4 + 1)}$
in [12, 16]. The true root is $x_r^* = 14.9$.

Iter 1: f changes sign on [12, 16] (f(12) = −34.8, f(16) = 17.6), so p_1 = (12 + 16)/2 = 14.
Iter 2: f(14) = −12.6, so the sign change is now on [14, 16] and p_2 = 15.
Iter 3: f(15) = 1.5, so the sign change is on [14, 15] and p_3 = 14.5.
Iter 4: f(14.5) = −5.8, so the sign change is on [14.5, 15] and p_4 = 14.75, and so on.

n     p_n
1     14.0000000000
2     15.0000000000
3     14.5000000000
4     14.7500000000
5     14.8750000000
6     14.9375000000
7     14.9062500000
8     14.8906250000
9     14.8984375000
10    14.9023437500
11    14.9003906250
12    14.8994140625
13    14.8999023438
14    14.9001464844
15    14.9000244141
16    14.8999633789

Sec:5.2 The Bisection Method
Textbook notation: start with a = a_1 and b = b_1. At the n-th iteration the endpoints of the bracketing interval are [a_n, b_n], its midpoint is p_n, and the length of the interval is
$L_n = b_n - a_n = \dfrac{b - a}{2^{n-1}}$.

For the example on [12, 16]:
Iter 1: [a_1, b_1] = [12, 16], sign change between 12 and 16, p_1 = 14
Iter 2: [a_2, b_2] = [14, 16], sign change between 14 and 16, p_2 = 15
Iter 3: [a_3, b_3] = [14, 15], sign change between 14 and 15, p_3 = 14.5
Iter 4: [a_4, b_4] = [14.5, 15], sign change between 14.5 and 15, p_4 = 14.75
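A small MATLAB sketch of how quickly these interval lengths shrink (assuming the same starting interval [12, 16]):

% Length of the bracketing interval at iteration n: L_n = (b - a)/2^(n-1)
a = 12;  b = 16;
n = 1:5;
L = (b - a) ./ 2.^(n - 1);       % 4, 2, 1, 0.5, 0.25
fprintf('L_%d = %.4f\n', [n; L])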

Sec:5.2 The Bisection Method
Error Estimates for Bisection. The true root p* lies inside every bracketing interval, and p_n is the midpoint of the n-th interval, so the error is at most half the interval length.

At iter 1: error = |p_1 − p*| ≤ (1/2)(length of the interval), i.e. $|p_1 - p^*| \le \dfrac{b - a}{2}$.
At iter 2: |p_2 − p*| ≤ (1/2)(length of the interval), i.e. $|p_2 - p^*| \le \dfrac{b - a}{2^2}$.
At the n-th iteration: the absolute error satisfies $|p_n - p^*| \le \dfrac{b - a}{2^n}$.

Theorem 2.1: Suppose that f ∈ C[a, b] and f(a) · f(b) < 0. The Bisection method generates a sequence {p_n} approximating a zero p of f with
$|p_n - p^*| \le \dfrac{b - a}{2^n}$.
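Written out as a single chain, using the interval length $L_n = (b - a)/2^{n-1}$ from the previous slide:

$$|p_n - p^*| \;\le\; \tfrac{1}{2}\,(b_n - a_n) \;=\; \tfrac{1}{2}\cdot\frac{b - a}{2^{\,n-1}} \;=\; \frac{b - a}{2^{\,n}}.$$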

Sec:5.2 The Bisection Method
Remark. It is important to realize that Theorem 2.1 ($|p_n - p^*| \le \dfrac{b - a}{2^n}$ whenever f ∈ C[a, b] and f(a) · f(b) < 0) gives only a bound for the approximation error, and this bound may be quite conservative. For the root of $f(x) = \dfrac{10x^6 - 149x^5 + 10x - 149}{10(x^4 + 1)}$ in [12, 16], for example,
$|p_7 - p^*| \le \dfrac{16 - 12}{2^7} = 3.125\times 10^{-2}$, while the actual error is $|p_7 - p^*| = 6.25\times 10^{-3}$.

n    p_n        |p_n − p*|
1    14.0000    9.0000e-01
2    15.0000    1.0000e-01
3    14.5000    4.0000e-01
4    14.7500    1.5000e-01
5    14.8750    2.5000e-02
6    14.9375    3.7500e-02
7    14.9063    6.2500e-03
8    14.8906    9.3750e-03
9    14.8984    1.5625e-03

Sec:5.2 The Bisection Method
Example: Determine the number of iterations necessary to solve f(x) = 0 with accuracy 10^{−2}, using a_1 = 12 and b_1 = 16.

By Theorem 2.1 it is enough to make the error bound smaller than the desired accuracy:
$|p_n - p^*| \le \dfrac{16 - 12}{2^n} < 10^{-2}$.
Solving for n: $2^n > \dfrac{4}{10^{-2}} = 400$, so $n > \log_2 400 \approx 8.64$, and we take n = 9.

Remark. It is important to keep in mind that this error analysis gives only a bound for the number of iterations; in many cases the bound is much larger than the actual number required. Indeed, the table shows |p_7 − p*| = 6.25e-03 < 10^{−2}, so 7 iterations already suffice.

(continuing the table from the previous slide)
n     p_n        |p_n − p*|
10    14.9023    2.3437e-03
11    14.9004    3.9062e-04
12    14.8994    5.8594e-04
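The same count can be computed directly (a minimal MATLAB sketch; the script on the last slide derives its max_iter from the same bound, using round instead of ceil):

% Smallest n with (b - a)/2^n < tol, i.e. n > log2((b - a)/tol)
a = 12;  b = 16;  tol = 1e-2;
n = ceil(log2((b - a)/tol));     % log2(400) ≈ 8.64, so n = 9
fprintf('Required iterations: n = %d\n', n)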

Sec:5.2 The Bisection Method
Rates of Convergence. Suppose {α_n} is a sequence converging to α and {β_n} is a sequence converging to 0. If a positive constant K exists with
|α_n − α| ≤ K β_n for large n,
then we say that {α_n} converges to α with rate of convergence O(β_n), and we write α_n = α + O(β_n).

For bisection, Theorem 2.1 gives
$|p_n - p^*| \le (b - a)\cdot\dfrac{1}{2^n} \le K\cdot\dfrac{1}{2^n}$ with K = b − a,
so the sequence {p_n} converges to p with rate of convergence $O\!\left(\dfrac{1}{2^n}\right)$.

Sec:5.2 The Bisection Method
MATLAB implementation:

%% Bisection method for f(x) = (10x^6 - 149x^5 + 10x - 149)/(10(x^4 + 1)) on [12, 16]
clear; clc
a = 12;  b = 16;  es = 1e-3;                      % bracketing interval and desired accuracy
f = @(x) (x.^5.*(10*x - 149) + 10*x - 149) ./ (10*(x.^4 + 1));
max_iter = round((log(b-a) - log(es))/log(2));    % iterations needed so that (b-a)/2^n < es
fa = f(a);  fb = f(b);  iter = 0;
if fa*fb > 0, return, end                         % no sign change: root is not bracketed
for k = 1:max_iter
    iter = iter + 1;
    p = (a + b)/2;  fp = f(p);  x(k) = p;         % midpoint of the current interval
    if fp == 0
        a = p;  b = p;                            % landed exactly on the root
    elseif sign(fb)*sign(fp) < 0
        a = p;  fa = fp;                          % sign change lies in [p, b]
    else
        b = p;  fb = fp;                          % sign change lies in [a, p]
    end
    fprintf('%d %14.4f %14.4e \n', iter, p, abs(p - 14.9));   % n, p_n, |p_n - p*|
end

Remark: using sign(fb)*sign(fp) < 0 instead of fb*fp < 0 avoids the possibility of overflow or underflow in the multiplication.
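Running this script should reproduce the first entries of the p_n column from the earlier table (14.0000, 15.0000, 14.5000, 14.7500, ...) together with the absolute errors |p_n − 14.9|; with es = 1e-3 it stops after max_iter = 12 iterations.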