Algorithms and Convergence


Sec 1.3: Algorithms and Convergence

An algorithm is a procedure that describes, in an unambiguous manner, a finite sequence of steps to be performed in a specified order. The object of the algorithm is to implement a procedure to solve a problem or to approximate a solution to the problem. We use pseudocode to describe algorithms; the pseudocode specifies the form of the input to be supplied and the form of the desired output.
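As a small illustration (my addition, not part of the original slides), the MATLAB function below mimics that pseudocode style: the comments state the required input and the desired output, and the body lists the steps in order. The function name seriesSum is illustrative only.

    function s = seriesSum(x)
    % INPUT:  a vector x(1), ..., x(n)
    % OUTPUT: the sum s = x(1) + x(2) + ... + x(n)
    n = length(x);       % Step 1: determine how many terms there are
    s = 0;               % Step 2: initialize the running sum
    for i = 1:n          % Step 3: add the terms one at a time
        s = s + x(i);
    end
    end                  % Step 4: s is returned as the output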

Looping techniques

Counter-controlled loop:

    x = 1:5;
    vsum = 0;
    for i = 1:5
        vsum = vsum + x(i);
    end
    vsum

Conditional execution (leave the loop early once the running sum exceeds 5):

    x = 1:5;
    vsum = 0;
    for i = 1:5
        vsum = vsum + x(i);
        if vsum > 5; break; end
    end
    vsum

Condition-controlled loop:

    x = 1:5;
    vsum = 0;
    i = 1;
    while i < 3
        vsum = vsum + x(i);
        i = i + 1;
    end
    vsum

Indentation makes the nesting of the loops and conditionals easy to follow.

The Taylor polynomial of $\ln x$ about $x = 1$ is

$P_N(x) = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \cdots + (-1)^{N+1}\frac{(x-1)^N}{N}$.

Calculate $P_9(1.5)$. Two MATLAB scripts that do this are shown below; the first builds each term from the previous one (reusing the sign and the power of $(x-1)$), while the second computes every term from scratch.

    % Version 1: update the sign and the power incrementally
    clear; clc
    n = 9; x = 1.5;
    s  = +1;          % sign of the current term
    pw = x - 1;       % current power of (x-1)
    pn = s*pw;        % running sum, initialized with the first term
    for i = 2:n
        s  = -s;
        pw = pw*(x-1);
        term = s*pw/i;
        pn = pn + term;
    end
    pn

    % Version 2: compute each term directly
    clear; clc
    n = 9; x = 1.5;
    pn = 0;
    for i = 1:n
        term = (-1)^(i+1)*(x-1)^i/i;
        pn = pn + term;
    end
    pn

Construct an algorithm to determine the minimal value of $N$ required so that

$|\ln 1.5 - P_N(1.5)| < 10^{-5}$,

where

$S_N = P_N(1.5) = \frac{0.5}{1} - \frac{0.5^2}{2} + \frac{0.5^3}{3} - \cdots + (-1)^{N+1}\frac{0.5^N}{N}$.

From calculus (the alternating series estimate) we know that $|S - S_N| \le |\mathrm{term}_{N+1}|$, so the loop can stop as soon as a term with magnitude below $10^{-5}$ is encountered:

    clear; clc
    n = 13; x = 1.5;
    pn = 0;
    for i = 1:n
        term = (-1)^(i+1)*(x-1)^i/i;
        pn = pn + term;
        if abs(term) < 1e-5; N = i; break; end
    end
    pn
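As a quick check (my addition, using MATLAB's built-in natural logarithm), the partial sum can be compared with $\ln 1.5$ directly after the loop:

    err = abs(log(1.5) - pn)   % should be below the 1e-5 tolerance
    N                          % number of terms actually used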

An algorithm is stable if small changes in the initial data produce correspondingly small changes in the final results; otherwise it is unstable. Some algorithms are stable only for certain choices of initial data and are called conditionally stable.

Example (how small is the change?): consider the quadratic

$\pi x^2 + 100x - 22 = 0$ with roots $x_1, x_2$,

and the perturbed quadratic obtained by rounding $\pi$ to 3.1415,

$3.1415\,x^2 + 100x - 22 = 0$ with roots $\tilde{x}_1, \tilde{x}_2$.

The change in the data is $|\pi - 3.1415|$; compare it with the changes in the results, $|x_1 - \tilde{x}_1|$ and $|x_2 - \tilde{x}_2|$. Here small changes in the initial data produce small changes in the computed roots.
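A small sketch of that comparison (my addition, using MATLAB's built-in roots function on the coefficient vectors):

    r_exact  = roots([pi     100 -22]);   % roots of pi*x^2 + 100x - 22 = 0
    r_approx = roots([3.1415 100 -22]);   % roots with pi rounded to 3.1415
    abs(pi - 3.1415)                       % change in the data
    abs(sort(r_exact) - sort(r_approx))    % corresponding changes in the roots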

Example (rates of convergence). Consider the two sequences

$\alpha_n = \frac{n+1}{n^2}$ and $\gamma_n = \frac{n+3}{n^3}$,

both of which converge to $\alpha = 0$ (that is, $\{\alpha_n - \alpha\} \to 0$):

    n    alpha_n = (n+1)/n^2    gamma_n = (n+3)/n^3
    1    2.00000                4.00000
    2    0.75000                0.62500
    3    0.44444                0.22222
    4    0.31250                0.10938
    5    0.24000                0.064000
    6    0.19444                0.041667
    7    0.16327                0.029155

Which one converges faster?

If a positive constant $K$ exists with $|\alpha_n - \alpha| \le K\,(1/n)^p$ for large $n$, then we say that $\{\alpha_n\}$ converges to $\alpha$ with rate (order) of convergence $O\!\big((1/n)^p\big)$ (read "big oh of $(1/n)^p$"), and we write $\alpha_n = \alpha + O\!\big((1/n)^p\big)$.

Remark: compare this with the comparison test and the limit comparison test from calculus.

Both sequences converge to 0, but at different rates:

$\alpha_n = \frac{n+1}{n^2} \le 2\left(\frac{1}{n}\right)^1$, so $p = 1$ and $\alpha_n = 0 + O(1/n)$;

$\gamma_n = \frac{n+3}{n^3} \le 4\left(\frac{1}{n}\right)^2$, so $p = 2$ and $\gamma_n = 0 + O(1/n^2)$.

Hence $\{\gamma_n\}$ converges to 0 faster than $\{\alpha_n\}$, which matches the table above.
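A short script (my addition) that reproduces the table and the two error bounds:

    n = (1:7)';
    alpha_n = (n+1)./n.^2;      % converges like O(1/n)
    gamma_n = (n+3)./n.^3;      % converges like O(1/n^2)
    bound_alpha = 2./n;         % K = 2, p = 1
    bound_gamma = 4./n.^2;      % K = 4, p = 2
    [n alpha_n gamma_n bound_alpha bound_gamma]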

Rates of convergence (general definition). Suppose $\{\beta_n\}$ is a sequence known to converge to zero, and $\{\alpha_n\}$ converges to a number $\alpha$. If a positive constant $K$ exists with

$|\alpha_n - \alpha| \le K|\beta_n|$ for large $n$,

then we say that $\{\alpha_n\}$ converges to $\alpha$ with rate (order) of convergence $O(\beta_n)$. (This expression is read "big oh of $\beta_n$".)

The most common choice is $\beta_n = (1/n)^p$, in which case the condition reads $|\alpha_n - \alpha| \le K(1/n)^p$ for large $n$. We are generally interested in the largest value of $p$ for which $\alpha_n = \alpha + O\!\big((1/n)^p\big)$.

The root-finding problem

The root-finding problem consists of finding a root, or solution, of an equation of the form $f(x) = 0$ for a given function $f$. A root of this equation is also called a zero of the function $f$. Graphically, a root (or zero) of a function is an x-intercept of its graph.

Three numerical methods for root-finding:
Sec 2.1: The Bisection Method
Sec 2.2: Fixed-Point Iteration
Sec 2.3: The Newton-Raphson Method
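As an aside (my addition, assuming MATLAB's built-in fzero is available), a root can be checked numerically before studying the individual methods; the function below is the one used in the Newton-Raphson example that follows.

    f = @(x) exp(-x) - x;   % the example function used in the next slides
    r = fzero(f, 0.5)       % built-in root finder, started near the root
    f(r)                    % residual at the computed root (numerically zero)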

Newton's Method

The Newton-Raphson method is a method for finding successively better approximations to the roots (or zeros) of a function.

Algorithm: to approximate a root of $f(x) = 0$, given an initial guess $x_1$, iterate

$x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}$.

Example. Use the Newton-Raphson method to estimate the root of $f(x) = e^{-x} - x$, employing an initial guess of $x_1 = 0$.

Here $f'(x) = -e^{-x} - 1$, so $f(0) = 1$, $f'(0) = -2$, and

$x_2 = x_1 - \dfrac{f(x_1)}{f'(x_1)} = 0 - \dfrac{f(0)}{f'(0)} = 0.5$,

$x_3 = 0.5 - \dfrac{f(0.5)}{f'(0.5)} = 0.566311003197218$, and so on:

    n    x_n
    1    0.000000000000000
    2    0.500000000000000
    3    0.566311003197218
    4    0.567143165034862
    5    0.567143290409781

The true value of the root is 0.56714329..., so the method converges rapidly to the true root.

The same computation in MATLAB:

    clear
    f  = @(t) exp(-t) - t;    % f(x)  = e^(-x) - x
    df = @(t) -exp(-t) - 1;   % f'(x) = -e^(-x) - 1
    x(1) = 0;                 % initial guess x_1 = 0
    for i = 1:4
        x(i+1) = x(i) - f(x(i))/df(x(i));   % Newton-Raphson update
    end
    x'                        % column of iterates x_1, ..., x_5

This reproduces the table of iterates above.

Example. Approximate a root of $f(x) = \cos x - x$ using Newton's method, employing an initial guess of $x_1 = \pi/4$. Here $f'(x) = -\sin x - 1$, and only the function handles and the initial guess in the script above need to change:

    clear
    f  = @(t) cos(t) - t;     % f(x)  = cos x - x
    df = @(t) -sin(t) - 1;    % f'(x) = -sin x - 1
    x(1) = pi/4;              % initial guess x_1 = pi/4
    for i = 1:3
        x(i+1) = x(i) - f(x(i))/df(x(i));   % Newton-Raphson update
    end
    x'

    n    x_n (Newton)
    1    0.785398163397448
    2    0.739536133515238
    3    0.739085178106010
    4    0.739085133215161
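As a closing sketch (my addition, not from the slides), the fixed iteration count can be replaced by a stopping criterion, which ties the method back to the convergence ideas of Sec 1.3; the tolerance tol and the iteration cap maxit are illustrative choices.

    % Newton-Raphson with a stopping criterion instead of a fixed number of steps
    f   = @(t) cos(t) - t;
    df  = @(t) -sin(t) - 1;
    x   = pi/4;               % initial guess
    tol = 1e-10;              % stop when successive iterates agree to this tolerance
    maxit = 50;               % safety cap on the number of iterations
    for i = 1:maxit
        xnew = x - f(x)/df(x);        % Newton-Raphson update
        if abs(xnew - x) < tol        % convergence check on the step size
            x = xnew;
            break
        end
        x = xnew;
    end
    x                          % approximate root of cos(x) = x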