Sec 1.3: Algorithms and Convergence
An algorithm is a procedure that describes, in an unambiguous manner, a finite sequence of steps to be performed in a specified order. The object of the algorithm is to implement a procedure that solves a problem or approximates a solution to it. We use pseudocode to describe algorithms; the pseudocode specifies the form of the input to be supplied and the form of the desired output.
Looping techniques

Counter-controlled loop:

    x = 1:5; vsum = 0;
    for i = 1:5
        vsum = vsum + x(i);
    end
    vsum

Conditional execution inside a loop (exit early with break):

    x = 1:5; vsum = 0;
    for i = 1:5
        vsum = vsum + x(i);
        if vsum > 5; break; end
    end
    vsum

Condition-controlled loop:

    x = 1:5; vsum = 0; i = 1;
    while i < 3
        vsum = vsum + x(i);
        i = i + 1;
    end
    vsum

Indentation makes the loop structure easy to read.
Consider the partial sum of the Taylor series for ln x about x = 1:

    P_N(x) = (x-1) - (x-1)^2/2 + (x-1)^3/3 - ... + (-1)^(N+1) (x-1)^N / N

Calculate P_9(1.5).

Incremental version (updates the sign and the power (x-1)^i at each step):

    clear; clc
    n = 9; x = 1.5;
    s = +1; pw = x - 1; pn = s*pw;
    for i = 2:n
        s  = -s;
        pw = pw*(x-1);
        term = s*pw/i;
        pn = pn + term;
    end
    pn

Direct version:

    clear; clc
    n = 9; x = 1.5;
    pn = 0;
    for i = 1:n
        term = (-1)^(i+1)*(x-1)^i/i;
        pn = pn + term;
    end
    pn
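The slides implement both variants in MATLAB; as a cross-check, here is a minimal sketch of the same two evaluations in Python (function names `p_direct` and `p_incremental` are my own labels, not from the slides):

```python
import math

# Direct evaluation: P_N(x) = sum_{i=1}^{N} (-1)^(i+1) (x-1)^i / i
def p_direct(n, x):
    return sum((-1) ** (i + 1) * (x - 1) ** i / i for i in range(1, n + 1))

# Incremental evaluation: carry the sign s and the power pw = (x-1)^i
# forward from one term to the next instead of recomputing them.
def p_incremental(n, x):
    s, pw, pn = 1.0, x - 1.0, 0.0
    for i in range(1, n + 1):
        pn += s * pw / i
        s, pw = -s, pw * (x - 1.0)
    return pn

print(p_direct(9, 1.5))        # both evaluations agree
print(p_incremental(9, 1.5))
print(math.log(1.5))           # P_9(1.5) approximates ln 1.5
```

Both routines give the same value of P_9(1.5), which is already close to ln 1.5 = 0.4054651...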
Construct an algorithm to determine the minimal value of N required for

    |ln 1.5 - P_N(1.5)| < 10^-5,

where

    P_N(x) = (x-1) - (x-1)^2/2 + (x-1)^3/3 - ... + (-1)^(N+1) (x-1)^N / N,

so that

    S_N = P_N(1.5) = 0.5 - 0.5^2/2 + 0.5^3/3 - ... + (-1)^(N+1) 0.5^N / N.

From calculus (the alternating series estimate) we know that |S - S_N| <= |term_(N+1)|, so we can stop as soon as the most recently computed term drops below the tolerance:

    clear; clc
    nmax = 100; x = 1.5;
    pn = 0;
    for i = 1:nmax
        term = (-1)^(i+1)*(x-1)^i/i;
        pn = pn + term;
        if abs(term) < 1e-5; N = i; break; end
    end
    N, pn
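The same stopping rule can be sketched in Python (the helper name `min_terms` is mine, not from the slides); it accumulates terms until the last one added is below the tolerance and reports how many terms that took:

```python
import math

def min_terms(x, tol):
    """Sum terms of P_N(x) until |term| < tol; return (N, P_N(x)).

    By the alternating series estimate, |ln x - P_N(x)| <= |term_(N+1)|,
    so the returned partial sum is within tol of ln x.
    """
    pn, i = 0.0, 0
    while True:
        i += 1
        term = (-1) ** (i + 1) * (x - 1) ** i / i
        pn += term
        if abs(term) < tol:
            return i, pn

N, pN = min_terms(1.5, 1e-5)
print(N, pN, abs(math.log(1.5) - pN))
```

For x = 1.5 and tolerance 10^-5 the loop stops at N = 13, matching the value used in the slide's MATLAB code, and the resulting error |ln 1.5 - P_13(1.5)| is indeed below 10^-5.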
An algorithm is stable if small changes in the initial data produce correspondingly small changes in the final results; otherwise it is unstable. Some algorithms are stable only for certain choices of initial data; these are called conditionally stable.

Example (how small is |π - 3.1415|?): let x_1, x_2 be the roots of x^2 + 100x - 22 = 0 computed with π in the data, and let x~_1, x~_2 be the roots computed with π replaced by the rounded value 3.1415. The computation is stable for this problem if the output changes |x_1 - x~_1| and |x_2 - x~_2| are as small as the input change |π - 3.1415|: small changes in the initial data should produce small changes in the results.
Rates of Convergence: Example

Consider the two sequences

    α_n = (n+1)/n^2   and   γ_n = (n+3)/n^3,

both of which converge to α = 0:

    n    α_n        γ_n
    1    2.00000    4.00000
    2    0.75000    0.62500
    3    0.44444    0.22222
    4    0.31250    0.10938
    5    0.24000    0.064000
    6    0.19444    0.041667
    7    0.16327    0.029155

If a positive constant K exists with |α_n - α| <= K (1/n)^p for large n, then we say that {α_n} converges to α with rate (order) of convergence O((1/n)^p) (read "big oh of (1/n)^p"), and we write α_n = α + O((1/n)^p). Which sequence converges faster?

Remark: compare with the Comparison Test and Limit Comparison Test from calculus.
Rates of Convergence: Example (continued)

Both sequences converge to α = 0, and the constants K and exponents p can be read off from simple bounds. Since n + 1 <= 2n for n >= 1,

    α_n = (n+1)/n^2 <= 2 (1/n)^1,   so p = 1.

Since n + 3 <= 4n for n >= 1,

    γ_n = (n+3)/n^3 <= 4 (1/n)^2,   so p = 2.

Hence γ_n converges to 0 with rate O((1/n)^2), faster than α_n, which converges with rate O(1/n). Here, if a positive constant K exists with |α_n - α| <= K (1/n)^p for large n, we write α_n = α + O((1/n)^p).

Remark: compare with the Comparison Test and Limit Comparison Test from calculus.
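The two bounds above can be checked numerically; this short Python sketch verifies α_n <= 2(1/n) and γ_n <= 4(1/n)^2 over the tabulated range (the function names `a` and `g` are my own shorthand for α_n and γ_n):

```python
# Check the bounds a_n = (n+1)/n^2 <= 2*(1/n)      (so p = 1)
#             and  g_n = (n+3)/n^3 <= 4*(1/n)^2    (so p = 2)
def a(n): return (n + 1) / n ** 2
def g(n): return (n + 3) / n ** 3

for n in range(1, 8):
    assert a(n) <= 2 / n          # n + 1 <= 2n for n >= 1
    assert g(n) <= 4 / n ** 2     # n + 3 <= 4n for n >= 1
    print(n, a(n), g(n))
```

The printed columns reproduce the table above and show γ_n shrinking much faster than α_n.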
Rates of Convergence: general definition

Suppose {β_n} is a sequence known to converge to zero, and {α_n} converges to a number α. If a positive constant K exists with

    |α_n - α| <= K |β_n|   for large n,

then we say that {α_n} converges to α with rate (order) of convergence O(β_n). (This expression is read "big oh of β_n".)

The most common comparison sequence is β_n = (1/n)^p, giving |α_n - α| <= K (1/n)^p for large n. We are generally interested in the largest value of p with α_n = α + O((1/n)^p).
Root-Finding Problem: f(x) = 0

The root-finding problem is the process of finding a root, or solution, of an equation of the form f(x) = 0 for a given function f. A root of this equation is also called a zero of the function f. Graphically, a root (or zero) of a function is an x-intercept of its graph.

Three numerical methods for root-finding:
Sec 2.1: The Bisection Method
Sec 2.2: Fixed-Point Iteration
Sec 2.3: The Newton-Raphson Method
Newton's Method

The Newton-Raphson method finds successively better approximations to the roots (or zeros) of a function. To approximate a root of f(x) = 0, given an initial guess x_1, iterate

    x_(n+1) = x_n - f(x_n)/f'(x_n).

Example: use the Newton-Raphson method to estimate the root of f(x) = e^(-x) - x, employing an initial guess of x_1 = 0.

Here f'(x) = -e^(-x) - 1, so f(0) = 1 and f'(0) = -2, giving

    x_2 = x_1 - f(x_1)/f'(x_1) = 0 - 1/(-2) = 0.5,
    x_3 = 0.5 - f(0.5)/f'(0.5) = 0.566311003197218.

    n    x_n
    1    0.000000000000000
    2    0.500000000000000
    3    0.566311003197218
    4    0.567143165034862
    5    0.567143290409781

The true value of the root is 0.56714329..., so the approach rapidly converges to the true root.
Newton's Method in MATLAB

Use the Newton-Raphson method to estimate the root of f(x) = e^(-x) - x, employing an initial guess of x_1 = 0:

    clear
    f  = @(t) exp(-t) - t;
    df = @(t) -exp(-t) - 1;
    x(1) = 0;
    for i = 1:4
        x(i+1) = x(i) - f( x(i) )/df( x(i) );
    end
    x'

The iterates produced are:

    n    x_n
    1    0.000000000000000
    2    0.500000000000000
    3    0.566311003197218
    4    0.567143165034862
    5    0.567143290409781
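The slides use MATLAB; as an independent check of the tabulated iterates, here is the same Newton loop sketched in Python:

```python
import math

# Newton-Raphson for f(x) = exp(-x) - x, starting from x1 = 0:
# x_(n+1) = x_n - f(x_n)/f'(x_n)
f = lambda t: math.exp(-t) - t
df = lambda t: -math.exp(-t) - 1

x = [0.0]                       # x1 = 0
for i in range(4):              # compute x2 ... x5
    x.append(x[-1] - f(x[-1]) / df(x[-1]))
print(x)   # 0.0, 0.5, 0.56631..., 0.56714316..., 0.56714329...
```

The printed values reproduce the table, converging rapidly toward the true root 0.56714329...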
Newton's Method: another example

Approximate a root of f(x) = cos x - x using Newton's method, employing an initial guess of x_1 = π/4:

    clear
    f  = @(t) cos(t) - t;
    df = @(t) -sin(t) - 1;
    x(1) = pi/4;
    for i = 1:3
        x(i+1) = x(i) - f( x(i) )/df( x(i) );
    end
    x'

    n    x_n (Newton)
    1    0.785398163397448
    2    0.739536133515238
    3    0.739085178106010
    4    0.739085133215161
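The same iteration for cos x - x can be sketched in Python; after a few steps it settles on the unique solution of cos x = x:

```python
import math

# Newton's method for f(x) = cos(x) - x, with f'(x) = -sin(x) - 1,
# starting from x1 = pi/4.
f = lambda t: math.cos(t) - t
df = lambda t: -math.sin(t) - 1

x = math.pi / 4
for i in range(5):
    x = x - f(x) / df(x)
print(x)   # approximately 0.7390851332, where cos(x) = x
```

The limit 0.739085133... agrees with the tabulated Newton iterates above.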