Slide 1: MA2213 Lecture 9 Nonlinear Systems
Slide 2: Midterm Test Results
Slide 3: Topics
- Roots of One Nonlinear Equation in One Variable: Secant Method (pages 90-97)
- Roots of Nonlinear Systems (n Equations, n Variables): Newton's Method (pages 352-360)
- Calculus Review (pages 79-89): Intermediate Value Theorem, Newton's Method, Mean Value Theorems for Derivatives and Integrals
- Applications to Eigenvalue-Eigenvector Calculation
- Applications to Optimization
Slide 4: Mean Value Theorem for Derivatives
Theorem A.4, p. 494. Let f be continuous on [a, b] and differentiable on (a, b). Then there is at least one point c in (a, b) such that
f(b) - f(a) = f'(c)(b - a).
Slide 5: Newton's Method
Newton's method is based on approximating the graph of y = f(x) with a tangent line at the current estimate, and then using the root of that tangent line as the next approximation to the root of f(x):
x_{n+1} = x_n - f(x_n) / f'(x_n).
Slide 6: Error of Newton's Method
Newton's iteration x_{n+1} = x_n - f(x_n) / f'(x_n) for finding a root a of f. Taylor's theorem with remainder (two applications of the Mean Value Theorem between x_n and a) shows that the error e_n = a - x_n satisfies
e_{n+1} = -[f''(c_n) / (2 f'(x_n))] e_n^2,
where c_n is between x_n and a. Hence |e_{n+1}| <= B |e_n|^2 with B = max|f''| / (2 min|f'|) near the root.
Question: Compare B with the estimate in slides 33-34 of Lecture 1.
Slide 7: MATLAB for Newton's Method
MATLAB implementation of formula 3.27 on page 91. Start with one estimate:

>> x(1) = 2; f(1) = x(1)^6 - x(1) - 1
>> for n = 1:10
     S = 6*x(n)^5 - 1;
     x(n+1) = x(n) - f(n)/S;
     f(n+1) = x(n+1)^6 - x(n+1) - 1;
   end
>> x'
ans =
   2.00000000000000
   1.68062827225131
   1.43073898823906
   1.25497095610944
   1.16153843277331
   1.13635327417051
   1.13473052834363
   1.13472413850022
   1.13472413840152

Example 3.3.1, pages 91-92.
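An equivalent sketch in Python (an illustrative port, not part of the original slides) reproduces the iterates and lets one check the quadratic error bound from slide 6:

```python
# Python port of the slide's Newton iteration for f(x) = x^6 - x - 1.
# (Illustrative sketch; the original slides use MATLAB.)

def f(x):
    return x**6 - x - 1

def fprime(x):
    return 6 * x**5 - 1

xs = [2.0]                      # start with one estimate, x(1) = 2
for n in range(10):
    xs.append(xs[-1] - f(xs[-1]) / fprime(xs[-1]))

root = xs[-1]                   # ~1.13472413840152

# Quadratic convergence: |e_{n+1}| / |e_n|^2 approaches |f''(a)| / (2|f'(a)|)
errs = [abs(x - root) for x in xs]
ratio = errs[6] / errs[5]**2    # roughly 2.4 for this f
```

The ratio of successive errors |e_{n+1}| / |e_n|^2 settles near f''(a) / (2 f'(a)) ≈ 2.4 here, illustrating the bound B on slide 6.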
Slide 8: Secant Method
The secant method is based on approximating the graph of y = f(x) with a secant line through the two most recent estimates, and then using the root of that secant line as the next approximation to the root of f(x):
x_{n+2} = x_{n+1} - f(x_{n+1}) (x_{n+1} - x_n) / (f(x_{n+1}) - f(x_n)).
Slide 9: Error of Secant Method
It can be shown, using the calculus methods we used to derive error bounds for Newton's method, that the estimates computed by the secant method satisfy equation 3.28 on page 92:
e_{n+2} = -[f''(c_n) / (2 f'(d_n))] e_{n+1} e_n,
where d_n is between x_n and x_{n+1}, and c_n is between the largest and smallest of x_n, x_{n+1}, and the root. The analysis on page 92 and Problem 8 on pages 96-97 show, using the growth of the Fibonacci sequence, that
|e_{n+1}| ≈ c |e_n|^r, where r = (1 + sqrt(5))/2 ≈ 1.62 and c is a constant.
Slide 10: MATLAB for Secant Method
MATLAB implementation of formula 3.27 on page 91. Start with two estimates:

>> x(1) = 2; f(1) = x(1)^6 - x(1) - 1
>> x(2) = 1; f(2) = x(2)^6 - x(2) - 1
>> for n = 1:9
     S = (f(n+1) - f(n)) / (x(n+1) - x(n));
     x(n+2) = x(n+1) - f(n+1)/S;
     f(n+2) = x(n+2)^6 - x(n+2) - 1;
   end
>> x'
ans =
   2.00000000000000
   1.00000000000000
   1.01612903225806
   1.19057776867664
   1.11765583094155
   1.13253155021613
   1.13481680800485
   1.13472364594870
   1.13472413829122
   1.13472413840152

Example 3.3.1, pages 91-92.
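The same iteration can be sketched in Python (an illustrative port, not part of the original slides):

```python
# Python port of the slide's secant iteration for f(x) = x^6 - x - 1.
# (Illustrative sketch; the original slides use MATLAB.)

def f(x):
    return x**6 - x - 1

xs = [2.0, 1.0]                 # start with two estimates
for n in range(9):
    slope = (f(xs[-1]) - f(xs[-2])) / (xs[-1] - xs[-2])
    xs.append(xs[-1] - f(xs[-1]) / slope)

root = xs[-1]                   # ~1.13472413840152
```

Note that each step needs only values of f, not of f', which is the point of slide 11.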
Slide 11: Applications of Secant Method
The secant method is particularly useful for finding roots of a function that is defined by an algorithm. In that case there may not even exist an algorithm to compute the derivative of the function, hence Newton's method is useless.
Example 1. The frequency b(n) of a gene (that causes certain moths to be black) in the n-th generation satisfies
b(n+1) = b(n) / (1 - s*(1 - b(n))^2),
where s is the selection coefficient. The MATLAB code below

>> b(1) = 0.00001;
>> s = .33;
>> for n = 1:50
     b(n+1) = b(n) / (1 - s*(1 - b(n))^2);
   end

defines a function that equals the frequency of the gene in the 50th generation if the frequency in the 1st generation = 0.00001 and the selection coefficient = s.
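A Python sketch of this gene-frequency function (an illustrative port; the name f and the wrapping as a function of s are as described on the slide):

```python
# f(s) = gene frequency after 50 generations, starting from b(1) = 0.00001,
# with selection coefficient s. (Illustrative port of the slide's MATLAB.)

def f(s):
    b = 0.00001
    for _ in range(50):
        b = b / (1 - s * (1 - b)**2)
    return b
```

f is defined only by this algorithm, so its derivative with respect to s has no convenient formula; that is why the homework solves f(s) = 0.7 with the secant method rather than Newton's method.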
Slide 12: Applications of Secant Method
Slide 13: Motivation
Let us consider two equations in two variables, f(x, y) = 0 and g(x, y) = 0.
Question: What are the graphs of these equations?
If (x0, y0) is an initial guess (an approximate zero of both f and g), then we may approximate f and g by the linear functions
f(x, y) ≈ f(x0, y0) + f_x(x0, y0)(x - x0) + f_y(x0, y0)(y - y0),
g(x, y) ≈ g(x0, y0) + g_x(x0, y0)(x - x0) + g_y(x0, y0)(y - y0).
Slide 14: Motivation
We can express this in matrix form as
[f(x, y); g(x, y)] ≈ [f(x0, y0); g(x0, y0)] + M [x - x0; y - y0],
where M is the matrix of derivatives defined by
M = [f_x(x0, y0)  f_y(x0, y0); g_x(x0, y0)  g_y(x0, y0)].
If M is invertible then a reasonable next guess is
[x1; y1] = [x0; y0] - M^(-1) [f(x0, y0); g(x0, y0)].
Question: Why is this guess reasonable?
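A minimal Python sketch of this two-variable Newton step (the specific f and g from slide 13 are not reproduced here, so a hypothetical pair is used):

```python
# Newton's method for two equations in two variables.
# (Illustrative sketch; the slides use MATLAB, and the test system below
# is hypothetical: x^2 + y^2 = 4 together with x*y = 1.)

def newton2(f, g, fx, fy, gx, gy, x, y, niter=15):
    """Iterate (x, y) <- (x, y) - M^(-1) [f; g], M the 2x2 Jacobian."""
    for _ in range(niter):
        a, b = fx(x, y), fy(x, y)     # M = [a b; c d]
        c, d = gx(x, y), gy(x, y)
        det = a * d - b * c           # M must be invertible
        rf, rg = f(x, y), g(x, y)     # residuals
        # Solve M [dx; dy] = [rf; rg] by Cramer's rule
        x -= (d * rf - b * rg) / det
        y -= (a * rg - c * rf) / det
    return x, y

x, y = newton2(lambda x, y: x**2 + y**2 - 4,
               lambda x, y: x * y - 1,
               lambda x, y: 2 * x, lambda x, y: 2 * y,
               lambda x, y: y,     lambda x, y: x,
               2.0, 0.0)             # initial guess (x0, y0) = (2, 0)
```

After a few iterations both residuals are driven to zero, which answers why the guess is reasonable: it is the exact zero of the linearized system.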
Slide 15: Motivation
Example. For the f, g on slide 13 and the initial guess given there:
Question: What should the next guess be?
Question: What happened to the residual?
Slide 16: The General Newton Method
We change notation: write the system as F(x) = 0, where x = current estimate. Taylor's Theorem and the Chain Rule imply
F(x + h) ≈ F(x) + F'(x) h,
and we choose the next estimate to be x + h, where h = -F'(x)^(-1) F(x) makes the right side above = 0.
Slide 17: The General Newton Method
For a general system of n equations in n variables, F(x) = 0 with F: R^n -> R^n, Newton's method iterates
x_{k+1} = x_k - F'(x_k)^(-1) F(x_k),
where F'(x_k) is the n x n matrix of partial derivatives (the Jacobian) evaluated at the current estimate.
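The general iteration can be sketched with NumPy (an illustrative sketch, not from the slides; the caller supplies F and its Jacobian, and the test system is hypothetical):

```python
# General Newton's method for F(x) = 0 with x in R^n.
# (Illustrative NumPy sketch; the slides use MATLAB.)
import numpy as np

def newton_system(F, J, x0, niter=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(niter):
        # Newton step: solve F'(x) h = -F(x), then x <- x + h
        h = np.linalg.solve(J(x), -F(x))
        x = x + h
    return x

# Hypothetical test system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0] * v[1] - 1])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton_system(F, J, [2.0, 0.5])
```

In practice one solves the linear system F'(x) h = -F(x) rather than forming the inverse explicitly.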
Slide 18: Eigenvalue-Eigenvector Application
For an n x n matrix A, construct
F(v, lambda) = [A v - lambda v; (v'v - 1)/2],
so that F(v, lambda) = 0 exactly when v is a unit eigenvector of A with eigenvalue lambda.
Result: If v is an eigenvector corresponding to a simple eigenvalue lambda, then the derivative matrix
J = [A - lambda*I  -v; v'  0]
is nonsingular.
Proof sketch: if J [w; mu] = 0 then (A - lambda*I) w = mu v and v'w = 0. For a simple eigenvalue, v is not in the range of A - lambda*I, so mu = 0; then w is a multiple of v, and v'w = 0 forces w = 0.
Slide 19: MATLAB for Newton Eig-Eig

function [v,l] = newt(A,v0,l0,niter)
% function [v,l] = newt(A,v0,l0,niter)
%
% Inputs:
%   A is a complex 4 x 4 matrix
%   v0 = initial eigenvector estimate
%   l0 = initial eigenvalue estimate
%   niter = number of iterations
% Outputs:
%   v = eigenvector
%   l = eigenvalue
Id = eye(4);        % 4 x 4 identity matrix
x = [v0;l0];        % 'system' vector
v = x(1:4); l = x(5);
for k = 1:niter
    B = A - l*Id;
    J = [B -v; v' 0];
    res = [B*v; .5*v'*v - .5];
    x = x - J\res;
    v = x(1:4); l = x(5);
end

Question: In the k-loop, what name is given to the derivative matrix?
Question: Under what conditions is res = 0?
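The same scheme in Python (an illustrative NumPy port of the MATLAB function newt; the diagonal test matrix and starting values here are hypothetical, chosen so that the true answer is obvious):

```python
# Newton iteration for an eigenpair of A, solving
# F(v, l) = [ (A - l*I) v ; (v'v - 1)/2 ] = 0.
# (Illustrative port of the slide's MATLAB newt; real arithmetic only.)
import numpy as np

def newt(A, v0, l0, niter):
    n = A.shape[0]
    I = np.eye(n)
    v, l = np.array(v0, dtype=float), float(l0)
    for _ in range(niter):
        B = A - l * I
        # Derivative matrix J = [B  -v; v'  0]
        J = np.block([[B, -v[:, None]],
                      [v[None, :], np.zeros((1, 1))]])
        res = np.concatenate([B @ v, [0.5 * v @ v - 0.5]])
        step = np.linalg.solve(J, res)
        v, l = v - step[:n], l - step[n]
    return v, l

A = np.diag([1.0, 2.0, 3.0, 4.0])            # hypothetical test matrix
v, l = newt(A, [0.1, 0.2, 0.9, 0.1], 2.9, 20)
```

With the initial guess near the eigenpair (3, e3), the iteration converges to that eigenvalue and a unit eigenvector, mirroring the slide-20 session.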
Slide 20: Computation of Real Eig-Eig

>> [A V diag(D)]
ans =
   1.9547  -1.8721  -1.3984   0.0892   0.6631  -0.1716   0.6807   0.2596   0.2492
  -1.8721   8.9893   0.8771  -0.1485   0.0725  -0.0121   0.2902  -0.9541   0.6239
  -1.3984   0.8771   1.4833  -0.1597   0.7231  -0.0845  -0.6695  -0.1476   2.5406
   0.0892  -0.1485  -0.1597   0.6239   0.1791   0.9815   0.0650   0.0209   9.6375
>> v0 = [.5 .3 -.5 .1]'
>> l0 = 3
>> niter = 4;
>> [v,l] = newt(A,v0,l0,niter);
>> v - V(:,3)
ans =
   1.0e-010 *
   0.0689
   0.0234
  -0.0534
  -0.1097
>> l - D(3,3)
ans =
  -6.6591e-012
Slide 21: Computation of Complex Eig-Eig

>> [A diag(D)]
ans =
  -0.4326  -1.1465   0.3273  -0.5883   2.1559
  -1.6656   1.1909   0.1746   2.1832  -1.3857
   0.1253   1.1892  -0.1867  -0.1364  -0.0423 + 0.8071i
   0.2877  -0.0376   0.7258   0.1139  -0.0423 - 0.8071i
>> V
V =
   0.3410   0.7501  -0.0291 - 0.3762i  -0.0291 + 0.3762i
  -0.8442   0.4292   0.0498 + 0.4033i   0.0498 - 0.4033i
  -0.4056  -0.4921   0.6197             0.6197
  -0.0806   0.1051  -0.2490 - 0.4964i  -0.2490 + 0.4964i
Slide 22: Computation of Complex Eig-Eig

>> (v0./V(:,3))/(v0(1)/V(1,3))
ans =
   1.0000
   1.2388 + 0.4599i
   0.8272 + 0.6597i
   0.8024 + 0.4619i
>> l0/D(3,3)
ans =
   0.8437 - 0.1394i
>> [v,l] = newt(A,v0,l0,4);
>> (v./V(:,3))/(v(1)/V(1,3))
ans =
   1.00000000000000
   0.99999999999683 - 0.00000000001017i
   0.99999999999611 - 0.00000000000016i
   0.99999999999773 - 0.00000000000524i
>> l/D(3,3)
ans =
   1.00000000000190 + 0.00000000000014i
Slide 23: Taylor's Theorem in Several Variables
Taylor's theorem applied to g(t) = G(x + t h), for G: R^n -> R, yields
G(x + h) ≈ G(x) + grad G(x)' h + (1/2) h' H(x) h,
and the chain rule implies that the quadratic term involves the matrix of second derivatives
H(x)_{ij} = d^2 G / dx_i dx_j,
which is the Hessian of G at x.
Slide 24: Optimization
Theorem. If G is continuously differentiable and has a local minimum / local maximum at p, then grad G(p) = 0. Conversely, if G is twice continuously differentiable, grad G(p) = 0, and the Hessian H(p) is positive / negative definite, then G has a local minimum / local maximum at p.
Proof. If G has a local minimum at p, then for every direction h the function t -> G(p + t h) has a local minimum at t = 0, hence grad G(p)' h = 0 for all h, so grad G(p) = 0; similarly for a local maximum. Positive / negative definite means h' H(p) h > 0 / < 0 whenever h is nonzero. Hence grad G(p) = 0 and H(p) positive / negative definite imply, by the Taylor expansion on the previous slide, that G(p + h) > G(p) / G(p + h) < G(p) for all small nonzero h; therefore G has a local minimum / maximum at p.
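This connects optimization back to Newton's method: a critical point solves the nonlinear system grad G = 0, whose Jacobian is the Hessian. A minimal Python sketch with a hypothetical objective (not from the slides):

```python
# Newton's method for optimization: solve grad G = 0 using the Hessian.
# Hypothetical objective G(x, y) = x^4 + y^4 - 4xy, which has a local
# minimum at (1, 1) where the Hessian [[12, -4], [-4, 12]] is positive
# definite. (Illustrative sketch; the slides use MATLAB.)

def grad(x, y):
    return 4 * x**3 - 4 * y, 4 * y**3 - 4 * x

def hess(x, y):
    return 12 * x**2, -4.0, -4.0, 12 * y**2   # entries a, b, c, d

x, y = 1.5, 1.2                               # initial guess near the minimum
for _ in range(20):
    g1, g2 = grad(x, y)
    a, b, c, d = hess(x, y)
    det = a * d - b * c                       # Hessian must be invertible
    # Newton step: solve H [dx; dy] = [g1; g2] by Cramer's rule
    x -= (d * g1 - b * g2) / det
    y -= (a * g2 - c * g1) / det
```

Newton's method only finds critical points; checking the definiteness of the Hessian, as in the theorem above, is what distinguishes a minimum from a maximum or saddle.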
Slide 25: Homework Due Lab 5 (Week 12, 5-9 November)
1. Write a MATLAB program to implement the secant method and use it to (a) do Problem 1 on page 96; (b) do 15 iterations to solve the problem in Example 3.3.1 on page 91, and for that example verify that the error satisfies the approximation 3.31 on page 92, estimate the value of the constant c numerically, and compare with the formula at the bottom of page 92. You should study Problem 8 on pages 96-97 to learn how the error estimate is derived and its relationship to the Fibonacci sequence.
2. Write a MATLAB program to compute the function f in slide 11 and use it with the secant-method program from Problem 1 to compute a value of s such that f(s) = 0.7. Suggestion: Start with s(1) = .33 and s(2) = .30.
3. Do problems 1, 2, and 3 on page 364 of the textbook.
4. Extra Credit: Write a MATLAB program to compute the position of an airplane from its approximate position and its distances from three GPS satellites. Synthesize some data and test the program.