Root Finding Methods Fish 559; Lecture 15 a

What is Root Finding-I? Find the value x* such that the following system of equations is satisfied: f_i(x*) = 0 for i = 1, 2, ..., n. This general problem arises very frequently in stock assessment and management. We will first consider the case n = 1 (a single equation in a single unknown) as it is the most common case encountered.

What is Root Finding-II? Typical examples in fisheries assessment and management include:
- Find K for a Schaefer model so that, if the model is projected forward from K in year 0 to year m, the biomass in year m equals Z (see the sketch following this list).
- Find the catch limit so that the probability of recovery equals a pre-specified value.
- Find F0.1, i.e. the fishing mortality at which the slope of the yield-per-recruit curve is 10% of its slope at the origin (F = 0).
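As an illustration of how the first example becomes a one-dimensional root-finding problem, the biomass in year m can be written as a function of K and the root of B_m(K) − Z sought. The R sketch below is illustrative only: the function name, the growth rate r, the catch history, and the target Z are made-up values, and uniroot (introduced later in this lecture) is used as the solver.

```r
# Sketch (hypothetical values): condition a Schaefer model on a known biomass Z
# in year m by solving B_m(K) - Z = 0 for K.
schaefer_biomass_m <- function(K, r, catches) {
  B <- K                               # start from the unfished level in year 0
  for (C in catches) {
    B <- B + r * B * (1 - B / K) - C   # Schaefer surplus production minus catch
    B <- max(B, 1e-6)                  # guard against negative biomass
  }
  B
}

r <- 0.2                             # hypothetical intrinsic growth rate
catches <- c(50, 60, 70, 80, 90)     # hypothetical catch history (years 1..m)
Z <- 600                             # hypothetical "known" biomass in year m

g <- function(K) schaefer_biomass_m(K, r, catches) - Z   # root-finding target
K_hat <- uniroot(g, interval = c(200, 5000))$root
K_hat
```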

Methods for Root Finding There are several methods for finding roots; the choice among them depends on:
- The cost of evaluating the function.
- Whether the function is differentiable (it must be continuous and monotonic for most methods).
- Whether the derivative of the function is easily computable.
- The cost of programming the algorithm.

The Example We wish to find the value of x which satisfies the equation:

Derivative-free methods

The Bisection Method-I

The Bisection Method-II
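The bisection slides themselves are graphical; as a minimal sketch (not the lecture's own code), the method can be written in a few lines of R, assuming f is continuous and changes sign over the starting interval. The function name and the test equation are illustrative.

```r
# Bisection: repeatedly halve an interval that is known to bracket a root.
bisect <- function(f, lower, upper, tol = 1e-8, max_iter = 100) {
  f_lo <- f(lower)
  if (f_lo * f(upper) > 0) stop("f(lower) and f(upper) must have opposite signs")
  for (i in seq_len(max_iter)) {
    mid   <- 0.5 * (lower + upper)
    f_mid <- f(mid)
    if (abs(f_mid) < tol || (upper - lower) < tol) return(mid)
    if (f_lo * f_mid < 0) {
      upper <- mid              # root lies in [lower, mid]
    } else {
      lower <- mid              # root lies in [mid, upper]
      f_lo  <- f_mid
    }
  }
  mid
}

bisect(function(x) x^3 - 2 * x - 5, lower = 1, upper = 3)   # about 2.0946
```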

The False Position Method-I

The False Position Method-II The initial values need not bracket the solution
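A hedged sketch of the update in R (illustrative, not the lecture's code): each new estimate is the x-intercept of the straight line through the two most recent points. As written, this is the secant-style variant consistent with the note above that the initial values need not bracket the root; the classical false position method instead retains a bracketing pair.

```r
# Linear interpolation between the two most recent estimates; the x-intercept
# of that line becomes the next estimate.
false_position <- function(f, x0, x1, tol = 1e-8, max_iter = 100) {
  f0 <- f(x0)
  f1 <- f(x1)
  for (i in seq_len(max_iter)) {
    x2 <- x1 - f1 * (x1 - x0) / (f1 - f0)   # x-intercept of the interpolating line
    f2 <- f(x2)
    if (abs(f2) < tol) return(x2)
    x0 <- x1; f0 <- f1
    x1 <- x2; f1 <- f2
  }
  x2
}

false_position(function(x) x^3 - 2 * x - 5, x0 = 1, x1 = 3)   # about 2.0946
```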

Brent’s Method (The method of choice) The false position method assumes approximately linear behavior between the root estimates; Brent’s method assumes quadratic behavior. The number of function calls can be much smaller than for the bisection and false position methods (at the cost of a more complicated computer program). Brent’s method underlies the R function uniroot.
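Because uniroot implements Brent's method, the one-dimensional bracketed case reduces to a single call. The function here is the same illustrative cubic used above, not the lecture's example.

```r
# Brent's method via R's built-in uniroot(); the interval must bracket the root.
res <- uniroot(function(x) x^3 - 2 * x - 5, interval = c(1, 3), tol = 1e-10)
res$root   # about 2.0945515
res$iter   # number of iterations Brent's method needed
```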

Brent’s Method

Derivative-based methods

Newton’s Method-I (Single-dimension case) Consider the Taylor series expansion of the function f about the current estimate x: f(x + ε) = f(x) + ε f'(x) + (ε²/2) f''(x) + ... Now for “small” values of ε and for “well-behaved” functions we can ignore the 2nd and higher order terms. We wish to find ε so that f(x + ε) = 0, which gives ε = −f(x) / f'(x), i.e. the update x_new = x − f(x) / f'(x). Newton’s method involves the iterative use of the above equation.
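A minimal R sketch of this iteration (the function, its derivative, and the helper name are illustrative, not the lecture's example):

```r
# Newton's method: repeat x <- x - f(x) / f'(x) until the step is small.
newton <- function(f, fprime, x0, tol = 1e-10, max_iter = 50) {
  x <- x0
  for (i in seq_len(max_iter)) {
    step <- f(x) / fprime(x)
    x <- x - step
    if (abs(step) < tol) return(x)
  }
  warning("Newton's method did not converge within max_iter iterations")
  x
}

newton(f      = function(x) x^3 - 2 * x - 5,
       fprime = function(x) 3 * x^2 - 2,
       x0     = 2)   # about 2.0946
```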

Newton’s Method-II Note that Newton’s method may diverge rather than converge. This makes it of questionable value for general application.
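Divergence is easy to demonstrate with a short loop (an illustrative case, not from the lecture): for f(x) = arctan(x), starting sufficiently far from the root at zero makes each Newton step overshoot, and the iterates grow without bound.

```r
# Newton steps for f(x) = atan(x), f'(x) = 1 / (1 + x^2), started at x = 1.5:
# the iterates alternate in sign and explode in magnitude instead of converging.
x <- 1.5
for (i in 1:6) {
  x <- x - atan(x) * (1 + x^2)
  cat(sprintf("iteration %d: x = %.3f\n", i, x))
}
```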

Ujevic et al.’s method

Multi-dimensional problems-I There is no general solution to this type of problem. [Figure: the curves f = 0 and g = 0 in two dimensions; roots of the system occur where the curves intersect.]

Multi-dimensional problems-II There are two “solutions” to the problem of finding the vector x such that the system of equations f_i(x) = 0, i = 1, 2, ..., n is satisfied:
- Use a multi-dimensional version of the Newton-Raphson method;
- Treat the problem as a non-linear minimization problem.

Multi-dimensional problems-III (the multi-dimensional Newton-Raphson method) The Taylor series expansion of f_i about the current vector x is: f_i(x + δx) = f_i(x) + Σ_j (∂f_i/∂x_j) δx_j + (higher-order terms). Ignoring the higher-order terms and requiring f_i(x + δx) = 0 for all i, this can be written as a system of linear equations: J(x) δx = −f(x), where J is the Jacobian matrix with elements J_ij = ∂f_i/∂x_j.

Multi-dimensional problems-IV (the multi-dimensional Newton-Raphson method) Given a current vector x_old, it can be updated according to the equation: x_new = x_old + δx, where δx is obtained by solving J(x_old) δx = −f(x_old).
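A minimal R sketch of the multi-dimensional iteration, assuming the Jacobian can be supplied analytically (the helper name and the two-equation test system are illustrative only):

```r
# Multi-dimensional Newton-Raphson: solve J(x) dx = -f(x), then update x <- x + dx.
newton_system <- function(f, jacobian, x0, tol = 1e-10, max_iter = 50) {
  x <- x0
  for (i in seq_len(max_iter)) {
    fx <- f(x)
    if (max(abs(fx)) < tol) return(x)
    dx <- solve(jacobian(x), -fx)   # linear solve for the update step
    x  <- x + dx
  }
  x
}

# Illustrative system: x^2 + y^2 = 4 and y = x, with a root near (sqrt(2), sqrt(2)).
f <- function(x) c(x[1]^2 + x[2]^2 - 4, x[2] - x[1])
jacobian <- function(x) matrix(c(2 * x[1], 2 * x[2],
                                 -1,        1), nrow = 2, byrow = TRUE)
newton_system(f, jacobian, x0 = c(1, 1))
```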

Multi-dimensional problems-V (use of optimization methods) Rather than attempting to solve the system of equations using, say, Newton’s method, it is often more efficient to apply an optimization method to minimize the quantity: F(x) = Σ_i [f_i(x)]², which equals zero exactly at a root of the system.
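A hedged R sketch of this alternative, using the general-purpose optimizer optim() on the sum of squared residuals for the same illustrative system as above. The minimized value should be checked: a local minimum with a positive objective value is not a root.

```r
# Treat root finding as minimization of F(x) = sum_i f_i(x)^2.
f <- function(x) c(x[1]^2 + x[2]^2 - 4, x[2] - x[1])
sum_sq <- function(x) sum(f(x)^2)

fit <- optim(par = c(1, 1), fn = sum_sq, method = "Nelder-Mead",
             control = list(reltol = 1e-12))
fit$par     # about c(1.414, 1.414)
fit$value   # should be (numerically) zero at a true root
```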