
Newton-Gauss Algorithm
iii) Calculation of the shift parameter vector

Linearising the residuals around the current estimates p0 and requiring them to vanish gives

R(p0) = -[ ∂R(p0)/∂p1   ∂R(p0)/∂p2   … ] [ Δp1 ; Δp2 ; … ]

The matrix of partial derivatives ∂R(p0)/∂pi is the Jacobian matrix.

Newton-Gauss Algorithm
iii) Calculation of the shift parameter vector

With the Jacobian matrix J and the vectorised residuals r(p0):

r(p0) = -J Δp
Δp = -(J^T J)^-1 J^T r(p0)
Δp = -J^+ r(p0)

where J^+ = (J^T J)^-1 J^T is the pseudoinverse of J.
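A minimal MATLAB sketch of this step, assuming the Jacobian J and the residual vector r0 are already available; the function name shift_vector is illustrative and not part of the toolbox referred to in these slides:

```matlab
function dp = shift_vector(J, r0)
% Gauss-Newton shift vector from the Jacobian J and the residual vector r0.
% The three expressions below are numerically equivalent:
dp = -((J' * J) \ (J' * r0));   % normal equations, -(J'J)^(-1) J' r0
% dp = -pinv(J) * r0;           % explicit pseudoinverse, -J^+ r0
% dp = -(J \ r0);               % least-squares backslash, usually preferred
end
```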

Newton-Gauss Algorithm
iii) Calculation of the shift parameter vector

[Figure: the residual matrix R(k1, k2) and the Jacobian matrices J(k1) and J(k2), together with their vectorised forms r(k1, k2), vectorised J(k1) and vectorised J(k2).]

Newton-Gauss Algorithm
iii) Calculation of the shift parameter vector

r(0.3, 0.15) = -J(0.3) Δk1 - J(0.15) Δk2
Δp = -J^+ r(0.3, 0.15)
Δp = [ ]
p = p0 + Δp = [ ] + [ ] = [ ]
ssq_old =      ssq =

Newton-Gauss Algorithm
iv) Iteration until convergence

Convergence Criterion: Depending on the data, ssq can be very small or very large. Therefore, a convergence criterion based on the relative change in ssq has to be applied. The iterations are stopped once the absolute value of the relative change is less than a preset value μ, typically μ = 10^-4:

Abs( (ssq_old - ssq) / ssq_old ) ≤ μ

Newton-Gauss Algorithm

[Flow chart: guess parameters, p = p_start → calculate residuals r(p) and sum of squares ssq → is ssq constant? If yes: end, display results. If no: calculate the Jacobian J, calculate the shift vector Δp, set p = p + Δp, and return to the residual calculation.]
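The loop in the flow chart can be written down compactly. The following is a hypothetical sketch of such a Newton-Gauss routine (the function names, iteration limit and finite-difference step are assumptions, not the actual ng.m):

```matlab
function [p, ssq] = ng_sketch(rfun, p)
% Newton-Gauss iteration (a sketch, not the actual ng.m).
% rfun(p) must return the vectorised residuals r(p) as a column vector.
mu      = 1e-4;                   % relative convergence criterion for ssq
ssq_old = 0;
for it = 1:50                     % safeguard against endless loops
    r   = rfun(p);
    ssq = r' * r;                 % sum of squared residuals
    if it > 1 && abs((ssq_old - ssq) / ssq_old) <= mu
        break                     % ssq essentially constant: converged
    end
    ssq_old = ssq;
    J  = jac_fd(rfun, p, r);      % Jacobian by forward differences
    dp = -(J \ r);                % shift vector, equal to -pinv(J)*r
    p  = p + dp;
end
end

function J = jac_fd(rfun, p, r0)
% Forward-difference Jacobian of the residual vector with respect to p.
J = zeros(numel(r0), numel(p));
for i = 1:numel(p)
    pp      = p;
    pp(i)   = pp(i) + 1e-6 * (abs(pp(i)) + 1e-6);
    J(:, i) = (rfun(pp) - r0) / (pp(i) - p(i));
end
end
```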

Error Estimation

The availability of estimates for the standard deviations of the fitted parameters is a crucial advantage of the Newton-Gauss algorithm.

Hessian matrix: H = J^T J

The inverted Hessian matrix, H^-1, is the variance-covariance matrix of the fitted parameters. The diagonal elements contain information on the parameter variances and the off-diagonal elements on the covariances.

σ_i = σ_A · sqrt(d_i,i)
σ_A = sqrt( ssq / (nt × nλ − (np + nc × nλ)) )

where d_i,i are the diagonal elements of H^-1, nt is the number of time points, nλ the number of wavelengths, np the number of nonlinear parameters and nc the number of absorbing species.
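A sketch of the error estimation after convergence, under the interpretation of the symbols given above; the helper name par_errors and the argument list are assumptions for illustration:

```matlab
function sig_par = par_errors(J, ssq, nt, nl, np, nc)
% Standard deviations of the fitted nonlinear parameters (sketch).
% nt: time points, nl: wavelengths, np: nonlinear parameters, nc: species.
H       = J' * J;                                % Hessian (Gauss-Newton approximation)
Hinv    = inv(H);                                % variance-covariance matrix (up to sig_A^2)
sig_A   = sqrt(ssq / (nt*nl - (np + nc*nl)));    % standard deviation of the residuals
sig_par = sig_A * sqrt(diag(Hinv));              % sig_i = sig_A * sqrt(Hinv(i,i))
end
```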

ng function: using the Newton-Gauss algorithm for multivariate fitting

r_cons function: introducing the model for consecutive kinetics to the ng function
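A sketch of how such a residual function could be structured for the consecutive reaction A → B → C, using the integrated rate laws quoted earlier and computing the linear parameters (pure spectra) by E = C \ A; this is an illustration, not the actual r_cons.m, and it assumes k1 ≠ k2:

```matlab
function r = r_cons_sketch(k, A, t, A0)
% Residuals for the consecutive reaction A -> B -> C (sketch, not r_cons.m).
% k = [k1; k2], A = measured absorbance matrix (nt x nl), t = column vector
% of times, A0 = initial concentration of A. Assumes k1 ~= k2.
k1 = k(1);  k2 = k(2);
Ca = A0 * exp(-k1 * t);                                  % [A](t)
Cb = A0 * k1 / (k2 - k1) * (exp(-k1*t) - exp(-k2*t));    % [B](t)
Cc = A0 - Ca - Cb;                                       % [C](t) from mass balance
C  = [Ca Cb Cc];                                         % concentration matrix (nt x 3)
E  = C \ A;                                              % linear parameters: pure spectra
R  = A - C * E;                                          % residual matrix
r  = R(:);                                               % vectorised residuals
end
```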

kinfit5: executing the ng function with initial estimates
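A hypothetical driver in the spirit of kinfit5.m, combining the two sketches above; the data matrix A is assumed to already hold the measured absorbances, and all names and numbers are placeholders:

```matlab
% Driver sketch (assumes A exists in the workspace as the measured data matrix)
t    = (0:1:50)';                        % measurement times
k0   = [0.3; 0.15];                      % initial estimates for k1 and k2
rfun = @(k) r_cons_sketch(k, A, t, 1);   % consecutive-kinetics model, [A]0 = 1
[k_fit, ssq] = ng_sketch(rfun, k0);      % fitted rate constants and final ssq
```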

? Read the ng.m, r_cons.m and kinfit5.m files carefully and explain them.

Rank deficiency and fitting

Second order kinetics: A + B → C, rate constant k

[A] + [C] = [A]0
[B] + [C] = [B]0
[B]0 = β[A]0
[B] + [C] = β[A] + β[C]
β[A] − [B] + (β − 1)[C] = 0

Rank deficiency in the concentration profiles: linear dependency.
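The linear dependency can be checked numerically. The sketch below builds the three concentration profiles from the closed-form second-order solution (assuming [A]0 ≠ [B]0, with the values used on the next slide) and inspects the rank of the concentration matrix:

```matlab
% Illustrating the rank deficiency for A + B -> C (sketch, assumed values)
A0 = 1;  B0 = 1.5;  k = 0.3;  t = (0:0.5:50)';
Ca = A0 * (B0 - A0) ./ (B0 * exp((B0 - A0) * k * t) - A0);   % [A](t), closed form
Cb = Ca + (B0 - A0);                                         % [B] = [A] + ([B]0 - [A]0)
Cc = A0 - Ca;                                                % [C] = [A]0 - [A]
C  = [Ca Cb Cc];
rank(C)                      % returns 2: the three columns are linearly dependent
```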

A = C E + R

[A]0 = 1, [B]0 = 1.5, k = 0.3

E = C \ A

Calculated pure spectra according to E = C \ A

[Figure: measured data, reconstructed data, and residuals]

? Use the ng function to determine the pKa of a weak acid HA.

The Marquardt modification

Generally, the Newton-Gauss method converges rapidly, quadratically near the minimum. However, if the initial estimates are poor, the functional approximation by the Taylor series expansion and the linearization of the problem become invalid. This can lead to divergence of the ssq and failure of the algorithm.

H = J^T J
Δp = -(H + mp × I)^-1 J^T r(p0)

The Marquardt parameter (mp) is initially set to zero. If the ssq diverges, mp is introduced (given a value of 1) and increased (multiplied by 10 per iteration) until the ssq begins to converge. Increasing mp shortens the shift vector and turns it towards the direction of steepest descent. Once the ssq converges, mp is reduced (divided by 3 per iteration) and eventually set to zero when the break criterion is reached.
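The damped shift follows directly from the formula above; a minimal sketch (the function name is illustrative):

```matlab
function dp = marquardt_shift(J, r, mp)
% Damped shift vector (sketch): mp = 0 gives the pure Newton-Gauss step;
% a large mp shortens the step and turns it towards steepest descent.
dp = -((J' * J + mp * eye(size(J, 2))) \ (J' * r));
end
```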

Newton-Gauss method with poor initial estimates of the parameters

Original parameters: k1 = 0.4, k2 = 0.2
Initial estimates: k1 = 4, k2 = 2
Considered model: consecutive kinetics
[Figure: measured data]

kinfit5.m

Newton-Gauss-Levenberg-Marquardt Algorithm

[Flow chart: guess parameters p = p_start and an initial value of mp → calculate residuals r(p) and sum of squares ssq → compare ssq with ssq_old: if ssq > ssq_old, multiply mp by 10; if ssq < ssq_old, divide mp by 3; if ssq is unchanged within the break criterion, set mp = 0 and end, displaying the results → otherwise calculate the Jacobian J and the shift vector Δp, set p = p + Δp, and return to the residual calculation.]
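Written as code, the flow chart corresponds to a loop such as the hypothetical sketch below (not the actual nglm.m); it reuses the jac_fd finite-difference helper shown with ng_sketch earlier:

```matlab
function [p, ssq] = nglm_sketch(rfun, p)
% Newton-Gauss-Levenberg-Marquardt loop (a sketch, not the actual nglm.m).
% rfun(p) returns the vectorised residuals as a column vector; jac_fd is the
% finite-difference Jacobian helper defined with ng_sketch above.
mu = 1e-4;                          % relative convergence criterion for ssq
mp = 0;                             % Marquardt parameter, zero to start
r  = rfun(p);
ssq_old = r' * r;
for it = 1:200                      % safeguard against endless loops
    J  = jac_fd(rfun, p, r);
    dp = -((J' * J + mp * eye(numel(p))) \ (J' * r));   % damped shift vector
    r_new = rfun(p + dp);
    ssq   = r_new' * r_new;
    if ssq > ssq_old                % divergence: increase damping and retry
        mp = max(10 * mp, 1);
    else                            % step accepted: relax damping
        p  = p + dp;
        r  = r_new;
        mp = mp / 3;
        if abs((ssq_old - ssq) / ssq_old) <= mu
            break                   % break criterion reached
        end
        ssq_old = ssq;
    end
end
end
```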

Newton-Gauss-Levenberg-Marquardt algorithm for non-linear curve fitting: nglm.m

kinfit6.m

? Use the nglm function to determine the rate constant of a second-order kinetics.

? Do the calculated errors of the parameters depend on the initial estimates?