Nonlinear least squares
Given m data points (t_i, y_i), i = 1, 2, ..., m, we wish to find a vector x of n parameters that gives a best fit in the least-squares sense to a model m(x, t). For example, consider the exponentially decaying model m(x, t) = x_1 e^(-x_2 t), where x_1 and x_2 are the unknowns, so here n = 2. If x_2 were known, the model would be linear. Define the residual r with m components r_i(x) = y_i - m(x, t_i); we wish to minimize (1/2) ||r(x)||_2^2.
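Below is a minimal NumPy sketch of this setup (the helper names model, residual, and objective are illustrative, not from the slides):

```python
import numpy as np

# Exponential decay model m(x, t) = x_1 * exp(-x_2 * t).
def model(x, t):
    return x[0] * np.exp(-x[1] * t)

# Residual vector: r_i(x) = y_i - m(x, t_i).
def residual(x, t, y):
    return y - model(x, t)

# Least-squares objective: (1/2) * ||r(x)||^2.
def objective(x, t, y):
    r = residual(x, t, y)
    return 0.5 * r @ r
```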
Why consider this special case?
It is a common problem, and the derivatives have special structure. Let J be the Jacobian of r: column j of J contains the partial derivatives of the residuals with respect to x_j. The gradient of the objective is g = J^T r, and the matrix of second partials is H = J^T J + S, where S (a sum of terms weighted by the residuals) is zero for an exact fit. For the model x_1 e^(-x_2 t), row i of J is [ -e^(-x_2 t_i),  x_1 t_i e^(-x_2 t_i) ].
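Continuing the sketch above, the Jacobian, gradient, and Gauss-Newton Hessian approximation for this model could be written as follows (again with illustrative helper names):

```python
# Jacobian of the residual: row i is [dr_i/dx_1, dr_i/dx_2]
#   = [-exp(-x_2 t_i), x_1 t_i exp(-x_2 t_i)].
def jacobian(x, t):
    e = np.exp(-x[1] * t)
    return np.column_stack((-e, x[0] * t * e))

# Gradient of the objective: g = J^T r.
def gradient(x, t, y):
    return jacobian(x, t).T @ residual(x, t, y)

# Gauss-Newton Hessian approximation: H ~ J^T J (drops the S term).
def gn_hessian(x, t):
    J = jacobian(x, t)
    return J.T @ J
```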
Gauss-Newton
Let H = J^T J (i.e., ignore the second term of the Hessian) and perform the Newton iteration. Until convergence:
  let s be the solution of (J(x)^T J(x)) s = -J(x)^T r(x)
  set x = x + s
But (J(x)^T J(x)) s = -J(x)^T r(x) is just the normal-equations form of the linear least squares problem of finding the s that minimizes ||J(x) s + r(x)||_2.
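One possible way to code the iteration, continuing the sketch above; here the step is obtained from the linear least squares problem min_s ||J(x) s + r(x)|| via np.linalg.lstsq rather than by forming the normal equations explicitly, and the stopping test is an arbitrary illustrative choice:

```python
def gauss_newton(x0, t, y, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        J = jacobian(x, t)
        r = residual(x, t, y)
        # Gauss-Newton step: minimize ||J s + r|| over s.
        s, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + s
        if np.linalg.norm(s) < tol * (1.0 + np.linalg.norm(x)):
            break
    return x
```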
Gauss-Newton on the exponential example with model x_1 e^(-x_2 t)

Data:
  t     y
  0.0   2.0
  1.0   0.7
  2.0   0.3
  3.0   0.1

With the initial guess x = [1 0]^T, the rows of the initial Jacobian, [-e^(-x_2 t_i), x_1 t_i e^(-x_2 t_i)], reduce to [-1, t_i]:

  J = [ -1  0 ]
      [ -1  1 ]
      [ -1  2 ]
      [ -1  3 ]

Course of iterations:
  x_1      x_2      ||r||_2^2
  1.000    0.000    2.390
  1.690   -0.610    0.212
  1.975   -0.930    0.007
  1.994   -1.004    0.002
  1.995   -1.009    0.002

(Note: the signs of x_2 in this table correspond to writing the model as x_1 e^(x_2 t); with the x_1 e^(-x_2 t) convention above, x_2 converges to about +1.009. The fitted decay rate and residual norm are the same either way.)
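Running the sketch above on these data reproduces the table up to the sign convention just noted:

```python
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 0.7, 0.3, 0.1])

x = gauss_newton([1.0, 0.0], t, y)
print(x, 2.0 * objective(x, t, y))
# Roughly x = [1.99, 1.01] and ||r||^2 about 0.002, matching the
# final row of the table (up to the sign of x_2).
```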
Levenberg-Marquardt
Let H = J^T J + kI and perform the Newton iteration. Until convergence:
  let s be the solution of (J(x)^T J(x) + kI) s = -J(x)^T r(x)
  set x = x + s
Rationale: if k is large, the step is essentially a gradient step, which is good far from the solution; and when the data are noisy, the kI term has a smoothing (regularizing) effect.
Project alert: how do you choose k?
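A sketch of a damped iteration in the same style as the code above; the doubling/halving rule for k below is just one simple illustrative heuristic (the slides deliberately leave the choice of k open as a project question):

```python
def levenberg_marquardt(x0, t, y, k=1.0, tol=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        J = jacobian(x, t)
        r = residual(x, t, y)
        # Damped step: (J^T J + k I) s = -J^T r.
        s = np.linalg.solve(J.T @ J + k * np.eye(len(x)), -J.T @ r)
        if objective(x + s, t, y) < objective(x, t, y):
            x = x + s            # step helped: accept it, reduce damping
            k /= 2.0
            if np.linalg.norm(s) < tol:
                break
        else:
            k *= 2.0             # step hurt: keep x, increase damping
    return x
```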
How to get derivatives of a difficult function:
- Automatic differentiation: differentiate the program itself (a hot topic and a good project)
- Numerical differentiation
- Bite the bullet and hope you can analytically differentiate the function accurately
Numerical differentiation of sin(1.0)
Using the forward-difference approximation derivative = (f(x+h) - f(x))/h: as h gets smaller, the truncation error decreases but the roundoff error increases. Choosing h becomes an art.
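A small experiment (a sketch, not the slides' code) makes the trade-off visible: the error of the forward difference for d/dx sin(x) at x = 1 first shrinks as h decreases and then grows again once roundoff dominates:

```python
import numpy as np

x, exact = 1.0, np.cos(1.0)          # true derivative of sin at x = 1
for h in [10.0 ** (-p) for p in range(1, 16, 2)]:
    approx = (np.sin(x + h) - np.sin(x)) / h
    print(f"h = {h:8.0e}   error = {abs(approx - exact):.2e}")
```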
Linear programming
Example: a company, which makes steel bands and steel coils, needs to allocate next week's 40 hours of time on a rolling mill.

                      Bands         Coils
Rate of production    200 tons/hr   140 tons/hr
Profit per ton        $25           $30
Orders                6000 tons     4000 tons

Make x tons of bands and y tons of coils to maximize 25x + 30y subject to
  x/200 + y/140 <= 40
  0 <= x <= 6000
  0 <= y <= 4000
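One way to check the answer numerically is with an off-the-shelf LP solver; this is a sketch using SciPy's linprog (not part of the slides), which minimizes, so the profit coefficients are negated:

```python
from scipy.optimize import linprog

c = [-25.0, -30.0]                        # maximize 25x + 30y
A_ub = [[1.0 / 200.0, 1.0 / 140.0]]       # mill hours used
b_ub = [40.0]                             # mill hours available
bounds = [(0.0, 6000.0), (0.0, 4000.0)]   # order limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)
# Optimum: 6000 tons of bands and 1400 tons of coils, profit $192,000.
```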
Linear programming: maximizing a linear function subject to linear constraints.
Quadratic programming: maximizing a quadratic function subject to linear constraints.
Mathematical programming: maximizing general functions subject to general constraints.
Approaches to linear programming
1. Simplex (Dantzig, 1940s): the solution lies on the boundary of the feasible region, so go from vertex to vertex, continuing to increase the objective. Each iteration involves solving a linear system, O(n^3) multiplications; but as one jumps to the next vertex the linear system loses one row and column and gains one row and column, so the factorization can be updated in O(n^2) multiplications (Golub/Bartels, 1970).
2. Karmarkar (1983): scale the steepest-ascent direction by the distance to the constraints and go almost to the boundary. This requires fewer iterations, and the structure of the system does not change.