Newton’s method for finding local minima
Class XII
Last time:
- We began the problem of finding minima of a function of several variables.
- Protein folding as an example of an optimization problem.
- Local vs. global minima.
- The notion of constrained minimization: the variable space is restricted to a smaller subspace.
Simplest derivation. If F(x) has a minimum at x_m, then F’(x_m) = 0.
Solve F’(x) = 0 using Newton’s method for finding roots. Recall that for Z(x) = 0, Newton’s iterations are x_{n+1} = x_n + h, where h = -Z(x_n)/Z’(x_n). Now substitute F’(x) for Z(x) to get h = -F’(x_n)/F’’(x_n). So, Newton’s iterations for minimization are:
x_{n+1} = x_n - F’(x_n)/F’’(x_n)
Rationalize Newton’s Method
Let’s Taylor expand F(x) around its minimum x_m:
F(x) = F(x_m) + F’(x_m)(x - x_m) + (1/2!) F’’(x_m)(x - x_m)^2 + (1/3!) F’’’(x_m)(x - x_m)^3 + …
Neglect the higher-order terms for |x - x_m| << 1 (you must start with a good initial guess anyway). Then
F(x) ≈ F(x_m) + F’(x_m)(x - x_m) + (1/2) F’’(x_m)(x - x_m)^2
But recall that we are expanding around a minimum, so F’(x_m) = 0. Then
F(x) ≈ F(x_m) + (1/2) F’’(x_m)(x - x_m)^2
So, most functions look like a parabola around their local minima!
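This claim is easy to check numerically (a sketch, not from the slides; the test function F(x) = cos(x), which has a minimum at x_m = π with F’’(x_m) = 1, is my own choice):

```python
import math

# F(x) = cos(x) has a local minimum at x_m = pi, where F(x_m) = -1,
# F'(x_m) = 0 and F''(x_m) = -cos(pi) = 1.
x_m = math.pi

def F(x):
    return math.cos(x)

def parabola(x):
    # quadratic Taylor approximation around the minimum:
    # F(x) ~ F(x_m) + (1/2) F''(x_m) (x - x_m)^2
    return F(x_m) + 0.5 * 1.0 * (x - x_m) ** 2

for dx in (0.5, 0.1, 0.01):
    print(dx, abs(F(x_m + dx) - parabola(x_m + dx)))
```

The printed error shrinks rapidly as dx → 0: near the minimum the function is indistinguishable from the fitted parabola.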
Approximation for finding position of the minimum
F(x) ≈ F(x_m) + (1/2) F’’(x_m)(x - x_m)^2. Approximate F(x) ≈ f(x), where
f(x) = b + a (x - x_m1)^2
and then differentiate f(x):
f’(x) = 2a (x - x_m1)
From here, x_m1 - x = -f’(x)/(2a). Now evaluate f(x) and its derivatives f’(x) and f’’(x) at x = x_0; comparing the terms of F(x) and f(x), we approximate a ≈ (1/2) f’’(x_0). This lets us estimate the position of the minimum x_m1:
x_m1 = x_0 - f’(x_0)/f’’(x_0)
x_{n+1} = x_n - F’(x_n)/F’’(x_n)
Replacing f(x) by F(x) (recall that F(x) ≈ f(x)), we obtain an iterative procedure:
x_{n+1} = x_n - F’(x_n)/F’’(x_n)
where each estimated minimum x_{n+1} of the quadratic approximation F(x) ≈ f(x) = b + a (x - x_m1)^2 becomes the starting point for the next approximation to the minimum of F(x).
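The iterative procedure above can be sketched in a few lines (a minimal illustration; the function name `newton_min`, the stopping rule, and the test function F(x) = x² + cos(x), whose minimum is at x = 0, are my own choices):

```python
import math

def newton_min(dF, d2F, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - F'(x_n)/F''(x_n) until the step is tiny.

    dF, d2F -- first and second derivatives of the function F being minimized.
    """
    x = x0
    for _ in range(max_iter):
        step = dF(x) / d2F(x)
        x = x - step
        if abs(step) < tol * max(1.0, abs(x)):
            break
    return x

# Example: F(x) = x**2 + cos(x), so F'(x) = 2x - sin(x), F''(x) = 2 - cos(x);
# the minimum is at x = 0.
x_min = newton_min(lambda x: 2 * x - math.sin(x),
                   lambda x: 2 - math.cos(x),
                   x0=1.0)
print(x_min)  # very close to 0
```

Note that only derivatives of F enter the iteration; F itself is never evaluated.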
Newton’s Method = Approximate the function with a parabola, iterate.
(Figures: Newton’s Method illustrated by successive parabolic approximations converging to the minimum x_m.)
Newton’s method. More informative derivation.
(Math 685/CSI 700, Spring 08, George Mason University, Department of Mathematical Sciences.)
Newton’s method for finding a minimum has QUADRATIC (fast) convergence for many (but not all!) functions, but it must be started close enough to the solution to converge.
Possible problems of Newton’s method minimization
x_{n+1} = x_n - F’(x_n)/F’’(x_n)
There is also the possibility that F’’(x_m) = 0 at the minimum. In this case the higher-order terms of the Taylor series, (1/3!) F’’’(x_m)(x - x_m)^3 + …, determine the behavior of F(x) near the minimum x_m, and the fast quadratic convergence is lost.
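To see the degradation concretely (my own example, not from the slides): for F(x) = x^4 we have F’’(0) = 0, and the Newton step shrinks the error only by a constant factor per iteration:

```python
# F(x) = x**4: F'(x) = 4x**3, F''(x) = 12x**2, so F''(0) = 0 at the minimum.
# The Newton step becomes x - 4x**3/(12x**2) = (2/3)*x: the convergence is
# linear, not quadratic.
x = 1.0
for _ in range(20):
    x = x - (4 * x**3) / (12 * x**2)
print(x)  # (2/3)**20, roughly 3e-4: still far from 0 after 20 iterations
```

Compare this with the quadratic case, where 20 iterations would reach machine precision many times over.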
Example
Convergence. If x_0 is chosen well (close to the minimum) and the function is close to quadratic near the minimum, the convergence is fast and quadratic. E.g.: |x_0 - x_m| < 0.1; next iteration, |x_1 - x_m| < (0.1)^2; next, |x_2 - x_m| < ((0.1)^2)^2 = 10^-4; next, 10^-8, etc.
Convergence is NOT guaranteed; there are the same pitfalls as with locating a root (run-away, cycles, failure to converge to x_m, etc.). Sometimes convergence may be very poor even for “nice-looking”, smooth functions! Combining Newton’s method with slower but more robust methods such as the Golden Section search often helps.
No numerical method will work well for truly ill-conditioned (pathological) cases. At best, expect significant errors; at worst, a completely wrong answer!
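The error squaring can be observed directly (a sketch; the test function x² + cos(x) and the starting point are my own choices, and for this particular symmetric function the convergence is in fact even faster than quadratic):

```python
import math

# Minimize F(x) = x**2 + cos(x); the minimum is x_m = 0, so |x_n| is the error.
x = 0.5
errors = []
for _ in range(4):
    x = x - (2 * x - math.sin(x)) / (2 - math.cos(x))  # Newton step
    errors.append(abs(x))
print(errors)  # each error is at most roughly the square of the previous one
```

After three or four iterations the error is already at the level of machine precision.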
Choosing stopping criteria
The usual: |x_{n+1} - x_n| < TOL, or, better, |x_{n+1} - x_n| < TOL*|x_{n+1} + x_n|/2, which ensures that TOL bounds the relative accuracy. Generally, TOL ~ sqrt(ε_mach), or larger if full theoretical precision is not needed, but not smaller. TOL cannot be smaller because, in general, F(x) is quadratic around the minimum x_m, so when you are h < sqrt(ε_mach) away from x_m, |F(x_m) - F(x_m + h)| < ε_mach*|F(x_m)|, and you can no longer distinguish between values of F(x) that close to the minimum x_m.
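In double precision this works out as follows (a sketch; the helper name `converged` is my own):

```python
import math
import sys

eps_mach = sys.float_info.epsilon   # about 2.22e-16 for IEEE-754 doubles
TOL = math.sqrt(eps_mach)           # about 1.49e-8: the practical floor for TOL

def converged(x_new, x_old, tol=TOL):
    # relative stopping test: |x_{n+1} - x_n| < TOL * |x_{n+1} + x_n| / 2
    return abs(x_new - x_old) < tol * abs(x_new + x_old) / 2

print(TOL)
print(converged(1.0, 1.0 + 1e-10))  # True: well inside the tolerance
print(converged(1.0, 1.1))          # False: still far apart
```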
The stopping criterion: more details
To be more specific: we must stop the iterations when |F(x_n) - F(x_m)| ~ ε_mach*|F(x_n)|, since beyond that point we can no longer tell F(x_n) from F(x_m), and further iterations are meaningless. Since |F(x_n) - F(x_m)| ≈ (1/2)|F’’(x_m)| (x_n - x_m)^2, this smallest meaningful distance is
|x_n - x_m| ~ sqrt(ε_mach) * sqrt(|2F(x_n)/F’’(x_m)|) ≈ sqrt(ε_mach) * sqrt(|2F(x_n)/F’’(x_n)|) = sqrt(ε_mach) * |x_n| * sqrt(|2F(x_n)/(x_n^2 F’’(x_n))|).
For many functions the last square-root factor is ~1, which gives us the rule from the previous slide: stop when |x_{n+1} - x_n| < sqrt(ε_mach) * |x_{n+1} + x_n|/2.
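This resolution limit is easy to demonstrate (my own example: F(x) = cos(x) near its minimum at x_m = π, where |F(x_m)| = 1):

```python
import math
import sys

eps_mach = sys.float_info.epsilon
h = math.sqrt(eps_mach)   # about 1.49e-8
x_m = math.pi             # minimum of F(x) = cos(x), F(x_m) = -1

# A step of size sqrt(eps_mach) changes F by only ~ h**2/2 = eps_mach/2,
# i.e. by about one unit in the last place of F(x_m): the two values are
# numerically indistinguishable.
diff = abs(math.cos(x_m + h) - math.cos(x_m))
print(diff)  # of order eps_mach (~1e-16), possibly exactly 0
```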