Advanced Computer Graphics Optimization Part 2 Spring 2002 Professor Brogan

Simulated Annealing
Review of the Metropolis Procedure
– Evaluation function
– Current state
– Temperature (and annealing schedule)
– Change in state
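A minimal sketch of the Metropolis acceptance rule reviewed above (the function name and the toy usage in main are illustrative assumptions, not from the slides): a downhill change in the evaluation function is always taken, an uphill change is taken with probability exp(-dE/T).

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Metropolis step: given the current cost and a proposed cost,
   decide whether to move.  Returns 1 to accept, 0 to reject.   */
static int metropolis_accept(double e_current, double e_proposed, double T)
{
    if (e_proposed <= e_current)                    /* downhill: always accept  */
        return 1;
    double u = (double)rand() / RAND_MAX;           /* uniform sample in [0,1]  */
    return u < exp(-(e_proposed - e_current) / T);  /* uphill: accept sometimes */
}

int main(void)
{
    /* Toy usage: at T = 1.0, an uphill move of +0.5 is accepted ~60% of the time. */
    int accepted = 0;
    for (int i = 0; i < 10000; ++i)
        accepted += metropolis_accept(0.0, 0.5, 1.0);
    printf("accepted %d of 10000 uphill proposals\n", accepted);
    return 0;
}
```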

Computing New States
Lacking derivatives (and when the data isn't smooth)
– How you pick the new state doesn't matter much; random moves are fine
With derivatives
– If you already know which way is downhill (better), why risk losing that improvement by picking a new direction randomly?

Going Downhill
If you always use the derivative to go downhill
– The Metropolis step is ineffective (you always take the better answer)
– The process turns into a simple gradient descent algorithm

Gradient Descent
Follow the derivative downhill
– Performs poorly in narrow valleys
– And performs worse as the solution is approached
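A minimal gradient descent sketch for a 2D function (the elongated quadratic "valley" f, the fixed step size, and the iteration count are illustrative assumptions, not from the slides):

```c
#include <stdio.h>

/* Example objective: a narrow quadratic valley f(x,y) = x^2 + 10*y^2, minimum at (0,0). */
static double f(double x, double y)    { return x * x + 10.0 * y * y; }
static double dfdx(double x, double y) { (void)y; return 2.0 * x;      }
static double dfdy(double x, double y) { (void)x; return 20.0 * y;     }

int main(void)
{
    double x = 3.0, y = 2.0;            /* starting point              */
    double step = 0.04;                 /* fixed step size             */
    for (int i = 0; i < 200; ++i) {     /* follow the gradient downhill */
        x -= step * dfdx(x, y);
        y -= step * dfdy(x, y);
    }
    printf("minimum near (%g, %g), f = %g\n", x, y, f(x, y));
    return 0;
}
```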

Best of Both Worlds
Current state = simplex of N+1 points
– Not just one point
Envision the simplex with lines connecting the N+1 points
– In three dimensions it is a tetrahedron
The goal is for the minimum to be spanned by the points
At each iteration, replace one point with a lower-valued one

Simplex Method
The diameter of the simplex gets smaller as the iterations proceed
Stop when the diameter reaches the tolerance

Example
Find the minimum of f(x,y)
The simplex is a triangle (N+1 = 3)
Start with
– A = (x1, y1), B = (x2, y2), C = (x3, y3)
Reflection
– Compute a new point D = (x4, y4) that is the reflection of the highest point (A in this example) through the midpoint of the other two points
– D = B + C – A

Example
Expansion
– If f(D) is smaller than f(A), then the move was good
– Try going further in the same direction
– E = 2D – (B + C)/2

Example
Contraction
– If f(D) has about the same value as f(A), then find F and G
– F = (2A + B + C) / 4
– G = (2D + B + C) / 4
– The smaller-valued of F and G is kept

Example
Shrinkage
– If neither f(F) nor f(G) is smaller than f(A), then the side connecting A and C must move toward B in order to shrink the simplex
– H = (A + B) / 2
– I = (B + C) / 2
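A compact sketch of one iteration of the triangle example on the reflection, expansion, contraction, and shrinkage slides above; the point helpers, the test function, and the simple vertex relabeling are illustrative assumptions rather than the course's own code.

```c
#include <stdio.h>

typedef struct { double x, y; } Pt;

/* Illustrative test function: minimum at (1, -2). */
static double f(Pt p) { return (p.x - 1.0) * (p.x - 1.0) + (p.y + 2.0) * (p.y + 2.0); }

static Pt add(Pt a, Pt b)       { Pt r = { a.x + b.x, a.y + b.y }; return r; }
static Pt sub(Pt a, Pt b)       { Pt r = { a.x - b.x, a.y - b.y }; return r; }
static Pt scale(Pt a, double s) { Pt r = { a.x * s, a.y * s };     return r; }

/* One iteration of the triangle example.  A is assumed to be the worst
   (highest) vertex and B the best (lowest); the caller relabels them.   */
static void simplex_step(Pt *A, Pt *B, Pt *C)
{
    Pt M = scale(add(*B, *C), 0.5);          /* midpoint of the "good" side  */
    Pt D = sub(add(*B, *C), *A);             /* reflection: D = B + C - A    */

    if (f(D) < f(*A)) {
        Pt E = sub(scale(D, 2.0), M);        /* expansion: E = 2D - (B+C)/2  */
        *A = (f(E) < f(D)) ? E : D;          /* keep the better of D and E   */
        return;
    }

    /* Contraction: F on the A side, G on the D side of the midpoint.        */
    Pt F = scale(add(add(scale(*A, 2.0), *B), *C), 0.25);  /* (2A + B + C)/4 */
    Pt G = scale(add(add(scale(D,  2.0), *B), *C), 0.25);  /* (2D + B + C)/4 */
    Pt best = (f(F) < f(G)) ? F : G;
    if (f(best) < f(*A)) { *A = best; return; }

    /* Shrinkage: pull the A-C side toward the best vertex B.                */
    *A = scale(add(*A, *B), 0.5);            /* H = (A + B)/2                */
    *C = scale(add(*B, *C), 0.5);            /* I = (B + C)/2                */
}

int main(void)
{
    Pt A = { 4, 4 }, B = { 0, 0 }, C = { 4, -1 };
    for (int i = 0; i < 60; ++i) {
        /* Relabel so that A is the worst vertex and B the best.             */
        if (f(A) < f(B)) { Pt t = A; A = B; B = t; }
        if (f(C) < f(B)) { Pt t = C; C = B; B = t; }
        if (f(C) > f(A)) { Pt t = C; C = A; A = t; }
        simplex_step(&A, &B, &C);
    }
    printf("best vertex near (%g, %g)\n", B.x, B.y);
    return 0;
}
```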

Back to Simulated Annealing
Remember that our state is the simplex of N+1 points
We add a positive, logarithmically distributed random variable, proportional to the temperature T, to the stored function value of each point in the state
We subtract a similar random variable from the function value of each new point that is tried as a replacement point

Simulated Annealing
As the temperature goes to 0, this becomes the plain simplex method
At other T values
– Brownian motion of the simplex shape while sampling new, random points
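A minimal sketch of that thermal fluctuation (following the idea the slides attribute to Numerical Recipes; the helper names and the toy usage are illustrative): the stored vertex value is perturbed upward and the trial value downward by positive, logarithmically distributed amounts proportional to T, and the ordinary "is the new point better?" comparison is made on the perturbed values.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Positive, logarithmically distributed fluctuation proportional to T. */
static double thermal_noise(double T)
{
    double u = ((double)rand() + 1.0) / ((double)RAND_MAX + 2.0); /* u in (0,1) */
    return -T * log(u);
}

/* Should a trial point replace a stored simplex vertex?
   At T = 0 this reduces to the ordinary simplex comparison. */
static int accept_replacement(double f_stored, double f_trial, double T)
{
    double stored_fluct = f_stored + thermal_noise(T); /* stored value pushed up  */
    double trial_fluct  = f_trial  - thermal_noise(T); /* trial value pulled down */
    return trial_fluct < stored_fluct;
}

int main(void)
{
    /* A slightly worse trial point is still sometimes accepted at T > 0. */
    int accepted = 0;
    for (int i = 0; i < 10000; ++i)
        accepted += accept_replacement(1.0, 1.2, 0.5);
    printf("accepted %d of 10000 slightly-uphill trials at T = 0.5\n", accepted);
    return 0;
}
```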

Want to read more?
Numerical Recipes in C
Source code can be found online too

Golden Search in 1D
The root of a function (where it equals 0) can be found when two points bracket it
– One point has value > 0
– The other point has value < 0
– Somewhere in the middle, the function has value = 0
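For comparison with the minimization scheme that follows, a minimal bisection root-finding sketch (the function and bracket are illustrative): keep whichever half of the bracket still has endpoints of opposite sign.

```c
#include <stdio.h>

static double g(double x) { return x * x - 2.0; }   /* root at sqrt(2) */

int main(void)
{
    double lo = 0.0, hi = 2.0;          /* g(lo) < 0 < g(hi): a bracket */
    while (hi - lo > 1e-10) {
        double mid = 0.5 * (lo + hi);
        if (g(lo) * g(mid) <= 0.0)      /* sign change in [lo, mid]?    */
            hi = mid;
        else
            lo = mid;
    }
    printf("root near %.10f\n", 0.5 * (lo + hi));
    return 0;
}
```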

Finding Min is Different
Three points are needed to bracket a minimum
– f(a) > f(b)
– f(c) > f(b)
– The minimum is certainly between a and c
Similar to root-finding algorithms like the bisection method
– Choose a new point x that is between a and b or between b and c

Updating Points
Try a point x between b and c
If f(x) < f(b)
– Points = (b, x, c)
Else
– Points = (a, b, x)
Continue until the distance between the outer points is small
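A minimal golden-section sketch of this update rule (the test function, starting bracket, and tolerance are illustrative assumptions); the trial point is always placed a fraction w ≈ 0.38197 into the larger of the two segments, which is where the golden mean slides below come from.

```c
#include <math.h>
#include <stdio.h>

static double f1(double x) { return (x - 1.5) * (x - 1.5) + 0.25; }  /* min at x = 1.5 */

int main(void)
{
    const double w = 0.5 * (3.0 - sqrt(5.0));   /* golden fraction, ~0.38197 */
    double a = 0.0, c = 4.0;                    /* outer bracket             */
    double b = a + w * (c - a);                 /* interior point            */
    double tol = 1e-4;                          /* see the tolerance slide   */

    while (c - a > tol * (fabs(a) + fabs(c))) {
        /* Place the trial point x a fraction w into the larger segment. */
        double x = (c - b > b - a) ? b + w * (c - b)
                                   : b - w * (b - a);
        if (f1(x) < f1(b)) {
            /* x becomes the new interior point: (b, x, c) or (a, x, b). */
            if (x > b) a = b; else c = b;
            b = x;
        } else {
            /* b stays interior; x becomes a new outer point: (a, b, x) or (x, b, c). */
            if (x > b) c = x; else a = x;
        }
    }
    printf("minimum near x = %.6f\n", b);
    return 0;
}
```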

Tolerance
What is a small enough span? (1 – e)b < b < (1 + e)b
– Where e = 3x10^-8 (float machine precision; smaller still for double)
– The shape of f(x) near b is given by its Taylor series
– Thus the second term is tiny compared to the first and will act just like 0 when added to it
– So keep e relatively large, about 1x10^-4 for float
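The reasoning behind that rule of thumb, sketched as a short derivation (this is the standard Numerical Recipes argument; the slide only states the conclusion):

```latex
f(x) \approx f(b) + \tfrac{1}{2} f''(b)\,(x-b)^2,
\qquad
\tfrac{1}{2} f''(b)\,(x-b)^2 < \varepsilon\,|f(b)|
\;\;\text{whenever}\;\;
|x-b| < \sqrt{\varepsilon}\;|b|\,\sqrt{\frac{2\,|f(b)|}{b^{2} f''(b)}}
\;\sim\; \sqrt{\varepsilon}\,|b| .
```

With single-precision e ≈ 3x10^-8, sqrt(e) ≈ 1.7x10^-4, so asking the bracket to shrink below roughly 10^-4 of |b| buys nothing, which is why the tolerance is kept near 1x10^-4.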

Golden Mean
Where should the new intermediate point be for (a, b, c)?
Let b be a fraction w of the way between a and c: w = (b – a) / (c – a)
Suppose the next trial point x is a fraction z beyond b: z = (x – b) / (c – a)
The next bracketing segment will be of length either w + z or 1 – w
To minimize the worst case, select z so these two are equal: z = 1 – 2w

Golden Mean
The resulting point is symmetric
– |b – a| = |x – c|
x will be in the second (larger) portion if w < 1/2
Because w was actually selected using this same rule in the previous step
– x (at distance z beyond b) should be the same fraction of the way from b to c (if that was the bigger segment) as b was from a to c (= w)
– z / (1 – w) = w
Solve for w = (3 – sqrt(5)) / 2 ≈ 0.38197 (the golden mean)
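The algebra behind that last step, reconstructed (only the two relations on the slide are used):

```latex
z = 1 - 2w, \qquad \frac{z}{1-w} = w
\;\Longrightarrow\; \frac{1 - 2w}{1 - w} = w
\;\Longrightarrow\; w^{2} - 3w + 1 = 0
\;\Longrightarrow\; w = \frac{3 - \sqrt{5}}{2} \approx 0.38197 .
```

The complementary fraction 1 – w ≈ 0.61803 is the reciprocal of the golden ratio, which is where the name golden-section (or golden-mean) search comes from.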