Simulated Annealing.


Simulated Annealing

Iterative Improvement 1
A general method to solve combinatorial optimization problems.
Principle:
- Start with an initial configuration
- Repeatedly search the neighborhood and select a neighbor as candidate
- Evaluate some cost function (or fitness function) and accept the candidate if it is "better"; if not, select another neighbor
- Stop if the quality is sufficiently high, if no improvement can be found, or after some fixed time

Iterative Improvement 2
Needed are:
- A method to generate an initial configuration
- A transition or generation function to find a neighbor as the next candidate
- A cost function
- An evaluation criterion
- A stop criterion

Iterative Improvement 3
Simple iterative improvement, or hill climbing:
- A candidate is accepted only if its cost is lower (or its fitness is higher) than that of the current configuration
- Stop when no neighbor with lower cost (higher fitness) can be found
Disadvantages:
- A local optimum as best result
- The local optimum found depends on the initial configuration
- Generally no upper bound on the number of iterations
A minimal hill-climbing sketch is given below.
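The sketch below illustrates this simple iterative improvement on a toy problem; the cost function, the integer neighborhood, and the helper names are assumptions for illustration only, not part of the slides.

```python
# Minimal hill-climbing sketch (illustrative only): minimize a toy quadratic
# cost over integer states by always moving to the best neighbor.
import random

def cost(x):
    return (x - 7) ** 2          # toy cost function with its minimum at x = 7

def neighbors(x):
    return [x - 1, x + 1]        # simple neighborhood: adjacent integers

def hill_climb(x0, max_steps=1000):
    x = x0
    for _ in range(max_steps):
        best = min(neighbors(x), key=cost)
        if cost(best) >= cost(x):  # no better neighbor: local optimum reached
            return x
        x = best
    return x

print(hill_climb(random.randint(-100, 100)))   # prints 7 for this toy cost
```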

Hill climbing

Convergence of simulated annealing

How to cope with the disadvantages
- Repeat the algorithm many times with different initial configurations
- Use information gathered in previous runs
- Use a more complex generation function to jump out of a local optimum
- Use a more complex evaluation criterion that sometimes (randomly) also accepts solutions away from the (local) optimum

Simulated Annealing
Use a more complex evaluation function:
- Sometimes accept candidates with higher cost in order to escape from a local optimum
- Adapt the parameters of this evaluation function during execution
- Based upon the analogy with the simulation of the annealing of solids

Other Names
- Monte Carlo annealing
- Statistical cooling
- Probabilistic hill climbing
- Stochastic relaxation
- Probabilistic exchange algorithm

Analogy
Slowly cool down a heated solid, so that all particles arrange themselves in the ground energy state.
At each temperature, wait until the solid reaches thermal equilibrium.
The probability of being in a state with energy E is then:
Pr{ E = E } = 1/Z(T) · exp(−E / (k_B · T))
- E: energy
- T: temperature
- k_B: Boltzmann constant
- Z(T): normalization factor (temperature dependent)
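To make this distribution concrete, the sketch below computes Boltzmann probabilities for a few discrete energy levels at two temperatures; the energy values are invented for illustration and k_B is absorbed into T.

```python
# Illustrative sketch of the Boltzmann distribution over a few discrete states.
import math

def boltzmann_probs(energies, T):
    weights = [math.exp(-E / T) for E in energies]
    Z = sum(weights)                     # normalization factor Z(T)
    return [w / Z for w in weights]

energies = [0.0, 1.0, 2.0, 5.0]
print(boltzmann_probs(energies, T=10.0))  # high T: probabilities are nearly uniform
print(boltzmann_probs(energies, T=0.1))   # low T: mass concentrates on the lowest energy
```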

Simulation of cooling (Metropolis 1953)
At a fixed temperature T:
- Perturb (randomly) the current state to a new state
- ΔE is the difference in energy between the current and the new state
- If ΔE < 0 (the new state is lower), accept the new state as the current state
- If ΔE ≥ 0, accept the new state with probability Pr(accepted) = exp(−ΔE / (k_B · T))
Eventually the system evolves into thermal equilibrium at temperature T; then the formula mentioned before holds.
When equilibrium is reached, the temperature T can be lowered and the process repeated.
A sketch of this acceptance rule follows below.
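A minimal sketch of the Metropolis acceptance rule, assuming k_B is absorbed into the temperature; the parameter values in the example are arbitrary.

```python
# Metropolis acceptance rule at a fixed temperature T.
# delta_e is the energy difference between the proposed and the current state.
import math
import random

def metropolis_accept(delta_e, T):
    if delta_e < 0:                                    # downhill move: always accept
        return True
    return random.random() < math.exp(-delta_e / T)    # uphill: accept with Boltzmann probability

# Example: an uphill move of +2.0 is accepted often at T = 10 and rarely at T = 0.5
print(sum(metropolis_accept(2.0, 10.0) for _ in range(10000)) / 10000)
print(sum(metropolis_accept(2.0, 0.5) for _ in range(10000)) / 10000)
```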

Performance
- SA is a general solution method that is easily applicable to a large number of problems
- "Tuning" of the parameters (initial temperature, its decrement, stop criterion) is relatively easy
- Generally the quality of the results of SA is good, although it can take a lot of time
- Results are generally not reproducible: another run can give a different result
- SA can leave an optimal solution and not find it again (so remember the best solution found so far)
- SA is proven to find the optimum under certain conditions; one of these conditions is that you must run forever

Theoretical Results
- If the temperature T is kept constant, the transition matrix P_ij is independent of the iteration number; this corresponds to a homogeneous Markov chain
- The stationary distribution q(i) corresponds to the Boltzmann distribution
- The limit as T → 0 of this stationary distribution is a uniform distribution over the set of optimal solutions
- Hence the SA algorithm converges to the set of globally optimal solutions
- However, this result does not tell us anything about the number of iterations required to achieve convergence

Algorithm
1. Select an objective function E(x);
2. Select an initial temperature T > 0;
3. Set the temperature change counter t = 0;
4. Repeat
   4.1 Set the repetition counter n = 0;
   4.2 Repeat
       4.2.1 Generate state x_{i+1}, a neighbor of x_i;
       4.2.2 Calculate ΔE = E(x_{i+1}) − E(x_i);
       4.2.3 If ΔE < 0 then x_i = x_{i+1};
       4.2.4 else if random(0,1) < exp(−ΔE / T) then x_i = x_{i+1};
       4.2.5 n = n + 1;
   4.3 Until n = r(t);
   4.4 t = t + 1;
   4.5 T = T(t);
   Until the stopping criterion is true.
A runnable version of this loop is sketched below.
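A runnable sketch of the algorithm above on a toy 1-D minimization problem; the objective, the Gaussian neighborhood, and the schedule choices (geometric cooling, a fixed number of repetitions per temperature, a "frozen" stopping temperature) are assumptions for illustration, not prescriptions from the slides.

```python
import math
import random

def simulated_annealing(E, neighbor, x0, T0=10.0, alpha=0.9,
                        reps_per_temp=100, T_min=1e-3):
    x, T = x0, T0
    best = x
    while T > T_min:                        # stopping criterion: system is "frozen"
        for _ in range(reps_per_temp):      # inner loop: n = 0 .. r(t)
            x_new = neighbor(x)             # generate a neighboring state
            dE = E(x_new) - E(x)            # ΔE = E(x_{i+1}) − E(x_i)
            if dE < 0 or random.random() < math.exp(-dE / T):
                x = x_new                   # accept downhill moves, uphill with Boltzmann probability
            if E(x) < E(best):              # remember the best solution seen so far
                best = x
        T *= alpha                          # temperature update T = T(t)
    return best

# Toy example: minimize (x - 3)^2 with Gaussian perturbations as neighbors
E = lambda x: (x - 3.0) ** 2
neighbor = lambda x: x + random.gauss(0, 1)
print(simulated_annealing(E, neighbor, x0=random.uniform(-50, 50)))
```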

Kirkpatrick et al. (1983): Generic Choices
- Set the initial value of T large enough
- T(t+1) = α · T(t), with α: 0.8–0.99
- N(t): a sufficient number of transitions corresponding to thermal equilibrium, either constant or proportional to the size of the neighborhood
- Stopping criterion: when the solution obtained at each temperature change is unaltered for a number of consecutive temperature changes (T … T/10)

SA Cooling Schedule
- Starting temperature
- Final temperature
- Temperature decrement
- Iterations at each temperature

SA Cooling Schedule - Starting Temperature
- The starting temperature must be hot enough to allow moves to almost any neighbourhood state (otherwise we are in danger of simply implementing hill climbing)
- It must not be so hot that we conduct a random search for a period of time
- The problem is finding a suitable starting temperature

SA Cooling Schedule - Final Temperature
- It is usual to let the temperature decrease until it reaches zero
- However, this can make the algorithm run for a lot longer
- Therefore, the stopping criterion can either be a suitably low temperature or the point at which the system is "frozen" at the current temperature (i.e. no better or worse moves are being accepted)

SA Cooling Schedule - Temperature Decrement
- Theory states that we should allow enough iterations at each temperature so that the system stabilises at that temperature
- Unfortunately, theory also states that the number of iterations at each temperature needed to achieve this might be exponential in the problem size
- In practice we compromise: a large number of iterations at a few temperatures, a small number of iterations at many temperatures, or a balance between the two

SA Cooling Schedule - Temperature Decrement
- Linear: temp = temp − x
- Geometric: temp = temp · α
- Experience has shown that α should be between 0.8 and 0.99, with better results found at the higher end of the range
- Of course, the higher the value of α, the longer it will take to decrement the temperature to the stopping criterion

SA Cooling Schedule - Iterations at Each Temperature
- One choice is a constant number of iterations at each temperature
- Another is the formula used by Lundy, t = t/(1 + βt), where β is a suitably small value; the temperature then decreases very slowly from one iteration to the next
- The decrement rules discussed above are sketched in code below
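A small sketch of the temperature decrement rules from the preceding slides; the parameter values (x, alpha, beta) are arbitrary examples, not recommendations from the slides.

```python
# Illustrative temperature decrement rules.
def linear_decrement(temp, x=0.1):
    return temp - x                  # temp = temp - x

def geometric_decrement(temp, alpha=0.95):
    return temp * alpha              # temp = temp * alpha, alpha typically 0.8..0.99

def lundy_decrement(temp, beta=0.001):
    return temp / (1 + beta * temp)  # t = t / (1 + beta * t), beta suitably small

T = 100.0
for _ in range(5):
    T = geometric_decrement(T)
print(T)   # 100 * 0.95**5 ≈ 77.38
```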

Conclusions
- Simulated annealing is very easy to implement
- It can be applied to a wide range of problems
- SA can provide high-quality solutions to many problems
- Care is needed to devise an appropriate neighborhood structure and cooling schedule in order to obtain an efficient algorithm