Simulated Annealing
Van Laarhoven, Aarts. Version 1, October 2000


Simulated Annealing

Iterative Improvement 1
A general method for solving combinatorial optimization problems.
Principle:
- Start with an initial configuration
- Repeatedly search the neighborhood and select a neighbor as candidate
- Evaluate a cost function (or fitness function) and accept the candidate if it is "better"; if not, select another neighbor
- Stop if the quality is sufficiently high, if no improvement can be found, or after some fixed time

Iterative Improvement 2
Needed are:
- A method to generate an initial configuration
- A transition or generation function to find a neighbor as the next candidate
- A cost function
- An evaluation criterion
- A stop criterion

Iterative Improvement 3
Simple iterative improvement, or hill climbing:
- A candidate is accepted if and only if its cost is lower (or its fitness higher) than that of the current configuration
- Stop when no neighbor with lower cost (higher fitness) can be found
Disadvantages:
- The best result is only a local optimum
- Which local optimum is found depends on the initial configuration
- In general there is no upper bound on the iteration length

Hill climbing
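The simple iterative improvement loop described above can be sketched in Python. This is a minimal illustration, assuming a toy cost function and a single-bit-flip neighborhood that are not part of the slides:

```python
import random

def hill_climb(cost, initial, neighbors, max_iter=10_000):
    """Simple iterative improvement: accept a neighbor only if it is strictly better."""
    current = initial
    current_cost = cost(current)
    for _ in range(max_iter):
        improved = False
        for cand in neighbors(current):
            c = cost(cand)
            if c < current_cost:          # accept only strictly better candidates
                current, current_cost = cand, c
                improved = True
                break
        if not improved:                  # local optimum: no better neighbor exists
            break
    return current, current_cost

# Toy problem (illustrative): minimize the number of 1-bits in a bit string.
def cost(x):
    return sum(x)

def neighbors(x):
    # All single bit-flip neighbors of x.
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        yield tuple(y)

random.seed(0)
start = tuple(random.randint(0, 1) for _ in range(8))
best, best_cost = hill_climb(cost, start, neighbors)
```

On this toy cost function every local optimum is the global one, so hill climbing reaches the all-zeros string; on rugged cost landscapes it stops at the first local optimum, which is exactly the disadvantage listed above.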

How to cope with these disadvantages
- Repeat the algorithm many times with different initial configurations
- Use information gathered in previous runs
- Use a more complex generation function to jump out of local optima
- Use a more complex evaluation criterion that sometimes (randomly) also accepts solutions away from the (local) optimum

Simulated Annealing
Use a more complex evaluation function:
- Sometimes accept candidates with higher cost, to escape from local optima
- Adapt the parameters of this evaluation function during execution
Based on an analogy with the simulation of the annealing of solids.

Other Names
- Monte Carlo Annealing
- Statistical Cooling
- Probabilistic Hill Climbing
- Stochastic Relaxation
- Probabilistic Exchange Algorithm

Analogy
Slowly cool down a heated solid, so that all particles arrange themselves in the ground energy state.
At each temperature, wait until the solid reaches thermal equilibrium.
The probability of being in a state i with energy E_i is then:
  Pr{E = E_i} = 1/Z(T) * exp(-E_i / (k_B * T))
where:
- E_i : energy of state i
- T : temperature
- k_B : Boltzmann constant
- Z(T) : normalization factor (temperature dependent)

Simulation of cooling (Metropolis, 1953)
At a fixed temperature T:
- Perturb (randomly) the current state into a new state
- Let ΔE be the difference in energy between the new and the current state
- If ΔE < 0 (the new state is lower in energy), accept the new state as the current state
- If ΔE ≥ 0, accept the new state with probability Pr(accepted) = exp(-ΔE / (k_B * T))
Eventually the system evolves into thermal equilibrium at temperature T; the formula mentioned before then holds.
When equilibrium is reached, the temperature T can be lowered and the process repeated.
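The Metropolis acceptance rule above can be written as a small Python function. As is usual in optimization, the Boltzmann constant k_B is folded into the temperature here; the function name and signature are illustrative assumptions:

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng=random):
    """Metropolis criterion: always accept downhill moves; accept uphill
    moves with probability exp(-delta_e / temperature)."""
    if delta_e < 0:
        return True
    return rng.random() < math.exp(-delta_e / temperature)
```

At a very high temperature nearly every uphill move is accepted; as the temperature approaches zero, only improving moves survive, and the rule degenerates into plain hill climbing.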

Simulated Annealing
The same algorithm can be used for combinatorial optimization problems:
- The energy E corresponds to the cost function C
- The temperature T corresponds to the control parameter c
  Pr{configuration = i} = 1/Q(c) * exp(-C(i) / c)
where:
- C(i) : cost of configuration i
- c : control parameter
- Q(c) : normalization factor (not important in practice)

Homogeneous Algorithm

initialize;
REPEAT
  REPEAT
    perturb(config. i → config. j, ΔCij);
    IF ΔCij < 0 THEN accept
    ELSE IF exp(-ΔCij / c) > random[0,1) THEN accept;
    IF accept THEN update(config. j);
  UNTIL equilibrium is approached sufficiently closely;
  c := next_lower(c);
UNTIL system is frozen or stop criterion is reached
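The homogeneous scheme can be turned into runnable Python. This is a sketch, not the authors' implementation: the toy problem (minimizing Hamming distance to a hidden bit string), the parameter values, and the geometric next_lower are all assumptions made for the demo:

```python
import math
import random

def simulated_annealing(cost, initial, perturb, c0=10.0, alpha=0.8,
                        chain_length=100, c_min=1e-3, rng=None):
    """Homogeneous simulated annealing: a fixed-temperature inner loop
    (one homogeneous Markov chain per value of c), cooling in the outer loop."""
    rng = rng or random.Random()
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost      # remember the best-so-far explicitly
    c = c0
    while c > c_min:                             # outer loop: lower c until "frozen"
        for _ in range(chain_length):            # inner loop: c held constant
            cand = perturb(current, rng)
            delta = cost(cand) - current_cost
            if delta < 0 or rng.random() < math.exp(-delta / c):
                current, current_cost = cand, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        c *= alpha                               # next_lower(c): geometric decrement
    return best, best_cost

# Toy combinatorial problem: minimize Hamming distance to a hidden target.
TARGET = (1, 0, 1, 1, 0, 0, 1, 0)

def cost(x):
    return sum(a != b for a, b in zip(x, TARGET))

def perturb(x, rng):
    i = rng.randrange(len(x))                    # flip one randomly chosen bit
    y = list(x)
    y[i] = 1 - y[i]
    return tuple(y)

rng = random.Random(42)
start = tuple(rng.randint(0, 1) for _ in range(8))
best, best_cost = simulated_annealing(cost, start, perturb, rng=rng)
```

With these settings the schedule runs roughly forty temperature levels; at the final, tiny values of c the acceptance test is effectively greedy, so the toy problem is solved to optimality.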

Inhomogeneous Algorithm
- The previous algorithm is the homogeneous variant: c is kept constant in the inner loop and is decreased only in the outer loop
- The alternative is the inhomogeneous variant: there is only one loop; c is decreased each time through the loop, but only very slightly

Parameters
- Choose the start value of c so that in the beginning nearly all perturbations are accepted (exploration), but not so large that run times become excessive
- The function next_lower in the homogeneous variant is generally a simple function that decreases c, e.g. to a fixed fraction (say 80%) of the current c
- At the end, c is so small that only a very small number of perturbations are accepted (exploitation)
- If possible, always remember explicitly the best solution found so far; the algorithm itself can leave its best solution and never find it again
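One common way to realize these guidelines (an assumed convention, not prescribed by the slides) is to derive the start value c0 from sampled uphill cost differences so that the initial acceptance ratio hits a target, and to use a geometric next_lower:

```python
import math

def initial_temperature(uphill_deltas, target_acceptance=0.8):
    """Choose c0 so that an average uphill move is accepted with roughly the
    target probability: exp(-mean_delta / c0) == target_acceptance."""
    mean_delta = sum(uphill_deltas) / len(uphill_deltas)
    return -mean_delta / math.log(target_acceptance)

def next_lower(c, alpha=0.8):
    """Geometric cooling: keep a fixed fraction (here 80%) of the current c."""
    return alpha * c

# Illustrative sampled uphill cost increases from a few random perturbations.
c0 = initial_temperature([2.0, 4.0, 6.0])
```

The sampled deltas would in practice come from a short pilot run of random perturbations before annealing starts.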

Markov Chains 1
- Markov chain: a sequence of trials where the outcome of each trial depends only on the outcome of the previous one
- A Markov chain is described by a set of conditional probabilities P_ij(k-1, k): the probability that the outcome of the k-th trial is j, given that the outcome of trial k-1 is i
- A Markov chain is homogeneous when these probabilities do not depend on k

Markov Chains 2
- When c is kept constant (homogeneous variant), the probabilities do not depend on k, and for each value of c there is one homogeneous Markov chain
- When c is not constant (inhomogeneous variant), the probabilities do depend on k, and there is one inhomogeneous Markov chain

Performance
- SA is a general solution method that is easily applicable to a large number of problems
- "Tuning" the parameters (initial c, decrement of c, stop criterion) is relatively easy
- The quality of the results of SA is generally good, although obtaining them can take a lot of time
- Results are generally not reproducible: another run can give a different result
- SA can leave an optimal solution and not find it again (so remember the best solution found so far)
- SA is proven to find the optimum under certain conditions; one of these conditions is that the algorithm must run forever