MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002.

Simulated Annealing
Pseudo-code for this algorithm might look like this:

    T = current temperature
    do i = 1, k
        generate a random displacement for a particle
        calculate the change in energy, ΔE = E' − E
        if (ΔE ≤ 0) then
            it is a downhill move to lower energy, so accept and update the configuration
        else
            it is an uphill move, so generate a random number P' from U[0,1]
            and compare it with Pr(ΔE) = exp(−ΔE / (k_B T))
            if (P' < Pr(ΔE)) then
                accept the move and update the configuration
            else
                reject the move and keep the original configuration
            endif
        endif
    enddo
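A runnable Python rendering of this Metropolis step (the quadratic energy function, displacement scale, and reduced-units Boltzmann constant are illustrative assumptions, not part of the lecture):

    import math
    import random

    KB = 1.0  # Boltzmann constant in reduced units (illustrative assumption)

    def metropolis_step(config, energy, T, step=0.1):
        """One pass of the pseudo-code above: displace a random particle and
        accept or reject the move by the Metropolis criterion."""
        E = energy(config)
        trial = list(config)
        i = random.randrange(len(trial))                # pick a particle
        trial[i] += random.uniform(-step, step)         # random displacement
        dE = energy(trial) - E                          # change in energy
        if dE <= 0:
            return trial                                # downhill: always accept
        if random.random() < math.exp(-dE / (KB * T)):  # uphill: Pr = exp(-dE/kB*T)
            return trial                                # accept the uphill move
        return config                                   # reject: keep original

    # do i = 1, k: repeat the step k times at fixed T (illustrative usage)
    config = [2.0, -1.5]
    for _ in range(100):
        config = metropolis_step(config, lambda v: sum(c * c for c in v), T=1.0)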

The SA Algorithm
SA is an application of the Metropolis algorithm to function optimization. It assumes the following correspondence between the physical annealing of a solid and the global optimization of a function:
1. The value of the objective function can be viewed as the energy of a solid.
2. The values of the design variables can be viewed as the configuration of the particles of a solid.
So, optimizing a function is analogous to finding the ground state of a solid.

The SA Algorithm
A parameter T, called the control parameter, is used in place of the temperature when the Metropolis algorithm is used for function optimization. In physical annealing, T has a true physical meaning: the temperature of the material undergoing the annealing process. In function optimization, T is simply an artificial control parameter that governs both the jumps out of local minima and the search for the global optimum. SA can be considered a sequence of Metropolis algorithms evaluated at a decreasing sequence of values of the control parameter T.

The SA Algorithm
1. For a high value of T, the objective is 'melted' and most uphill moves are accepted, which allows a large-scale random search to be performed.
2. As the value of T decreases, fewer uphill moves are accepted. At this stage, the search is confined to a smaller region of the design space and the hill-jumping behavior is somewhat limited; however, some local optima can still be escaped.
3. As the control parameter T approaches zero, almost no uphill moves are accepted and the solution is almost 'frozen' into its final form. At this stage, SA acts like a traditional downhill-only technique.

The SA Algorithm
At each value of the control parameter, SA accepts or rejects a new configuration by using the Metropolis algorithm. The difference between the values of the evaluation function at two configurations is
Δf = f(X') − f(X)
where X is the latest accepted solution and X' is the trial configuration.

The SA Algorithm
If Δf ≤ 0: accept the new configuration and use it as the starting point for the next move.
If Δf > 0: generate a random number P' = U[0,1] and calculate the probability of accepting the move,
Pr(Δf) = exp(−Δf / T_k)
where T_k is the kth value of the control parameter after the starting value. If P' < Pr(Δf), the new configuration is accepted; otherwise it is rejected.
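A quick numerical sketch of how this acceptance probability behaves (the Δf value of 2 and the temperatures are illustrative choices, not from the lecture): an uphill step of Δf = 2 is accepted with probability exp(−2/10) ≈ 0.82 at T_k = 10, exp(−2/1) ≈ 0.14 at T_k = 1, and exp(−2/0.1) ≈ 2×10⁻⁹ at T_k = 0.1.

    import math

    def acceptance_probability(df, T):
        """Metropolis acceptance probability Pr(df) = exp(-df / T) for df > 0."""
        return math.exp(-df / T)

    # The same uphill step df = 2 becomes ever less likely as T falls.
    for T in (10.0, 1.0, 0.1):
        print(f"T = {T:5.1f}   Pr(accept df=2) = {acceptance_probability(2.0, T):.3g}")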

The SA Algorithm To achieve ‘thermal equilibrium’ at each value of the control parameter, the SA process must go through sufficiently many iterations for the objective function to reach a steady state. Then as the control parameter approaches zero, the algorithm converges asymptotically to the global optimum.

The SA Algorithm
T_0: m_10, m_20, m_30, m_40, ..., m_m0
T_1: m_11, m_21, m_31, m_41, ..., m_m1
T_2: m_12, m_22, m_32, m_42, ..., m_m2
T_3: m_13, m_23, m_33, m_43, ..., m_m3
T_4: m_14, m_24, m_34, m_44, ..., m_m4
T_5: m_15, m_25, m_35, m_45, ..., m_m5
...
T_n: m_1n, m_2n, m_3n, m_4n, ..., m_mn
n = number of levels in the cooling schedule
m = number of transitions in each Markov chain

The SA Algorithm
The following must be specified in implementing SA:
1. An unambiguous description of the evaluation function (analogous to energy) and possible constraints.
2. A clear representation of the design vector (analogous to the configuration of a solid) over which an optimum is sought.
3. A 'cooling schedule': this includes the starting value of the control parameter, T_0, rules to determine when the current value of the control parameter should be reduced and by how much (the 'decrement rule'), and a stopping criterion to determine when the optimization process should be terminated.

The SA Algorithm
4. A 'move set generator' which generates candidate points.
5. An 'acceptance criterion' which decides whether or not a new move is accepted.
Steps 4 and 5 together are called a 'transition mechanism', which results in the transformation of a current state into a subsequent one; a minimal sketch follows.
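A minimal sketch of these building blocks in Python (the single-coordinate perturbation, step size, and geometric decrement factor are illustrative assumptions, not prescriptions from the lecture):

    import random

    def generate_move(x, step=0.1):
        """Move set generator: perturb one randomly chosen design variable."""
        trial = list(x)
        i = random.randrange(len(trial))
        trial[i] += random.uniform(-step, step)
        return trial

    def decrement_rule(T, alpha=0.9):
        """One common decrement rule: geometric cooling, T_{k+1} = alpha * T_k."""
        return alpha * T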

The SA Algorithm
The SA algorithm is outlined as follows:
Step 1. Input the starting value of the control parameter (temperature), T_0, and set k = 0.
Step 2. Choose a starting point (initial configuration) X_0 and calculate the value of the objective function (energy) at X_0, f(X_0). Then set X = X_0 and f(X) = f(X_0).

The SA Algorithm
Step 3. Use the transition mechanism to generate a random point X' and compute f(X'). Evaluate Δf = f(X') − f(X).
If Δf ≤ 0: accept X', set X = X', and set f(X) = f(X');
else: generate a random number P' from [0,1] and compare it with Pr(Δf) = exp(−Δf / T_k);
if P' < Pr(Δf): accept X', set X = X', and set f(X) = f(X');
else: reject X' and keep the original point.

The SA Algorithm
Step 4. Use the cooling schedule to decide whether the steady state (thermal equilibrium) of the objective function has been reached at the current value of the control parameter.
If it has: reduce the control parameter by the decrement rule and set k = k + 1;
else: go to Step 3.

The SA Algorithm
Step 5. Use the stopping criterion to decide whether the simulated annealing algorithm should be terminated.
If it should: stop;
else: go to Step 3.

The SA Algorithm
Common Stopping Criteria
1. If X_best does not change over successive Markov chains, then stop (a sketch of this test follows).
2. Fixed-length cooling schedule: the algorithm automatically stops when T reaches a certain level.
3. Maximum number of function evaluations.
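A minimal sketch of criterion 1 in Python (the patience window of 5 chains and the tolerance are illustrative assumptions):

    def should_stop(best_values, patience=5, tol=1e-9):
        """best_values holds the best objective value recorded after each
        completed Markov chain; stop when it has not improved over the
        last `patience` chains."""
        if len(best_values) <= patience:
            return False
        return best_values[-patience - 1] - best_values[-1] < tol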

The SA Algorithm
2 Loops in the SA Algorithm
There is an inner loop that generates a sequence of trial points until 'thermal equilibrium' is reached at the current value of the control parameter. There is also an outer loop that repeatedly decreases the control parameter and checks whether the optimization process should be terminated. Both loops appear in the sketch below.
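Putting Steps 1 through 5 together, a compact end-to-end sketch in Python (the quadratic test objective, chain length, cooling factor, and frozen-temperature stopping rule are all illustrative assumptions, not the lecture's prescriptions):

    import math
    import random

    def simulated_annealing(f, x0, T0=10.0, alpha=0.9, chain_len=100,
                            T_min=1e-3, step=0.1):
        """Minimize f starting from x0; all tuning constants are illustrative."""
        x, fx = list(x0), f(x0)                 # Steps 1-2: initialize T, X, f(X)
        best_x, best_f = list(x), fx
        T = T0
        while T > T_min:                        # outer loop: cooling schedule
            for _ in range(chain_len):          # inner loop: one Markov chain
                # Step 3: transition mechanism (move generator + acceptance test)
                trial = list(x)
                i = random.randrange(len(trial))
                trial[i] += random.uniform(-step, step)
                df = f(trial) - fx
                if df <= 0 or random.random() < math.exp(-df / T):
                    x, fx = trial, fx + df      # accept X' and update f(X)
                    if fx < best_f:
                        best_x, best_f = list(x), fx
            T *= alpha                          # Step 4: decrement rule
        return best_x, best_f                   # Step 5: stop once T is 'frozen'

    # Example run on a simple quadratic objective (illustrative)
    x_best, f_best = simulated_annealing(lambda v: sum(c * c for c in v), [3.0, -2.0])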

[Figure: an energy landscape with basins A, B, and C.]
Start with a ball at point A. Shake it up and it might jump out of A and into B. Give it another shake (adding energy) and it might go to C. This is the general idea behind SA.

The SA Algorithm – Convergence Issues
Since the convergence of a stochastic algorithm is asymptotic, SA obtains a global optimum asymptotically, with probability 1. The larger the number of samples, the higher the probability of the algorithm finding the global optimum. In general, an infinite number of moves is required to obtain the exact global optimum. In practical implementations this is not realizable, and the asymptotic convergence must be approximated. This is done using a proper cooling schedule, which is discussed next.

Cooling Schedules
A cooling schedule is used to achieve convergence to a global optimum in function optimization. The cooling schedule describes how the control parameter T changes during the optimization process.
First, consider the acceptance ratio X(T_k):
X(T_k) = (# of accepted moves) / (# of attempted moves)
If T is large, almost all moves are accepted: X(T_k) -> 1.
As T decreases, fewer moves are accepted: X(T_k) -> 0.
For maximum efficiency, it is important to set a proper value of T_0.
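A sketch of how X(T) might be estimated empirically at a fixed configuration (the objective, move generator, and sample count are illustrative assumptions):

    import math
    import random

    def acceptance_ratio(f, x, T, attempts=1000, step=0.1):
        """Estimate X(T): the fraction of attempted random moves from x that
        the Metropolis test would accept at temperature T."""
        fx, accepted = f(x), 0
        for _ in range(attempts):
            trial = [c + random.uniform(-step, step) for c in x]
            df = f(trial) - fx
            if df <= 0 or random.random() < math.exp(-df / T):
                accepted += 1
        return accepted / attempts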

Simulated Annealing – Cooling Schedule
Steps in a cooling schedule:
1. Choose the starting value of the control parameter, T_0. It should be large enough to 'melt' the objective function, i.e., to leap over all peaks. This is accomplished by ensuring that the initial acceptance ratio X(T_0) is close to 1.0 (most random moves are accepted).
2. Start the SA algorithm at some T_0, execute it for some number of transitions, and check X(T_0). If X(T_0) is not close to 1.0, multiply T_0 by a factor greater than 1.0 and execute again. Repeat until X(T_0) is close to 1.0, as in the sketch below.
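A sketch of this warm-up procedure (the acceptance_ratio helper is repeated from the previous sketch so this block runs standalone; the multiplication factor of 1.5 and the 0.95 target are illustrative assumptions):

    import math
    import random

    def acceptance_ratio(f, x, T, attempts=1000, step=0.1):
        """Fraction of attempted random moves from x accepted at temperature T."""
        fx, accepted = f(x), 0
        for _ in range(attempts):
            trial = [c + random.uniform(-step, step) for c in x]
            df = f(trial) - fx
            if df <= 0 or random.random() < math.exp(-df / T):
                accepted += 1
        return accepted / attempts

    def tune_initial_temperature(f, x0, T=1.0, factor=1.5, target=0.95):
        """Raise T until X(T_0) is close to 1.0, i.e. the objective is 'melted'."""
        while acceptance_ratio(f, x0, T) < target:
            T *= factor  # not hot enough: multiply by a factor > 1 and retry
        return T

    T0 = tune_initial_temperature(lambda v: sum(c * c for c in v), [3.0, -2.0])
    print(f"Chosen starting temperature T0 = {T0:.2f}")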