MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002
http://www.stockheim.net/cosa.html
Simulated Annealing
Pseudo-code for this algorithm might look like this:
T = current temperature
Do i = 1, k
    Generate a random displacement for a particle.
    Calculate the change in energy, ΔE = E' - E
    If (ΔE ≤ 0) then
        it is a downhill move to lower energy, so accept it and update the configuration
    else
        it is an uphill move, so generate a random number P' in [0,1]
        and compare it with Pr(ΔE) = exp(-ΔE / k_B T)
        if (P' < Pr(ΔE)) then
            accept the move and update the configuration
        else
            reject the move and keep the original configuration
        endif
    endif
enddo
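A minimal Python sketch of this Metropolis sweep, assuming hypothetical user-supplied helpers energy(config) and random_displacement(config); neither of these names comes from the slides.

```python
import math
import random

def metropolis_sweep(config, energy, random_displacement, T, k, kB=1.0):
    """Attempt k random moves at temperature T (one Metropolis sweep)."""
    E = energy(config)
    for _ in range(k):
        trial = random_displacement(config)   # propose a perturbed configuration
        dE = energy(trial) - E                # change in energy, dE = E' - E
        if dE <= 0 or random.random() < math.exp(-dE / (kB * T)):
            config, E = trial, E + dE         # downhill, or uphill accepted with Pr(dE)
        # otherwise: reject the move and keep the original configuration
    return config, E
```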
The SA Algorithm SA is an application of the Metropolis algorithm to function optimization. It assumes a similarity between the physical annealing of a solid and the global optimization of a function, as follows: 1.The value of the objective function can be viewed as the energy of a solid. 2.The values of the design variables can be viewed as the configuration of the particles of a solid. So, optimizing a function is analogous to finding the ground state of a solid.
The SA Algorithm A parameter T, called the control parameter, is used in place of the temperature when the Metropolis algorithm is applied to function optimization. In physical annealing, T has a true physical meaning: the temperature of the material undergoing the annealing process. In function optimization, T is simply an artificial control parameter that governs both the jumps that move the search out of local minima and the search for the global optimum. SA can be considered a sequence of Metropolis algorithms evaluated for a decreasing sequence of values of the control parameter T.
The SA Algorithm 1.For a high value of T, the objective function is 'melted' and most uphill moves are accepted, which allows a large-scale random search to be performed. 2.As the value of T decreases, fewer uphill moves are accepted. At this stage, searches are confined to a smaller region of the design space and the hill-jumping behavior is somewhat limited; however, some local optima can still be escaped. 3.As the control parameter T approaches zero, almost no uphill moves are accepted and the solution is almost 'frozen' into its final form. At this stage, SA acts like a traditional downhill-only technique.
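As a quick numeric illustration (the numbers are mine, not from the slides), the probability exp(-Δf/T) of accepting an uphill move of size Δf = 1 falls off sharply as T decreases:

```python
import math

# Acceptance probability exp(-df/T) for an uphill move of size df = 1
for T in (100.0, 10.0, 1.0, 0.1):
    print(f"T = {T:6}: Pr = {math.exp(-1.0 / T):.2e}")
# T =  100.0: Pr = 9.90e-01   (nearly every uphill move is accepted)
# T =   10.0: Pr = 9.05e-01
# T =    1.0: Pr = 3.68e-01
# T =    0.1: Pr = 4.54e-05   (uphill moves are almost never accepted)
```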
The SA Algorithm At each value of the control parameter, SA accepts or rejects a new configuration by using the Metropolis algorithm. The difference between the values of the evaluation function at the two configurations is Δf = f(X') - f(X), where X is the latest accepted solution and X' is the trial configuration.
The SA Algorithm If Δf ≤ 0: accept the new configuration and use it as the starting point for the next move. If Δf > 0: generate a random number P' = U[0,1] and calculate the probability of accepting the move, Pr(Δf) = exp(-Δf / T_k), where T_k is the k-th value of the control parameter after the starting value. If P' < Pr(Δf), the new configuration is accepted; otherwise it is rejected.
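A small Python sketch of this acceptance test (the function and argument names are my own, not from the slides):

```python
import math
import random

def accept_move(delta_f, T_k):
    """Metropolis acceptance test for a trial move that changes the objective by delta_f."""
    if delta_f <= 0:
        return True                                    # downhill moves are always accepted
    return random.random() < math.exp(-delta_f / T_k)  # uphill: accept with Pr(delta_f)
```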
The SA Algorithm To achieve ‘thermal equilibrium’ at each value of the control parameter, the SA process must go through sufficiently many iterations for the objective function to reach a steady state. Then as the control parameter approaches zero, the algorithm converges asymptotically to the global optimum.
The SA Algorithm
T_0 : m_1,0  m_2,0  m_3,0  m_4,0  ...  m_m,0
T_1 : m_1,1  m_2,1  m_3,1  m_4,1  ...  m_m,1
T_2 : m_1,2  m_2,2  m_3,2  m_4,2  ...  m_m,2
T_3 : m_1,3  m_2,3  m_3,3  m_4,3  ...  m_m,3
T_4 : m_1,4  m_2,4  m_3,4  m_4,4  ...  m_m,4
T_5 : m_1,5  m_2,5  m_3,5  m_4,5  ...  m_m,5
...
T_n : m_1,n  m_2,n  m_3,n  m_4,n  ...  m_m,n
n = number of levels in the cooling schedule
m = number of transitions in each Markov chain
The SA Algorithm The following must be specified when implementing SA: 1.An unambiguous description of the evaluation function (analogous to energy) and any constraints. 2.A clear representation of the design vector (analogous to the configuration of a solid) over which an optimum is sought. 3.A 'cooling schedule' – this includes the starting value of the control parameter, T_0, rules to determine when the current value of the control parameter should be reduced and by how much ('the decrement rule'), and a stopping criterion to determine when the optimization process should be terminated.
The SA Algorithm 4.A 'move set generator' which generates candidate points. 5.An 'acceptance criterion' which decides whether or not a new move is accepted. Items 4 and 5 together are called the 'transition mechanism', which results in the transformation of a current state into a subsequent one.
The SA Algorithm The SA algorithm is outlined as follows:
Step 1: Input the starting value of the control parameter (temperature), T_0, and set k = 0.
Step 2: Choose a starting point (initial configuration) X_0 and calculate the value of the objective function (energy) at X_0, f(X_0). Then set X = X_0 and f(X) = f(X_0).
The SA Algorithm
Step 3: Use the transition mechanism to generate a random point X' and compute f(X'). Evaluate Δf = f(X') - f(X).
If Δf ≤ 0: accept X', set X = X', and set f(X) = f(X');
else: generate a random number P' from [0,1] and compare P' with Pr(Δf) = exp(-Δf / T_k);
    if P' < Pr(Δf): accept X', set X = X', and set f(X) = f(X');
    else: reject X' and keep the original point;
    endif;
endif;
The SA Algorithm
Step 4: Use the cooling schedule to decide whether the steady state (thermal equilibrium) of the objective function has been reached at the current value of the control parameter.
If it has: reduce the control parameter by the decrement rule and set k = k + 1;
else: go to Step 3;
endif.
The SA Algorithm
Step 5: Use the stopping criterion to decide whether the simulated annealing algorithm should be terminated.
If it should: stop;
else: go to Step 3;
endif.
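Putting Steps 1-5 together, here is a minimal Python sketch under assumptions the slides leave open: a fixed number of moves per temperature level stands in for 'thermal equilibrium', a geometric rule (multiply T by a constant alpha < 1) serves as the decrement rule, a small threshold T_min is the stopping criterion, and neighbor is a hypothetical user-supplied move generator.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, T0=1.0, alpha=0.9,
                        moves_per_level=100, T_min=1e-3):
    """Minimal SA loop: fixed-length Markov chains with geometric cooling."""
    T, x, fx = T0, x0, f(x0)               # Steps 1-2: initial temperature and point
    best_x, best_f = x, fx
    while T > T_min:                        # Step 5: stopping criterion (T 'frozen')
        for _ in range(moves_per_level):    # inner loop: one Markov chain at this T
            x_new = neighbor(x)             # Step 3: move set generator
            df = f(x_new) - fx
            if df <= 0 or random.random() < math.exp(-df / T):
                x, fx = x_new, fx + df      # acceptance criterion: take the trial point
                if fx < best_f:
                    best_x, best_f = x, fx
        T *= alpha                          # Step 4: decrement rule after the chain ends
    return best_x, best_f

if __name__ == "__main__":
    # Illustrative run: minimize f(x) = x^2 starting from x = 5 with Gaussian moves
    print(simulated_annealing(lambda x: x * x, 5.0,
                              lambda x: x + random.gauss(0.0, 0.5)))
```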
The SA Algorithm Common stopping criteria: 1.If X_best does not change over several successive Markov chains, then stop. 2.A fixed-length cooling schedule: the algorithm automatically stops when T reaches a certain level. 3.A maximum number of function evaluations.
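A small sketch of criterion 1 (a no-improvement test for a minimization run); the history list and the patience value are illustrative choices, not from the slides:

```python
def should_stop(best_per_chain, patience=5):
    """Stop if the best objective value has not improved over the last
    `patience` Markov chains (stopping criterion 1, for minimization)."""
    if len(best_per_chain) <= patience:
        return False
    return min(best_per_chain[-patience:]) >= min(best_per_chain[:-patience])
```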
The SA Algorithm Two loops in the SA algorithm: There is an inner loop that generates a sequence of trial points until 'thermal equilibrium' is reached at the current value of the control parameter. There is also an outer loop that successively decreases the control parameter and checks whether the optimization process should be terminated.
(Figure: a ball in a landscape with basins labeled A, B, and C.) Start with a ball at point A. Shake it up and it might jump out of A and into B. Give it another shake (adding energy) and it might go on to C. This is the general idea behind SA.
The SA Algorithm – Convergence Issues Since the convergence of a stochastic algorithm is asymptotic, SA obtains a global optimum with probability 1 only asymptotically. The larger the number of samples, the higher the probability of the algorithm finding the global optimum. In general, an infinite number of moves is required to obtain the exact global optimum. In practical implementations this is not realizable, and the asymptotic convergence must be approximated. This is done using a proper cooling schedule, which is discussed next.
Cooling Schedules A cooling schedule is used to achieve convergence to a global optimum in function optimization. The cooling schedule describes how the control parameter T changes during the optimization process. First let us look at the concept of the acceptance ratio, X(T_k): X(T_k) = (# of accepted moves) / (# of attempted moves). If T is large, almost all moves are accepted: X(T_k) → 1. As T decreases, X(T_k) → 0. For maximum efficiency, it is important to set a proper value of T_0.
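One way to measure X(T_k) in practice is to count acceptances over a Markov chain. The sketch below is an assumed implementation, with a hypothetical neighbor move generator and a fixed chain length:

```python
import math
import random

def run_chain(f, x, T, neighbor, n_moves=100):
    """Run one Markov chain at temperature T; return the final point and
    the acceptance ratio X(T) = accepted moves / attempted moves."""
    fx, accepted = f(x), 0
    for _ in range(n_moves):
        x_new = neighbor(x)
        df = f(x_new) - fx
        if df <= 0 or random.random() < math.exp(-df / T):
            x, fx = x_new, fx + df
            accepted += 1
    return x, accepted / n_moves
```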
Simulated Annealing – Cooling Schedule Steps in a cooling schedule: 1.Choose the starting value of the control parameter, T_0. It should be large enough to 'melt' the objective function, i.e. to leap over all peaks. This is accomplished by ensuring that the initial acceptance ratio X(T_0) is close to 1.0 (most random moves are accepted). 2.Start the SA algorithm at some T_0, execute it for some number of transitions, and check X(T_0). If it is not close to 1.0, multiply T_0 by a factor greater than 1.0 and execute again. Repeat until X(T_0) is close to 1.0.
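A sketch of this warm-up procedure for choosing T_0; the target ratio of 0.95 and the factor of 1.5 are arbitrary choices, and chain_acceptance(T) can be any routine that runs some transitions at temperature T and returns the fraction of accepted moves (for example, a wrapper around the run_chain sketch above):

```python
def choose_initial_temperature(chain_acceptance, T_guess=1.0,
                               target=0.95, factor=1.5):
    """Raise T until the measured acceptance ratio X(T_0) is close to 1."""
    T = T_guess
    while chain_acceptance(T) < target:
        T *= factor          # multiply T by a factor greater than 1 and try again
    return T
```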