1
Fast Evolutionary Optimisation
Temi avanzati di Intelligenza Artificiale (Advanced Topics in Artificial Intelligence), Lecture 6
Prof. Vincenzo Cutello
Department of Mathematics and Computer Science, University of Catania
2
Exercise Sheet
Function Optimisation by Evolutionary Programming
Fast Evolutionary Programming
Computational Studies
Summary
3
Global Optimisation
4
Global Optimisation by Mutation-Based EAs
1. Generate the initial population of μ individuals, and set k = 1. Each individual is a real-valued vector x_i.
2. Evaluate the fitness of each individual.
3. Each individual creates a single offspring: for j = 1,…,n,
   x_i'(j) = x_i(j) + N_j(0,1),
   where x_i(j) denotes the j-th component of the vector x_i. N(0,1) denotes a normally distributed one-dimensional random number with mean zero and standard deviation one. N_j(0,1) indicates that the random number is generated anew for each value of j.
4. Calculate the fitness of each offspring.
5. For each individual, q opponents are chosen randomly from all the parents and offspring with an equal probability. For each comparison, if the individual's fitness is no greater than the opponent's, it receives a "win".
5
6. Select the μ individuals (out of the 2μ parents and offspring) that have the most wins to be the next generation.
7. Stop if the stopping criterion is satisfied; otherwise, set k = k + 1 and go to Step 3.
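A minimal Python sketch of this mutation-based EA, assuming a simple sphere objective; the population size, tournament size, and initialisation range are illustrative, not fixed by the slide:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Illustrative minimisation objective (not one of the benchmark definitions).
    return float(np.sum(x ** 2))

def mutation_ea(f, n, mu=100, q=10, generations=1000, init_range=(-5.0, 5.0)):
    # Step 1: initial population of mu real-valued vectors.
    pop = rng.uniform(*init_range, size=(mu, n))
    fit = np.array([f(x) for x in pop])                 # Step 2: evaluate parents.
    for _ in range(generations):
        # Step 3: x'_i(j) = x_i(j) + N_j(0,1), a fresh Gaussian per component.
        off = pop + rng.standard_normal(pop.shape)
        off_fit = np.array([f(x) for x in off])         # Step 4: evaluate offspring.
        union = np.vstack([pop, off])
        union_fit = np.concatenate([fit, off_fit])
        # Step 5: each of the 2*mu individuals meets q random opponents;
        # fitness no greater than the opponent's counts as a "win" (minimisation).
        wins = np.array([
            np.sum(union_fit[i] <= union_fit[rng.integers(0, 2 * mu, size=q)])
            for i in range(2 * mu)
        ])
        # Step 6: the mu individuals with the most wins survive.
        best = np.argsort(-wins)[:mu]
        pop, fit = union[best], union_fit[best]
    return pop[np.argmin(fit)], float(fit.min())

x_best, f_best = mutation_ea(sphere, n=30)
```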
6
Why N(0,1)? The standard deviation of the normal distribution determines the search step size of the mutation; it is a crucial parameter. Unfortunately, the optimal search step size is problem-dependent. Even for a single problem, different search stages require different search step sizes. Self-adaptation can be used to get around this problem partially.
7
Function Optimisation by Classical EP (CEP)
EP = Evolutionary Programming
1. Generate the initial population of μ individuals, and set k = 1. Each individual is taken as a pair of real-valued vectors (x_i, η_i), i ∈ {1,…,μ}.
2. Evaluate the fitness score of each individual (x_i, η_i), i ∈ {1,…,μ}, of the population based on the objective function f(x_i).
3. Each parent (x_i, η_i), i = 1,…,μ, creates a single offspring (x_i', η_i') by:
   x_i'(j) = x_i(j) + η_i(j) N_j(0,1),   (1)
   η_i'(j) = η_i(j) exp(τ' N(0,1) + τ N_j(0,1)),   (2)
   for j = 1,…,n, where x_i(j), x_i'(j), η_i(j) and η_i'(j) denote the j-th components of the vectors x_i, x_i', η_i and η_i', respectively. N(0,1) denotes a normally distributed one-dimensional random number with mean zero and standard deviation one. N_j(0,1) indicates that the random number is generated anew for each value of j. The factors τ and τ' are commonly set to (√(2√n))⁻¹ and (√(2n))⁻¹.
8
4. Calculate the fitness of each offspring (x_i', η_i'), i ∈ {1,…,μ}.
5. Conduct pairwise comparisons over the union of parents (x_i, η_i) and offspring (x_i', η_i'), i ∈ {1,…,μ}. For each individual, q opponents are chosen randomly from all the parents and offspring with an equal probability. For each comparison, if the individual's fitness is no greater than the opponent's, it receives a "win".
6. Select the μ individuals out of (x_i, η_i) and (x_i', η_i'), i ∈ {1,…,μ}, that have the most wins to be the parents of the next generation.
7. Stop if the stopping criterion is satisfied; otherwise, set k = k + 1 and go to Step 3.
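A sketch of the CEP mutation step in Python; the helper name and NumPy usage are assumptions, while the update rules and the τ, τ' settings follow Eqs. (1)-(2):

```python
import numpy as np

rng = np.random.default_rng(0)

def cep_offspring(x, eta):
    # One CEP offspring (x', eta') from parent (x, eta), per Eqs. (1)-(2).
    n = x.size
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))     # tau  = (sqrt(2*sqrt(n)))^-1
    tau_prime = 1.0 / np.sqrt(2.0 * n)        # tau' = (sqrt(2*n))^-1
    x_new = x + eta * rng.standard_normal(n)              # Eq. (1): fresh N_j(0,1) per j
    eta_new = eta * np.exp(tau_prime * rng.standard_normal()   # one global N(0,1)
                           + tau * rng.standard_normal(n))     # fresh N_j(0,1) per j
    return x_new, eta_new

# Example usage: a 30-dimensional parent with initial step sizes of 3.0.
x, eta = rng.uniform(-5.0, 5.0, 30), np.full(30, 3.0)
x_child, eta_child = cep_offspring(x, eta)
```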
9
What Do Mutation and Self-Adaptation Do?
10
Fast EP
The idea comes from fast simulated annealing: use a Cauchy, instead of Gaussian, random number in Eq. (1) to generate a new offspring. That is,
   x_i'(j) = x_i(j) + η_i(j) δ_j,
where δ_j is a Cauchy random variable with scale parameter t = 1, generated anew for each value of j. Everything else, including Eq. (2), is kept unchanged in order to isolate the impact of the Cauchy random numbers.
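The corresponding FEP offspring step differs from CEP only in the x-update; a sketch in the same hypothetical style as above:

```python
import numpy as np

rng = np.random.default_rng(0)

def fep_offspring(x, eta):
    # One FEP offspring: a Cauchy delta_j (scale t = 1) replaces N_j(0,1) in Eq. (1).
    n = x.size
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))
    tau_prime = 1.0 / np.sqrt(2.0 * n)
    delta = rng.standard_cauchy(n)            # delta_j ~ C(1), anew for each j
    x_new = x + eta * delta                   # Cauchy version of Eq. (1)
    eta_new = eta * np.exp(tau_prime * rng.standard_normal()
                           + tau * rng.standard_normal(n))   # Eq. (2) unchanged
    return x_new, eta_new
```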
11
Cauchy Distribution
Its density function is
   f_t(x) = (1/π) · t / (t² + x²),   −∞ < x < ∞,
where t > 0 is a scale parameter. The corresponding distribution function is
   F_t(x) = 1/2 + (1/π) arctan(x/t).
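Inverting F_t gives a direct sampler: for u ~ U(0,1), x = t·tan(π(u − 1/2)). A short sketch (NumPy's standard_cauchy already covers the t = 1 case):

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_sample(t=1.0, size=1):
    # Inverse-CDF sampling: solving u = F_t(x) gives x = t * tan(pi * (u - 1/2)).
    u = rng.uniform(0.0, 1.0, size)
    return t * np.tan(np.pi * (u - 0.5))

samples = cauchy_sample(t=1.0, size=100_000)  # heavy tails: expect a few huge values
```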
12
Gaussian and Cauchy Density Functions
13
Test Functions
23 functions were used in our computational studies. They have different characteristics: some have a relatively high dimension; some have many local optima.
14
(Table of the 23 benchmark test functions.)
15
Experimental Setup
- Population size 100.
- Competition (tournament) size 10 for selection.
- All experiments were run 50 times, i.e., 50 trials.
- Initial populations were the same for CEP and FEP.
16
Experiments on Unimodal Functions
The t-test value with 49 degrees of freedom is significant at α = 0.05 by a two-tailed test.
17
Discussions on Unimodal Functions
FEP performed better than CEP on f_3–f_7, while CEP was better on f_1 and f_2. Even on f_1 and f_2, FEP converged faster for a long period of the search.
18
Experiments on Multimodal Functions f_8–f_13
The t-test value with 49 degrees of freedom is significant at α = 0.05 by a two-tailed test.
19
Discussions on Multimodal Functions f_8–f_13
FEP converged faster to a better solution. FEP seemed to deal well with many local minima.
20
Experiments on Multimodal Functions f_14–f_23
The t-test value with 49 degrees of freedom is significant at α = 0.05 by a two-tailed test.
21
Discussions on Multimodal Functions f_14–f_23
The results are mixed! FEP and CEP performed equally well on f_16 and f_17, and comparably on f_15 and f_18–f_20. CEP performed better on f_21–f_23 (the Shekel functions). Was it because the dimensionality was low that CEP appeared better?
22
Experiments on Low-Dimensional f_8–f_13
The t-test value with 49 degrees of freedom is significant at α = 0.05 by a two-tailed test.
23
Discussions on Low-Dimensional f_8–f_13
FEP still converged faster to better solutions. Dimensionality does not play a major role in causing the difference between FEP and CEP. There must be something inherent in those functions that caused the difference.
24
The Impact of Parameter t on FEP, Part I
For simplicity, t = 1 was used in all the above experiments for FEP. However, this may not be the optimal choice for a given problem.
Table 1: The mean best solutions found by FEP using different values of the scale parameter t in the Cauchy mutation for functions f_1 (1500), f_2 (2000), f_10 (1500), f_11 (2000), f_21 (100), f_22 (100) and f_23 (100). The values in "()" indicate the number of generations used in FEP. All results were averaged over 50 runs.
25
The Impact of Parameter t on FEP, Part II
Table 2: The mean best solutions found by FEP using different values of the scale parameter t in the Cauchy mutation for functions f_1 (1500), f_2 (2000), f_10 (1500), f_11 (2000), f_21 (100), f_22 (100) and f_23 (100). The values in "()" indicate the number of generations used in FEP. All results were averaged over 50 runs.
26
Why Cauchy Mutation Performed Better
Given G(0,1) and C(1), the expected lengths of Gaussian and Cauchy jumps are:
   E_G(0,1)[jump] = ∫₀^{+∞} x · (1/√(2π)) e^{−x²/2} dx = 1/√(2π) ≈ 0.399,
   E_C(1)[jump] = ∫₀^{+∞} (x/π) · 1/(1 + x²) dx = +∞.
It is obvious that Gaussian mutation is much more localised than Cauchy mutation.
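A quick Monte Carlo check (the sample size and the use of |x| as jump length are illustrative): the Gaussian mean jump magnitude settles near √(2/π) ≈ 0.80, i.e., twice the half-line integral above, while the Cauchy sample mean is erratic because its expectation diverges.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1_000_000
gauss_jumps = np.abs(rng.standard_normal(n))
cauchy_jumps = np.abs(rng.standard_cauchy(n))

print(gauss_jumps.mean())   # ~0.80, i.e. sqrt(2/pi): stable across seeds
print(cauchy_jumps.mean())  # large and erratic: rare huge jumps dominate, E = +inf
```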
27
Why and When Large Jumps Are Beneficial
(Only the one-dimensional case is considered here for convenience's sake.) Take the Gaussian mutation with distribution G(0, σ²) as an example. The probability of generating a point in the ε-neighbourhood of the global optimum x* is given by
   P_G(0,σ²)(|x − x*| ≤ ε) = ∫_{x*−ε}^{x*+ε} (1/(σ√(2π))) e^{−x²/(2σ²)} dx,
where ε > 0 is the neighbourhood size and σ is often regarded as the step size of the Gaussian mutation. Figure 4 illustrates the situation.
28
Figure 4: Evolutionary search as neighbourhood search, where x* is the global optimum and ε > 0 is the neighbourhood size.
29
An Analytical Result
It can be shown that ∂P_G(0,σ²)(|x − x*| ≤ ε)/∂σ > 0 when x* − ε > σ. That is, the larger σ is, the larger P_G(0,σ²)(|x − x*| ≤ ε) is, whenever x* − ε > σ. On the other hand, if x* − ε > σ, then
   P_G(0,σ²)(|x − x*| ≤ ε) ≤ (2ε/(σ√(2π))) e^{−(x*−ε)²/(2σ²)},
which indicates that the probability decreases exponentially as x* increases.
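This can be checked numerically. A small sketch (the values of x*, ε and σ are illustrative) evaluates P_G(0,σ²)(|x − x*| ≤ ε) = Φ((x* + ε)/σ) − Φ((x* − ε)/σ) for a search point at the origin; the probability grows with σ as long as x* − ε > σ:

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_hit(x_star, eps, sigma):
    # P_{G(0, sigma^2)}(|x - x*| <= eps), search point at the origin.
    return phi((x_star + eps) / sigma) - phi((x_star - eps) / sigma)

# With the optimum far away (x* - eps > sigma), the probability grows with sigma.
for sigma in (0.5, 1.0, 2.0, 5.0, 9.0):
    print(f"sigma={sigma:4.1f}  P={p_hit(10.0, 0.5, sigma):.3e}")
```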
30
Empirical Evidence I
Table 3: Comparison of CEP's and FEP's final results on f_21 when the initial population is generated uniformly at random in the ranges 0 ≤ x_i ≤ 10 and 2.5 ≤ x_i ≤ 5.5. The results were averaged over 50 runs. The number of generations for each run was 100.
31
Empirical Evidence II
Table 4: Comparison of CEP's and FEP's final results on f_21 when the initial population is generated uniformly at random in the ranges 0 ≤ x_i ≤ 10 and 0 ≤ x_i ≤ 100, with the a_i's multiplied by 10. The results were averaged over 50 runs. The number of generations for each run was 100.
32
Summary
- Cauchy mutation performs well when the global optimum is far away from the current search location. Its behaviour can be explained theoretically and empirically.
- An optimal search step size can be derived if we know where the global optimum is. Unfortunately, such information is unavailable for real-world problems.
- The performance of FEP can be improved by a set of more suitable parameters, instead of copying CEP's parameter settings.

Reference: X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster," IEEE Transactions on Evolutionary Computation, 3(2):82-102, July 1999.