1
Tutorial 4 (Lecture 12)
Remainder from lecture 10: the implicit fitness sharing method
Exercises – any solutions?
Questions?
2
Fitness Sharing -- questions, insights
Sharing function:
- At which step of the evolutionary algorithm do we use this?
- What if i == j?
- Is this for minimisation or maximisation?
- Ask your own question about it…
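To make these questions concrete, here is a minimal Java sketch of the standard (explicit) sharing function for a maximisation problem. The class name, and the parameters sigmaShare and alpha, are illustrative assumptions, not values from the lecture.

```java
// Hypothetical sketch of standard fitness sharing, applied to the raw
// fitness values just before selection.
public class SharingSketch {

    // Triangular sharing function: 1 at distance 0, falling to 0 at sigmaShare.
    static double sh(double d, double sigmaShare, double alpha) {
        if (d >= sigmaShare) return 0.0;
        return 1.0 - Math.pow(d / sigmaShare, alpha);
    }

    // Shared fitness for maximisation: divide each raw fitness by its niche count.
    // The sum includes j == i, where d == 0 and sh == 1, so the denominator
    // is always at least 1.
    static double[] shareFitness(double[] fitness, double[][] dist,
                                 double sigmaShare, double alpha) {
        double[] shared = new double[fitness.length];
        for (int i = 0; i < fitness.length; i++) {
            double nicheCount = 0.0;
            for (int j = 0; j < fitness.length; j++) {
                nicheCount += sh(dist[i][j], sigmaShare, alpha);
            }
            shared[i] = fitness[i] / nicheCount;
        }
        return shared;
    }
}
```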
3
Sharing and Fitness Function
f1(x,y) = exp(-0.7 * (x+2)^2) * exp(-0.9 * y^2) + 2 * exp(-(x-5) * (x-6)) * exp(-(y-2)^2)
f2(x,y) = 1 + exp(-0.7 * (x+2)^2) * exp(-0.9 * y^2) + 2 * exp(-(x-5) * (x-6)) * exp(-(y-2)^2)
- What's the difference?
- What implications will this have on the sharing?
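The two functions differ only by a constant offset of 1.0. A hypothetical helper such as the one below (class and method names are made up for illustration) lets you experiment with how that offset interacts with sharing:

```java
// Hypothetical helper for experimenting with the two slide functions;
// f2 is simply f1 shifted up by a constant of 1.0.
public class SharingFunctions {
    static double f1(double x, double y) {
        return Math.exp(-0.7 * (x + 2.0) * (x + 2.0)) * Math.exp(-0.9 * y * y)
             + 2.0 * Math.exp(-(x - 5.0) * (x - 6.0)) * Math.exp(-(y - 2.0) * (y - 2.0));
    }

    static double f2(double x, double y) {
        return 1.0 + f1(x, y);
    }
}
```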
4
Past programming exercise (remember?)
Write an evolutionary algorithm to find the maximum of the function
f(x,y) = exp(-0.7*(x+2)*(x+2)) * exp(-0.9*y*y) + 2*exp(-(x-5)*(x-6)) * exp(-(y-2)*(y-2))
for values -10 < x, y < 10.
Create two versions: (1) binary representation on e.g. 2x16 bits; (2) real-valued representation.
Do a few runs and describe the differences in search behaviour (e.g. how long does it take to find the optimum, does it always find the global optimum, how close does it get to the optimum).
Implement it without self-adaptation first.
Hints & code: see last tutorial notes.
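As a starting point for the binary variant, here is a rough sketch of decoding a 2x16-bit genotype into (x, y) in [-10, 10] and evaluating the fitness. The layout (16 bits for x followed by 16 bits for y) and the class name are assumptions, not part of the original exercise.

```java
// Minimal sketch for the binary variant of the past exercise, assuming a
// 32-bit genotype laid out as 16 bits for x followed by 16 bits for y.
public class BinaryDecodeSketch {
    static final int BITS = 16;

    // Decode 16 bits (most significant first) into a value in [-10, 10].
    static double decode(boolean[] genotype, int offset) {
        long value = 0;
        for (int i = 0; i < BITS; i++) {
            value = (value << 1) | (genotype[offset + i] ? 1 : 0);
        }
        double max = (1L << BITS) - 1;
        return -10.0 + 20.0 * value / max;
    }

    static double fitness(boolean[] genotype) {
        double x = decode(genotype, 0);
        double y = decode(genotype, BITS);
        return Math.exp(-0.7 * (x + 2) * (x + 2)) * Math.exp(-0.9 * y * y)
             + 2 * Math.exp(-(x - 5) * (x - 6)) * Math.exp(-(y - 2) * (y - 2));
    }
}
```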
5
Answers
Your representation strongly influences how close you can get to the optimum.
With a binary representation, the genotype length is the first factor. If you allow, say, 4 bits per variable, you can only encode 16 different values, so your resolution is (10 - (-10)) / 16 = 1.25; in the worst case you can only get to within about 0.6 of the optimum (half the grid spacing). If you allow too many bits, mutations of the lowest-order bits move your search points very little, so the search gets slower. Hamming cliffs (where you have to flip more than one bit to reach the nearest neighbouring value) make binary search harder still; using a Gray code representation helps.
With real-valued representations, the mutation operation does not depend on a bit length. Gaussian and Cauchy mutation give you a nicely distributed mix of small and large mutations, and you can control the spread with the strategy parameter. Averaging crossover also works well for this test case.
The probability of finding the global optimum depends on a number of factors, such as population size, mutation rate, the strategy parameter used, and the selection mechanism (remember tournament vs. roulette wheel).
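To illustrate the difference between the two real-valued mutation operators mentioned above, here is a small sketch (class and method names are illustrative); the strategy parameter sigma scales the step size in both cases.

```java
import java.util.Random;

// Sketch of the real-valued mutations discussed above; not the tutorial's
// reference code, just an illustration.
public class MutationSketch {
    static final Random RNG = new Random();

    // Gaussian mutation: mostly small steps, large steps are rare.
    static double gaussianMutate(double gene, double sigma) {
        return gene + sigma * RNG.nextGaussian();
    }

    // Cauchy mutation: heavier tails, so occasional very large jumps.
    // A standard Cauchy deviate can be drawn as tan(pi * (u - 0.5)).
    static double cauchyMutate(double gene, double sigma) {
        double u = RNG.nextDouble();
        return gene + sigma * Math.tan(Math.PI * (u - 0.5));
    }
}
```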
6
Question. How is your programming exercise, which involved self-adaptation, coming along?
Hint (for lazy Java users only). Here is some example code using self-adaptive mutation with a real-valued representation. Compared to the previous example code, not much has changed:
- The individual now has an additional array to store the strategy parameter, together with methods to access it:
http://www.cs.bham.ac.uk/~txs/teaching/2002/evo-computation/03-Tutorial-1-solutions/self-adaptive/Individual.java.html
- The main EA has changed in a few places: the initialization of the individuals now also initializes the strategy parameter, and the mutation now uses Cauchy mutation with self-adaptation. Also, since the problem is now a minimization problem, the selection routines have been modified to allow switching between minimization and maximization:
http://www.cs.bham.ac.uk/~txs/teaching/2002/evo-computation/03-Tutorial-1-solutions/self-adaptive/EA.java.html
Note that the selection mechanism is different from the one used in the FEP paper; we still use tournament selection and 'best-survives' survival here (the mechanism in the paper is much simpler).
- The fitness function implements f8:
http://www.cs.bham.ac.uk/~txs/teaching/2002/evo-computation/03-Tutorial-1-solutions/self-adaptive/FitnessFunction.java.html
- The random helper class again (unchanged):
http://www.cs.bham.ac.uk/~txs/teaching/2002/evo-computation/03-Tutorial-1-solutions/self-adaptive/RandomHelper.java.html
You can disable the minimum strategy parameter by setting it to 0. Try it and see what happens!
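For orientation, here is a rough, self-contained sketch of what a self-adaptive Cauchy mutation step can look like. It is not the linked tutorial code, and the parameter choices (tau, tauPrime, minSigma) are assumptions in the spirit of common ES-style self-adaptation.

```java
import java.util.Random;

// Rough sketch of self-adaptive Cauchy mutation. Each individual carries its
// own strategy parameter array; tau, tauPrime and minSigma are assumptions.
public class SelfAdaptiveMutationSketch {
    static final Random RNG = new Random();

    static void mutate(double[] genes, double[] sigma, double minSigma) {
        double n = genes.length;
        double tauPrime = 1.0 / Math.sqrt(2.0 * n);
        double tau = 1.0 / Math.sqrt(2.0 * Math.sqrt(n));
        double global = RNG.nextGaussian();                  // shared across all genes
        for (int i = 0; i < genes.length; i++) {
            // Log-normal self-adaptation of the strategy parameter.
            sigma[i] *= Math.exp(tauPrime * global + tau * RNG.nextGaussian());
            if (sigma[i] < minSigma) sigma[i] = minSigma;     // set minSigma to 0 to disable the floor
            // Cauchy step scaled by the (updated) strategy parameter.
            double u = RNG.nextDouble();
            genes[i] += sigma[i] * Math.tan(Math.PI * (u - 0.5));
        }
    }
}
```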
7
Question. Suppose you have a genotype of length n. Describe in words: what is the difference between (n-1)-point crossover and uniform crossover?
Answer. Uniform crossover flips a coin for each gene and decides whether it is swapped or not, so any combination of swaps is possible. (n-1)-point crossover has a crossover point after every position, so exactly every second gene is swapped (completely deterministic, and not very useful). A sketch of both operators is given after this slide.
Question. Different selection operators produce different selection pressure. Explain this in words, in terms of exploration versus exploitation. Give some examples.
Hint. Think of the examples of roulette-wheel versus tournament selection schemes.
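As referenced in the answer above, here is a small Java sketch contrasting the two crossover operators (class and method names are made up); each returns a single offspring per call for simplicity.

```java
import java.util.Random;

// Sketch comparing the two operators from the question above. With n-1
// crossover points there is a cut after every position, so the offspring
// deterministically alternates genes from the two parents; uniform crossover
// decides independently (coin flip) for each gene.
public class CrossoverSketch {
    static final Random RNG = new Random();

    static int[] uniformCrossover(int[] p1, int[] p2) {
        int[] child = new int[p1.length];
        for (int i = 0; i < p1.length; i++) {
            child[i] = RNG.nextBoolean() ? p1[i] : p2[i];
        }
        return child;
    }

    static int[] nMinusOnePointCrossover(int[] p1, int[] p2) {
        int[] child = new int[p1.length];
        for (int i = 0; i < p1.length; i++) {
            child[i] = (i % 2 == 0) ? p1[i] : p2[i];   // exactly every second gene swapped
        }
        return child;
    }
}
```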