
Feature-Based Diversity Optimization


1 Feature-Based Diversity Optimization
Wanru Gao^, Samadhi Nallaperuma*, and Frank Neumann^ (^University of Adelaide, *University of Sheffield). Presentation at the Dagstuhl seminar, 11 October.

2 Introduction Understanding the behavior of heuristic search methods is a challenge, and this understanding is essential for performance prediction. We present a general framework for constructing a diverse set of instances that are hard or easy for a given search heuristic and problem. It characterizes algorithms and their performance for a given problem based on features of problem instances, and it provides a tool for bridging the gap between purely experimental investigations and mathematical methods for analysing the performance of search algorithms.

3 Feature-Based Diversity Optimization
Let I_1, ..., I_k be the elements of P and f(I_i) their feature values, with f(I_i) ∈ [0, R] and f(I_1) ≤ f(I_2) ≤ ... ≤ f(I_k). The diversity contribution of an instance I to a population of instances P is d(I, P) = c(I, P), where c(I, P) is a contribution based on the other individuals in the population. Let I_i be an individual for which f(I_i) ≠ f(I_1) and f(I_i) ≠ f(I_k); then c(I_i, P) = (f(I_i) − f(I_{i−1})) · (f(I_{i+1}) − f(I_i)). If f(I_i) = f(I_1) or f(I_i) = f(I_k), we set c(I_i, P) = R² if there is no other individual I ≠ I_i in P with f(I) = f(I_i), and c(I_i, P) = 0 otherwise.
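A minimal Python sketch of this contribution, assuming a single numeric feature with values in [0, R]; the function name and list-based representation are illustrative, not from the slides.

```python
def diversity_contributions(feature_values, R):
    """Return c(I_i, P) for each instance, given its feature value f(I_i).

    Interior individuals get the product of the gaps to their sorted
    neighbours; individuals at the extreme feature values get R^2 if
    their value is unique in P, and 0 otherwise.
    """
    order = sorted(range(len(feature_values)), key=lambda i: feature_values[i])
    sorted_vals = [feature_values[i] for i in order]
    lo, hi = sorted_vals[0], sorted_vals[-1]
    contrib = [0.0] * len(feature_values)
    for rank, idx in enumerate(order):
        v = sorted_vals[rank]
        if v == lo or v == hi:
            # Boundary case: reward a unique extreme value with R^2.
            contrib[idx] = R ** 2 if sorted_vals.count(v) == 1 else 0.0
        else:
            # Interior case: (f(I_i) - f(I_{i-1})) * (f(I_{i+1}) - f(I_i)).
            contrib[idx] = (v - sorted_vals[rank - 1]) * (sorted_vals[rank + 1] - v)
    return contrib


# Example: the middle instance gets (0.4 - 0.1) * (0.9 - 0.4) = 0.15.
print(diversity_contributions([0.1, 0.4, 0.9], R=1.0))
```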

4 Experimental Setup

5 Experimental Setup
μ = 30 and λ = 5. The TSP and the 2-opt algorithm are considered as a case study. α_easy = 1 for instances of size 25 and 50, and α_easy = 1.03 for instances of size 100; α_hard = 1.15, 1.18, 1.2 for instances of size n = 25, 50, 100. For n = 25, 50, 100 and each of the 7 features, easy and hard instances are generated. Normal mutation with σ = with probability 0.9 and σ = 0.05 with probability 0.1 in a mutation step.
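A hedged sketch of this mutation operator, assuming cities are points in the unit square [0, 1]²; `sigma_small` stands in for the first σ value (missing from the transcript), and clamping to the unit square is an assumption, not stated on the slide.

```python
import random


def mutate(instance, sigma_small, sigma_large=0.05, p_small=0.9):
    """Create a mutated copy of a TSP instance (list of (x, y) city points).

    Per the slide, one of two sigma levels is chosen per mutation step
    (probability 0.9 for the small one, 0.1 for the large one); each
    coordinate is then perturbed with Gaussian noise and clamped to [0, 1].
    """
    sigma = sigma_small if random.random() < p_small else sigma_large
    return [(min(max(x + random.gauss(0.0, sigma), 0.0), 1.0),
             min(max(y + random.gauss(0.0, sigma), 0.0), 1.0))
            for x, y in instance]
```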

6 Results Boxplots of two example feature values over populations of 100 different hard or easy TSP instances with varying numbers of cities, without (a) or with (b) the diversity mechanism.

7 Results Some two-feature combinations provide a good classification between hard and easy instances.

8 Results Some two-feature combinations do not provide a clear separation between hard and easy instances.

9 Results Some three-feature combinations provide a good classification between hard and easy instances.

10 Classification based on multiple feature combinations
The weighted population diversity for a set of features {f_1, f_2, ..., f_k} is defined as the weighted sum of the normalised population diversity over these k features.
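A minimal sketch of this weighted sum, assuming each feature's population diversity has already been normalised to a comparable scale (e.g. to [0, 1]); the function name is illustrative.

```python
def weighted_population_diversity(normalised_diversities, weights):
    """Weighted sum of per-feature normalised population diversities.

    normalised_diversities: one value per feature f_1, ..., f_k.
    weights: one weight w_i per feature, e.g. (1, 1, 1) or (2, 1, 1).
    """
    assert len(normalised_diversities) == len(weights)
    return sum(w * d for w, d in zip(weights, normalised_diversities))
```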

11 Results of multiple feature combinations

12 Experimental setup – multiple feature combinations
μ = 30 and λ = 5. For the instance sizes 25, 50 and 100, some of the good three-feature combinations are chosen for exploration. The weight distributions for {f_1, f_2, f_3} are {1, 1, 1}, {2, 1, 1}, {1, 2, 1}, {1, 1, 2}, {2, 2, 1}, {2, 1, 2}, {1, 2, 2}; the sketch below shows how these weightings enter the diversity computation. The same hardness thresholds are used as in the previous experiments.
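For illustration, the weightings above plugged into the earlier `weighted_population_diversity` sketch, with made-up normalised diversity values:

```python
# Hypothetical normalised diversities for one three-feature combination.
d = (0.8, 0.5, 0.6)
for w in [(1, 1, 1), (2, 1, 1), (1, 2, 1), (1, 1, 2),
          (2, 2, 1), (2, 1, 2), (1, 2, 2)]:
    print(w, weighted_population_diversity(d, w))
```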

13 Results of multiple feature combinations

14 Summary A new methodology for evolving easy/hard instances that are diverse with respect to feature sets of the optimization problem at hand. It covers a much wider range of the feature space than previous methods and provides instances that are diverse with respect to the investigated features. Diversity can be evaluated over single or multiple feature values. A large set of diverse instances can be classified quite well into easy and hard instances when a suitable combination of three features is considered.

15 Acknowledgements European Union Seventh Framework Programme (FP7/2007–2013) under grant agreement no (SAGE); Australian Research Council under grant agreement DP

16 References
O. Mersmann, B. Bischl, H. Trautmann, M. Wagner, J. Bossek, and F. Neumann. A novel feature-based approach to characterize algorithm performance for the traveling salesperson problem. Annals of Mathematics and Artificial Intelligence, 69(2):151–182, 2013.
K. Smith-Miles and L. Lopes. Measuring instance difficulty for combinatorial optimization problems. Computers & Operations Research, 39(5):875–889, 2012.
K. Smith-Miles, J. van Hemert, and X. Y. Lim. Understanding TSP difficulty by learning from evolved instances. In 4th International Conference on Learning and Intelligent Optimization (LION), pages 266–280. Springer, 2010.
J. I. van Hemert. Evolving combinatorial problem instances that are difficult to solve. Evolutionary Computation, 14(4):433–462, 2006.

17 Thank You!

