1
Discrete ABC Based on Similarity for GCP
Kui Chen, Department of Computer Science, Graduate School of Systems and Information Engineering, University of Tsukuba, Japan
Hitoshi Kanoh, Division of Information Engineering, Faculty of Engineering, Information and Systems, University of Tsukuba, Japan
2
Outline of Presentation
Problem definition
Related work
Proposed method
Experiments
Conclusion
TPNC 2016
3
Problem Definition
In this part, we define the graph coloring problem and the objective function.
4
Graph Coloring Problem
Graph coloring problem: given an undirected graph with a set of vertices and edges, each vertex is assigned one of k colors so that no two vertices connected by an edge have the same color. The figure shows the coding scheme of GCP used in our paper.
5
Conflict of a Graph
Definition of conflict: the number of edges whose two endpoints are assigned the same color.
For example, the conflict of the graph given on slide 4 is 1.
6
Definition of Objective Function
The objective function converges to the optimal value 1 when no two adjacent vertices are assigned the same color.
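As a concrete illustration, the conflict count and an objective of this shape can be sketched in Python. The exact formula is given on the slide image; 1 / (1 + conflict) below is only an assumed form that reaches its optimum 1 exactly when no two adjacent vertices share a color:

```python
def conflict(coloring, edges):
    # number of edges whose two endpoints are assigned the same color
    return sum(coloring[a] == coloring[b] for a, b in edges)

def fitness(coloring, edges):
    # assumed objective form: equals 1 exactly when the conflict is 0
    return 1.0 / (1.0 + conflict(coloring, edges))
```

For the triangle graph [(0, 1), (1, 2), (0, 2)], the coloring [0, 1, 2] has conflict 0 and fitness 1.0, while [0, 0, 1] has conflict 1 and fitness 0.5.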
7
Related Work
In this part, we introduce the original artificial bee colony algorithm and some related work on the graph coloring problem.
8
Original Artificial Bee Colony
The original artificial bee colony is an optimization algorithm in swarm intelligence. It consists of 3 bee groups:
Employed bees: explore the search space and discover food sources randomly.
Onlooker bees: select a food source with a probability according to its fitness and update the food source.
Scout bees: abandon a food source which cannot be improved within a predefined iteration number limit and generate a new food source randomly.
*Food sources: the candidate solutions of GCP.
*limit: the only parameter which must be set by the user.
9
Original Update Strategy
Original ABC's update strategy:
v_ij = x_ij + φ_ij (x_ij − x_kj)
where i, k ∈ {1, 2, 3, …, N} (N is the swarm size), j ∈ {1, 2, 3, …, n} (n is the number of vertices), k ≠ i, and φ_ij is a uniform random number in [-1, 1].
The original update strategy can solve continuous optimization problems only; our goal is to discretize it directly, without any hybrid algorithm.
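The continuous update above can be sketched in Python: one randomly chosen dimension j is perturbed, followed by greedy selection. The function name `abc_update` and its argument names are our own:

```python
import random

def abc_update(swarm, i, fitness):
    # One employed-bee step of the original (continuous) ABC:
    # v_ij = x_ij + phi * (x_ij - x_kj), then greedy selection.
    N, n = len(swarm), len(swarm[i])
    k = random.choice([s for s in range(N) if s != i])   # k != i
    j = random.randrange(n)                              # one random dimension
    phi = random.uniform(-1.0, 1.0)
    v = list(swarm[i])
    v[j] = swarm[i][j] + phi * (swarm[i][j] - swarm[k][j])
    if fitness(v) > fitness(swarm[i]):                   # keep the better one
        swarm[i] = v
    return swarm
```

Greedy selection guarantees a solution is never replaced by a worse candidate, which is why the stuck-counter of the scout phase is needed to escape local optima.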
10
Related Work on GCP (1)
Takuya Aoki, "PSO algorithm with transition probability based on Hamming distance for graph coloring problem," 2015.
Problems: experiments on graph 3-coloring problems show that HDPSO is an efficient method and obtains better results than GA. However, when the graph size becomes large, its performance deteriorates.
11
Related Work on GCP (2)
Iztok Fister Jr., "A hybrid artificial bee colony algorithm for graph 3-coloring," 2012.
Problems: though the hybrid ABC outperforms the original ABC, it is designed for solving graph 3-coloring problems only and is difficult to apply to other discrete optimization problems.
12
Proposed Method
In this part, we present the proposed method: Discrete Artificial Bee Colony Based on Similarity.
13
Definition of Similarity
The Similarity is defined as below:
1. The Hamming distance between two solutions.
2. rn: a uniformly distributed random number in [0, 1].
3. n: the number of vertices in a given graph.
4. Similarity: describes the degree of similarity between two solutions.
14
Update Strategy β The Procedure
Procedure update-strategy(swarm):
    for each solution x_i in swarm:
        randomly select another solution x_k
        calculate the Similarity between x_i and x_k
        generate a random number r in [0, 1]
        if r < Similarity:
            randomly choose u components from x_k and replace the corresponding components of x_i by them
            if fitness(new x_i) > fitness(old x_i):
                replace x_i by the new solution
            else:
                keep x_i unchanged
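A minimal Python sketch of one such update step. The exact Similarity formula combining the Hamming distance, rn, and n is given on the slide image, so the `1 - HD/n` expression below is only an assumed placeholder, and the function names are ours:

```python
import random

def hamming(a, b):
    # number of positions where two colorings differ
    return sum(x != y for x, y in zip(a, b))

def update_one(swarm, i, fitness, u=2):
    # Similarity-based update of solution x_i = swarm[i] (a sketch).
    n = len(swarm[i])
    k = random.choice([s for s in range(len(swarm)) if s != i])
    similarity = 1.0 - hamming(swarm[i], swarm[k]) / n   # assumed form
    if random.random() < similarity:
        cand = list(swarm[i])
        for j in random.sample(range(n), u):   # copy u components from x_k
            cand[j] = swarm[k][j]
        if fitness(cand) > fitness(swarm[i]):  # greedy selection
            swarm[i] = cand
```

Note that the update only fires when the two solutions are similar enough, so distant solutions are less likely to exchange components.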
15
Update Strategyβ Key Points
There are two key points in our update strategy:
1. Parameter u should not be too large. We found that if u is very large, the candidate solutions become very similar and the diversity of the swarm is lost, so we set u to a small integer.
2. The random number rn is used to extend the search range so that more candidate solutions can take part in improving the current solution.
16
Main Procedure
Procedure proposed-ABC:
    initialization
    while (optimal solution is not found) and (loop count < max iteration times):
        send employed bees (explore the search space)
        send onlooker bees (update candidate solutions and find the optimal solution)
        send scout bees (abandon a solution which cannot be improved in limit loop times)
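Putting the phases together, the main procedure can be sketched end to end. The fitness form, the Similarity form, and the merging of the employed and onlooker phases into a single loop are all our assumptions for illustration; the slide only names the three bee phases:

```python
import random

def proposed_abc(edges, n, k=3, N=200, limit=90, u=2, max_iter=10000, seed=0):
    # Sketch of the main procedure under assumed fitness/Similarity forms.
    rng = random.Random(seed)

    def conflict(c):
        return sum(c[a] == c[b] for a, b in edges)

    def fitness(c):
        return 1.0 / (1.0 + conflict(c))          # assumed objective, optimum = 1

    # initialization: N random k-colorings of n vertices
    swarm = [[rng.randrange(k) for _ in range(n)] for _ in range(N)]
    trials = [0] * N
    for _ in range(max_iter):
        for i in range(N):
            x = rng.choice([s for s in range(N) if s != i])
            hd = sum(a != b for a, b in zip(swarm[i], swarm[x]))
            similarity = 1.0 - hd / n             # assumed form
            if rng.random() < similarity:
                cand = list(swarm[i])
                for j in rng.sample(range(n), u):  # copy u components
                    cand[j] = swarm[x][j]
                if fitness(cand) > fitness(swarm[i]):
                    swarm[i], trials[i] = cand, 0
                    continue
            trials[i] += 1
        best = max(swarm, key=fitness)
        if conflict(best) == 0:                   # optimal solution found
            return best
        # scout phase: abandon solutions stuck for more than `limit` loops
        for i in range(N):
            if trials[i] > limit:
                swarm[i] = [rng.randrange(k) for _ in range(n)]
                trials[i] = 0
    return max(swarm, key=fitness)
```

On small 3-colorable graphs this sketch finds a conflict-free coloring quickly; the scout phase prevents the greedy selection from stalling the whole swarm.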
17
Experiments
In this part, we design experiments to compare our method with HDPSO and show the advantages of the proposed method.
18
Constraint Density
The constraint density indicates the level of difficulty of a given graph. The rough relationship between constraint density and difficulty is shown below [Hogg, 1994]:
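If we read the constraint density d as the edge-to-vertex ratio m/n (an assumption on our part; the slide's exact definition is in the figure), a random test graph of a given density can be generated like this:

```python
import itertools
import random

def random_graph(n, d, seed=1):
    # Random graph with n vertices and m = round(d * n) distinct edges,
    # assuming constraint density d = edges / vertices.
    rng = random.Random(seed)
    m = round(d * n)
    all_edges = list(itertools.combinations(range(n), 2))
    return rng.sample(all_edges, m)
```

For n = 90 and d = 2.5 this yields a graph with 225 edges, matching the hardest setting used in the parameter experiments.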
19
Parameter Dependence β limit (1)
First of all, we must examine the dependence of the success rate on the parameters u and limit. To examine parameter limit, we use the settings below:
Swarm size (N): 200
Graph size (n): 90
Constraint density (d): 2.5 (the most difficult problem)
Max iteration: 10000
u: 3
Run: 50
20
Parameter Dependence β limit (2)
The result is shown below. The optimal limit value is 90.
21
Parameter Dependence β u (1)
Then, we fix limit at its optimal value 90 and examine the success rate for different u. The parameter settings are:
Swarm size (N): 200
Graph size (n): 90
Constraint density (d): 2.5 (the most difficult problem)
Max iteration: 10000
limit: 90
Run: 50
22
Parameter Dependence β u (2)
The result is shown below. The optimal u value is 2.
23
Comparative Study
The proposed method is compared with HDPSO at different constraint densities to evaluate its performance. The two algorithms are compared on two measures:
Success rate
Average number of evaluations of the objective function
24
Parameter Settings
Swarm size (N): 200
Graph size (n): 90, 120 and 150
Constraint density (d): from 1.5 to 10
Max iteration: 10000
u: 2 (the optimal value)
limit: 90 (the optimal value)
Run: 100
25
Success Rate (n = 90)
The result is shown below.
26
Success Rate (n = 120)
The result is shown below.
27
Success Rate (n = 150)
The result is shown below.
28
Average Evaluation (n = 90)
The result is shown below.
29
Average Evaluation (n = 120)
The result is shown below.
30
Average Evaluation (n = 150)
The result is shown below.
31
Conclusion
In this final part, we give our conclusion and future work.
32
Conclusion
We have introduced a new discrete ABC based on Similarity. From the experiments, we find:
1. The success rate of the proposed ABC is much higher than that of HDPSO. For example, when n = 150, the success rates of the proposed ABC and HDPSO are 20% and 1%, respectively.
2. The proposed ABC is much faster than HDPSO. Even when both success rates are 100% (d larger than 4), the average number of evaluations of the proposed ABC is much lower than that of HDPSO.
In summary, the proposed ABC is effective and outperforms HDPSO dramatically.
33
Future Work
The Similarity has now been used in discrete PSO and ABC, so we think it may also be applied to discretize other swarm optimization algorithms.
34
Acknowledgments
We wish to thank Dr. Claus Aranha of the University of Tsukuba for his helpful comments and suggestions. This work is supported by JSPS KAKENHI Grant Number 15K00296.