A Hybrid Artificial Bee Colony Algorithm for the Cooperative Maximum Covering Location Problem
B. Jayalakshmi and Alok Singh (2015)
The Problem
- The Cooperative Cover Location Problems: the Planar Case was introduced in the paper by Berman, Drezner, and Krass (2009).
- It is a cooperative variant of the Maximum Covering problem.
- Facilities emit a "signal" that dissipates as distance increases; a demand point is covered if the total signal strength reaching it meets a certain threshold.
- The cooperative problem assumes that facilities cooperate, pooling their signals, to provide coverage to nodes.
Formulation of the Cooperative Maximum Covering Location Problem (CMCLP)
- Let $G = (V, E)$ be an undirected graph, where $V = \{1, \dots, n\}$ is the set of vertices and $E = \{e_1, \dots, e_m\}$ is the set of edges.
- $p$ is the number of facilities to locate and $T$ is the coverage threshold.
- Every node $i$ has a non-negative weight $w_i$ and every edge $e_k$ has a positive length $l_k$.
- The objective is to maximize the total weight of the covered demands: $f(X_p, T) = \sum_{i:\,\Phi_i(X_p) \ge T} w_i$.
- The signal strength at node $i$ is the total signal it receives from all facilities: $\Phi_i(X_p) = \sum_{k=1}^{p} \phi(d(i, x_k))$, where $X_p = \{x_1, \dots, x_p\}$ and $d(i, x_k)$ is the network distance from node $i$ to facility $x_k$.
- Facilities can be located on edges or on nodes.
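A minimal sketch in Python of the coverage and objective definitions above (illustrative only, not from the paper; the decay function phi, the distance function dist, and all names here are assumptions):

def phi(d, radius=10.0):
    # example signal decay: full strength at the facility, zero beyond `radius`
    return max(0.0, 1.0 - d / radius)

def coverage(i, X, dist):
    # Phi_i(X): total signal that node i receives from all facilities in X
    return sum(phi(dist(i, x)) for x in X)

def objective(X, T, weights, dist):
    # f(X, T): total weight of the nodes whose cooperative coverage reaches T
    return sum(w for i, w in weights.items() if coverage(i, X, dist) >= T)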
Example: Cooperative Maximum Covering Problem
- Consider the graph in Figure 1 with p = 3, threshold T = 0.5, and signal strength function $\phi(d) = \max\{0,\, 1 - d/10\}$.
- Solving the problem produces the solution $X = \{4,\ ([7,8],\ 0.6),\ ([1,2],\ 0.4)\}$: facility p1 is on node 4, p2 is on edge (7,8) at relative position t = 0.6, and p3 is on edge (1,2) at relative position t = 0.4.
- Node 9 shows how cooperative coverage works: node 9 is not covered by p2 or p3 on its own, but the signal it receives from p3 is $\phi(6.6) = 1 - 6.6/10 = 0.34$ and from p2 is $\phi(7.8) = 1 - 7.8/10 = 0.22$, which together give a signal strength of 0.56 at node 9.
- This leads to an objective value of 29, as all demands are covered.
[Figure 1: example network with facilities P1, P2, P3; not reproduced here.]
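A quick check of the node-9 arithmetic from the example (the distances 6.6 and 7.8 are taken from the slide):

phi = lambda d: max(0.0, 1.0 - d / 10.0)
signal_p3 = phi(6.6)                      # ~0.34
signal_p2 = phi(7.8)                      # ~0.22
print(round(signal_p3 + signal_p2, 2))    # 0.56 >= T = 0.5, so node 9 is covered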
Artificial Bee Colony (ABC) Algorithm
- First proposed by Karaboga in 2005.
- The algorithm uses three groups of "bees": scouts, employed bees, and onlooker bees.
- Initialize the algorithm by sending scouts to find initial food sources.
- Repeat until the stopping criterion is met:
  - Send employed bees to the food sources and determine their nectar amounts.
  - Calculate the probability values of the sources with which they are preferred by the onlooker bees.
  - Send onlooker bees to food sources and determine their nectar amounts.
  - Stop the exploitation of sources exhausted by the bees.
  - Randomly send scouts to search for new food sources.
  - Memorize the best food source found so far.
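A schematic sketch of this loop in Python (illustrative only; init_scouts, employed_phase, onlooker_phase, scout_phase, and fitness are assumed callables, with the individual phases sketched on the following slides):

def abc_loop(init_scouts, employed_phase, onlooker_phase, scout_phase, fitness, iterations):
    foods = init_scouts()                        # scouts find the initial food sources
    trials = [0] * len(foods)                    # stagnation counter per food source
    best = max(foods, key=fitness)
    for _ in range(iterations):
        employed_phase(foods, trials)            # employed bees search near their own sources
        onlooker_phase(foods, trials)            # onlookers favour richer sources
        scout_phase(foods, trials)               # abandon exhausted sources, send out scouts
        best = max(foods + [best], key=fitness)  # memorize the best source found so far
    return best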
Employed Bee Phase
- In the employed bee phase, each employed bee generates a food source in the proximity of its associated food source and evaluates its quality.
- If the new food source is better, the employed bee moves to that food source.
- The employed bee phase ends when all employed bees have finished this process; then the onlooker phase begins.
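A sketch of this phase for the driver above (neighbor and is_better are assumed callables; they could be bound, e.g. with functools.partial, before being passed to abc_loop):

def employed_phase(foods, trials, neighbor, is_better):
    for k, x in enumerate(foods):
        y = neighbor(x)                  # food source generated in the proximity of x
        if is_better(y, x):
            foods[k], trials[k] = y, 0   # the bee moves to the better source
        else:
            trials[k] += 1               # count iterations without improvement (used by scouts)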
Onlooker Bee Phase
- Employed bees share the information about their food sources with the onlookers.
- Onlookers select food sources based on their quality, i.e., the fitness of the corresponding solutions.
- After the onlookers select food sources, they search for food sources in their proximity, and among all the food sources in all these neighborhoods the best-quality food source is determined.
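A sketch of the onlooker phase consistent with the description above (select implements the quality-biased choice, e.g. the binary tournament described later; accepting the winner per selected source is an interpretation of how the best neighborhood result is retained):

def onlooker_phase(foods, trials, n_onlookers, select, neighbor, is_better):
    for _ in range(n_onlookers):
        k = select(foods)                # quality-biased choice of a food source
        y = neighbor(foods[k])           # search in the proximity of the chosen source
        if is_better(y, foods[k]):
            foods[k], trials[k] = y, 0   # keep the better source found in this neighborhood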
How and Why This Approach Works
- In the employed bee phase, all solutions are equally likely to be improved, while in the onlooker phase good-quality solutions are more likely to be improved than poor-quality ones.
- The inclination toward selecting good-quality solutions, while searching in their proximity for better solutions, lends strength to the algorithm's search for an optimal solution.
- The scout bees, by randomly searching for new food sources, aim to avoid the local optima in which the employed and onlooker bees might otherwise get trapped.
ABC Algorithm and the CMCLP
- In this approach, a solution X' is considered better than X if it has a better objective value or if it provides a larger total coverage, $\sum_{i \in V} \Phi_i(X') > \sum_{i \in V} \Phi_i(X)$.
- Food source selection for the onlookers is a binary tournament: two food sources are picked uniformly at random, and the better of the two is selected with probability $p_{onl}$ and the worse with probability $1 - p_{onl}$.
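A sketch of the comparison rule and the binary tournament (Python; objective and total_coverage are assumed to implement f and the sum of Phi_i over V, and treating the coverage condition as a tie-break on equal objective values is an interpretation, not stated explicitly on the slide):

import random

def is_better(X_new, X_old, objective, total_coverage):
    if objective(X_new) != objective(X_old):
        return objective(X_new) > objective(X_old)          # better objective value wins
    return total_coverage(X_new) > total_coverage(X_old)    # otherwise compare total coverage

def binary_tournament(foods, better, p_onl=0.8):
    # `better(x, y)` is is_better with objective/total_coverage bound in
    a, b = random.sample(range(len(foods)), 2)   # two sources, uniformly at random
    if not better(foods[a], foods[b]):
        a, b = b, a                              # make `a` the better of the two
    return a if random.random() < p_onl else b   # the better one wins with probability p_onl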
Determining an Initial Solution
- Start from an empty solution and add one facility at a time.
- In each iteration, to add a facility to the current partial solution S, compute the set of points Y that provide exact or greater coverage for at least one yet-uncovered node in V': $Y = V \cup \{x \in G : \Phi_i(x) \ge T_i(S) \text{ for some } i \in V'\}$. Candidate points on edges are considered at relative intervals of 0.1.
- Evaluate the points in Y by how much coverage they provide, take the R best points, and select one of them at random.
- After locating a facility, the residual threshold is updated as $T_i(S) = \max\{0,\, T - \Phi_i(S)\}$ for $i \in V$.
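A sketch of this greedy randomized construction (Python; candidate_points is assumed to yield all nodes plus points at relative positions 0.1, ..., 0.9 along each edge, dist gives network distances, gain scores a point by the weight it would newly cover, and the empty-Y fallback is an assumption not covered on the slide):

import random

def construct_solution(p, T, nodes, weights, candidate_points, dist, phi, R=20):
    S = []
    residual = {i: T for i in nodes}                # T_i(S): remaining threshold per node
    for _ in range(p):
        uncovered = [i for i in nodes if residual[i] > 0]
        def gain(x):                                # weight of uncovered nodes that x would cover
            return sum(weights[i] for i in uncovered if phi(dist(i, x)) >= residual[i])
        Y = [x for x in candidate_points() if gain(x) > 0]
        if not Y:                                   # assumed fallback: no single point suffices
            Y = list(candidate_points())
        Y.sort(key=gain, reverse=True)
        S.append(random.choice(Y[:R]))              # one of the R best points, chosen at random
        for i in nodes:                             # update residual thresholds T_i(S)
            residual[i] = max(0.0, residual[i] - phi(dist(i, S[-1])))
    return S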
Neighborhood Solution Generation
- To generate a solution X' in the neighborhood of X, the algorithm deletes F facilities from X at random.
- It then adds F facilities from a set $F_{new}$ that contains all points x in the graph that provide exact coverage for at least one yet-uncovered node in V': $F_{new} = Z \setminus X$, where $Z = V \cup \{x \in G : \Phi_i(x) = T_i(X)\}$.
- The facilities to add are chosen at random from the R best points, and $F_{new}$ is updated after each facility is added.
- This is similar to a greedy search, except for the random selection among the best points.
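A sketch of this move (Python; as above, the helper arguments are assumptions, and "exact coverage" is treated here as reaching the residual threshold, which may be stricter in the paper):

import random

def neighbor_solution(X, F, T, nodes, weights, candidate_points, dist, phi, R=20):
    S = list(X)
    for x in random.sample(list(S), F):             # delete F facilities at random
        S.remove(x)
    for _ in range(F):                              # re-add F facilities, one at a time
        residual = {i: max(0.0, T - sum(phi(dist(i, y)) for y in S)) for i in nodes}
        uncovered = [i for i in nodes if residual[i] > 0]
        def gain(x):
            return sum(weights[i] for i in uncovered if phi(dist(i, x)) >= residual[i])
        F_new = [x for x in candidate_points() if x not in S and gain(x) > 0]
        if not F_new:                               # assumed fallback if F_new is empty
            F_new = [x for x in candidate_points() if x not in S]
        F_new.sort(key=gain, reverse=True)
        S.append(random.choice(F_new[:R]))          # one of the R best points, at random
    return S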
Other Features of the Algorithm
- If an employed bee's solution does not improve for a specified number of iterations, the associated employed bee becomes a scout.
- There is no restriction on the number of scout bees in an iteration.
- A scout bee is re-employed by assigning it to a new solution generated in the same way as the initial solutions.
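A sketch of this scout rule plugged into the driver above (construct_solution is the construction sketched earlier; the default limit of 50 is taken from the parameter slide):

def scout_phase(foods, trials, construct_solution, limit=50):
    for k in range(len(foods)):
        if trials[k] >= limit:                  # any number of bees may become scouts per iteration
            foods[k] = construct_solution()     # the scout is re-employed on a fresh solution
            trials[k] = 0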
Local Search as a Means of Improvement
- A local search is used to improve the algorithm.
- Each facility x in a solution S is considered one by one, and the best point $b_x$ to relocate it to is determined.
- To find this location, the set $F_{new}$ for the partial solution $S \setminus \{x\}$ is computed.
- If the solution $S \setminus \{x\} \cup \{b_x\}$ is better than S, then S is replaced with the new solution.
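A sketch of this relocation step (Python; best_point is assumed to return the best member of F_new for the partial solution, and is_better is the comparison rule sketched earlier):

def local_search(S, best_point, is_better):
    for x in list(S):                        # consider each facility one by one
        partial = [y for y in S if y != x]   # S \ {x}
        b = best_point(partial)              # best relocation point b_x for x
        candidate = partial + [b]
        if is_better(candidate, S):          # accept the relocation only if it improves S
            S = candidate
    return S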
Pseudocode
[The paper's pseudocode figure is not reproduced here; the sketches above show how the individual pieces fit together.]
Computational Results: Setup
- The authors compare their results on the same instances used in the paper Cooperative Covering Problems on Networks by Averbakh, Berman, Krass, Kalcsics, and Nickel (2014).
- Five instances are generated for each number of nodes n in {40, 60, 80, 100, 120, 140, 160, 180, 200} and each average degree in {5, 6, 7}.
- The problem is solved with p = 3, 4, 5 for n = 40, 60, 80; p = 4, 5, 6 for n = 100, 120, 140; and p = 5, 6, 7 for n = 160, 180, 200.
- Three signal thresholds T are used: {0.6, 0.8, 1}.
- These settings result in 3625 instances.
- The algorithm uses 10 employed bees and 20 onlooker bees, and the limit on an employed bee's unsuccessful searches is set to 50.
- $p_{onl}$ = 0.9 for U (the fraction of the diameter of the network) = 0.15, 0.25, 0.35 and T = 0.6, while $p_{onl}$ = 0.8 for all other combinations.
- R = 20, and F = 2 for n = 40, 60, 80 with p = 3, 4, 5; F = 3 for n = 100, 120, 140 with p = 4, 5, 6; F = 4 for n = 160, 180, 200 with p = 5, 6, 7.
- The algorithm terminates after 500 iterations.
- All computations were done in C on a Linux-based Intel Core i5 2400.
Computational Results
- The authors compare their results with the interchange algorithms proposed in Cooperative Covering Problems on Networks (Averbakh et al., 2014).
- The improvement of algorithm A over algorithm B is measured as $100 \times \frac{f(S_A) - f(S_B)}{f(S_B)}$, where $S_A$ is the solution from algorithm A and $S_B$ is the solution from algorithm B.
- [Results tables are not reproduced here; in them, R denotes the average execution time, and results for the largest instances (n = 200) are shown.]
Conclusion
- The ABC algorithm outperforms the existing interchange algorithms in solution quality, but it is slower than the existing methods.
- This is the first metaheuristic presented for the CMCLP: the tabu search and neighborhood-search approaches experimented with previously resulted in only slight improvements over the interchange algorithms and did not constitute full metaheuristics.
- This paper shows that population-based metaheuristics are an appropriate tool for solving the CMCLP.
Questions?