
Course: Logic Programming and Constraints


1 Course: Logic Programming and Constraints
Appendix B: 1. Tabu Search for Graph Coloring  2. Variable Neighborhood Search

2 1. Tabu Search for Graph Coloring
Definitions and notation. Given a graph G = (V, E) with vertex set V and edge set E, and given an integer k, a k-coloring of G is a function c : V → {1, 2, …, k}. The value c(x) of a vertex x is called the color of x. The vertices with color i (1 ≤ i ≤ k) define a color class, denoted Vi. If two adjacent vertices x and y have the same color i, then the vertices x and y, the edge [x, y] and the color i are said to be conflicting. A k-coloring without conflicting edges is said to be legal, and its color classes are called stable sets. The Graph Coloring Problem (GCP) is to determine the smallest integer k, called the chromatic number of G, such that there exists a legal k-coloring of G.

3 How to solve the GCP. Given a fixed integer k, the problem of deciding whether there exists a legal k-coloring of G is called the k-GCP. An algorithm that solves the k-GCP can be used to solve the GCP. Scheme:
1. Use a local search algorithm to find a legal coloring c.
2. Set k* equal to the number of colors used in c and set k := k* − 1.
3. Solve the k-GCP; if a legal k-coloring is found then go to step 2, else return k*.
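The scheme above can be sketched in Python. This is only a minimal illustration: the local-search step of the scheme is replaced here by a brute-force k-GCP solver (viable only for tiny graphs), and the dictionary-based graph representation is an assumption made for the example, not something prescribed by the slides.

```python
from itertools import product

def is_legal(graph, coloring):
    # A coloring is legal if no edge joins two vertices of the same color.
    return all(coloring[u] != coloring[v] for u, v in graph["edges"])

def solve_k_gcp(graph, k):
    # Brute-force k-GCP: try every assignment V -> {0, ..., k-1}.
    # In practice a local search such as Tabucol would replace this step.
    for assignment in product(range(k), repeat=len(graph["vertices"])):
        coloring = dict(zip(graph["vertices"], assignment))
        if is_legal(graph, coloring):
            return coloring
    return None

def chromatic_number(graph):
    # Decremental scheme: start from a trivial upper bound (one color per
    # vertex stands in for the initial legal coloring), then repeatedly
    # try to solve the k-GCP with one color fewer.
    k_star = len(graph["vertices"])
    while solve_k_gcp(graph, k_star - 1) is not None:
        k_star -= 1
    return k_star
```

For example, a triangle needs 3 colors while a 3-vertex path needs only 2.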

4 The Tabucol algorithm. Tabucol was introduced in 1987 by Hertz and de Werra. Tabucol is a tabu search algorithm for the k-GCP. It first generates an initial random k-coloring, which typically contains a large number of conflicting edges. Then, the algorithm iteratively modifies the color of a single vertex, the objective being to decrease the number of conflicting edges until a legal k-coloring is obtained. A tabu list is used in order to escape from local optima and to avoid short-term cycling.

5 The search space S is the set of k-colorings of a given graph G.
A solution c ∈ S is a partition of the vertex set into k subsets V1, …, Vk. The evaluation function f measures the number of conflicting edges: for a solution c = (V1, …, Vk) ∈ S, f(c) = Σ_{i=1}^{k} |Ei|, where Ei denotes the set of edges with both endpoints in Vi (i.e. the conflicting edges of color class i). The goal of Tabucol is to determine a k-coloring c such that f(c) = 0. An elementary transformation, called a 1-move, consists in changing the color of a single vertex.

6 For a vertex v and a color i ≠ c(v), we denote by (v, i) the 1-move that assigns color i to v, and the solution resulting from this 1-move is denoted c ⊕ (v, i). The k-coloring c' = c ⊕ (v, i) can be described as follows:
c'(v) = i
c'(w) = c(w) for all w ∈ V \ {v}
The neighborhood N(c) of a solution c ∈ S is defined as the set of k-colorings that can be reached from c by applying a single 1-move; N(c) contains |V|(k − 1) solutions. The performance of a 1-move (v, i) on a solution c is measured by δ(v, i) = f(c ⊕ (v, i)) − f(c). Tabucol involves the notion of critical vertices (i.e. vertices involved in a conflicting edge); F(c) denotes the number of conflicting vertices in c. A 1-move (v, i) that involves a critical vertex is said to be critical. Tabucol performs only critical 1-moves.

7 When the 1-move (v, i) is applied to c, the move (v, c(v)) becomes tabu for L + λ·F(c) iterations. The duration of the tabu status of (v, c(v)) thus depends on the number of conflicting vertices in c and on two parameters L and λ. A 1-move (v, i) is a candidate if it is both critical and not tabu, or if f(c ⊕ (v, i)) = 0; the last condition is a very elementary aspiration criterion. At each iteration, Tabucol performs the best candidate 1-move. The algorithm stops as soon as f(c) = 0 and returns c, which is a legal k-coloring. Note: the size of the tabu list increases with the number of conflicting vertices. (L is chosen in [0, 9] and λ = 0.6.)

8 Algorithm Tabucol
Input: a graph G = (V, E) and an integer k > 0.
Parameters: MaxIter, L and λ.
Output: solution c*.
Build a random solution c; set c* := c and iter := 0;
Set the tabu list to the empty list;
Repeat until f(c) = 0 or iter = MaxIter:
  Set iter := iter + 1;
  Choose a candidate 1-move (v, i) with minimum value δ(v, i);
  Introduce the move (v, c(v)) into the tabu list for L + λ·F(c) iterations;
  Set c := c ⊕ (v, i);
  If f(c) < f(c*) then set c* := c;
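The pseudocode above can be sketched compactly in Python. This is an illustration under stated assumptions, not the authors' implementation: the adjacency-list representation is a choice made here, and the tenure uses a random component drawn from [0, L] plus λ·F(c), one plausible reading of the slides' parameter range L ∈ [0, 9].

```python
import random

def tabucol(vertices, edges, k, L=9, lam=0.6, max_iter=10_000, seed=0):
    # Tabu search for the k-GCP. Tabu tenure = random(0..L) + lam * F(c),
    # where F(c) is the number of conflicting vertices.
    rng = random.Random(seed)
    adj = {v: [] for v in vertices}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)

    c = {v: rng.randrange(k) for v in vertices}            # random k-coloring
    f_c = sum(1 for u, w in edges if c[u] == c[w])         # conflicting edges
    tabu = {}                                              # (v, i) -> expiry iteration

    for it in range(1, max_iter + 1):
        if f_c == 0:
            return c                                       # legal k-coloring found
        critical = [v for v in vertices if any(c[v] == c[w] for w in adj[v])]
        F = len(critical)
        best_move, best_delta = None, None
        for v in critical:                                 # only critical 1-moves
            old_conf = sum(1 for w in adj[v] if c[w] == c[v])
            for i in range(k):
                if i == c[v]:
                    continue
                delta = sum(1 for w in adj[v] if c[w] == i) - old_conf
                aspirated = f_c + delta == 0               # aspiration criterion
                if tabu.get((v, i), 0) >= it and not aspirated:
                    continue                               # tabu and not aspirated
                if best_delta is None or delta < best_delta:
                    best_move, best_delta = (v, i), delta
        if best_move is None:
            continue                                       # every move tabu: skip
        v, i = best_move
        tabu[(v, c[v])] = it + rng.randint(0, L) + int(lam * F)
        c[v] = i
        f_c += best_delta
    return None                                            # no legal coloring found
```

Plugged into the decremental scheme of slide 3, repeated calls to this routine with decreasing k solve the GCP.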

9 2. Variable Neighborhood Search
Main idea of VNS. Variable neighborhood search (VNS) is a metaheuristic proposed by Mladenović and Hansen in 1997. It is based on a simple principle: a systematic change of neighborhood within the search. Many extensions have been made to allow the solving of large problem instances, and there are many applications.

10 Basic scheme. Let Nk (k = 1, 2, …, kmax) be a finite set of pre-selected neighborhood structures, with Nk(x) the set of solutions in the k-th neighborhood of x. The neighborhoods Nk may be induced from one or more metric (or quasi-metric) functions introduced into the solution space S. An optimal solution xopt is a feasible solution where a minimum of the cost function f is reached. We call x' ∈ X (the feasible set) a local minimum of f w.r.t. Nk if there is no solution x ∈ Nk(x') ∩ X such that f(x) < f(x'). VNS is based on three simple facts:
Fact 1: a local minimum w.r.t. one neighborhood structure is not necessarily so with another.
Fact 2: a global minimum is a local minimum w.r.t. all possible neighborhood structures.
Fact 3: for many problems, local minima w.r.t. one or several Nk are relatively close to each other.

11 Basic scheme (cont.). In order to find a local minimum using several neighborhoods, facts 1–3 can be used in three different ways: (i) deterministic, (ii) stochastic, and (iii) both deterministic and stochastic. The variable neighborhood descent (VND) method is obtained if the change of neighborhood is performed in a deterministic way. The reduced VNS (RVNS) method is obtained if random points are selected from Nk(x) without being followed by descent.

12 Steps of the basic Variable Neighborhood Descent (VND)
Initialization. Select the set of neighborhood structures Nk, for k = 1, 2, …, kmax, that will be used in the descent; find an initial solution x.
Repeat the following sequence until no improvement is obtained:
(1) Set k := 1;
(2) Repeat the following steps until k = kmax:
  (a) Exploration of neighborhood: find the best neighbor x' of x (x' ∈ Nk(x));
  (b) Move or not: if the solution x' thus obtained is better than x, set x ← x' and k ← 1; otherwise, set k ← k + 1;
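The steps above can be sketched generically in Python. The subset-sum objective and the two move sets below are hypothetical, chosen only because single-item flips get trapped in a local minimum that two-item flips escape (Fact 1 of slide 10):

```python
def vnd(x, f, neighborhoods):
    # Variable Neighborhood Descent: explore N1, N2, ... in order; after any
    # improvement restart from N1; stop when x is a local minimum w.r.t. all Nk.
    k = 0
    while k < len(neighborhoods):
        best = min(neighborhoods[k](x), key=f)     # exploration of neighborhood
        if f(best) < f(x):                         # move or not
            x, k = best, 0
        else:
            k += 1
    return x

# Hypothetical instance: pick a subset of weights whose sum is closest to 5.
weights = [8, 5, 3]

def cost(sel):
    return abs(sum(weights[i] for i in sel) - 5)

def flip_one(sel):                                 # N1: add/remove one item
    return [sel ^ frozenset({i}) for i in range(len(weights))]

def flip_two(sel):                                 # N2: flip two items at once
    return [sel ^ frozenset({i, j})
            for i in range(len(weights)) for j in range(i + 1, len(weights))]

# Starting from {8}: no single flip improves, but N2 reaches {5} with cost 0.
print(vnd(frozenset({0}), cost, [flip_one, flip_two]))   # frozenset({1})
```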

13 Reduced VNS. The reduced VNS (RVNS) method is obtained if random points are selected from Nk(x) without being followed by descent. RVNS is useful for very large instances for which local search is costly. The best value for the parameter kmax is often 2.

14 Steps of the Reduced VNS (RVNS)
Initialization. Select the set of neighborhood structures Nk, for k = 1, 2, …, kmax, that will be used in the search; find an initial solution x; choose a stopping condition.
Repeat the following sequence until the stopping condition is met:
(1) Set k := 1;
(2) Repeat the following steps until k = kmax:
  (a) Shaking: generate a point x' at random from the k-th neighborhood of x (x' ∈ Nk(x));
  (b) Move or not: if this point is better than the incumbent, move there (x ← x'), and continue the search with N1 (k ← 1); otherwise, set k ← k + 1;
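These steps admit a short generic sketch. The convex toy objective and the two neighborhoods are hypothetical, and an iteration budget stands in for the stopping condition:

```python
import random

def rvns(x, f, neighborhoods, max_iter=500, seed=1):
    # Reduced VNS: draw one random point from Nk(x) (shaking), move only if
    # it improves the incumbent, otherwise widen the neighborhood; there is
    # no descent step at all.
    rng = random.Random(seed)
    for _ in range(max_iter):                          # stopping condition
        k = 0
        while k < len(neighborhoods):
            x_new = rng.choice(neighborhoods[k](x))    # (a) shaking
            if f(x_new) < f(x):                        # (b) move or not
                x, k = x_new, 0
            else:
                k += 1
    return x

def f(x):                              # toy objective, minimized at x = 13
    return (x - 13) ** 2

near = lambda x: [x - 1, x + 1]        # N1: small random steps
far = lambda x: [x - 10, x + 10]       # N2: large random steps

print(rvns(0, f, [near, far]))         # 13
```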

15 The stopping condition may be:
maximum CPU time allowed;
maximum number of iterations;
maximum number of iterations between two improvements.
The point x' is generated at random in step (a) in order to avoid cycling, which might occur if a deterministic rule were used. The basic VNS method combines deterministic and stochastic changes of neighborhood. In the basic VNS, the local search step (b) may be replaced by VND. Using VNS/VND has led to successful applications.

16 Steps of the basic VNS
Initialization. Select the set of neighborhood structures Nk, for k = 1, 2, …, kmax, that will be used in the search; find an initial solution x; choose a stopping condition.
Repeat the following sequence until the stopping condition is met:
(1) Set k := 1;
(2) Repeat the following steps until k = kmax:
  (a) Shaking: generate a point x' at random from the k-th neighborhood of x (x' ∈ Nk(x));
  (b) Local search: apply some local search method with x' as the initial solution; denote by x'' the local optimum so obtained;
  (c) Move or not: if this local optimum is better than the incumbent, move there (x ← x''), and continue the search with N1 (k ← 1); otherwise, set k ← k + 1;
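A generic sketch of these steps, combining shaking with a best-improvement descent (the slides note the local search step may itself be replaced by VND). The rugged toy objective and its neighborhoods are hypothetical, chosen so that plain descent stalls while VNS does not:

```python
import random

def local_search(x, f, step):
    # Best-improvement descent used as step (b).
    while True:
        best = min(step(x), key=f)
        if f(best) >= f(x):
            return x
        x = best

def basic_vns(x, f, shakes, step, max_iter=100, seed=0):
    rng = random.Random(seed)
    for _ in range(max_iter):                       # stopping condition
        k = 0
        while k < len(shakes):
            x1 = rng.choice(shakes[k](x))           # (a) shaking
            x2 = local_search(x1, f, step)          # (b) local search
            if f(x2) < f(x):                        # (c) move or not
                x, k = x2, 0
            else:
                k += 1
    return x

def f(x):
    # Rugged toy objective: a local minimum at every multiple of 4,
    # global minima at x = 28 and x = 32 (value 2).
    return abs(x - 30) + 10 * (x % 4)

step = lambda x: [x - 1, x + 1]                     # descent neighborhood
shake5 = lambda x: [x - 5, x + 5]                   # N1
shake11 = lambda x: [x - 11, x + 11]                # N2

print(f(local_search(0, f, step)))                  # 30: descent alone is stuck at x = 0
print(f(basic_vns(0, f, [shake5, shake11], step)))  # 2: shaking escapes the trap
```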

17 C. Avanthay, A. Hertz and N. Zufferey, A variable neighborhood search for graph coloring, European Journal of Operational Research 151, 2003. They designed 12 different large neighborhoods. The reported experiments show that the method is more efficient than Tabucol alone.

