Michele Samorani, Manuel Laguna
Problem
PROBLEM: In meta-heuristic methods based on neighborhood search, whenever a local optimum is encountered, the search has to escape from it
– Tabu Search uses tabu lists and other strategies
– Other variations are: Path Relinking, Randomization, Restarting
FACT: Escape directions are set by a priori rules
Goal of this work
Class of problems:
1. Take a few instances
2. Consider the local optima
3. Learn
4. Given another instance
5. Use the knowledge to tackle it by using “smart” constraints
General framework for any class of problems
Outline
– How to learn the constraints
– How to apply them
– Results
– Conclusions
HOW TO LEARN THE CONSTRAINTS
How to learn the constraints
– Collect many local optima from instances of the same class of problems
– For each local optimum A_i, consider the local optima nearby, B_k (k in N(i)), forming pairs (A_i, B_k)
– Denote each pair with ‘-’ if the objective function improves from A_i to B_k, ‘+’ otherwise
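As a concrete illustration of this step, here is a minimal sketch (in Python; the helper names `nearby` and `objective` are assumptions, not the authors' code) of how the labeled pairs could be collected:

```python
def build_pair_dataset(local_optima, nearby, objective):
    """Collect labeled pairs (A_i, B_k) from a set of local optima.

    local_optima -- the local optima A_i gathered from training instances
    nearby       -- function returning the local optima B_k close to A_i
    objective    -- objective function to be minimized
    """
    pairs = []
    for A in local_optima:
        for B in nearby(A):
            # '-' means the objective improves when moving from A to B
            label = '-' if objective(B) < objective(A) else '+'
            pairs.append((A, B, label))
    return pairs
```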
How to learn the constraints
[Figure: initial local optima A_1–A_4, each linked to nearby local optima B_1–B_12; every pair (A_i, B_k) is labeled ‘-’ (improving) or ‘+’ (non-improving)]
Example: CTAP
Constrained Task Allocation Problem (CTAP):
– Assign m tasks to n CPUs, minimizing the total cost
Costs:
– Fixed cost: if CPU j is used, we pay S_j
– Communication cost: if tasks p and q are on different CPUs, we pay c(p,q)
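To make the cost structure concrete, below is a hedged sketch of the CTAP objective under these two cost terms; the data structures (`assignment`, `fixed_cost`, `comm_cost`) are illustrative assumptions:

```python
def ctap_cost(assignment, fixed_cost, comm_cost):
    """Total CTAP cost of an assignment.

    assignment -- dict: task -> CPU it is assigned to
    fixed_cost -- dict: CPU j -> fixed cost S_j, paid if CPU j is used
    comm_cost  -- dict: (task p, task q) -> communication cost c(p, q)
    """
    used_cpus = set(assignment.values())
    total = sum(fixed_cost[j] for j in used_cpus)   # fixed costs of the CPUs in use
    for (p, q), c in comm_cost.items():             # pay c(p, q) only if p and q are split
        if assignment[p] != assignment[q]:
            total += c
    return total
```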
Example: CTAP
Suppose S_2 > S_3. Consider this move:
Before: CPU1 = {T1, T2, T3, T4}, CPU2 = {T5, T6}, CPU3 = {}
After: CPU1 = {T1, T2, T3, T4}, CPU2 = {T5}, CPU3 = {T6}
This move is unlikely to be performed because:
1. We would introduce the fixed cost S_3
2. We would introduce the communication cost c(5,6)
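Building on the `ctap_cost` sketch above, the myopic effect of a single relocation (here, moving T6 alone, which adds S_3 and c(5,6)) could be evaluated as a cost delta; this helper is only an illustration:

```python
def move_delta(assignment, task, new_cpu, fixed_cost, comm_cost):
    """Change in total CTAP cost if `task` is relocated to `new_cpu`."""
    new_assignment = dict(assignment)
    new_assignment[task] = new_cpu
    return (ctap_cost(new_assignment, fixed_cost, comm_cost)
            - ctap_cost(assignment, fixed_cost, comm_cost))
```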
Example: CTAP
Suppose S_2 > S_3. Consider this move:
Before: CPU1 = {T1, T2, T3, T4}, CPU2 = {T5, T6}, CPU3 = {}
After: CPU1 = {T1, T2, T3, T4}, CPU2 = {T5}, CPU3 = {T6}
But at the next move, we could move T5 too.
We want to learn rules like: “if there is an empty CPU y that can accommodate the tasks assigned to CPU x, and it has a smaller fixed cost [condition on the local optimum], move the tasks from x to y [condition on the pair of local optima]”
How to learn the constraints
A rule R_t is a pair of conditions (F_t, G_t).
R_t has to be applied at a local optimum L and has the following form: “If L satisfies condition F_t, then go towards a solution S such that (L, S) satisfies condition G_t”
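The rule structure can be pictured as a pair of callables; the interface below is our own sketch, not the authors' representation:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    """A rule R_t = (F_t, G_t) as described above."""
    F: Callable[[Any], bool]        # F_t(L): does the rule apply at local optimum L?
    G: Callable[[Any, Any], bool]   # G_t(L, S): does the pair (L, S) satisfy the target condition?

def applicable_rules(rules, L):
    """Rules whose trigger condition F_t is satisfied by the local optimum L."""
    return [r for r in rules if r.F(L)]
```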
How to learn the constraints
[Figure: the same diagram of pairs (A_i, B_k); the selected local optima are marked F_t = 1 and some of their ‘-’ pairs are marked G_t = 1]
Finding a rule R_t = finding a subset of the initial local optima such that:
1. they can be distinguished from the other local optima through condition F_t
2. if F_t is satisfied, then there are ‘-’ pairs satisfying condition G_t
Mathematical Model for 1 rule
[Figure: the same diagram of pairs, with the chosen local optima marked F_t = 1 and the targeted ‘-’ pairs marked G_t = 1]
Constraints on F and G: F (or G) = 1 and F (or G) = 0 must be the outputs of a binary classifier
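One way to obtain such binary outputs is to fit an off-the-shelf classifier on features of the local optima. The sketch below uses a scikit-learn decision tree, which is purely our assumption; the slide only requires that F (and G) behave as binary classifiers, and `features` is a hypothetical feature-extraction function:

```python
from sklearn.tree import DecisionTreeClassifier

def learn_F(selected_optima, other_optima, features):
    """Fit F_t as a binary classifier separating the selected local optima.

    selected_optima -- local optima chosen by the model (F_t = 1)
    other_optima    -- remaining local optima (F_t = 0)
    features        -- function mapping a local optimum to a numeric feature vector
    """
    X = [features(L) for L in selected_optima] + [features(L) for L in other_optima]
    y = [1] * len(selected_optima) + [0] * len(other_optima)
    clf = DecisionTreeClassifier(max_depth=3)   # a depth limit keeps the condition readable
    clf.fit(X, y)
    return lambda L: bool(clf.predict([features(L)])[0])
```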
USE THE CONSTRAINTS TO ENHANCE THE SEARCH
Output of learning
A list of rules (F_1, G_1), (F_2, G_2), …
Each rule reads: if L satisfies condition F_t, then go towards a solution S such that (L, S) satisfies condition G_t
Enforcing the constraints
When a local optimum L satisfies F_t:
– ESCAPE: run a Tabu Search with objective max G_t(L, S), as long as G_t < 0
  – If we can’t satisfy G_t after maxSteps: unsuccessful escape
– EXPLORATION: once G_t(L, S) > 0, run a Tabu Search on the real objective function with the constraint G_t(L, S) > 0, as long as (1) Value(S) ≥ Value(L) and (2) Step < maxSteps
  – If Value(S) < Value(L): successful escape
  – If maxSteps is reached: unsuccessful escape
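Read as pseudocode, the flowchart could look like the following sketch; `tabu_step` is a placeholder for one move of the authors' tabu search, and `g(L, S)` is assumed to be a numeric degree of satisfaction of G_t (positive once satisfied, as suggested by “max G_t(L, S)”):

```python
def smart_escape(L, g, objective, tabu_step, max_steps):
    """Try to escape from local optimum L under a learned rule.

    g          -- g(L, S): numeric satisfaction of G_t (positive once satisfied)
    objective  -- real objective function, to be minimized
    tabu_step  -- tabu_step(S, obj, feasible): one tabu-search move (placeholder)
    Returns a better solution on success, or None on an unsuccessful escape.
    """
    S, step = L, 0
    # ESCAPE phase: tabu search with objective max G_t(L, S), while G_t < 0.
    while g(L, S) < 0:
        if step >= max_steps:
            return None                                      # can't satisfy G_t after maxSteps
        S = tabu_step(S, lambda x: -g(L, x), feasible=None)  # maximize g by minimizing -g
        step += 1
    # EXPLORATION phase: tabu search on the real objective, constrained to G_t(L, S) > 0,
    # while Value(S) >= Value(L) and step < maxSteps.
    while objective(S) >= objective(L) and step < max_steps:
        S = tabu_step(S, objective, feasible=lambda x: g(L, x) > 0)
        step += 1
    return S if objective(S) < objective(L) else None        # successful vs. unsuccessful escape
```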
EXPERIMENTS AND RESULTS
Problem Set
108 instances of CTAP – A. Lusa and C.N. Potts (2008)
Experiment 1 – better local optimum?
“Which is more effective at escaping from a local optimum: tabu search or smart escape?”
For 30 times:
– Find 1 rule (F_1, G_1) using 30% of the local optima
– For each local optimum L of the remaining local optima (70%):
  – Try to escape from L using (a) smart constraints and (b) a simple tabu search
  – Then see in which “valley” we end up, through a local search
Experiment 1 – Results
Accuracy = 81.58%
Which one yields the greatest improvement from the initial local optimum to the final local optimum?
Experiment 2 – better search?
Compare the following:
– GRASP + tabu search (max non-improving moves = m)
– GRASP + smart search (with 1 or with 2 rules)
Whenever a local optimum is found:
– If a suitable rule is available, apply the corresponding constraint with maxSteps = m
– Otherwise, run a tabu search (max moves = m)
Run 50 times on 72 instances, and record the best solution found
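A hedged sketch of one run of the smart-search variant described above; every component (`grasp_construct`, `local_search`, `tabu_search`, `smart_escape`, `budget_left`) is passed in as a placeholder rather than implemented here:

```python
def smart_search_run(instance, rules, m, grasp_construct, local_search,
                     tabu_search, smart_escape, budget_left):
    """One GRASP + smart-search run: at each local optimum, apply a learned
    rule if one is triggered, otherwise fall back to a plain tabu search."""
    L = local_search(grasp_construct(instance))
    best = L
    while budget_left():
        triggered = [r for r in rules if r.F(L)]
        if triggered:
            # a suitable rule is available: apply its constraint with maxSteps = m
            S = smart_escape(L, triggered[0], m)
        else:
            # otherwise, run a tabu search with max moves = m
            S = tabu_search(L, m)
        L = S if S is not None else local_search(grasp_construct(instance))
        if instance.objective(L) < instance.objective(best):
            best = L
    return best
```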
Experiment 2 – Results
[Charts: comparison to Tabu Search; new best known solutions]
Additional experiments on the Matrix Bandwidth Minimization Problem
– This problem is equivalent to labeling the vertices of an undirected graph so that the maximum difference between the labels of any pair of adjacent vertices is minimized
– We considered the data set of Martí et al. (2001), which is composed of 126 instances
– A simple tabu search performs well on 115 instances, and poorly on 11
– We used 30 of the easy instances as the training set and the 11 hard ones as the test set
– For 50 times, for each test instance:
  – Generate a random solution
  – Run a regular Tabu Search
  – Run a Smart Tabu Search (Data Mining Driven Tabu Search – DMDTS)
  – Record the number of wins, ties, and losses
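For reference, a minimal sketch of the bandwidth objective described in the first bullet above; the graph representation (label dict plus edge list) is an assumption:

```python
def bandwidth(labels, edges):
    """Bandwidth of a vertex labeling.

    labels -- dict: vertex -> integer label (a permutation of 1..n)
    edges  -- iterable of (u, v) pairs of adjacent vertices
    Returns the maximum |label(u) - label(v)| over all edges, the quantity
    the Matrix Bandwidth Minimization Problem asks to minimize.
    """
    return max(abs(labels[u] - labels[v]) for u, v in edges)
```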
Additional experiments on the Matrix Bandwidth Minimization Problem
[Chart: wins vs. losses per instance]
7 wins, 1 loss, 3 ties
CONCLUSIONS
Conclusions
We showed that:
– It is possible to learn offline from other instances of the same class of problems
– It is possible to effectively exploit this knowledge by dynamically introducing guiding constraints during a tabu search
Research Opportunities
– Improve the learning part (heuristic algorithm)
– Improve constraint enforcement
– Apply this idea to other neighborhood searches
– Explore the potential of this idea on other problems
Thank you for your attention