1
Guided Local Search – CP Meets OR
Edward Tsang, CP-AI-OR '02
Constraint Satisfaction and Optimisation Group, University of Essex

Abstract: Guided Local Search is a meta-heuristic search method for optimisation. It was generalised from GENET, a neural network approach for constraint satisfaction. Guided Local Search has been successfully applied to a wide range of problems. In this talk, I shall explain the principles of Guided Local Search and its latest development, which makes it relatively insensitive to search parameters. I shall also explain its relationship with other general optimisation methods in both Operations Research and Constraint Programming.
2
Summary: GLS, CP+OR
[Overview diagram.] Hill Climbing (e.g. 2-Opt in TSP) sits at the base; meta-heuristic methods change its behaviour. From OR come the penalty-based methods: Lagrangian methods (continuous) and DLM for SAT (discrete optimisation). Tabu Search contributes soft taboos and aspiration; Simulated Annealing and Genetic Algorithms are fellow meta-heuristics, with GLS changing GA behaviour. From AI, a neural network for satisfiability (GENET) was generalised into GLS for optimisation.
3
Stochastic Methods: Motivation
- Complete methods suffer from combinatorial explosion.
- Many problems require optimisation.
- Stochastic methods suit partial constraint satisfaction problems: they may return near-solutions or near-optimal solutions.
- Requirement: spend as much time as one pleases.
- Stochastic methods satisfy these needs (e.g. HC, SA, Tabu Search, GA, NN, GLS).
4
Background: Local Search
Ingredients:
- Cost function
- Neighbourhood function
- Strategy for visiting neighbours, e.g. steepest ascent
Problems:
- Local optima
- Plateaux
- When to stop? Fine for satisfiability, but not for optimisation.
[Figure: a cost-function landscape marking a local maximum, the global maximum, a plateau and a neighbourhood.]
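To make the ingredients concrete, here is a minimal sketch of steepest-descent local search, the minimising twin of the steepest ascent mentioned above (the cost and neighbourhood functions are placeholders supplied per problem; this illustrates the scheme, not any particular implementation from the talk):

    def local_search(s, cost, neighbours):
        """Steepest descent: move to the best neighbour until no
        neighbour improves on the current solution."""
        while True:
            best = min(neighbours(s), key=cost, default=None)
            if best is None or cost(best) >= cost(s):
                return s  # local minimum (or edge of a plateau) reached
            s = best

It stops exactly where the slide warns: at a local optimum or a plateau.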
5
GENET: Neural Network for Constraint Satisfaction
- Build inhibitory connections.
- Let the network converge to solutions.
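GENET is only sketched on the slide; as a hedged illustration of the published scheme, here nodes are (variable, value) pairs and inhibitory connections are keyed by frozensets of two such pairs, an encoding assumed for this example:

    def genet_step(assignment, weights, domains):
        """One convergence sweep: each variable adopts the value whose node
        receives the least inhibition (weights are <= 0) from the 'on'
        nodes of the other variables."""
        for var, dom in domains.items():
            def input_to(val):
                return sum(weights.get(frozenset([(var, val), (o, assignment[o])]), 0)
                           for o in assignment if o != var)
            assignment[var] = max(dom, key=input_to)
        return assignment

    def genet_learn(violated_connections, weights):
        """At a converged non-solution, make every violated connection more
        inhibitory, so the current state stops being attractive."""
        for conn in violated_connections:
            weights[conn] = weights.get(conn, -1) - 1

This penalty-style learning rule is the seed that GLS generalises.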
6
GLS Overview
- Generalisation of GENET, from satisfiability to optimisation.
- Meta-heuristic method: it sits on top of local search methods and helps them escape local optima.
- Strategy: at a local optimum, change the objective function, making local optima non-optima; then continue with local search.
7
GLS: Augmented Cost Function
Identify solution features (e.g. the edges used in a tour); associate costs and penalties with features. Given a cost function g to minimise, GLS minimises the augmented cost function

    h(s) = g(s) + λ · Σ_i (p_i · I_i(s))

where λ is a parameter to GLS, I_i(s) = 1 if s exhibits feature i (0 otherwise), and p_i is the penalty for feature i, initialised to 0.
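A direct transcription of h into Python (g, the indicator functions, the penalties and λ are all supplied by the application; a sketch, not library code):

    def augmented_cost(s, g, indicators, penalties, lam):
        """h(s) = g(s) + lam * sum_i p_i * I_i(s).
        `indicators[i](s)` is I_i(s), returning True/False."""
        penalised = sum(penalties[i] for i, I in indicators.items() if I(s))
        return g(s) + lam * penalised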
8
The GLS Algorithm
Notation: I_i(s*) = 1 if s* exhibits feature i (0 otherwise); c_i = cost of feature i; p_i = penalty of feature i (initialised to 0).
Iterate local search. In a local minimum s*:
- select the features that maximise the utility util_i(s*) = I_i(s*) · c_i / (1 + p_i);
- increase their penalties (strengthening the "constraints");
- resume local search from the local minimum.
9
GLS on TSP
Local search: 2-opt moves.
Features: the N² possible edges; a feature's cost is the edge's distance. E.g. the tour [1,5,3,4,6,2] exhibits the features (edges) (1,5), (5,3), (3,4), (4,6), (6,2) and (2,1).

    λ = a · g(t*) / N

where a is a parameter to tune, within (0, 1]; t* is the first local minimum produced by local search and g(t*) its cost; N is the number of cities.
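The TSP instantiation in code (assuming a symmetric distance matrix `dist`, with edges as unordered city pairs; names are illustrative assumptions):

    def tour_edges(tour):
        """The features a tour exhibits: its edges."""
        n = len(tour)
        return {frozenset((tour[k], tour[(k + 1) % n])) for k in range(n)}

    def edge_cost(edge, dist):
        """A feature's cost is the length of the corresponding edge."""
        a, b = tuple(edge)
        return dist[a][b]

    def tsp_lambda(a, g_first_local_min, n_cities):
        """lambda = a * g(t*) / N, with a tuned within (0, 1]."""
        return a * g_first_local_min / n_cities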
10
Components in GLS
- Local search strategy: also needed in HC, SA and Tabu Search.
- Features and costs: sometimes come naturally from the cost function.
- Main parameter: λ. Experimental results are sometimes sensitive to λ.
- Our practice: λ = a · g(first local optimum).
- Question: how do we tune a (the λ-coefficient)?
11
GLS + Aspiration
Aspiration: if G(s) is better than the best so far, move to s even if H(s) is inferior (G: original cost function; H: augmented cost function).
Works for MAX-SAT and QAP, but not SAT.
Results generally improved at high λ values.
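The aspiration test as a sketch (function and parameter names are assumptions; G and H as defined on the slide):

    def accept_move(s, current, G, H, best_g):
        """Aspiration: accept s whenever G(s) beats the best cost found so
        far, even if the augmented cost H(s) is worse than H(current)."""
        if G(s) < best_g:
            return True            # aspiration move
        return H(s) < H(current)   # ordinary improving move on H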
12
GLS + Randomness
With probability Pr, make a random move.
Results improved on QAP at low λ values; no effect on GLS for SAT / MAX-SAT.
Randomness: when is it useful?
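A sketch of the randomised step (Pr, the neighbourhood and the move-picking function are assumptions; the point is only the coin flip):

    import random

    def next_state(s, neighbours, pick_gls_move, pr):
        """With probability pr take a uniformly random neighbour;
        otherwise take the usual GLS move."""
        if random.random() < pr:
            return random.choice(list(neighbours(s)))
        return pick_gls_move(s)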
13
GLS + Aspiration + Randomness
Result: performance is less sensitive to the λ value.
Aspiration should become a standard feature of GLS.
Randomness sometimes helps.
Where/when will they succeed?
14
Some GLS Applications
- Radio Link Frequency Assignment
- BT's workforce scheduling
- Quadratic assignment
- SAT / MAX-SAT
- Vehicle routing
- Logic programming (Melbourne, Singapore, Hong Kong)
- Train scheduling (King's College London)
- Bus scheduling (Leeds)
- Bin packing (University of Copenhagen)
15
Meta-heuristic Methods
[Diagram.] Hill Climbing (e.g. 2-Opt in TSP) at the base; above it, methods that change hill-climbing behaviour, mainly to escape local optima: Tabu Search, Simulated Annealing, Genetic Algorithms, and the penalty-based methods GLS and DLM.
16
GLS & Tabu Search TS is a class of algorithms
Various ways to manipulate Taboo List GLS is a more specific algorithm Penalties in GLS are Soft Taboos Taboos are normally hard constraints in TS GLS borrowed taboo list from TS GLS+ borrowed aspiration idea from TS Hybrid GLS+TS used in ILOG Dispatcher Friday, 18 January 2019 GLS: CP Meets OR
17
GLS & Filled Function Method
Minimise the (augmented) function h. At a local minimum x*, add a filled function f (a penalty), giving a new augmented function to minimise, h' = h + f.
18
Lagrangian Method
For continuous constrained optimisation: minimise f(x) subject to g_i(x) = 0.
Lagrangian function: F(x, λ) = f(x) + Σ_{i=1..n} λ_i · g_i(x), where the Lagrange multipliers λ_i are introduced.
Vary x to minimise F; escape local minima by varying λ.
Saddle point: F can neither be decreased by varying x, nor increased by varying λ.
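Written out in full (standard Lagrangian duality, reconstructed from the bullets above rather than taken from the talk):

    F(x,\lambda) = f(x) + \sum_{i=1}^{n} \lambda_i \, g_i(x),
    \qquad
    F(x^\ast,\lambda) \;\le\; F(x^\ast,\lambda^\ast) \;\le\; F(x,\lambda^\ast)
    \quad \text{for all } x,\ \lambda,

i.e. at the saddle point (x*, λ*), F cannot be decreased by moving x nor increased by moving λ.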
19
DLM in SAT
Lagrangian methods for discrete optimisation need many detailed adjustments, e.g. re-defining gradients, reducing λ, etc.; in general there is no guarantee of settling in a global optimum.
DLM defines a Lagrangian function for SAT:

    F(x, λ) = Σ_{i=1..n} (1 + λ_i) · U_i(x)

where U_i(x) indicates whether clause i is violated by x (1 if unsatisfied, 0 otherwise).
In SAT, the global minimum can be recognised (all clauses satisfied, F = 0); DLM exploits this, and so could Tabu Search and GLS.
Source: Shang & Wah, Journal of Global Optimization, 12, 1998, 61-99.
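The DLM objective for SAT in code (clauses as sets of signed integers, a common CNF convention and an assumption here; x maps each variable index to a Boolean):

    def unsat(clause, x):
        """U_i(x): 1 if clause i is unsatisfied under x, else 0.
        Literal v requires x[v] True; literal -v requires x[v] False."""
        sat = any(x[l] if l > 0 else not x[-l] for l in clause)
        return 0 if sat else 1

    def dlm_objective(clauses, lambdas, x):
        """F(x, lambda) = sum_i (1 + lambda_i) * U_i(x); zero exactly when
        every clause is satisfied, which is how the global minimum is
        recognised."""
        return sum((1 + lam) * unsat(c, x) for c, lam in zip(clauses, lambdas))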
20
GLS & Genetic Algorithms
[Chart: GGA results vs. GLS results on a performance scale.]
Guided Genetic Algorithm: a hybrid of GLS and GA.
Aims:
- to extend the domain of GLS;
- to improve the efficiency and effectiveness of GAs;
- to improve the robustness of GLS.
21
Using GLS Penalties in GGA
When the penalty of feature F is increased by k, add k to the relevant loci of the fitness template.
[Diagram: a chromosome alongside its fitness template.]
The fitness template affects crossover and mutation: a high value at a locus means instability, i.e. that gene is more likely to be changed.
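How a fitness template might bias the operators, as a purely illustrative sketch (the actual GGA operators are described in Lau & Tsang's papers; every name and rate here is an assumption made for the example):

    import random

    def mutate(chromosome, template, alleles, base_rate=0.05):
        """Each locus mutates with probability scaled by its template value:
        high-template (heavily penalised) loci are unstable, hence more
        likely to be changed."""
        return [random.choice(alleles)
                if random.random() < base_rate * (1 + t) else gene
                for gene, t in zip(chromosome, template)]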
22
Summary: GLS, CP+OR
[The overview diagram from slide 2, repeated in conclusion: Hill Climbing at the base, meta-heuristics from OR and AI changing its behaviour, with GLS linking the two.]
23
The End
[Diagram of the group's systems: ZDC; GLS for SAT, MAX-SAT, QAP.]
Credits: Edward Tsang, Chang Wang, Jim Doran, James Borrett, Andrew Davenport, Kangming Zhu, Chris Voudouris, Tung Leng Lau, John Ford, Patrick Mills, Richard Bradwell.
Funded by: University of Essex, EPSRC.