An Algorithm for Multi-Criteria Optimization in CSPs

An Algorithm for Multi-Criteria Optimization in CSPs
Marco Gavanelli, University of Ferrara, Italy
15th ECAI, July 21-26, 2002, Lyon, France

Outline: Constraint Optimization Problems; Multi-criteria Optimization Problems; Current Approaches; New Method; Case studies: Multi-Knapsack and Randomly Generated Problems.

Constraint Optimization Problem. A COP is a CSP (X, D, C) with a cost function f to maximize, f: D1 × ... × DN → St, where (St, ≤) is a total order. An assignment A is an optimal solution of the COP iff it is a solution of the CSP and ¬∃A' s.t. f(A') > f(A). Total order among solutions: only the best assignment satisfying the constraints is considered a solution of the COP.

Branch & Bound: a method for solving COPs. It translates a COP into a sequence of CSPs:
1. repeat
2.   find a feasible assignment A for the CSP
3.   let f* = f(A)
4.   impose f > f*
5. until the search fails (no assignment with f > f* exists)
The imposed bounds can be considered as a set of NoGoods. Def. A nogood is a (partial) assignment A such that there is no unenumerated solution containing A.
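
A minimal runnable sketch of this loop in Python, using a brute-force generate-and-test solver in place of a real constraint solver (the helper names, the toy domains, and the example constraint are illustrative assumptions, not taken from the paper):

```python
from itertools import product

def find_solution(domains, constraints):
    """Return the first complete assignment satisfying every constraint, or None."""
    for values in product(*domains.values()):
        assignment = dict(zip(domains.keys(), values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

def branch_and_bound(domains, constraints, f):
    """Solve a COP as a sequence of CSPs: each solution adds the bound f > f*."""
    best = None
    constraints = list(constraints)            # local copy: bound constraints get added
    while True:
        a = find_solution(domains, constraints)
        if a is None:                          # search fails: the last solution is optimal
            return best
        best, f_star = a, f(a)
        constraints.append(lambda asg, b=f_star: f(asg) > b)   # impose f > f*

# Toy COP: A, B in 0..4, maximize f = A + B subject to A + 2B <= 6
domains = {"A": range(5), "B": range(5)}
constraints = [lambda a: a["A"] + 2 * a["B"] <= 6]
print(branch_and_bound(domains, constraints, lambda a: a["A"] + a["B"]))  # {'A': 4, 'B': 1}
```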

B&B as NoGood learning. The function f can be considered as a variable of the CSP, linked to the other variables by a constraint. E.g., A, B :: 0..4 with f(A,B) = A+B can be considered as F :: 0..8, F = A+B. If we know a feasible assignment (A←1, B←0), we can infer the nogoods {F←0}, {F←1}.
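
The same inference as a tiny Python sketch (the dictionary encoding of a nogood is an assumption made here for illustration, not notation from the paper):

```python
A, B = 1, 0                      # a feasible assignment found during search
F = A + B                        # the cost seen as a CSP variable linked by F = A + B
nogoods = [{"F": v} for v in range(F + 1)]   # every F <- v with v <= f(A,B) is a nogood
print(nogoods)                   # [{'F': 0}, {'F': 1}]
```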

Multi-criteria Optimization Problems. A MOP is a CSP (X, D, C) with a vector of functions f = (f1, f2, ..., fn) that "should be optimized at the same time". The user is not able to synthesize the functions into only one; usually, tradeoff solutions are considered more interesting, while extreme solutions are seldom accepted. In most cases there is not a single optimal point.

Non-Dominated Frontier. In a MOP, the concept of a better solution turns into the concept of domination: X ≤d Y ⇔ ∀ k = 1..n, Xk ≤ Yk. A solution of the CSP is Pareto-optimal, or non-dominated, iff ¬∃A' s.t. f(A) <d f(A'). Only points on the non-dominated frontier are interesting to the user. [Figure: criterion space with axes f1 and f2]
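
A small Python sketch of the dominance test (assuming all criteria are maximized; the helper names are illustrative, not the paper's):

```python
def dominates(y, x):
    """y dominates x (maximization): y >= x on every criterion and > on at least one."""
    return all(yk >= xk for yk, xk in zip(y, x)) and any(yk > xk for yk, xk in zip(y, x))

def non_dominated(points):
    """Filter a list of criterion-space vectors down to its Pareto-optimal points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

print(non_dominated([(3, 1), (2, 2), (1, 3), (1, 1)]))   # (1, 1) is dominated by (2, 2)
```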

Current Approaches: Models. Transform the partial order into a total order by introducing assumptions: hierarchies, linear combinations, distance from the ideal point, interactive methods (MP). Methods that require properties of the problem structure: linear problems, continuous/differentiable constraints/functions.

Current Approaches: Methods. Incomplete methods: Tabu Search, Genetic Algorithms, ... van Wassenhove-Gelders [WG80]: split the criterion space into strips and optimize only one function:
1. repeat
2.   B&B: maximize f1, obtaining a solution A
3.   NoGood: impose f2 > f2(A)
4. until the search fails
[WG80] L.N. van Wassenhove and L.F. Gelders, "Solving a bicriterion scheduling problem", European Journal of Operational Research, vol. 4, 1980, pp. 42-48.
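
A sketch of the van Wassenhove-Gelders loop for two maximization criteria, reusing the hypothetical branch_and_bound helper sketched earlier (illustrative only; ties on f1 would need an extra lexicographic pass on f2 to avoid weakly dominated points):

```python
def wassenhove_gelders(domains, constraints, f1, f2):
    """Enumerate a non-dominated frontier for two criteria by repeatedly restarting
    B&B on f1, each time adding the nogood f2 > f2(A) for the solution A just found."""
    frontier = []
    constraints = list(constraints)
    while True:
        a = branch_and_bound(domains, constraints, f1)   # maximize f1 under current bounds
        if a is None:                                    # no feasible point left
            return frontier
        frontier.append((f1(a), f2(a)))
        bound = f2(a)
        constraints.append(lambda asg, b=bound: f2(asg) > b)   # NoGood on f2
```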

van Wassenhove-Gelders: example and restarts. [Figures: B&B search tree maximizing f1 and the criterion space (f1, f2); the search restarts after each non-dominated solution is found.]

van Wassenhove-Gelders: Limitations. Limitations: handles only 2 objective functions; restarts the search for each non-dominated solution found. Strengths: complete (finds the whole non-dominated frontier); general (no assumptions on the problem structure).

Multi-B&B. [Figure: search tree and criterion space (f1, f2) for the proposed multi-criteria B&B.]

Optimization NoGood: an extension of the concept of NoGood to multi-criteria optimization. Def. An Optimization NoGood is an assignment {F1 ← v1, ..., Fk ← vk} such that ∀ u1 ≤ v1, ..., uk ≤ vk, {F1 ← u1, ..., Fk ← uk} is a nogood. [Figure: criterion space (f1, f2) with the area dominated by the point (v1, v2) marked as forbidden.]
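
A naive Python sketch of a store of optimization nogoods, working directly in the criterion space and assuming maximization (the class and method names are illustrative; the next slides replace the linear scan with a quad-tree):

```python
def weakly_dominates(p, q):
    """p weakly dominates q: p >= q on every criterion (maximization)."""
    return all(pk >= qk for pk, qk in zip(p, q))

class OptimizationNoGoods:
    """Each recorded point (v1, ..., vk) forbids every criterion vector (u1, ..., uk)
    with ui <= vi for all i, exactly as in the definition above."""
    def __init__(self):
        self.points = []

    def record(self, point):
        # A stored point weakly dominated by the new one becomes redundant.
        self.points = [p for p in self.points if not weakly_dominates(point, p)]
        self.points.append(point)

    def is_forbidden(self, candidate):
        # Linear scan over the recorded points; a spatial index makes this logarithmic.
        return any(weakly_dominates(p, candidate) for p in self.points)

store = OptimizationNoGoods()
store.record((4, 1))
store.record((2, 3))
print(store.is_forbidden((3, 1)), store.is_forbidden((4, 2)))   # True False
```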

Propagation of optimization nogoods

Propagation of nogoods. Each time the check of optimization nogoods is activated, up to N of the recorded nogoods can reduce the domains. The problem reduces to finding if N points are in the forbidden area; use efficient spatial data structures.

Point Quad-Trees

Point Quad-Trees: Features. Advantages: access in O(log(#Opt NoGoods)); easily extendable to N dimensions (Oct-Trees, ...); insertion of new points in O(log(#Opt NoGoods)). Drawbacks: elimination of one point implies re-insertion of some of its children; efficiency depends on the balancing of the tree.

Point Quad-Trees for MOP. Points that dominate a node can only lie in one of its children; the dominated area is also one of the children. We only delete points when they become dominated, i.e., when we insert a dominating point.

2D Quad-Trees for MOP. In general, deleting a point means finding a new root for the sub-tree and re-arranging its children. In our case, the new root is the inserted point, and the children that need re-arrangement are dominated.
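
A Python sketch of a 2D point quad-tree supporting the two operations needed here: inserting a point and asking whether any stored point dominates a candidate (maximization, weak dominance). The code and names are illustrative, not the paper's implementation:

```python
class Node:
    """Point quad-tree node: children are indexed by the quadrant of the child point
    relative to this node's point, as (x >= point.x, y >= point.y)."""
    def __init__(self, point):
        self.point = point
        self.children = {(0, 0): None, (0, 1): None, (1, 0): None, (1, 1): None}

def insert(node, point):
    if node is None:
        return Node(point)
    q = (int(point[0] >= node.point[0]), int(point[1] >= node.point[1]))
    node.children[q] = insert(node.children[q], point)
    return node

def is_dominated(node, c):
    """True iff some stored point p satisfies p >= c componentwise."""
    if node is None:
        return False
    if node.point[0] >= c[0] and node.point[1] >= c[1]:
        return True
    for qx in (0, 1):
        for qy in (0, 1):
            # Quadrant (0, *) only holds points with x < node.x, so it can contain a
            # dominator of c only if c.x < node.x; symmetrically for the y coordinate.
            if (qx == 1 or c[0] < node.point[0]) and (qy == 1 or c[1] < node.point[1]):
                if is_dominated(node.children[(qx, qy)], c):
                    return True
    return False

root = None
for p in [(4, 1), (2, 3), (5, 0)]:
    root = insert(root, p)
print(is_dominated(root, (3, 1)), is_dominated(root, (5, 2)))   # True False
```

Deletion is left out of this sketch: in the scheme described above, when an inserted point dominates stored points, those points are removed and the inserted point becomes the new root of the affected sub-tree.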

Case Study: Multi-Knapsack Problem. p_i,j: profit of object j according to knapsack i; w_i,j: weight of object j according to knapsack i; c_i: capacity of knapsack i. Constraints: ∀i, Σ_j x_j w_i,j ≤ c_i, with x_j ∈ {0,1}. Objectives: max(f1, ..., fN), where f_i = Σ_j x_j p_i,j.
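
A small Python sketch that makes the model concrete on made-up data, enumerating the 0/1 assignments by brute force and filtering the criterion vectors down to the non-dominated ones (the instance values are invented for illustration, not the benchmark instances used in the paper):

```python
from itertools import product

profit = [[6, 5, 8, 9],     # p[i][j]: profit of object j according to criterion i
          [2, 7, 1, 6]]
weight = [[3, 4, 5, 6],     # w[i][j]: weight of object j in knapsack i
          [5, 2, 6, 3]]
capacity = [10, 9]          # c[i]
n_obj, n_knap = 4, 2

def dominated(p, q):        # q dominates p (maximization)
    return all(qk >= pk for qk, pk in zip(q, p)) and q != p

frontier = set()
for x in product((0, 1), repeat=n_obj):                    # x[j] in {0, 1}
    if all(sum(x[j] * weight[i][j] for j in range(n_obj)) <= capacity[i]
           for i in range(n_knap)):
        frontier.add(tuple(sum(x[j] * profit[i][j] for j in range(n_obj))
                           for i in range(n_knap)))
print(sorted(p for p in frontier if not any(dominated(p, q) for q in frontier)))
```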

Multi-Knapsack Problem

Multi-Knapsack

Multi-Knapsack > 2D

Randomly Generated Problems. Problems generated with parameters: number of variables N = 10; size of domains D = 15; constraint density P = 50%; constraint tightness Q varying from 10% to 90%. Linear objective functions with random weights; results averaged over 10 problems per setting.

Tightness (%)   [WG80]     PCOP
90              0.07       0.079
80              0.282      0.287
70              0.4922     0.4977
60              1.22       1.16
50              2.184      1.658
40              19.81      2.879
30              75.463     21.454
20              149.57     20.766
10              141.57     20.122

Randomly-Generated Problems. [Figure: Ratio vs. number of non-dominated solutions (≈ number of restarts); reference line for equal computation time.]

Conclusions & Future Work. An extension of B&B to multi-criteria optimization with quad-trees in CSP: complete (finds the whole non-dominated frontier); general (no assumptions on the functions, constraints, ...); handles N criteria. Future work: hybridization with methods from MP, local search, genetic algorithms, ...

Thanks for listening! Marco Gavanelli