An Empirical Study of the Performance of Preprocessing and Look-ahead Techniques for Solving Finite Constraint Satisfaction Problems

An Empirical Study of the Performance of Preprocessing and Look-ahead Techniques for Solving Finite Constraint Satisfaction Problems. July 30, 2003. Zheying Jane Yang, Constraint Systems Laboratory, Department of Computer Science and Engineering, University of Nebraska-Lincoln. Under the supervision of Prof. B.Y. Choueiry.

Outline: Motivation & results; Background; Experimental design and empirical study; Results and analysis; Conclusions & relation to previous work; Summary of contributions; Future work.

1. Motivation. CSPs are used to model NP-complete decision problems, so (exponential) search is necessary. Search is improved with preprocessing algorithms, which remove inconsistent combinations prior to search, and look-ahead algorithms, which remove inconsistencies during search. [Pipeline: CSP → preprocessing → smaller CSP → search with look-ahead → solution(s).]

Algorithms studied & goal. Several competing algorithms exist. Preprocessing (removes inconsistencies prior to search): arc consistency (AC3, AC2001); neighborhood inverse consistency (NIC), which requires search. Look-ahead (filters the search space during search): forward checking (FC); maintaining arc consistency (MAC). Controversies exist about their relative performance. Our goal is to characterize empirically the relative performance of combinations of preprocessing and look-ahead schemes as a function of the CSP's constraint probability & tightness.

Current beliefs & our results. Our results specify when the following claims hold and when they do not.
Preprocessing, AC3 vs AC2001: AC2001 is better (Bessière & Régin 2001).
Preprocessing, AC vs NIC: NIC is better (Freuder & Elfe 1996).
Look-ahead, MAC vs FC: FC is better (Haralick & Elliott 1980; Nadel 1989); MAC is much better (Sabin & Freuder 1994; Bessière & Régin 1996); MAC is the winner on large sparse CSPs and FC is the winner on dense CSPs (Grant 1997; Gent & Prosser 2000).
No winner performs extremely well on all types of CSPs.

2. Background

What is a CSP? A CSP is defined as a triple P = (V, D, C): V, a set of variables; D, for each variable, a finite set of possible values (its domain); C, a set of constraints restricting the values that the variables can take simultaneously. The goal is to find one solution or all solutions. [Example: variables X, Y, Z with domains {a, b}, {c, d}, {e, f} and constraints given in extension as sets of allowed pairs, e.g. {(b,c), (b,d)}, {(b,e), (a,f)}, {(c,e), (d,f)}.]
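
This representation can be sketched in Java (the thesis implementation is in Java, but the class and method names below are our own illustration, not the thesis code): variables are indices, each variable has a finite domain, and each binary constraint is stored in extension as its set of allowed value pairs.

import java.util.*;

// A minimal sketch of a binary CSP P = (V, D, C) with constraints in extension.
class BinaryCSP {
    final int n;                                  // |V|: variables are indices 0..n-1
    final List<Set<Integer>> domains;             // D: current domain of each variable
    // C: for each constrained pair (i, j) with i < j, the set of allowed value pairs
    final Map<List<Integer>, Set<List<Integer>>> allowed = new HashMap<>();

    BinaryCSP(int n) {
        this.n = n;
        domains = new ArrayList<>();
        for (int i = 0; i < n; i++) domains.add(new HashSet<>());
    }

    // Adds the constraint between variables i and j (expects i < j).
    void addConstraint(int i, int j, int[][] allowedPairs) {
        Set<List<Integer>> tuples = new HashSet<>();
        for (int[] t : allowedPairs) tuples.add(List.of(t[0], t[1]));
        allowed.put(List.of(i, j), tuples);
    }

    // One call counts as one constraint check (#CC in the experiments).
    boolean consistent(int i, int vi, int j, int vj) {
        Set<List<Integer>> tuples = allowed.get(List.of(Math.min(i, j), Math.max(i, j)));
        if (tuples == null) return true;          // i and j are not constrained
        return tuples.contains(i < j ? List.of(vi, vj) : List.of(vj, vi));
    }
}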

Solving CSPs with systematic backtrack (BT) search. Backtrack search is sound and complete, but exponential: it expands a partial solution and explores every combination systematically. The performance of search can be improved with (1) preprocessing (constraint filtering/propagation), which removes inconsistent combinations and so reduces the size of the problem and of the corresponding search space, (2) look-ahead (constraint filtering during search), (3) intelligent rather than chronological backtracking, and (4) variable & value ordering (static/dynamic, heuristic), plus other techniques such as symmetry and decomposition.
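A minimal sketch of chronological backtracking over the hypothetical BinaryCSP class above, assuming a static variable order and stopping at the first solution; it back-checks the current value against past variables only, with no filtering of future domains.

// Chronological backtracking (no look-ahead); assumes the BinaryCSP sketch above.
class Backtrack {
    static boolean solve(BinaryCSP csp, int[] assignment, int depth) {
        if (depth == csp.n) return true;                    // all variables assigned: first solution found
        for (int value : csp.domains.get(depth)) {          // try every value of the current variable
            boolean ok = true;
            for (int past = 0; past < depth && ok; past++)  // check against past variables only
                ok = csp.consistent(past, assignment[past], depth, value);
            if (ok) {
                assignment[depth] = value;
                if (solve(csp, assignment, depth + 1)) return true;
            }
        }
        return false;                                       // no value is consistent: backtrack chronologically
    }
}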

Solving a CSP. We study the performance of preprocessing, look-ahead, and hybrid search (hybrid search = preprocessing × look-ahead). Preprocessing enforces different levels of local consistency by deleting inconsistent values from the variable domains; algorithms: AC3, AC2001, NIC. AC3 and AC2001 guarantee the same consistency property but are different implementations (AC2001 uses additional data structures); NIC guarantees a stronger consistency than arc consistency in general, but requires search. With respect to the current variable being instantiated, the other variables are either past (already assigned) or future (not yet assigned). Look-ahead, following an assignment, removes from the domains of the future variables the values that are inconsistent with the current assignment; algorithms: FC and MAC. Forward checking (FC) checks forward each time it makes a new instantiation: whenever a variable is assigned a value, it removes the inconsistent values from the domains of the future variables; if a domain is wiped out, backtracking occurs.
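As an illustration of the look-ahead step just described, here is a sketch of the forward-checking filter, again over the hypothetical BinaryCSP class from the earlier sketch; a real FC solver would copy the future domains before calling it so they can be restored on backtracking.

// Forward-checking step: after assigning variable cur <- value, remove the
// inconsistent values from every future variable's domain; a wipe-out
// (empty future domain) means the caller must backtrack.
class ForwardChecking {
    static boolean filterFuture(BinaryCSP csp, int cur, int value) {
        for (int fut = cur + 1; fut < csp.n; fut++) {
            final int f = fut;
            csp.domains.get(f).removeIf(v -> !csp.consistent(cur, value, f, v));
            if (csp.domains.get(f).isEmpty()) return false;   // domain wiped out: backtrack
        }
        return true;                                          // all future domains still non-empty
    }
}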

Preprocessing a graph-coloring problem. [Figure: a three-variable coloring example with initial domains {R, G}, {G}, {R, G, B}; constraint propagation successively shrinks the domains, step by step, down to the singletons {R}, {G}, {B}.]
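The filtering illustrated on this example is what arc-consistency preprocessing does. Below is a minimal AC-3-style sketch, assuming the BinaryCSP class and java.util imports from the earlier sketch (AC2001 enforces the same property with extra bookkeeping to avoid redundant checks, and NIC enforces a stronger condition that itself requires search).

// AC-3-style preprocessing sketch (assumes the BinaryCSP sketch above).
// revise(i, j) deletes the values of i that have no support on j; whenever a
// domain shrinks, the arcs pointing into that variable are re-examined.
class AC3 {
    static boolean revise(BinaryCSP csp, int i, int j) {
        return csp.domains.get(i).removeIf(vi ->
            csp.domains.get(j).stream().noneMatch(vj -> csp.consistent(i, vi, j, vj)));
    }

    static boolean enforceArcConsistency(BinaryCSP csp) {
        Deque<int[]> queue = new ArrayDeque<>();
        for (List<Integer> c : csp.allowed.keySet()) {             // seed with both directions of every arc
            queue.add(new int[]{c.get(0), c.get(1)});
            queue.add(new int[]{c.get(1), c.get(0)});
        }
        while (!queue.isEmpty()) {
            int[] arc = queue.poll();
            if (revise(csp, arc[0], arc[1])) {
                if (csp.domains.get(arc[0]).isEmpty()) return false;   // wipe-out: the CSP has no solution
                for (List<Integer> c : csp.allowed.keySet()) {         // re-examine arcs into the revised variable
                    if (c.get(1) == arc[0]) queue.add(new int[]{c.get(0), arc[0]});
                    if (c.get(0) == arc[0]) queue.add(new int[]{c.get(1), arc[0]});
                }
            }
        }
        return true;
    }
}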

Solving the CSP using forward checking (FC). [Figure: a step-by-step FC trace on the graph-coloring example. Assigning a value to the first variable filters the domains of the future variables; when a future domain is wiped out, FC backtracks and tries the next value; the trace ends with a complete, consistent assignment — a solution.]
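For comparison with the FC trace above, here is a sketch of the MAC-AC3 look-ahead step, assuming the BinaryCSP and AC3 sketches given earlier; a real solver would save the domains before each assignment and restore them on backtracking.

import java.util.Set;

// MAC-AC3 look-ahead step: commit the assignment cur <- value by reducing the
// domain of cur to a singleton, then re-establish arc consistency on the
// network; false (a wipe-out) tells the caller to backtrack.
class MAC {
    static boolean lookAhead(BinaryCSP csp, int cur, int value) {
        csp.domains.get(cur).retainAll(Set.of(value));   // commit the assignment
        return AC3.enforceArcConsistency(csp);           // propagate over the future variables
    }
}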

3. Experimental design & empirical study

Algorithms tested (preprocessing × look-ahead): AC3, AC2001, NIC for preprocessing; FC, MAC-AC3 for look-ahead. Preprocessing: 5 algorithms; look-ahead: 3 algorithms; hybrid search: 7 combinations.

Working assumptions: only binary constraints are considered; search is restricted to finding the first solution and to chronological backtracking; least domain (LD) is used as the variable-ordering heuristic, applied dynamically; no value-ordering heuristic is used, as it is too costly in general.
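A sketch of the dynamic least-domain selection, assuming the BinaryCSP sketch above (the helper is hypothetical, not the thesis code): among the unassigned (future) variables, pick the one whose current domain is smallest.

// Dynamic least-domain (LD) variable ordering.
class VariableOrdering {
    static int selectNext(BinaryCSP csp, boolean[] assigned) {
        int best = -1;
        for (int v = 0; v < csp.n; v++)
            if (!assigned[v] && (best == -1
                    || csp.domains.get(v).size() < csp.domains.get(best).size()))
                best = v;
        return best;    // -1 when every variable is already assigned
    }
}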

Problems tested: random CSPs. We need many instances with similar characteristics in order to analyze the performance of search; real-world problems cannot be controlled through explicit parameters to enable statistical analysis of performance. Generated instances have connected constraint graphs and are guaranteed solvable.

Control parameters <n, d, C, t>. Number of variables n: we choose n = 50. Domain size d (uniform): we choose d = 10; thus the search space has size 10^50, relatively large. Constraint tightness t = (number of disallowed tuples) / (number of all possible tuples), uniform across constraints: we vary t = [0.1, 0.2, ..., 0.9] and t = [0.05, 0.1, ..., 0.95]. Number of constraints C, which determines the constraint probability p = C / Cmax with Cmax = n(n-1)/2: we vary C = [20, 490], corresponding to p = [0.024, 0.4], and report C = 30, 50, 80, 120, 130, 245, 340, 490.
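To make the parameter ranges concrete, a small self-contained calculation (our own illustration, not the thesis generator) of the constraint probability p for the reported values of C, and of how a tightness t translates into disallowed tuples per constraint.

// Worked example of the control parameters for n = 50, d = 10.
class Parameters {
    public static void main(String[] args) {
        int n = 50, d = 10;
        int cMax = n * (n - 1) / 2;                              // 1225 possible binary constraints
        for (int c : new int[]{30, 50, 80, 120, 130, 245, 340, 490})
            System.out.printf("C = %3d  ->  p = C/Cmax = %.3f%n", c, (double) c / cMax);
        double t = 0.3;                                          // example tightness
        long disallowed = Math.round(t * d * d);                 // 30 of the d*d = 100 possible pairs
        System.out.println("At t = " + t + ", each constraint forbids " + disallowed + " tuples");
    }
}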

Comparisons: evaluation criteria. Filtering effectiveness measures the reduction of CSP size; the number of constraint checks (#CC) measures filtering effort; the number of nodes visited (#NV) measures backtracking effort; CPU time [ms] measures overall performance. Since constraints are defined in extension, CPU time reflects #CC. For preprocessing we report filtering effectiveness, #CC, and CPU time; for hybrid search, #CC, #NV, and CPU time.

Sampling and code characteristics. Performance was evaluated by running each algorithm on 30 CSP instances with the same characteristics and computing the average & standard deviation of each evaluation criterion; approximately 6,000 CSP instances were generated. Implemented in Java: 21 classes, 4,000 lines of code. Experiments were carried out on PrairieFire.unl.edu.

4. Results and analysis Part I: Preprocessing Part II: Hybrids

Preprocessing: filtering effectiveness. NIC reduces the search space by a large amount.

Preprocessing results (» = better than, ≈ = comparable; probability ranges p < 0.05, 0.05 ≤ p < 0.1, 0.1 ≤ p < 0.2, p ≥ 0.2). Filtering effectiveness increases with p; across the ranges, NIC-MAC-x » NIC-FC » AC3 ≈ AC2001.

Preprocessing: #CC on sparse CSPs. CC(AC3) > CC(AC2001); CC(NICx) >> CC(AC3, AC2001).

Preprocessing: #CC on denser CSPs. NIC becomes prohibitive: when p = 0.2, all NICx variants are costly. NIC should never be combined with MAC.

Preprocessing results (» = better than, ≈ = comparable; ranges p < 0.05, 0.05 ≤ p < 0.1, 0.1 ≤ p < 0.2, p ≥ 0.2). Filtering effectiveness: increases with p; NIC-MAC-x » NIC-FC » AC3 ≈ AC2001. #CC: AC2001 » AC3 » NICx; never use MAC with NIC.

Preprocessing: CPU time When p > 0.2, NIC-x is too costly

Preprocessing results (» = better than, ≈ = comparable; ranges p < 0.05, 0.05 ≤ p < 0.1, 0.1 ≤ p < 0.2, p ≥ 0.2). Filtering effectiveness: increases with p; NIC-MAC-x » NIC-FC » AC3 ≈ AC2001. #CC: AC2001 » AC3 » NICx; never use MAC with NIC. CPU time: AC3 » AC2001 » NICx.

Preprocessing: summary — when to use what (ranges p < 0.05, 0.05 ≤ p < 0.1, 0.1 ≤ p < 0.2, p ≥ 0.2). ACx: preprocessing is not effective on sparse problems and becomes effective only on the denser ones. NIC-x: quite effective on the sparsest problems; still effective for 0.05 ≤ p < 0.1, but combining NIC with MAC is too costly; too costly on denser problems, avoid. AC2001 is not significantly better than AC3 and is not worth the extra data structures; in general, we disagree with Bessière and Régin. NIC-x offers powerful filtering but is too costly; use it on large problems when checking constraints is cheap.

Hybrids: #CC at p = 5%, 11%. When p < 0.05: ACx is not effective, ACx-FC are costly, NIC-FC-FC is OK, NIC-MAC-x is best. When p > 0.10: avoid MAC, stick with FC.

Hybrids results (» = better than, ≈ = comparable).
#CC & CPU time: NIC-MAC-ACx » NIC-FC-FC » AC-MAC-x » ACx-FC on the sparsest problems; NIC-FC-FC » NIC-MAC-ACx ≈ AC-MAC-x as p grows.
When to use what (ranges p < 0.05, 0.05 ≤ p < 0.10, 0.10 ≤ p < 0.15, 0.15 ≤ p < 0.2, p ≥ 0.2): FC — avoid FC on the sparsest problems, MAC dominates at low p, FC dominates MAC as p grows, avoid MAC on dense problems. NIC — NIC helps, and still helps at slightly higher p. ACx — useless on sparse problems.

Hybrids: #CC at p = 15%, 28%. As p increases: MAC deteriorates, NIC becomes expensive; use ACx-FC.

Hybrids results (» = better than, ≈ = comparable).
#CC & CPU time: NIC-MAC-ACx » NIC-FC-FC » AC-MAC-x » ACx-FC on the sparsest problems; NIC-FC-FC » NIC-MAC-ACx ≈ AC-MAC-x as p grows; ACx-FC » NIC-MAC-ACx on denser problems.
When to use what (ranges p < 0.05, 0.05 ≤ p < 0.10, 0.10 ≤ p < 0.15, 0.15 ≤ p < 0.2, p ≥ 0.2): FC — avoid FC on the sparsest problems, MAC dominates at low p, FC dominates MAC as p grows, avoid MAC on dense problems. NIC — NIC helps, still helps, then deteriorates; avoid NIC on dense problems. ACx — useless on sparse problems, starts to work at intermediate p, helps on dense problems.

Hybrids: #NV. At the phase transition, NIC and MAC do powerful filtering, but the influence of MAC is stronger.

Hybrids: summary (» = better than, ≈ = comparable; ranges p < 0.05, 0.05 ≤ p < 0.10, 0.10 ≤ p < 0.15, 0.15 ≤ p < 0.2, p ≥ 0.2).
#CC & CPU time: NIC-MAC-ACx » NIC-FC-FC » AC-MAC-x » ACx-FC on the sparsest problems; NIC-FC-FC » NIC-MAC-ACx ≈ AC-MAC-x at low p; ACx-FC » NIC-MAC-ACx on denser problems; on the intermediate ranges, NIC-MAC-ACx » AC-MAC-x and then AC-MAC-x » NIC-MAC-ACx.
#NV: NIC-MAC-ACx » AC-MAC-x » NIC-FC-FC » ACx-FC on sparse problems; AC-MAC-x » NIC-MAC-ACx » ACx-FC » NIC-FC-FC on denser ones.
When to use what: FC — avoid FC on the sparsest problems, MAC dominates at low p, FC dominates MAC as p grows, avoid MAC on dense problems. NIC — helps, still helps, then deteriorates; avoid NIC on dense problems. ACx — useless on sparse problems, starts to work at intermediate p, helps on dense problems.

5. Conclusions. Preprocessing — AC3 vs. AC2001: AC2001 is not significantly better than AC3 and is not worth the extra data structures; in general, we disagree with Bessière and Régin. NIC-x: powerful filtering, but too costly; use it on large problems when checking constraints is cheap. Look-ahead — MAC vs. FC: performance depends on constraint probability and tightness; MAC wins only at low p and high t; in general, we disagree with the results of Sabin & Freuder and Bessière & Régin.

Relation to previous work (I). NIC is better than AC (Freuder & Elfe 1996): the instances tested all had low probability (p < 0.05), a region where AC is ineffective. MAC is better than FC (Sabin & Freuder 1994): they tested CSPs with low probability (p = 0.018-0.09) and relatively high constraint tightness (t = 0.15-0.675); in our study, MAC is effective in this region but not outside it. MAC is better than FC (Bessière & Régin 1996): the instances tested were also in the region of low probability (p = 0.017, 0.024, 0.074, 0.08, 0.1, 0.12, 0.15), except for two instances of relatively high probability (p = 0.3 and 0.84), and those are only 2 instances.

Relation to previous work (II). Gent & Prosser 2000 questioned the validity of previous results on MAC; they concluded that MAC is the winner on large, sparse CSPs with tight constraints, while FC is the winner on dense CSPs with loose constraints. Grant 1997 showed that FC is the winner on small CSPs across the whole range of probabilities. All concluded that "a champion algorithm that performs extremely well on all types of problems does not exist." Our characterizations are more thorough and precise.

6. Summary of contributions: a random generator that guarantees solvability; an empirical evaluation of the performance of 7 combinations of preprocessing and look-ahead; uncovered the (restricted) validity conditions of previously reported results; summarized the best working conditions for preprocessing and look-ahead algorithms; developed a Java library with 7 hybrid algorithms.

7. Directions for future work: compare to other, less common filtering algorithms, e.g. SRPC, PC, SAC, Max-RPC (Debruyne & Bessière 2001); combine these preprocessing algorithms with intelligent backtrack search algorithms; validate the results on larger CSPs, real-world applications, and non-binary constraints; test and analyze the effect of the topology of the constraint network on the performance of search.

Acknowledgments. Dr. Berthe Y. Choueiry (advisor), Dr. Sebastian Elbaum, Dr. Peter Revesz, Ms. Catherine L. Anderson, Mr. Daniel Buettner, Ms. Deborah Derrick (proofreading), Mr. Eric Moss, Mr. Lin Xu, Ms. Yaling Zheng, Mr. Hui Zou.

THANK YOU!