Balance and Filtering in Structured Satisfiability Problems
Henry Kautz, University of Washington
Joint work with Yongshao Ruan (UW), Dimitris Achlioptas (MSR), Carla Gomes (Cornell), Bart Selman (Cornell), Mark Stickel (SRI)
CORE – UW, MSR, Cornell
Speedup Learning
- Machine learning historically considered:
  - Learning to classify objects
  - Learning to search or reason more efficiently ("speedup learning")
- Speedup learning disappeared in the mid-90's: last workshop in 1993, last thesis in 1998
- What happened?
  - EBL (without generalization) "solved": rel_sat (Bayardo), GRASP (Silva 1998), Chaff (Malik 2001) – 1,000,000-variable verification problems
  - EBG too hard
  - Algorithmic advances outpaced any successes
Alternative Path
- Predictive control of search and reasoning:
  - Learn a statistical model of the behavior of a problem solver on a problem distribution
  - Use the model as part of a control strategy to improve the future performance of the solver
- Synthesis of ideas from:
  - Phase transition phenomena in problem distributions
  - Decision-theoretic control of reasoning
  - Bayesian modeling
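The control loop described on this slide can be sketched in a few lines. This is a minimal illustration only: the feature (clause/variable ratio), the threshold model, and the policy table are all hypothetical stand-ins for the statistical models the talk actually builds.

```python
# Sketch of predictive control of a solver: measure static features,
# predict hardness, and pick a resource-allocation policy accordingly.
# All names and numbers here are illustrative, not from the paper.

def extract_static_features(instance):
    """Hypothetical static features measured before solving."""
    n_clauses, n_vars = instance
    return {"ratio": n_clauses / n_vars}

def predict_hardness(features, critical=4.25):
    """Toy model: instances near the 3-SAT critical ratio are predicted hard."""
    return "hard" if abs(features["ratio"] - critical) < 0.5 else "easy"

def choose_policy(prediction):
    """Map predicted hardness to a (made-up) restart/cutoff policy."""
    return {"easy": {"restarts": 0, "cutoff": 10_000},
            "hard": {"restarts": 20, "cutoff": 1_000}}[prediction]

instance = (1700, 400)   # 1700 clauses over 400 variables -> ratio 4.25
policy = choose_policy(predict_hardness(extract_static_features(instance)))
print(policy)            # a restart-heavy policy for a near-critical instance
```

In the real system the threshold model would be replaced by a learned statistical (e.g. Bayesian) model over many static and dynamic features.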
Big Picture
[Diagram: Problem Instances → Solver → runtime; static and dynamic features feed Learning / Analysis, which produces a Predictive Model driving resource allocation / reformulation and control / policy]
Case Study: Beyond 4.25
[Diagram: Problem Instances → Solver → runtime; static features feed Learning / Analysis, which produces a Predictive Model]
Phase Transitions & Problem Hardness
- Large and growing literature on random problem distributions
  - Peak in problem hardness associated with a critical value of some underlying parameter
  - 3-SAT: clause/variable ratio = 4.25
- Using the measured parameter to predict the hardness of a particular instance is problematic!
  - The random distribution must be a good model of the actual domain of concern
- Recent progress on more realistic random distributions...
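A random 3-SAT instance at a given clause/variable ratio is easy to sample, which is how the hardness peak at 4.25 is studied empirically. A minimal sketch (DIMACS-style signed-integer literals; the generator and its parameters are illustrative):

```python
import random

def random_3sat(n_vars, ratio=4.25, seed=0):
    """Sample a random 3-SAT instance at a given clause/variable ratio.
    Each clause is a tuple of 3 literals over distinct variables; a
    positive int means the variable, a negative int its negation."""
    rng = random.Random(seed)
    n_clauses = round(ratio * n_vars)
    clauses = []
    for _ in range(n_clauses):
        variables = rng.sample(range(1, n_vars + 1), 3)  # 3 distinct variables
        clauses.append(tuple(v if rng.random() < 0.5 else -v
                             for v in variables))
    return clauses

cnf = random_3sat(100)
print(len(cnf))   # 425 clauses: the empirically hardest region for 100 variables
```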
Quasigroup Completion Problem (QCP)
- NP-complete
- Its structure is similar to that of real-world problems: tournament scheduling, classroom assignment, fiber-optic routing, experiment design, ...
- Start with an empty grid, place colors randomly
- Generates a mix of sat and unsat instances
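The "place colors randomly" step can be sketched with simple rejection sampling. This is a hypothetical simplification: real QCP generators also apply incremental arc consistency while placing colors, which is one source of the generator bias discussed later.

```python
import random

def random_qcp(n, frac_preassigned, seed=0, max_tries=100_000):
    """Sketch of a QCP generator: fill a fraction of an n x n grid with
    colors, rejecting placements that repeat a color in a row or column
    (the Latin-square constraints). None marks an unassigned cell."""
    rng = random.Random(seed)
    grid = [[None] * n for _ in range(n)]
    placed, target = 0, int(frac_preassigned * n * n)
    for _ in range(max_tries):
        if placed == target:
            break
        r, c, color = rng.randrange(n), rng.randrange(n), rng.randrange(n)
        if grid[r][c] is None and \
           all(grid[r][j] != color for j in range(n)) and \
           all(grid[i][c] != color for i in range(n)):
            grid[r][c] = color
            placed += 1
    return grid

grid = random_qcp(10, 0.42)  # 42% pre-assignment: the critically constrained region
```

The completed instance is then asked: can the remaining cells be filled so every color appears exactly once per row and column? Depending on the random pre-assignment, the answer may be yes or no, which is why this generator mixes sat and unsat instances.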
Phase Transition
[Complexity graph: fraction of unsolvable cases vs. fraction of pre-assignment (20%, 42%, 50%); almost-all-solvable (underconstrained) area, critically constrained area at the phase transition around 42%, almost-all-unsolvable (overconstrained) area]
Quasigroup With Holes (QWH)
- Start with a solved problem, then punch holes
- Generates only SAT instances
- Can be used to test incomplete solvers
- Hardness peak at a phase transition in the size of the backbone (Achlioptas, Gomes, & Kautz 2000)
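The "punch holes" construction can be sketched as follows. For illustration we start from the cyclic Latin square L[i][j] = (i + j) mod n rather than a randomly sampled solved square, which the actual generator uses; everything else about the sketch is the same idea.

```python
import random

def random_qwh(n, n_holes, seed=0):
    """Sketch of a QWH generator: take a solved Latin square (here the
    trivial cyclic one, for illustration) and punch n_holes holes at
    random. None marks a hole to be re-completed by the solver."""
    rng = random.Random(seed)
    grid = [[(i + j) % n for j in range(n)] for i in range(n)]
    cells = [(i, j) for i in range(n) for j in range(n)]
    for r, c in rng.sample(cells, n_holes):
        grid[r][c] = None
    return grid

puzzle = random_qwh(5, 8)
# Satisfiable by construction: the original square is always a completion,
# which is why QWH generates only SAT instances.
```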
New Phase Transition in Backbone
[Plot: % of backbone and computational cost vs. % holes]
Easy-Hard-Easy Pattern in Local Search
[Plot: computational cost of Walksat vs. % holes, for orders 30, 33, 36; underconstrained area and "over"constrained area marked]
Are we ready to predict run times?
Problem: high variance [run-time distribution plotted on a log scale]
Deep Structural Features
Hardness is also controlled by the structure of the constraints, not just the fraction of holes:
- Rectangular pattern (Hall 1945) – tractable
- Aligned pattern – tractable (new result!)
- Balanced pattern – very hard
Random versus Balanced
[Plot: run-time distributions for balanced vs. random hole patterns]
Random vs. Balanced (log scale)
[Plot: run-time distributions for balanced vs. random hole patterns, log scale]
Morphing Balanced and Random (order 33)
Considering Variance in Hole Pattern (order 33)
Time on a Log Scale (order 33)
Effect of Balance on Hardness
- Balanced patterns yield (on average) problems that are two orders of magnitude harder than random patterns
- Expected run time decreases exponentially with the variance in the number of holes per row or column
- Same pattern (different constants) for DPLL!
- At the extreme of high variance (the aligned model) one can prove that no hard problems exist
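The balance measure used here, variance in the number of holes per row (or column), is easy to compute. A minimal sketch on two hypothetical order-4 hole patterns:

```python
from statistics import pvariance

def holes_per_row_variance(grid):
    """Variance in the number of holes (None cells) per row; the slide's
    claim is that expected run time falls exponentially as this grows."""
    return pvariance([sum(cell is None for cell in row) for row in grid])

# Two illustrative order-4 hole patterns, 8 holes each:
balanced = [[None, None, 1, 2],   # exactly 2 holes in every row -> variance 0
            [0, None, None, 3],
            [1, 2, None, None],
            [None, 3, 0, None]]
aligned = [[None] * 4,            # all 8 holes packed into two rows -> variance 4
           [None] * 4,
           [0, 1, 2, 3],
           [1, 2, 3, 0]]
print(holes_per_row_variance(balanced), holes_per_row_variance(aligned))
```

Zero variance (perfectly balanced holes) sits at the hard extreme; the aligned pattern, at the high-variance extreme, is provably easy.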
Morphing Random and Rectangular (order 36)
Morphing Random and Rectangular (order 33) [plot annotation: artifact of Walksat]
Morphing Balanced, Random, and Rectangular
[Plot: time in seconds (0.1–100, log scale) vs. variance (0–20); order 33]
Intuitions
- In unbalanced problems it is easier to identify the most critically constrained (backbone) variables and set them correctly
Are we done? Not yet...
- Observation 1: While few unbalanced problems are hard, quite a few balanced problems are easy
- To do: find additional structural features that predict hardness
  - Introspection
  - Machine learning (Horvitz et al. UAI 2001)
- Ultimate goal: accurate, inexpensive prediction of the hardness of real-world problems
Are we done? Not yet...
- Observation 2: Significant differences in the SAT instances in the hardest regions for the QCP and QWH generators
[Plots: QWH vs. QCP (sat only)]
Biases in Generators
- An unbiased SAT-only generator would sample uniformly at random from the space of all SAT CSP problems
- Practical CSP generators:
  - Incremental arc consistency introduces dependencies
  - Hard to formally model the distribution
- QWH generator:
  - Clean formal model
  - Slightly biased toward problems with many solutions
- Adding balance makes small, hard problems
[Plots: balanced QCP, balanced QWH, random QCP, random QWH]
Conclusions
- One small part of an exciting direction for improving the power of search and reasoning algorithms
- Hardness prediction can be used to control solver policy:
  - Noise level (Patterson & Kautz 2001)
  - Restarts (Horvitz et al. (CORE team) UAI 2001)
- Lots of opportunities for cross-disciplinary work:
  - Theory
  - Machine learning
  - Experimental AI and OR
  - Reasoning under uncertainty
  - Statistical physics