UBC March 2007 The Evergreen Project: The Promise of Polynomials to Boost CSP/SAT Techniques* Karl J. Lieberherr, Northeastern University, Boston. Joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart. Title inspired by a paper by Carla Gomes / David Shmoys.

UBC March Where we are Introduction Look-forward (look-ahead polynomials) Look-backward (superresolution) SPOT: how to use the look-ahead polynomials (look-forward) together with superresolution (look-backward).

UBC March Problem Snapshot SAT: classic problem in complexity theory. SAT & MAX-SAT solvers: work on CNFs (a multi-set of disjunctions). CSP: constraint satisfaction problem – each constraint uses a Boolean relation, e.g. the Boolean relation 1in3(x y z) is satisfied iff exactly one of its arguments is true. CSP & MAX-CSP solvers: work on CSP instances (a multi-set of constraints).

UBC March Related Work C. P. Gomes and D. B. Shmoys. The Promise of LP to Boost CSP Techniques for Combinatorial Problems. In Proceedings of the 4th International Workshop on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems (CP-AI-OR'02), 2002.

UBC March Gomes/Shmoys They use an LP relaxation to derive probabilities for how to set the variables. We use an averaging relaxation to derive probabilities for how to set the variables.

UBC March Gomes/Shmoys A central feature of their algorithm is that they maintain two different formulations: the CSP formulation and the LP formulation. A central feature of our algorithm (SPOT) is that it maintains two different formulations: the CSP formulation and the polynomial formulation.

UBC March Gomes/Shmoys The hybrid nature of their algorithm results from the combination of strategies for variable and value assignment. The hybrid nature of our algorithm (SPOT) results from the combination of strategies: the polynomial formulation is used for variable and value ordering and the CSP formulation for propagation and clause learning.

UBC March Gomes/Shmoys They use randomized restarts to reduce the variance in the search behavior. We restart after each conflict.

UBC March Gomes/Shmoys differences The CSP and LP formulations are comparable in length. The polynomial formulation is significantly (logarithmically) shorter than the CSP formulation.

UBC March Gomes/Shmoys differences The LP formulation must be suitably manually constructed from the CSP formulation. The polynomial formulation is derived automatically from the CSP formulation.

UBC March Introduction Boolean MAX-CSP(G) for rank d, G = set of relations of rank d. Input = bag of Constraint = CSP(G) instance; Constraint = Relation + set of Variable; Relation = int // relation number < 2^(2^d), in G; Variable = int. Output: a (0,1) assignment to the variables which maximizes the number of satisfied constraints. Example input: G = {22} of rank 3, H = three constraints 22: … (1in3 has number 22); M = {1 !2 !3 !4} satisfies all.
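
A small illustrative sketch (not from the talk) of how such relation numbers arise, assuming the truth-table encoding described on the Evergreen(3,2) slide: rows 000 to 111 in sorted order, with the i-th variable read as the i-th bit of the row index. The class and method names are hypothetical.

public class RelationNumber {
    // Relation number of "exactly k of the rank variables are true":
    // set the bit of every truth-table row (0 .. 2^rank - 1) that has exactly k ones.
    static int exactlyK(int rank, int k) {
        int number = 0;
        for (int row = 0; row < (1 << rank); row++) {
            if (Integer.bitCount(row) == k) {
                number |= (1 << row);
            }
        }
        return number;
    }

    public static void main(String[] args) {
        // rows 001, 010, 100 -> bits 1, 2, 4 -> 2 + 4 + 16 = 22, the 1in3 relation
        System.out.println(exactlyK(3, 1));
    }
}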

UBC March Variation MAX-CSP(G,f): Given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H. Example: G = {22} of rank 3. For a given instance H of 22-constraints: H is in MAX-CSP({22},?); what is the highest value for ?

UBC March The Game Evergreen(r,m) for Boolean MAX-CSP(G), r>1,m>0 Two players: They agree on a protocol P1 to choose a set of m relations of rank r. 1.The players use P1 to choose a set G of m relations of rank r. 2.Player 1 constructs a CSP(G) instance H with 1000 variables and gives it to player 2 (1 second limit). 3.Player 2 gets paid the fraction of constraints she can satisfy in H (100 seconds limit). 4.Take 10 turns (go to 1). How would you play this game intelligently?

UBC March Our approach by Example: SAT Rank 2 example, 9 constraints using the relations 14 = or(x y) and 7 = or(!x !y), e.g. 14: 1 2 = or(1 2), 7: 1 3 = or(!1 !3).

UBC March appmean = approximation of the mean (k variables true). Blurry vision. What do we learn from the abstract representation? Set 1/3 of the variables to true (maximize). The best assignment will satisfy at least 7/9 of the constraints. Very useful, but the vision is blurry in the “middle”; excellent peripheral vision.

UBC March Our approach by Example Given a CSP(G)-instance H and an assignment N which satisfies fraction f in H. Is there an assignment that satisfies more than f? YES (we are done) if abs_H(mb) > f; MAYBE otherwise, and the closer abs_H(mb) comes to f, the better. Is it worthwhile to set a certain literal k to 1 so that we can reach an assignment which satisfies more than f? YES (we are done) if, for H1 = H_{k=1}, abs_H1(mb1) > f; MAYBE, and the closer abs_H1(mb1) comes to f, the better; NO: UP or clause learning. abs_H = abstract representation of H.

UBC March Worked example (diagram): the instance H and its reduction H0 with their abstract representations; the maximum assignment satisfies 7/9, but away from the maximum bias the vision is blurry.

UBC March Worked example (diagram): the instance H and its reduction H1 with their abstract representations; the maximum assignment satisfies 7/9, clearly above 3/4, but away from the maximum bias the vision is blurry.

UBC March Worked example (diagram): H, H0 and H1 compared. The abstract representation guarantees 7/9 for H, 7/9 for H0 and 8/9 for H1; the guarantee NEVER GOES DOWN: DERANDOMIZATION.

UBC March The effect of n-map (diagram): two rank-2 instances related by an n-map, one over the relations 10: 1 = or(1) and 7: 1 2 = or(!1 !2), the other over 5: 1 = or(!1) and 13: 1 2 = or(1 !2); the abstract representation guarantees 3.75 of the 6 constraints, so 4 are satisfied.

UBC March First Impression The abstract representation (the look-ahead polynomials) seems useful for guiding the search. The look-ahead polynomials give us averages: the guidance can be misleading because of outliers. But how can we compute the look-ahead polynomials?

UBC March Where we are Introduction Look-forward Look-backward SPOT: how to use the look-ahead polynomials together with superresolution.

UBC March Look Forward Why? –To make informed decisions How? –Abstract representation based on look-ahead polynomials

UBC March Look-ahead Polynomial (Intuition) The look-ahead polynomial computes the expected fraction of satisfied constraints among all random assignments that are produced with bias p.

UBC March Consider an instance: 40 variables (1, …, 40), 1000 constraints (1in3), each of the form 22: i j k. Abstract representation: reduce the instance to the look-ahead polynomial 3p(1-p)^2 = B_{1,3}(p) (Bernstein).
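
As a quick check of this reduction: if each variable is independently set to true with probability p, a 1in3 constraint is satisfied exactly when one of its three variables is true and the other two are false, which happens with probability 3·p·(1-p)^2; averaging over the 1000 constraints gives the same polynomial 3p(1-p)^2 as the expected fraction of satisfied constraints.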

UBC March 3p(1-p)^2 for MAX-CSP({22})

UBC March Look-ahead Polynomial (Definition) H is a CSP(G) instance. N is an arbitrary assignment. The look-ahead polynomial la_{H,N}(p) computes the expected fraction of satisfied constraints of H when each variable in N is flipped with probability p.
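
A minimal sketch of how la_{H,N}(p) can be evaluated for one value of p (an illustration, not the project's packed-truth-table implementation; it enumerates the flip patterns of each constraint and assumes the i-th variable of a constraint is the i-th bit of its truth-table row; class and method names are hypothetical):

import java.util.List;

public class LookAhead {
    // Probability that one constraint (relation number + variable indices) is satisfied
    // when each variable of the current assignment N is flipped independently with probability p.
    static double constraintSatProb(int relation, int[] vars, boolean[] N, double p) {
        int r = vars.length;
        double prob = 0.0;
        for (int flips = 0; flips < (1 << r); flips++) {
            double weight = 1.0;
            int row = 0;
            for (int i = 0; i < r; i++) {
                boolean flip = ((flips >> i) & 1) == 1;
                weight *= flip ? p : (1 - p);
                if (N[vars[i]] ^ flip) {
                    row |= (1 << i);   // i-th variable contributes the i-th bit of the truth-table row
                }
            }
            if (((relation >> row) & 1) == 1) {
                prob += weight;        // this flip pattern satisfies the relation
            }
        }
        return prob;
    }

    // la_{H,N}(p): expected fraction of satisfied constraints of H
    // when each variable in N is flipped with probability p.
    static double lookAhead(List<int[]> constraintVars, List<Integer> relations, boolean[] N, double p) {
        double sum = 0.0;
        for (int c = 0; c < constraintVars.size(); c++) {
            sum += constraintSatProb(relations.get(c), constraintVars.get(c), N, p);
        }
        return sum / constraintVars.size();
    }
}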

UBC March The general case MAX-CSP(G): G = {R1, …}, t_R(F) = fraction of constraints in F that use R, x = p. The look-ahead polynomial is the weighted sum of the appSAT_R(x) over all R in G, with weights t_R(F); the set of appSAT_R(x) polynomials is a superset of the Bernstein polynomials (as in computer graphics, where curves are weighted sums of Bernstein polynomials).
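
For illustration (a hypothetical mix, not from the talk): if each variable is independently set to true with probability p and 60% of the constraints use 1in3 (relation 22) while 40% use the three-variable OR (relation 254), the look-ahead polynomial is 0.6 · 3p(1-p)^2 + 0.4 · (1 - (1-p)^3).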

UBC March Rational Bezier Curves

UBC March Bernstein Polynomials

UBC March all the appSAT_R(x) polynomials

UBC March Look-ahead Polynomial in Action Focus on purely mathematical question first Algorithmic solution will follow Mathematical question: Given a CSP(G) instance. For which fractions f is there always an assignment satisfying fraction f of the constraints? In which constraint systems is it impossible to satisfy many constraints?

UBC March Remember? MAX-CSP(G,f): Given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H. Example: G = {22} of rank 3, MAX-CSP({22},f).

UBC March Mathematical Critical Transition Point MAX-CSP({22},f): For f ≤ u: the problem always has a solution. For f ≥ u + ε (ε > 0): the problem does not always have a solution. u is the critical transition point: always (fluid) vs. not always (solid).

UBC March The Magic Number u = 4/9

UBC March 3p(1-p)^2 for MAX-CSP({22})

UBC March Produce the Magic Number Use an optimally biased coin: 1/3 in this case. In general: a min-max problem.
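
Worked out: d/dp [3p(1-p)^2] = 3(1-p)(1-3p), which vanishes in (0,1) at p = 1/3, and 3 · (1/3) · (2/3)^2 = 4/9. So the optimally biased coin sets each variable to true with probability 1/3 and guarantees, in expectation, the fraction 4/9 of satisfied 1in3 constraints.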

UBC March The 22 reductions: needed for implementation (diagram of the reductions obtained by setting each variable position to 0 or 1). 22 is expanded into 6 additional relations.
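
A hedged sketch of how such a reduction can be computed directly on relation numbers (illustration only, not the project's packed-truth-table code; same bit-ordering assumption and hypothetical names as in the earlier sketches):

public class Reduce {
    // Relation obtained from 'relation' (of the given rank) by fixing the variable
    // at bit position 'pos' to 'value'; the result is a relation of rank - 1.
    static int reduce(int relation, int rank, int pos, int value) {
        int result = 0;
        for (int newRow = 0; newRow < (1 << (rank - 1)); newRow++) {
            int low = newRow & ((1 << pos) - 1);       // bits below the fixed position
            int high = (newRow >> pos) << (pos + 1);   // bits above, shifted to make room
            int oldRow = high | (value << pos) | low;  // re-insert the fixed variable's value
            if (((relation >> oldRow) & 1) == 1) {
                result |= (1 << newRow);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Reducing 1in3 (relation 22): fixing a variable to 1 forces the other two to be false,
        // fixing it to 0 leaves an exactly-one-of-two constraint on the remaining variables.
        System.out.println(reduce(22, 3, 0, 1)); // 1  (rank-2 relation: both remaining variables false)
        System.out.println(reduce(22, 3, 0, 0)); // 6  (rank-2 relation: exactly one remaining variable true)
    }
}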

UBC March The 22 N-Mappings: needed for implementation. 22 is expanded into 7 additional relations.

UBC March The 22 N-Mappings: needed for implementation (table of N-mapped variable positions and the resulting relation numbers, e.g. 104).
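
A hedged sketch of the N-mapping on relation numbers (illustration only, under the same bit-ordering assumption; negating a variable swaps the truth-table rows that differ in that bit):

public class NMap {
    // Relation obtained from 'relation' (of the given rank) by negating the variable at bit position 'pos'.
    static int nMap(int relation, int rank, int pos) {
        int result = 0;
        for (int row = 0; row < (1 << rank); row++) {
            if (((relation >> (row ^ (1 << pos))) & 1) == 1) {
                result |= (1 << row);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Negating all three variables of 1in3 (relation 22) gives "exactly two of three true".
        int r = nMap(nMap(nMap(22, 3, 0), 3, 1), 3, 2);
        System.out.println(r); // 104, one of the relation numbers on the N-mapping slide
    }
}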

UBC March General Dichotomy Theorem MAX-CSP(G,f): For each finite set G of relations there exists an algebraic number t_G. For f ≤ t_G: MAX-CSP(G,f) has a polynomial solution. For f ≥ t_G + ε (ε > 0): MAX-CSP(G,f) is NP-complete. t_G is the critical transition point: easy (fluid, polynomial) vs. hard (solid, NP-complete). Due to Lieberherr/Specker (1979, 1982). Polynomial solution: use the optimally biased coin; derandomize; P-optimal.

UBC March Context Ladner [Lad 75]: if P != NP, then there are decision problems in NP that are neither NP-complete nor in P. It is conceivable that MAX-CSP(G,f) contains problems of intermediate complexity.

UBC March General Dichotomy Theorem (Discussion) MAX-CSP(G,f): For each finite set G of relations there exists an algebraic number t_G. For f ≤ t_G: MAX-CSP(G,f) has a polynomial solution. For f ≥ t_G + ε (ε > 0): MAX-CSP(G,f) is NP-complete. t_G is the critical transition point. Easy (fluid): polynomial (finding an assignment), constant-size proofs (done statically using look-ahead polynomials), no clause learning. Hard (solid): NP-complete, exponential / super-polynomial proofs (?), relies on clause learning.

UBC March The Game Evergreen(r,m) for Boolean MAX-CSP(G), r>1,m>0 Two players: They agree on a protocol P1 to choose a set of m relations of rank r. 1.The players use P1 to choose a set G of m relations of rank r. 2.Player 1 constructs a CSP(G) instance H with 1000 variables and gives it to player 2 (1 second limit). 3.Player 2 gets paid the fraction of constraints she can satisfy in H (100 seconds limit). 4.Take turns (go to 1).

UBC March Evergreen(3,2) Rank 3: Represent relations by the integer corresponding to the truth table in standard sorted order 000 – 111. Choose relations between 1 and 254 (exclude 0 and 255). Don’t choose two odd numbers: the all-false assignment would satisfy all constraints. Don’t choose two numbers that are 128 or larger: the all-true assignment would satisfy all constraints.
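
Both exclusions follow from the truth-table encoding: an odd relation number has the bit for row 000 set, so the all-false assignment satisfies every constraint that uses it, and a relation number of 128 or more has the bit for row 111 set, so the all-true assignment satisfies every constraint that uses it. If both chosen relations share such a property, player 2 satisfies everything trivially.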

UBC March For Evergreen(3,2)

min max problem t_G = min over all CSP(G) instances H of max over all (0,1) assignments M of sat(H,M), where sat(H,M) = fraction of satisfied constraints in the CSP(G)-instance H under assignment M.

Problem reductions are the key: a solution to the simpler problem implies a solution to the original problem.

min max problem t_G = lim (n → infinity) min over all SYMMETRIC constraint systems H with n variables of max over all (0,1) assignments M to the n variables of sat(H,M,n), where sat(H,M,n) = fraction of satisfied constraints in the CSP(G)-instance H under assignment M.

Reduction achieved Instead of minimizing over all constraint systems it is sufficient to minimize over the symmetric constraint systems.

Reduction Symmetric case is the worst-case: If in a symmetric constraint system the fraction f of constraints can be satisfied, then in any constraint system the fraction f can be satisfied.

Symmetric is the worst case: n variables, n! permutations. If in the big (symmetrized) system the fraction f is satisfied, then there must be at least one small system where the fraction f is satisfied.

min max problem t_G = lim (n → infinity) min over all SYMMETRIC constraint systems H with n variables of max over all (0,1) assignments M that set the first k variables to 1 (for some k) of sat(H,M,n), where sat(H,M,n) = fraction of satisfied constraints in H under assignment M.

UBC March Observations The look-forward approach based on look-ahead polynomials has not been used in state-of-the-art MAX-SAT and Boolean MAX-CSP solvers. Often a fair coin is used. The optimally biased coin is often significantly better.


UBC March N0 = {!v1, !v2, !v3, !v4}: how the look-ahead polynomial depends on its context, the currently best assignment.

UBC March N0' = {v1, !v2, !v3, !v4}

UBC March Other magic numbers (Lieberherr/Specker 1982) G = all relations used in SAT (Or): t_G = 1/2 (easy); for 2-satisfiable instances (disallow A and !A for any A): t_G = (sqrt(5)-1)/2. G = {R_0, R_1, R_2, R_3}, R_j of rank 3 = exactly j of 3 variables are true: t_G = 1/4.

UBC March Other magic numbers (2) (Lieberherr/Specker 1982) G(p,q) = {R_{p,q}} = disjunctions containing at least p positive or q negative literals (p, q ≥ 1). Let a be the solution of (1-x)^p = x^q in (0,1); then t_{G(p,q)} = 1 - a^q.
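
As a sanity check, p = q = 1 gives ordinary SAT clauses: (1-x) = x has the solution a = 1/2 in (0,1), so t_{G(1,1)} = 1 - 1/2 = 1/2, matching t_G = 1/2 for SAT on the previous slide.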

UBC March SAT Rank 2 example, 9 constraints using the relations 14 = or(x y) and 7 = or(!x !y), e.g. 14: 1 2 = or(1 2), 7: 1 3 = or(!1 !3). What is the look-ahead polynomial?

UBC March appmean = lookahead is an approximation of the true mean Blurry vision What do we learn from the abstract representation? set 1/3 of the variables to true (maximize). the best assignment will satisfy at least 7/9 constraints. very useful but the vision is blurred in the “middle”. excellent peripheral vision

UBC March Where we are Introduction Look-forward Look-backward SPOT: how to use the look-ahead polynomials

UBC March Look Backward Why? –to avoid past mistakes How? –Transition system based on superresolution. –Superresolution was first introduced for SAT, now we generalize it for MAX-CSP.

UBC March Observation The optimally biased coin technique based on look-ahead polynomials is “best possible”: if we could improve it by a trillionth in polynomial time, then P = NP. We now improve it by learning new constraints that influence the polynomial.

UBC March Clause Learning Let’s go beyond what an optimally biased coin guarantees! Goal: satisfy the maximum number of constraints. Approach: Superresolution. When to apply: the number of constraints guaranteed to be unsatisfied doesn’t decrease, i.e. a mistake is made. Who to blame: a subset of the decision literals; they are the culprits. How to penalize: add the disjunction of their negations as a superresolvent; the gang of culprits is watched.

UBC March Transition Rules Unit-Propagation (UP): M || F || SR || N → Mk || F || SR || N if k is undefined in M, and unsat(SR, M¬k) > 0 [old mistake(M¬k)] or unsat(F, M¬k) ≥ unsat(F, N) [new mistake(M¬k)]. mistake(M) = old mistake(M) or new mistake(M).

UBC March Transition Rules Semi-Superresolution (SSR): NewSR = ∨(¬k) over the decision literals k ∈ M_d. M || F || SR || N → M || F || SR, NewSR || N if unsat(SR, M) > 0 [old mistake(M)] or unsat(F, M) ≥ unsat(F, N) [new mistake(M)]. mistake(M) = old mistake(M) or new mistake(M).

UBC March Transition Rules Superresolution (SR), 1977: M || F || SR || N → M || F || SR, Common || N if there exists a literal k so that SSR applied twice yields NewSR = Common ∨ k and NewSR = Common ∨ ¬k. Notes: Common is a resolvent. Superresolution is the mother of clause learning: other clause learning schemes learn clauses implied from superresolvents by unit propagation. Resolution and superresolution are polynomially equivalent (1977; Beame et al. 2004).

UBC March Superresolution Mother of clause learning: minimal elements of learned clauses. But it is a long way from superresolution to making clause learning a suitable and efficient technique in SAT, CSP and MAX-CSP solvers.

UBC March Transition Rules Opt-Semi-Superresolution (OSSR): NewSR = ∨(¬k) over k ∈ M′, M′ a subset of M_d. M || F || SR || N → M || F || SR, NewSR || N if mistake(M) and not newM(F, M*) for every M* that is M′ with one literal deleted. oldM(M) = unsat(SR, M) > 0; newM(F, M) = unsat(UP*(F, M), M) ≥ unsat(F, N); mistake(M) = oldM(M) or newM(F, M). UP*(F, M): apply UP as often as possible after applying M to F. NewSR is minimal.

UBC March Optimized Semi-Superresolution Not all decision literals may be responsible for the “mistake”. We want a minimal superresolvent, so that deleting any one literal would destroy the superresolvent property. Can be implemented by a backward traversal of the implication graph that is built as part of unit propagation.

UBC March Optimized Semi-Superresolution (fast implementation) Can be implemented by a backward traversal of the implication graph that is built as part of unit propagation (diagram of an implication graph over literals).

UBC March Algorithm plan Start with an arbitrary assignment N. while (proof incomplete) { try to improve N by creating a new assignment from scratch, using the optimally biased coin to flip the assignments; success: update N; failure: learn a new constraint that will prevent the same mistake and will “improve” the polynomial. }
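
A minimal runnable sketch of the "success" half of this loop, specialized to 1in3 constraints (illustration only: the tiny instance and the class name are hypothetical, the 1/3 bias is the optimally biased coin for 1in3 from earlier slides, and the "failure" branch that learns a superresolvent is omitted):

import java.util.Random;

public class BiasedSearch {
    // Count satisfied 1in3 constraints; each row of 'constraints' lists three variable indices.
    static int satisfied(int[][] constraints, boolean[] assignment) {
        int count = 0;
        for (int[] c : constraints) {
            int trues = 0;
            for (int v : c) if (assignment[v]) trues++;
            if (trues == 1) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int[][] constraints = { {0, 1, 2}, {0, 1, 3}, {0, 2, 3}, {1, 2, 3} }; // hypothetical tiny instance
        int nVars = 4;
        double bias = 1.0 / 3.0;                                   // optimally biased coin for 1in3
        int target = (int) Math.ceil(4.0 / 9.0 * constraints.length); // look-ahead guarantee: 4/9 of constraints
        Random rnd = new Random(42);

        boolean[] best = new boolean[nVars];                       // arbitrary starting assignment N (all false)
        int bestSat = satisfied(constraints, best);

        while (bestSat < target) {                                 // guarantee not yet reached
            boolean[] candidate = new boolean[nVars];
            for (int v = 0; v < nVars; v++) candidate[v] = rnd.nextDouble() < bias;
            int sat = satisfied(constraints, candidate);
            if (sat > bestSat) {                                   // success: update N
                best = candidate;
                bestSat = sat;
            }
            // failure branch (learning a constraint that improves the polynomial) omitted in this sketch
        }
        System.out.println("satisfied " + bestSat + " of " + constraints.length);
    }
}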


UBC March UP / D

UBC March Properties of TS TS finds the maximum in at most an exponential number of steps. It creates a polynomially checkable proof that we indeed found the maximum.

UBC March Where we are Introduction Look-forward Look-back SPOT: how to use the look-ahead polynomials with superresolution

UBC March SPOT (Superresolution P-OpTimal) Look-forward based on look-ahead polynomials –value-ordering –variable-ordering Look-backward –superresolution many different learning schemes developed by SAT community (different cuts of the implication graph) SPOT defines a family of solvers that rely on look-ahead polynomials and (optimized) superresolvents.

UBC March Our approach to Solving H in MAX-CSP(G,f) Given an assignment N which satisfies fraction f. Is there an assignment that satisfies more than f? YES (we are done) if la_{H,N}(mb) > f; MAYBE otherwise, and the closer la_{H,N}(mb) comes to f, the better. Is it worthwhile to set a certain literal k to 1 so that we can reach an assignment which satisfies more than f? YES (we are done) if, for H1 = UP*(H_{k=1}, N), la_{H1,N}(mb1) > f; MAYBE, and the closer la_{H1,N}(mb1) comes to f, the better; NO: UP or clause learning. UP*(F,M): apply UP as often as possible after applying assignment M to F. The problem: MAYBE happens frequently, especially when f is close to 1.

UBC March Value Ordering Given are H and the currently best assignment N. H1 = UP*(H_{x=1}, N), H0 = UP*(H_{x=0}, N). Choose x = 1 if la_{H1,N}(mb1) ≥ la_{H0,N}(mb0). UP*(F,M): apply UP as often as possible after applying assignment M to F.

UBC March Two ways to look forward using look-ahead polynomials Reduction: H_{k=d} (d = 0,1; k a literal), and n-map(H,k). Connection: abs((n-map(H,k))_{k=d}) = abs(H_{k=!d}). The abstract representation can achieve its maximum either by repeated reductions or by repeated n-maps.

UBC March The SPOT space How to use the look-ahead polynomials Choose top k (number of true variables). Choose among top 5 (4 is the winner)

UBC March SPOT-Conjecture There is a member U of the SPOT family of solvers: –U finds a maximum assignment “quickly”. –But U spends a long time proving that it is the maximum assignment. Stopping rule problem.

UBC March The bold SPOT-Conjecture There is a member U of the SPOT family of solvers: –U finds the maximum assignment after at most |F|^c superresolution steps, where c is a constant. –Any superresolution proof of maximality is probably superpolynomial.

UBC March SPOT-Conjecture (plot: percentage satisfied, 0 to 1, vs. number of tries / proof steps, marking a random assignment N, t_G and the maximum). Up to t_G only one helper is needed: the look-ahead polynomial. Between t_G and the maximum two helpers work together: 1. the look-ahead polynomial, 2. superresolvents. Beyond the maximum only one helper remains, superresolvents; the look-ahead polynomials become totally useless !?! Stopping rule problem!

UBC March SPOT-Conjecture (plot for a symmetric instance: percentage satisfied, 0 to 1, vs. number of tries / proof steps, marking a random assignment N, la_{F,N}(mb) and the maximum). Up to la_{F,N}(mb) only one helper is needed: the look-ahead polynomial. From there to the maximum two helpers work together: 1. the look-ahead polynomial, 2. superresolvents. Beyond the maximum only one helper remains, superresolvents; the look-ahead polynomials become totally useless !?! Stopping rule problem!

UBC March Are look-ahead polynomials useful? (Plot: percentage satisfied vs. number of tries / proof steps, marking a random assignment N, an assignment N1 found by some fast MAX-CSP solver MC, and la_{F,N1}(mb).) How often does this happen in practice: MC has to search using clause learning, while the look-ahead polynomial can construct a better assignment without search. Intuition: the better the assignment N1, the less likely it is that the look-ahead polynomial improves N1.

UBC March There is hope that the look-ahead polynomials are useful

UBC March What is new? New: Superresolution for MAX-CSP New: Integration of look-ahead polynomials with superresolution Old: Superresolution for SAT (1977) Old: Look-ahead polynomials (1983)

UBC March Additional Information Rich literature on clause learning in SAT and CSP solver domain. Superresolution is the most general form of clause learning with restarts. Papers on look-ahead polynomials and superresolution: papers/publications.html

UBC March Additional Information Useful unpublished paper on look-ahead polynomials: biblio/partial-sat-II.html. Technical report on the topic of this talk: biblio/POptMAXCSP.html.

UBC March Future work Exploring best combination of look-forward and look-back techniques. Find all maximum-assignments or estimate their number. Robustness of maximum assignments. Are our MAX-CSP solvers useful for reasoning about biological pathways?

UBC March Conclusions Presented SPOT, a family of MAX-CSP solvers based on look-ahead polynomials and non-chronological backtracking. SPOT has a desirable property: P-optimality. SPOT can be implemented very efficiently. Preliminary experimental results are encouraging. A lot more work is needed to assess the practical value of the look-ahead polynomials.

UBC March end for now

UBC March appmean is an approximation of the true mean


UBC March The Evergreen Project: How To Learn From Mistakes Caused by Blurry Vision in MAX-CSP Solving Karl J. Lieberherr Northeastern University Boston joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart

UBC March MAX-CSP: Superresolution and P-Optimality Karl J. Lieberherr Northeastern University Boston joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart

UBC March Binomial Distribution


UBC March Example (can satisfy 6/7): x1 + x2 + x3 = 1, x1 + x2 + x4 = 1, x1 + x3 + x4 = 1, x1 + x2 + x5 = 1, x1 + x3 + x5 = 1, x2 + x3 + x5 = 1.

UBC March maximize 3x(1-x)^2

UBC March Transition Rules Unit-Propagation (UP): M || F || SR || N → Mk || F || SR || N if k is undefined in M, and unsat (SR,M¬k) > 0 or unsat(F,M¬k) ≥ unsat(F,N).

UBC March Transition Rules Decide (D): M || F || SR || N → Mk^d || F || SR || N if k is undefined in M, and v(k) occurs in some constraint of F.

UBC March Transition Rules Update: M || F || SR || N → M || F || SR || M if M is complete, and unsat(F,M) < unsat(F,N).

UBC March Transition Rules Restart: M || F || SR || N → { } || F || SR || N

UBC March Transition Rules Finale: M || F || SR || N → M || F || SR || N if the empty constraint is in SR or unsat(F,N) = 0.

UBC March Transition Rules Semi-Superresolution (SSR): NewSR = ∨(¬k) over the decision literals k ∈ M_d. M || F || SR || N → M || F || SR, NewSR || N if unsat(SR,M) > 0 or unsat(F,M) ≥ unsat(F,N).

UBC March Transition Manager

UBC March Transition Rules

UBC March Transition Rules (cont.)

UBC March Where we are Introduction Look-forward Look-back Packed Truth Tables SPOT: how to use the look-ahead polynomials

UBC March Requirements for Packed Truth Tables The look-ahead polynomial can be computed efficiently. Requires efficient truth table analysis. Reduction of an instance must be efficient. Efficiently compute the forced variables. Each relation has a unique representation.

UBC March Packed Truth Tables

UBC March RelationI: implemented by bitwise operations int isForced(int variablePosition) boolean isIrrelevant(int variablePosition) int nMap(int variablePosition) int numberOfRelevantVariables() int q(int s) int reduce(int variablePosition, int value) int rename(int permutationSemantics, int... permutation)

UBC March Different ways of constructing implication graph (SAT) Lieberherr 1977: –edge from l1 to l2 is labeled by the set of already forced literals L so that l1 union L forces l2 because of a clause C. Beame 2004 (now the standard, due to Marques-Silva & Sakallah, 1996) –edge from l1 to l2 is labeled by clause C. l1 is responsible for forcing l2 because of clause C.

UBC March The Evergreen Project: Assessing the Guidance of Look-Ahead Polynomials in MAX-CSP Solving Karl J. Lieberherr Northeastern University Boston joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart