
1 UBC March 2007 The Evergreen Project: The Promise of Polynomials to Boost CSP/SAT Techniques* Karl J. Lieberherr, Northeastern University, Boston. Joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart. Title inspired by a paper by Carla Gomes / David Shmoys.

2 UBC March 20072 Abstract We invent a simple game, called the Evergreen Game, which is about generating and solving Boolean MAX-CSP problems. The fallouts from the Evergreen Game are surprising: Although the game is about constructing and solving MAX-CSP problems, simple, efficient algorithms are sufficient to guarantee a draw. The best game-playing strategy leads to a significant reduction of the huge search space for both formula generation and solving.

3 UBC March 20073 Abstract Fallouts (continued) –The Evergreen Game shows us how to systematically translate a CSP formula into a polynomial that is fundamental in playing the game well. –We have some (but incomplete) evidence that those polynomials are useful for efficient MAX-CSP as well as MAX-SAT and SAT solvers.

4 UBC March 20074 Where we are Introduction The Evergreen Game The Evergreen Player as Preprocessor Some Experimental Results

5 UBC March 2007 Problem Snapshot SAT: classic problem in complexity theory. SAT & MAX-SAT solvers: work on CNFs (a multi-set of disjunctions). Boolean CSP: constraint satisfaction problem in which each constraint uses a Boolean relation; e.g. the Boolean relation 1in3(x y z) is satisfied iff exactly one of its arguments is true. Boolean MAX-CSP: a multi-set of such constraints.

6 UBC March 2007 Introduction Boolean MAX-CSP(G) for rank d, where G is a set of relations of rank d. Input: a bag of constraints (a CSP(G) instance); a constraint = a relation + a set of variables; a relation is given by its number, an int < 2^(2^d), and must be in G; a variable is an int. Output: a (0,1) assignment to the variables which maximizes the number of satisfied constraints. Example input: G = {22} of rank 3 (1in3 has number 22). H = 22:1 2 3 0, 22:1 2 4 0, 22:1 3 4 0. M = {1 !2 !3 !4} satisfies all constraints.
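To make the input format concrete, here is a minimal sketch of evaluating such an instance (the class name and variable-order convention are illustrative, not taken from the authors' tools): bit r of the relation number records whether the constraint is satisfied when its variables, read as bits, spell out truth-table row r.

```java
// Sketch: evaluate a Boolean MAX-CSP instance given as (relation number, variable list).
// Assumed convention: bit r of the relation number is 1 iff the constraint is satisfied
// when its variables, read as bits, form the row index r (so 1in3 has number 22).
// The 1in3 relation is symmetric, so the exact variable order does not matter here.
public class MaxCspDemo {
    // True iff the constraint "relation: vars" is satisfied by the assignment.
    static boolean satisfied(int relation, int[] vars, boolean[] assignment) {
        int row = 0;
        for (int v : vars) {                      // build the truth-table row index
            row = (row << 1) | (assignment[v] ? 1 : 0);
        }
        return ((relation >> row) & 1) == 1;      // look up that row in the relation
    }

    public static void main(String[] args) {
        // H from the slide: G = {22}, constraints 22:1 2 3, 22:1 2 4, 22:1 3 4.
        int[][] constraints = { {1, 2, 3}, {1, 2, 4}, {1, 3, 4} };
        boolean[] m = new boolean[5];             // variables 1..4 (index 0 unused)
        m[1] = true;                              // M = {1 !2 !3 !4}
        int sat = 0;
        for (int[] c : constraints) {
            if (satisfied(22, c, m)) sat++;
        }
        System.out.println(sat + "/" + constraints.length + " constraints satisfied");
        // Prints: 3/3 constraints satisfied
    }
}
```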

7 UBC March 2007 Variation MAX-CSP(G,f): given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H. Example: G = {22} of rank 3. H = 22:1 2 3 0, 22:1 2 4 0, 22:1 3 4 0, 22:2 3 4 0. H is in MAX-CSP({22},?): what is the highest value for ?

8 UBC March 20078 Where we are Introduction The Evergreen Game The Evergreen Player as Preprocessor Some Experimental Results

9 UBC March 20079 The Game by Example (special case of Evergreen(2,2)) The Evergreen Game is played by two players, Anna and Bob, that take turns creating and solving CSP formulae and paying each other a percentage of a wager based on the fraction of constraints satisfied. Let the wager w be 1 million dollars and the constraints limited to Gamma ={OR(x,y), NOT(x)}.

10 UBC March 200710 The Game by Example Anna starts by constructing F Initial = –{100: NOT(x), 150: NOT(y), 200: OR(x,y)}. Bob tries to find an assignment that satisfies the largest possible fraction of constraints. For example, the assignment {x=true, y=false} will satisfy (150+200)/450 approx 0.78. Anna then pays Bob 0.78 million dollars (w*0.78).

11 UBC March 200711 The Game by Example Bob now constructs a formula that Anna solves and pays Anna the percentage of the wager that she solved.

12 UBC March 2007 Now Bob constructs a formula for Anna: {3: NOT(x), 3: NOT(y), 2: NOT(z), 1: OR(x, y), 1: OR(x, z), 1: OR(y, z)}. The best assignment that Anna finds is {x=false, y=false, z=true}, which satisfies about the fraction 0.72. Bob keeps 0.06 million in his pocket.
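A small sketch of the weighted-fraction arithmetic behind this round (names and layout are illustrative); Bob keeps the wager times the difference between the fraction he satisfied in the previous round and the fraction Anna satisfies here.

```java
// Payoff bookkeeping for this round of the Evergreen Game (illustrative sketch).
public class GameRound {
    public static void main(String[] args) {
        // Bob's formula: weights of NOT(x), NOT(y), NOT(z), OR(x,y), OR(x,z), OR(y,z).
        int[] weight = {3, 3, 2, 1, 1, 1};
        boolean x = false, y = false, z = true;   // Anna's assignment
        boolean[] sat = { !x, !y, !z, x || y, x || z, y || z };
        double satisfiedWeight = 0, totalWeight = 0;
        for (int i = 0; i < weight.length; i++) {
            totalWeight += weight[i];
            if (sat[i]) satisfiedWeight += weight[i];
        }
        double anna = satisfiedWeight / totalWeight;   // 8/11, about 0.73
        double bob = (150.0 + 200.0) / 450.0;          // about 0.78, from slide 10
        // Bob keeps the wager (1 million) times the difference of the two fractions.
        System.out.printf("Anna satisfies %.3f, Bob satisfied %.3f%n", anna, bob);
    }
}
```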

13 UBC March 200713 Theorem 1 Game Evergreen(2,2) has polynomial time algorithms Construct(2,2) and Solve(2,2) for Bob so that Bob can achieve a draw even if Anna has unlimited computational resources.

14 UBC March 200714 The Game Evergreen(r,m) for Boolean MAX-CSP(G), r>1,m>0 Two players: They agree on a protocol P1 to choose a set of m relations of rank r. 1.The players use P1 to choose a set G of m relations of rank r. 2.Player 1 constructs a CSP(G) formula H with 1000 variables and gives it to player 2 (1 second limit). 3.Player 2 gets paid the fraction of constraints she can satisfy in H (100 seconds limit). 4.Take 1 turn and stop. How would you play this game intelligently?

15 UBC March 2007 For details: http://www.ccs.neu.edu/home/lieber/evergreen/game-life-science.html

16 Anna's Objective: an inf max problem. t_G = inf over all CSP(G) instances H of max over all (0,1) assignments M of sat(H,M), where sat(H,M) = fraction of constraints of the CSP(G) formula H satisfied by the assignment M.

17 Bob's Objective t_G = inf over all CSP(G) instances H of max over all (0,1) assignments M of sat(H,M). Find an assignment that is at least as good as t_G: algorithm Evergreen Player (linear time).

18 UBC March 200718 Where we are Introduction The Evergreen Game The Evergreen Player as Preprocessor Some Experimental Results

19 UBC March 2007 Experiment We propose to put the Evergreen Player into action as a preprocessor for state-of-the-art SAT and MAX-SAT solvers. Use the Evergreen Player to create a maximal assignment J for an input formula F. Feed n-map(F,J) to a fast solver.

20 UBC March 200720 Where we are Introduction The Evergreen Game The Evergreen Player as Preprocessor Some Experimental Results

21 UBC March 2007 Results from 2007 Benchmarks Within the MAX3SAT benchmarks, there are 4 formulae on which Toolbar timed out at 1200 seconds (v70-c700.wcnf ~ v70-c1000.wcnf). Among these formulae, one ratio got worse (0.9985795) and 3 of the 4 got better, the average being roughly 1.0099673. Within the 3 MAXCUT benchmarks I've tried, there is one formula on which Toolbar timed out at 1200 seconds; its ratio is unchanged. Among all 20 benchmarks I've finished, 5 of them fall into the time-out category.

22 UBC March 2007 Other results from 2007 Benchmarks On benchmarks where no timeout occurred, the running time improved (by factors of 2 and 3) in 50% of the cases with preprocessing. Preprocessing is very fast (linear).

23 UBC March 2007 yices: a nice improvement on one of the first examples we tried. Yices without preprocessing (v2000-c8400): average time = 888.048, average sat ratio = 0.947143. Yices with preprocessing (v2000-c8400): average time = 0.0342615, average sat ratio = 1.

24 UBC March 2007 Conclusions Worth investigating further. Suggests a cheap way to parallelize MAX-SAT and SAT solving: run the preprocessed and unpreprocessed versions in parallel.

25 UBC March 200725 Thank you. The End.

26 UBC March 200726 Our approach by Example: SAT Rank 2 example 14 : 1 2 0 14 : 3 4 0 14 : 5 6 0 7 : 1 3 0 7 : 1 5 0 7 : 3 5 0 7 : 2 4 0 7 : 2 6 0 7 : 4 6 0 14: 1 2 = or(1 2) 7: 1 3 = or(!1 !3)

27 UBC March 2007 appmean = approximation of the mean (k variables true). Blurry vision: what do we learn from the abstract representation? Set 1/3 of the variables to true (maximize); the best assignment will satisfy at least 7/9 of the constraints. Very useful, but the vision is blurry in the "middle" and excellent in the periphery. [Plot of appmean over k = 0..6, with the maximum assignment at 8/9 and the guarantee at 7/9.]
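The two numbers on this slide can be reproduced with a short computation, assuming (as the look-ahead slides later make precise) that the abstract representation is the expected satisfied fraction when each variable is set to true independently with probability p. The sketch below is illustrative, not the authors' code.

```java
// Expected satisfied fraction for the rank-2 example above when every variable is set
// to true independently with probability p: three or(a b) constraints and six
// or(!a !b) constraints out of nine. Maximizing over p reproduces the slide's numbers.
public class AbstractRepresentation {
    public static void main(String[] args) {
        double bestP = 0, bestValue = 0;
        for (int i = 0; i <= 1000; i++) {
            double p = i / 1000.0;
            double orPos = 1 - (1 - p) * (1 - p);   // or(a b): not both variables false
            double orNeg = 1 - p * p;               // or(!a !b): not both variables true
            double value = (3 * orPos + 6 * orNeg) / 9.0;
            if (value > bestValue) { bestValue = value; bestP = p; }
        }
        // Expected: best p ~ 1/3 and best value ~ 7/9 ~ 0.7778, as stated above.
        System.out.printf("best p = %.3f, guaranteed fraction = %.4f%n", bestP, bestValue);
    }
}
```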

28 UBC March 2007 Our approach by Example Given a CSP(G) instance H and an assignment N which satisfies the fraction f in H (abs_H = abstract representation of H). Is there an assignment that satisfies more than f? YES (we are done) if abs_H(mb) > f; MAYBE otherwise, and the closer abs_H(mb) comes to f, the better. Is it worthwhile to set a certain literal k to 1 so that we can reach an assignment which satisfies more than f? YES (we are done) if, for H1 = H_{k=1}, abs_{H1}(mb1) > f; MAYBE otherwise, and the closer abs_{H1}(mb1) comes to f, the better; NO, by UP or clause learning.

29 UBC March 2007 H: 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0 and H0 (variable 1 set to 0): 14:2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0. [Plots of the abstract representations of H (k = 0..6) and H0 (k = 0..5); annotations: 8/9, 7/9 for H; 6/7 = 8/9, 5/7 = 7/9, 3/7 = 5/9, 3/9 for H0; maximum assignment away from the max bias: blurry.]

30 UBC March 2007 H: 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0 and H1 (variable 1 set to 1): 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:3 0, 7:5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0. [Plots of the abstract representations of H (k = 0..6) and H1 (k = 0..5); annotations: 8/9, 7/9 for H; 7/8 = 8/9, 6/8 = 7/9, 3/8, 2/7 = 3/8 for H1; maximum assignment away from the max bias: blurry; clearly above 3/4.]

31 UBC March 2007 [Comparison of the abstract representations of H, H0 and H1 for the instance 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0: H guarantees 7/9, H0 guarantees 7/9 (6/7 = 8/9, 5/7 = 7/9), H1 guarantees 8/9 (7/8 = 8/9, 6/8 = 7/9).] The guarantee NEVER GOES DOWN: DERANDOMIZATION.

32 UBC March 2007 The effect of n-map. Rank 2 instance: 10:1 0, 10:2 0, 10:3 0, 7:1 2 0, 7:1 3 0, 7:2 3 0, where 10:1 = or(1) and 7:1 2 = or(!1 !2). After the n-map: 5:1 0, 10:2 0, 10:3 0, 13:1 2 0, 13:1 3 0, 7:2 3 0, where 5:1 = or(!1) and 13:1 2 = or(1 !2). [Plot over k = 0..3 with annotations 4/6, 3/6, 4/6, 3/6, 4/6.] The abstract representation guarantees 0.625 * 6 = 3.75, i.e. 4 constraints satisfied.

33 UBC March 2007 First Impression The abstract representation (the look-ahead polynomials) seems useful for guiding the search. The look-ahead polynomials give us averages: the guidance can be misleading because of outliers. But how can we compute the look-ahead polynomials?

34 UBC March 200734 Where we are Introduction Look-forward Look-backward SPOT: how to use the look-ahead polynomials together with superresolution.

35 UBC March 200735 Look Forward Why? –To make informed decisions How? –Abstract representation based on look-ahead polynomials

36 UBC March 200736 Look-ahead Polynomial (Intuition) The look-ahead polynomial computes the expected fraction of satisfied constraints among all random assignments that are produced with bias p.

37 UBC March 2007 Consider an instance: 40 variables (1, ..., 40), 1000 constraints over the relation 1in3, e.g. 22:6 7 9 0, 22:12 27 38 0. Abstract representation: reduce the instance to the look-ahead polynomial 3p(1-p)^2 = B_{1,3}(p) (a Bernstein polynomial).

38 UBC March 2007 3p(1-p)^2 for MAX-CSP({22})

39 UBC March 2007 Look-ahead Polynomial (Definition) H is a CSP(G) instance and N is an arbitrary assignment. The look-ahead polynomial la_{H,N}(p) computes the expected fraction of satisfied constraints of H when each variable in N is flipped with probability p.
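A sketch (not the authors' implementation) that estimates la_{H,N}(p) by sampling: flip each variable of N independently with probability p and average the satisfied fraction. For a pure 1in3 instance with N all-false, the estimate should track 3p(1-p)^2 from the previous slides; the small instance below is illustrative.

```java
import java.util.Random;

// Monte Carlo estimate of the look-ahead polynomial la_{H,N}(p): flip each variable of
// the current assignment N independently with probability p and average the fraction of
// satisfied constraints. Illustrative sketch on a small pure-1in3 instance.
public class LookAheadEstimate {
    static boolean satisfied(int relation, int[] vars, boolean[] a) {
        int row = 0;
        for (int v : vars) row = (row << 1) | (a[v] ? 1 : 0);
        return ((relation >> row) & 1) == 1;
    }

    static double estimate(int relation, int[][] constraints, boolean[] n,
                           double p, int samples, Random rnd) {
        double sum = 0;
        for (int s = 0; s < samples; s++) {
            boolean[] flipped = n.clone();
            for (int i = 0; i < flipped.length; i++) {
                if (rnd.nextDouble() < p) flipped[i] = !flipped[i];
            }
            int sat = 0;
            for (int[] c : constraints) if (satisfied(relation, c, flipped)) sat++;
            sum += (double) sat / constraints.length;
        }
        return sum / samples;
    }

    public static void main(String[] args) {
        int[][] constraints = { {0, 1, 2}, {0, 1, 3}, {0, 2, 3}, {1, 2, 3} };
        boolean[] n = new boolean[4];             // N = all variables false
        Random rnd = new Random(42);
        for (double p : new double[] { 0.1, 1.0 / 3.0, 0.5 }) {
            double sampled = estimate(22, constraints, n, p, 200_000, rnd);
            double exact = 3 * p * (1 - p) * (1 - p);    // 1in3 with all-false N
            System.out.printf("p=%.3f  sampled=%.4f  3p(1-p)^2=%.4f%n", p, sampled, exact);
        }
    }
}
```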

40 UBC March 2007 The general case MAX-CSP(G), G = {R_1, ...}. t_R(F) = fraction of constraints in F that use R. The appSAT_R(x) polynomials (with x = p) over all R form a superset of the Bernstein polynomials (as in computer graphics: weighted sums of Bernstein polynomials).

41 UBC March 200741 Rational Bezier Curves

42 UBC March 200742 Bernstein Polynomials http://graphics.idav.ucdavis.edu/education/CAGDNotes/Bernstein-Polynomials.pdf

43 UBC March 2007 All the appSAT_R(x) polynomials

44 UBC March 200744 Look-ahead Polynomial in Action Focus on purely mathematical question first Algorithmic solution will follow Mathematical question: Given a CSP(G) instance. For which fractions f is there always an assignment satisfying fraction f of the constraints? In which constraint systems is it impossible to satisfy many constraints?

45 UBC March 200745 Remember? MAX-CSP(G,f): Given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H. Example: G = {22} of rank 3 MAX-CSP({22},f): 22:1 2 3 0 22:1 2 4 0 22:1 3 4 0 22: 2 3 4 0

46 UBC March 2007 Mathematical Critical Transition Point MAX-CSP({22},f): for f ≤ u the problem always has a solution; for f ≥ u + ε (ε > 0) the problem does not always have a solution. u = critical transition point: always (fluid) vs. not always (solid).

47 UBC March 200747 The Magic Number u = 4/9

48 UBC March 2007 3p(1-p)^2 for MAX-CSP({22})

49 UBC March 2007 Produce the Magic Number Use an optimally biased coin (bias 1/3 in this case). In general: a min max problem.
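A sketch of producing the magic number numerically (illustrative, not the authors' code): search for the bias p that maximizes the look-ahead polynomial 3p(1-p)^2; the optimum is p = 1/3 with value 4/9.

```java
// Grid search for the optimally biased coin for MAX-CSP({22}):
// maximize the look-ahead polynomial 3p(1-p)^2 over p in [0,1].
public class OptimalBias {
    public static void main(String[] args) {
        double bestP = 0, bestValue = 0;
        for (int i = 0; i <= 1_000_000; i++) {
            double p = i / 1_000_000.0;
            double value = 3 * p * (1 - p) * (1 - p);
            if (value > bestValue) { bestValue = value; bestP = p; }
        }
        // Expected: bestP ~ 1/3 and bestValue ~ 4/9 ~ 0.4444, the magic number u.
        System.out.printf("optimal bias p = %.4f, guaranteed fraction = %.4f%n",
                bestP, bestValue);
    }
}
```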

50 UBC March 2007 The 22 reductions: needed for implementation. [Reduction tree for relation 22 with edges labeled (variable, value); the reduced relations shown are 60, 3, 240, 15, 255 and 0.] 22 is expanded into 6 additional relations.

51 UBC March 2007 The 22 N-Mappings: needed for implementation. [Diagram: n-mapping relation 22 on the variable positions 0, 1, 2 yields the relations 41, 73, 97, 134, 146, 148 and 104.] 22 is expanded into 7 additional relations.

52 UBC March 2007 The 22 N-Mappings: needed for implementation.
N-mapped vars (positions 2 1 0) | Relation #
0 0 0 | 22
0 0 1 | 41
0 1 0 | 73
1 0 0 | 97
0 1 1 | 134
1 0 1 | 146
1 1 0 | 148
1 1 1 | 104
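The table can be reproduced mechanically: n-mapping a set of variable positions permutes the truth-table rows of the relation by XOR-ing each row index with the corresponding bit mask. A small sketch (illustrative, not the authors' implementation):

```java
// Reproduce the N-mapping table for relation 22 (1in3): flipping the variables at the
// positions given by `mask` moves truth-table row r to row (r XOR mask).
public class NMapTable {
    static int nMap(int relation, int mask) {
        int mapped = 0;
        for (int row = 0; row < 8; row++) {          // rank 3: rows 000..111
            if (((relation >> row) & 1) == 1) {
                mapped |= 1 << (row ^ mask);         // same row with flipped variables
            }
        }
        return mapped;
    }

    public static void main(String[] args) {
        for (int mask = 0; mask < 8; mask++) {
            System.out.printf("%d %d %d | %d%n",
                    (mask >> 2) & 1, (mask >> 1) & 1, mask & 1, nMap(22, mask));
        }
        // Prints 22, 41, 73, 134, 97, 146, 148, 104 for the masks 000..111;
        // the pairs match the table above (the slide lists the rows in a different order).
    }
}
```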

53 UBC March 2007 General Dichotomy Theorem MAX-CSP(G,f): for each finite set G of relations there exists an algebraic number t_G such that for f ≤ t_G, MAX-CSP(G,f) has a polynomial-time solution, and for f ≥ t_G + ε (ε > 0), MAX-CSP(G,f) is NP-complete. t_G = critical transition point: easy (fluid, polynomial) vs. hard (solid, NP-complete). Due to Lieberherr/Specker (1979, 1982). Polynomial solution: use an optimally biased coin and derandomize; P-optimal.

54 UBC March 2007 Context Ladner [Lad 75]: if P != NP, then there are decision problems in NP that are neither NP-complete nor in P. It is conceivable that MAX-CSP(G,f) contains problems of intermediate complexity.

55 UBC March 2007 General Dichotomy Theorem (Discussion) MAX-CSP(G,f): for each finite set G of relations there exists an algebraic number t_G such that for f ≤ t_G, MAX-CSP(G,f) has a polynomial-time solution, and for f ≥ t_G + ε (ε > 0), MAX-CSP(G,f) is NP-complete. t_G = critical transition point. Easy (fluid): polynomial (finding an assignment), constant-size proofs (done statically using look-ahead polynomials), no clause learning. Hard (solid): NP-complete, exponential or super-polynomial proofs(?), relies on clause learning.

56 UBC March 200756 The Game Evergreen(r,m) for Boolean MAX-CSP(G), r>1,m>0 Two players: They agree on a protocol P1 to choose a set of m relations of rank r. 1.The players use P1 to choose a set G of m relations of rank r. 2.Player 1 constructs a CSP(G) instance H with 1000 variables and gives it to player 2 (1 second limit). 3.Player 2 gets paid the fraction of constraints she can satisfy in H (100 seconds limit). 4.Take turns (go to 1).

57 UBC March 2007 Evergreen(3,2) Rank 3: represent relations by the integer corresponding to the truth table in standard sorted order 000 - 111. Choose relations between 1 and 254 (exclude 0 and 255). Don't choose two odd numbers: all-false would satisfy all constraints. Don't choose two numbers that are both 128 or above: all-true would satisfy all constraints.

58 UBC March 200758 For Evergreen(3,2)

59 min max problem t_G = min over all CSP(G) instances H of max over all (0,1) assignments M of sat(H,M), where sat(H,M) = fraction of constraints of the CSP(G) instance H satisfied by the assignment M.

60 Problem reductions are the key Solution to simpler problem implies solution to original problem.

61 min max problem t_G = lim (n to infinity) of min over all SYMMETRIC constraint systems H with n variables of max over all (0,1) assignments M to the n variables of sat(H,M,n), where sat(H,M,n) = fraction of constraints of the CSP(G) instance H satisfied by the assignment M to the n variables.

62 Reduction achieved Instead of minimizing over all constraint systems it is sufficient to minimize over the symmetric constraint systems.

63 Reduction Symmetric case is the worst-case: If in a symmetric constraint system the fraction f of constraints can be satisfied, then in any constraint system the fraction f can be satisfied.

64 Symmetric is the worst case.... With n variables there are n! permutations. If in the big (symmetrized) system the fraction f is satisfied, then there must be at least one small system where the fraction f is satisfied.

65 min max problem t_G = lim (n to infinity) of min over all SYMMETRIC constraint systems H with n variables of max over all (0,1) assignments M to the n variables where the first k variables are set to 1 of sat(H,M,n), where sat(H,M,n) = fraction of constraints of H satisfied by the assignment M.

66 UBC March 2007 Observations The look-ahead polynomial look-forward approach has not been used in state-of-the-art MAX-SAT and Boolean MAX-CSP solvers. Often a fair coin is used; the optimally biased coin is often significantly better.

67 UBC March 200767

68 UBC March 2007 N_0 = {!v1, !v2, !v3, !v4}: how the look-ahead polynomial depends on its context, the currently best assignment.

69 UBC March 2007 N_0' = {v1, !v2, !v3, !v4}

70 UBC March 2007 Other magic numbers (Lieberherr/Specker (1982)) G = all relations used in SAT (Or): t_G = 1/2 (easy); 2-satisfiable (disallow A and !A for any A): t_G = (sqrt(5)-1)/2. G = {R_0, R_1, R_2, R_3}, where R_j has rank 3 and requires that exactly j of its 3 variables are true: t_G = 1/4.

71 UBC March 2007 Other magic numbers (2) (Lieberherr/Specker (1982)) G(p,q) = {R_{p,q}} = disjunctions containing at least p positive or q negative literals (p,q ≥ 1). Let a be the solution of (1-x)^p = x^q in (0,1). Then t_{G(p,q)} = 1 - a^q.
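These numbers can be checked with a few lines of bisection (an illustrative sketch): find the root a of (1-x)^p = x^q in (0,1) and report 1 - a^q. For p = q = 1 this gives 1/2, and for p = 1, q = 2 it gives (sqrt(5)-1)/2, the same value as the 2-satisfiable case on the previous slide.

```java
// Bisection for the magic numbers t_{G(p,q)}: find a in (0,1) with (1-a)^p = a^q,
// then report t = 1 - a^q. Illustrative sketch; the iteration count is arbitrary.
public class MagicNumbers {
    static double magic(int p, int q) {
        double lo = 0.0, hi = 1.0;
        // f(x) = (1-x)^p - x^q is positive at 0, negative at 1, and strictly decreasing.
        for (int i = 0; i < 200; i++) {
            double mid = (lo + hi) / 2;
            double f = Math.pow(1 - mid, p) - Math.pow(mid, q);
            if (f > 0) lo = mid; else hi = mid;
        }
        double a = (lo + hi) / 2;
        return 1 - Math.pow(a, q);
    }

    public static void main(String[] args) {
        System.out.printf("t_G(1,1) = %.6f%n", magic(1, 1));   // 0.500000 (all disjunctions)
        System.out.printf("t_G(1,2) = %.6f%n", magic(1, 2));   // 0.618034 = (sqrt(5)-1)/2
        System.out.printf("t_G(2,2) = %.6f%n", magic(2, 2));   // 0.750000
    }
}
```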

72 UBC March 200772 SAT Rank 2 example 9 constraints 14 : 1 2 0 14 : 3 4 0 14 : 5 6 0 7 : 1 3 0 7 : 1 5 0 7 : 3 5 0 7 : 2 4 0 7 : 2 6 0 7 : 4 6 0 14: 1 2 = or(1 2) 7: 1 3 = or(!1 !3) What is the look-ahead polynomial?

73 UBC March 200773 appmean = lookahead is an approximation of the true mean Blurry vision What do we learn from the abstract representation? set 1/3 of the variables to true (maximize). the best assignment will satisfy at least 7/9 constraints. very useful but the vision is blurred in the “middle”. excellent peripheral vision

74 UBC March 200774 Conclusions

75 UBC March 200775 The End Thank You

76 UBC March 200776 Where we are Introduction Look-forward Look-back SPOT: how to use the look-ahead polynomials with superresolution

77 UBC March 200777 SPOT (Superresolution P-OpTimal) Look-forward based on look-ahead polynomials –value-ordering –variable-ordering Look-backward –superresolution many different learning schemes developed by SAT community (different cuts of the implication graph) SPOT defines a family of solvers that rely on look-ahead polynomials and (optimized) superresolvents.

78 UBC March 2007 Our approach to Solving H in MAX-CSP(G,f) Given an assignment N which satisfies the fraction f. Is there an assignment that satisfies more than f? YES (we are done) if la_{H,N}(mb) > f; MAYBE otherwise, and the closer la_{H,N}(mb) comes to f, the better. Is it worthwhile to set a certain literal k to 1 so that we can reach an assignment which satisfies more than f? YES (we are done) if, for H1 = UP*(H_{k=1}, N), la_{H1,N}(mb1) > f; MAYBE otherwise, the closer la_{H1,N}(mb1) comes to f, the better; NO, by UP or clause learning. UP*(F,M): apply UP as often as possible after applying the assignment M to F. The problem: MAYBE happens frequently, especially when f is close to 1.

79 UBC March 2007 Value Ordering Given H and the currently best assignment N. H1 = UP*(H_{x=1}, N), H0 = UP*(H_{x=0}, N). Choose x = 1 if la_{H1,N}(mb1) ≥ la_{H0,N}(mb0). UP*(F,M): apply UP as often as possible after applying the assignment M to F.
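A simplified sketch of this rule (unit propagation UP* is omitted, the max-bias values mb1 and mb0 are approximated by a grid search, and the tiny 1in3 instance and all names are illustrative): estimate the look-ahead value with x pinned to 1 and with x pinned to 0, then branch on the larger.

```java
import java.util.Random;

// Value-ordering sketch: for a chosen variable, compare a sampled look-ahead value with
// the variable pinned to 1 against the value with it pinned to 0, each maximized over a
// grid of biases, and branch on the larger. UP* is omitted; N is the all-false assignment.
public class ValueOrdering {
    static final int[][] CONSTRAINTS = { {0, 1, 2}, {0, 1, 3}, {0, 2, 3}, {1, 2, 3} };
    static final int RELATION = 22;               // 1in3

    static boolean satisfied(int relation, int[] vars, boolean[] a) {
        int row = 0;
        for (int v : vars) row = (row << 1) | (a[v] ? 1 : 0);
        return ((relation >> row) & 1) == 1;
    }

    // Expected satisfied fraction when every variable except `pinnedVar` is set to true
    // with probability p (equivalently: flipped from the all-false N with probability p).
    static double lookAhead(int pinnedVar, boolean pinnedValue, double p,
                            int samples, Random rnd) {
        double sum = 0;
        for (int s = 0; s < samples; s++) {
            boolean[] a = new boolean[4];
            for (int i = 0; i < a.length; i++) a[i] = rnd.nextDouble() < p;
            a[pinnedVar] = pinnedValue;
            int sat = 0;
            for (int[] c : CONSTRAINTS) if (satisfied(RELATION, c, a)) sat++;
            sum += (double) sat / CONSTRAINTS.length;
        }
        return sum / samples;
    }

    static double bestOverBias(int pinnedVar, boolean pinnedValue, Random rnd) {
        double best = 0;
        for (int step = 0; step <= 20; step++) {   // biases 0.00, 0.05, ..., 1.00
            best = Math.max(best, lookAhead(pinnedVar, pinnedValue, step / 20.0, 20_000, rnd));
        }
        return best;
    }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        double la1 = bestOverBias(0, true, rnd);   // look ahead with variable 0 set to 1
        double la0 = bestOverBias(0, false, rnd);  // look ahead with variable 0 set to 0
        System.out.printf("la(x0=1)=%.3f  la(x0=0)=%.3f  -> set x0=%d%n",
                la1, la0, la1 >= la0 ? 1 : 0);
    }
}
```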

80 UBC March 2007 Two ways to look forward using look-ahead polynomials Reduction H_{k=d} (d = 0,1; k a literal) and n-map(H,k). Connection: abs((n-map(H,k))_{k=d}) = abs(H_{k=!d}). The abstract representation can achieve its maximum either by repeated reductions or by repeated n-maps.

81 UBC March 2007 The SPOT space How to use the look-ahead polynomials: choose the top k (number of true variables); choose among the top 5 (4 is the winner). [Ranking diagram: 1 2 4 3 5.]

82 UBC March 200782 SPOT-Conjecture There is a member U of the SPOT family of solvers: –U finds a maximum assignment “quickly”. –But U spends a long time proving that it is the maximum assignment. Stopping rule problem.

83 UBC March 2007 The bold SPOT-Conjecture There is a member U of the SPOT family of solvers such that U finds the maximum assignment after at most |F|^c superresolution steps, where c is a constant, but any superresolution proof of maximality is probably superpolynomial.

84 UBC March 2007 SPOT-Conjecture [Plot: percentage satisfied (from a random assignment N, past t_G, up to the maximum) versus number of tries (proof steps). Phases: only one helper: the look-ahead polynomial; two helpers: 1. look-ahead polynomial, 2. superresolvents; only one helper: superresolvents (look-ahead polynomials become totally useless!?); stopping rule problem!]

85 UBC March 2007 SPOT-Conjecture (symmetric instance) [Same plot with la_{F,N}(mb) in place of t_G: percentage satisfied versus number of tries (proof steps); phases: only one helper: the look-ahead polynomial; two helpers: look-ahead polynomial and superresolvents; only one helper: superresolvents (look-ahead polynomials become totally useless!?); stopping rule problem!]

86 UBC March 2007 Are look-ahead polynomials useful? [Plot: percentage satisfied versus number of tries for some fast MAX-CSP solver MC reaching an assignment N1, compared with la_{F,N1}(mb).] How often does this happen in practice: MC has to search using clause learning, while the look-ahead polynomial can construct a better assignment without search? Intuition: the better the assignment N1, the less likely it is that the look-ahead polynomial improves on N1.

87 UBC March 200787 There is hope that the look-ahead polynomials are useful

88 UBC March 200788 What is new? New: Superresolution for MAX-CSP New: Integration of look-ahead polynomials with superresolution Old: Superresolution for SAT (1977) Old: Look-ahead polynomials (1983)

89 UBC March 2007 Additional Information There is a rich literature on clause learning in the SAT and CSP solver domain. Superresolution is the most general form of clause learning with restarts. Papers on look-ahead polynomials and superresolution: http://www.ccs.neu.edu/research/demeter/papers/publications.html

90 UBC March 2007 Additional Information Useful unpublished paper on look-ahead polynomials: http://www.ccs.neu.edu/research/demeter/biblio/partial-sat-II.html Technical report on the topic of this talk: http://www.ccs.neu.edu/research/demeter/biblio/POptMAXCSP.html

91 UBC March 200791 Future work Exploring best combination of look-forward and look-back techniques. Find all maximum-assignments or estimate their number. Robustness of maximum assignments. Are our MAX-CSP solvers useful for reasoning about biological pathways?

92 UBC March 200792 Conclusions Presented SPOT, a family of MAX-CSP solvers based on look-ahead polynomials and non-chronological backtracking. SPOT has a desirable property: P-optimal. SPOT can be implemented very efficiently. Preliminary experimental results are encouraging. A lot more work is needed to assess the practical value of the look- ahead polynomials.

93 UBC March 200793 end for now

94 UBC March 200794 appmean is an approximation of the true mean

95 UBC March 200795

96 UBC March 200796 The Evergreen Project: How To Learn From Mistakes Caused by Blurry Vision in MAX-CSP Solving Karl J. Lieberherr Northeastern University Boston joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart

97 UBC March 200797 MAX-CSP: Superresolution and P-Optimality Karl J. Lieberherr Northeastern University Boston joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart

98 UBC March 200798 Binomial Distribution

99 UBC March 200799

100 UBC March 2007 Example x1 + x2 + x3 = 1, x1 + x2 + x4 = 1, x1 + x3 + x4 = 1, x1 + x2 + x5 = 1, x1 + x3 + x5 = 1, x2 + x3 + x5 = 1 (can satisfy 6/7).

101 UBC March 2007 maximize 3x(1-x)^2

102 UBC March 2007102 Transition Rules Unit-Propagation (UP): M || F || SR || N → Mk || F || SR || N if k is undefined in M, and unsat (SR,M¬k) > 0 or unsat(F,M¬k) ≥ unsat(F,N).

103 UBC March 2007 Transition Rules Decide (D): M || F || SR || N → M k^d || F || SR || N if k is undefined in M, and v(k) occurs in some constraint of F.

104 UBC March 2007104 Transition Rules Update: M || F || SR || N → M || F || SR || M if M is complete, and unsat(F,M) < unsat(F,N).

105 UBC March 2007105 Transition Rules Restart: M || F || SR || N → { } || F || SR || N

106 UBC March 2007 Transition Rules Finale: M || F || SR || N → M || F || SR || N if Φ ∈ SR or unsat(F,N) = 0.

107 UBC March 2007 Transition Rules Semi-Superresolution (SSR): NewSR = ∨ (¬k) over all k ∈ M^d (the decision literals of M). M || F || SR || N → M || F || SR, NewSR || N if unsat(SR,M) > 0 or unsat(F,M) ≥ unsat(F,N).

108 UBC March 2007108 Transition Manager

109 UBC March 2007109 Transition Rules

110 UBC March 2007110 Transition Rules (cont.)

111 UBC March 2007111 Where we are Introduction Look-forward Look-back Packed Truth Tables SPOT: how to use the look-ahead polynomials

112 UBC March 2007112 Requirements for Packed Truth Tables The look-ahead polynomial can be computed efficiently. Requires efficient truth table analysis. Reduction of an instance must be efficient. Efficiently compute the forced variables. Each relation has a unique representation.

113 UBC March 2007113 Packed Truth Tables 22 254

114 UBC March 2007 RelationI: implemented by bitwise operations
int isForced(int variablePosition)
boolean isIrrelevant(int variablePosition)
int nMap(int variablePosition)
int numberOfRelevantVariables()
int q(int s)
int reduce(int variablePosition, int value)
int rename(int permutationSemantics, int... permutation)
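As an illustration of "implemented by bitwise operations", here is one possible implementation of reduce(variablePosition, value) for a rank-3 relation packed into 8 bits (a sketch, not the authors' code): keep the four truth-table rows where the chosen variable has the chosen value and repack them as a rank-2 relation number.

```java
// Illustrative reduce() on a packed truth table: fixing the variable at `variablePosition`
// (0, 1 or 2, as in the N-mapping table) to `value` selects the four matching rows of a
// rank-3 relation and repacks them as a rank-2 relation number.
public class PackedTruthTable {
    static int reduce(int relation, int variablePosition, int value) {
        int reduced = 0, newRow = 0;
        for (int row = 0; row < 8; row++) {
            if (((row >> variablePosition) & 1) != value) continue;  // wrong value: skip row
            if (((relation >> row) & 1) == 1) {
                reduced |= 1 << newRow;            // keep this row in the smaller table
            }
            newRow++;
        }
        return reduced;
    }

    public static void main(String[] args) {
        // 1in3 (relation 22) with one variable set to 1 becomes "both others false";
        // with that variable set to 0 it becomes 1in2 (exactly one of the others true).
        System.out.println(reduce(22, 2, 1));      // prints 1 (only row 00 satisfies)
        System.out.println(reduce(22, 2, 0));      // prints 6 (rows 01 and 10 satisfy)
    }
}
```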

115 UBC March 2007115 Different ways of constructing implication graph (SAT) Lieberherr 1977: –edge from l1 to l2 is labeled by the set of already forced literals L so that l1 union L forces l2 because of a clause C. Beame 2004 (now the standard, due to Marques-Silva & Sakallah, 1996) –edge from l1 to l2 is labeled by clause C. l1 is responsible for forcing l2 because of clause C.

116 UBC March 2007116 The Evergreen Project: Assessing the Guidance of Look-Ahead Polynomials in MAX-CSP Solving Karl J. Lieberherr Northeastern University Boston joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart

117 UBC March 2007117 Where we are Introduction Look-forward Look-backward SPOT: how to use the look-ahead polynomials

118 UBC March 2007118 Look Backward Why? –to avoid past mistakes How? –Transition system based on superresolution. –Superresolution was first introduced for SAT, now we generalize it for MAX-CSP.

119 UBC March 2007119 Observation Optimally biased coin technique based on look-ahead polynomials is “best-possible”. If we could improve it by a trillionth in polynomial time, then P=NP. We improve it now by learning new constraints that will influence the polynomial.

120 UBC March 2007120 Clause Learning Let’s go beyond what an optimally biased coin guarantees! Goal: satisfy the maximum number of constraints. Approach: Superresolution. –When to apply: number of constraints guaranteed to be unsatisfied doesn’t decrease A mistake is made. –Who to blame: a subset of the decision literals They are the culprits. –How to penalize: add the disjunctions of their negations as a superresolvent The gang of culprits is watched.

121 UBC March 2007121 Transition Rules Unit-Propagation (UP): M || F || SR || N → Mk || F || SR || N if k is undefined in M, and unsat (SR,M¬k) > 0 or unsat(F,M¬k) ≥ unsat(F,N). old mistake(M¬k) new mistake(M¬k) mistake(M) = old mistake(M) or new mistake(M)

122 UBC March 2007122 Transition Rules Semi-Superresolution (SSR): NewSR = V (¬k), where k M d M || F || SRs || N → M || F || SRs, NewSR || N if unsat(SR,M) > 0 or unsat(F,M) ≥ unsat(F,N). old mistake(M) new mistake(M) mistake(M) = old mistake(M) or new mistake(M)

123 UBC March 2007123 Transition Rules Superresolution (SR): 1977 M || F || SRs || N → M || F || SRs, Common || N if there exists a literal k so that by SSR applied twice: –NewSR=Common, k –NewSR=Common, !k Notes: Note that Common is a resolvent. Superresolution is the mother of clause learning: other clause learning schemes learn clauses implied from superresolvents by UnitPropagation. Resolution and Superresolution are polynomially equivalent (1977, Beame et al. (2004)).

124 UBC March 2007124 Superresolution Mother of clause learning: minimal elements of learned clauses But from superresolution to making clause learning a suitable and efficient technique in SAT and CSP and MAX-CSP solvers there is a long way

125 UBC March 2007 Transition Rules Opt-Semi-Superresolution (OSSR): NewSR = ∨ (¬k) over all k ∈ M', a subset of M^d. M || F || SRs || N → M || F || SRs, NewSR || N if mistake(M) and not newM(F,M*) for all M*, where M* is M' with one literal deleted. oldM(M) = unsat(SR,M) > 0; newM(F,M) = unsat(UP*(F,M),M) ≥ unsat(F,N); mistake(M) = oldM(M) or newM(F,M). UP*(F,M): apply UP as often as possible after applying M to F. NewSR is minimal.

126 UBC March 2007126 Optimized Semi-Superresolution Not all decision literals may be responsible for the “mistake”. Want to find a minimal superresolvent so that deleting one literal would destroy the superresolvent property. Can be implemented by a traversal back the implication graph that is built as part of unit propagation.

127 UBC March 2007 Optimized Semi-Superresolution (Fast implementation) Can be implemented by a traversal back through the implication graph that is built as part of unit propagation. [Implication graph figure with nodes v, w, k1 ... k8, !k8.]

128 UBC March 2007 Algorithm plan Start with an arbitrary assignment N. while (proof incomplete) { try to improve N by creating a new assignment from scratch, using the optimally biased coin to flip the assignments; on success: update N; on failure: learn a new constraint that will prevent the same mistake and will "improve" the polynomial. }
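A structural sketch of that loop on a tiny MAX-CSP({22}) instance (the instance and all names are hypothetical; the constraint-learning branch, which in the real algorithm adds a superresolvent and reshapes the polynomial, is only marked by a comment):

```java
import java.util.Random;

// Outer loop of the plan above: repeatedly build a fresh assignment with the optimally
// biased coin (p = 1/3 for a pure 1in3 instance) and keep the best assignment found.
public class EvergreenLoopSketch {
    static final int[][] CONSTRAINTS = { {0, 1, 2}, {0, 1, 3}, {0, 2, 3}, {1, 2, 3} };

    static int satisfiedCount(boolean[] a) {
        int sat = 0;
        for (int[] c : CONSTRAINTS) {
            int trues = 0;
            for (int v : c) if (a[v]) trues++;
            if (trues == 1) sat++;                 // relation 22 = 1in3
        }
        return sat;
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        boolean[] best = new boolean[4];           // start with an arbitrary assignment N
        int bestSat = satisfiedCount(best);
        for (int round = 0; round < 1000 && bestSat < CONSTRAINTS.length; round++) {
            boolean[] candidate = new boolean[4];
            for (int i = 0; i < candidate.length; i++) {
                candidate[i] = rnd.nextDouble() < 1.0 / 3.0;   // optimally biased coin
            }
            int sat = satisfiedCount(candidate);
            if (sat > bestSat) {                   // success: update N
                best = candidate;
                bestSat = sat;
            } else {
                // failure: here the full algorithm would learn a (super)resolvent that
                // prevents repeating the same mistake and reshapes the polynomial.
            }
        }
        System.out.println("best assignment " + java.util.Arrays.toString(best)
                + " satisfies " + bestSat + "/" + CONSTRAINTS.length);
    }
}
```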

129 UBC March 2007129

130 UBC March 2007130 UP / D

131 UBC March 2007131 Properties of TS TS finds the maximum in an exponential number of steps. It creates a polynomially checkable proof that we indeed found the maximum.

