
1 The Evergreen Project: The Promise of Polynomials to Boost CSP/SAT Solvers. Karl J. Lieberherr, Northeastern University, Boston (UBC, March 2007). Joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart. Title inspired by a paper by Carla Gomes / David Shmoys.

2 Two objectives. I want you to become:
– better writers of MAX-SAT/MAX-CSP solvers: better decision making, crosscutting exploration of the search space;
– better players of the Evergreen game: the game reveals what the polynomials can do.
The Evergreen game is an iterated game whose base game is zero-sum: together the players choose the domain; Anna chooses an instance (minimizing the maximum possible loss); Bob solves the instance and Anna pays Bob the satisfaction fraction. It is a perfect-information game.

3 Introduction. Boolean MAX-CSP(G) for rank d, where G is a set of relations of rank d.
– Input: a CSP(G) instance, i.e. a bag of constraints; Constraint = Relation + set of variables; Relation = int (the relation number, < 2^(2^d), must be in G); Variable = int.
– Output: a (0,1) assignment to the variables which maximizes the number of satisfied constraints.
Example input: G = {22} of rank 3 (1in3 has number 22). H = 22:1 2 3 0, 22:1 2 4 0, 22:1 3 4 0. The assignment M = {1 !2 !3 !4} satisfies all constraints.
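A minimal sketch of this encoding (an assumption about a natural implementation, not the authors' SPOT code): a rank-d relation number is read as a truth table whose bit i says whether the value tuple with index i is accepted, with the first variable as the most significant bit, so 1-in-3 accepts 001, 010, 100 and gets number 2+4+16 = 22.

```python
# Sketch: evaluate relation numbers and count satisfied constraints for the
# slide's example.  The instance format (relation, variable tuple) is an
# illustrative assumption.

def satisfies(relation, values):
    """True iff `relation` (truth-table integer) accepts the 0/1 tuple `values`."""
    index = 0
    for v in values:                 # first variable = most significant bit
        index = (index << 1) | v
    return (relation >> index) & 1 == 1

def satisfied_fraction(instance, assignment):
    """Fraction of constraints satisfied by `assignment` (dict variable -> 0/1)."""
    hits = sum(satisfies(rel, tuple(assignment[v] for v in vars_))
               for rel, vars_ in instance)
    return hits / len(instance)

H = [(22, (1, 2, 3)), (22, (1, 2, 4)), (22, (1, 3, 4))]   # the slide's instance
M = {1: 1, 2: 0, 3: 0, 4: 0}                              # M = {1 !2 !3 !4}
print(satisfied_fraction(H, M))                           # -> 1.0: M satisfies all
```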

4 Variation MAX-CSP(G,f): given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H. Example: G = {22} of rank 3 (1in3 has number 22). H = 22:1 2 3 0, 22:1 2 4 0, 22:1 3 4 0, 22:2 3 4 0. For which highest value of f is H in MAX-CSP({22}, f)?
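A brute-force check of the question the slide leaves open, under the same assumed encoding as the previous sketch (the helper is repeated so the snippet stands alone):

```python
# Sketch: exhaustively answer "what is the largest f such that some assignment
# satisfies the fraction f of the four 1-in-3 constraints over variables 1..4?"
from itertools import product

def satisfies(relation, values):
    index = 0
    for v in values:
        index = (index << 1) | v
    return (relation >> index) & 1 == 1

H = [(22, (1, 2, 3)), (22, (1, 2, 4)), (22, (1, 3, 4)), (22, (2, 3, 4))]
best = 0.0
for bits in product((0, 1), repeat=4):
    a = dict(zip((1, 2, 3, 4), bits))
    frac = sum(satisfies(r, tuple(a[v] for v in vs)) for r, vs in H) / len(H)
    best = max(best, frac)
print(best)   # -> 0.75: with exactly one variable true, 3 of the 4 constraints hold
```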

5 Evergreen(3,2) game. Anna and Bob agree on a protocol P1 to choose a set G of 2 relations of rank 3.
– Anna chooses a CSP(G)-instance H (limited).
– Bob solves H and gets paid by Anna the fraction of H that Bob satisfies. This gives nice control to Anna: she will choose an instance that minimizes Bob's profit.
– Take turns.
[Diagram: R1, Anna; R2, Bob; instance mixes ranging from 100% R1 / 0% R2 to 0% R1 / 100% R2.]

6 Protocol choice. Randomly choose R1 and R2 (independently) between 1 and 255 (throw two dice to choose the relations).

7 Tell me: how would you react as Anna?
– The relations 22 and 22 have been chosen.
– You must create a CSP({22}) instance with 1000 variables in which only the smallest possible fraction can be satisfied.
– What kind of instance will this be?
What kind of algorithm should Bob use to maximize his payoff? Should any MAX-CSP solver be able to maximize Bob's profit? How well do MAX-SAT solvers (e.g., yices, ubcsat) or MAX-CSP solvers do on symmetric instances???

8 Game strategy in a nutshell. Choose G = {R1,R2} randomly. Anna chooses an instance so that the payoff is minimized. Bob finds a solution so that the payoff is maximized (solve MAX-CSP(G)). Take turns: choose G = …, Bob chooses … This requires a thorough understanding of the MAX-CSP(G) problem domain and an excellent MAX-CSP(G) solver.

9 Our approach by example: SAT. Rank 2 example, instance H: 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0, where 14:1 2 = or(1 2) and 7:1 3 = or(!1 !3). Evergreen game: maximize the payoff; find a maximum assignment.

10 Blurry vision. appmean = approximation of the mean (k variables true). What do we learn from the abstract representation abs_H? Set 1/3 of the variables to true (to maximize); the best assignment will satisfy at least 7/9 of the constraints. Very useful, but the vision is blurry in the "middle"; excellent peripheral vision. [Plot: abs_H over k = 0..6 variables set to true, with values 7/9 and 8/9 marked.]
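A sketch of how this abstract representation can be evaluated for the slide-9 instance, under the same assumed encoding as before; the bias-p form and the grid search are illustrative stand-ins, not the authors' implementation:

```python
# Sketch: look-ahead polynomial of an instance at bias p = expected fraction
# of satisfied constraints when every variable is set to 1 independently with
# probability p.

def app_sat(relation, rank, p):
    """Probability that a rank-`rank` constraint with truth table `relation`
    holds when each of its variables is 1 with probability p."""
    total = 0.0
    for row in range(2 ** rank):
        if (relation >> row) & 1:
            ones = bin(row).count("1")
            total += p ** ones * (1 - p) ** (rank - ones)
    return total

def lookahead(instance, rank, p):
    return sum(app_sat(r, rank, p) for r, _ in instance) / len(instance)

# Rank-2 instance of slide 9: three or-constraints (14), six nand-constraints (7).
H = [(14, (1, 2)), (14, (3, 4)), (14, (5, 6)),
     (7, (1, 3)), (7, (1, 5)), (7, (3, 5)),
     (7, (2, 4)), (7, (2, 6)), (7, (4, 6))]

best_p = max((k / 1000 for k in range(1001)), key=lambda p: lookahead(H, 2, p))
print(best_p, lookahead(H, 2, best_p))   # ~0.333 and ~0.7778 = 7/9
```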

11 Our approach by example. Given a CSP(G)-instance H and an assignment N which satisfies the fraction f in H (abs_H = abstract representation of H, mb = maximum bias):
– Is there an assignment that satisfies more than f? YES (we are done) if abs_H(mb) > f; MAYBE otherwise: the closer abs_H(mb) comes to f, the better.
– Is it worthwhile to set a certain literal k to 1 so that we can reach an assignment which satisfies more than f? YES (we are done) if, for H1 = H with k=1, abs_H1(mb1) > f; MAYBE otherwise: the closer abs_H1(mb1) comes to f, the better; NO: unit propagation (UP) or clause learning.

12 H: 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0. H0 (variable 1 set to 0): 14:2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0. [Plot: abstract representations of H (over k = 0..6) and H0 (over k = 0..5); fractions of H0 map back to H as 6/7 = 8/9, 5/7 = 7/9, 3/7 = 5/9; values 8/9, 7/9, 3/9 marked for H. The maximum assignment lies away from the maximum bias: blurry.]

13 H: 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0. H1 (variable 1 set to 1): 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:3 0, 7:5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0. [Plot: abstract representations of H (k = 0..6) and H1 (k = 0..5); fractions of H1 map back to H as 7/8 = 8/9 and 6/8 = 7/9; values 8/9, 7/9, 3/8, 2/7 marked. The maximum assignment lies away from the maximum bias: blurry; 7/9 is clearly above 3/4.]

14 H: 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0. [Plot: H, H0 and H1 compared; marks 8/9 and 7/9 for H, 6/7 = 8/9 and 5/7 = 7/9 for H0, 7/8 = 8/9 and 6/8 = 7/9 for H1. The abstract representation guarantees 7/9 for H, 7/9 for H0, and 8/9 for H1.] NEVER GOES DOWN: DERANDOMIZATION. (Compare UBCSAT.)
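A minimal sketch of the never-goes-down derandomization that slides 12-14 illustrate: fix variables one by one, each time keeping the value whose conditional look-ahead guarantee is at least as large. The constraint representation, the grid search over the bias, and the variable order are assumptions for illustration, not the SPOT implementation.

```python
# Sketch: greedy "never goes down" derandomization guided by the look-ahead
# guarantee.  Instance format: list of (relation_number, variable_tuple), as
# in the earlier sketches; the grid search over the bias p is an assumption.

def cond_prob(relation, rank, vars_, fixed, p):
    """P[constraint satisfied | fixed values]; free variables are 1 with prob p."""
    total = 0.0
    for row in range(2 ** rank):
        vals = [(row >> (rank - 1 - i)) & 1 for i in range(rank)]
        if any(v in fixed and fixed[v] != vals[i] for i, v in enumerate(vars_)):
            continue                              # row contradicts the fixed part
        if not (relation >> row) & 1:
            continue                              # row not accepted by the relation
        weight = 1.0
        for i, v in enumerate(vars_):
            if v not in fixed:
                weight *= p if vals[i] else (1 - p)
        total += weight
    return total

def guarantee(instance, rank, fixed):
    """Best look-ahead value over a grid of biases, given the fixed variables."""
    grid = [k / 200 for k in range(201)]
    return max(sum(cond_prob(r, rank, vs, fixed, p) for r, vs in instance) / len(instance)
               for p in grid)

def derandomize(instance, rank, variables):
    """Fix each variable to the value whose guarantee is larger; never goes down."""
    fixed = {}
    for v in variables:
        fixed[v] = max((0, 1), key=lambda b: guarantee(instance, rank, {**fixed, v: b}))
    return fixed

# With the slide-9 instance H (see the earlier sketch) and rank 2:
#   derandomize(H, 2, [1, 2, 3, 4, 5, 6])
# yields an assignment whose satisfied fraction is at least (approximately)
# the initial guarantee of 7/9, because the guarantee never decreases.
```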

15 The effect of n-map. Evergreen game: G = {10,7}. How do you choose a CSP(G)-instance to minimize the payoff? Instance (rank 2): 10:1 0, 10:2 0, 10:3 0, 7:1 2 0, 7:1 3 0, 7:2 3 0, where 10:1 = or(1) and 7:1 2 = or(!1 !2). After an n-map of variable 1 (rank 2): 5:1 0, 10:2 0, 10:3 0, 13:1 2 0, 13:1 3 0, 7:2 3 0, where 5:1 = or(!1) and 13:1 2 = or(1 !2). The abstract representation guarantees 0.625 * 6 = 3.75, hence 4 constraints satisfied. [Plot: fractions 4/6 and 3/6 over k = 0..3.] 0.618 …

16 First impression. The abstract representation (the look-ahead polynomials) seems useful for guiding the search. The look-ahead polynomials give us averages: the guidance can be misleading because of outliers. But how can we compute the look-ahead polynomials? And how do the polynomials help play the Evergreen(3,2) game?

17 Where we are: Introduction; Look-forward; Look-backward; SPOT: how to use the look-ahead polynomials together with superresolution.

18 Look forward. Why? To make informed decisions; to play the Evergreen game. How? The abstract representation based on look-ahead polynomials.

19 Look-ahead polynomial (intuition). The look-ahead polynomial computes the expected fraction of satisfied constraints among all random assignments that are produced with bias p.

20 Consider an instance: 40 variables, 1000 constraints (1in3) over variables 1, …, 40, e.g. 22:6 7 9 0, 22:12 27 38 0. Abstract representation: reduce the instance to the look-ahead polynomial 3p(1-p)^2 = B_{1,3}(p) (a Bernstein polynomial).
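A quick numeric check of this polynomial (a sketch; the closed form follows from elementary calculus):

```python
# Sketch: the look-ahead polynomial of a pure 1-in-3 instance is 3p(1-p)^2,
# the Bernstein basis polynomial B_{1,3}.  Its maximum on [0,1] is 4/9 at
# p = 1/3 -- the "magic number" of the later slides.

def b13(p):
    return 3 * p * (1 - p) ** 2

best_p = max((k / 10000 for k in range(10001)), key=b13)
print(best_p, b13(best_p))   # ~0.3333, ~0.4444 = 4/9
# Calculus check: d/dp [3p(1-p)^2] = 3(1-p)(1-3p) = 0  =>  p = 1/3, value 4/9.
```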

21 [Plot: 3p(1-p)^2 for MAX-CSP({22}).]

22 Look-ahead polynomial (definition). H is a CSP(G) instance and N is an arbitrary assignment. The look-ahead polynomial la_{H,N}(p) computes the expected fraction of satisfied constraints of H when each variable in N is flipped with probability p.
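A sampling-based illustration of this definition (an assumption-laden sketch; the slides compute la_{H,N} exactly as a polynomial, not by sampling):

```python
# Sketch: Monte Carlo estimate of la_{H,N}(p) -- flip each variable of N
# independently with probability p and average the satisfied fraction.
import random

def satisfies(relation, values):
    index = 0
    for v in values:
        index = (index << 1) | v
    return (relation >> index) & 1 == 1

def la_estimate(instance, N, p, samples=20000):
    total = 0.0
    for _ in range(samples):
        flipped = {v: (1 - b if random.random() < p else b) for v, b in N.items()}
        total += sum(satisfies(r, tuple(flipped[v] for v in vs))
                     for r, vs in instance) / len(instance)
    return total / samples

# Example (hypothetical): with the slide-9 instance H and N = all zeros,
# la_estimate(H, {v: 0 for v in range(1, 7)}, 1/3) should be close to 7/9.
```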

23 The general case MAX-CSP(G). G = {R1, …}; t_R(F) = fraction of constraints in F that use R; with x = p, the look-ahead polynomial is the sum over all R of t_R(F) * appSAT_R(x). The appSAT_R(x), over all R, are a superset of the Bernstein polynomials known from computer graphics (each appSAT_R is a weighted sum of Bernstein polynomials).
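A sketch of that decomposition: if a_j of the accepted rows of a rank-d relation R have exactly j ones, then appSAT_R(x) = sum_j (a_j / C(d,j)) * B_{j,d}(x), where B_{j,d}(x) = C(d,j) x^j (1-x)^(d-j). The helper below (an illustrative assumption, not the authors' code) extracts those Bernstein weights.

```python
# Sketch: express appSAT_R as a weighted sum of Bernstein basis polynomials.
from math import comb

def bernstein_weights(relation, rank):
    """Weights w_j with appSAT_R(x) = sum_j w_j * B_{j,rank}(x)."""
    a = [0] * (rank + 1)
    for row in range(2 ** rank):
        if (relation >> row) & 1:
            a[bin(row).count("1")] += 1      # accepted rows with j ones
    return [a[j] / comb(rank, j) for j in range(rank + 1)]

print(bernstein_weights(22, 3))   # [0.0, 1.0, 0.0, 0.0] -> appSAT_22 = B_{1,3} = 3x(1-x)^2
```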

24 Rational Bezier curves. [Figure.]

25 Bernstein polynomials: http://graphics.idav.ucdavis.edu/education/CAGDNotes/Bernstein-Polynomials.pdf

26 [Plot: all the appSAT_R(x) polynomials.]

27 Look-ahead polynomial in action. Focus on a purely mathematical question first; the algorithmic solution will follow. Mathematical question: given a CSP(G) instance, for which fractions f is there always an assignment satisfying the fraction f of the constraints? In which constraint systems is it impossible to satisfy many constraints?

28 Remember? MAX-CSP(G,f): given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H. Example: G = {22} of rank 3 (1in3). MAX-CSP({22},f): 22:1 2 3 0, 22:1 2 4 0, 22:1 3 4 0, 22:2 3 4 0.

29 Mathematical critical transition point. MAX-CSP({22},f): for f ≤ u, the problem always has a solution; for f ≥ u + ε (any ε > 0), the problem does not always have a solution. u is the critical transition point: always (fluid) versus not always (solid).

30 The magic number: u = 4/9.

31 [Plot: 3p(1-p)^2 for MAX-CSP({22}).]

32 Produce the magic number: use an optimally biased coin (bias 1/3 in this case). In general: a min-max problem.

33 The 22 reductions: needed for implementation. [Diagram: relation 22 and the relations obtained by the reductions (position, value) = (1,0), (1,1), (2,0), (2,1), (3,0), (3,1).] 22 is expanded into 6 additional relations.
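A sketch of one natural way to implement these reductions (an assumption; the exact relation numbers drawn in the original diagram are not reproduced here): fixing one argument position of a rank-d relation to a constant induces a rank-(d-1) relation number.

```python
# Sketch: reduce a rank-d relation by fixing one 1-based position to 0 or 1.

def reduce_relation(relation, rank, pos, value):
    """Rank-(rank-1) relation obtained by fixing position `pos` to `value`."""
    new_rel = 0
    for new_row in range(2 ** (rank - 1)):
        bits = [(new_row >> (rank - 2 - i)) & 1 for i in range(rank - 1)]
        bits.insert(pos - 1, value)                   # re-insert the fixed value
        old_row = 0
        for b in bits:
            old_row = (old_row << 1) | b
        if (relation >> old_row) & 1:
            new_rel |= 1 << new_row
    return new_rel

# The six reductions of relation 22 (1-in-3):
print([reduce_relation(22, 3, pos, v) for pos in (1, 2, 3) for v in (0, 1)])
# -> [6, 1, 6, 1, 6, 1]: fixing any variable to 0 leaves "exactly one of the
#    remaining two" (relation 6); fixing it to 1 forces both others to 0 (relation 1).
```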

34 The 22 N-mappings: needed for implementation. [Diagram: relation 22 and its N-mappings 41, 73, 97, 104, 134, 146, 148.] 22 is expanded into 7 additional relations.
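A sketch of the N-mapping operation (flipping a subset of argument positions and renumbering the truth table); with the encoding assumed in the earlier sketches it reproduces the seven relation numbers listed on the slide.

```python
# Sketch: N-mappings of a relation = negate some argument positions.
from itertools import combinations

def nmap(relation, rank, flip_positions):
    """Relation obtained by negating the variables at the given 1-based positions."""
    new_rel = 0
    for row in range(2 ** rank):
        new_row = row
        for pos in flip_positions:
            new_row ^= 1 << (rank - pos)      # flip the bit of that position
        if (relation >> row) & 1:
            new_rel |= 1 << new_row
    return new_rel

maps = sorted({nmap(22, 3, c) for r in (1, 2, 3) for c in combinations((1, 2, 3), r)})
print(maps)   # -> [41, 73, 97, 104, 134, 146, 148]
```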

35 General dichotomy theorem. MAX-CSP(G,f): for each finite set G of relations closed under renaming there exists an algebraic number t_G such that: for f ≤ t_G, MAX-CSP(G,f) has a polynomial-time solution; for f ≥ t_G + ε (any ε > 0), MAX-CSP(G,f) is NP-complete. t_G is the critical transition point: easy (fluid, polynomial) versus hard (solid, NP-complete). Due to Lieberherr/Specker (1979, 1982). Polynomial solution: use an optimally biased coin, derandomize; P-optimal. Implications for the Evergreen game? Are you a better player?

36 Context. Ladner [Lad 75]: if P != NP, then there are decision problems in NP that are neither NP-complete nor in P. It is conceivable that MAX-CSP(G,f) contains problems of such intermediate complexity.

37 General dichotomy theorem (discussion). MAX-CSP(G,f): for each finite set G of relations closed under renaming there exists an algebraic number t_G such that: for f ≤ t_G, MAX-CSP(G,f) has a polynomial-time solution; for f ≥ t_G + ε (any ε > 0), MAX-CSP(G,f) is NP-complete. t_G is the critical transition point. Easy side (fluid, polynomial): finding an assignment, constant-size proofs (done statically using look-ahead polynomials), no clause learning. Hard side (solid, NP-complete): exponential, super-polynomial proofs(?), relies on clause learning.

38 Min-max problem. t_G = min over all CSP(G) instances H of max over all (0,1) assignments M of sat(H,M), where sat(H,M) = fraction of constraints of the CSP(G)-instance H satisfied by the assignment M.

39 Problem reductions are the key: a solution to the simpler problem implies a solution to the original problem.

40 Min-max problem (reduced). t_G = lim as n goes to infinity of: min over all SYMMETRIC CSP(G)-instances H with n variables of max over all (0,1) assignments M to the n variables of sat(H,M,n), where sat(H,M,n) = fraction of constraints of the CSP(G)-instance H satisfied by the assignment M on n variables.

41 Reduction achieved. Instead of minimizing over all constraint systems, it is sufficient to minimize over the symmetric constraint systems.

42 Reduction: the symmetric case is the worst case. If in a symmetric constraint system the fraction f of the constraints can be satisfied, then in any constraint system the fraction f can be satisfied.

43 Symmetric is the worst case: take an instance on n variables together with all n! permutations of it (the big symmetric system). If in the big system the fraction f is satisfied, then there must be at least one small system in which the fraction f is satisfied.

44 Min-max problem (reduced further). t_G = lim as n goes to infinity of: min over all SYMMETRIC CSP(G)-instances H with n variables of max over all (0,1) assignments M to the n variables in which the first k variables are set to 1 (for some k) of sat(H,M,n), where sat(H,M,n) = fraction of constraints of H satisfied by M.

45 Observations. The look-ahead-polynomial look-forward approach has not been used in state-of-the-art MAX-SAT and Boolean MAX-CSP solvers. Often a fair coin is used; the optimally biased coin is often significantly better.


47 The game Evergreen(r,m) for Boolean MAX-CSP(G), r > 1, m > 0. Two players; they agree on a protocol P1 to choose a set of m relations of rank r.
1. The players use P1 to choose a set G of m relations of rank r.
2. Anna constructs a CSP(G) instance H with 1000 variables and at most 2*m*(1000 choose r) constraints and gives it to Bob (1 second limit).
3. Bob gets paid the fraction of constraints he can satisfy in H (100 seconds limit).
4. Take turns (go to 1).

48 For Evergreen(3,2): [Diagram: instance mixes ranging from 100% R1 / 0% R2 to 0% R1 / 100% R2.]

49 Evergreen(3,2) protocol possibilities, variant 1: Bob chooses both relations G; Anna chooses the CSP(G) instance H; Bob solves H and gets paid by Anna. This gives too much control to Bob: he can choose two odd relations, which guarantees him a payoff of 1 independently of how Anna chooses the instance H.

50 Evergreen(3,2) protocol possibilities, variant 2: Anna chooses a relation R1 (e.g. 22); Bob chooses a relation R2; Anna chooses the CSP(G) instance H; Bob solves H and gets paid by Anna. [Diagram: R1, Anna; R2, Bob; instance mixes from 100% R1 / 0% R2 to 0% R1 / 100% R2.]

51 Problem with variant 2: Anna can just ignore relation R2. This gives Anna too much control, because the payoff for Bob depends only on R1, chosen by Anna (and on the quality of the solver that Bob uses).

52 Protocol choice, variant 3: randomly choose R1 and R2 (independently) between 1 and 255 (throw two dice).

53 Tell me: how would you react as Anna?
– The relations 22 and 22 have been chosen.
– You must create a CSP({22}) instance with 1000 variables in which only the smallest possible fraction can be satisfied.
– What kind of instance will this be? A symmetric instance with (1000 choose 3) constraints; only 4/9 can be satisfied.
What kind of algorithm should Bob use to maximize his payoff? Compute the optimal k and use the best MAX-CSP solver.

54 For Evergreen(3,2): [Diagram: instance mixes from 100% R1 / 0% R2 to 0% R1 / 100% R2.] This tells us how to mix the two relations.

55 Role of t_G in the Evergreen(3,2) game. Player 1 (Anna), instance construction: choose a CSP(G) instance so that only the fraction t_G can be satisfied: a symmetric formula. Player 2 (Bob): choose an algorithm so that at least the fraction t_G is satisfied. (Player 2 gets paid t_G by player 1.)

56 Game strategy in a nutshell. Anna: the best strategy is to choose a t_G instance. Bob: gets paid t_G, etc.

57 Additional information. There is a rich literature on clause learning in the SAT and CSP solver domain. Superresolution is the most general form of clause learning with restarts. Papers on look-ahead polynomials and superresolution: http://www.ccs.neu.edu/research/demeter/papers/publications.html

58 Additional information. Useful unpublished paper on look-ahead polynomials: http://www.ccs.neu.edu/research/demeter/biblio/partial-sat-II.html Technical report on the topic of this talk: http://www.ccs.neu.edu/research/demeter/biblio/POptMAXCSP.html

59 Future work. Exploring the best combination of look-forward and look-back techniques. Finding all maximum assignments, or estimating their number. Robustness of maximum assignments. Are our MAX-CSP solvers useful for reasoning about biological pathways?

60 Conclusions. Presented SPOT, a family of MAX-CSP solvers based on look-ahead polynomials and non-chronological backtracking. SPOT has a desirable property: it is P-optimal. SPOT can be implemented very efficiently. Preliminary experimental results are encouraging. A lot more work is needed to assess the practical value of the look-ahead polynomials.

61 Polynomials for rank 3 (coefficients of x^3, x^2, x^1, x^0 for each relation):
relation 1: -1 3 -3 1
relation 2: 1 -2 1 0
relation 3: 0 1 -2 1
relation 4: 1 -2 1 0
relation 5: 0 1 -2 1
For 2: x*(1-x)^2 = x^3 - 2x^2 + x, maximum at x = 1/3; (1/3)*(2/3)^2 = 4/27. Check: 2 and 4 are the same.

62 Polynomials for rank 3 (coefficients of x^3, x^2, x^1, x^0 for each relation):
relation 1: -1 3 -3 1
relation 2: 1 -2 1 0
relation 3: 0 1 -2 1
relation 4: 1 -2 1 0 (same as 2)
relation 5: 0 1 -2 1
For 4: x*(1-x)^2 = x^3 - 2x^2 + x, maximum at x = 1/3; (1/3)*(2/3)^2 = 4/27.
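The coefficient table can be reproduced mechanically from the truth tables; a sketch under the encoding assumed earlier (not the authors' code):

```python
# Sketch: monomial coefficients (x^3, x^2, x, 1) of the look-ahead polynomial
# appSAT_R for rank-3 relations, computed from the relation number.
from math import comb

def poly_coeffs(relation, rank):
    """Coefficients of appSAT_R(x), highest degree first."""
    coeffs = [0] * (rank + 1)                      # index = exponent of x
    for row in range(2 ** rank):
        if not (relation >> row) & 1:
            continue
        ones = bin(row).count("1")
        zeros = rank - ones
        # x^ones * (1-x)^zeros = sum_k C(zeros,k) (-1)^k x^(ones+k)
        for k in range(zeros + 1):
            coeffs[ones + k] += comb(zeros, k) * (-1) ** k
    return list(reversed(coeffs))

for rel in (1, 2, 3, 4, 5):
    print(rel, poly_coeffs(rel, 3))
# 1 [-1, 3, -3, 1]   2 [1, -2, 1, 0]   3 [0, 1, -2, 1]
# 4 [1, -2, 1, 0]    5 [0, 1, -2, 1]
```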

63 Recall: (f*g)' = f'*g + f*g' and (f^2)' = 2*f*f'. For relation 2: d/dx [x*(1-x)^2] = (1-x)^2 + x*2*(1-x)*(-1) = (1-x)(1-3x); x = 1 is a minimum, x = 1/3 is a maximum, and the value at the maximum is 4/27.

64 Harold Ossher's concern: intension versus extension (query versus predicate); extension = intension(software). Harold Ossher: confirmed pointcuts.

65 The game Evergreen(r,m) for Boolean MAX-CSP(G), r > 1, m > 0. Two players; they agree on a protocol P1 to choose a set of m relations of rank r.
1. The players use P1 to choose a set G of m relations of rank r.
2. Anna constructs a CSP(G) instance H with 1000 variables and at most 2*m*(1000 choose r) constraints and gives it to Bob (1 second limit).
3. Bob gets paid by Anna the fraction of constraints he can satisfy in H (100 seconds limit).
4. Take turns (go to 1).

66 Evergreen(3,2), rank 3: represent relations by the integer corresponding to the truth table in standard sorted order 000 – 111. Choose relations between 1 and 254 (exclude 0 and 255). Don't choose two odd numbers: all-false would satisfy all constraints. Don't choose two numbers that are 128 or larger: all-true would satisfy all constraints.
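These two checks are bit tests on the relation numbers; a small sketch (the pair 23/151 is just a hypothetical example):

```python
# Sketch: a relation number is odd iff its truth table accepts 000 (bit 0),
# and has bit 7 set (value >= 128) iff it accepts 111.  A pair is "bad" for
# Evergreen(3,2) if a trivial assignment satisfies everything.

def bad_pair(r1, r2):
    both_accept_all_false = (r1 & 1) and (r2 & 1)          # both odd
    both_accept_all_true = (r1 & 128) and (r2 & 128)       # both >= 128
    return bool(both_accept_all_false or both_accept_all_true)

print(bad_pair(22, 22))    # False: 1-in-3 rejects both 000 and 111
print(bad_pair(23, 151))   # True: both odd, so all-false satisfies every constraint
```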

67 How to play Evergreen(3,2). G = {R1, R2} is given (by some protocol). Anna: compute t = (t1, t2) so that the maximum of appmean_t(x) over x in [0,1] is minimized, and construct a symmetric instance SYM_G(t) = H. Bob: solves H.
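A grid-search sketch of that min-max computation (an approximation under the encoding assumed earlier, not the authors' exact algorithm): find the mix t = (t1, t2) whose worst case over the bias x is smallest; the resulting value approximates t_G.

```python
# Sketch: approximate t_G for G = {R1, R2} by minimizing over the mix t1 the
# maximum over x of t1*appSAT_R1(x) + (1-t1)*appSAT_R2(x).

def app_sat(relation, rank, x):
    return sum(x ** bin(row).count("1") * (1 - x) ** (rank - bin(row).count("1"))
               for row in range(2 ** rank) if (relation >> row) & 1)

def t_g(r1, r2, rank=3, steps=200):
    grid = [k / steps for k in range(steps + 1)]
    best_t, best_val = None, float("inf")
    for t1 in grid:
        worst = max(t1 * app_sat(r1, rank, x) + (1 - t1) * app_sat(r2, rank, x)
                    for x in grid)
        if worst < best_val:
            best_t, best_val = (t1, 1 - t1), worst
    return best_t, best_val

print(t_g(22, 22))   # value ~0.444 = 4/9; the mix is irrelevant when R1 = R2
```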

68 Question. For any G and any CSP(G)-instance H, is there a weight assignment to the constraints of H so that the look-ahead polynomial abs_H has its maximum not at 0 or 1 and guarantees a maximum assignment for H without weights? The polynomial might guarantee maximum - 1 + ε, which is enough to guarantee a maximum assignment. What if we also allow n-maps?

69 Absolute P-optimality. Bringing the maximum to the boundary is polynomial; what is the complexity of bringing the maximum away from the boundary using weights? Definition ImproveLookAhead(G,H,N): given G, a CSP(G) instance H and an assignment N for H, is there an assignment that satisfies at least la_{H,N}(mb) + 1 (mb = maximum bias)? Assume G is sufficiently closed. Theorem [Absolute P-optimality]: ImproveLookAhead(G,H,N) is NP-hard iff MAX-CSP(G) is NP-hard. Warning: ImproveAllZero(G,H) is NP-hard iff MAX-CSP(G) is NP-hard.

70 Exploring the search space. Look-ahead polynomials don't eliminate parts of the search space; they crosscut the search space early in the search process. Whenever the look-ahead polynomial guarantees more than the currently best assignment, we can cut across the search space, but we might have to get back to the part we jumped over.

71 Crosscutting the search space. [Diagram with labels: current best, better by look-ahead, even better by search.]

72 Early is better than later. Look-ahead polynomials are more useful early in the search; later in the search the maximum will be at 0 or 1. Look-ahead polynomials will make mistakes, which are compensated by superresolvents: superresolvents cut off part of the search space and help the look-ahead polynomials eliminate the mistakes.

73 Requirements for the algorithms and properties to work:
– Relative P-optimality and absolute P-optimality: G needs to be closed under renaming, reductions and n-maps.
– Look-ahead polynomials (to improve assignments): closed under n-maps and reductions.

74 Never require closure under renaming? Symmetric formulas don't require it? They do? Consider 2:1 2 3 0, 2:1 2 4 0, 2:1 3 4 0, 2:2 3 4 0, which is not symmetric: {1 !2 !3 4} does not satisfy all constraints, only 3/4, and {!1 2 3 !4} only satisfies 1/4.

75 What happens during the solution process. The maximum of the polynomial will be at the boundary, say 0; this can be achieved in P. Notice the folding effect. Many superresolvents will be learned until a better assignment is found. Most constraints use an odd relation, a few an even relation (if many constraints can be satisfied).

76 What happens (continued). Because the polynomial only depends on a few numbers, it is not sensitive to the detailed properties of the instance. But if one variable has a visible bias towards either 1 or 0, the polynomials might detect it. Adjust the weights of the constraints to bring the maximum of the polynomial into the middle, so that abs(mb) increases.

77 Question for Daniel. p(x) = t1*p1(x) + t2*p2(x), with mb at 0 and value p(mb). Perturb t1, t2 so that p(x) gets a higher maximum; the fraction t1 should go up if R1 is an unsatisfied relation. How high can we bring the fraction of satisfied constraints this way?

78 Question. Does this solve the original problem? If we get all constraints satisfied, yes. Can we force that by deleting all but one unsatisfied constraint and adding the others back later on??? We are forced to work with many relations.

79 SAT rank 2 instance F: 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0, where 14:1 2 = or(1 2) and 7:1 3 = or(!1 !3). Find a maximum assignment and a proof that it is maximum.

80 Solution strategy. The MAX-CSP transition system gives many options: the choice of the initial assignment has a significant impact on the length of the proof (best to start with a maximum assignment); the variable ordering is irrelevant because we start with a maximum assignment; the value ordering is also irrelevant.

81 SAT rank 2 instance F: 14:1 2 0, 14:3 4 0, 14:5 6 0, 7:1 3 0, 7:1 5 0, 7:3 5 0, 7:2 4 0, 7:2 6 0, 7:4 6 0, where 14:1 2 = or(1 2), 7:1 3 = or(!1 !3), and (rank 2) 10:1 = or(1), 5:1 = or(!1). N = {1 !2 !3 4 5 !6}, unsat = 1/9. Derivation: {}|F|{}|N -> D, UP*; {1* !3 !5 4 6}|F|{}|N -> SSR, Restart; {}|F|5(1)|N -> UP*; {!1 2 !4 !6 5 3}|F|5(1)|N -> SSR; {!1 2 !4 !6 5 3}|F|5(1),0()|N -> Finale; end.

82 Rank 2 relations (truth-table rows and the bits of relations 10 and 12):
bit value | row (b a) | 10 | 12
1 | 00 | 0 | 0
2 | 01 | 1 | 0
4 | 10 | 0 | 1
8 | 11 | 1 | 1
10(1) = or(1) = or(*,1) (don't mention the second argument); 12(1) = or(1) = or(1,*); 10(2,1) = 12(1,2); 0() = empty clause.


84 UP / D. [Diagram: the UP and D rules of the transition system.]

85 Variable ordering: maximizes the likelihood that the look-ahead polynomials make correct decisions; finds the variable where the look-ahead polynomials give the strongest indication. Even if the look-ahead polynomial chooses the wrong mb, the decision might still be right. What is better: la_{H1}(mb1) is maximal, or |la_{H1}(mb1) - la_{H0}(mb0)| is maximal? (The latter is more instance-specific and will adapt to superresolvents.)

86 mean versus appmean. mean does less averaging, so it is preferred? appmean looks at the neighborhood of x*n.

87 Derandomization. In a perfectly symmetric CSP(G) instance, it is sufficient to try an assignment with k ones for each k from 0 to n to achieve the maximum (t_G). But in a non-symmetric instance, we need derandomization to achieve t_G and superresolution to achieve the maximum.

88 The game EvergreenTM(r,m) for Boolean MAX-CSP(G), r > 1, m > 0 (TM = true maximum). Two players; they agree on a protocol P1 to choose a set of m relations of rank r.
1. The players use P1 to choose a set G of m relations of rank r.
2. Anna constructs a CSP(G) instance H with 1000 variables and at most 2*m*(1000 choose r) constraints and gives it to Bob (1 second limit). Anna knows the maximum assignment and has a proof of maximality, but keeps it secret until Bob gives his response.
3. Bob gets paid the fraction of constraints he can satisfy in H relative to the maximum number that can be satisfied (100 seconds limit).
4. Take turns (go to 1).

89 EvergreenTM versus Evergreen.
EvergreenTM: Anna can try to create instances that are hard for Bob's solver; if Bob has a perfect solver, he will be paid 1.0; the game depends a lot on the solver quality; incomplete information (the maximum assignment is kept secret); the challenge for Anna is to find an instance whose maximum is known with a short proof.
Evergreen: Anna can control the maximum Bob is paid, assuming a perfect solver; Bob may be paid little even with a perfect solver; the game depends less on solver quality; complete information.

