Query-Guided Maximum Satisfiability


1 Query-Guided Maximum Satisfiability
Xin Zhang, Ravi Mangal, Mayur Naik (Georgia Tech), Aditya V. Nori (Microsoft Research)

2 The Ubiquity of Optimization Problems
Information Retrieval Databases Weighted Constraints Circuit Design Bioinformatics Planning and Scheduling Optimization problems are ubiquitous. They arise in applications from various domains, including information retrieval, databases, and many others. One common way to express them is as a system of weighted constraints. POPL 2016 9/20/2018

3 Optimization Problems in Program Reasoning
Program Analysis Program Verification Weighted Constraints Program Synthesis Recently, many emerging tasks in program reasoning have also been cast into this formulation, including program analysis, program verification, and program synthesis.

4 Optimization Problems in Program Reasoning
Soundness Conditions Hard Soft + Objectives Balance Tradeoffs (e.g., Precision vs. Scalability) Handle Noise (e.g., Incorrect Specs) Model Missing Information (e.g., Open Programs) Weighted constraints consist of two kinds of constraints: hard constraints, which must be satisfied, and soft constraints, which may be violated but whose total satisfied weight should be maximized. In program reasoning, hard constraints encode soundness conditions, while soft constraints encode objectives. These objectives can balance various trade-offs, such as precision versus scalability; handle noise in the reasoning, such as incorrect specifications provided by the user; and model missing information, such as native code in the program.

5 A Common Formulation for Weighted Constraints
Hard Soft + Maximum Satisfiability Problem (MaxSAT) One common way to solve such mixed constraints is to reduce them to a MaxSAT problem.

6 What is MaxSAT? MaxSAT: 𝑎 ∧ (C1) ¬𝑎∨𝑏 ∧ (C2) 4 ¬𝑏∨𝑐 ∧ (C3) 2 ¬𝑐∨𝑑 ∧ (C4) 7 ¬𝑑 (C5)
So what is MaxSAT? We all know about SAT.

7 Solution: a = true, b = true, c = true, d = false
What is MaxSAT? MaxSAT: 𝑎 ∧ (C1) ¬𝑎∨𝑏 ∧ (C2) 4 ¬𝑏∨𝑐 ∧ (C3) 2 ¬𝑐∨𝑑 ∧ (C4) 7 ¬𝑑 (C5) Subject to C1, C2; Maximize 4×C3 + 2×C4 + 7×C5. MaxSAT extends SAT with weights for optimization. There are two kinds of clauses in a MaxSAT problem: hard clauses are standard SAT clauses, while soft clauses carry weights. A MaxSAT solver finds a solution that satisfies all the hard clauses and maximizes the sum of the weights of the satisfied soft clauses. The instance on the slide has a solution that satisfies all clauses except C4, giving an objective of 11. Solution: a = true, b = true, c = true, d = false (Objective = 11)
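The objective on this slide can be checked with a few lines of brute force. The sketch below is ours, not from the talk: it encodes the five clauses above (with the soft weights 4, 2, 7 read off the slide), enumerates all 16 assignments, requires the hard clauses to hold, and maximizes the weighted sum of satisfied soft clauses.

```python
from itertools import product

# Variables a, b, c, d encoded as 1..4; a negative literal means negation.
HARD = [[1], [-1, 2]]                            # C1: a    C2: !a | b
SOFT = [(4, [-2, 3]), (2, [-3, 4]), (7, [-4])]   # C3, C4, C5 with weights

def satisfied(clause, asg):
    return any(asg[abs(lit)] == (lit > 0) for lit in clause)

best = None
for bits in product([False, True], repeat=4):
    asg = {v: bits[v - 1] for v in (1, 2, 3, 4)}
    if all(satisfied(c, asg) for c in HARD):      # hard clauses are mandatory
        obj = sum(w for w, c in SOFT if satisfied(c, asg))
        if best is None or obj > best[0]:
            best = (obj, asg)

# Best objective is 11: a=b=c=true, d=false, violating only C4 (weight 2).
print(best)
```

Violating C5 instead would forfeit weight 7, and violating C3 forfeits 4 while salvaging at most C4's 2, so giving up only C4 is optimal.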

8 The Evolution of MaxSAT Solvers
Eva500 (Winner of MaxSAT Competition 2014) 3.5X WBO (Winner of MaxSAT Competition 2010) In the past decade, we have witnessed remarkable improvement in the performance of MaxSAT solvers. At MaxSAT Competition 2014, the organizers did a head-to-head comparison between the winning solver of 2014 and the winning solver of 2010. On the same set of benchmarks, Eva500, the 2014 winner, solved 3.5 times more instances than WBO, the 2010 winner. In MaxSAT Competition 2015, the latest competition, the best solver handled instances with up to 200K variables and 4.1 million clauses. This improvement in MaxSAT solvers has, in turn, enabled even more demanding applications, some of which are beyond the capability of current solvers. MaxSAT Competition 2015: Instances up to 200K variables and 4.1 million clauses.

9 On Abstraction Refinement for Program Analyses in Datalog
New Challenges M = million Benchmark # Vars # Clauses Solver Runtime (secs) CCLS2akms MaxHS Eva500 antlr 5.2M 10.4M lusearch 3.4M 14.7M luindex 2.7M 5.9M avrora 4.5M 17.6M xalan 5.6M 19.2M The table on this slide shows the MaxSAT instances generated from our recent work, which uses MaxSAT to find good abstractions for analyses written in Datalog. These instances are an order of magnitude larger than the instances on the previous slide. We tried to solve them using the top three solvers from the MaxSAT competition. While two solvers finished on the three smaller instances, none of them finished on the two larger instances within 24 hours. On Abstraction Refinement for Program Analyses in Datalog [PLDI'14 Zhang et al.]

10 On Abstraction Refinement for Program Analyses in Datalog
New Challenges M = million, timeout = 24 hours Benchmark # Vars # Clauses Solver Runtime (secs) CCLS2akms MaxHS Eva500 antlr 5.2M 10.4M timeout 22.6 38.2 lusearch 3.4M 14.7M 23.9 31 luindex 2.7M 5.9M 8.9 22 avrora 4.5M 17.6M xalan 5.6M 19.2M The table shows the instances generated from our recent work, which uses MaxSAT to find good abstractions for analyses written in Datalog. These instances are an order of magnitude larger than those on the previous slide. We picked three top solvers from the MaxSAT competition. As we can see, all three solvers timed out on the two larger benchmarks after 24 hours. Note that the benchmark programs here are all from the DaCapo suite, and they are not the largest in that suite. On Abstraction Refinement for Program Analyses in Datalog [PLDI'14 Zhang et al.]


12 New Challenges MaxSAT Solvers New Problems
As we can see, although MaxSAT solvers have advanced significantly in the past decade, the growth of data and the emergence of new applications now confront us with problems at another scale. For these problems, the current solvers are simply not powerful enough.

13 ? So what should we do? It is as if we are trapped in a large maze and cannot find a way out.

14 ? Fortunately, there is one observation that might save us from this situation. In these applications, we are often not interested in the complete solution, but only in a part of it. We call these parts of interest "queries". For example, in this maze, a query can be the shortest path between two points.

15 This allows us to zoom into the area that is relevant to these two points, dramatically reducing the problem's complexity.

16 Queries in Different Domains
Program Reasoning: Does variable head alias with variable tail on line 50 in Complex.java? Information Retrieval: Is Dijkstra most likely an author of "Structured Programming"? In fact, the concept of queries is common across domains. For example, in program reasoning, a query can ask whether variable head aliases with variable tail on line 50 in Complex.java. In information retrieval, a query can ask whether Dijkstra is most likely an author of "Structured Programming".

17 Queries in MaxSAT QUERIES = {a, d} 𝑎 ∧ (C1) ¬𝑎∨𝑏 ∧ (C2) 4 ¬𝑏∨𝑐 ∧ (C3) 2 ¬𝑐∨𝑑 ∧ (C4) 7 ¬𝑑 (C5)
In the MaxSAT instances generated from such applications, queries map to the assignments of certain variables in the solution; we also call these variables of interest queries. This allows us to define a new problem, called Query-Guided Maximum Satisfiability, or Q-MaxSAT.

18 Query-Guided Maximum Satisfiability (Q-MaxSAT)
𝑎 ∧ (C1) ¬𝑎∨𝑏 ∧ (C2) 4 ¬𝑏∨𝑐 ∧ (C3) 2 ¬𝑐∨𝑑 ∧ (C4) 7 ¬𝑑 (C5) Q-MaxSAT: Given a MaxSAT formula 𝜑 over variables 𝑉 and a set of queries 𝑄 ⊆ 𝑉, a solution to the Q-MaxSAT instance (𝜑, 𝑄) is a partial assignment 𝛼_𝑄 : 𝑄 → {0, 1} such that ∃𝛼 ∈ MaxSAT(𝜑). ∀𝑣 ∈ 𝑄. 𝛼_𝑄(𝑣) = 𝛼(𝑣) QUERIES = {a, d} Q-MaxSAT augments the MaxSAT problem with a set of queries. A solution to a Q-MaxSAT instance is a partial assignment to the queries that can be completed to a solution of the original MaxSAT problem. One naive way to solve Q-MaxSAT is to feed the whole formula to a MaxSAT solver and extract the assignment to the queries; however, this defeats the whole point of being query-guided. We, on the other hand, propose an iterative algorithm that only solves the part relevant to the queries.

19 Query-Guided Maximum Satisfiability (Q-MaxSAT)
𝑎 ∧ (C1) ¬𝑎∨𝑏 ∧ (C2) 4 ¬𝑏∨𝑐 ∧ (C3) 2 ¬𝑐∨𝑑 ∧ (C4) 7 ¬𝑑 (C5) Q-MaxSAT: Given a MaxSAT formula 𝜑 over variables 𝑉 and a set of queries 𝑄 ⊆ 𝑉, a solution to the Q-MaxSAT instance (𝜑, 𝑄) is a partial assignment 𝛼_𝑄 : 𝑄 → {0, 1} such that ∃𝛼 ∈ MaxSAT(𝜑). ∀𝑣 ∈ 𝑄. 𝛼_𝑄(𝑣) = 𝛼(𝑣) QUERIES = {a, d} Solution: a = true, d = false

20 Query-Guided Maximum Satisfiability (Q-MaxSAT)
𝑎 ∧ (C1) ¬𝑎∨𝑏 ∧ (C2) 4 ¬𝑏∨𝑐 ∧ (C3) 2 ¬𝑐∨𝑑 ∧ (C4) 7 ¬𝑑 (C5) Q-MaxSAT: Given a MaxSAT formula 𝜑 over variables 𝑉 and a set of queries 𝑄 ⊆ 𝑉, a solution to the Q-MaxSAT instance (𝜑, 𝑄) is a partial assignment 𝛼_𝑄 : 𝑄 → {0, 1} such that ∃𝛼 ∈ MaxSAT(𝜑). ∀𝑣 ∈ 𝑄. 𝛼_𝑄(𝑣) = 𝛼(𝑣) QUERIES = {a, d} MaxSAT Solution: a = true, b = true, c = true, d = false
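Under the definition above, a naive (non-query-guided) reference solver simply computes a full optimal assignment and projects it onto 𝑄. This sketch is our own encoding of the running five-clause example; the whole point of the paper is to avoid this full solve, but it fixes the intended semantics.

```python
from itertools import product

# Variables a=1, b=2, c=3, d=4; negative literal = negation.
HARD = [[1], [-1, 2]]                            # C1, C2
SOFT = [(4, [-2, 3]), (2, [-3, 4]), (7, [-4])]   # C3, C4, C5
QUERIES = [1, 4]                                 # Q = {a, d}

def sat(clause, asg):
    return any(asg[abs(lit)] == (lit > 0) for lit in clause)

def naive_qmaxsat(hard, soft, queries, nvars):
    """Reference semantics: solve the full MaxSAT, project onto the queries."""
    best_obj, best_asg = -1, None
    for bits in product([False, True], repeat=nvars):
        asg = {v: bits[v - 1] for v in range(1, nvars + 1)}
        if all(sat(c, asg) for c in hard):
            obj = sum(w for w, c in soft if sat(c, asg))
            if obj > best_obj:
                best_obj, best_asg = obj, asg
    return {q: best_asg[q] for q in queries}

print(naive_qmaxsat(HARD, SOFT, QUERIES, 4))   # {1: True, 4: False}
```

On this instance the optimum is unique, so the projection a = true, d = false matches the slide.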

21 An Iterative Algorithm
Q-MaxSAT instance partial solution workSet ⊆ formula Checker Yes MaxSAT Solver Q-MaxSAT solution No Our algorithm starts by feeding a subset of the clauses, which we call the workSet φ′, to a MaxSAT solver. After the solver produces a solution α_φ′, we invoke a checker to see whether there exists a completion of α_φ′ that is a solution to the original MaxSAT problem. If yes, we extract the Q-MaxSAT solution from α_φ′; otherwise, we identify a new subset of clauses and grow φ′. expanded workSet

22 An Iterative Algorithm
Key challenge: how to implement a sound yet efficient checker? Q-MaxSAT instance partial solution workSet ⊆ formula Checker Yes MaxSAT Solver Q-MaxSAT solution No expanded workSet

23 Our key idea: use a small set of clauses to succinctly summarize the effect of the unexplored clauses.

24 Example Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧
¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... Next, we illustrate our idea on an example Q-MaxSAT instance. Note that the formula is very large: too large to be solved directly by a MaxSAT solver.


27 Example Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧
¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... Graph: each node is a variable. We represent each unit clause by filling the corresponding node with T or F, and each implication clause by a directed edge. The dotted area represents a large number of clauses omitted from the slide.


31 Example: Iteration 1 Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧
¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet Our algorithm constructs the initial workSet φ′ by taking all the clauses containing v6, and feeds it to a MaxSAT solver.

32 Example: Iteration 1 (blue = true, red = false)
Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet The MaxSAT solver produces a solution that assigns false to all the variables v4 through v8.

33 ? Example: Iteration 1 (blue = true, red = false)
Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet Our algorithm then checks whether v6 = false is a solution to the Q-MaxSAT instance.

34 ? Example: Iteration 1 (blue = true, red = false)
Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet frontiers Our observation is that the clauses outside φ′ can only affect the query assignment via clauses that share variables with φ′. We call these frontier clauses; they are marked on the slide. Furthermore, if a frontier clause is already satisfied by the current partial solution, including it in the workSet will not improve the solution.

35 ? Example: Iteration 1 (blue = true, red = false)
Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet frontiers This leaves us with just two unit frontier clauses.

36 ? Example: Iteration 1 (blue = true, red = false) summarySet =
Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet frontiers summarySet = {(100, v4), (100, v8)} We take these two clauses and construct a summary set ψ, which summarizes the effect of the clauses outside φ′.

37 max(workSet ∪ summarySet) - max(workSet) = 0
Example: Iteration 1 (blue = true, red = false) Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet frontiers summarySet = {(100, v4), (100, v8)} Our algorithm then performs the optimality check by comparing the best objective of φ′ ∪ ψ with that of φ′. The former has an objective of 220, while the latter has only 20. 20 ? 220 max(workSet ∪ summarySet) - max(workSet) = 0

38 max(workSet ∪ summarySet) - max(workSet) = 0
Example: Iteration 1 (blue = true, red = false) Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet frontiers summarySet = {(100, v4), (100, v8)} As a result, we fail the optimality check. 20 220 max(workSet ∪ summarySet) - max(workSet) = 0

39 max(workSet ∪ summarySet) - max(workSet) = 0
Example: Iteration 1 (blue = true, red = false) Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet frontiers summarySet = {(100, v4), (100, v8)} As a result, we fail the optimality check. The solver's solution for workSet ∪ summarySet is v4 = true, v5 = true, v6 = true, v7 = true, v8 = true. 20 220 max(workSet ∪ summarySet) - max(workSet) = 0
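The numbers 20 and 220 can be reproduced directly. This is our own sketch: it brute-forces the best objective of the iteration-1 workSet alone, and of the workSet together with the two unit frontier clauses in the summary set (literals are signed ints, 6 meaning v6 and -6 meaning not v6).

```python
from itertools import product

# Iteration 1: workSet = all clauses mentioning v6; summarySet = the two
# unsatisfied frontier unit clauses v4 and v8, each of weight 100.
WORKSET = [(5, [-6, 5]), (5, [-6, 7]), (5, [-4, 6]), (5, [-8, 6])]
SUMMARY = [(100, [4]), (100, [8])]

def max_weight(clauses):
    """Exhaustive weighted MaxSAT over the clause set's own variables."""
    vs = sorted({abs(lit) for _, c in clauses for lit in c})
    best = 0
    for bits in product([False, True], repeat=len(vs)):
        asg = dict(zip(vs, bits))
        best = max(best, sum(w for w, c in clauses
                             if any(asg[abs(l)] == (l > 0) for l in c)))
    return best

print(max_weight(WORKSET))             # 20
print(max_weight(WORKSET + SUMMARY))   # 220
```

The gap of 200 is nonzero, so the check fails and the workSet must grow.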

40 Example: Iteration 2 (blue = true, red = false)
Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet In the second iteration, invoking the MaxSAT solver on φ′ yields a solution that assigns true to all the variables v4 through v8.

41 ? Example: Iteration 2 (blue = true, red = false)
Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet

42 ? Example: Iteration 2 (blue = true, red = false)
Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet frontiers summarySet = {(100, ¬v7), (5, ¬v5∨v2), (5, ¬v5∨v3)} Similarly, to perform the optimality check, we construct a summary set from the frontier clauses that are not satisfied by the current partial solution.

43 max(workSet ∪ summarySet) - max(workSet) = 0
Example: Iteration 2 (blue = true, red = false) Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet summarySet = {(100, ¬v7), (5, ¬v5∨v2), (5, ¬v5∨v3)} However, the same optimality check as in the last iteration would trivially fail because of the two newly introduced variables, v2 and v3. To solve this problem, and to further improve the precision of the optimality check, we remove v2 and v3 from the summary set and strengthen its clauses. In this case, doing so is equivalent to setting v2 and v3 to false. Intuitively, by setting these variables to false, we overestimate the effect of the unexplored clauses, assuming that the frontier clauses will not be satisfied by unexplored variables like v2 and v3. ? max(workSet ∪ summarySet) - max(workSet) = 0

44 max(workSet ∪ summarySet) - max(workSet) = 0
Example: Iteration 2 (blue = true, red = false) Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet summarySet = {(100, ¬v7), (5, ¬v5), (5, ¬v5)} With the new summary set, we perform the check again. ? max(workSet ∪ summarySet) - max(workSet) = 0

45 max(workSet ∪ summarySet) - max(workSet) = 0
Example: Iteration 2 (blue = true, red = false) Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet summarySet = {(100, ¬v7), (5, ¬v5), (5, ¬v5)} Although the optimality check still fails, in the next iteration we will show that our algorithm terminates using this improved check. 220 320 max(workSet ∪ summarySet) - max(workSet) = 0

46 max(workSet ∪ summarySet) - max(workSet) = 0
Example: Iteration 2 (blue = true, red = false) Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet summarySet = {(100, ¬v7), (5, ¬v5), (5, ¬v5)} Similarly, by checking the solution to φ′ ∪ ψ, we conclude that all three clauses in the summary set are likely responsible for failing the check. We therefore expand the workSet φ′ with their original clauses. 220 v4 = true, v5 = false, v6 = true, v7 = true, v8 = true 320 max(workSet ∪ summarySet) - max(workSet) = 0
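Strengthening amounts to dropping the literals over unexplored variables, which conservatively assumes the unexplored part will not satisfy them. A one-function sketch of our own:

```python
def strengthen(clause, workset_vars):
    """Drop literals over variables outside the workSet (signed-int literals).
    This conservatively assumes unexplored variables will not satisfy them."""
    return [lit for lit in clause if abs(lit) in workset_vars]

# Iteration 2: v2 and v3 lie outside the workSet, so the frontier clauses
# (not v5 or v2) and (not v5 or v3) are both strengthened to the unit
# clause (not v5).
ws_vars = {4, 5, 6, 7, 8}
print(strengthen([-5, 2], ws_vars))   # [-5]
print(strengthen([-5, 3], ws_vars))   # [-5]
```

Since a strengthened clause is harder to satisfy than its original, the check remains sound: it still overestimates what the unexplored clauses can contribute.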

47 Example: Iteration 3 (blue = true, red = false)
Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet In the third iteration, we get a solution for φ′ that assigns true to the variables v2 through v8.

48 ? Example: Iteration 3 (blue = true, red = false)
Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet To perform the optimality check,

49 ? Example: Iteration 3 (blue = true, red = false)
frontier Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet we construct the summary set. As in Iteration 2, we improve the optimality check by strengthening the frontier clauses. summarySet = {(5, ¬v3∨v1)}

50 max(workSet ∪ summarySet) - max(workSet) = 0
Example: Iteration 3 (blue = true, red = false) frontier Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet This time, we perform the improved optimality check. summarySet = {(5, ¬v3∨v1)} 325 330 max(workSet ∪ summarySet) - max(workSet) = 0

51 max(workSet ∪ summarySet) - max(workSet) = 0
Example: Iteration 3 (blue = true, red = false) frontier Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧ ¬v7 weight 100 ∧ ¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet We successfully conclude that v6 = true is a solution to the Q-MaxSAT problem. Although many clauses and variables can reach v6 in the graph, our approach resolves v6 in just three iterations, exploring only a small part of the graph. summarySet = {(5, ¬v3∨v1)} 325 325 max(workSet ∪ summarySet) - max(workSet) = 0

52 Example Queries = {v6}, formula = v4 weight 100 ∧ v8 weight 100 ∧
¬v3 ∨ v1 weight 5 ∧ ¬v5 ∨ v2 weight 5 ∧ ¬v5 ∨ v3 weight 5 ∧ ¬v6 ∨ v5 weight 5 ∧ ¬v6 ∨ v7 weight 5 ∧ ¬v4 ∨ v6 weight 5 ∧ ¬v8 ∨ v6 weight 5 ∧ ... workSet = formula In the worst case, we do have to explore the whole formula. But in practice, we find that we do not have to.
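The whole three-iteration run can be replayed in miniature, with exhaustive search standing in for both the MaxSAT solver and the checker. This is our own sketch, not the paper's implementation: the unit clause not-v7 of weight 100 is reconstructed from the summary sets shown on the slides, the "..." clauses are omitted, and the workSet is expanded with all unsatisfied frontier clauses rather than only the "responsible" ones.

```python
from itertools import product

# Literals are signed ints: 6 means v6, -7 means "not v7".
FORMULA = [
    (100, [4]), (100, [8]), (100, [-7]),
    (5, [-3, 1]), (5, [-5, 2]), (5, [-5, 3]),
    (5, [-6, 5]), (5, [-6, 7]), (5, [-4, 6]), (5, [-8, 6]),
]
QUERIES = [6]

def variables(clauses):
    return {abs(lit) for _, c in clauses for lit in c}

def brute_max(clauses):
    """Exhaustive weighted MaxSAT over the clause set's own variables."""
    vs = sorted(variables(clauses))
    best_obj, best_asg = -1, {}
    for bits in product([False, True], repeat=len(vs)):
        asg = dict(zip(vs, bits))
        obj = sum(w for w, c in clauses
                  if any(asg[abs(l)] == (l > 0) for l in c))
        if obj > best_obj:
            best_obj, best_asg = obj, asg
    return best_obj, best_asg

def solve(formula, queries):
    # Initial workSet: all clauses mentioning a query variable.
    workset = [wc for wc in formula if set(queries) & variables([wc])]
    while True:
        obj, asg = brute_max(workset)
        ws_vars = variables(workset)
        # Frontier: unexplored clauses sharing a variable with the workSet
        # and not already satisfied by the current partial solution; their
        # summaries drop literals over unexplored variables (strengthening).
        frontier, summary = [], []
        for w, c in formula:
            if (w, c) in workset or not (ws_vars & {abs(l) for l in c}):
                continue
            if any(abs(l) in ws_vars and asg[abs(l)] == (l > 0) for l in c):
                continue
            frontier.append((w, c))
            summary.append((w, [l for l in c if abs(l) in ws_vars]))
        if brute_max(workset + summary)[0] == obj:   # optimality check
            return {q: asg[q] for q in queries}
        workset += frontier                          # expand and retry

print(solve(FORMULA, QUERIES))   # {6: True}
```

On this instance the replay reproduces the slides exactly: the checks compare 20 vs. 220, then 220 vs. 320, then 325 vs. 325, at which point v6 = true is returned after three iterations.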

53 Why It Works In Practice
Program Reasoning: Does variable head alias with variable tail on line 50 in Complex.java? Information Retrieval: Is Dijkstra most likely an author of "Structured Programming"? This is because locality is inherent in many problems across domains. For example, in program reasoning, if you only care about the aliasing of two variables, why reason about the other 1,000 variables in the program? In information retrieval, if you just want the author information for one book, why reason about a million other books? We next confirm this observation experimentally.

54 Experimental Setup Implemented in a tool called Pilot
MiFuMaX as the underlying solver Evaluated on 19 large instances from real-world problems Program analysis: a pointer analysis and a datarace analysis Information retrieval: advisor recommendation (AR), entity resolution (ER), and information extraction (IE) 3GB RAM and 1 hour for each MaxSAT invocation 0-CFA Chord [PLDI'06] We implemented our algorithm in a tool called Pilot, which uses MiFuMaX as the underlying MaxSAT solver; later we also show results with other MaxSAT solvers. We evaluated Pilot on 19 large instances generated from program analysis and information retrieval. For program analysis, we use instances from the 0-CFA pointer analysis and the datarace analysis in JChord. For information retrieval, we use three standard benchmarks. In the interest of time, we only show results for the pointer analysis client and the three IR clients. In all experiments, we set 3GB RAM and 1 hour of CPU time as the resource limit for each MaxSAT invocation. All experiments were performed on a Linux machine with 8GB RAM and a 3GHz AMD processor.

55 Instance Characteristics: Pointer Analysis
Benchmark   # queries   # variables   # clauses
ftp         55          2.3M          3M
hedc        36          3.8M          4.8M
weblech     25          5.8M          8.4M
antlr       113         8.7M          13M
avrora      151         11.7M         16.3M
chart       94          16M           22.3M
luindex     109         8.5M          11.9M
lusearch    248         7.8M          10.9M
xalan       754         12.4M         18.7M
This table shows the instances generated from the pointer analysis. M = million

56 Instance Characteristics: Pointer Analysis
Benchmark   # queries   # variables   # clauses
ftp         55          2.3M          3M
hedc        36          3.8M          4.8M
weblech     25          5.8M          8.4M
antlr       113         8.7M          13M
avrora      151         11.7M         16.3M
chart       94          16M           22.3M
luindex     109         8.5M          11.9M
lusearch    248         7.8M          10.9M
xalan       754         12.4M         18.7M
As we can see, the largest instance contains 16 million variables and 22.3 million clauses. M = million

57 Instance Characteristics: Information Retrieval
Benchmark   # queries   # variables   # clauses
AR          10          0.3M          7.9M
ER          25          3K            4.8M
IE          6           47K           0.9M
K = thousand, M = million Similarly, this table shows the instances from information retrieval.

58 Performance Results: Pointer Analysis
Run MiFuMaX without queries running time (seconds) peak memory (MB) # clauses (M=million) Pilot Baseline ftp hedc weblech antlr avrora chart luindex lusearch xalan We evaluate the performance of Pilot when all queries are resolved together. As a baseline, we feed the MaxSAT instances directly to MiFuMaX without queries.

59 Performance Results: Pointer Analysis
running time (seconds) peak memory (MB) # clauses (M=million) Pilot Baseline ftp 11 1,262 3M hedc 21 1,918 4.8M weblech timeout 8.4M antlr 13M avrora 16.3M chart 22.3M luindex 11.9M lusearch 10.9M xalan 18.7M It turned out that, on 7 out of the 9 instances, the baseline ran out of resources. timeout: runtime > 1 hr. or memory > 3GB

60 Performance Results: Pointer Analysis
Running time (seconds), peak memory (MB), and # clauses explored, Pilot vs. Baseline (a "-" marks a value missing from the slide):
Benchmark   time Pilot / Baseline   memory Pilot / Baseline   # clauses Pilot / Baseline
ftp         16 / 11                 - / 1,262                 0.03M / 3M
hedc        23 / 21                 181 / 1,918               0.4M / 4.8M
weblech     4 / timeout             363 / -                   0.9M / 8.4M
antlr       190 / timeout           1,405 / -                 3.3M / 13M
avrora      178 / timeout           1,095 / -                 2.6M / 16.3M
chart       253 / timeout           721 / -                   1.8M / 22.3M
luindex     169 / timeout           944 / -                   2.2M / 11.9M
lusearch    115 / timeout           659 / -                   1.5M / 10.9M
xalan       646 / timeout           1,312 / -                 3.4M / 18.7M
Pilot, on the other hand, terminated on all of them, in under 11 minutes and 1.5GB of memory. This reflects the fact that the number of clauses Pilot explored is under 30% of all the clauses. On the two smallest instances, Pilot consumes less than 10% of the baseline's memory but takes slightly longer; this is due to Pilot's iterative nature. timeout: runtime > 1 hr. or memory > 3GB

61 Performance Results: Pointer Analysis
running time (seconds) peak memory (MB) # clauses (M=million) iterations Pilot Baseline ftp 16 11 1,262 0.03M 3M 9 hedc 23 21 181 1,918 0.4M 4.8M 7 weblech 4 timeout 363 0.9M 8.4M 1 antlr 190 1,405 3.3M 13M avrora 178 1,095 2.6M 16.3M 8 chart 253 721 1.8M 22.3M 6 luindex 169 944 2.2M 11.9M lusearch 115 659 1.5M 10.9M xalan 646 1,312 3.4M 18.7M As we can see, Pilot takes 9 and 7 iterations on the two smallest instances. timeout: runtime > 1 hr. or memory > 3GB

62 Performance Results: Pointer Analysis
running time (seconds) peak memory (MB) # clauses (M=million) iterations last iter. time Pilot Baseline ftp 16 11 1,262 0.03M 3M 9 0.1 hedc 23 21 181 1,918 0.4M 4.8M 7 3 weblech 4 timeout 363 0.9M 8.4M 1 antlr 190 1,405 3.3M 13M 14 avrora 178 1,095 2.6M 16.3M 8 13 chart 253 721 1.8M 22.3M 6 luindex 169 944 2.2M 11.9M 12 lusearch 115 659 1.5M 10.9M xalan 646 1,312 3.4M 18.7M But the time Pilot spends in its last iteration is much lower than the baseline's running time, since Pilot explores far fewer clauses. timeout: runtime > 1 hr. or memory > 3GB

63 Performance Results: Information Retrieval
running time (seconds) peak memory (MB) # clauses (M=million) iterations last iter. time Pilot Baseline AR 4 timeout 2K 7.9M 7 0.3 ER 13 2 6 44 9K 4.8M 19 0.2 IE 2,760 335 27K 0.9M 0.05 timeout: runtime > 1 hr. or memory > 3GB We see similar results on the instances generated from information retrieval.

64 Effect of Resolving Queries Separately
Pointer Analysis on ‘avrora’
[Plot: peak memory (MB) per query index; a horizontal red line marks the memory consumption of Pilot when resolving all queries together.]
Next we study the resource consumption of Pilot when each query is resolved separately. This graph shows the result for the instance generated by running the pointer analysis on avrora. The red line shows the memory consumption of Pilot when all queries are resolved together; each blue point shows the memory consumption when a single query is resolved. As we can see, most queries require only 20% of the memory needed to resolve all queries together. This is in line with the locality of program analysis, which many query-driven pointer analyses exploit.

65 Effect of Resolving Queries Separately
AR (Advisor Recommendation)
[Plot: peak memory (MB) per query index.]
On the other hand, on the AR instance, for 8 of the queries Pilot needs over 80% of the memory it needs when resolving all queries together. This indicates that, for queries correlated with each other, batching them in Pilot can improve performance compared to the cumulative cost of resolving them separately.
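The locality behind these plots can be pictured with standard cone-of-influence slicing (our illustration, not Pilot's actual mechanism): the clauses relevant to a query are those transitively connected to it through shared variables, so independent queries touch small disjoint slices while correlated queries largely share one slice.

```python
from collections import defaultdict, deque

def clause_slice(clauses, query_vars):
    """Indices of clauses transitively reachable from query_vars via shared
    variables. Each clause is a tuple of literals (sign encodes polarity)."""
    by_var = defaultdict(list)
    for i, lits in enumerate(clauses):
        for l in lits:
            by_var[abs(l)].append(i)
    seen_clauses = set()
    seen_vars = set(query_vars)
    frontier = deque(query_vars)
    while frontier:
        v = frontier.popleft()
        for i in by_var[v]:
            if i in seen_clauses:
                continue
            seen_clauses.add(i)
            for l in clauses[i]:
                u = abs(l)
                if u not in seen_vars:
                    seen_vars.add(u)
                    frontier.append(u)
    return seen_clauses

# Two independent components: {x1, x2, x3} and {x4, x5}.
clauses = [(1, 2), (-2, 3), (4, 5)]
```

A query on x1 only needs the first component's clauses, a query on x4 only the second's, while querying both together needs their union, which mirrors why batching pays off only for correlated queries.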

66 Effect of Different Underlying MaxSAT Solvers
| instance | solver | Pilot time (s) | Baseline time (s) | Pilot memory (MB) | Baseline memory (MB) |
|---|---|---|---|---|---|
| pointer analysis | MiFuMaX | 178 | timeout | 1,095 | |
| AR | MiFuMaX | 4 | timeout | | |

Finally, we study the performance of Pilot when different underlying solvers are used. The baseline runs each solver directly on the MaxSAT instance, without queries; the pointer-analysis benchmark is avrora.

67 Effect of Different Underlying MaxSAT Solvers
| instance | solver (top solvers in MaxSAT'14) | Pilot time (s) | Baseline time (s) | Pilot memory (MB) | Baseline memory (MB) |
|---|---|---|---|---|---|
| pointer analysis | MiFuMaX | 178 | timeout | 1,095 | |
| | CCLS2akms | timeout | timeout | | |
| | Eva500 | 2,267 | timeout | 1,379 | |
| | MaxHS | 555 | timeout | 1,296 | |
| | WPM-2014.co | 609 | timeout | 1,127 | |
| AR | MiFuMaX | 4 | timeout | 148 | |
| | CCLS2akms | 13 | timeout | 21 | |
| | Eva500 | 2 | timeout | 9 | |
| | MaxHS | 6 | timeout | | |
| | WPM-2014.co | | timeout | | |

Besides MiFuMaX, we picked 4 top solvers from the MaxSAT 2014 evaluation. As we can see, except for one setting where both approaches ran out of resources, Pilot terminated in all settings, while the baseline finished in none. This indicates that Pilot consistently improves over the baseline regardless of the underlying MaxSAT solver.
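Pilot can sit on top of any off-the-shelf solver because solvers of that era share the MaxSAT Evaluation's WCNF input format: a "p wcnf <vars> <clauses> <top>" header, soft clauses prefixed by their weight, and hard clauses carrying the special weight "top", chosen larger than the sum of all soft weights. A minimal writer for that format (our sketch, with a made-up two-variable instance):

```python
def to_wcnf(nvars, hard, soft):
    # hard: list of literal tuples; soft: list of (weight, literals).
    # 'top' must exceed the total soft weight so that no set of soft
    # clauses can ever outweigh a hard one.
    top = sum(w for w, _ in soft) + 1
    lines = ["p wcnf %d %d %d" % (nvars, len(hard) + len(soft), top)]
    for lits in hard:
        lines.append(" ".join(str(x) for x in (top, *lits, 0)))
    for w, lits in soft:
        lines.append(" ".join(str(x) for x in (w, *lits, 0)))
    return "\n".join(lines)

# Hard: (x1 -> x2); soft: prefer x2 false (weight 5) and x1 true (weight 1).
wcnf = to_wcnf(2, hard=[(-1, 2)], soft=[(5, (-2,)), (1, (1,))])
```

Writing each iteration's clause subset in this format is what lets the same driver loop swap MiFuMaX for any of the other evaluated solvers.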

68 Conclusion
New problem: Q-MaxSAT = MaxSAT + queries
- Natural fit for optimization problems in diverse and emerging domains
Algorithm to solve Q-MaxSAT
- Uses local checking to guarantee global optimality
Empirical evaluation on 19 instances from different domains
- Up to 16 million variables and 22 million clauses
- Pilot: ~300MB and ~100 sec. on average
- Conventional MaxSAT solvers: 8 timeouts (3GB memory, 1 hr. limit)
We conclude our talk by summarizing our three contributions. First, we proposed a new problem, Q-MaxSAT, which augments the conventional MaxSAT problem with queries; it turns out to be a natural fit for optimization problems in various domains. Second, we proposed a novel algorithm to solve Q-MaxSAT, which uses local checking to guarantee global optimality. Finally, we evaluated our approach on 19 large instances from program analysis and information retrieval, containing up to 16 million variables and 22 million clauses. Pilot managed to finish on all of them with 300MB of memory and 100 seconds of runtime on average, while the conventional solvers timed out on 8 of them.

