Lower Bounds for Exact Model Counting and Applications in Probabilistic Databases

Paul Beame, Jerry Li, Sudeepa Roy, Dan Suciu
University of Washington
Model Counting

Model Counting Problem: Given a Boolean formula F, compute #F = the number of models (satisfying assignments) of F.
e.g. F = (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z):
#F = the number of assignments to x, y, u, w, z that make F true.

Probability Computation Problem: Given F and independent probabilities Pr(x), Pr(y), Pr(z), …, compute Pr(F).
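Both problems can be checked by brute force on the running example. A minimal Python sketch (the encoding and variable order are mine; exhaustive enumeration is of course only feasible for tiny formulas):

```python
from itertools import product

# F = (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)
def F(x, y, u, w, z):
    return (x or y) and (x or u or w) and ((not x) or u or w or z)

VARS = "x y u w z".split()

# Model counting: enumerate all 2^5 assignments and count the models.
count = sum(F(*bits) for bits in product([False, True], repeat=5))
print("#F =", count)                       # 20

# Probability computation: weight each model by the independent
# probabilities of its variables (here: the uniform distribution).
pr = {v: 0.5 for v in VARS}
total = 0.0
for bits in product([False, True], repeat=5):
    if F(*bits):
        weight = 1.0
        for v, b in zip(VARS, bits):
            weight *= pr[v] if b else 1 - pr[v]
        total += weight
print("Pr(F) =", total)                    # 0.625 = 5/8
```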
Model Counting

Model counting is #P-hard
▫ even for formulas where satisfiability is easy to check
Applications in probabilistic inference
▫ e.g. Bayesian net learning
There are many practical model counters that can compute both #F and Pr(F).
Exact Model Counters
[Survey by Gomes et al. ’09]

Search-based / DPLL-based (explore the assignment space and count the satisfying assignments):
▫ CDP [Birnbaum et al. ’99]
▫ Relsat [Bayardo Jr. et al. ’97, ’00]
▫ Cachet [Sang et al. ’05]
▫ SharpSAT [Thurley ’06]
Knowledge-compilation-based (compile F into a “computation-friendly” form):
▫ c2d [Darwiche ’04]
▫ Dsharp [Muise et al. ’12]
▫ …

Both techniques explicitly or implicitly (i) use DPLL-based algorithms and (ii) produce FBDD or Decision-DNNF compiled forms (as output, or as the trace) [Huang-Darwiche ’05, ’07].
Model Counters Use Extensions to DPLL

Caching subformulas
▫ Cachet, SharpSAT, c2d, Dsharp
Component analysis
▫ Relsat, c2d, Cachet, SharpSAT, Dsharp
Conflict-directed clause learning
▫ Cachet, SharpSAT, c2d, Dsharp

DPLL + caching (+ clause learning) → FBDD
DPLL + caching + component analysis (+ clause learning) → Decision-DNNF

How much more does component analysis add? i.e., how much more powerful are decision-DNNFs than FBDDs?
Main Result

Theorem: Decision-DNNF of size N → FBDD of size N^(log N + 1).
If the formula is a k-DNF, then → FBDD of size N^k.
The conversion algorithm runs in time linear in the size of its output.
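For a sense of scale, an illustrative calculation (mine, logs base 2):

$$N = 2^{10}:\qquad N^{\log_2 N + 1} = \left(2^{10}\right)^{11} = 2^{110}$$

In general $N^{\log_2 N + 1} = 2^{(\log_2 N)^2 + \log_2 N}$, which grows faster than any fixed polynomial in $N$ but far slower than $2^N$; hence "quasipolynomial".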
Consequence: Running-Time Lower Bounds

A model counting algorithm's running time is at least the size of its compiled form, so a lower bound on compiled form size gives a lower bound on running time.
▫ Note: the running time may be much larger than the size
▫ e.g. an unsatisfiable CNF formula has a trivial compiled form
Consequence: Running-Time Lower Bounds

Our quasipolynomial conversion, combined with known exponential lower bounds on FBDDs [Bollig-Wegener ’00, Wegener ’02], yields exponential lower bounds on decision-DNNF size, and hence on the running time of exact model counters.
Outline

Review of DPLL-based algorithms
▫ Extensions (Caching & Component Analysis)
▫ Knowledge Compilation (FBDD & Decision-DNNF)
Our Contributions
▫ Decision-DNNF to FBDD conversion
▫ Implications of the conversion
▫ Applications to Probabilistic Databases
Conclusions
DPLL Algorithms
Davis, Putnam, Logemann, Loveland [Davis et al. ’60, ’62]

F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)

[Figure: the DPLL decision tree for F, assuming the uniform distribution for simplicity. Branching on x gives the residual formulas y ∧ (u ∨ w) (probability 3/8) for x = 0 and u ∨ w ∨ z (probability 7/8) for x = 1, passing through the subformulas u ∨ w (probability ¾) and w (probability ½); combining, Pr(F) = ½ · 3/8 + ½ · 7/8 = 5/8.]

// basic DPLL:
Function Pr(F):
    if F = false then return 0
    if F = true then return 1
    select a variable x
    return ½ · Pr(F_{x=0}) + ½ · Pr(F_{x=1})
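A runnable version of this pseudocode, a sketch using a CNF encoding of my own (uniform distribution, as on the slide):

```python
from fractions import Fraction

# A CNF is a frozenset of clauses; a clause is a frozenset of literals;
# a literal is a (variable, polarity) pair, e.g. ('x', False) means ¬x.
def condition(cnf, var, value):
    """Restrict the CNF by the assignment var := value."""
    out = set()
    for clause in cnf:
        if (var, value) in clause:
            continue                                   # clause satisfied: drop it
        out.add(frozenset(l for l in clause if l[0] != var))
    return frozenset(out)

def pr(cnf):
    """The basic DPLL procedure from the slide."""
    if frozenset() in cnf:                             # an empty clause: F = false
        return Fraction(0)
    if not cnf:                                        # no clauses left: F = true
        return Fraction(1)
    x = next(iter(next(iter(cnf))))[0]                 # select a variable
    return (pr(condition(cnf, x, False)) + pr(condition(cnf, x, True))) / 2

# F = (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)
F = frozenset({
    frozenset({('x', True), ('y', True)}),
    frozenset({('x', True), ('u', True), ('w', True)}),
    frozenset({('x', False), ('u', True), ('w', True), ('z', True)}),
})
print(pr(F))   # 5/8, as in the trace on this slide
```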
DPLL Algorithms

F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)

[Figure: the same DPLL run as before, drawn as a tree.]

The trace is a Decision-Tree for F.
Extensions to DPLL

Caching subformulas
Component analysis
Conflict-directed clause learning
▫ affects the efficiency of the algorithm, but not the final “form” of the trace

Traces of:
DPLL + caching (+ clause learning) → FBDD
DPLL + caching + component analysis (+ clause learning) → Decision-DNNF
Caching

F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)

[Figure: the DPLL tree for F; the repeated subformula u ∨ w is computed once and looked up the second time it arises.]

// DPLL with caching (the basic procedure as above, plus):
Cache F and Pr(F); look it up before computing.
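A runnable sketch of the caching extension, reusing `condition` and the CNF encoding from the earlier block; since the residual CNF is hashable, it can serve directly as the cache key:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)            # the cache: residual CNF -> its probability
def pr_cached(cnf):
    """DPLL with caching: look Pr(F) up before computing it."""
    if frozenset() in cnf:
        return Fraction(0)
    if not cnf:
        return Fraction(1)
    x = next(iter(next(iter(cnf))))[0]
    return (pr_cached(condition(cnf, x, False)) +
            pr_cached(condition(cnf, x, True))) / 2
```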
Caching & FBDDs

F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)

[Figure: with caching the trace becomes a DAG; the shared node for u ∨ w has two incoming edges.]

The trace is a decision-DAG for F: an FBDD (Free Binary Decision Diagram), also called an ROBP (Read-Once Branching Program).
▫ Every variable is tested at most once on any path.
▫ All internal nodes are decision nodes.
Component Analysis

F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)

[Figure: at x = 0 the residual formula y ∧ (u ∨ w) splits into the independent components y and u ∨ w.]

// DPLL with component analysis (and caching):
if F = G ∧ H where G and H have disjoint sets of variables:
    Pr(F) = Pr(G) × Pr(H)
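A sketch of component analysis on top of the same encoding (again reusing `condition`; the component-splitting helper is my own):

```python
from fractions import Fraction
from functools import lru_cache

def components(cnf):
    """Partition the clauses into groups over disjoint variables: clauses
    sharing a variable, directly or transitively, land in the same group."""
    groups = []                                   # (varset, clauseset) pairs
    for clause in cnf:
        vs = {lit[0] for lit in clause}
        merged_vars, merged_clauses = set(vs), {clause}
        rest = []
        for gvars, gclauses in groups:
            if gvars & vs:                        # shares a variable: merge
                merged_vars |= gvars
                merged_clauses |= gclauses
            else:
                rest.append((gvars, gclauses))
        groups = rest + [(merged_vars, merged_clauses)]
    return [frozenset(c) for _, c in groups]

@lru_cache(maxsize=None)
def pr_comp(cnf):
    """DPLL with caching + component analysis:
    if F = G ∧ H over disjoint variables, Pr(F) = Pr(G) × Pr(H)."""
    if frozenset() in cnf:
        return Fraction(0)
    if not cnf:
        return Fraction(1)
    comps = components(cnf)
    if len(comps) > 1:
        result = Fraction(1)
        for g in comps:
            result *= pr_comp(g)
        return result
    x = next(iter(next(iter(cnf))))[0]
    return (pr_comp(condition(cnf, x, False)) +
            pr_comp(condition(cnf, x, True))) / 2
```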
Components & Decision-DNNF

F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)

[Figure: the trace now contains an AND node whose two children are the independent sub-DAGs for y and u ∨ w; the other internal nodes are decision nodes.]

The trace is a Decision-DNNF [Huang-Darwiche ’05, ’07]: an FBDD plus “decomposable” AND nodes (the two sub-DAGs of an AND node do not share variables).

How much power do AND nodes add?
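Why these compiled forms are "computation-friendly": Pr(F) is a single bottom-up pass over the DAG, with one multiplication per decomposable AND node. A minimal evaluator, with a node encoding of my own:

```python
from fractions import Fraction

# Node encoding (mine): ('leaf', 0|1), ('decision', var, low, high),
# ('and', left, right) where left/right are over disjoint variables.
def pr_dnnf(node, pr, memo=None):
    memo = {} if memo is None else memo
    key = id(node)                       # shared sub-DAGs are evaluated once
    if key not in memo:
        kind = node[0]
        if kind == 'leaf':
            memo[key] = Fraction(node[1])
        elif kind == 'decision':
            _, v, low, high = node
            memo[key] = ((1 - pr[v]) * pr_dnnf(low, pr, memo)
                         + pr[v] * pr_dnnf(high, pr, memo))
        else:                            # decomposable AND node
            _, left, right = node
            memo[key] = pr_dnnf(left, pr, memo) * pr_dnnf(right, pr, memo)
    return memo[key]

# F restricted to x = 0 in the running example: y ∧ (u ∨ w)
u_or_w = ('decision', 'u',
          ('decision', 'w', ('leaf', 0), ('leaf', 1)), ('leaf', 1))
f = ('and', ('decision', 'y', ('leaf', 0), ('leaf', 1)), u_or_w)
half = {v: Fraction(1, 2) for v in 'yuw'}
print(pr_dnnf(f, half))   # 3/8, matching the slide
```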
Main Technical Result

Decision-DNNF → FBDD, by an efficient construction:
▫ size N → size N^(log N + 1) (quasipolynomial)
▫ size N → size N^k (polynomial) when the formula is a k-DNF, e.g. a 3-DNF: (¬x ∧ y ∧ z) ∨ (w ∧ y ∧ z)
Outline

Review of DPLL algorithms
▫ Extensions (Caching & Component Analysis)
▫ Knowledge Compilation (FBDDs & Decision-DNNF)
Our Contributions
▫ Decision-DNNF to FBDD conversion
▫ Implications of the conversion
▫ Applications to Probabilistic Databases
Conclusions
Decision-DNNF → FBDD

We need to convert all AND nodes into decision nodes while still representing the same formula F.
A Simple Idea

[Figure: an AND node with children G and H is simulated by chaining: evaluate G first, and redirect G's 1-sink into the root of H, so that reaching 1 requires satisfying both G and H.]

G and H do not share variables, so every variable is still tested at most once on any path; the result is an FBDD.
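A sketch of this chaining step, assuming a simple node-table encoding of my own; as the next slide shows, it is only sound when G's nodes are not shared with the rest of the DAG:

```python
# nodes maps an id to ('decision', var, lo, hi); 'T' and 'F' are terminals.
def reachable(root, nodes):
    """All decision nodes reachable from root."""
    seen, stack = set(), [root]
    while stack:
        n = stack.pop()
        if n in ('T', 'F') or n in seen:
            continue
        seen.add(n)
        _, _, lo, hi = nodes[n]
        stack += [lo, hi]
    return seen

def chain_and(g_root, h_root, nodes):
    """Simulate AND(G, H) with decision nodes only: every edge in G that
    pointed to the 1-sink is redirected to the root of H."""
    for n in reachable(g_root, nodes):
        kind, var, lo, hi = nodes[n]
        nodes[n] = (kind, var,
                    h_root if lo == 'T' else lo,
                    h_root if hi == 'T' else hi)
    return g_root
```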
But, what if sub-DAGs are shared?

[Figure: G and H are shared: other nodes of the decision-DNNF (g′, h) also have edges into them. Chaining then conflicts: one use of G needs its 1-sink to continue into H, while another path reaching G needs it to go elsewhere.]

Conflict!
Obvious Solution: Replicate Nodes

[Figure: the AND node gets private copies of G and H; with no sharing there is no conflict, and the simple idea applies.]

But replication may be needed recursively, and can cause an exponential blowup!
Main Idea: Replicate the Smaller Sub-DAG

[Figure: an AND node with a smaller sub-DAG and a larger sub-DAG; edges from other nodes of the decision-DNNF may enter either one.]

Each AND node creates a private copy of its smaller sub-DAG.
Light and Heavy Edges

[Figure: at each AND node, call the edge into the smaller sub-DAG light and the edge into the larger sub-DAG heavy.]

Each AND node creates a private copy of its smaller sub-DAG, and this replication applies recursively inside smaller sub-DAGs. As a result:
#copies of a node u = #sequences of light edges leading to u.
Quasipolynomial Conversion

Let L be the maximum number of light edges on any path. A light edge always enters the smaller of the two sub-DAGs, so each light edge at least halves the remaining size; the counting argument is spelled out below.
We also show that our analysis is tight.
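In the notation of the slide (logs base 2; N_small ≤ N_big at every AND node):

$$N = N_{\mathrm{small}} + N_{\mathrm{big}} \;\ge\; 2\,N_{\mathrm{small}} \;\ge\; \cdots \;\ge\; 2^{L} \quad\Longrightarrow\quad L \;\le\; \log_2 N$$

$$\#\text{copies of each node} \;\le\; N^{L} \;\le\; N^{\log_2 N}, \qquad \#\text{nodes in the FBDD} \;\le\; N \cdot N^{\log_2 N} \;=\; N^{\log_2 N + 1}$$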
Polynomial Conversion for k-DNFs

For a k-DNF, L = the maximum number of light edges on any path is at most k − 1, so:
#nodes in the FBDD ≤ N · N^L ≤ N · N^(k−1) = N^k
Outline

Review of DPLL algorithms
▫ Extensions (Caching & Component Analysis)
▫ Knowledge Compilation (FBDDs & Decision-DNNF)
Our Contributions
▫ Decision-DNNF to FBDD conversion
▫ Implications of the conversion
▫ Applications to Probabilistic Databases
Conclusions
Separation Results

[Diagram: containments among the four classes: FBDD ⊆ Decision-DNNF, and Decision-DNNF ⊆ AND-FBDD and Decision-DNNF ⊆ d-DNNF.]

FBDD: a decision-DAG in which each variable is tested at most once along any path.
Decision-DNNF: FBDD + decomposable AND nodes (disjoint sub-DAGs).
AND-FBDD: FBDD + AND nodes, not necessarily decomposable [Wegener ’00].
d-DNNF: decomposable AND nodes + OR nodes whose sub-DAGs are not simultaneously satisfiable [Darwiche ’01, Darwiche-Marquis ’02].

Exponential separation: there are formulas with poly-size AND-FBDDs or d-DNNFs that require exponential decision-DNNF size.
Outline

Review of DPLL algorithms
▫ Extensions (Caching & Component Analysis)
▫ Knowledge Compilation (FBDDs & Decision-DNNF)
Our Contributions
▫ Decision-DNNF to FBDD conversion
▫ Implications of the conversion
▫ Applications to Probabilistic Databases
Conclusions
Probabilistic Databases

Database D:
AsthmaPatient: Ann, Bob
Friend: (Ann, Joe), (Ann, Tom), (Bob, Tom)
Smoker: Joe, Tom

Boolean query Q: ∃x ∃y AsthmaPatient(x) ∧ Friend(x, y) ∧ Smoker(y)

Tuples are probabilistic (and independent)
▫ “Ann” is present with probability 0.3
What is the probability that Q is true on D?
▫ Assign a unique Boolean variable to each tuple (x1, x2 for AsthmaPatient; y1, y2, y3 for Friend; z1, z2 for Smoker), each with its tuple's probability, e.g. Pr(x1) = 0.3 (the figure lists the probabilities 0.3, 0.1, 0.5, 1.0, 0.9, 0.5, 0.7). This yields the Boolean formula
F_{Q,D} = (x1 ∧ y1 ∧ z1) ∨ (x1 ∧ y2 ∧ z2) ∨ (x2 ∧ y3 ∧ z2)
▫ Q is true on D ⇔ F_{Q,D} is true
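A brute-force check of the lineage probability (only Pr(x1) = 0.3 is stated explicitly on the slide; the remaining values below read the figure positionally and should be treated as illustrative):

```python
from itertools import product

pr = {'x1': 0.3, 'x2': 0.1, 'z1': 0.5, 'z2': 1.0,
      'y1': 0.9, 'y2': 0.5, 'y3': 0.7}

# Lineage of Q on D, a positive DNF over the tuple variables.
def F_QD(a):
    return ((a['x1'] and a['y1'] and a['z1']) or
            (a['x1'] and a['y2'] and a['z2']) or
            (a['x2'] and a['y3'] and a['z2']))

names = sorted(pr)
total = 0.0
for bits in product([False, True], repeat=len(names)):
    a = dict(zip(names, bits))
    if F_QD(a):
        weight = 1.0
        for v in names:
            weight *= pr[v] if a[v] else 1 - pr[v]
        total += weight
print("Pr(F_QD) =", total)   # enumeration over all 2^7 possible worlds
```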
Probabilistic Databases

F_{Q,D} = (x1 ∧ y1 ∧ z1) ∨ (x1 ∧ y2 ∧ z2) ∨ (x2 ∧ y3 ∧ z2)

Probability Computation Problem: compute Pr(F_{Q,D}) given Pr(x1), Pr(x2), ….
F_{Q,D} can be written as a k-DNF
▫ for fixed, monotone queries Q
For an important class of queries Q, we get exponential lower bounds on decision-DNNFs, and hence on model counting algorithms.
Outline

Review of DPLL algorithms
▫ Extensions (Caching & Component Analysis)
▫ Knowledge Compilation (FBDDs & Decision-DNNF)
Our Contributions
▫ Decision-DNNF to FBDD conversion
▫ Implications of the conversion
▫ Applications to Probabilistic Databases
Conclusions
Summary

Quasipolynomial conversion of any decision-DNNF into an FBDD (polynomial for k-DNFs)
Exponential lower bounds on model counting algorithms
d-DNNFs and AND-FBDDs are exponentially more powerful than decision-DNNFs
Applications in probabilistic databases
Open Problems

Is there a polynomial conversion of decision-DNNFs to FBDDs?
Is there a more powerful syntactic subclass of d-DNNFs than decision-DNNFs?
▫ d-DNNF is a semantic concept
▫ there is no efficient algorithm to test whether the two sub-DAGs of an OR node are simultaneously satisfiable
Approximate model counting?
Thank You
Questions?