Recursive Random Fields
Daniel Lowd, University of Washington
June 29th, 2006 (joint work with Pedro Domingos)
One-Slide Summary
Question: How can we represent uncertainty in relational domains?
State of the art: Markov logic [Richardson & Domingos, 2004]. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (both model forms are written out below):
P(X = x) = 1/Z exp( sum_i wi ni(x) ), where ni(x) is the number of true groundings of formula i in world x.
Problem: Only the top-level conjunction and the universal quantifiers are probabilistic; everything below them remains deterministic.
Solution: Recursive random fields (RRFs). An RRF is an MLN whose features are themselves MLNs.
Inference: Gibbs sampling and iterated conditional modes (ICM). Learning: back-propagation.
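For reference, the two distributions side by side in LaTeX; the MLN form is the standard definition, and the RRF form follows the recursive feature equation given later in this deck:

P(X{=}x) \;=\; \frac{1}{Z}\,\exp\!\Big(\sum_i w_i\, n_i(x)\Big) \qquad \text{(MLN)}

P(X{=}x) \;=\; f_0(x), \qquad f_i(x) \;=\; \frac{1}{Z_i}\,\exp\!\Big(\sum_j w_{ij}\, f_j(x)\Big) \qquad \text{(RRF)}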
Example: Friends and Smokers [Richardson & Domingos, 2004]
Predicates: Smokes(x), Cancer(x), Friends(x,y) (grounded in the sketch below)
We wish to represent beliefs such as:
- Smoking causes cancer.
- Friends of friends are friends (transitivity).
- Everyone has a friend who smokes.
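To make the setup concrete, a minimal Python sketch (not from the talk; the constants are invented) of how these predicates ground out over a small domain:

from itertools import product

constants = ["Anna", "Bob", "Chris"]   # invented domain
unary = ["Smokes", "Cancer"]
binary = ["Friends"]

# A "world" assigns True/False to every ground atom.
ground_atoms = [(p, (c,)) for p in unary for c in constants]
ground_atoms += [(p, args) for p in binary for args in product(constants, repeat=2)]

print(len(ground_atoms))   # 2*3 unary + 3*3 binary = 15 ground atoms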
First-Order Logic
[Diagram: the knowledge base as a formula tree; every node is logical (hard).]
∀x Sm(x) ⇒ Ca(x)
∀x,y,z Fr(x,y) ∧ Fr(y,z) ⇒ Fr(x,z)
∀x ∃y Fr(x,y) ∧ Sm(y)
Markov Logic
[Diagram: the same formula tree, with weights w1, w2, w3 attached to the three formulas. The top-level conjunction and the universal quantifiers become probabilistic, combined as 1/Z exp(w1 · … + w2 · … + w3 · …); the connectives and existentials below them remain logical.]
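To make the root-level combination concrete, an illustrative sketch (the tiny world and the weights are invented, and the transitivity formula is omitted for brevity) of an MLN score as the weighted count of true groundings:

import math

constants = ["Anna", "Bob"]
world = {"Sm(Anna)": True,  "Ca(Anna)": True,
         "Sm(Bob)":  False, "Ca(Bob)":  False,
         "Fr(Anna,Anna)": False, "Fr(Anna,Bob)": True,
         "Fr(Bob,Anna)":  True,  "Fr(Bob,Bob)":  False}

def n1():   # true groundings of: Sm(x) => Ca(x)
    return sum((not world[f"Sm({x})"]) or world[f"Ca({x})"] for x in constants)

def n3():   # true groundings of: exists y. Fr(x,y) ^ Sm(y)
    return sum(any(world[f"Fr({x},{y})"] and world[f"Sm({y})"] for y in constants)
               for x in constants)

w1, w3 = 1.5, 1.1                          # invented weights
score = math.exp(w1 * n1() + w3 * n3())    # unnormalized P(world)
print(n1(), n3(), score)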
Markov Logic
[Same diagram, highlighting the existential subformula ∃y Fr(x,y) ∧ Sm(y).]
Over a domain of n objects, this existential becomes a disjunction of n conjunctions.
Markov Logic
[Same diagram.]
In CNF, each grounding of the existential explodes into 2^n clauses!
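A quick way to see the blowup (an illustrative sketch, not from the talk): grounding the existential gives (Fr(x,a1) ∧ Sm(a1)) ∨ … ∨ (Fr(x,an) ∧ Sm(an)); distributing into CNF picks one literal from each conjunct, so the clause count doubles with every object.

from itertools import product

def cnf_clauses(n):
    """CNF of (Fr(x,a1) ^ Sm(a1)) v ... v (Fr(x,an) ^ Sm(an)): each clause
    picks one literal from every conjunct, so there are 2^n clauses."""
    conjuncts = [[f"Fr(x,a{i})", f"Sm(a{i})"] for i in range(1, n + 1)]
    return list(product(*conjuncts))

for n in (2, 3, 4):
    print(n, len(cnf_clauses(n)))   # 4, 8, 16: the blowup is 2^n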
Recursive Random Fields
[Diagram: the same formula tree, but now every node is probabilistic. The root feature f0 combines f1,x (smoking causes cancer), f2,x,y,z (transitivity), and f3,x (everyone has a friend who smokes); f3,x in turn combines f4,x,y (Fr(x,y) ∧ Sm(y)). Weights w1 … w11 attach to every edge, down to the leaf predicates Sm(x), Ca(x), Fr(x,y), Fr(y,z), Fr(x,z).]
Where: fi,… = 1/Zi exp( sum of weighted child features )
The RRF Model
RRF features are parameterized and are grounded using objects in the domain.
Leaves = predicates: a leaf feature is the truth value (0 or 1) of a ground atom, e.g. fi,x = Sm(x).
Recursive features are built up from other RRF features: fi(x) = 1/Zi exp( sum_j wij fj(x) ).
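A minimal sketch of this recursion (illustrative only; the class names are invented and the Zi normalization is left as a plain constant):

import math

class Leaf:
    """A ground atom; its value is read straight from the world."""
    def __init__(self, atom):
        self.atom = atom
    def value(self, world):
        return 1.0 if world[self.atom] else 0.0

class Feature:
    """A recursive RRF feature: f = exp(sum_j w_j * child_j) / Z."""
    def __init__(self, children, weights, z=1.0):
        self.children, self.weights, self.z = children, weights, z
    def value(self, world):
        total = sum(w * c.value(world) for w, c in zip(self.weights, self.children))
        return math.exp(total) / self.z

# A soft conjunction of two atoms, as in the AND slide below:
world = {"Sm(Anna)": True, "Ca(Anna)": False}
f = Feature([Leaf("Sm(Anna)"), Leaf("Ca(Anna)")], [1.5, 1.5])
print(f.value(world))   # exp(1.5*1 + 1.5*0) / 1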
Representing Logic: AND
(x ∧ y) ≈ 1/Z exp(w1 x + w2 y)
[Plot: P(World) as a function of the number of true literals (0 … n); probability peaks when all literals are true.]
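To see the "soft" behavior, an illustrative sketch with made-up weights: enumerate all four worlds over x and y, weight each by exp(w1 x + w2 y), and normalize. The all-true world is most likely, but no world gets probability zero, unlike a hard conjunction.

import math
from itertools import product

w1 = w2 = 2.0
worlds = list(product([0, 1], repeat=2))
scores = {(x, y): math.exp(w1 * x + w2 * y) for x, y in worlds}
Z = sum(scores.values())
for world, s in scores.items():
    print(world, round(s / Z, 3))
# (1, 1) gets the highest probability; (0, 0) the lowest, but not zero.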
Representing Logic: OR
De Morgan: (x ∨ y) ⇔ ¬(¬x ∧ ¬y)
So disjunction is a negated soft conjunction of negations:
(x ∨ y) ≈ −1/Z exp(−w1 x − w2 y)
[Plot: P(World) as a function of the number of true literals.]
Representing Logic: FORALL
A universal quantifier is a conjunction over all groundings x1, x2, …, with a single tied weight w:
∀a: f(a) ≈ 1/Z exp(w x1 + w x2 + …)
[Plot: P(World) as a function of the number of true literals.]
Representing Logic: EXIST
By De Morgan again, an existential is a negated universal over negated literals:
∃a: f(a) ⇔ ¬(∀a: ¬f(a)) ≈ −1/Z exp(−w x1 − w x2 − …)
[Plot: P(World) as a function of the number of true literals.]
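Putting FORALL and EXIST together, an illustrative sketch (the weights are invented, and each child feature's Zi is handled crudely as the sum of absolute scores): the soft universal concentrates mass on the all-true world, while the soft existential only penalizes the all-false world.

import math
from itertools import product

w, W, n = 2.0, 3.0, 3   # child tied weight, parent weight, number of groundings
worlds = list(product([0, 1], repeat=n))

def normalized(score_fn):
    Z = sum(abs(score_fn(ws)) for ws in worlds)   # crude stand-in for 1/Zi
    return lambda ws: score_fn(ws) / Z

forall_child = normalized(lambda ws: math.exp(w * sum(ws)))
exists_child = normalized(lambda ws: -math.exp(-w * sum(ws)))  # De Morgan dual

def world_distribution(child):
    scores = {ws: math.exp(W * child(ws)) for ws in worlds}
    Z = sum(scores.values())
    return {ws: round(scores[ws] / Z, 3) for ws in worlds}

print(world_distribution(forall_child))  # mass concentrates on (1,1,1)
print(world_distribution(exists_child))  # only (0,0,0) is penalized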
Distributions MLNs and RRFs can compactly represent:

Distribution                     | MLNs | RRFs
Propositional MRF                | Yes  | Yes
Deterministic KB                 | Yes  | Yes
Soft conjunction                 | Yes  | Yes
Soft universal quantification    | Yes  | Yes
Soft disjunction                 | No   | Yes
Soft existential quantification  | No   | Yes
Soft nested formulas             | No   | Yes
Inference and Learning
Inference:
- MAP: iterated conditional modes (ICM) (see the sketch after this slide)
- Conditional probabilities: Gibbs sampling
Learning:
- Back-propagation
- RRF weight learning is more powerful than MLN structure learning
- More flexible theory revision
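As a rough illustration of the MAP procedure named above, a generic ICM sketch (not the authors' code; unnormalized_log_prob is an assumed stand-in for the RRF's log-score):

def icm(atoms, world, unnormalized_log_prob, max_iters=100):
    """Iterated conditional modes: set each ground atom to the value that
    maximizes the world's (unnormalized) score given all other atoms,
    sweeping until no single flip improves the score."""
    for _ in range(max_iters):
        changed = False
        for atom in atoms:
            old = world[atom]
            best_val, best_score = old, float("-inf")
            for val in (False, True):
                world[atom] = val
                score = unnormalized_log_prob(world)
                if score > best_score:
                    best_val, best_score = val, score
            world[atom] = best_val
            if best_val != old:
                changed = True
        if not changed:
            break   # local optimum: no single-atom change helps
    return world

# Toy usage: a scorer that rewards p and q agreeing.
atoms = ["p", "q"]
world = {"p": False, "q": True}
print(icm(atoms, world, lambda w: 1.0 if w["p"] == w["q"] else 0.0))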
Current Work: Probabilistic Integrity Constraints
We want to represent a probabilistic version of integrity constraints. [The constraint formula shown on this slide was not captured in the transcript.]
Conclusion
Recursive random fields:
+ Compactly represent many distributions that MLNs cannot
+ Make conjunctions, existentials, and nested formulas probabilistic
+ Offer new methods for structure learning and theory revision
− Are less intuitive than Markov logic