Markov Logic Pedro Domingos Dept. of Computer Science & Eng. University of Washington

Desiderata
A language for cognitive modeling should:
Handle uncertainty: noise, incomplete information, ambiguity
Handle complexity: many objects, relations among them, IsA and IsPart hierarchies, etc.

Solution Probability handles uncertainty Logic handles complexity What is the simplest way to combine the two?

Markov Logic Assign weights to logical formulas Treat formulas as templates for features of Markov networks

Overview Representation Inference Learning Applications

Propositional Logic
Atoms: symbols representing propositions
Logical connectives: ¬, Λ, V, etc.
Knowledge base: set of formulas
World: truth assignment to all atoms
Every KB can be converted to CNF
CNF: conjunction of clauses; clause: disjunction of literals; literal: atom or its negation
Entailment: does the KB entail the query?

First-Order Logic
Atom: Predicate(Variables, Constants)
Ground atom: all arguments are constants
Quantifiers: ∀, ∃
This tutorial: finite, Herbrand interpretations

Markov Networks
Undirected graphical models
Nodes: Smoking, Cancer, Asthma, Cough
Potential functions defined over cliques, e.g. Φ(Smoking, Cancer):

Smoking  Cancer  Φ(S,C)
False    False   4.5
False    True    4.5
True     False   2.7
True     True    4.5
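As a concrete illustration (a minimal sketch, not from the slides), a single clique potential like the table above already defines a joint distribution over its two variables once normalized by the partition function Z; in the full network, a world's unnormalized weight is the product of all its clique potentials.

```python
# Minimal sketch (assumed, not from the slides): the distribution defined by the
# single clique potential Phi(Smoking, Cancer) in the table above.
from itertools import product

phi = {  # Phi(S, C)
    (False, False): 4.5,
    (False, True):  4.5,
    (True,  False): 2.7,
    (True,  True):  4.5,
}

worlds = list(product([False, True], repeat=2))
Z = sum(phi[w] for w in worlds)          # partition function
P = {w: phi[w] / Z for w in worlds}      # P(S, C) = Phi(S, C) / Z
print(P[(True, True)])                   # e.g. P(Smoking=True, Cancer=True)
```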

Markov Networks
Undirected graphical models (nodes: Smoking, Cancer, Asthma, Cough)
Log-linear model: P(x) = (1/Z) exp( Σ_i w_i f_i(x) ), where w_i is the weight of feature i and f_i(x) is feature i

Probabilistic Knowledge Bases
PKB = Set of formulas and their probabilities + consistency + maximum entropy
    = Set of formulas and their weights
    = Set of formulas and their potentials (1 if formula true, Φ if formula false)

Markov Logic
A Markov Logic Network (MLN) is a set of pairs (F, w), where F is a formula in first-order logic and w is a real number.
An MLN defines a Markov network with:
One node for each grounding of each predicate in the MLN
One feature for each grounding of each formula F in the MLN, with the corresponding weight w
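To make the grounding step concrete, here is a toy Python sketch (the formula Smokes(x) => Cancer(x), the constants, and the weight are illustrative assumptions, not taken from these slides):

```python
# Toy grounding sketch: one weighted first-order formula, Smokes(x) => Cancer(x),
# grounded over a finite set of constants as the slide describes.
constants = ["Anna", "Bob"]          # illustrative constants
w = 1.5                              # illustrative weight of the formula

# One node per grounding of each predicate:
ground_atoms = [f"Smokes({c})" for c in constants] + [f"Cancer({c})" for c in constants]

# One feature per grounding of the formula, each carrying the weight w:
def feature(world, c):
    return (not world[f"Smokes({c})"]) or world[f"Cancer({c})"]

def score(world):
    """Exponent of the log-linear model: w summed over the true groundings."""
    return sum(w * feature(world, c) for c in constants)
```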

Relation to Statistical Models
Special cases (obtained by making all predicates zero-arity): Markov networks, Markov random fields, Bayesian networks, log-linear models, exponential models, max. entropy models, Gibbs distributions, Boltzmann machines, logistic regression, hidden Markov models, conditional random fields
Markov logic allows objects to be interdependent (non-i.i.d.)
Markov logic facilitates composition

Relation to First-Order Logic
Infinite weights → first-order logic
Satisfiable KB, positive weights → satisfying assignments = modes of the distribution
Markov logic allows contradictions between formulas

Example

Overview Representation Inference Learning Applications

Theorem Proving
TP(KB, Query)
  KB_Q ← KB ∪ {¬Query}
  return ¬SAT(CNF(KB_Q))

Satisfiability (DPLL)
SAT(CNF)
  if CNF is empty return True
  if CNF contains an empty clause return False
  choose an atom A
  return SAT(CNF(A)) V SAT(CNF(¬A))
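A minimal Python sketch of this DPLL procedure (an assumed encoding, not from the slides: a clause is a set of (atom, truth-value) literals):

```python
# Minimal DPLL sketch. A CNF is a list of clauses; a clause is a set of literals;
# a literal is a pair (atom, value) that is satisfied when atom == value.
def condition(cnf, atom, value):
    """Simplify the CNF given atom := value: drop satisfied clauses,
    delete falsified literals from the remaining ones."""
    new_cnf = []
    for clause in cnf:
        if (atom, value) in clause:
            continue                              # clause already satisfied
        new_cnf.append({lit for lit in clause if lit[0] != atom})
    return new_cnf

def atoms(cnf):
    return {a for clause in cnf for (a, _) in clause}

def sat(cnf):
    if not cnf:                                   # CNF is empty: satisfied
        return True
    if any(not clause for clause in cnf):         # empty clause: unsatisfiable
        return False
    a = next(iter(atoms(cnf)))                    # choose an atom A
    return sat(condition(cnf, a, True)) or sat(condition(cnf, a, False))

# Entailment as on the earlier slide: KB |= Query iff KB ∪ {¬Query} is unsatisfiable.
kb = [{("P", True)}, {("P", False), ("Q", True)}]    # P, and P => Q
print(not sat(kb + [{("Q", False)}]))                # True: KB entails Q
```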

First-Order Theorem Proving
Propositionalization:
1. Form all possible ground atoms
2. Apply a propositional theorem prover
Lifted inference: resolution
Resolve pairs of clauses until the empty clause is derived
Unify literals by substitution

Probabilistic Theorem Proving Given Probabilistic knowledge base K Query formula Q Output P(Q|K)

Weighted Model Counting
ModelCount(CNF) = number of worlds that satisfy the CNF
Assign a weight to each literal
Weight(world) = Π weights(true literals)
Weighted model counting: given a CNF C and literal weights W, output Σ weights(worlds that satisfy C)
PTP is reducible to lifted WMC
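A brute-force version of this definition (a sketch under the same assumed CNF encoding as in the DPLL sketch above; exponential in the number of atoms, purely for illustration):

```python
# Brute-force weighted model counting by enumerating all worlds.
# weights maps each literal (atom, value) to a nonnegative weight.
from itertools import product

def satisfies(world, cnf):
    return all(any(world[a] == v for (a, v) in clause) for clause in cnf)

def weight_of_world(world, weights):
    w = 1.0
    for lit in world.items():                     # product of weights of true literals
        w *= weights[lit]
    return w

def wmc_brute(cnf, weights, atom_names):
    total = 0.0
    for values in product([False, True], repeat=len(atom_names)):
        world = dict(zip(atom_names, values))
        if satisfies(world, cnf):                 # sum over worlds that satisfy the CNF
            total += weight_of_world(world, weights)
    return total
```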

Example

Example

Inference Problems

Propositional Case
All conditional probabilities are ratios of partition functions: P(Q | K) = Z(K ∪ {Q}) / Z(K)
All partition functions can be computed by weighted model counting

Conversion to CNF + Weights
WCNF(PKB)
  for all (F_i, Φ_i) ∈ PKB s.t. Φ_i > 0 do
    PKB ← PKB ∪ {(F_i ↔ A_i, 0)} \ {(F_i, Φ_i)}
  CNF ← CNF(PKB)
  for all ¬A_i literals do w_¬Ai ← Φ_i
  for all other literals L do w_L ← 1
  return (CNF, weights)

Probabilistic Theorem Proving
PTP(PKB, Query)
  PKB_Q ← PKB ∪ {(Query, 0)}
  return WMC(WCNF(PKB_Q)) / WMC(WCNF(PKB))

Probabilistic Theorem Proving
Compare:
PTP(PKB, Query)
  PKB_Q ← PKB ∪ {(Query, 0)}
  return WMC(WCNF(PKB_Q)) / WMC(WCNF(PKB))
TP(KB, Query)
  KB_Q ← KB ∪ {¬Query}
  return ¬SAT(CNF(KB_Q))
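Using the brute-force counter sketched earlier, the propositional version of PTP is just this ratio (a sketch assuming the PKB has already been converted to a CNF plus literal weights, as in WCNF above, and that cnf_query is CNF(Query)):

```python
def ptp(cnf_kb, cnf_query, weights, atom_names):
    """P(Query | PKB) as a ratio of weighted model counts (propositional sketch).
    Adding the query as a hard formula corresponds to conjoining its CNF."""
    numerator = wmc_brute(cnf_kb + cnf_query, weights, atom_names)
    return numerator / wmc_brute(cnf_kb, weights, atom_names)
```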

Weighted Model Counting (base case)
WMC(CNF, weights)
  if all clauses in CNF are satisfied return Π_A (w_A + w_¬A) over the remaining atoms A
  if CNF has an empty unsatisfied clause return 0

Weighted Model Counting (decomposition step)
WMC(CNF, weights)
  if all clauses in CNF are satisfied return Π_A (w_A + w_¬A)
  if CNF has an empty unsatisfied clause return 0
  if CNF can be partitioned into CNFs C_1, …, C_k sharing no atoms return Π_i WMC(C_i, weights)

Weighted Model Counting (splitting step)
WMC(CNF, weights)
  if all clauses in CNF are satisfied return Π_A (w_A + w_¬A)
  if CNF has an empty unsatisfied clause return 0
  if CNF can be partitioned into CNFs C_1, …, C_k sharing no atoms return Π_i WMC(C_i, weights)
  choose an atom A
  return w_A · WMC(CNF | A=true, weights) + w_¬A · WMC(CNF | A=false, weights)
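A self-contained Python sketch of this recursion with the base, decomposition, and splitting steps (the propositional encoding is an assumption carried over from the earlier sketches; the lifted version on the later slides additionally exploits symmetries among groundings):

```python
# Recursive weighted model counting: base case, decomposition, splitting.
def atoms(cnf):
    return {a for clause in cnf for (a, _) in clause}

def condition(cnf, atom, value):
    """Drop clauses satisfied by atom := value, remove falsified literals elsewhere."""
    return [{l for l in c if l[0] != atom} for c in cnf if (atom, value) not in c]

def components(cnf):
    """Partition clauses into groups that share no atoms."""
    groups = []
    for clause in cnf:
        touching = [g for g in groups if atoms(g) & {a for a, _ in clause}]
        groups = [g for g in groups if g not in touching] + [sum(touching, [clause])]
    return groups

def wmc(cnf, weights, unassigned):
    if any(not c for c in cnf):                       # empty unsatisfied clause
        return 0.0
    if not cnf:                                       # all clauses satisfied:
        prod = 1.0                                    # free atoms contribute w_A + w_notA
        for a in unassigned:
            prod *= weights[(a, True)] + weights[(a, False)]
        return prod
    groups = components(cnf)
    if len(groups) > 1:                               # decomposition step
        result, free = 1.0, set(unassigned)
        for g in groups:
            result *= wmc(g, weights, atoms(g))
            free -= atoms(g)
        for a in free:
            result *= weights[(a, True)] + weights[(a, False)]
        return result
    a = next(iter(atoms(cnf)))                        # splitting step
    rest = set(unassigned) - {a}
    return (weights[(a, True)]  * wmc(condition(cnf, a, True),  weights, rest) +
            weights[(a, False)] * wmc(condition(cnf, a, False), weights, rest))
```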

First-Order Case
The PTP schema remains the same
Conversion of the PKB to a hard CNF and weights: the new atom in F_i ↔ A_i is now Predicate_i(variables in F_i, constants in F_i)
New argument in WMC: a set of substitution constraints of the form x = A, x ≠ A, x = y, x ≠ y
Lift each step of WMC

Lifted Weighted Model Counting (base case)
LWMC(CNF, substs, weights)
  if all clauses in CNF are satisfied return …
  if CNF has an empty unsatisfied clause return 0

Lifted Weighted Model Counting (decomposition step)
LWMC(CNF, substs, weights)
  if all clauses in CNF are satisfied return …
  if CNF has an empty unsatisfied clause return 0
  if there exists a lifted decomposition of CNF return …

Lifted Weighted Model Counting (splitting step)
LWMC(CNF, substs, weights)
  if all clauses in CNF are satisfied return …
  if CNF has an empty unsatisfied clause return 0
  if there exists a lifted decomposition of CNF return …
  choose an atom A
  return …

Extensions Unit propagation, etc. Caching / Memoization Knowledge-based model construction

Approximate Inference (splitting step)
WMC(CNF, weights)
  if all clauses in CNF are satisfied return Π_A (w_A + w_¬A)
  if CNF has an empty unsatisfied clause return 0
  if CNF can be partitioned into CNFs C_1, …, C_k sharing no atoms return Π_i WMC(C_i, weights)
  choose an atom A
  return the contribution of one branch, chosen with some probability, etc. (sampling instead of summing over both branches)

MPE Inference Replace sums by maxes Use branch-and-bound for efficiency Do traceback
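The objective being maximized can be sketched by exhaustive search (same assumed encoding as before); the slide's version gets the same answer far more efficiently by replacing the sums in the WMC recursion with maxes, pruning with branch-and-bound, and tracing back the maximizing choices:

```python
from itertools import product

def mpe_brute(cnf, weights, atom_names):
    """Most probable explanation: the maximum-weight world satisfying the CNF."""
    best_w, best_world = 0.0, None
    for values in product([False, True], repeat=len(atom_names)):
        world = dict(zip(atom_names, values))
        if all(any(world[a] == v for (a, v) in c) for c in cnf):   # satisfies the CNF
            w = 1.0
            for lit in world.items():
                w *= weights[lit]
            if w > best_w:
                best_w, best_world = w, world
    return best_world, best_w
```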

Overview Representation Inference Learning Applications

Learning Data is a relational database Closed world assumption (if not: EM) Learning parameters (weights) Generatively Discriminatively Learning structure (formulas)

Generative Weight Learning
Maximize likelihood
Use gradient ascent or L-BFGS
No local maxima
Gradient: ∂/∂w_i log P_w(x) = n_i(x) − E_w[n_i(x)]
(n_i(x): no. of true groundings of clause i in the data; E_w[n_i(x)]: expected no. of true groundings according to the model)
Requires inference at each step (slow!)
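A toy propositional sketch of this gradient (assumed names and formula; in an MLN n_i is the number of true groundings of clause i, while here each feature is simply a boolean function of the world):

```python
import math
from itertools import product

def model_distribution(features, weights, atom_names):
    """P(x) proportional to exp(sum_i w_i f_i(x)), over all worlds x."""
    worlds = [dict(zip(atom_names, v)) for v in product([False, True], repeat=len(atom_names))]
    scores = [math.exp(sum(w * f(x) for f, w in zip(features, weights))) for x in worlds]
    Z = sum(scores)
    return worlds, [s / Z for s in scores]

def likelihood_gradient(features, weights, atom_names, data_world):
    """d log P_w(x) / d w_i = n_i(x) - E_w[n_i]  (observed minus expected counts)."""
    worlds, probs = model_distribution(features, weights, atom_names)
    grad = []
    for f in features:
        observed = f(data_world)
        expected = sum(p * f(x) for x, p in zip(worlds, probs))
        grad.append(observed - expected)
    return grad

# One gradient-ascent step for an illustrative formula, Smokes => Cancer:
features = [lambda x: (not x["Smokes"]) or x["Cancer"]]
weights = [0.0]
g = likelihood_gradient(features, weights, ["Smokes", "Cancer"], {"Smokes": True, "Cancer": True})
weights = [w + 0.1 * gi for w, gi in zip(weights, g)]
```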

Pseudo-Likelihood Likelihood of each variable given its neighbors in the data [Besag, 1975] Does not require inference at each step Consistent estimator Widely used in vision, spatial statistics, etc. But PL parameters may not work well for long inference chains
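A sketch of the pseudo-likelihood objective for binary variables (assumed setup as in the previous sketch): each variable's conditional only involves flipping that one variable, so no partition function over all worlds is needed.

```python
import math

def pseudo_log_likelihood(features, weights, data_world):
    """log PL(x) = sum_j log P(x_j | x_-j), computed by flipping one variable at a time."""
    def score(x):
        return sum(w * f(x) for f, w in zip(features, weights))
    total = 0.0
    for var in data_world:
        flipped = dict(data_world)
        flipped[var] = not data_world[var]
        s, s_flip = math.exp(score(data_world)), math.exp(score(flipped))
        total += math.log(s / (s + s_flip))       # P(x_var | rest) for a binary variable
    return total
```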

Discriminative Weight Learning
Maximize conditional likelihood of query (y) given evidence (x)
Gradient: ∂/∂w_i log P_w(y|x) = n_i(x, y) − E_w[n_i(x, y)]
(n_i: no. of true groundings of clause i in the data; E_w[n_i]: expected no. of true groundings according to the model)
Expected counts can be approximated by counts in the MAP state of y given x

Voted Perceptron
Originally proposed for training HMMs discriminatively [Collins, 2002]
Assumes the network is a linear chain
w_i ← 0
for t ← 1 to T do
  y_MAP ← Viterbi(x)
  w_i ← w_i + η [count_i(y_Data) − count_i(y_MAP)]
return Σ_t w_i / T

Voted Perceptron for MLNs
HMMs are a special case of MLNs
Replace Viterbi by probabilistic theorem proving
The network can now be an arbitrary graph
w_i ← 0
for t ← 1 to T do
  y_MAP ← PTP(MLN ∪ {x}, y)
  w_i ← w_i + η [count_i(y_Data) − count_i(y_MAP)]
return Σ_t w_i / T
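A sketch of this loop with the inference routine left as a plug-in (map_inference and count_features are hypothetical placeholders standing in for Viterbi / probabilistic theorem proving and the clause-grounding counts; nothing here is Alchemy's actual API):

```python
def voted_perceptron(count_features, map_inference, x, y_data, n_weights, T=100, eta=0.1):
    """w_i += eta * (count_i(y_data) - count_i(y_MAP)); return the average over iterations.
    map_inference(weights, x) should return the MAP assignment of y given x."""
    w = [0.0] * n_weights
    total = [0.0] * n_weights
    for _ in range(T):
        y_map = map_inference(w, x)               # MAP state of y given x under current weights
        c_data = count_features(x, y_data)        # counts in the data
        c_map = count_features(x, y_map)          # counts in the MAP state
        w = [wi + eta * (cd - cm) for wi, cd, cm in zip(w, c_data, c_map)]
        total = [ti + wi for ti, wi in zip(total, w)]
    return [ti / T for ti in total]               # averaged ("voted") weights
```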

Structure Learning Generalizes feature induction in Markov nets Any inductive logic programming approach can be used, but... Goal is to induce any clauses, not just Horn Evaluation function should be likelihood Requires learning weights for each candidate Turns out not to be bottleneck Bottleneck is counting clause groundings Solution: Subsampling

Structure Learning Initial state: Unit clauses or hand-coded KB Operators: Add/remove literal, flip sign Evaluation function: Pseudo-likelihood + Structure prior Search: Beam, shortest-first [Kok & Domingos, 2005] Bottom-up [Mihalkova & Mooney, 2007] Relational pathfinding [Kok & Domingos, 2009, 2010]

Alchemy Open-source software including: Full first-order logic syntax MAP and marginal/conditional inference Generative & discriminative weight learning Structure learning Programming language features alchemy.cs.washington.edu

                 Alchemy                      Prolog             BUGS
Representation   F.O. logic + Markov nets     Horn clauses       Bayes nets
Inference        Probabilistic thm. proving   Theorem proving    Gibbs sampling
Learning         Parameters & structure       No                 Parameters
Uncertainty      Yes                          No                 Yes
Relational       Yes                          Yes                No

Overview Representation Inference Learning Applications

Applications to Date Natural language processing Information extraction Entity resolution Link prediction Collective classification Social network analysis Robot mapping Activity recognition Scene analysis Computational biology Probabilistic Cyc Personal assistants Etc.

Information Extraction Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains” (AAAI-06). Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp ). Boston, MA: AAAI Press. H. Poon & P. Domingos, Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

Segmentation Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains” (AAAI-06). Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp ). Boston, MA: AAAI Press. H. Poon & P. Domingos, Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence. Author Title Venue

Entity Resolution Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains” (AAAI-06). Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp ). Boston, MA: AAAI Press. H. Poon & P. Domingos, Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

State of the Art Segmentation HMM (or CRF) to assign each token to a field Entity resolution Logistic regression to predict same field/citation Transitive closure Alchemy implementation: Seven formulas

Types and Predicates token = {Parag, Singla, and, Pedro,...} field = {Author, Title, Venue} citation = {C1, C2,...} position = {0, 1, 2,...} Token(token, position, citation) InField(position, field, citation) SameField(field, citation, citation) SameCit(citation, citation)

Types and Predicates token = {Parag, Singla, and, Pedro,...} field = {Author, Title, Venue,...} citation = {C1, C2,...} position = {0, 1, 2,...} Token(token, position, citation) InField(position, field, citation) SameField(field, citation, citation) SameCit(citation, citation) Optional

Types and Predicates Evidence token = {Parag, Singla, and, Pedro,...} field = {Author, Title, Venue} citation = {C1, C2,...} position = {0, 1, 2,...} Token(token, position, citation) InField(position, field, citation) SameField(field, citation, citation) SameCit(citation, citation)

token = {Parag, Singla, and, Pedro,...} field = {Author, Title, Venue} citation = {C1, C2,...} position = {0, 1, 2,...} Token(token, position, citation) InField(position, field, citation) SameField(field, citation, citation) SameCit(citation, citation) Types and Predicates Query

Formulas
Token(+t,i,c) => InField(i,+f,c)
InField(i,+f,c) <=> InField(i+1,+f,c)
f != f' => (!InField(i,+f,c) v !InField(i,+f',c))
Token(+t,i,c) ^ InField(i,+f,c) ^ Token(+t,i',c') ^ InField(i',+f,c') => SameField(+f,c,c')
SameField(+f,c,c') <=> SameCit(c,c')
SameField(f,c,c') ^ SameField(f,c',c") => SameField(f,c,c")
SameCit(c,c') ^ SameCit(c',c") => SameCit(c,c")

Formulas
Token(+t,i,c) => InField(i,+f,c)
InField(i,+f,c) ^ !Token(".",i,c) <=> InField(i+1,+f,c)
f != f' => (!InField(i,+f,c) v !InField(i,+f',c))
Token(+t,i,c) ^ InField(i,+f,c) ^ Token(+t,i',c') ^ InField(i',+f,c') => SameField(+f,c,c')
SameField(+f,c,c') <=> SameCit(c,c')
SameField(f,c,c') ^ SameField(f,c',c") => SameField(f,c,c")
SameCit(c,c') ^ SameCit(c',c") => SameCit(c,c")

Results: Segmentation on Cora

Results: Matching Venues on Cora

Summary
Cognitive modeling requires a combination of logical and statistical techniques; we need to unify the two
Markov logic:
Syntax: weighted logical formulas
Semantics: Markov network templates
Inference: probabilistic theorem proving
Learning: statistical inductive logic programming
Many applications to date

Resources
Open-source software / web site: Alchemy (alchemy.cs.washington.edu)
  Learning and inference algorithms
  Tutorials, manuals, etc.
  MLNs, datasets, etc.
  Publications
Book: Domingos & Lowd, Markov Logic, Morgan & Claypool, 2009