Review: Markov Logic Networks (Matthew Richardson and Pedro Domingos). Xinran (Sean) Luo, u0866707.

Overview: Markov Networks, First-order Logic, Markov Logic Networks, Inference, Learning, Experiments.

Markov Networks Also known as Markov random fields. Composed of ◦ an undirected graph G ◦ a set of potential functions φ_k, one per clique of G. The joint distribution is

P(X = x) = (1/Z) ∏_k φ_k(x_{k})

where x_{k} is the state of the k-th clique and Z is the partition function:

Z = Σ_x ∏_k φ_k(x_{k})
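
A toy numeric illustration (not from the slides; the potential values are made up): a three-node binary chain with one potential per edge clique, with Z computed by brute-force enumeration.

```python
import itertools

# Toy Markov network: binary chain x1 - x2 - x3, one potential per edge
# (clique). The potential values are arbitrary illustrative numbers.
def phi(a, b):
    return 2.0 if a == b else 0.5  # favors agreeing neighbors

def unnormalized(x1, x2, x3):
    return phi(x1, x2) * phi(x2, x3)  # product of clique potentials

# Partition function: sum over all 2^3 joint states.
Z = sum(unnormalized(*x) for x in itertools.product([0, 1], repeat=3))

def prob(x1, x2, x3):
    return unnormalized(x1, x2, x3) / Z

print(prob(1, 1, 1))  # all-agree states get the highest probability
```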

Markov Networks Log-linear models: each clique potential function is replaced by an exponentiated weighted sum of features of the state:

P(X = x) = (1/Z) exp( Σ_j w_j f_j(x) )
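
For strictly positive potentials the two forms are equivalent: take binary features f_j and set each weight to the log of the corresponding potential value. A tiny check of this on the chain above (illustrative values, not from the slides):

```python
import itertools
import math

# Same chain as above, rewritten log-linearly: one binary feature per
# edge ("neighbors agree"), with weights set to log(potential value).
W_AGREE, W_DISAGREE = math.log(2.0), math.log(0.5)

def score(x1, x2, x3):
    feats = [(x1 == x2), (x2 == x3)]               # binary features f_j(x)
    return sum(W_AGREE if f else W_DISAGREE for f in feats)

Z = sum(math.exp(score(*x)) for x in itertools.product([0, 1], repeat=3))
print(math.exp(score(1, 1, 1)) / Z)                # matches the product form
```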

First-order Logic A first-order knowledge base is a set of sentences or formulas in first-order logic, constructed from symbols: connectives, quantifiers, constants, variables, functions, predicates, etc.

Syntax for First-Order Logic
Connective → ∨ | ∧ | ⇒ | ⇔
Quantifier → ∃ | ∀
Constant → A | John | Car1
Variable → x | y | z | ...
Predicate → Brother | Owns | ...
Function → father-of | plus | ...
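
For instance, this grammar generates the formulas ∀x Smokes(x) ⇒ Cancer(x) ("smoking causes cancer") and ∀x ∀y Friends(x, y) ⇒ (Smokes(x) ⇔ Smokes(y)) ("friends have similar smoking habits"), the two formulas used in the paper's running example and in the MLN example below.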

Markov Logic Networks A Markov Logic Network (MLN) L is a set of pairs (F_i, w_i) where ◦ F_i is a formula in first-order logic ◦ w_i is a real number

Features of a Markov Logic Network Together with a finite set of constants C, it defines a Markov network M_{L,C} with: ◦ one binary node for each possible grounding of each predicate in L; the node is 1 if the ground atom is true, 0 otherwise ◦ one feature for each possible grounding of each formula F_i in L, with weight w_i; the feature is 1 if the ground formula is true, 0 otherwise
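
Together, these nodes and features give the joint distribution from the paper:

P(X = x) = (1/Z) exp( Σ_i w_i n_i(x) )

where n_i(x) is the number of true groundings of F_i in the world x, and Z sums exp( Σ_i w_i n_i(x) ) over all possible worlds.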

Ground Term A ground term is a term containing no variables. Ground Markov networks built from the same MLN share regularities in structure and parameters: an MLN is a template that, given a set of constants, produces a ground Markov network.

Example of an MLN Suppose we have two constants: Anna (A) and Bob (B). Grounding Smokes and Cancer gives the nodes Cancer(A), Smokes(A), Smokes(B), Cancer(B).

Example of an MLN Suppose we have two constants: Anna (A) and Bob (B). Grounding Friends gives the nodes Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B).

Example of an MLN Suppose we have two constants: Anna (A) and Bob (B). The full ground network connects all the nodes: Cancer(A), Cancer(B), Smokes(A), Smokes(B), Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B).
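
As a concrete illustration (not on the original slides): a brute-force sketch that grounds the two example formulas over {A, B} and computes exact probabilities by enumerating all 2^8 worlds. The weights 1.5 and 1.1 are the ones commonly quoted with this example; treat them as illustrative.

```python
import itertools
import math

# Ground the two example formulas over constants {A, B} and compute the
# joint distribution by enumerating all 2^8 = 256 possible worlds.
CONSTS = ["A", "B"]
ATOMS = ([("Smokes", c) for c in CONSTS] +
         [("Cancer", c) for c in CONSTS] +
         [("Friends", a, b) for a in CONSTS for b in CONSTS])

def n_smoking_cancer(world):
    # True groundings of Smokes(x) => Cancer(x).
    return sum(1 for x in CONSTS
               if not world[("Smokes", x)] or world[("Cancer", x)])

def n_friends_smoke(world):
    # True groundings of Friends(x, y) => (Smokes(x) <=> Smokes(y)).
    return sum(1 for x in CONSTS for y in CONSTS
               if not world[("Friends", x, y)]
               or world[("Smokes", x)] == world[("Smokes", y)])

W1, W2 = 1.5, 1.1  # illustrative formula weights
worlds = [dict(zip(ATOMS, vals))
          for vals in itertools.product([False, True], repeat=len(ATOMS))]
scores = [math.exp(W1 * n_smoking_cancer(w) + W2 * n_friends_smoke(w))
          for w in worlds]
Z = sum(scores)  # the partition function

# Example query: P(Cancer(A) | Smokes(A)).
num = sum(s for w, s in zip(worlds, scores)
          if w[("Cancer", "A")] and w[("Smokes", "A")])
den = sum(s for w, s in zip(worlds, scores) if w[("Smokes", "A")])
print("P(Cancer(A) | Smokes(A)) =", num / den)
```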

MLNs and First-Order Logic First-order KB → assign a weight to each formula → MLN. A satisfiable KB with equal positive weights on every formula represents, in the limit of infinite weights, a uniform distribution over the worlds that satisfy the KB. Unlike a pure logical KB, an MLN produces useful results even when it contains contradictions.

Inference Given that formula F_1 holds, what is the probability that formula F_2 holds? Two steps (approximate): ◦ Find the minimal subset of the ground network required to answer the query. ◦ Run Gibbs sampling (MCMC) over that subnetwork, sampling one ground atom at a time given its Markov blanket (the set of ground atoms that appear in some grounding of a formula with it).

Inference The probability of a ground atom X_l when its Markov blanket B_l is in state b_l is

P(X_l = x_l | B_l = b_l) = exp( Σ_{f_i ∈ F_l} w_i f_i(X_l = x_l, B_l = b_l) ) / [ exp( Σ_{f_i ∈ F_l} w_i f_i(X_l = 0, B_l = b_l) ) + exp( Σ_{f_i ∈ F_l} w_i f_i(X_l = 1, B_l = b_l) ) ]

where F_l is the set of ground formulas in which X_l appears, and f_i is the value (0 or 1) of the i-th such formula.
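
A minimal sketch of this update. The helper name `weighted_sat` is an assumption for illustration: it must return Σ_i w_i f_i over the ground formulas containing the atom, with the atom forced to the given value and its Markov blanket held fixed.

```python
import math
import random

def gibbs_step(weighted_sat):
    """One Gibbs update for a single ground atom.

    weighted_sat(v) (assumed helper): sum of w_i * f_i over the ground
    formulas containing the atom, with the atom set to v (0 or 1) and
    its Markov blanket held fixed.
    """
    s0, s1 = math.exp(weighted_sat(0)), math.exp(weighted_sat(1))
    return 1 if random.random() < s1 / (s0 + s1) else 0

# Example: an atom whose formulas contribute weight 1.5 when it is true, 0 when false.
print(gibbs_step(lambda v: 1.5 * v))
```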

Learning Data comes from a relational database. Strategy: ◦ Count the number of true groundings n_i(x) of each formula in the DB. ◦ Maximize the pseudo-likelihood, whose gradient is

∂/∂w_i log P*_w(X = x) = Σ_l [ n_i(x) − P_w(X_l = 0 | MB_x(X_l)) · n_i(x_[X_l=0]) − P_w(X_l = 1 | MB_x(X_l)) · n_i(x_[X_l=1]) ]

where n_i(x_[X_l=0]) is the number of true groundings of the i-th formula when we force X_l = 0 and leave the remaining data unchanged, and similarly for n_i(x_[X_l=1]).
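
A sketch of one component of this gradient, under assumed helpers: `n_i(world)` counts true groundings of formula i, and `cond_prob(world, atom, v)` is the Gibbs conditional from the inference slide. Both names are hypothetical.

```python
def pll_gradient_i(world, n_i, cond_prob):
    """Pseudo-log-likelihood gradient for one formula weight w_i.

    world: dict mapping each ground atom to 0 or 1 (the database).
    n_i(world): number of true groundings of formula i (assumed helper).
    cond_prob(world, atom, v): P_w(atom = v | its Markov blanket) (assumed helper).
    """
    grad = 0.0
    n_x = n_i(world)  # n_i(x): true groundings in the observed data
    for atom in world:
        saved = world[atom]
        forced = {}
        for v in (0, 1):
            world[atom] = v
            forced[v] = n_i(world)  # n_i with X_l forced to v
        world[atom] = saved          # restore the observed value
        grad += (n_x
                 - cond_prob(world, atom, 0) * forced[0]
                 - cond_prob(world, atom, 1) * forced[1])
    return grad
```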

Experiments Systems compared (on the UW-CSE university database):
◦ Hand-built knowledge base (KB)
◦ ILP: CLAUDIEN
◦ Markov logic networks (MLNs) using KB, using CLAUDIEN, and using KB + CLAUDIEN
◦ Bayesian network learner
◦ Naïve Bayes

Results In the paper's experiments, the MLNs outperformed the purely logical and purely statistical baselines, with the MLN using KB + CLAUDIEN performing best. (The results chart is not reproduced in this transcript.)

Summary Markov logic networks combine first-order logic and Markov networks ◦ Syntax: first-order logic formulas with real-valued weights ◦ Semantics: templates for ground Markov networks. Inference: minimal ground subnetwork + Gibbs sampling. Learning: pseudo-likelihood.