Mostly pilfered from Pedro’s slides


11-15-2010. Mostly pilfered from Pedro's slides. Poon and Domingos.

Motivation
- The real world is complex and uncertain
- Logic handles complexity
- Probability handles uncertainty

MLN's Bold Agenda: "Unify Logical and Statistical AI"
Field                       | Logical approach             | Statistical approach
Knowledge representation    | First-order logic            | Graphical models
Automated reasoning         | Satisfiability testing       | Markov chain Monte Carlo
Machine learning            | Inductive logic programming  | Neural networks
Planning                    | Classical planning           | Markov decision processes
Natural language processing | Definite clause grammars     | Prob. context-free grammars

Markov Networks [Review]
- Undirected graphical models (example variables: Smoking, Cancer, Asthma, Cough)
- Potential functions defined over cliques, e.g. Ф(Smoking, Cancer):
    Smoking  Cancer   Ф(S,C)
    False    False    4.5
    False    True     4.5
    True     False    2.7
    True     True     4.5

Markov Networks
- Undirected graphical models (Smoking, Cancer, Asthma, Cough)
- Log-linear model: P(x) = (1/Z) exp( Σi wi fi(x) ), where wi is the weight of feature i and fi is feature i
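The log-linear form can be illustrated by brute force over two binary variables (Smoking, Cancer). The feature and its weight here are invented for illustration, not taken from the slides:

```python
import itertools, math

# Brute-force illustration of P(x) = exp(sum_i w_i f_i(x)) / Z
# over two binary variables. Weight and feature are invented.
w1 = 1.5

def f1(s, c):
    return 1.0 if ((not s) or c) else 0.0   # "smoking implies cancer" feature

def potential(s, c):
    return math.exp(w1 * f1(s, c))

# Partition function Z sums exponentiated weighted features over all worlds
Z = sum(potential(s, c) for s, c in itertools.product([False, True], repeat=2))

def prob(s, c):
    return potential(s, c) / Z

total = sum(prob(s, c) for s, c in itertools.product([False, True], repeat=2))
```

Enumerating all worlds is exponential in the number of variables, which is why the later slides turn to sampling and SAT-based inference.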

Inference in Markov Networks
- Goal: compute marginals and conditionals of P(x)
- Exact inference is #P-complete
- Conditioning on the Markov blanket is easy; Gibbs sampling exploits this

MCMC: Gibbs Sampling

state ← random truth assignment
for i ← 1 to num-samples do
    for each variable x
        sample x according to P(x | neighbors(x))
        state ← state with new value of x
P(F) ← fraction of states in which F is true
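A runnable sketch of this loop on a two-variable network; the single feature and its weight are invented for illustration, and each variable's neighborhood is just the other variable:

```python
import math, random

random.seed(0)
w = 1.5   # invented weight for the feature f(S, C) = [(not S) or C]

def f(s, c):
    return 1.0 if ((not s) or c) else 0.0

def score(s, c):
    return math.exp(w * f(s, c))   # unnormalized probability of world (S, C)

def gibbs_marginal(num_samples=20000):
    s = random.random() < 0.5      # random initial truth assignment
    c = random.random() < 0.5
    count_c = 0
    for _ in range(num_samples):
        # Sample S given its neighbors (here just C)
        p_s = score(True, c) / (score(True, c) + score(False, c))
        s = random.random() < p_s
        # Sample C given S
        p_c = score(s, True) / (score(s, True) + score(s, False))
        c = random.random() < p_c
        count_c += c
    return count_c / num_samples   # fraction of sampled states where C is true

est = gibbs_marginal()
```

For this tiny model the estimate can be checked against the exact marginal computed by enumeration.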

Overview
- Motivation
- Foundational areas: probabilistic inference [Markov nets], statistical learning, logical inference, inductive logic programming
- Putting the pieces together: MLNs
- Applications

Learning Markov Networks
- Learning parameters (weights): generatively or discriminatively
- Learning structure (features)
- In this tutorial: assume complete data (if not: EM versions of the algorithms)

Generative Weight Learning
- Maximize likelihood or posterior probability
- Numerical optimization (gradient or 2nd order); no local maxima
- Gradient: ∂/∂wi log Pw(x) = ni(x) − Ew[ni(x)], i.e. the number of times feature i is true in the data minus the expected number of times it is true according to the model
- Requires inference at each step (slow!)
[Like CRF learning but for P(x) not P(y|x) – WC]

Pseudo-Likelihood
- PL(x) = Πi P(xi | neighbors(xi)): the likelihood of each variable given its neighbors in the data
- Does not require inference at each step
- Consistent estimator
- Widely used in vision, spatial statistics, etc.
- But PL parameters may not work well for long inference chains
[Like MEMM learning – WC]

Discriminative Weight Learning
- Maximize conditional likelihood of query (y) given evidence (x)
- Gradient: ∂/∂wi log Pw(y|x) = ni(x, y) − Ew[ni(x, y)], i.e. the number of true groundings of clause i in the data minus the expected number according to the model
- Approximate expected counts by counts in the MAP state of y given x
[Like CRF learning – WC]

Overview
- Motivation
- Foundational areas: probabilistic inference [Markov nets], statistical learning, logical inference, inductive logic programming
- Putting the pieces together: MLNs
- Applications

First-Order Logic
- Constants, variables, functions, predicates; e.g.: Anna, x, MotherOf(x), Friends(x, y)
- Literal: a predicate or its negation
- Clause: a disjunction of literals
- Grounding: replace all variables by constants; e.g.: Friends(Anna, Bob)
- World (model, interpretation): an assignment of truth values to all ground predicates

Inference in First-Order Logic
- "Traditionally" done by theorem proving (e.g.: Prolog)
- Propositionalization followed by model checking turns out to be faster (often a lot) [This was popularized by Kautz & Selman – WC]
- Propositionalization: create all ground atoms and clauses
- Model checking: satisfiability testing
- Two main approaches: backtracking (e.g.: DPLL) and stochastic local search (e.g.: WalkSAT)

Satisfiability
- Input: a set of clauses (convert the KB to conjunctive normal form, CNF)
- Output: a truth assignment that satisfies all clauses, or failure
- The paradigmatic NP-complete problem
- Solution: search
- Key point: most [random] SAT problems are actually easy
- Hard region: a narrow range of #Clauses / #Variables

Backtracking
- Assign truth values by depth-first search
- Assigning a variable deletes false literals and satisfied clauses
- Empty set of clauses: success
- Empty clause: failure
- Additional improvements: unit propagation (a unit clause forces a truth value) and pure literals (same truth value everywhere)

Stochastic Local Search
- Uses complete assignments instead of partial ones
- Start with a random state
- Flip variables in unsatisfied clauses
- Hill-climbing: minimize the number of unsatisfied clauses
- Avoid local minima: random flips and multiple restarts

The WalkSAT Algorithm

for i ← 1 to max-tries do
    solution ← random truth assignment
    for j ← 1 to max-flips do
        if all clauses satisfied then
            return solution
        c ← random unsatisfied clause
        with probability p
            flip a random variable in c
        else
            flip the variable in c that maximizes the number of satisfied clauses
return failure
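The pseudocode above translates directly to a small Python sketch (DIMACS-style literals; parameter defaults are arbitrary choices for illustration):

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, max_tries=10, seed=0):
    """Toy WalkSAT. Returns a bool list indexed 1..n_vars, or None on failure."""
    rng = random.Random(seed)

    def satisfied(clause, assign):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    for _ in range(max_tries):
        assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # slot 0 unused
        for _ in range(max_flips):
            unsat = [c for c in clauses if not satisfied(c, assign)]
            if not unsat:
                return assign                     # all clauses satisfied
            c = rng.choice(unsat)
            if rng.random() < p:
                v = abs(rng.choice(c))            # random-walk move
            else:
                # Greedy move: flip the variable maximizing satisfied clauses
                def num_sat_after_flip(v):
                    assign[v] = not assign[v]
                    n = sum(satisfied(cl, assign) for cl in clauses)
                    assign[v] = not assign[v]
                    return n
                v = max({abs(lit) for lit in c}, key=num_sat_after_flip)
            assign[v] = not assign[v]
    return None
```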

Overview
- Motivation
- Foundational areas: probabilistic inference [Markov nets], statistical learning, logical inference, inductive logic programming
- Putting the pieces together: MLNs
- Applications

Markov Logic
- Logical language: first-order logic
- Probabilistic language: Markov networks
- Syntax: first-order formulas with weights
- Semantics: templates for Markov net features
- Learning: parameters (generative or discriminative); structure (ILP with arbitrary clauses and MAP score)
- Inference: MAP via weighted satisfiability; marginals via MCMC with moves proposed by a SAT solver; partial grounding + lazy inference

Markov Logic: Intuition
- A logical KB is a set of hard constraints on the set of possible worlds
- Let's make them soft constraints: when a world violates a formula, it becomes less probable, not impossible
- Give each formula a weight (higher weight ⇒ stronger constraint)

Markov Logic: Definition
A Markov Logic Network (MLN) is a set of pairs (F, w) where
- F is a formula in first-order logic
- w is a real number
Together with a set of constants, it defines a Markov network with
- one node for each grounding of each predicate in the MLN
- one feature for each grounding of each formula F in the MLN, with the corresponding weight w

Example: Friends & Smokers
The standard example uses two formulas, "smoking causes cancer" and "friends have similar smoking habits":
    Smokes(x) => Cancer(x)
    Friends(x,y) => (Smokes(x) <=> Smokes(y))
Two constants: Anna (A) and Bob (B)
Grounding yields the nodes Smokes(A), Smokes(B), Cancer(A), Cancer(B), Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B)
Each grounding of each formula becomes a feature connecting the ground atoms it mentions

Markov Logic Networks
- An MLN is a template for ground Markov nets
- Probability of a world x: P(x) = (1/Z) exp( Σi wi ni(x) ), where wi is the weight of formula i and ni(x) is the number of true groundings of formula i in x
- Typed variables and constants greatly reduce the size of the ground Markov net
- Functions, existential quantifiers, etc.
- Infinite and continuous domains

Relation to Statistical Models
Special cases (obtained by making all predicates zero-arity): Markov networks, Markov random fields, Bayesian networks, log-linear models, exponential models, max. entropy models, Gibbs distributions, Boltzmann machines, logistic regression, hidden Markov models, conditional random fields
Markov logic allows objects to be interdependent (non-i.i.d.)

Relation to First-Order Logic
- Infinite weights ⇒ first-order logic
- Satisfiable KB with positive weights ⇒ satisfying assignments = modes of the distribution
- Markov logic allows contradictions between formulas

<aside> Is this a good thing?
Pros: drawing connections between problems; specifying/visualizing/prototyping new methods; …
Cons: robust, general tools are very hard to build for computationally hard tasks; this is especially true when learning is involved
</aside> -wc

Learning
- Data is a relational database
- Closed world assumption (if not: EM)
- Learning parameters (weights)
- Learning structure (formulas)

Weight Learning
- Parameter tying: groundings of the same clause share a weight
- Gradient: the number of times clause i is true in the data minus the expected number of times clause i is true according to the MLN
- Generative learning: pseudo-likelihood
- Discriminative learning: conditional likelihood [still just like CRFs - all we need is to do inference efficiently. Here they use a Collins-like method. WC]

Problem 1: Memory Explosion
- Problem: if there are n constants and the highest clause arity is c, the ground network requires O(n^c) memory
- Solution: exploit sparseness; ground clauses lazily → LazySAT algorithm [Singla & Domingos, 2006]

Ground Network Construction

queue ← query nodes
repeat
    node ← front(queue)
    remove node from queue
    add node to network
    if node not in evidence then
        add neighbors(node) to queue
until queue = Ø

Problem 2: MAP/MPE Inference
- Problem: find the most likely state of the world given the evidence: argmax_y P(y | x) = argmax_y (1/Zx) exp( Σi wi ni(x, y) ) = argmax_y Σi wi ni(x, y)
- This is just the weighted MaxSAT problem
- Use a weighted SAT solver (e.g., MaxWalkSAT [Kautz et al., 1997])
- Potentially faster than logical inference (!)

The MaxWalkSAT Algorithm

for i ← 1 to max-tries do
    solution ← random truth assignment
    for j ← 1 to max-flips do
        if Σ weights(satisfied clauses) > threshold then
            return solution
        c ← random unsatisfied clause
        with probability p
            flip a random variable in c
        else
            flip the variable in c that maximizes Σ weights(satisfied clauses)
return failure, best solution found
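A simplified MaxWalkSAT sketch: it drops the threshold test and just tracks the best assignment found by total satisfied weight. Clauses are (weight, literal-list) pairs; the example weights in the test are invented:

```python
import random

def maxwalksat(clauses, n_vars, p=0.5, max_flips=5000, seed=0):
    """Toy weighted MaxSAT by local search. Returns (best assignment, weight)."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # slot 0 unused

    def sat_weight(a):
        return sum(w for w, c in clauses
                   if any((lit > 0) == a[abs(lit)] for lit in c))

    best, best_w = list(assign), sat_weight(assign)
    for _ in range(max_flips):
        unsat = [c for w, c in clauses
                 if not any((lit > 0) == assign[abs(lit)] for lit in c)]
        if not unsat:
            break                                 # everything satisfied
        c = rng.choice(unsat)
        if rng.random() < p:
            v = abs(rng.choice(c))                # random-walk move
        else:
            def weight_after_flip(v):             # greedy move
                assign[v] = not assign[v]
                s = sat_weight(assign)
                assign[v] = not assign[v]
                return s
            v = max({abs(lit) for lit in c}, key=weight_after_flip)
        assign[v] = not assign[v]
        w_now = sat_weight(assign)
        if w_now > best_w:
            best, best_w = list(assign), w_now
    return best, best_w
```

With conflicting clauses no assignment satisfies everything, so the best-so-far bookkeeping is what makes this MAP inference rather than plain SAT.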

Ended about here on 11/15

MCMC is Insufficient for Logic
- Problem: deterministic dependencies break MCMC; near-deterministic ones make it very slow
- Solution: combine MCMC and WalkSAT → MC-SAT algorithm [Poon & Domingos, 2006]

Current Best Solution: The MC-SAT Algorithm
[Basic idea: figure out how to sample from SAT assignments, and use this as a subroutine for MCMC - WC]
- MC-SAT = MCMC + SAT
- MCMC: slice sampling with an auxiliary variable for each clause
- SAT: wraps around SampleSAT (a uniform sampler) to sample from highly non-uniform distributions
- Sound: satisfies ergodicity & detailed balance
- Efficient: orders of magnitude faster than Gibbs and other MCMC algorithms

The MC-SAT Algorithm

X(0) ← a random solution satisfying all hard clauses
for k ← 1 to num_samples do
    M ← Ø
    for all Ci satisfied by X(k − 1) do
        with probability 1 − exp(−wi), add Ci to M
    end for
    X(k) ← a uniformly random solution satisfying M
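A toy MC-SAT sketch, with exact enumeration standing in for SampleSAT as the uniform sampler (feasible only for tiny problems; the single soft clause and its weight are invented for illustration):

```python
import itertools, math, random

random.seed(0)
clauses = [(1.0, [1, 2])]          # one soft clause (x1 v x2), weight 1.0
n_vars = 2

def satisfies(assignment, clause):
    return any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)

def uniform_solution(must_satisfy):
    # "Uniformly random solution satisfying M", by brute-force enumeration
    solutions = [a for a in itertools.product([False, True], repeat=n_vars)
                 if all(satisfies(a, c) for c in must_satisfy)]
    return random.choice(solutions)

def mc_sat(num_samples=5000):
    x = uniform_solution([])       # no hard clauses in this example
    samples = []
    for _ in range(num_samples):
        # Keep each currently satisfied clause with probability 1 - exp(-w)
        m = [c for w, c in clauses
             if satisfies(x, c) and random.random() < 1 - math.exp(-w)]
        x = uniform_solution(m)
        samples.append(x)
    return samples

samples = mc_sat()
p_x1 = sum(s[0] for s in samples) / len(samples)   # estimated P(x1 = true)
```

On this problem the estimated marginal can be checked against the exact value obtained by enumerating all four worlds.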

The MC-SAT Algorithm
- Sound: satisfies ergodicity and detailed balance, assuming we have a perfect uniform sampler
- Of course, we don't (since the problem is #P-hard)
- But there is an approximate uniform sampler [Wei et al. 2004]: SampleSAT = WalkSAT + simulated annealing
- Trade off uniformity vs. efficiency by tuning the probability of WalkSAT steps vs. simulated-annealing steps

Overview
- Motivation
- Foundational areas: probabilistic inference [Markov nets], statistical learning, logical inference, inductive logic programming
- Putting the pieces together: MLNs
- Applications

Applications
Basics, logistic regression, hypertext classification, information retrieval, entity resolution, hidden Markov models, information extraction, statistical parsing, semantic processing, Bayesian networks, relational models, robot mapping, planning and MDPs, practical tips

Uniform Distribution: Empty MLN
Example: unbiased coin flips
Type: flip = { 1, … , 20 }
Predicate: Heads(flip)

Binomial Distribution: Unit Clause
Example: biased coin flips
Type: flip = { 1, … , 20 }
Predicate: Heads(flip)
Formula: Heads(f)
Weight: the log odds of heads, w = log( p / (1 − p) )
By default, an MLN includes unit clauses for all predicates (captures marginal distributions, etc.)
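Numerically, the weight of the unit clause is just the log odds; the bias p below is an assumed value for illustration:

```python
import math

p = 0.8                              # assumed bias of the coin
w = math.log(p / (1 - p))            # unit-clause weight = log odds of heads

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

recovered = sigmoid(w)               # maps the weight back to probability p
```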

Multinomial Distribution
Example: throwing a die
Types: throw = { 1, … , 20 }, face = { 1, … , 6 }
Predicate: Outcome(throw,face)
Formulas:
    Outcome(t,f) ^ f != f' => !Outcome(t,f').
    Exist f Outcome(t,f).
Too cumbersome!

Multinomial Distribution: ! Notation
Example: throwing a die
Types: throw = { 1, … , 20 }, face = { 1, … , 6 }
Predicate: Outcome(throw,face!)
Formulas: none needed
Semantics: arguments without "!" determine arguments with "!"
Also makes inference more efficient (triggers blocking)

Multinomial Distribution: + Notation
Example: throwing a biased die
Types: throw = { 1, … , 20 }, face = { 1, … , 6 }
Predicate: Outcome(throw,face!)
Formulas: Outcome(t,+f)
Semantics: learn a weight for each grounding of the arguments with "+"

Logistic Regression
Type: obj = { 1, ... , n }
Query predicate: C(obj)
Evidence predicates: Fi(obj)
Formulas: C(x) with weight a; Fi(x) ^ C(x) with weight bi
Resulting distribution: P(C, F) ∝ exp( a C + Σi bi Fi C )
Therefore: log [ P(C=1|F) / P(C=0|F) ] = a + Σi bi Fi, the familiar logistic form
Alternative form: Fi(x) => C(x)
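The correspondence can be checked by enumeration: with the unit clause C(x) at weight a and clauses Fi(x) ^ C(x) at weights bi (all weights below invented), the MLN conditional equals the sigmoid of a + Σi bi fi:

```python
import math

a, b = -0.5, [1.2, 0.7]              # invented weights for illustration

def score(c, f):
    # exp of the total weight of true ground formulas, for one object
    return math.exp(a * c + sum(bi * (fi * c) for bi, fi in zip(b, f)))

def mln_conditional(f):
    # P(C=1 | F=f) by normalizing over the two values of C
    return score(1, f) / (score(1, f) + score(0, f))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

p_c = mln_conditional((1, 0))        # P(C=1 | F1=1, F2=0) = sigmoid(a + b1)
```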

Text Classification
page = { 1, … , n }
word = { … }
topic = { … }
Topic(page,topic!)
HasWord(page,word)
!Topic(p,t) [bias term, since topics are sparse? –WC]
HasWord(p,+w) => Topic(p,+t)

Entity Resolution
Problem: given a database, find duplicate records
HasToken(token,field,record)
SameField(field,record,record)
SameRecord(record,record)

HasToken(+t,+f,r) ^ HasToken(+t,+f,r') => SameField(f,r,r')
SameField(f,r,r') => SameRecord(r,r')
SameRecord(r,r') ^ SameRecord(r',r") => SameRecord(r,r")

Entity Resolution
Can also resolve fields:
HasToken(token,field,record)
SameField(field,record,record)
SameRecord(record,record)

HasToken(+t,+f,r) ^ HasToken(+t,+f,r') => SameField(f,r,r')
SameField(f,r,r') <=> SameRecord(r,r')
SameRecord(r,r') ^ SameRecord(r',r") => SameRecord(r,r")
SameField(f,r,r') ^ SameField(f,r',r") => SameField(f,r,r")
P. Singla & P. Domingos, "Entity Resolution with Markov Logic", in Proc. ICDM-2006.

Hidden Markov Models
obs = { Obs1, … , ObsN }
state = { St1, … , StM }
time = { 0, … , T }
State(state!,time)
Obs(obs!,time)

State(+s,0)
State(+s,t) => State(+s',t+1)
Obs(+o,t) => State(+s,t)

Information Extraction (simplified)
- Problem: extract a database from text or semi-structured sources
- Example: extract a database of publications from citation list(s) (the "CiteSeer problem")
- Two steps: segmentation (use an HMM to assign tokens to fields) and entity resolution (use logistic regression and transitivity)

Motivation for joint extraction and matching

Information Extraction (simplified)
Token(token, position, citation)
InField(position, field, citation)
SameField(field, citation, citation)
SameCit(citation, citation)

Token(+t,i,c) => InField(i,+f,c)
InField(i,+f,c) <=> InField(i+1,+f,c)
f != f' => (!InField(i,+f,c) v !InField(i,+f',c))
Token(+t,i,c) ^ InField(i,+f,c) ^ Token(+t,i',c') ^ InField(i',+f,c') => SameField(+f,c,c')
SameField(+f,c,c') <=> SameCit(c,c')
SameField(f,c,c') ^ SameField(f,c',c") => SameField(f,c,c")
SameCit(c,c') ^ SameCit(c',c") => SameCit(c,c")

Information Extraction (simplified)
Token(token, position, citation)
InField(position, field, citation)
SameField(field, citation, citation)
SameCit(citation, citation)

Token(+t,i,c) => InField(i,+f,c)
InField(i,+f,c) ^ !Token(".",i,c) <=> InField(i+1,+f,c)
f != f' => (!InField(i,+f,c) v !InField(i,+f',c))
Token(+t,i,c) ^ InField(i,+f,c) ^ Token(+t,i',c') ^ InField(i',+f,c') => SameField(+f,c,c')
SameField(+f,c,c') <=> SameCit(c,c')
SameField(f,c,c') ^ SameField(f,c',c") => SameField(f,c,c")
SameCit(c,c') ^ SameCit(c',c") => SameCit(c,c")
More: H. Poon & P. Domingos, "Joint Inference in Information Extraction", in Proc. AAAI-2007.

Information Extraction (less simplified)
Token(token, position, citation)
InField(position, field, citation)
SameField(field, citation, citation)
SameCit(citation, citation)

Token(+t,i,c) => InField(i,+f,c)
InField(i,+f,c) ^ !Token(".",i,c) <=> InField(i+1,+f,c)
f != f' => (!InField(i,+f,c) v !InField(i,+f',c))
Token(+t,i,c) => InField(i,+f,c)
~Token("aardvark",i,c) v InField(i,"author",c)
…
~Token("zymurgy",i,c) v InField(i,"author",c)
~Token("zymurgy",i,c) v InField(i,"venue",c)

Information Extraction (less simplified)
Token(token, position, citation)
InField(position, field, citation)
SameField(field, citation, citation)
SameCit(citation, citation)

Token(+t,i,c) => InField(i,+f,c)
InField(i,+f,c) ^ !Token(".",i,c) <=> InField(i+1,+f,c)
f != f' => (!InField(i,+f,c) v !InField(i,+f',c))
Token(+t,i,c) => InField(i,+f,c)
InField(i,+f,c) => InField(i+1,+f,c)
InField(i,+f,c) ^ !HasPunct(c,i) => InField(i+1,+f,c)
InField(i,+f,c) ^ !HasComma(c,i) => InField(i+1,+f,c)
=> InField(1,"author",c)
=> InField(2,"author",c)
=> InField(midpointOfC, "title", c)

Information Extraction (less simplified)
Token(token, position, citation)
InField(position, field, citation)
SameField(field, citation, citation)
SameCit(citation, citation)

Token(+t,i,c) => InField(i,+f,c)
f != f' => (!InField(i,+f,c) v !InField(i,+f',c))
Token(+t,i,c) => InField(i,+f,c)
InField(i,+f,c) => InField(i+1,+f,c)
InField(i,+f,c) ^ !HasPunct(c,i) => InField(i+1,+f,c)
InField(i,+f,c) ^ !HasComma(c,i) => InField(i+1,+f,c)
=> InField(1,"author",c)
=> InField(2,"author",c)
=> InField(midpointOfC, "title", c)

Information Extraction (less simplified) Token(+t,i,c) => InField(i,+f,c) f != f’ => (!InField(i,+f,c) v !InField(i,+f’,c)) Token(+t,i,c) => InField(i,+f,c) InField(i,+f,c) => InField(i+1,+f,c) InField(i,+f,c) ^ !HasPunct(c,i) => InField(i+1,+f,c) InField(i,+f,c) ^ !HasComma(c,i) => InField(i+1,+f,c) => InField(1,”author”,c) => InField(2,”author”,c) InField(midpointOfC, "title", c) Initial(t) => InField(t,i,"author") v InField(t,i,"venue") i<j ^ VenueWord(t) ^ Token(t,i,c) ^ Token(t’,j,c) => !InField(t’,j,”author”) v !InField(t’,j,”title”) Initial(t) => InField(t,”venue”) v InField(t,”author”) i<j ^ Token(t,i,c) ^ LastInitial(j,c) => !InField(i,”title”) ^ !InField(i,”venue”) i<j<k ^ Token(t,i,c) ^ Initial(j,c) ^ Initial(k,c) & !InField(j,”venue”) => !InField(k,”venue”) v [!InField(i,”author”) v !InField(i,”title”)]

Information Extraction (less simplified)
SimilarTitle(c,i,j,c',i',j'): true if c[i..j] and c'[i'..j'] are both "title-like" (i.e., no punctuation, doesn't violate the rules above) and c[i..j] and c'[i'..j'] are "similar" (i.e., start with the same trigram and end with the same token)
SimilarVenue(c,c'): true if c and c' don't contain conflicting venue keywords (e.g., journal vs. proceedings)

Information Extraction (less simplified)
SimilarTitle(c,i,j,c',i',j'): …
SimilarVenue(c,c'): …
JointInferenceCandidate(c,i,c'): true if the trigram starting at i in c also appears in c', the trigram is a possible title, and there is punctuation before the trigram in c' but not in c


Information Extraction (less simplified)
SimilarTitle(c,i,j,c',i',j'): …
SimilarVenue(c,c'): …
JointInferenceCandidate(c,i,c'): …

InField(i,+f,c) ^ !HasPunct(c,i) => InField(i+1,+f,c)
InField(i,+f,c) ^ ~HasPunct(c,i) ^ (!exists c': JointInferenceCandidate(c,i,c')) => InField(i+1,+f,c)
(Jnt-Seg)

Information Extraction (less simplified)
SimilarTitle(c,i,j,c',i',j'): …
SimilarVenue(c,c'): …
JointInferenceCandidate(c,i,c'): …

InField(i,+f,c) ^ ~HasPunct(c,i) ^ (!exists c': JointInferenceCandidate(c,i,c') ^ SameCitation(c,c')) => InField(i+1,+f,c)
(Jnt-Seg-ER)


Results
Fraction of clusters correctly constructed using transitive closure of pairwise decisions:
Cora F-S: 0.87 F1
Cora TFIDF: 0.84 max F1

William's summary
- MLNs are a compact, elegant way of describing a Markov network
- Standard learning methods work
- The network may be very, very large, so inference may be expensive
- Doesn't eliminate feature engineering (e.g., complicated "feature" predicates)
- Experimental results for joint matching/NER are not that strong overall:
  - Cascading segmentation and then matching improves segmentation, probably not matching
  - Joint inference needs to be carefully restricted (efficiency?)
  - And it's no better than a "pseudo" joint model using plausible matches as features