Logic Programming & LPMLN
Table of Contents
Logic programming
Answer set programming
Markov logic network
LPMLN
Logic Programming
Definitions
Term: $c \mid x \mid f(t_1, \ldots, t_n)$
Atom: $p(t_1, \ldots, t_n)$
Literal: $A \mid \neg A$
Substitution: $\theta = \{x/a,\ y/z\}$
Rule & Fact
A rule corresponds to the clause $\neg p \vee \neg q \vee \cdots \vee \neg t \vee u$, i.e. the implication $p \wedge q \wedge \cdots \wedge t \rightarrow u$:
u :- p, q, ..., t. (rule)
u. (fact: a rule with an empty body)
head :- body.
Basic Datalog
$L_0 \,{:}{-}\, L_1, L_2, \ldots, L_n$
$L_i$ is a literal of the form $p_i(t_1, t_2, \ldots, t_{k_i})$, where each $t_j$ is either a constant or a variable.
Safety condition for program $P$:
Each fact of $P$ is ground.
Each variable which occurs in the head of a rule of $P$ must also occur in the body of the same rule.
These conditions guarantee that the set of all facts that can be derived from a Datalog program is finite.
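For instance (an illustration not on the slide), the rule p(X, Y) :- q(X). is unsafe: Y occurs in the head but not in the body, so the rule would derive a fact for every possible value of Y.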
Datalog Rule Example
If X is a parent of Y and Y is a parent of Z, then X is a grandparent of Z (reading par(Y, X) as "X is a parent of Y"):
grandpar(Z, X) :- par(Y, X), par(Z, Y).
Same Generation Cousin:
sgc(X, X) :- person(X).
sgc(X, Y) :- par(X, X1), sgc(X1, Y1), par(Y, Y1).
Herbrand Model
A set of ground facts $I$ that satisfies a clause $C$ or a set of clauses $S$ is called a Herbrand model for $C$ or $S$.
Intersection property: if $I_1$ and $I_2$ are Herbrand models (of a definite-clause program), then $I_1 \cap I_2$ is also a Herbrand model.
Least Herbrand model: the intersection of all Herbrand models.
Proof Theory
Elementary Production Principle (EPP): given a rule $L_0 \,{:}{-}\, L_1, L_2, \ldots, L_n$ and facts $F_1, F_2, \ldots, F_n$, if there exists a substitution $\theta$ such that $L_i\theta = F_i$ for each $i$, then $L_0\theta$ can be inferred as a new fact.
Proof tree: a tree recording such a derivation, with the inferred fact at the root and given facts at the leaves.
Bottom-up Evaluation
r1: sgc(X, X) :- person(X).
r2: sgc(X, Y) :- par(X, X1), sgc(X1, Y1), par(Y, Y1).
Facts: person(a). person(b). person(c). par(a, c). par(b, c).
r1 derives sgc(a, a), sgc(b, b), sgc(c, c).
r2 then derives sgc(a, b) via the instantiation sgc(a, b) :- par(a, c), sgc(c, c), par(b, c), and symmetrically sgc(b, a); a further iteration adds nothing, so the fixpoint is reached.
Top-down Evaluation
?- sgc(a, Y).
Via r1: person(a) succeeds, giving Y = a.
Via r2: par(a, X1) binds X1 = c; the subgoal sgc(c, Y1) succeeds only via r1 (person(c), so Y1 = c), since par(c, _) has no solution; then par(Y, c) binds Y = a or Y = b.
Answers: Y = a and Y = b.
Problem Solving Example
word(d,o,g). word(r,u,n). word(t,o,p). word(f,i,v,e). word(f,o,u,r). word(l,o,s,t). word(m,e,s,s). word(u,n,i,t). word(b,a,k,e,r). word(f,o,r,u,m). word(g,r,e,e,n). word(s,u,p,e,r). word(p,r,o,l,o,g). word(v,a,n,i,s,h). word(w,o,n,d,e,r). word(y,e,l,l,o,w).
The solution predicate places words into a grid, sharing letters where they cross:
solution(L1,L2,L3,L4,L5,L6,L7,L8,L9,L10,L11,L12,L13,L14,L15,L16) :-
    word(L1,L2,L3,L4,L5),
    word(L9,L10,L11,L12,L13,L14),
    word(L1,L6,L9,L15),
    word(L3,L7,L11),
    word(L5,L8,L13,L16).
Bottom-up Evaluation Optimization
Naïve approach: iteratively apply the rules to all known facts until no new fact can be derived.
Problems:
The same fact is derived many times.
Facts irrelevant to the goal are also derived.
Solutions (a sketch of the first follows):
Semi-naïve approach
Magic set rewriting
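A minimal Python sketch of the semi-naïve idea, hard-coding the two sgc rules from the earlier slide (the representation is ad hoc, not from the slides): each pass joins the recursive premise only against the facts that were new in the previous pass (the delta), so nothing is re-derived from stale combinations.

    # Semi-naive bottom-up evaluation of the same-generation-cousin program.
    #   r1: sgc(X, X) :- person(X).
    #   r2: sgc(X, Y) :- par(X, X1), sgc(X1, Y1), par(Y, Y1).
    person = {("a",), ("b",), ("c",)}
    par = {("a", "c"), ("b", "c")}

    sgc = {(x, x) for (x,) in person}   # r1 is non-recursive: apply once
    delta = set(sgc)                    # facts new in the latest pass

    while delta:
        new = set()
        for (x, x1) in par:             # r2, recursive premise from delta only
            for (x1d, y1) in delta:
                if x1 == x1d:
                    for (y, y1p) in par:
                        if y1p == y1:
                            new.add((x, y))
        delta = new - sgc               # keep only genuinely new facts
        sgc |= delta

    print(sorted(sgc))
    # [('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b'), ('c', 'c')]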
Answer Set Programming
Rules with Negation
Closed world assumption: if a fact does not logically follow from a set of Datalog clauses, then we conclude that the negation of this fact is true.
Problem: multiple minimal Herbrand models.
boring(chess) :- not interesting(chess).
$H_1 = \{\mathrm{interesting}(\mathrm{chess})\}$, $H_2 = \{\mathrm{boring}(\mathrm{chess})\}$
Stable Model (ground program)
Given a program $\Pi$ and a set of facts $I$, the reduct of $\Pi$ with respect to $I$, written $\Pi^I$, is defined as follows:
For every negative literal not $A$ in a rule body with $A \notin I$, drop the literal from the body.
Then drop every rule whose body still contains a negative literal (one with $A \in I$).
The resulting rules contain no negation.
If the least Herbrand model of $\Pi^I$ coincides with $I$, then $I$ is a stable model of $\Pi$. (A brute-force sketch of this check follows.)
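A brute-force Python sketch of this definition for propositional programs (the rule representation is an ad hoc assumption): build the reduct for each candidate set, take the least model of the resulting negation-free program, and keep the candidates that reproduce themselves. Running it on the programs of the next slide yields exactly the stable models listed there.

    from itertools import chain, combinations

    # A rule is (head, positive_body, negative_body); atoms are strings.
    # Example program: p :- not q.  /  q :- not p.
    program = [("p", [], ["q"]), ("q", [], ["p"])]

    def reduct(program, I):
        # Gelfond-Lifschitz: drop rules containing 'not a' with a in I,
        # then drop the remaining negative literals.
        return [(h, pos) for (h, pos, neg) in program
                if not any(a in I for a in neg)]

    def least_model(pos_program):
        # Least Herbrand model of a negation-free program, by fixpoint.
        M, changed = set(), True
        while changed:
            changed = False
            for h, pos in pos_program:
                if all(a in M for a in pos) and h not in M:
                    M.add(h)
                    changed = True
        return M

    def stable_models(program, atoms):
        candidates = chain.from_iterable(
            combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
        return [set(I) for I in candidates
                if least_model(reduct(program, set(I))) == set(I)]

    print(stable_models(program, {"p", "q"}))          # [{'p'}, {'q'}]
    print(stable_models([("a", [], ["a"])], {"a"}))    # []  (no stable model)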
Examples (candidate $I$ → least model of the reduct $\Pi^I$)
p :- not q.
q :- not p.
{} → {p, q}, {p} → {p}, {q} → {q}, {p, q} → {}
Stable models: {p} and {q}.
a :- not a.
{} → {a}, {a} → {}
No stable model.
Constraints
a :- B, not a.
If B is true, the rule reduces to a :- not a, which admits no stable model; the rule therefore eliminates every candidate in which B holds. Such a rule is abbreviated as
:- B.
Multiple Literals in the Rule Head
If $I$ is a minimal Herbrand model of the reduct $\Pi^I$, then $I$ is a stable model of $\Pi$.
For example:
a; b :- c, d.
c.
d.
Only {a, c, d} and {b, c, d} are stable models. {a, b, c, d} is not: it is a model of the reduct, but not a minimal one, since {a, c, d} is a strictly smaller model.
Example: Solving Hamiltonian Cycle
vertex(a). vertex(b). vertex(c). vertex(d).
edge(a, b). edge(b, c). edge(c, d). edge(d, a). edge(b, d).
in(X, Y) :- edge(X, Y), not nin(X, Y).
nin(X, Y) :- edge(X, Y), not in(X, Y).
:- in(X, Y), in(X1, Y), X != X1.
:- in(X, Y), in(X, Y1), Y != Y1.
reachable(X, X) :- vertex(X).
reachable(X, Y) :- vertex(X), vertex(Y), reachable(X, X1), in(X1, Y).
:- vertex(X), vertex(Y), not reachable(X, Y).
Online ASP Solver:
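Any ASP solver can run this program. For instance, saving it as hc.lp (a file name chosen here for illustration), the command clingo hc.lp 0 asks clingo to enumerate all answer sets; the in/2 atoms of each answer set spell out one Hamiltonian cycle of the graph.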
Markov Logic Network
Markov Network
Node: random variable
Edge: variable dependency
Potential function $\phi_k$: a non-negative function over the variables of clique $k$.
$$P(X = x) = \frac{1}{Z} \prod_k \phi_k(x_{\{k\}}), \qquad Z = \sum_{x \in \mathcal{X}} \prod_k \phi_k(x_{\{k\}})$$
Log-linear model:
$$P(X = x) = \frac{1}{Z} \exp\Big(\sum_i w_i f_i(x)\Big)$$
Markov Logic Network Definition
A Markov logic network $L$ is a set of pairs $(F_i, w_i)$, where $F_i$ is a formula in first-order logic and $w_i$ is a real number. Together with a finite set of constants $C = \{c_1, c_2, \ldots, c_n\}$, it defines a Markov network $M_{L,C}$:
1. $M_{L,C}$ contains one binary node for each possible grounding of each predicate appearing in $L$. The value of the node is 1 if the ground atom is true, and 0 otherwise.
2. $M_{L,C}$ contains one feature for each possible grounding of each formula $F_i$ in $L$. The value of this feature is 1 if the ground formula is true, and 0 otherwise. The weight of the feature is the $w_i$ associated with $F_i$ in $L$.
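The ground network then defines the log-linear distribution (the formula is from the Richardson and Domingos paper cited in the references):
$$P(X = x) = \frac{1}{Z} \exp\Big(\sum_i w_i\, n_i(x)\Big),$$
where $n_i(x)$ is the number of true groundings of $F_i$ in the world $x$.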
Example
Given constants A and B, consider the last two formulas in the table: smoking causes cancer, $\mathrm{Sm}(x) \Rightarrow \mathrm{Ca}(x)$, and friends have similar smoking habits, $\mathrm{Fr}(x, y) \Rightarrow (\mathrm{Sm}(x) \Leftrightarrow \mathrm{Sm}(y))$.
Ground atoms: Fr(A, A), Fr(A, B), Fr(B, A), Fr(B, B), Sm(A), Sm(B), Ca(A), Ca(B)
Ground formulas (in clausal form):
$\neg \mathrm{Sm}(A) \vee \mathrm{Ca}(A)$
$\neg \mathrm{Sm}(B) \vee \mathrm{Ca}(B)$
$\neg \mathrm{Fr}(A, A) \vee \mathrm{Sm}(A) \vee \neg \mathrm{Sm}(A)$
$\neg \mathrm{Fr}(A, A) \vee \neg \mathrm{Sm}(A) \vee \mathrm{Sm}(A)$
$\neg \mathrm{Fr}(B, B) \vee \mathrm{Sm}(B) \vee \neg \mathrm{Sm}(B)$
$\neg \mathrm{Fr}(B, B) \vee \neg \mathrm{Sm}(B) \vee \mathrm{Sm}(B)$
$\neg \mathrm{Fr}(A, B) \vee \mathrm{Sm}(A) \vee \neg \mathrm{Sm}(B)$
$\neg \mathrm{Fr}(A, B) \vee \neg \mathrm{Sm}(A) \vee \mathrm{Sm}(B)$
$\neg \mathrm{Fr}(B, A) \vee \mathrm{Sm}(B) \vee \neg \mathrm{Sm}(A)$
$\neg \mathrm{Fr}(B, A) \vee \neg \mathrm{Sm}(B) \vee \mathrm{Sm}(A)$
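A small brute-force Python sketch of the induced distribution (the weights w1, w2 are illustrative placeholders, not values from the slides): enumerate all truth assignments to the ground atoms, count satisfied groundings of each formula per world, and weight each world by exp(w1*n1 + w2*n2).

    from itertools import product
    from math import exp

    people = ["A", "B"]
    w1, w2 = 1.5, 1.1  # illustrative weights

    # Ground atoms: Fr(x, y), Sm(x), Ca(x) for x, y in {A, B}.
    atoms = ([("Fr", x, y) for x in people for y in people]
             + [("Sm", x) for x in people] + [("Ca", x) for x in people])

    def n1(v):  # true groundings of: Sm(x) => Ca(x)
        return sum((not v[("Sm", x)]) or v[("Ca", x)] for x in people)

    def n2(v):  # true groundings of: Fr(x, y) => (Sm(x) <=> Sm(y))
        return sum((not v[("Fr", x, y)]) or (v[("Sm", x)] == v[("Sm", y)])
                   for x in people for y in people)

    def weight(v):
        return exp(w1 * n1(v) + w2 * n2(v))

    worlds = [dict(zip(atoms, bits))
              for bits in product([False, True], repeat=len(atoms))]
    Z = sum(weight(v) for v in worlds)  # partition function

    # Probability that both A and B smoke:
    p = sum(weight(v) for v in worlds
            if v[("Sm", "A")] and v[("Sm", "B")]) / Z
    print(f"P(Sm(A) & Sm(B)) = {p:.4f}")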
LPMLN: Weighted Rules under the Stable Model Semantics
LPMLN Rule and Definition
Rule format: $w : R$ (a rule $R$ labeled with a weight $w$)
Hard rule: $w = \alpha$
For an LPMLN program $\Pi$, $\overline{\Pi} = \{R \mid w : R \in \Pi\}$
$\Pi_I$: the set of rules $w : R$ in $\Pi$ such that $I \models R$
$\mathrm{SM}[\Pi] = \{I \mid I \text{ is a stable model of } \overline{\Pi_I}\}$
LPMLN Definition
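In outline, following the Lee and Wang (KR 2016) paper cited in the references:
$$W_\Pi(I) = \begin{cases} \exp\Big(\sum_{w:R \,\in\, \Pi_I} w\Big) & \text{if } I \in \mathrm{SM}[\Pi] \\ 0 & \text{otherwise} \end{cases} \qquad P_\Pi(I) = \lim_{\alpha \to \infty} \frac{W_\Pi(I)}{\sum_{J \in \mathrm{SM}[\Pi]} W_\Pi(J)}$$
Hard rules (weight $\alpha$) dominate in the limit, so an interpretation violating a hard rule gets probability 0 whenever some stable model satisfies all hard rules.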
Relating LPMLN to ASP & MLN
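In outline, per the same paper (a rough summary, not a quotation): an ASP program is the special case in which every rule is hard; if it has a stable model, the induced distribution is uniform over its stable models. An MLN is captured in the other direction, by treating each weighted formula as a weighted rule and adding hard choice rules for all atoms, so that every interpretation is a stable model and only the weights discriminate between worlds.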
LPMLN Implementation
Reformulating LPMLN Based on Penalty
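In outline, per the cited Lee, Talsania and Wang paper: instead of rewarding the rules an interpretation satisfies, one can penalize the rules it violates,
$$W'_\Pi(I) = \exp\Big(-\sum_{w:R \,\in\, \Pi \setminus \Pi_I} w\Big).$$
Since $W'_\Pi(I) = W_\Pi(I) \cdot \exp\big(-\sum_{w:R \in \Pi} w\big)$ differs from $W_\Pi(I)$ only by a constant factor, the normalized probabilities are unchanged, and the penalty form maps naturally onto solvers that minimize violation costs.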
Turning LPMLN into ASP
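A rough sketch of the lpmln2asp translation from the cited TPLP paper (the exact rule forms here are a reconstruction, not a quotation): each soft rule w : Head ← Body is rewritten with a fresh atom, e.g. unsat(i) :- Body, not Head and Head :- Body, not unsat(i), together with a weak constraint of the form :~ unsat(i). [w] carrying (an integer scaling of) the rule's weight; an ASP solver such as clingo then returns most-probable stable models as optimal answer sets.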
Turning LPMLN into MLN
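In outline, again per the cited paper (a reconstruction): for tight programs, stable models coincide with the models of the program's Clark completion, so a tight LPMLN program can be translated into an MLN by taking the completion of its rules and carrying the weights over; non-tight programs additionally require loop formulas.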
References
Ceri, Stefano, Georg Gottlob, and Letizia Tanca. "What you always wanted to know about Datalog (and never dared to ask)." IEEE Transactions on Knowledge and Data Engineering 1.1 (1989).
Brewka, Gerhard, Thomas Eiter, and Mirosław Truszczyński. "Answer set programming at a glance." Communications of the ACM 54.12 (2011).
Richardson, Matthew, and Pedro Domingos. "Markov logic networks." Machine Learning (2006).
Lee, Joohyung, and Yi Wang. "Weighted Rules under the Stable Model Semantics." KR (2016).
Lee, Joohyung, Samidh Talsania, and Yi Wang. "Computing LPMLN using ASP and MLN solvers." Theory and Practice of Logic Programming (2017).
Thank you