Speaker: Benedict Fehringer
Seminar: Probabilistic Models for Information Extraction, by Dr. Martin Theobald and Maximilian Dylla
Based on Richardson, M., and Domingos, P. (2006), Markov Logic Networks
Outline
Part 1: Why do we need Markov Logic Networks (MLNs)?
Markov Networks
First-Order Logic
Conclusion and Motivation
Part 2: How do MLNs work?
Part 3: Are they better than other methods?
Markov Networks
Set of variables: X = (X1, X2, ..., Xn)
The distribution is given by:
P(X = x) = (1/Z) ∏_k φ_k(x_{k})
with Z = Σ_x ∏_k φ_k(x_{k}) as normalization factor (partition function) and φ_k as potential function over the state x_{k} of the k-th clique.
Markov Networks
Representation as a log-linear model:
P(X = x) = (1/Z) exp(Σ_j ω_j f_j(x))
In our case there are only binary features f_j(x) ∈ {0, 1}: each feature corresponds to one possible state of a clique, and its weight is the log of the corresponding potential: ω_j = log φ_j.
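The log-linear form above can be sketched in a few lines of Python. This is a toy illustration, not from the slides: it uses a single binary feature, the clause "Plays ⇒ Fired" from the example scenario, with the weight ω = 2 taken from the slides.

```python
import math
from itertools import product

# One binary feature: the clause "Plays => Fired", weight w = 2
# (weight value taken from the slides' toy scenario).
W = 2.0

def f(plays, fired):
    # Feature is 1 when the clause is satisfied, 0 when violated.
    return (not plays) or fired

def score(plays, fired):
    # Unnormalized log-linear score exp(w * f(x)).
    return math.exp(W * f(plays, fired))

worlds = list(product([False, True], repeat=2))
Z = sum(score(*x) for x in worlds)            # partition function
probs = {x: score(*x) / Z for x in worlds}
# The world (Plays=True, Fired=False) violates the clause and is
# the least probable, but not impossible.
```

Note that the violating world keeps nonzero probability: the feature just contributes 0 instead of ω to the exponent.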
Little example scenario
There is a company which has a PlayStation, and every employee of the company has the right to play with it. If two employees are friends, then the probability is high (ω = 3) that either both play all day long or both do not. If someone plays all day long, then the chance is high (ω = 2) that he or she gets fired.
Markov Networks
One possibility:
[Diagram: network with nodes "Has playing friend", "Plays", "Fired"]

Plays | Fired | ω
True  | True  | 2
True  | False | 0
False | True  | 2
False | False | 2

Friends | Plays | ω
True    | True  | 3
True    | False | 0
False   | True  | 0
False   | False | 3
Markov Networks
And another one:
[Diagram: a network in which each playing employee has the nodes "Is friend with", "Plays", and "Fired"]
Little example scenario
There is a company which has a PlayStation, and every employee of the company has the right to play with it. If two employees are friends, then the probability is high (ω = 3) that either both play all day long or both do not. If someone plays all day long, then the chance is high (ω = 2) that he or she gets fired. Whether an employee A can convince another employee B to play depends on how labile B is: for a highly labile B the probability is higher (ω = 4) than for a less labile B (ω = 2).
Markov Networks
Could be:
[Diagram: network with nodes "Is friend with", "Plays", "Fired", and "Is labile" for each playing person]
Markov Networks
Advantages:
Handle uncertainty efficiently
Tolerant of imperfect and contradictory knowledge
Disadvantages:
Very complex networks are needed for a wide variety of knowledge
Difficult to incorporate a wide range of domain knowledge
First-Order Logic
Four types of symbols:
Constants: concrete objects in the domain (e.g., people: Anna, Bob)
Variables: range over the objects in the domain
Functions: mappings from tuples of objects to objects (e.g., GrandpaOf)
Predicates: relations among objects in the domain (e.g., Friends) or attributes of objects (e.g., Fired)
Term: any expression representing an object in the domain, consisting of a constant, a variable, or a function applied to a tuple of terms.
Atomic formula (atom): a predicate applied to a tuple of terms.
Logical connectives and quantifiers: ¬, ∧, ∨, ⇒, ⇔, ∀, ∃
Translation into First-Order Logic
If two employees are friends, then the probability is high that both play all day long or both do not:
∀x ∀y Friends(x, y) ⇒ (Plays(x) ⇔ Plays(y))
In clausal form:
¬Friends(x, y) ∨ Plays(x) ∨ ¬Plays(y)
¬Friends(x, y) ∨ ¬Plays(x) ∨ Plays(y)
If someone plays all day long, then the chance is high that he or she gets fired:
∀x Plays(x) ⇒ Fired(x)
In clausal form:
¬Plays(x) ∨ Fired(x)
First-Order Logic
Advantages:
Compact representation of a wide variety of knowledge
Flexible and modular incorporation of a wide range of domain knowledge
Disadvantages:
No way to handle uncertainty
No handling of imperfect or contradictory knowledge
Conclusion and Motivation
Markov Networks:
Handle uncertainty efficiently
Tolerant of imperfect and contradictory knowledge
First-Order Logic:
Compact representation of a wide variety of knowledge
Flexible and modular incorporation of a wide range of domain knowledge
→ Combine Markov Networks and First-Order Logic to get the advantages of both
Markov Logic Network
Description of the problem
Translation into First-Order Logic
Construction of an MLN "template"
Derivation of a concrete MLN for a given set of constants
Computation of whatever you want
Markov Logic Network - Translation into First-Order Logic
If two employees are friends, then the probability is high that both play all day long or both do not:
∀x ∀y Friends(x, y) ⇒ (Plays(x) ⇔ Plays(y))
In clausal form:
¬Friends(x, y) ∨ Plays(x) ∨ ¬Plays(y)
¬Friends(x, y) ∨ ¬Plays(x) ∨ Plays(y)
If someone plays all day long, then the chance is high that he or she gets fired:
∀x Plays(x) ⇒ Fired(x)
In clausal form:
¬Plays(x) ∨ Fired(x)
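A minimal sketch of how such clauses can be represented and evaluated in code. The representation is made up for illustration (it is not the paper's): a literal is a (predicate, arguments, sign) triple, a clause is a list of literals, and a world is a truth assignment to ground atoms.

```python
# A literal is (predicate_name, args, positive?); a clause is a list of literals.
# Clausal forms of the two rules above:
clause_friends = [("Friends", ("x", "y"), False),
                  ("Plays", ("x",), True),
                  ("Plays", ("y",), False)]   # ¬Friends(x,y) ∨ Plays(x) ∨ ¬Plays(y)
clause_fired = [("Plays", ("x",), False),
                ("Fired", ("x",), True)]      # ¬Plays(x) ∨ Fired(x)

def satisfied(clause, binding, world):
    """True iff the clause, with variables bound to constants, holds in
    `world` (a dict mapping ground atoms like ("Plays", ("A",)) to bools)."""
    for pred, args, positive in clause:
        value = world[(pred, tuple(binding[a] for a in args))]
        if value == positive:   # one true literal satisfies the clause
            return True
    return False
```

For example, in a world where Friends(A, B) and Plays(A) hold but Plays(B) does not, the first friends clause is satisfied (by the literal Plays(x)).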
Markov Logic Network
Each formula corresponds to one clique
Each formula has a weight that reflects the importance of this formula
If a world violates a formula, it becomes less probable but not impossible; concretely, the formula's weight is simply not added for that world (it contributes 0)
Markov Logic Network
Formula to compare with the Markov network distribution:
P(X = x) = (1/Z) exp(Σ_i ω_i n_i(x))
where n_i(x) is the number of true groundings of formula F_i in world x and ω_i is its weight.
Three assumptions:
1. Unique names: different constants refer to different objects
2. Domain closure: the only objects in the domain are those representable by the constants and functions
3. Known functions: the value of every function applied to every possible tuple of arguments is known
Markov Logic Network
Grounding (with constants c1 and c2):
Elimination of the existential quantifier: ∃x Plays(x) becomes Plays(c1) ∨ Plays(c2)
Elimination of the universal quantifier: ∀x Plays(x) becomes Plays(c1) ∧ Plays(c2)
Elimination of the functions: every function application is replaced by its known value
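Grounding a clause over a finite set of constants can be sketched as follows (illustrative only; the clause representation is the same made-up triple format as before):

```python
from itertools import product

def ground_clause(clause, variables, constants):
    """Yield one ground clause per assignment of constants to variables.
    `clause` is a list of (predicate, args, positive?) literals."""
    for values in product(constants, repeat=len(variables)):
        binding = dict(zip(variables, values))
        yield [(pred, tuple(binding.get(a, a) for a in args), pos)
               for pred, args, pos in clause]

# ¬Plays(x) ∨ Fired(x), grounded with constants c1 and c2:
clause = [("Plays", ("x",), False), ("Fired", ("x",), True)]
groundings = list(ground_clause(clause, ["x"], ["c1", "c2"]))
# Two ground clauses: ¬Plays(c1) ∨ Fired(c1) and ¬Plays(c2) ∨ Fired(c2)
```

A clause with k variables over n constants yields n^k groundings, which is why the ground network can grow quickly.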
Markov Networks
Constants: Alice (A) and Bob (B)
[Diagram: ground network for the friends formula, with nodes Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B), Plays(A), Plays(B)]
Markov Logic Network
Constants: Alice (A) and Bob (B)
[Diagram: full ground network, adding the nodes Fired(A) and Fired(B) connected to Plays(A) and Plays(B)]
Markov Logic Network

Friends(x,y) | Plays(x) | Plays(y) | ω
True  | True  | True  | 3
True  | True  | False | 0
True  | False | True  | 0
True  | False | False | 3
False | True  | True  | 3
False | True  | False | 3
False | False | True  | 3
False | False | False | 3

Plays(x) | Fired(x) | ω
True  | True  | 2
True  | False | 0
False | True  | 2
False | False | 2
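For the tiny two-constant domain, the ground network is small enough to enumerate all worlds exhaustively. The following brute-force sketch (illustrative, not how real MLN inference is done) builds the ground atoms, scores each world with the weights from the slides, and computes the partition function Z:

```python
import math
from itertools import product

CONSTS = ["A", "B"]
ATOMS = ([("Friends", (a, b)) for a in CONSTS for b in CONSTS]
         + [("Plays", (a,)) for a in CONSTS]
         + [("Fired", (a,)) for a in CONSTS])

def weight(world):
    """Sum of the weights of all satisfied ground formulas."""
    s = 0.0
    for x, y in product(CONSTS, repeat=2):
        # Friends(x,y) => (Plays(x) <=> Plays(y)), weight 3
        if (not world[("Friends", (x, y))]) or (
                world[("Plays", (x,))] == world[("Plays", (y,))]):
            s += 3.0
    for x in CONSTS:
        # Plays(x) => Fired(x), weight 2
        if (not world[("Plays", (x,))]) or world[("Fired", (x,))]:
            s += 2.0
    return s

worlds = [dict(zip(ATOMS, vals))
          for vals in product([False, True], repeat=len(ATOMS))]
Z = sum(math.exp(weight(w)) for w in worlds)   # 2^8 = 256 worlds

def prob(world):
    return math.exp(weight(world)) / Z
```

This is exactly P(X = x) = (1/Z) exp(Σ_i ω_i n_i(x)), with n_i computed by looping over the groundings.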
Markov Logic Network
What is the probability that Alice and Bob are friends, both play PlayStation all day long, but neither gets fired?
World: Friends(A,A) = Friends(B,B) = Friends(A,B) = Friends(B,A) = 1, Plays(A) = Plays(B) = 1, Fired(A) = Fired(B) = 0
All four groundings of the friends formula are satisfied (ω = 3 each); both groundings of Plays(x) ⇒ Fired(x) are violated and contribute 0, so
P = exp(4 · 3 + 0) / Z
Markov Logic Network
What happens in the limit ω → ∞?
If all formulas are fulfilled: P(x) → 1/|X_sat| (uniform over the satisfying worlds)
If not all formulas are fulfilled: P(x) → 0
⇒ In the limit, the MLN behaves exactly like first-order logic.
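The limit behaviour can be checked numerically on the one-clause toy model (Plays ⇒ Fired), a sketch with made-up weight values:

```python
import math
from itertools import product

def world_probs(w):
    """Probabilities of the four (Plays, Fired) worlds for the single
    clause Plays => Fired with weight w."""
    def score(plays, fired):
        return math.exp(w if ((not plays) or fired) else 0.0)
    worlds = list(product([False, True], repeat=2))
    Z = sum(score(*x) for x in worlds)
    return {x: score(*x) / Z for x in worlds}

# As w grows, the violating world (Plays=True, Fired=False) vanishes and
# the three satisfying worlds approach the uniform probability 1/3.
probs_by_w = {w: world_probs(w)[(True, False)] for w in (1.0, 5.0, 20.0)}
```

Already at w = 20 the violating world's probability is numerically negligible, while each satisfying world is very close to 1/3.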
Markov Logic Network
What happens in the limit ω → ∞?
If all formulas are fulfilled by x: the exponent Σ_i ω_i n_i(x) is maximal, so all such worlds share the same largest weight and P(x) → 1/|X_sat|, where X_sat is the set of worlds fulfilling all formulas.
Markov Logic Network
What happens in the limit ω → ∞?
If not all formulas are fulfilled by x: the exponent falls short of the maximum by at least one weight ω, so the ratio to a fulfilling world is at most e^(−ω) → 0, and P(x) → 0.
Markov Logic Network
What is the probability that a formula F1 holds given that formula F2 does?
P(F1 | F2) = P(F1 ∧ F2) / P(F2) = Σ_{x ∈ X_F1 ∩ X_F2} P(x) / Σ_{x ∈ X_F2} P(x)
where X_F is the set of worlds in which F holds.
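The conditional query above can be answered by brute-force enumeration on a small model. This sketch reuses the one-clause toy model (Plays ⇒ Fired, weight 2, values made up) and computes P(F1 | F2) exactly as the ratio of the two sums:

```python
import math
from itertools import product

# Toy MLN: one weighted clause Plays => Fired (w = 2), worlds over two atoms.
W = 2.0

def score(plays, fired):
    return math.exp(W if ((not plays) or fired) else 0.0)

worlds = list(product([False, True], repeat=2))
Z = sum(score(*x) for x in worlds)

def prob_given(f1, f2):
    """P(F1 | F2) by enumeration; f1, f2 are predicates over a world."""
    num = sum(score(*x) for x in worlds if f1(*x) and f2(*x))
    den = sum(score(*x) for x in worlds if f2(*x))
    return num / den

# P(Fired | Plays) in this toy model:
p = prob_given(lambda plays, fired: fired, lambda plays, fired: plays)
```

The numerator and denominator share the partition function Z, so it cancels and never needs to be computed for this query.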
Markov Logic Network
Learning the weights: counting the number of true groundings is #P-complete
⇒ approximation is necessary
⇒ use the pseudo-likelihood:
P*_ω(X = x) = ∏_l P_ω(X_l = x_l | MB_x(X_l))
where MB_x(X_l) is the state of the Markov blanket of X_l in world x.
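Each factor of the pseudo-likelihood only needs the atom's own conditional probability, which can be computed by flipping that single atom. A toy sketch on the one-clause model (Plays ⇒ Fired, weight values made up):

```python
import math

def clause_weight(plays, fired, w):
    # Weight contributed by the single clause Plays => Fired.
    return w if ((not plays) or fired) else 0.0

def pseudo_likelihood(plays, fired, w):
    """Product over atoms of P(X_l = x_l | rest of the world):
    each conditional is computed by comparing the world's score with
    the score of the world where only atom l is flipped."""
    pl = 1.0
    for idx in range(2):                  # atom 0 = Plays, atom 1 = Fired
        world = [plays, fired]
        flipped = world[:]
        flipped[idx] = not flipped[idx]
        s_keep = math.exp(clause_weight(world[0], world[1], w))
        s_flip = math.exp(clause_weight(flipped[0], flipped[1], w))
        pl *= s_keep / (s_keep + s_flip)  # conditional of this atom
    return pl
```

Unlike the true likelihood, no sum over all worlds appears: each factor compares just two scores, which is what makes pseudo-likelihood tractable.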
Experiment I
Setting:
A database describing the Department of Computer Science and Engineering at the University of Washington
12 predicates (e.g., Professor, Student, Area, AdvisedBy, ...)
2707 constants
96 formulas (the knowledge base was provided by four volunteers who did not know the database but were members of the department)
The database was divided into five subsets, one per area (AI, graphics, programming languages, systems, theory)
⇒ in the end, 521 true ground atoms out of 58,457 possible
Experiment II
Testing:
Leave-one-out over the areas
Prediction of AdvisedBy(x, y)
Either with all information or with only partial information (everything except Student(x) and Professor(x))
Drawing the precision/recall curves
Computing the area under the curve (AUC)
Experiment III
MLNs were compared with:
Logic (only the logical KB, without probability)
Probability (only probabilistic relations, without a special knowledge representation): Naïve Bayes (NB) and Bayesian networks (BN)
Inductive logic programming (automatic construction of the KB): CLAUDIEN (clausal discovery engine)
Results
[Results overview: figure not preserved in this transcript]
Results I
[Precision/recall curves: all areas and AI area]
Results II
[Precision/recall curves: graphics area and programming languages area]
Results III
[Precision/recall curves: systems area and theory area]
Sample applications
Link prediction
Link-based clustering
Social network modeling
...
Conclusion
MLNs are a simple way to combine first-order logic and probability
They can be seen as templates for constructing ordinary Markov networks
Clauses can be learned with CLAUDIEN
Empirical tests with real-world data and knowledge are promising for the use of MLNs
Literature
Richardson, M., & Domingos, P. (2006). Markov logic networks. Machine Learning, 62(1–2), 107–136.