Improving Knowledge Graph Embedding Using Simple Constraints
Boyang Ding, Quan Wang, Bin Wang, Li Guo
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences; State Key Laboratory of Information Security, Chinese Academy of Sciences
ACL'18
Knowledge Graph Embedding Scoring Model
Knowledge Graph Embedding
Deep Neural Network: R-GCN, ConvE
Extra Information
  Entity Types: TKRL
  Relation Paths: PTransE
  Textual Descriptions: DKRL, TEKE
  Logical Rules: KALE, RUGE, ComplEx-R
KG Embedding with Constraints
Non-Negativity Constraints: entity vectors stay within the hypercube [0, 1]^d, giving sparsity and interpretability
Approximate Entailment Constraints: encode entailment between relations directly into their vectors, improving effectiveness
Basic Embedding Model -- ComplEx
Problem: model triples (e_i, r_k, e_j) over a set of entities and a set of relations
Representation: each entity and each relation as a complex-valued vector in C^d
Score Function: φ(e_i, r_k, e_j) = Re(⟨r_k, e_i, ē_j⟩) = Re(Σ_l [r_k]_l [e_i]_l [ē_j]_l)
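The ComplEx score above can be sketched in a few lines of NumPy (the function name `complex_score` is mine, not from the paper):

```python
import numpy as np

def complex_score(e_i, r_k, e_j):
    """ComplEx score phi(e_i, r_k, e_j) = Re(<r_k, e_i, conj(e_j)>),
    where each argument is a d-dimensional complex vector."""
    return np.real(np.sum(r_k * e_i * np.conj(e_j)))

# toy example with d = 2
rng = np.random.default_rng(0)
d = 2
e_i = rng.random(d) + 1j * rng.random(d)
e_j = rng.random(d) + 1j * rng.random(d)
r_k = rng.random(d) + 1j * rng.random(d)
print(complex_score(e_i, r_k, e_j))
```

Because the imaginary part of the relation vector interacts with the conjugated tail entity, the score is asymmetric in (e_i, e_j), which lets ComplEx model asymmetric relations.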
Non-Negativity of Entity Representations
Simple constraint: both the real and the imaginary part of every entity vector are kept within [0, 1]^d, i.e. 0 ⪯ Re(e), Im(e) ⪯ 1
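A minimal sketch of how such a constraint can be enforced in a projected-gradient setup: clip the real and imaginary parts of every entity vector back into [0, 1] after each update (the helper name `project_nonnegative` is mine):

```python
import numpy as np

def project_nonnegative(E):
    """Project complex entity embeddings so that both Re(E) and Im(E)
    lie in the hypercube [0, 1]^d; applied after each gradient step."""
    return np.clip(E.real, 0.0, 1.0) + 1j * np.clip(E.imag, 0.0, 1.0)

E = np.array([-0.5 + 1.7j, 0.3 + 0.4j])
print(project_nonnegative(E))
```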
Approximate Entailment for Relations
e.g. BornInCountry approximately entails Nationality: if (x, BornInCountry, y) holds, then (x, Nationality, y) usually holds as well
Approximate Entailment for Relations
If r_p entails r_q, the score of any triple should not decrease when r_p is replaced by r_q: φ(e_i, r_p, e_j) ≤ φ(e_i, r_q, e_j) for all e_i, e_j
With non-negative entity representations, a sufficient condition is Re(r_p) ⪯ Re(r_q) and Im(r_p) = Im(r_q)
Approximate Entailment for Relations
For an approximate entailment mined with confidence λ, the conditions are relaxed with a non-negative slack vector α: λ(Re(r_p) − Re(r_q)) ⪯ α and λ(Im(r_p) − Im(r_q))² ⪯ α, and the total slack is penalized in the loss
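One simple way to turn the relaxed conditions into a differentiable penalty is a hinge on the real-part violation plus the squared imaginary-part gap, each scaled by the confidence. This is a sketch of the idea rather than the paper's exact objective; `entailment_penalty` is my own naming:

```python
import numpy as np

def entailment_penalty(r_p, r_q, lam):
    """Soft penalty for the approximate entailment r_p -> r_q with
    confidence lam: real parts of r_p should not exceed those of r_q,
    and imaginary parts should roughly match."""
    re_violation = np.maximum(0.0, lam * (r_p.real - r_q.real))
    im_violation = lam * (r_p.imag - r_q.imag) ** 2
    return re_violation.sum() + im_violation.sum()
```

The penalty is zero exactly when the sufficient entailment conditions hold, and grows with the size of the violation, weighted by how confident the mined rule is.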
Overall Model
Minimize: ComplEx loss + slack penalty for approximate entailment + L2 regularization
Subject to: non-negativity of entity representations
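Putting the pieces together, a toy version of the overall objective might look like the following. All names, the logistic-loss form, and the exact weighting are illustrative, not the paper's implementation:

```python
import numpy as np

def softplus(x):
    """Numerically stable log(1 + exp(x))."""
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def total_loss(E, R, triples, labels, entailments, mu, eta):
    """Sketch of the overall objective: logistic loss over labeled triples
    + mu * entailment slack penalty + eta * L2 regularization.
    E, R: complex arrays of entity / relation embeddings.
    triples: (i, k, j) index triples with labels +1 / -1.
    entailments: (p, q, lam) mined rules r_p -> r_q with confidence lam."""
    loss = 0.0
    for (i, k, j), y in zip(triples, labels):
        phi = np.real(np.sum(R[k] * E[i] * np.conj(E[j])))
        loss += softplus(-y * phi)                      # logistic loss
    for p, q, lam in entailments:                       # slack penalty
        loss += mu * (np.maximum(0.0, lam * (R[p].real - R[q].real)).sum()
                      + (lam * (R[p].imag - R[q].imag) ** 2).sum())
    loss += eta * ((np.abs(E) ** 2).sum() + (np.abs(R) ** 2).sum())
    return loss
```

In training, this objective would be minimized by SGD, with the non-negativity constraint on E enforced separately (e.g. by projection after each step).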
Rewrite
The slack variables can be eliminated, turning the constrained problem into an unconstrained objective solved by SGD, with entity vectors projected back into [0, 1]^d after each update
Experiment 1 -- Link Prediction
Datasets: WN18 (from WordNet), FB15K (from Freebase), DB100K (newly built from the core of DBpedia)
Entailment Mining: AMIE+ run on each training set
Link Prediction
Experiment 2 -- Entity Representations Analysis
Compact and Interpretable Representations
Entity Representations Analysis: Semantic Purity
Experiment 3 -- Relation Representations
Contributions
Constraints imposed directly on entity and relation representations without grounding, so the method scales easily to large KGs
Constraints derived automatically from statistical properties: universal, requiring no manual effort, and applicable to almost all KGs
An individual representation for each entity, enabling successful predictions even between unpaired entities
Supplement on Embeddings
SGNS (skip-gram with negative sampling) vectors are arranged along a primary axis and are mostly non-negative (Mimno and Thompson, EMNLP'17)
Multiplicative entity embeddings become more compact (conicity increases and average vector length decreases) as the number of negative samples grows (Chandrahas et al., ACL'18)
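Conicity, the compactness measure from Chandrahas et al., is the average cosine similarity between each vector and the mean vector of the set; values near 1 mean the embeddings are bunched in a narrow cone. A minimal sketch (`conicity` is my naming of that metric):

```python
import numpy as np

def conicity(V):
    """Average cosine similarity between each row of V (one embedding
    per row) and the mean embedding of the set."""
    mean = V.mean(axis=0)
    cos = (V @ mean) / (np.linalg.norm(V, axis=1) * np.linalg.norm(mean))
    return cos.mean()

V = np.array([[1.0, 0.0], [0.0, 1.0]])
print(conicity(V))
```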