Improving Knowledge Graph Embedding Using Simple Constraints


Improving Knowledge Graph Embedding Using Simple Constraints. Boyang Ding, Quan Wang, Bin Wang, Li Guo. Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences; State Key Laboratory of Information Security, Chinese Academy of Sciences. ACL'18

Knowledge Graph Embedding Scoring Model

Knowledge Graph Embedding. Deep neural networks: R-GCN, ConvE. Extra information: entity types (TKRL), relation paths (PTransE), textual descriptions (DKRL, TEKE), logical rules (KALE, RUGE, ComplEx-R)

KG Embedding with Constraints. Non-negativity constraints: entity vectors stay within the hypercube [0, 1]^d, improving sparsity and interpretability. Approximate entailment constraints: encode entailment between relations directly into their vectors, improving effectiveness

Basic Embedding Model: ComplEx. Entities and relations are represented as complex-valued vectors; a triple is scored by the real part of a complex trilinear product of the subject, relation, and (conjugated) object vectors
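The ComplEx scoring function described above can be sketched in a few lines of NumPy; the embedding values below are illustrative only:

```python
import numpy as np

def complex_score(s, r, o):
    """ComplEx score for a triple (s, r, o): the real part of the
    trilinear product Re(<r, s, conj(o)>) over complex embeddings."""
    return float(np.real(np.sum(r * s * np.conj(o))))

# Toy 4-dimensional complex embeddings (random, illustrative only).
rng = np.random.default_rng(0)
s = rng.normal(size=4) + 1j * rng.normal(size=4)
r = rng.normal(size=4) + 1j * rng.normal(size=4)
o = rng.normal(size=4) + 1j * rng.normal(size=4)
score = complex_score(s, r, o)  # real-valued plausibility score
```

Because the object vector is conjugated, `complex_score(s, r, o)` generally differs from `complex_score(o, r, s)`, which is how ComplEx models asymmetric relations; with a purely real relation vector the score becomes symmetric.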

Non-Negativity of Entity Representations: a simple constraint restricting entity vectors to the hypercube [0, 1]^d
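One common way to enforce such a box constraint during training is to project the embedding back into [0, 1]^d after each gradient step; a minimal sketch, assuming this projected-gradient scheme:

```python
import numpy as np

def project_entity(e):
    """Clip the real and imaginary parts of a complex entity embedding
    into [0, 1], enforcing the non-negativity (hypercube) constraint.
    Typically applied after each gradient update."""
    return np.clip(e.real, 0.0, 1.0) + 1j * np.clip(e.imag, 0.0, 1.0)

e = np.array([1.7 - 0.3j, -0.2 + 0.5j])
e_proj = project_entity(e)  # -> [1.0 + 0.0j, 0.0 + 0.5j]
```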

Approximate Entailment for Relations, e.g., BornInCountry approximately entails Nationality

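An entailment r_p -> r_q mined with confidence lam can be turned into a soft penalty on the two relation vectors: push the real parts of r_p below those of r_q and keep the imaginary parts close. The hinge-style form below is a sketch; the paper's exact slack-variable formulation may differ:

```python
import numpy as np

def entailment_penalty(r_p, r_q, lam):
    """Soft penalty for the approximate entailment r_p -> r_q with
    confidence lam: hinge on elementwise Re(r_p) <= Re(r_q), plus a
    squared gap between the imaginary parts."""
    real_violation = np.maximum(r_p.real - r_q.real, 0.0).sum()
    imag_gap = ((r_p.imag - r_q.imag) ** 2).sum()
    return lam * (real_violation + imag_gap)

# If r_p already satisfies the constraint, the penalty is zero.
r_p = np.array([0.2 + 0.5j, 0.1 - 0.3j])
r_q = np.array([0.4 + 0.5j, 0.1 - 0.3j])
penalty = entailment_penalty(r_p, r_q, lam=0.9)  # -> 0.0
```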

Overall Model: the ComplEx loss plus L2 regularization and a slack penalty, subject to the approximate-entailment and non-negativity constraints
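Putting the pieces together, the overall objective combines the ComplEx logistic loss with L2 regularization and an entailment penalty, while non-negativity is handled separately on the entity vectors. A soft-penalty sketch; the hyperparameter names `eta` and `mu` are illustrative, not the paper's notation:

```python
import numpy as np

def softplus(x):
    # log(1 + exp(x)), computed stably
    return np.logaddexp(0.0, x)

def total_loss(triples, labels, E, R, rules, eta=0.01, mu=0.1):
    """Sketch of the overall objective:
    ComplEx logistic loss + eta * L2 + mu * entailment penalty.
    E, R map names to complex vectors; rules holds (p, q, lam) items."""
    data = sum(softplus(-y * np.real(np.sum(R[r] * E[s] * np.conj(E[o]))))
               for (s, r, o), y in zip(triples, labels))   # y in {+1, -1}
    l2 = sum(np.sum(np.abs(v) ** 2) for v in list(E.values()) + list(R.values()))
    ent = sum(lam * (np.maximum(R[p].real - R[q].real, 0.0).sum()
                     + ((R[p].imag - R[q].imag) ** 2).sum())
              for p, q, lam in rules)
    return float(data + eta * l2 + mu * ent)

# Tiny illustrative knowledge graph with one positive triple and one rule.
E = {'a': np.array([0.5 + 0.1j]), 'b': np.array([0.2 + 0.3j])}
R = {'p': np.array([1.0 + 0.0j]), 'q': np.array([1.2 + 0.0j])}
loss = total_loss([('a', 'p', 'b')], [1], E, R, [('p', 'q', 0.8)])
```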

Rewrite: the constrained objective is rewritten into an unconstrained form via the slack penalty

Experiment 1: Link Prediction. Datasets: WN18 (from WordNet), FB15K (from Freebase), DB100K (built on the core of DBpedia). Entailment mining: AMIE+ run on the training set


Experiment 2: Entity Representation Analysis. Compact and interpretable representations

Entity Representation Analysis: semantic purity

Experiment 3: Relation Representations

Contributions: (i) constraints are imposed directly on entity and relation representations without grounding, making the approach easy to scale to large KGs; (ii) constraints are derived automatically from statistical properties, so they are universal, require no manual effort, and apply to almost all KGs; (iii) each entity keeps an individual representation, enabling successful prediction between entity pairs never observed together

Supplement on Embeddings. SGNS (skip-gram with negative sampling) vectors are arranged along a primary axis and are mostly non-negative (Mimno and Thompson, EMNLP'17). Multiplicative entity embeddings become more compact (conicity increases and average vector length decreases) as the number of negative samples grows (Chandrahas et al., ACL'18)