1 NAGA: Searching and Ranking Knowledge
Gjergji Kasneci
Joint work with: Fabian M. Suchanek, Georgiana Ifrim, Maya Ramanath, and Gerhard Weikum

2 Motivation
Example queries:
- Which politicians are also scientists?
- Which gods do the Maya and the Greeks have in common?
Keyword queries are too weak to express advanced user intentions such as
- concepts,
- entity properties,
- relationships between entities.

3 Motivation (figure slide)

5 Keyword queries are too weak to express advanced user intentions such as
- concepts,
- entity properties,
- relationships between entities.
Data is not knowledge.
- Data extraction and organization needed

6 Query: ($x, isA, Politician), ($x, isA, Scientist)
Results: Benjamin Franklin, Paul Wolfowitz, Angela Merkel, …

7 Query: ($x, type, Mayan god), ($y, type, Greek god), with $x and $y connected through a common $z
Results ($z): Thunder god, Wisdom god, Agricultural god, …

9 Web → (information extraction) → universal knowledge in a semantic database (relational database, XML/XLinks, RDF) → entity/keyword/proximity search, ranking, and question answering.

Related systems:
- Information extraction: KnowItAll (Etzioni et al. WWW 2004), ALICE (Banko et al. K-CAP 2007), TextRunner (Banko et al. IJCAI 2007), YAGO (Suchanek et al. WWW 2007)
- Querying: Libra (Nie et al. WWW 2007), Entity Search (Cheng et al. CIDR 2007), ExDBMS (Cafarella et al. CIDR 2007), START (Katz et al. TREC 2005), BANKS (Bhalotia et al. ICDE 2002), DISCOVER (Hristidis et al. VLDB 2002), BLINKS (He et al. SIGMOD 2007)
- Question answering & ranking: NAGA

10 Outline
Framework
- Data model
- Query language
- Ranking model
Evaluation
- Setting
- Metrics
- Results

15 Framework (Data model)
Entity-relationship (ER) graph
- Node label: entity
- Edge label: relation
- Edge weight: relation "strength"
Fact
- Represented by an edge, e.g., Max Planck Institute -locatedIn-> Germany
Evidence pages for a fact f
- Web pages from which f was derived
Computation of fact confidence (i.e., edge weights) from the evidence pages
(Excerpt from YAGO: Suchanek et al. WWW 2007)
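To make the data model concrete, here is a minimal sketch in Python that represents facts as weighted, labeled edges. The slide shows the confidence formula only as an image, so the aggregation below, taking the best evidence page and multiplying extraction accuracy by source trust, is an illustrative assumption rather than NAGA's exact formula; all names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str   # source node label (entity)
    relation: str  # edge label
    obj: str       # target node label (entity)

@dataclass
class Evidence:
    url: str
    extraction_accuracy: float  # certainty of the IE step, in [0, 1]
    trust: float                # authority of the source page, in [0, 1]

# Hypothetical aggregation: a fact is as credible as its best evidence page.
def confidence(evidence_pages: list[Evidence]) -> float:
    if not evidence_pages:
        return 0.0
    return max(e.extraction_accuracy * e.trust for e in evidence_pages)

# A tiny excerpt of the ER graph: facts mapped to their edge weights.
evidence = {
    Fact("Max Planck Institute", "locatedIn", "Germany"): [
        Evidence("https://en.wikipedia.org/wiki/Max_Planck_Society", 0.95, 0.9),
    ],
}
er_graph = {fact: confidence(pages) for fact, pages in evidence.items()}
print(er_graph)  # {Fact(... locatedIn ...): 0.855}
```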

16 Framework (Query language)
R: set of relationship labels
RegEx(R): set of regular expressions over R-labels
E: set of entity labels
V: set of variables

Definition (fact template)
A fact template is a triple (e1, r, e2) where e1, e2 ∈ E ∪ V and r ∈ RegEx(R) ∪ V.

Examples:
- (Albert Einstein, $x, Mileva Maric)
- (Liu, givenNameOf | familyNameOf, $x)
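A fact template can be checked against a concrete fact by treating '$'-prefixed slots as variables and the relation slot as a regular expression over relation labels, exactly as the definition states. A minimal sketch (the helper `matches` and the sample facts are illustrative, not from the slides):

```python
import re

def matches(template, fact):
    """template and fact are (subject, relation, object) triples; template
    parts starting with '$' are variables, and the relation part may be a
    regular expression over relation labels."""
    t_subj, t_rel, t_obj = template
    f_subj, f_rel, f_obj = fact
    if not t_subj.startswith("$") and t_subj != f_subj:
        return False
    if not t_obj.startswith("$") and t_obj != f_obj:
        return False
    if not t_rel.startswith("$") and not re.fullmatch(t_rel, f_rel):
        return False
    return True

facts = [
    ("Liu", "familyNameOf", "Liu Xiaobo"),
    ("Albert Einstein", "marriedTo", "Mileva Maric"),
]
template = ("Liu", "givenNameOf|familyNameOf", "$x")
print([f for f in facts if matches(template, f)])
# -> [('Liu', 'familyNameOf', 'Liu Xiaobo')]
```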

18 Framework (Query language)
Definition (NAGA query)
A NAGA query is a connected directed graph in which each edge represents a fact template.

Examples:
1) Which physicist was born in the same year as Max Planck?
   ($x, isA, Physicist), ($x, bornInYear, $y), (Max Planck, bornInYear, $y)
2) Which politician is also a scientist?
   ($x, isA, Politician), ($x, isA, Scientist)
3) Which scientists are called Liu?
   ($x, isA, Scientist), (Liu, givenNameOf | familyNameOf, $x)
4) Which mountain is located in Africa?
   ($x, isA, Mountain), ($x, locatedIn*, Africa)
5) What connects Einstein and Bohr?
   (Albert Einstein, *, Niels Bohr)

Framework (Query language)
Definition (NAGA answer)
A NAGA answer is a subgraph of the underlying ER graph that matches the query graph.

Examples:
1) Which physicist was born in the same year as Max Planck?
   Query: ($x, isA, Physicist), ($x, bornInYear, $y), (Max Planck, bornInYear, $y)
   Answer: (Mihajlo Pupin, isA, Physicist), (Mihajlo Pupin, bornInYear, 1858), (Max Planck, bornInYear, 1858)
2) Which mountain is located in Africa?
   Query: ($x, isA, Mountain), ($x, locatedIn*, Africa)
   Answer: (Kilimanjaro, isA, Mountain), (Kilimanjaro, locatedIn, Tanzania), (Tanzania, locatedIn, Africa)
3) What connects Einstein and Bohr?
   Query: (Albert Einstein, *, Niels Bohr)
   Answer: (Albert Einstein, hasWonPrize, Nobel Prize), (Niels Bohr, hasWonPrize, Nobel Prize)
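Matching a whole query then means finding one consistent variable binding under which every fact template matches some edge of the ER graph. A brute-force backtracking sketch over an in-memory fact list (regular expressions over relations are omitted for brevity; the slides do not show NAGA's actual query processor, so this only makes "matches" precise):

```python
def match_query(templates, facts, binding=None):
    """Yield variable bindings under which every template matches some fact."""
    binding = binding or {}
    if not templates:
        yield dict(binding)
        return
    (s, r, o), rest = templates[0], templates[1:]
    for fs, fr, fo in facts:
        trial = dict(binding)
        ok = True
        for part, value in ((s, fs), (r, fr), (o, fo)):
            if part.startswith("$"):
                if trial.setdefault(part, value) != value:
                    ok = False  # variable already bound to something else
                    break
            elif part != value:
                ok = False      # constant slot does not match this fact
                break
        if ok:
            yield from match_query(rest, facts, trial)

facts = [
    ("Mihajlo Pupin", "isA", "Physicist"),
    ("Mihajlo Pupin", "bornInYear", "1858"),
    ("Max Planck", "bornInYear", "1858"),
]
query = [("$x", "isA", "Physicist"),
         ("$x", "bornInYear", "$y"),
         ("Max Planck", "bornInYear", "$y")]
for b in match_query(query, facts):
    print(b)  # {'$x': 'Mihajlo Pupin', '$y': '1858'}
```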

26 Framework (Ranking model)
Question: How to rank multiple matches to the same query?

Ranking desiderata:
Confidence: prefer correct answers
- Certainty of the information extraction
- Trust/authority of the source
  e.g., bornIn(Max_Planck, Kiel), extracted from "Max Planck born in Kiel" (source: Wikipedia), versus livesIn(Elvis_Presley, Mars), extracted from "They believe Elvis hides on Mars" (source: The One and Only King's Blog)
Informativeness: prefer prominent results
- Frequency of facts in the corpus
  e.g., for the query (Einstein, isA, $x), prefer the answer "Einstein isA scientist" over "Einstein isA vegetarian"
Compactness: prefer "tightly" connected answers
- Size of the answer graph
  (figure: connections between Einstein and Bohr, contrasting the compact link through the shared Nobel Prize with longer detours through other entities)
NAGA exploits language models for ranking.

27 Framework (Ranking model)
Statistical language models for document IR [Maron/Kuhns 1960, Ponte/Croft 1998, Lafferty/Zhai 2001]
- Each document d has a language model: a generative probability distribution with parameters θ
- The query q is viewed as a sample
- Estimate the likelihood that q is a sample of the LM of document d
- Rank documents by descending likelihood (the best "explanation" of q)
- Maximum-likelihood estimation suffers from sparseness, hence a mixture with a background model (smoothing)
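Spelled out, the standard query-likelihood model the slide summarizes scores a document d by the probability that d's language model generated q, smoothed with a corpus-wide background model. One common instantiation (Jelinek-Mercer smoothing; the slide does not commit to a particular smoothing method) is

$$P(q \mid \theta_d) \;=\; \prod_{t \in q} \Bigl( \lambda\, \hat{P}(t \mid d) \;+\; (1-\lambda)\, P(t \mid \mathrm{corpus}) \Bigr), \qquad \hat{P}(t \mid d) = \frac{tf(t,d)}{|d|},$$

and documents are ranked by descending $P(q \mid \theta_d)$.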

28 Framework (Ranking model)
Scoring answers: a query q consists of fact templates q1 q2 … qn, e.g., (Albert Einstein, isA, $x).
A candidate answer g consists of matching facts g1 g2 … gn, e.g., (Albert Einstein, isA, Physicist).
We use generative mixture models to compute P[q | g], where
- the background model is estimated by correlation statistics,
- informativeness is estimated using the knowledge-base graph structure,
- confidence is based on IE accuracy and authority analysis.
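By analogy with the document case on the previous slide, a plausible form of the answer-scoring mixture (the slide names the ingredients but not the exact formula, so the weights and factorization below are assumptions) is

$$P[q \mid g] \;=\; \prod_{i=1}^{n} \Bigl( \alpha\, P[q_i \mid g_i] \;+\; (1-\alpha)\, P[q_i] \Bigr),$$

where $P[q_i \mid g_i]$ captures the informativeness of fact $g_i$ for template $q_i$ (estimated from the knowledge-base graph structure), $P[q_i]$ is the background model (estimated from correlation statistics), and the confidence of $g_i$ (IE accuracy and source authority) enters through the edge weights.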

29 Framework (Ranking model): Informativeness
Consider the query (Albert Einstein, isA, $x).
Possible results: (Albert Einstein, isA, Physicist), (Albert Einstein, isA, Vegetarian).
NAGA ranking (informativeness): Physicist is ranked first.

30 Framework (Ranking model): Informativeness
Consider the query (Albert Einstein, isA, $x).
Possible results: (Albert Einstein, isA, Vegetarian), (Albert Einstein, isA, Physicist).
BANKS ranking (Bhalotia et al. ICDE 2002)
- Relies only on the underlying graph structure
- The importance of an entity is proportional to its degree
- Here the Vegetarian node has the higher degree, so Vegetarian is deemed more important than Physicist and is ranked first.
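The contrast between the two rankings can be made concrete: BANKS-style scoring favors the answer whose entities have high degree in the graph, while NAGA's informativeness favors facts that are frequent in the corpus. A toy sketch with made-up counts (purely illustrative):

```python
# Toy statistics, purely illustrative.
degree = {"Vegetarian": 120_000, "Physicist": 15_000}  # degree of the class node
corpus_freq = {("Albert Einstein", "isA", "Physicist"): 900,
               ("Albert Einstein", "isA", "Vegetarian"): 40}  # witness counts

def banks_score(answer):            # degree-based entity importance
    return degree[answer[2]]

def naga_informativeness(answer):   # frequency of the specific fact
    return corpus_freq[answer]

answers = list(corpus_freq)
print(max(answers, key=banks_score))           # ... 'isA', 'Vegetarian')
print(max(answers, key=naga_informativeness))  # ... 'isA', 'Physicist')
```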

31 Evaluation (Setting)
Knowledge graph: YAGO (Suchanek et al. WWW 2007), 16 million facts
85 NAGA queries:
- 55 queries from TREC 2005/2006
- 12 queries from the work on SphereSearch (Graupmann et al. VLDB 2005)
- 18 regular-expression queries provided by us

33 Evaluation (Setting)
The queries were issued to:
- Google
- Yahoo! Answers
- START
- NAGA with BANKS scoring, which relies only on the structure of the underlying graph (see Bhalotia et al. ICDE 2002)
- NAGA with NAGA scoring
The top-10 answers were assessed by 20 human judges as relevant (2), less relevant (1), or irrelevant (0).

36 Evaluation (Metrics & Results)
NDCG (normalized discounted cumulative gain)
- rewards result lists in which relevant results are ranked higher than less relevant ones
- useful when comparing result lists of different lengths
We also measured how satisfied the user was on average with the first answer of the search engine.
Wilson confidence intervals were computed at the 95% confidence level.

Benchmark     #Q  #A    Metric  Google           Yahoo! Answers   START            BANKS scoring    NAGA scoring
TREC          55  1098  NDCG    75.88% / 67.81%  26.15% / 17.20%  75.38% / 73.23%  87.93% / 69.54%  92.75% / 84.40%
SphereSearch  12  343   NDCG    38.22% / 19.38%  17.23% / 6.15%   2.87%            88.82% / 84.28%  91.01% / 84.94%
OWN           18  418   NDCG    54.09% / 27.95%  17.98% / 6.57%   13.35% / 13.57%  85.59% / 76.54%  91.33% / 86.56%
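For reference, both reported statistics are standard. A short sketch of NDCG over graded judgments (2 = relevant, 1 = less relevant, 0 = irrelevant) and of the Wilson score interval; one common DCG variant is shown, and the slide does not specify which variant was used:

```python
import math

def dcg(grades):
    return sum(g / math.log2(i + 2) for i, g in enumerate(grades))

def ndcg(grades):
    ideal = dcg(sorted(grades, reverse=True))  # best possible ordering
    return dcg(grades) / ideal if ideal > 0 else 0.0

def wilson_interval(successes, n, z=1.96):  # z = 1.96 for 95% confidence
    p = successes / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    margin = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - margin, center + margin

print(ndcg([2, 0, 1]))           # imperfect ordering -> NDCG < 1 (about 0.95)
print(wilson_interval(80, 100))  # roughly (0.711, 0.867)
```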

39 Summary
NAGA is a search engine for advanced querying of information in ER graphs
- NAGA queries
- NAGA answers
We propose a novel scoring mechanism
- based on generative language models,
- applied to the specific and previously unexplored setting of ER graphs,
- incorporating confidence, informativeness, and compactness.
The viability of the approach is demonstrated in comparison with state-of-the-art search engines and QA systems.

40 Thank you
NAGA:
YAGO: