CLEF 2008 – Århus. Robust – Word Sense Disambiguation exercise. UBC: Eneko Agirre, Oier Lopez de Lacalle, Arantxa Otegi, German Rigau. UVA & Irion: Piek Vossen.

Presentation transcript:

CLEF 2008 – Århus. Robust – Word Sense Disambiguation exercise. UBC: Eneko Agirre, Oier Lopez de Lacalle, Arantxa Otegi, German Rigau. UVA & Irion: Piek Vossen. UH: Thomas Mandl

Introduction
- Robust: emphasize difficult topics using a non-linear combination of per-topic results (GMAP)
- This year, also automatic word sense annotation:
  - English documents and topics (English WordNet)
  - Spanish topics (Spanish WordNet, closely linked to the English WordNet)
- Participants explore how the word senses (plus the semantic information in wordnets) can be used in IR and CLIR
- See also the QA-WSD exercise, which uses the same set of documents
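GMAP is the geometric mean of the per-topic average precision (AP) scores, GMAP = (AP_1 x AP_2 x ... x AP_n)^(1/n), so a single near-zero topic drags the whole score down, which is exactly why it rewards robustness on difficult topics; MAP, the arithmetic mean, barely reacts. A minimal sketch (the epsilon floor for zero-AP topics is an assumption; evaluation tools typically clamp AP at some small constant):

```python
import math

def map_score(aps):
    """Mean Average Precision: arithmetic mean of per-topic AP."""
    return sum(aps) / len(aps)

def gmap_score(aps, eps=1e-5):
    """Geometric MAP: geometric mean of per-topic AP.
    Zero AP values are floored at eps (an assumption) so the
    logarithm, and hence the product, is always defined."""
    logs = [math.log(max(ap, eps)) for ap in aps]
    return math.exp(sum(logs) / len(logs))

aps = [0.60, 0.55, 0.001]    # one very hard topic
print(map_score(aps))        # ~0.38: the hard topic has limited impact
print(gmap_score(aps))       # ~0.07: the hard topic dominates the score
```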

Documents
- News collection: LA Times 94, Glasgow Herald 95
- Sense information added to all content words:
  - Lemma
  - Part of speech
  - Weight of each sense in WordNet 1.6
- XML format, with DTD provided
- Two leading WSD systems: National University of Singapore, University of the Basque Country
- Significant effort (100M-word corpus)
- Special thanks to Hwee Tou Ng and colleagues from NUS and Oier Lopez de Lacalle from UBC

Documents: example XML
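An illustrative fragment of the annotation just described: each content word carries its lemma, part of speech, and a weighted list of WordNet 1.6 sense codes. Treat the element and attribute names as approximations of the track's DTD, and the IDs and synset codes as made up for illustration:

```xml
<TERM ID="GH950102-000000-3" LEMA="bank" POS="NN">
  <WF>banks</WF>
  <!-- one SYNSET element per sense, weighted by WSD confidence -->
  <SYNSET SCORE="0.80" CODE="06227059-n"/>
  <SYNSET SCORE="0.20" CODE="06112779-n"/>
</TERM>
```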

Topics
- We used existing CLEF topics in English and Spanish:
  - 2001: topics 41-90 (LA 94)
  - 2002: topics 91-140 (LA 94)
  - 2003: topics 141-200 (GH 95)
  - 2004: topics 201-250 (LA 94, GH 95)
  - 2005: topics 251-300 (LA 94, GH 95)
  - 2006: topics 301-350 (LA 94, GH 95)
- First three years as training (plus relevance judgements)
- Last three years for testing

Topics: WSD
- English topics were disambiguated by both the NUS and UBC systems
- Spanish topics: no large-scale WSD system available, so we used the first-sense heuristic
- Word sense codes are shared between the Spanish and English wordnets
- Sense information added to all content words:
  - Lemma
  - Part of speech
  - Weight of each sense in WordNet 1.6
- XML format, with DTD provided

Topics: WSD example
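In the same illustrative markup (same caveats as the document example: element names, IDs and codes are assumptions), a Spanish topic word annotated with the first-sense heuristic puts all the weight on sense 1, and the synset code is the same one an English occurrence of the concept would carry, which is what enables cross-language matching:

```xml
<TERM ID="C041-ES-4" LEMA="banco" POS="NN">
  <WF>bancos</WF>
  <!-- first-sense heuristic: all the weight on the first WordNet sense -->
  <SYNSET SCORE="1.00" CODE="06227059-n"/>
</TERM>
```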

Evaluation
- Reused relevance assessments from previous years
- Relevance assessments for the training topics were provided alongside them
- Measures: MAP and GMAP
- Participants had to send at least one run that did not use WSD and one run that used WSD

Participation
- 8 official participants, plus two late ones:
  - Martínez et al. (Univ. of Jaén)
  - Navarro et al. (Univ. of Alicante)
- 45 monolingual runs
- 18 bilingual runs

Monolingual results
- MAP: the best run does not use WSD, but 3 participants improve their results using WSD
- GMAP: the best run uses WSD; 3 participants improve their results using WSD

Monolingual: using WSD
- UNINE: synset indexes, combined with results from other indexes
  - Improvement in GMAP
- UCM: query expansion using structured queries
  - Improvement in MAP and GMAP
- IXA: expand to all synonyms of all senses in topics, best sense in documents
  - Improvement in MAP
- GENEVA: synset indexes, expanding to synonyms and hypernyms
  - No improvement, except for some topics
- UFRGS: only use lemmas (plus multiwords)
  - Improvement in MAP and GMAP
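Several of these systems boil down to expanding a term with the WordNet synonyms of either its best sense or all of its senses. A minimal sketch of the two strategies using NLTK's WordNet interface (NLTK ships WordNet 3.x rather than the track's WordNet 1.6, so sense inventories differ; this is an illustration, not a reconstruction of any participant's system):

```python
from nltk.corpus import wordnet as wn

def expand_best_sense(lemma, pos=wn.NOUN):
    """Synonyms of the first (most frequent) sense only."""
    synsets = wn.synsets(lemma, pos=pos)
    if not synsets:
        return {lemma}
    return {l.name().replace('_', ' ') for l in synsets[0].lemmas()} | {lemma}

def expand_all_senses(lemma, pos=wn.NOUN):
    """Synonyms of every sense: higher recall, more noise."""
    terms = {lemma}
    for s in wn.synsets(lemma, pos=pos):
        terms |= {l.name().replace('_', ' ') for l in s.lemmas()}
    return terms

print(expand_best_sense('bank'))  # synonyms of the most frequent sense only
print(expand_all_senses('bank'))  # depository, riverbank, tilt, ... all senses
```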

Monolingual: using WSD (cont.)
- UNIBA: combine synset indexes (best sense)
  - Improvements in MAP
- Univ. of Alicante: expand to all synonyms of the best sense
  - Improvement on train / decrease on test
- Univ. of Jaén: combine synset indexes (best sense)
  - No improvement, except for some topics

Bilingual results
- MAP and GMAP: best results for non-WSD runs
- Only IXA and UNIBA improve using WSD, but with very low GMAP values

Bilingual: using WSD
- IXA: wordnets as the sole source of translations
  - Improvement in MAP
- UNIGE: translation of the topic for the baseline
  - No improvement
- UFRGS: association rules from parallel corpora, plus use of lemmas (no WSD)
  - No improvement
- UNIBA: wordnets as the sole source of translations
  - Improvement in both MAP and GMAP
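Using wordnets as the sole translation resource relies on the shared synset codes: map a Spanish topic lemma to its synsets, then take the English lemmas of those synsets as candidate translations. A rough sketch via NLTK's Open Multilingual WordNet, which links languages through WordNet 3.0 synsets rather than the WordNet 1.6 codes used in the track:

```python
from nltk.corpus import wordnet as wn
# requires: nltk.download('wordnet'); nltk.download('omw-1.4')

def translate_via_wordnet(lemma_es):
    """Translate a Spanish lemma to English through shared synsets."""
    translations = set()
    for synset in wn.synsets(lemma_es, lang='spa'):
        # every English lemma of a synset the Spanish word belongs to
        translations |= {l.name().replace('_', ' ') for l in synset.lemmas()}
    return translations

print(translate_via_wordnet('banco'))  # e.g. bank, bench, school (of fish), ...
```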

Conclusions and future
- Novel dataset with WSD of documents
- Successful participation: 8+2 participants
- Some positive results with top-scoring systems
- Room for improvement and for new techniques
- Analysis:
  - Correlation with polysemy and difficult topics underway
  - Manual analysis of topics which improve with WSD
- New proposal for 2009

CLEF 2008 – Århus. Robust – Word Sense Disambiguation exercise. Thank you!


Word senses can help CLIR
- We will provide state-of-the-art WSD tags
- For the first time we offer a sense-disambiguated collection:
  - All senses with confidence scores (error propagation)
  - Participants can choose how to use it (e.g. nouns only)
  - We also provide synonyms/translations for the senses
- The disambiguated collection allows for:
  - Expanding the collection to synonyms and broader terms
  - Translation to all languages that have a wordnet
  - Focused expansion/translation of the collection: higher recall
  - Sense-based blind relevance feedback
  - There is more information in the documents
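Here "broader terms" means hypernyms: once a document token is tagged with a synset, the collection can be expanded with that synset's synonyms plus the lemmas of its hypernyms. A hedged sketch in the same NLTK setting as the earlier examples (WordNet 3.x again; the depth limit is an arbitrary choice, not part of the track):

```python
from nltk.corpus import wordnet as wn

def expansion_terms(synset, levels=1):
    """Synonyms of a tagged synset plus lemmas of its hypernyms
    up to `levels` steps higher in the hierarchy."""
    terms = {l.name().replace('_', ' ') for l in synset.lemmas()}
    frontier = [synset]
    for _ in range(levels):
        frontier = [h for s in frontier for h in s.hypernyms()]
        for h in frontier:
            terms |= {l.name().replace('_', ' ') for l in h.lemmas()}
    return terms

# a document token tagged with the financial-institution sense of "bank"
print(expansion_terms(wn.synset('bank.n.02')))
# e.g. {'bank', 'banking company', 'financial institution', ...}
```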

CLIR WSD exercise
- Add the WSD-tagged collection/topics as an additional "language" in the ad-hoc task:
  - Same topics
  - Same document collection
  - Just offer an additional resource
- An additional run: with and without WSD
- Tasks: X2ENG and ENG2ENG (control)
- Extra resources needed: relevance assessment of the additional runs

Usefulness of WSD in IR/CLIR is disputed, but…
- Real experiments, compared to artificial ones
- Expansion, compared to WSD alone
- Weighted list of senses, compared to best sense only
- Control over which words to disambiguate
- WSD technology has improved: coarser-grained senses (90% accuracy at SemEval-2007)

QA WSD pilot exercise
- Add the WSD-tagged collection/queries to the multilingual QA task:
  - Same topics
  - LA 94 and GH 95 (not Wikipedia)
- In addition to the word senses, we provide synonyms/translations for those senses
- Need to send one run to the multilingual QA task; 2 runs, with and without WSD
- Tasks: X2ENG and ENG2ENG (for QA WSD participants only)
- Extra resources needed: relevance assessment of the additional runs

QA WSD pilot exercise (details)
- Wikipedia won't be disambiguated
- Only a subset of the main QA task will be comparable
- In the main QA task, multiple answers are required
- In addition to the normal evaluation, evaluate the first reply not coming from Wikipedia

WSD 4 AVE
- In addition to the word senses, we provide synonyms/translations for those senses
- Need to send two runs (one more than other participants): with and without WSD
- Tasks: X2ENG and ENG2ENG (control)
- Additional resources: word sense tags for the snippets returned by QA results (automatic mapping to the original document collection)