CLEF Århus
Robust – Word Sense Disambiguation exercise
UBC: Eneko Agirre, Oier Lopez de Lacalle, Arantxa Otegi, German Rigau
UVA & Irion: Piek Vossen
UH: Thomas Mandl
Introduction
- Robust: emphasize difficult topics, using a non-linear combination of topic results (GMAP)
- This year also automatic word sense annotation:
  - English documents and topics (English WordNet)
  - Spanish topics (Spanish WordNet, closely linked to the English WordNet)
- Participants explore how the word senses (plus the semantic information in wordnets) can be used in IR and CLIR
- See also the QA-WSD exercise, which uses the same set of documents
Documents
- News collection: LA Times 94, Glasgow Herald 95
- Sense information added to all content words:
  - Lemma
  - Part of speech
  - Weight of each sense in WordNet 1.6
- XML with DTD provided
- Two leading WSD systems:
  - National University of Singapore
  - University of the Basque Country
- Significant effort (100M-word corpus)
- Special thanks to Hwee Tou Ng and colleagues from NUS and to Oier Lopez de Lacalle from UBC
Documents: example XML
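A sketch of what such an annotated term can look like. The element and attribute names and the sense codes below are hypothetical, chosen only to illustrate the fields the previous slide lists (lemma, part of speech, and a weight for each WordNet 1.6 sense); the DTD actually distributed with the collection may differ:

    <TERM ID="GH950102-000000-5" LEMA="bank" POS="NN">
      <WF>banks</WF>
      <SYNSET SCORE="0.80" CODE="06227059-n"/>
      <SYNSET SCORE="0.20" CODE="06112609-n"/>
    </TERM>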
Topics
We used existing CLEF topics in English and Spanish:

  Year   Topics    Collection
  2001   41-90     LA 94
  2002   91-140    LA 94
  2004   201-250   GH 95
  2003   141-200   LA 94, GH 95
  2005   251-300   LA 94, GH 95
  2006   301-350   LA 94, GH 95

- First three as training (plus relevance judgments)
- Last three for testing
Topics: WSD
- English topics were disambiguated by both the NUS and UBC systems
- Spanish topics: no large-scale WSD system available, so we used the first-sense heuristic
- Word sense codes are shared between the Spanish and English wordnets
- Sense information added to all content words:
  - Lemma
  - Part of speech
  - Weight of each sense in WordNet 1.6
- XML with DTD provided
Topics: WSD example
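Since the Spanish topics were tagged with the first-sense heuristic, here is a minimal sketch of that heuristic, using NLTK's WordNet interface as a stand-in (an assumption: the exercise used WordNet 1.6, whose sense codes differ from the WordNet version NLTK ships):

    from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

    def first_sense(lemma, pos):
        """First-sense heuristic: take the sense WordNet lists first,
        i.e. the most frequent sense according to its corpus counts."""
        synsets = wn.synsets(lemma, pos=pos)
        return synsets[0] if synsets else None

    s = first_sense('bank', wn.NOUN)
    # The synset code (offset + POS) is what aligned wordnets share,
    # which is why Spanish and English topics can carry the same tags.
    print('%08d-%s' % (s.offset(), s.pos()), s.definition())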
Evaluation
- Reused relevance assessments from previous years
- Relevance assessments for the training topics were provided alongside the training topics
- MAP and GMAP
- Participants had to send at least one run which did not use WSD and one run which did
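MAP averages per-topic average precision arithmetically, while GMAP takes the geometric mean, so the near-zero AP of difficult topics dominates the score. A minimal sketch, assuming per-topic AP values are already computed; the epsilon for zero-AP topics is an assumption (trec_eval-style implementations use a similar one):

    import math

    def map_score(aps):
        """Mean Average Precision: arithmetic mean of per-topic AP."""
        return sum(aps) / len(aps)

    def gmap_score(aps, eps=1e-5):
        """Geometric mean of per-topic AP; the epsilon keeps zero-AP
        topics from sending the whole score to zero while still
        letting hard topics dominate the result."""
        return math.exp(sum(math.log(max(ap, eps)) for ap in aps) / len(aps))

    aps = [0.45, 0.30, 0.02, 0.0]  # hypothetical per-topic AP values
    print(map_score(aps))          # ~0.19: hard topics barely register
    print(gmap_score(aps))         # ~0.01: hard topics dominate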
Participation
- 8 official participants, plus two late ones:
  - Martínez et al. (Univ. of Jaen)
  - Navarro et al. (Univ. of Alicante)
- 45 monolingual runs
- 18 bilingual runs
Monolingual results
- MAP: best run does not use WSD, but 3 participants improve their MAP using WSD
- GMAP: best run uses WSD; 3 participants improve their GMAP using WSD
Monolingual: using WSD
- UNINE: synset indexes, combined with results from other indexes
  - Improvement in GMAP
- UCM: query expansion using structured queries
  - Improvement in MAP and GMAP
- IXA: expand to all synonyms of all senses in topics, best sense in documents
  - Improvement in MAP
- GENEVA: synset indexes, expanding to synonyms and hypernyms
  - No improvement, except for some topics
- UFRGS: only use lemmas (plus multiwords)
  - Improvement in MAP and GMAP
Monolingual: using WSD
- UNIBA: combine synset indexes (best sense)
  - Improvement in MAP
- Univ. of Alicante: expand to all synonyms of the best sense (see the sketch below)
  - Improvement on train / decrease on test
- Univ. of Jaen: combine synset indexes (best sense)
  - No improvement, except for some topics
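Several of the approaches above expand query terms with synonyms of the annotated senses. A minimal sketch of the "synonyms of the best sense" variant, using NLTK's WordNet as a stand-in for the WordNet 1.6 annotations the exercise distributed:

    from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

    def expand_best_sense(lemma, pos=wn.NOUN):
        """Expand a query term with the synonyms of its best sense.
        In the exercise the best sense comes from the WSD weights in
        the topic annotation; here we fall back on WordNet's
        frequency-based sense order as a stand-in."""
        synsets = wn.synsets(lemma, pos=pos)
        if not synsets:
            return [lemma]
        best = synsets[0]
        terms = {l.replace('_', ' ') for l in best.lemma_names()}
        terms.add(lemma)
        return sorted(terms)

    print(expand_best_sense('movie'))
    # e.g. ['film', 'flick', 'motion picture', 'movie', 'pic', ...]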
Bilingual results
- MAP and GMAP: best results come from non-WSD runs
- Only IXA and UNIBA improve by using WSD, but with very low GMAP
Bilingual: using WSD
- IXA: wordnets as the sole source for translation (see the sketch below)
  - Improvement in MAP
- UNIGE: translation of topic for baseline
  - No improvement
- UFRGS: association rules from parallel corpora, plus use of lemmas (no WSD)
  - No improvement
- UNIBA: wordnets as the sole source for translation
  - Improvement in both MAP and GMAP
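IXA and UNIBA translate terms by following synset codes across aligned wordnets: any lemma attached to the same synset in the target-language wordnet is a candidate translation. A minimal sketch of that idea, assuming NLTK plus the Open Multilingual WordNet as a stand-in for the exercise's aligned English/Spanish wordnets:

    from nltk.corpus import wordnet as wn  # needs nltk.download('wordnet') and 'omw-1.4'

    def translate_via_wordnet(lemma, pos=wn.NOUN, lang='spa'):
        """Translate a term by mapping it to its synsets and reading
        off the lemmas that the aligned target-language wordnet
        attaches to the same synsets."""
        candidates = set()
        for synset in wn.synsets(lemma, pos=pos):
            candidates.update(l.replace('_', ' ')
                              for l in synset.lemma_names(lang))
        return sorted(candidates)

    print(translate_via_wordnet('house'))  # 'casa' among the candidates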
Conclusions and future
- Novel dataset with WSD of documents
- Successful participation: 8+2 participants
- Some positive results with top-scoring systems
- Room for improvement and for new techniques
- Analysis:
  - Correlation with polysemy and difficult topics underway
  - Manual analysis of topics which improve with WSD
- New proposal for 2009
CLEF Århus
Robust – Word Sense Disambiguation exercise
Thank you!
Word senses can help CLIR
- We will provide state-of-the-art WSD tags:
  - For the first time we offer a sense-disambiguated collection
  - All senses with confidence scores (error propagation)
  - The participant can choose how to use them (e.g. nouns only)
  - We also provide synonyms/translations for the senses
- The disambiguated collection allows for:
  - Expanding the collection to synonyms and broader terms (see the sketch after this slide)
  - Translation to all languages that have a wordnet
  - Focused expansion/translation of the collection
  - Higher recall
  - Sense-based blind relevance feedback
- There is more information in the documents
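A minimal sketch of the "broader terms" expansion mentioned above, again with NLTK's WordNet as a stand-in: walk the hypernym links up from each sense of a word (with sense weights available, the walk could be restricted to the highly weighted senses):

    from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

    def broader_terms(lemma, pos=wn.NOUN, depth=1):
        """Collect broader terms by walking hypernym links up to
        `depth` levels from every sense of the word."""
        terms = set()
        for synset in wn.synsets(lemma, pos=pos):
            frontier = [synset]
            for _ in range(depth):
                frontier = [h for s in frontier for h in s.hypernyms()]
                for s in frontier:
                    terms.update(l.replace('_', ' ') for l in s.lemma_names())
        return sorted(terms)

    print(broader_terms('car'))  # 'motor vehicle', 'compartment', ...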
CLIR WSD exercise
- Add the WSD-tagged collection/topics as an additional “language” in the ad-hoc task:
  - Same topics
  - Same document collection
  - Just offer an additional resource
- An additional run: with and without WSD
- Tasks: X2ENG and ENG2ENG (control)
- Extra resources needed: relevance assessment of the additional runs
Usefulness of WSD in IR/CLIR is disputed, but …
- Real experiments compared to artificial ones
- Expansion compared to just WSD
- Weighted list of senses compared to best sense only
- Controlling which words to disambiguate
- WSD technology has improved:
  - Coarser-grained senses (90% accuracy at SemEval-2007)
QA WSD pilot exercise
- Add the WSD-tagged collection/queries to the multilingual QA task:
  - Same topics
  - LA 94, GH 95 (not Wikipedia)
- In addition to the word senses we provide synonyms/translations for those senses
- Need to send one run to the multilingual QA task
  - 2 runs, with and without WSD
- Tasks: X2ENG and ENG2ENG (for QA WSD participants only)
- Extra resources needed: relevance assessment of the additional runs
QA WSD pilot exercise
- Details:
  - Wikipedia won’t be disambiguated
  - Only a subset of the main QA task will be comparable
  - In main QA, multiple answers are required
  - In addition to the normal evaluation, evaluate the first reply not coming from Wikipedia
WSD 4 AVE
- In addition to the word senses, we provide synonyms/translations for those senses
- Need to send two runs (one more than other participants):
  - With and without WSD
- Tasks: X2ENG and ENG2ENG (control)
- Additional resources: provide word sense tags for the snippets returned by QA results (automatic mapping to the original document collection)