Information Retrieval and Extraction Final Project — Relevant Sentence Detection.

Information Retrieval and Extraction Final Project — Relevant Sentence Detection

Overview
Project goal – sentence-level topic relevance detection
Teams – 1 to 4 members per team
Deadline and Demo – 6/21

Typical Ad Hoc Retrieval

Relevant Sentence Detection

Topics and Collection
TREC 2004 Novelty Track
– 10 topics and their relevance judgments for system training and parameter tuning ("TrainingTopics.txt" in the dataset)
– 3-5 testing topics for demo

Topic Example

<top>
Number: N54
<title> Firestone Tire Recall
<toptype> Event
Description: The widespread affects of the Firestone tire recall
Narrative: Opinion of the public, personal, business, or company as to the general scope of the recall (too much, not enough); as to the type of tires or vehicle that should be included; as well as any customer complaints or on any actions taken are relevant. Documents that briefly report on the recall with no enlightening details are not relevant.
<documents> APW NYT
</top>
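A minimal parsing sketch for topics in the layout shown above. The field order and regular expressions are assumptions based on this one example; adjust them to the actual contents of "TrainingTopics.txt".

```python
import re

def parse_topics(path):
    """Parse a TREC-Novelty-style topic file into a list of dicts.

    Assumes each entry follows the layout in the example above:
    <top> ... Number: ... <title> ... <toptype> ...
    Description: ... Narrative: ... <documents> ... </top>
    """
    with open(path, encoding="utf-8") as f:
        text = f.read()

    topics = []
    for block in re.findall(r"<top>(.*?)</top>", text, flags=re.S):
        def field(pattern):
            m = re.search(pattern, block, flags=re.S)
            return m.group(1).strip() if m else ""

        topics.append({
            "number":    field(r"Number:\s*(\S+)"),
            "title":     field(r"<title>\s*(.*?)\s*<toptype>"),
            "toptype":   field(r"<toptype>\s*(\S+)"),
            "desc":      field(r"Description:\s*(.*?)\s*Narrative:"),
            "narrative": field(r"Narrative:\s*(.*?)\s*<documents>"),
            "documents": field(r"<documents>\s*(.*)").split(),
        })
    return topics

# Hypothetical usage:
# for topic in parse_topics("TrainingTopics.txt"):
#     print(topic["number"], topic["title"])
```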

Relevant Document Example

<DOC>
<DOCNO> APW </DOCNO>
<DOCTYPE> NEWS STORY </DOCTYPE>
<DATE_TIME> : :19 </DATE_TIME>
<BODY>
<HEADLINE> Source: Firestone To Recall Tires </HEADLINE>
By NEDRA PICKLER
<TEXT>
Most of the Firestone ATX, ATX II and Wilderness AT tires are on Ford Explorers, the industry's top-selling SUV, but the recall will include tires on all brands of vehicles, the source said on condition of anonymity.
The recalled tires will be replaced by other Firestone tires, the source said.
</TEXT>
</BODY>
</DOC>
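Since the task is judged at the sentence level, each relevant document has to be broken into sentences. Below is a naive splitting sketch for documents in the layout shown above; if the dataset already defines an official sentence segmentation or numbering, use that instead of this heuristic.

```python
import re

def doc_sentences(raw_doc):
    """Return the sentences of the <TEXT> body of a document
    in the layout shown above, using a naive punctuation rule."""
    m = re.search(r"<TEXT>(.*?)</TEXT>", raw_doc, flags=re.S)
    if not m:
        return []
    body = " ".join(m.group(1).split())           # collapse whitespace
    parts = re.split(r"(?<=[.!?])\s+", body)      # split after ., ! or ?
    return [s for s in parts if s]

# Hypothetical usage (the file name is made up):
# with open("RelevantDocsForTrainingTopics/some_doc.txt", encoding="utf-8") as f:
#     for i, sentence in enumerate(doc_sentences(f.read()), start=1):
#         print(i, sentence)
```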

Evaluation
Precision, Recall and F-measure
– Usage of the evaluation program ("04.eval_novelty_run.pl" in the dataset)
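The official scores should come from the provided 04.eval_novelty_run.pl. The sketch below is only a sanity check of the three measures, computed over sets of retrieved and gold relevant sentence identifiers; the identifier format is an assumption, match whatever the qrels file uses.

```python
def precision_recall_f1(retrieved, relevant):
    """Set-based precision, recall, and balanced F-measure (F1).

    `retrieved` and `relevant` are collections of sentence identifiers;
    the exact identifier format is an assumption.
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    p = hits / len(retrieved) if retrieved else 0.0
    r = hits / len(relevant) if relevant else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1

# Example: 3 of 4 returned sentences are relevant; 6 relevant in total.
# precision_recall_f1({1, 2, 3, 4}, {1, 2, 3, 5, 6, 7})  ->  (0.75, 0.5, 0.6)
```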

List of Content in the Dataset
"TrainingTopics.txt" (file) – training topics for system development
"RelevantDocsForTrainingTopics" (dir) – relevant documents for each training topic
"04.qrels.relevant(TrainingTopics).txt" (file) – the set of relevant sentences for the training topics
"04.eval_novelty_run.pl" (file) – program for evaluation
"AdditionalDocuments(LATIMES)" (dir) – additional (optional) document collection
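A sketch for reading the gold relevant sentences into per-topic sets, which pairs with the precision/recall sketch above. The column layout assumed here (topic id followed by a sentence identifier on each line) is a guess, so check the actual qrels file first.

```python
def load_relevant_sentences(qrels_path):
    """Load the gold relevant sentences for each topic.

    Assumes each non-empty line holds a topic id followed by a
    document:sentence identifier; the exact column layout is an
    assumption, verify it against the qrels file in the dataset.
    """
    gold = {}                      # topic id -> set of sentence identifiers
    with open(qrels_path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue
            topic, sentence_id = parts[0], parts[1]
            gold.setdefault(topic, set()).add(sentence_id)
    return gold

# Hypothetical usage:
# gold = load_relevant_sentences("04.qrels.relevant(TrainingTopics).txt")
# print({topic: len(sents) for topic, sents in gold.items()})
```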