Selective Sampling for Information Extraction with a Committee of Classifiers Evaluating Machine Learning for Information Extraction, Track 2 Ben Hachey, Markus Becker, Claire Grover & Ewan Klein University of Edinburgh

13/04/2005 | Selective Sampling for IE with a Committee of Classifiers | Slide 2: Overview

Introduction
– Approach & Results
Discussion
– Alternative Selection Metrics
– Costing Active Learning
– Error Analysis
Conclusions

Slide 3: Approaches to Active Learning

Uncertainty Sampling (Cohn et al., 1995)
Usefulness ≈ uncertainty of a single learner
– Confidence: label examples for which the classifier is least confident
– Entropy: label examples for which the classifier's output distribution has the highest entropy

Query by Committee (Seung et al., 1992)
Usefulness ≈ disagreement within a committee of learners
– Vote entropy: disagreement between winning labels
– KL-divergence: distance between class output distributions
– F-score: distance between tag structures
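As a concrete illustration of the committee-disagreement idea, here is a minimal sketch of vote entropy over the members' hard label votes for one token (the function name and data layout are my own, not from the talk):

```python
import math

def vote_entropy(votes):
    """Entropy of the empirical distribution of the committee members'
    hard label votes for a single token: 0 for full agreement, up to
    log(k) for maximal disagreement among k members."""
    k = len(votes)
    counts = {}
    for label in votes:
        counts[label] = counts.get(label, 0) + 1
    # sum of -(p) log(p) over the observed vote fractions
    return sum(-(c / k) * math.log(c / k) for c in counts.values())
```

Tokens whose votes split evenly across labels score highest and would be prioritised for annotation.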

Slide 4: Committee

Creating a Committee
– Bagging, randomly perturbing event counts, or random feature subspaces (Abe and Mamitsuka, 1998; Argamon-Engelson and Dagan, 1999; Chawla, 2005)
  Automatic, but diversity is not ensured…
– Hand-crafted feature split (Osborne & Baldridge, 2004)
  Can ensure diversity
  Can ensure some level of independence

We use a hand-crafted feature split with a maximum entropy Markov model classifier (Klein et al., 2003; Finkel et al., 2005)
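For the bagging route mentioned above, a minimal sketch of drawing one bootstrap resample per committee member (the function name and schema are illustrative, not from the talk):

```python
import random

def bagged_training_sets(data, n_members, seed=0):
    """One bootstrap resample per committee member: sample with
    replacement, keeping each resample the same size as the original
    training data. Each member is then trained on its own resample."""
    rng = random.Random(seed)
    return [[rng.choice(data) for _ in data] for _ in range(n_members)]

committee_data = bagged_training_sets(list(range(100)), n_members=3)
```

Because the resamples overlap heavily, the resulting members can be quite similar; this is the "diversity is not ensured" caveat that motivates the hand-crafted feature split used here instead.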

Slide 5: Feature Split

Feature Set 1 (words, word shapes, document position):
– Word features: w_i, w_i-1, w_i+1; disjunction of 5 previous words; disjunction of 5 next words
– Word shapes: shape_i, shape_i-1, shape_i+1; shape_i + shape_i+1; shape_i + shape_i-1 + shape_i+1
– Prev NE: NE_i-1; NE_i-2 + NE_i-1; NE_i-3 + NE_i-2 + NE_i-1
– Prev NE + word: NE_i-1 + w_i
– Prev NE + shape: NE_i-1 + shape_i; NE_i-1 + shape_i+1; NE_i-1 + shape_i-1 + shape_i; NE_i-2 + NE_i-1 + shape_i-2 + shape_i-1 + shape_i
– Position: document position

Feature Set 2 (parts-of-speech, occurrence patterns of proper nouns):
– TnT POS tags: POS_i, POS_i-1, POS_i+1
– Prev NE: NE_i-1; NE_i-2 + NE_i-1
– Prev NE + POS: NE_i-1 + POS_i-1 + POS_i; NE_i-2 + NE_i-1 + POS_i-2 + POS_i-1 + POS_i
– Occurrence patterns: capture multiple references to NEs
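A small sketch of what the two views might look like as feature extractors, covering only a few of the features listed above (function names, the shape heuristic, and the feature keys are illustrative assumptions):

```python
import re

def word_shape(w):
    """Crude word-shape feature: collapse character classes,
    e.g. "London" -> "Xx", "IL-2" -> "X-9"."""
    s = re.sub(r"[A-Z]+", "X", w)
    s = re.sub(r"[a-z]+", "x", s)
    return re.sub(r"[0-9]+", "9", s)

def view1(words, i):
    """Feature Set 1 (sketch): surface words, shapes, document position."""
    f = {"w": words[i], "shape": word_shape(words[i]),
         "doc_pos": round(i / len(words), 2)}
    if i > 0:
        f["w-1"] = words[i - 1]
    if i + 1 < len(words):
        f["w+1"] = words[i + 1]
    return f

def view2(words, pos_tags, i):
    """Feature Set 2 (sketch): POS context (TnT tags in the talk)."""
    f = {"pos": pos_tags[i]}
    if i > 0:
        f["pos-1"] = pos_tags[i - 1]
    if i + 1 < len(words):
        f["pos+1"] = pos_tags[i + 1]
    return f
```

Training one classifier per view gives committee members that disagree for informative reasons, since neither member sees the other's evidence.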

Slide 6: KL-divergence (McCallum & Nigam, 1998)

Quantifies the degree of disagreement between the committee members' class distributions:

KL(p_m || p_mean) = Σ_c p_m(c) log( p_m(c) / p_mean(c) )

averaged over the committee members m, where p_mean is the mean of the members' output distributions.
Document-level score: average of the per-token values.
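The averaged KL-to-the-mean measure above can be sketched directly (distributions as dicts over class labels; function names are my own):

```python
import math

def averaged_kl(dists):
    """Mean KL divergence of each committee member's class distribution
    from the committee's mean distribution, for a single token."""
    k = len(dists)
    classes = dists[0].keys()
    mean = {c: sum(d[c] for d in dists) / k for c in classes}

    def kl(p, q):
        # KL(p || q) over classes with non-zero p mass
        return sum(p[c] * math.log(p[c] / q[c]) for c in classes if p[c] > 0)

    return sum(kl(d, mean) for d in dists) / k

def sentence_score(token_dists):
    """Document/sentence-level score: average of the per-token values."""
    return sum(averaged_kl(t) for t in token_dists) / len(token_dists)
```

Identical member distributions score zero; the score grows as the members' distributions pull apart, flagging the sentence for annotation.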

Slide 7: Evaluation Results

Slide 8: Discussion

Best average improvement over the baseline learning curve: 1.3 points f-score
Average relative improvement: 2.1% f-score
Absolute scores: middle of the pack

Slide 9: Overview

Introduction
– Approach & Results
Discussion
– Alternative Selection Metrics
– Costing Active Learning
– Error Analysis
Conclusions

Slide 10: Other Selection Metrics

KL-max
– Maximum per-token KL-divergence
F-complement (Ngai & Yarowsky, 2000)
– Structural comparison between analyses
– Pairwise f-score between phrase assignments
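A sketch of the f-complement as described above: pairwise F-score between the members' phrase assignments, with disagreement as its complement (span representation and function names are my own assumptions):

```python
from itertools import combinations

def f1(spans_a, spans_b):
    """Balanced F-score between two phrase assignments, each given as
    a collection of (start, end, label) spans."""
    a, b = set(spans_a), set(spans_b)
    if not a and not b:
        return 1.0  # both analyses predict no phrases: perfect agreement
    return 2 * len(a & b) / (len(a) + len(b))

def f_complement(analyses):
    """Mean pairwise (1 - F1) over all pairs of committee analyses:
    0 when all members produce identical phrase structures."""
    pairs = list(combinations(analyses, 2))
    return sum(1 - f1(x, y) for x, y in pairs) / len(pairs)
```

Unlike KL-divergence, this compares the tag structures themselves, so it also captures boundary disagreements that per-token distributions can miss.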

Slide 11: Related Work: BioNER

NER-annotated subset of the GENIA corpus (Kim et al., 2003)
– Bio-medical abstracts
– 5 entity types: DNA, RNA, cell line, cell type, protein
Used 12,500 sentences for simulated AL experiments
– Seed: 500
– Pool: 10,000
– Test: 2,000

Slide 12: Costing Active Learning

Want to compare reduction in cost (annotator effort & pay)
Plot results with several different cost metrics
– # Sentences, # Tokens, # Entities
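The three unit-cost assumptions can be made concrete with a small helper (the sentence schema and function name are illustrative, not from the talk):

```python
def batch_cost(batch, metric):
    """Cost of a selected annotation batch under the three unit-cost
    assumptions on the slide: sentences, tokens, or entities."""
    if metric == "sentences":
        return len(batch)
    if metric == "tokens":
        return sum(len(s["tokens"]) for s in batch)
    if metric == "entities":
        return sum(s["n_entities"] for s in batch)
    raise ValueError(f"unknown cost metric: {metric}")

batch = [{"tokens": ["IL-2", "binds", "."], "n_entities": 1},
         {"tokens": ["See", "text", "for", "details", "."], "n_entities": 0}]
```

The same selection run can look cheap or expensive depending on the metric, which is exactly why the following slides plot all three.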

Slide 13: Simulation Results: Sentences

Cost: 10.0 / 19.3 / 26.7
Error: 1.6 / 4.9 / 4.9

Slide 14: Simulation Results: Tokens

Cost: 14.5 / 23.5 / 16.8
Error: 1.8 / 4.9 / 2.6

Slide 15: Simulation Results: Entities

Cost: 28.7 / 12.1 / 11.4
Error: 5.3 / 2.4 / 1.9

Slide 16: Costing AL Revisited (BioNLP data)

Averaged KL does not have a significant effect on sentence length
→ Expect shorter per-sentence annotation times
Relatively high concentration of entities
→ Expect more positive examples for learning

Metric   Tokens       Entities    Ent/Tok
Random   26.7 (0.8)   2.8 (0.1)   10.5%
F-comp   25.8 (2.4)   2.2 (0.7)    8.5%
MaxKL    30.9 (1.5)   3.3 (0.2)   10.7%
AveKL    27.1 (1.8)   3.3 (0.2)   12.2%

Slide 17: Document Cost Metric (Dev)

Slide 18: Token Cost Metric (Dev)

Slide 19: Discussion

Difficult to compare metrics
– Document unit cost is not necessarily a realistic estimate of real cost
Suggestion for future evaluation:
– Use a corpus with a measure of annotation cost at some level (document, sentence, token)

Slide 20: Longest Document Baseline

Slide 21: Confusion Matrix

Token-level; B-, I- prefixes removed
Random Baseline
– Trained on 320 documents
Selective Sampling
– Trained on documents

[Confusion matrices for the random baseline vs. selective sampling, over the token-level classes O, wshm, wsnm, cfnm, wsac, wslo, cfac, wsdt, wssdt, wsndt, wscdt, cfhm]



Slide 25: Overview

Introduction
– Approach & Results
Discussion
– Alternative Selection Metrics
– Costing Active Learning
– Error Analysis
Conclusions

Slide 26: Conclusions

AL for IE with a Committee of Classifiers:
Approach using KL-divergence to measure disagreement amongst MEMM classifiers
– Classification framework: a simplification of the IE task
Average improvement: 1.3 points absolute, 2.1% relative f-score

Suggestions:
Interaction between AL methods and text-based cost estimates
– Comparison of methods will benefit from real cost information… Full simulation?

Slide 27: Thank you

The SEER/EASIE Project Team

Edinburgh: Bea Alex, Markus Becker, Shipra Dingare, Rachel Dowsett, Claire Grover, Ben Hachey, Olivia Johnson, Ewan Klein, Yuval Krymolowski, Jochen Leidner, Bob Mann, Malvina Nissim, Bonnie Webber
Stanford: Chris Cox, Jenny Finkel, Chris Manning, Huy Nguyen, Jamie Nicolson


Slide 30: More Results

Slide 31: Evaluation Results: Tokens

Slide 32: Evaluation Results: Entities

Slide 33: Entity Cost Metric (Dev)

Slide 34: More Analysis

Slide 35: Boundaries: Acc+class / Acc-class

Round   Random          Selective
–       – / –           – / –
–       – / –           – / –
–       – / –           – / 0.975

Slide 36: Boundaries: Full/Left/Right F-score

Round   Random (F/L/R)      Selective (F/L/R)    ∆ (F/L/R)
–       – / 0.593 / –       – / 0.594 / –        – / 0.001 / –
–       – / 0.648 / –       – / 0.643 / –        – / -0.005 / –
–       – / 0.669 / –       – / 0.684 / –        – / 0.015 / 0.013