1
Partial Prebracketing to Improve Parser Performance John Judge NCLT Seminar Series 7th December 2005
2
Overview
–Background and Motivation
–Prebracketing
–NE/MWE Markup
–Labelled Bracketing Constituent Markup
–NE/MWE + LBCM Combined
–Grammars Compared
–Conclusions
3
Background and Motivation
Parse-annotated corpora are crucial for developing machine-learning- and statistics-based parsing resources
Large treebanks are available for the major languages
For other languages there is a lack of such resources for grammar induction
4
Background and Motivation
Treebank construction is usually semi-automatic (Penn Treebank, NEGRA):
–Raw text is parsed
–Annotator corrects parser output
Proposed new method:
–Text is pre-processed to help the parser
–Pre-processed text is parsed
–Annotator corrects parser output
Hopefully the output will be of better quality and correction will be quicker and easier for the annotator
5
Relevance to my work
Previous work on question analysis has identified the need for a suitable training corpus
Plan to use this technique to develop a question treebank
The method is general, so it can be applied to other areas/languages
6
Prebracketing
Marking up the input text with information that will help the parser parse the sentence properly:
–Named Entities
–Multi-Word Expressions
–Constituents (VP, PP)
Prebracketing can be done automatically (NE, MWE) or manually (constituents)
The parser (LoPar, Schmid (2000)) will respect this markup when parsing
7
Prebracketing
Robert Erwin, president of Biosource, called Plant Genetic’s approach `` Interesting '' and `` novel, '' and `` complementary rather than competitive. ''
8
Prebracketing: Named Entities
(NE Robert Erwin), president of (NE Biosource), called (NE Plant Genetic)’s approach `` Interesting '' and `` novel, '' and `` complementary rather than competitive. ''
9
Prebracketing: Multi-Word Expressions
Robert Erwin, president of Biosource, called Plant Genetic’s approach `` Interesting '' and `` novel, '' and `` complementary (MWE rather than) competitive. ''
10
Prebracketing: Constituents
Robert Erwin, president of Biosource, (VP called Plant Genetic’s approach `` Interesting '' and `` novel, '' and `` complementary rather than competitive). ''
11
Named Entities
Names of people, places, companies, products, etc.
Similar to proper nouns
Internal structure isn't really important
Can be treated as a single lexical unit
12
Parsing NEs
The parser doesn't normally output NEs
Retrain on an NE-annotated treebank:
–Add NE annotation to sections 02-21 of the Penn-II Treebank to use as training data
–Add NE annotation to section 23 as the test set
13
Transforming the Corpus
Use a named entity recogniser, LingPipe (http://alias-i.com/lingpipe/), to pick out the entities
Use a (slightly hacked) NE transformation routine in the LFG Annotation Algorithm to transform the trees
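The transformation step can be sketched as a pass over the bracketed tree string: each recognised entity's tagged words are collapsed into a single (NE …) unit, and a unary NP wrapped around the new NE node is removed. This is an illustrative stand-in for the hacked routine in the LFG Annotation Algorithm, not the actual code; the function name and the regex approach are assumptions (the real routine works over tree structures, not strings):

```python
import re

def mark_entities(tree_str, entities):
    """Collapse each recognised entity into a single (NE ...) unit.

    tree_str : a Penn-style bracketed parse, as a string
    entities : entity strings produced by the NE recogniser
    (Hypothetical sketch; the real routine walks the tree structure.)
    """
    for ent in entities:
        # Match the entity's words under arbitrary POS tags, e.g.
        # "(NNP Robert) (NNP Erwin)" for the entity "Robert Erwin".
        pat = r"\s*".join(r"\(\w+ %s\)" % re.escape(w) for w in ent.split())
        tree_str = re.sub(pat, "(NE %s)" % ent, tree_str)
    # Drop a unary NP wrapped around a freshly created NE node.
    return re.sub(r"\(NP \((NE [^()]+)\)\)", r"(\1)", tree_str)

s = ("(NP-SBJ (NP (NNP Robert) (NNP Erwin)) (, ,) "
     "(NP (NN president) (PP (IN of) (NP (NNP Biosource)))) (, ,))")
print(mark_entities(s, ["Robert Erwin", "Biosource"]))
# (NP-SBJ (NE Robert Erwin) (, ,) (NP (NN president) (PP (IN of) (NE Biosource))) (, ,))
```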
14
Example Tree Before Transformation

(NP-SBJ (NP (NNP Robert) (NNP Erwin)) (, ,)
        (NP (NN president) (PP (IN of) (NP (NNP Biosource)))) (, ,))
15
Example Tree After Transformation

(NP-SBJ (NE Robert Erwin) (, ,)
        (NP (NN president) (PP (IN of) (NE Biosource))) (, ,))
16
Example Prebracketed Input
Parser input is generated by systematically stripping markup from the gold-standard trees, leaving in only the NE markup:

((S (NP-SBJ (NE Robert Erwin) … (JJ competitive)))))) (. .) ('' ''))

(NE Robert Erwin), president of (NE Biosource), called (NE Plant Genetic)’s approach `` Interesting '' and `` novel, '' and `` complementary rather than competitive. ''
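The stripping step can be sketched as a small stack walk over the bracketed gold tree: every bracket is dropped except those whose label is in a keep-set (NE here). The function name and the string tokenisation are illustrative assumptions, not the actual implementation:

```python
def strip_markup(tree_str, keep=("NE",)):
    """Flatten a bracketed tree to its tokens, keeping only brackets
    whose label is in `keep` (sketch; assumes a well-formed tree)."""
    toks = tree_str.replace("(", " ( ").replace(")", " ) ").split()
    out, stack, i = [], [], 0
    while i < len(toks):
        if toks[i] == "(":
            label = toks[i + 1]
            stack.append(label in keep)      # remember keep/drop decision
            if label in keep:
                out.append("(" + label)
            i += 2                           # skip past the label
        elif toks[i] == ")":
            if stack.pop():                  # close only kept brackets
                out.append(")")
            i += 1
        else:
            out.append(toks[i])              # a leaf word
            i += 1
    return " ".join(out).replace(" )", ")")

tree = "(S (NE Robert Erwin) (, ,) (NP (NN president) (PP (IN of) (NE Biosource))))"
print(strip_markup(tree))
# (NE Robert Erwin) , president of (NE Biosource)
```

With keep=("MWE",) or keep=("VP",) the same stripper produces the MWE and LBCM inputs shown on the later slides.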
17
Results for NE Parsing (Section 23)

            Baseline   Marked up
Coverage    99.42      99.63
F-Score     62.95      66.09
18
Multi-Word Expressions
Short expressions that are interpreted as a single unit, e.g. with respect to, vice versa, all of a sudden, according to, as well as, …
Can correspond to a number of syntactic categories
Treat as a single lexical unit and ignore internal structure
19
Parsing MWEs
The parser doesn't normally output MWEs
As with NEs, Penn-II is transformed to contain MWEs
The parser is retrained on the MWE-annotated treebank
Tested on the MWE-annotated version of section 23
20
Transforming the Corpus
Use a list of multi-word expressions from the Stanford University Multi-Word Expression Project (http://mwe.stanford.edu/)
Transform the corpus using an (even more hacked) version of the NE transformation routine in the LFG Annotation Algorithm
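The list lookup can be sketched as a greedy longest-match scan over the token stream. The tiny MWES sample and the function name are assumptions for illustration, not the Stanford list or the actual routine:

```python
# A tiny stand-in for the Stanford MWE list.
MWES = {("rather", "than"), ("as", "well", "as"), ("according", "to"),
        ("all", "of", "a", "sudden")}
MAXLEN = max(len(m) for m in MWES)

def mark_mwes(tokens):
    """Merge any listed multi-word expression into one (MWE ...) token,
    preferring the longest match at each position."""
    out, i = [], 0
    while i < len(tokens):
        for n in range(min(MAXLEN, len(tokens) - i), 1, -1):
            if tuple(t.lower() for t in tokens[i:i + n]) in MWES:
                out.append("(MWE %s)" % " ".join(tokens[i:i + n]))
                i += n
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

print(mark_mwes("complementary rather than competitive .".split()))
# ['complementary', '(MWE rather than)', 'competitive', '.']
```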
21
Example Tree Before Transformation

(PP (RB rather) (IN than) (ADJP (JJ competitive)))
22
Example Tree After Transformation

(PP (MWE rather than) (ADJP (JJ competitive)))
23
Example Prebracketed Input
Parser input is generated by systematically stripping markup from the gold-standard trees, leaving in only the MWE markup:

((S (NP-SBJ (NP (NNP Robert) (NNP Erwin)) … (JJ competitive)))))) (. .) ('' ''))

Robert Erwin, president of Biosource, called Plant Genetic’s approach `` Interesting '' and `` novel, '' and `` complementary (MWE rather than) competitive. ''
24
Results for MWE Parsing (Section 23)

            Baseline   Marked up
Coverage    99.42      99.55
F-Score     63.88      64.26
25
NE and MWE Markup Combined
Marking up NEs and MWEs individually has yielded an improvement on the baseline
Try doing both together:
–The treebank is transformed as before, but marking up both NEs and MWEs
–The prebracketed input can also contain both NEs and MWEs
26
Results for NE/MWE Combined Parsing

           Baseline   MWE     NE      NE&MWE
Coverage   99.67      99.67   99.67   99.67
F-Score    65.11      65.13   65.40   65.53

The MWE score is almost 1% better than when using MWEs alone
The NE score is over half a percent worse than when using NEs alone
Coverage is up slightly on both runs
27
Labelled Bracketing Constituent Markup
Instead of marking up lexicalised units, sentence constituents (VP, PP) are marked up
No transformations are necessary, as the original treebank trees already produce these constituents in the output
28
Generating LBCM Input
Systematically strip markup from the gold-standard trees, leaving in only the selected markup:

((S (NP-SBJ (NP (NNP Robert) (NNP Erwin)) … (JJ competitive)))))) (. .) ('' ''))

Robert Erwin, president of Biosource, (VP called Plant Genetic’s approach `` Interesting '' and `` novel, '' and `` complementary rather than competitive). ''

This simulates "ideal" human-generated input
29
Results for LBCM

           Baseline   Top VP   Bottom VP   Top PP   Bottom PP
Coverage   99.87      99.11    98.98       99.55    98.55
F-Score    67.19      71.16    69.74       70.48    68.44
30
Combining NE, MWE and LBCM Prebracketing
Take the corpora from the NE and MWE combination and preprocess the input as for LBCM:

(NE Robert Erwin), president of (NE Biosource), (VP called (NE Plant Genetic)’s approach `` Interesting '' and `` novel, '' and `` complementary (MWE rather than) competitive). ''
31
Results for all 3

Prebracketed        Coverage   F-Score
None (Baseline)     99.67      65.11
MWE                 99.67      65.13
NE                  99.67      65.40
MWE & NE            99.67      65.53
NE & Top VP         99.13      70.06
MWE & Top VP        99.21      69.63
NE, MWE & Top VP    99.21      69.91
32
Looking at the Grammar/Lexicon
Expect to reduce grammar/lexicon size by conflating NEs and MWEs
Compare 4 grammars and lexicons:
–Plain PCFG
–NE PCFG
–MWE PCFG
–NE/MWE PCFG
33
Comparison

                  Vanilla   NE      MWE     Combi
Rules             15427     16253   16791   17661
NP Rules          6456      6754    6544    6853
NP … NE …         -         1515    -       1520
NP … MWE …        -         -       198     197
… NE              -         2123    -       2549
… MWE             -         -       1821    1793
Lexical Entries   45357     51257   45622   51521
NEs in Lex        -         14773   -
MWEs in Lex       -         -       323     318
34
Why Are the Grammars Growing?
The expectation was that the grammar/lexicon would shrink; instead they are growing in size
Caused by the lexicon:
–Many MWEs appear both as a single MWE entry and as their individual words
–Likewise for NEs
–Introducing new categories means correspondingly more rules in the grammar
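The effect can be reproduced with a toy lexicon count: because an MWE's words still occur on their own elsewhere in the corpus, marking the MWE up adds a (word, tag) entry without removing any. The data below are invented for illustration, not Penn Treebank counts:

```python
def lexicon_size(tagged_sents):
    """Number of distinct (word, tag) pairs, as in the lexicon comparison."""
    return len({pair for sent in tagged_sents for pair in sent})

plain = [[("rather", "RB"), ("than", "IN"), ("competitive", "JJ")],
         [("rather", "RB"), ("costly", "JJ")],
         [("more", "JJR"), ("than", "IN"), ("ten", "CD")]]

# After markup only the first sentence's "rather than" is merged; the
# words survive independently in the other sentences, so the lexicon
# gains the MWE entry without losing "rather" or "than".
marked = [[("rather than", "MWE"), ("competitive", "JJ")],
          [("rather", "RB"), ("costly", "JJ")],
          [("more", "JJR"), ("than", "IN"), ("ten", "CD")]]

print(lexicon_size(plain), lexicon_size(marked))
# 6 7
```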
35
Multi-Word Expression Example all of a sudden –all of a sudden MWE 2 –All of a sudden MWE 2 –all DT 800 PDT 182 RB 36 –of IN 22741 RP 2 –a DT 19895 FW 6 LS 1 NNP 2 SYM 10 –sudden JJ 32
36
Named Entity Example Winston Churchill –Winston Churchill NE 1 –Churchill NE 2 George Bush –George Bush NE 26 –George NE 5 –Bush NE 344
37
Knock-on Effects
Grammar/lexicon size is growing, so parse time is increasing
A likely cause of the gains in precision, recall and f-score being small:
–The NE/MWE analysis of a phrase is not the most likely according to the grammar
–Consequently a less likely parse is output
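Why the markup carries a probability penalty: under a relative-frequency PCFG estimate, the new NE/MWE rules are observed far less often than the ordinary analyses of the same words, so any parse that uses them is assigned a lower probability. A minimal sketch with invented rule counts:

```python
# Relative-frequency estimates for two competing NP expansions
# (invented counts, for illustration only).
counts = {
    ("NP", ("NNP", "NNP")): 800,   # the ordinary analysis of a name
    ("NP", ("NE",)): 2,            # the rarely observed NE rule
}
total = sum(counts.values())
probs = {rule: n / total for rule, n in counts.items()}

# The NE analysis is heavily penalised relative to the ordinary one,
# so parses built from it rank low in the grammar's eyes.
print(probs[("NP", ("NE",))] < probs[("NP", ("NNP", "NNP"))])
```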
38
Summary
Generating NE/MWE PCFG grammars in this way is possible
Unexpectedly, these grammars are larger than plain PCFG grammars
Results show that something can be gained from prebracketing input
However, even the best result, 71.16 (prebracketing the topmost VP), is considerably less than history-based parsers (Charniak, Collins, Bikel)
39
Further Work
Better NE recognition
–More sophisticated transformation method
Different/larger MWE lists
Experiment with history-based parsers for better results
40
Thanks Any questions or comments?