Improving a Pipeline Architecture for Shallow Discourse Parsing
Yangqiu Song, Haoruo Peng, Parisa Kordjamshidi, Mark Sammons, and Dan Roth
Cognitive Computation Group, University of Illinois

Problem

Text that humans produce – documents, letters, e-mails, reports, news articles – has a coherent structure that imposes order over individual sentences. It can be thought of as relations between sentences, paragraphs, section headers, and titles that a human reader perceives in terms of explicit or implicit connections relating the information contained in each. This discourse structure is complex, and inferring it requires deep understanding of text. The task of shallow discourse parsing simplifies the problem, with the goal of establishing first steps toward inferring the deeper structure: shallow discourse parsers identify and label local discourse relations, i.e., statements of interest and the connections between statements that occur within one or two sentences of each other.

Connectives may be explicit, in which case the connection is expressed as one or more words (such as "next", "if... then", "as a result", "in contrast"). They may also be implicit, in which case the reader infers them from the semantics of neighboring statements. The Shallow Discourse Parsing shared task identifies 15 possible types of discourse connection and requires participating systems to identify the extent of the statements (the arguments) that are connected. This means leaving out irrelevant content such as attribution clauses.

Discourse Structure

In the examples below, the first bracketed span is Argument 1 and the second is Argument 2; the attribution clauses ("He added that", "According to The Times") are excluded from both arguments.

Explicit connective (sense Comparison.Concession):
He added that "[having just one firm do this isn't going to mean a hill of beans]. But [if this prompts others to consider the same thing, then it may become much more important]."

Implicit connective (sense Contingency.Cause.Result):
According to The Times, [Barings Bank was hammered by losses incurred by rogue trader Nick Leeson]. [The bank had to file for bankruptcy].

System Architecture

The input is raw text; the evaluation used the Penn Treebank and a new, blind test set. Documents flow through the following pipeline:

Preprocessor: the text is processed with a part-of-speech tagger, a chunker, and syntactic constituent and dependency parsers.
Explicit Connective Identifier: a list of 99 connective phrases is used to identify candidates, which are classified using syntactic path features.
Argument Position Identifier: for each predicted connective, a classifier predicts whether the first argument is in the previous sentence or the same sentence.
Argument Labeler: argument candidates are generated from syntactic parse constituents, and the candidates are scored and ranked using a machine-learned model.
Explicit Connective Classifier: a multi-class classifier using lexico-syntactic features labels the explicit connectives identified earlier.
Implicit Connective Classifier: candidate sentence pairs are considered for implicit connectives; a multi-class classifier using lexical features from the two sentences predicts either "no connective" or one of the possible connective labels.
Attribution Identifier: an attribution clause identifier labels candidate segments of arguments for exclusion from the final output.

Illustrative sketches of two of these components follow.
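To make the Explicit Connective Identifier concrete, here is a minimal sketch. The small CONNECTIVES set stands in for the 99-phrase list, and the per-token constituency paths are hypothetical inputs; the poster specifies nothing beyond "syntactic path features", so every name below is an illustrative assumption rather than the authors' implementation.

```python
# Illustrative sketch only: spot connective candidates by phrase matching,
# then attach a toy "syntactic path" feature. CONNECTIVES stands in for
# the 99-phrase list; the actual list and classifier are not shown here.

CONNECTIVES = {"but", "because", "however", "as a result", "in contrast"}
MAX_PHRASE_LEN = 3  # longest phrase we try to match

def connective_candidates(tokens):
    """Yield (start, end) token spans whose text matches a known phrase,
    preferring the longest match at each position."""
    lowered = [t.lower() for t in tokens]
    for i in range(len(lowered)):
        for n in range(min(MAX_PHRASE_LEN, len(lowered) - i), 0, -1):
            if " ".join(lowered[i:i + n]) in CONNECTIVES:
                yield (i, i + n)
                break

def path_feature(token_paths, span):
    """Toy constituency-path feature: node labels from the candidate's
    first token up to the root, e.g. 'path=NP^S^TOP'."""
    return "path=" + "^".join(token_paths[span[0]])

tokens = "She fell ill . As a result , she stayed home .".split()
# Hypothetical per-token root paths, as a constituency parser might provide.
token_paths = [["NP", "S", "TOP"]] * len(tokens)
for start, end in connective_candidates(tokens):
    print(tokens[start:end], path_feature(token_paths, (start, end)))
```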
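The Implicit Connective Classifier step can be sketched the same way. The poster says only "a multi-class classifier using lexical features from the two sentences"; the word-pair features and scikit-learn logistic regression below are my assumptions, not the authors' learner, and the "NoConn" label plays the role of the "no connective" prediction.

```python
# Minimal sketch of an implicit-connective sense classifier over adjacent
# sentence pairs. Word-pair features and the learner are assumptions;
# requires scikit-learn.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def word_pair_features(arg1, arg2):
    """Binary cross-product features: one per (Arg1 word, Arg2 word) pair."""
    return {w1 + "|" + w2: 1.0
            for w1 in arg1.lower().split()
            for w2 in arg2.lower().split()}

# Toy training data: (Arg1, Arg2, sense). "NoConn" marks pairs with no
# discourse connection, mirroring the "no connective" label on the poster.
train = [
    ("the bank lost millions", "it filed for bankruptcy", "Contingency.Cause"),
    ("he praised the plan", "she rejected it", "Comparison.Contrast"),
    ("the sun rose", "traffic was light", "NoConn"),
]

vec = DictVectorizer()
X = vec.fit_transform([word_pair_features(a1, a2) for a1, a2, _ in train])
y = [sense for _, _, sense in train]
clf = LogisticRegression(max_iter=1000).fit(X, y)

# A new pair sharing many word pairs with the first training example.
feats = word_pair_features("the bank lost money", "it filed for bankruptcy")
print(clf.predict(vec.transform([feats]))[0])  # expected: Contingency.Cause
```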
Results

We present results for the individual components (based on cross-validation over the training corpus) and for the components and the complete system on the blind test corpus.

Table 1. Component-level performance measured on training data (10-fold cross-validation). '*' indicates that the training data came from a different discourse data set.

Component                        Prec    Rec     F1
Explicit Connective Identifier   92.97   93.91   93.44
Argument Positions               --      --      98.15
Argument 1                       64.41   64.95   64.68
Argument 2                       87.06   86.06   86.56
Explicit Connective Sense        --      --      83.18
Implicit Connective Sense        --      --      34.58
Attribution Identification*      82.94   58.02   68.27

Table 2. Component-level and system performance measured on the blind test set.

Component                        Prec    Rec     F1
Explicit Connective Identifier   89.11   86.87   87.98
Argument 1                       49.52   51.61   50.55
Argument 2                       66.83   69.64   68.21
Argument 1 & Argument 2          40.48   42.18   41.31
Connective Sense (all)           21.02   16.81   16.49
Parser (overall)                 17.62   18.36   17.98

Error analysis suggests that argument boundaries are problematic when the gold arguments omit some internal content, and that sense classification requires better abstraction over argument content.

Explorations and Innovations

Our system follows Lin et al. (2014) quite closely in the design of the pipeline architecture and the basic features. We experimented with a number of enhancements to the original model.

Function Words: we experimented with an abstraction of the constituency parse tree path features that uses function words rather than node labels where appropriate. This improved the performance of the argument classifier.

Polarity Features: following Peng et al. (2015), we used polarity features to improve the performance of the implicit connective classifier. Polarity features relate the positive/negative sentiment of the predicates of consecutive sentences in a way that accounts for their context.

Entity Context Features: we tried to use general statistical context features, based on correlations between entities and the verbs, nouns, and other entities in their immediate context, to improve implicit connective prediction. However, we were not able to improve overall performance on this task.

Joint Inference: we modeled global characteristics of sequences of connective labels by using neighboring label predictions as features when classifying connectives. The implementation used these features at test time but not during training (i.e., joint inference, not joint learning); a sketch of this two-pass scheme appears after the references. However, we were not able to improve system performance with this approach.

References

Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. "A PDTB-Styled End-to-End Discourse Parser". Journal of Natural Language Engineering, 2014.
Haoruo Peng, Daniel Khashabi, and Dan Roth. "Solving Hard Coreference Problems". Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2015.

This work is supported by DARPA, NIGMS, and the Multimodal Information Access & Synthesis Center at UIUC.
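As referenced from the Joint Inference paragraph above, here is a minimal sketch of what test-time-only use of neighboring labels could look like. The two-pass scheme, the feature names prev_label and next_label, and the stub base model are all assumptions for illustration; the poster gives only the high-level idea.

```python
# Sketch of joint inference at test time (not joint learning): the base
# classifier is trained on local features only; at prediction time a
# second pass adds the neighbors' first-pass labels as extra features.

def predict_sequence(base_predict, feature_dicts):
    """base_predict(feats) -> label; feature_dicts holds one feature
    dict per connective, in document order."""
    # Pass 1: purely local predictions, exactly what training saw.
    first = [base_predict(f) for f in feature_dicts]
    # Pass 2: re-predict with neighboring labels appended as features.
    final = []
    for i, feats in enumerate(feature_dicts):
        enriched = dict(feats)
        enriched["prev_label"] = first[i - 1] if i > 0 else "<START>"
        enriched["next_label"] = first[i + 1] if i + 1 < len(first) else "<END>"
        final.append(base_predict(enriched))
    return final

# Tiny demo with a stub base model that ignores the unseen label features,
# much as a linear model with a fixed training-time vocabulary would.
stub = lambda f: "Comparison" if f.get("has_but") else "Expansion"
print(predict_sequence(stub, [{"has_but": 1}, {}, {"has_but": 1}]))
# -> ['Comparison', 'Expansion', 'Comparison']
```

One hedged observation on the design: since the base model never sees prev_label or next_label during training, those features carry no learned weight at test time, which may be one reason this experiment did not help; the poster itself does not analyze the cause.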