1
Hedge Detection with Latent Features
SU Qi sukia@pku.edu.cn
CLSW2013, Zhengzhou, Henan, May 12, 2013
2
1. Introduction
The importance of information credibility
Hedge
–Hedges are “words whose job is to make things fuzzier or less fuzzy”. [Lakoff, 1972]
–They weaken or intensify the speaker’s commitment to a proposition.
–Some linguists narrow the definition to the weakening (detensifying) use only.
CoNLL-2010 shared task of hedge detection
–Detecting hedges and their scopes
3
1. Introduction
–Examples
It is possible that false allegations may be over-represented, because many true victims of child sexual abuse never tell anyone at all about what happened.
Some studies break down the level of false allegations by the age of the child.
It is suggested that parents have consistently underestimated the seriousness of their child's distress when compared to accounts of their own children.
4
1. Introduction
–sequence labeling models, e.g. conditional random fields and SVM-HMM
–binary classification
–shallow features (e.g. word, lemma, POS tags, etc.)
Hedge detection is complicated by the fact that the same word types often also have non-hedging uses:
–auxiliaries (may, might), hedging verbs (suggest, question), adjectives (probable, possible), adverbs (likely), conjunctions (or, and, either…or), nouns (speculation), etc.
As a result, shallow features can only marginally improve accuracy over a bag-of-words representation (see the labeling sketch below).
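A minimal sketch of this ambiguity, using hypothetical sentences and BIO-style cue labels (not taken from the shared-task corpus): the same word type, e.g. “may”, is a hedge cue in one sentence and a plain (permissive) auxiliary in another, so a bag-of-words feature alone cannot separate the two uses.

```python
# Hypothetical examples: the token "may" is a hedge cue in the first
# sentence but not in the second, so its label differs per context.
sent1 = ["The", "results", "may",   "indicate", "a", "regulatory", "role", "."]
tags1 = ["O",   "O",       "B-CUE", "O",        "O", "O",          "O",    "O"]

sent2 = ["Applications", "may", "be", "submitted", "in", "May", "."]
tags2 = ["O",            "O",   "O",  "O",         "O",  "O",   "O"]
```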
5
2. The Main Points in This Paper
Basic assumption:
–high-level (latent) features work better for sequence labeling
–projecting words into a lower-dimensional latent space improves generalizability to unseen items and helps disambiguate some ambiguous items
6
3. Our Work
We perform LDA training and inference by Gibbs sampling, then train the CRF model with topic IDs added as external features.
As an unsupervised model, LDA allows us to train and infer on unlabeled data, thus relaxing the restriction to the labeled dataset used for CRF training.
A sketch of how per-word topic IDs can be obtained is shown below.
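A minimal sketch, assuming gensim's LdaModel as a stand-in (gensim uses online variational Bayes rather than the Gibbs sampling used in the paper); the toy corpus, number of topics, and the word2topic helper are illustrative assumptions, not the paper's setup.

```python
# pip install gensim
from gensim import corpora
from gensim.models import LdaModel

# Hypothetical unlabeled corpus: a list of tokenized biomedical sentences.
texts = [
    ["these", "results", "may", "indicate", "a", "regulatory", "role"],
    ["the", "protein", "binds", "dna", "in", "vitro"],
    # ... more unlabeled sentences in practice
]

dictionary = corpora.Dictionary(texts)
bow_corpus = [dictionary.doc2bow(t) for t in texts]

# Train LDA (variational inference here, only as an approximation of
# the Gibbs-sampled model described in the paper).
lda = LdaModel(bow_corpus, num_topics=10, id2word=dictionary, passes=10)

# Assign each word type the topic with the highest term probability;
# this topic ID is later added as an external CRF feature (Level 3).
term_topics = lda.get_topics()           # shape: (num_topics, vocab_size)
word2topic = {dictionary[i]: int(term_topics[:, i].argmax())
              for i in range(len(dictionary))}

print(word2topic.get("may"))             # latent topic ID for "may"
```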
7
4. Corpus and Experiments
Corpus: biological scientific articles
Three different levels of feature sets
–Level 1: token; whether the token is a potential hedge cue (occurring in the pre-extracted hedge cue list) or part of a hedge cue; its context within the window [-2, 2]
–Level 2: lemma; part-of-speech tag; whether the token belongs to a chunk; whether it is a named entity (obtained with the GENIA tagger)
–Level 3: topic ID (inferred by the LDA model)
A simplified feature-construction sketch follows.
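A minimal sketch of how the three feature levels could be assembled per token and fed to a CRF, using sklearn-crfsuite as a stand-in toolkit; the cue list, word2topic mapping, and toy training sentence are hypothetical, and the paper's actual feature templates (including the GENIA tagger output) may differ.

```python
# pip install sklearn-crfsuite
import sklearn_crfsuite

# Hypothetical hedge-cue list and word->topic mapping (e.g. from the LDA step above).
HEDGE_CUES = {"may", "might", "suggest", "possible", "likely", "indicate"}
word2topic = {"may": 7, "indicate": 7, "results": 3}     # illustrative values

def token_features(sent, i):
    """Simplified Level 1-3 features for token i of a tokenized sentence."""
    word = sent[i]
    feats = {
        # Level 1: token, cue-list membership, context window [-2, 2]
        "token": word.lower(),
        "is_cue_candidate": word.lower() in HEDGE_CUES,
    }
    for offset in (-2, -1, 1, 2):
        j = i + offset
        if 0 <= j < len(sent):
            feats[f"token[{offset}]"] = sent[j].lower()
    # Level 2 would add lemma, POS, chunk and named-entity tags from the
    # GENIA tagger (omitted here for brevity).
    # Level 3: topic ID inferred by the LDA model
    feats["topic_id"] = str(word2topic.get(word.lower(), -1))
    return feats

# Toy training data with BIO cue labels (not the shared-task corpus itself).
sents = [["These", "results", "may", "indicate", "a", "role", "."]]
labels = [["O", "O", "B-CUE", "O", "O", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))
```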
8
4. Corpus and Experiments
9
5. Analysis and Conclusion
Hedge cues form a relatively “closed” set.
A significant improvement can be found between the baselines and all the other experimental settings.
Sequence labeling significantly outperforms both naïve methods.
The topics generated by LDA are effective.
Our work suggests a potential research direction: incorporating topical information for hedge detection.
10
Thank you! sukia@pku.edu.cn