1 A Joint Segmenting and Labeling Approach for Chinese Lexical Analysis. Xinhao Wang, Jiazhong Nie, Dingsheng Luo, and Xihong Wu, Speech and Hearing Research Center, Department of Machine Intelligence, Peking University. ECML PKDD 2008, Antwerp, September 18th, 2008.

2 Speech and Hearing Research Center, Peking University Cascaded Subtasks in NLP
- Chunking and parsing
- Word segmentation and named entity recognition
- POS tagging
- Word sense disambiguation
Drawbacks:
- Errors introduced by earlier subtasks propagate through the pipeline and can never be recovered in downstream subtasks.
- Information sharing among different subtasks is prohibited by the pipeline manner.

3 Speech and Hearing Research Center, Peking University Researchers' Efforts on Joint Processing
- Reranking (Shi, 2007; Sutton, 2005; Zhang, 2003): as an approximation of joint processing, it may miss the true optimal result, which often lies outside the k-best list.
- Taking multiple subtasks as a single one (Luo, 2003; Miller, 2000; Yi, 2005; Nakagawa, 2007; Ng, 2004): the obstacle is the requirement of a corpus annotated with multi-level information.
- Unified probabilistic models (Sutton, 2004; Duh, 2005): Dynamic Conditional Random Fields (DCRFs) and Factorial Hidden Markov Models (FHMMs) are trained jointly and perform all the subtasks at once; both suffer from the absence of multi-level data annotation.

4 Speech and Hearing Research Center, Peking University A Unified Framework for Joint Processing
A WFST-based approach is presented to jointly perform a cascade of segmentation and labeling tasks. It has two notable features:
- WFSTs offer a unified framework that can represent many widely used models, such as lexical constraints, n-gram language models, and Hidden Markov Models (HMMs), so a unified transducer representation of multiple knowledge sources can be achieved.
- Multiple WFSTs can be integrated into a single fully composed WFST, which makes it possible to perform a cascade of subtasks with one-pass decoding.

5 Speech and Hearing Research Center, Peking University Weighted Finite State Transducers (WFSTs)
A WFST is a generalization of the finite-state automaton that realizes a weighted relation between strings.
Composition operation (see the sketch below). Figure: example of WFST composition. Two simple WFSTs are shown in (a) and (b), in which states are represented by circles labeled with their unique numbers; bold circles represent initial states and double circles represent final states. The input label, output label, and weight of a transition t are marked as in(t):out(t)/weight(t). (c) illustrates the composition of (a) and (b).
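
Below is a minimal sketch of weighted composition in the tropical semiring (weights behave like negative log probabilities: they add along a path, and the best path has minimum weight). The dictionary-of-arcs representation, the compose function, and the toy transducers are illustrative assumptions, not code from the paper; epsilon transitions are not handled.

```python
from collections import defaultdict

# A WFST is represented here as a dict: state -> list of arcs
# (input_label, output_label, weight, next_state). State 0 is the initial
# state, and `finals` maps final states to final weights. Epsilon arcs are
# omitted in this sketch.

def compose(a, a_finals, b, b_finals):
    """Compose WFST a with WFST b by matching a's output labels to b's input labels."""
    arcs = defaultdict(list)
    finals = {}
    start = (0, 0)
    stack, seen = [start], {start}
    while stack:
        qa, qb = stack.pop()
        if qa in a_finals and qb in b_finals:
            finals[(qa, qb)] = a_finals[qa] + b_finals[qb]
        for (i1, o1, w1, na) in a.get(qa, []):
            for (i2, o2, w2, nb) in b.get(qb, []):
                if o1 == i2:                      # arcs pair up when the labels agree
                    nxt = (na, nb)
                    arcs[(qa, qb)].append((i1, o2, w1 + w2, nxt))
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
    return arcs, finals

# Toy transducers in the spirit of (a) and (b): A maps "a" to "b", B maps "b" to "c".
A, A_finals = {0: [("a", "b", 0.5, 1)]}, {1: 0.0}
B, B_finals = {0: [("b", "c", 0.3, 1)]}, {1: 0.0}
C, C_finals = compose(A, A_finals, B, B_finals)
print(C[(0, 0)])   # [('a', 'c', 0.8, (1, 1))]: the composed WFST maps "a" to "c"
```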

6 Speech and Hearing Research Center, Peking University Joint Chinese Lexical Analysis
The WFST-based approach:
- Uniform representation for multiple subtask models
- Integration of multiple models
Tasks: word segmentation, part-of-speech tagging, and person and location names recognition.

7 Speech and Hearing Research Center, Peking University Multiple Subtasks Modeling
- An n-gram language model based on word classes is adopted for word segmentation.
- Hidden Markov Models (HMMs) are adopted for both names recognition and POS tagging (a minimal decoding sketch follows below).
- In names recognition, both Chinese characters and words are taken as model units, and it is performed simultaneously with word segmentation.
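
As a concrete illustration of the HMM labeling component, here is a minimal Viterbi decoding sketch over negative log probabilities. The tag set, the transition and emission probabilities, and the example words are toy values, not the paper's actual POS or name models.

```python
import math

# Toy HMM: two tags, hand-picked transition and emission probabilities.
tags = ["N", "V"]
trans = {("<s>", "N"): 0.7, ("<s>", "V"): 0.3,
         ("N", "N"): 0.4, ("N", "V"): 0.6,
         ("V", "N"): 0.8, ("V", "V"): 0.2}
emit = {("N", "dog"): 0.6, ("V", "dog"): 0.1,
        ("N", "runs"): 0.1, ("V", "runs"): 0.7}

def viterbi(words):
    # dp[i][t] = (neg-log score of the best tagging of words[:i+1] ending in t, backpointer)
    dp = [{t: (-math.log(trans[("<s>", t)] * emit.get((t, words[0]), 1e-9)), None)
           for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            row[t] = min((dp[-1][p][0] - math.log(trans[(p, t)] * emit.get((t, w), 1e-9)), p)
                         for p in tags)
        dp.append(row)
    # Trace back from the best final tag.
    tag = min(tags, key=lambda t: dp[-1][t][0])
    path = [tag]
    for row in reversed(dp[1:]):
        tag = row[tag][1]
        path.append(tag)
    return list(reversed(path))

print(viterbi(["dog", "runs"]))   # ['N', 'V']
```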

8 Speech and Hearing Research Center, Peking University The Pipeline System vs. The Joint System
Figure: the pipeline baseline first composes the segmentation models and decodes to obtain the best segmentation, then composes the labeling models and decodes again to produce the output; the integrated analyzer composes all models and produces the output with a single decoding pass (see the sketch below).
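
A minimal sketch of the one-pass decoding step of the integrated analyzer: a shortest-path search over a fully composed WFST in the tropical semiring. The arc-dictionary representation and the toy composed transducer (two competing analyses of the same input) are illustrative assumptions, not the paper's actual models.

```python
import heapq

def shortest_path(arcs, finals, start=0):
    """Return (lowest total weight, output labels) of the best path through the WFST."""
    # Dijkstra over the tropical semiring: arc weights are non-negative -log probabilities.
    heap = [(0.0, start, [])]
    settled = {}                      # state -> (best weight, output labels so far)
    while heap:
        w, q, outs = heapq.heappop(heap)
        if q in settled:
            continue
        settled[q] = (w, outs)
        for (_, out, aw, nxt) in arcs.get(q, []):
            if nxt not in settled:
                heapq.heappush(heap, (w + aw, nxt, outs + [out]))
    # Add final weights and pick the best reachable final state.
    candidates = [(settled[q][0] + fw, settled[q][1])
                  for q, fw in finals.items() if q in settled]
    return min(candidates) if candidates else None

# Toy composed WFST: two competing joint analyses of the same input string.
arcs = {0: [("AB", "AB/nn", 1.2, 1), ("AB", "A/vv B/nn", 0.9, 1)]}
finals = {1: 0.0}
print(shortest_path(arcs, finals))   # (0.9, ['A/vv B/nn'])
```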

9 Speech and Hearing Research Center, Peking University Simulation Setup
- Corpus: People's Daily of China, annotated by the Institute of Computational Linguistics of Peking University.
- 01-05 (1998) is used as the training set; 06 (1998) is the test set. The first 2,000 sentences of the test set are taken as the development set.

System                Word Segmentation F1 (%)   POS Tagging F1 (%)   Person Names Recognition F1 (%)   Place Names Recognition F1 (%)
Pipeline Baseline     95.94                      91.06                83.31                             89.90
Integrated Analyzer   96.77                      91.81                88.51                             90.91

10 Speech and Hearing Research Center, Peking University The Statistical Significance Test
The approximate randomization approach (Yeh, 2000) is adopted to test the performance improvement produced by joint processing (a sketch of the test follows below).
- The evaluation metric tested is the F1 value of word segmentation.
- The responses of the two systems for each sentence are shuffled and randomly reassigned to the systems, and the significance level is then computed from the shuffled results.
- Ten sets of 500 sentences each are randomly selected and tested. For all ten sets, the significance level p-values are far smaller than 0.001.
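
Here is a minimal sketch of the approximate randomization test. It assumes per-sentence counts (true positives, system tokens, gold tokens) from which F1 can be recomputed after each shuffle; the counts below are fabricated toy data, not the paper's results.

```python
import random

def f1(counts):
    # Micro-averaged F1 from per-sentence (true positives, #system tokens, #gold tokens).
    tp = sum(c[0] for c in counts)
    sys_total = sum(c[1] for c in counts)
    gold_total = sum(c[2] for c in counts)
    p, r = tp / sys_total, tp / gold_total
    return 2 * p * r / (p + r)

def approx_randomization(counts_a, counts_b, shuffles=2**20, seed=0):
    # Yeh (2000) suggests a large fixed number of shuffles such as 2**20.
    rng = random.Random(seed)
    observed = abs(f1(counts_a) - f1(counts_b))
    at_least_as_large = 0
    for _ in range(shuffles):
        # For each sentence, swap the two systems' responses with probability 0.5.
        sa, sb = [], []
        for ca, cb in zip(counts_a, counts_b):
            if rng.random() < 0.5:
                ca, cb = cb, ca
            sa.append(ca)
            sb.append(cb)
        if abs(f1(sa) - f1(sb)) >= observed:
            at_least_as_large += 1
    # p-value with the add-one smoothing suggested by Yeh (2000).
    return (at_least_as_large + 1) / (shuffles + 1)

# Toy usage: 5 sentences, (tp, #system tokens, #gold tokens) per sentence.
a = [(9, 10, 10), (8, 10, 10), (9, 10, 10), (7, 10, 10), (9, 10, 10)]
b = [(8, 10, 10), (7, 10, 10), (8, 10, 10), (7, 10, 10), (8, 10, 10)]
print(approx_randomization(a, b, shuffles=10000))
```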

11 Speech and Hearing Research Center, Peking University Discussions
- This approach keeps the full search space and chooses the optimal result based on the multi-level knowledge sources, rather than reranking k-best candidates.
- The models for each subtask are trained separately, while decoding is conducted jointly. Accordingly, it avoids the necessity of a corpus annotated with multi-level information.
- When a segmentation task precedes a labeling task, the WFST-based approach naturally enforces the consistency restriction imposed by the segmentation.
- The unified framework of WFSTs makes it easy to apply the presented analyzer in other WFST-based natural language applications, such as speech recognition and machine translation.

12 Speech and Hearing Research Center, Peking University Conclusion
- In this research, within the unified framework of WFSTs, a joint processing approach is presented to perform a cascade of segmentation and labeling subtasks.
- It has been demonstrated that joint processing is superior to the traditional pipeline manner.
- The findings suggest two directions for future research: more linguistic knowledge, such as organization names recognition and shallow parsing, will be integrated into the analyzer; and since rich linguistic knowledge plays an important role in difficult tasks such as ASR and MT, incorporating our integrated analyzer may lead to promising performance improvements.

13 Speech and Hearing Research Center, Peking University Thank you for your attention!

14 Speech and Hearing Research Center, Peking University Uniform Representation (1) Lexicon WFSTs. (a) is the FSA representing an input example; (b) is the FST representing a toy dictionary.
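
As a concrete companion to the toy dictionary in (b), here is a minimal sketch of how such a lexicon transducer can be built: each word becomes a chain of arcs that reads its characters and emits the word once it is complete, and looping back to the start state lets the transducer accept any concatenation of dictionary words. The arc representation and the three-word toy dictionary are illustrative assumptions, not the paper's lexicon.

```python
from collections import defaultdict
from itertools import count

def build_lexicon_fst(dictionary):
    arcs = defaultdict(list)      # state -> [(input char, output label, weight, next state)]
    new_state = count(1)          # state 0 is both the initial and the final state
    for word in dictionary:
        state = 0
        for i, ch in enumerate(word):
            last = (i == len(word) - 1)
            nxt = 0 if last else next(new_state)   # loop back to 0 after a complete word
            # Emit epsilon until the word is complete, then emit the word itself.
            arcs[state].append((ch, word if last else "<eps>", 0.0, nxt))
            state = nxt
    return arcs

lexicon = build_lexicon_fst(["北京", "北京大学", "大学"])
for src, transitions in sorted(lexicon.items()):
    for arc in transitions:
        print(src, arc)
```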

15 Speech and Hearing Research Center, Peking University Uniform Representation (2)
The WFSA representing a toy bigram language model, where un(w1) denotes the unigram weight of w1, and bi(w1,w2) and back(w1) denote, respectively, the bigram weight of w2 and the backoff weight given the word history w1 (see the sketch below).

Classes   Description
wi        The i-th word listed in the dictionary
CNAME     Chinese person names
TNAME     Translated person names
LOC       Location names
NUM       Number expressions
LETTER    Letter strings
NON       Other non-Chinese-character strings
BEGIN     Beginnings of sentences
END       Ends of sentences
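
Below is a minimal sketch of how a backoff bigram model is commonly laid out as a WFSA: one state per word history, a shared backoff state, and arcs weighted with negative log probabilities. The toy probabilities and the two-word vocabulary are illustrative only, not the paper's class-based model.

```python
import math

# Toy backoff bigram model over a two-word vocabulary.
unigram = {"w1": 0.6, "w2": 0.4}
bigram = {("w1", "w2"): 0.7}
backoff = {"w1": 0.5, "w2": 1.0}

def nl(p):
    return -math.log(p)

# States: a shared "backoff" state plus one history state per word.
arcs = []
for w, p in unigram.items():
    # From the backoff state, reading w costs the unigram weight un(w).
    arcs.append(("backoff", w, nl(p), w))
for (h, w), p in bigram.items():
    # From history state h, reading w costs the bigram weight bi(h, w).
    arcs.append((h, w, nl(p), w))
for h, b in backoff.items():
    # Epsilon arc from history state h to the backoff state, weighted back(h).
    arcs.append((h, "<eps>", nl(b), "backoff"))

for src, label, weight, dst in arcs:
    print(f"{src} --{label}/{weight:.2f}--> {dst}")
```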

16 Speech and Hearing Research Center, Peking University Uniform Representation (3)
POS WFSTs: (a) is the WFST representing the relationship between words and POS tags; (b) is the WFSA representing a toy POS bigram.
Figure: the CNAME sub-model decomposes a Chinese person name into a surname, the first character of the given name, and the second character of the given name.

17 Speech and Hearing Research Center, Peking University The Statistical Significance Test
The approximate randomization approach (Yeh, 2000):
- The responses of the two systems for each sentence are shuffled and randomly reassigned to the systems, and the significance level is then computed from the shuffled results.
- The number of shuffles is fixed at 2^20. Since our test set contains more than 21,000 sentences, using 2^20 shuffles to approximate the 2^21000 possible shuffles is no longer reasonable. Thus, ten sets of 500 sentences each are randomly selected and tested.
- For all ten selected sets, the significance level p-values are far smaller than 0.001.

