Resource Acquisition for Syntax-based MT from Parsed Parallel Data
Alon Lavie, Alok Parlikar and Vamshi Ambati
Language Technologies Institute, Carnegie Mellon University

Research Goals
Long-term research agenda (since 2000) focused on developing a unified framework for MT that addresses the core fundamental weaknesses of previous approaches:
– Representation – explore richer formalisms that can capture complex divergences between languages
– Ability to handle morphologically complex languages
– Methods for automatically acquiring MT resources from available data and combining them with manual resources
– Ability to address both resource-rich and resource-poor scenarios
Main research funding sources: NSF (AVENUE and LETRAS projects) and DARPA (GALE)

CMU Statistical Transfer (Stat-XFER) MT Approach
Integrate the major strengths of rule-based and statistical MT within a common framework:
– Linguistically rich formalism that can express complex and abstract compositional transfer rules
– Rules can be written by human experts and also acquired automatically from data
– Easy integration of morphological analyzers and generators
– Word and syntactic-phrase correspondences can be automatically acquired from parallel text
– Search-based decoding from statistical MT adapted to find the best translation within the search space: multi-feature scoring, beam search, parameter optimization, etc.
– Framework suitable for both resource-rich and resource-poor language scenarios

Stat-XFER MT Systems
General Stat-XFER framework under development for the past seven years.
Systems so far:
– Chinese-to-English
– Hebrew-to-English
– Urdu-to-English
– German-to-English
– French-to-English
– Hindi-to-English
– Dutch-to-English
– Mapudungun-to-Spanish
In progress or planned:
– Arabic-to-English
– Brazilian Portuguese-to-English
– Inupiaq-to-English
– Hebrew-to-Arabic
– Quechua-to-Spanish
– Turkish-to-English

Stat-XFER Framework
Source Input → Preprocessing → Morphology → Transfer Engine (Transfer Rules, Bilingual Lexicon) → Translation Lattice → Second-Stage Decoder (Language Model, Weighted Features) → Target Output
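
To make the data flow concrete, here is a minimal Python sketch of the runtime pipeline. The stage names mirror the diagram above, but the function bodies, signatures, and the toy lexicon are illustrative placeholders, not the actual Stat-XFER implementation.

def preprocess(source_text):
    # Tokenization / normalization of the source input.
    return source_text.split()

def analyze_morphology(tokens):
    # Morphological analysis; a no-op placeholder here.
    return tokens

def transfer(tokens, transfer_rules, bilingual_lexicon):
    # The transfer engine applies lexicon entries and transfer rules to build
    # a lattice of scored partial translations: (start, end, hypothesis, score).
    lattice = []
    for i, tok in enumerate(tokens):
        for translation in bilingual_lexicon.get(tok, [tok]):
            lattice.append((i, i + 1, translation, 1.0))
    # ... transfer-rule applications would add larger, composed spans here ...
    return lattice

def decode(lattice, language_model, feature_weights):
    # Second-stage decoder: search the lattice for the best-scoring sequence
    # of arcs; this placeholder simply concatenates the word-level arcs in order.
    arcs = sorted(lattice, key=lambda arc: arc[0])
    return " ".join(arc[2] for arc in arcs)

def stat_xfer_translate(text, rules, lexicon, lm=None, weights=None):
    tokens = analyze_morphology(preprocess(text))
    return decode(transfer(tokens, rules, lexicon), lm, weights)

print(stat_xfer_translate("hola mundo", rules=[],
                          lexicon={"hola": ["hello"], "mundo": ["world"]}))
# -> hello world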

Transfer Engine
Example run: Hebrew source input בשורה הבאה → English output "in the next line". Preprocessing and Morphology feed the Source Input to the Transfer Engine; the Decoder then scores the resulting lattice with the Language Model + Additional Features.

Transfer Rules:
{NP1,3}
NP1::NP1 [NP1 "H" ADJ] -> [ADJ NP1]
((X3::Y1) (X1::Y2)
 ((X1 def) = +)
 ((X1 status) =c absolute)
 ((X1 num) = (X3 num))
 ((X1 gen) = (X3 gen))
 (X0 = X1))

Translation Lexicon:
N::N |: ["$WR"] -> ["BULL"]
((X1::Y1) ((X0 NUM) = s) ((Y0 lex) = "BULL"))
N::N |: ["$WRH"] -> ["LINE"]
((X1::Y1) ((X0 NUM) = s) ((Y0 lex) = "LINE"))

Translation Output Lattice (fragment):
(0 1 (1 1 (2 2 (1 2 "THE (0 2 "IN (0 4 "IN THE NEXT

Transfer Rule Formalism
– Type information
– Part-of-speech / constituent information
– Alignments
– x-side constraints
– y-side constraints
– xy-constraints, e.g. ((Y1 AGR) = (X1 AGR))

; SL: the old man, TL: ha-ish ha-zaqen
NP::NP [DET ADJ N] -> [DET N DET ADJ]
(
 (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
 ((X1 AGR) = *3-SING)
 ((X1 DEF) = *DEF)
 ((X3 AGR) = *3-SING)
 ((X3 COUNT) = +)
 ((Y1 DEF) = *DEF)
 ((Y3 DEF) = *DEF)
 ((Y2 AGR) = *3-SING)
 ((Y2 GENDER) = (Y4 GENDER))
)
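
As an illustration only (not the system's actual rule reader), the anatomy of such a rule can be pictured as a small data structure. The class and field names below are invented for this sketch; the instance reproduces the English-to-Hebrew example above.

from dataclasses import dataclass, field

@dataclass
class TransferRule:
    # Type information: source and target constituent types (e.g. NP::NP).
    src_type: str
    tgt_type: str
    # Part-of-speech / constituent sequences on each side.
    src_rhs: list
    tgt_rhs: list
    # Alignments as 1-based (source_index, target_index) pairs.
    alignments: list = field(default_factory=list)
    # x-side, y-side and xy constraints, kept as raw strings here.
    constraints: list = field(default_factory=list)

rule = TransferRule(
    src_type="NP", tgt_type="NP",
    src_rhs=["DET", "ADJ", "N"],
    tgt_rhs=["DET", "N", "DET", "ADJ"],
    alignments=[(1, 1), (1, 3), (2, 4), (3, 2)],
    constraints=[
        "((X1 AGR) = *3-SING)", "((X1 DEF) = *DEF)",
        "((X3 AGR) = *3-SING)", "((X3 COUNT) = +)",
        "((Y1 DEF) = *DEF)", "((Y3 DEF) = *DEF)",
        "((Y2 AGR) = *3-SING)", "((Y2 GENDER) = (Y4 GENDER))",
    ],
)
print(rule.src_type, rule.src_rhs, "->", rule.tgt_rhs)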

MT Resource Acquisition in Resource-rich Scenarios
Scenario: significant amounts of parallel text at the sentence level are available
– Parallel sentences can be word-aligned and parsed (at least on one side, ideally on both sides)
Goal: acquire both broad-coverage translation lexicons and transfer-rule grammars automatically from the data
Syntax-based translation lexicons:
– Broad-coverage constituent-level translation equivalents at all levels of syntactic granularity
– Can serve as the elementary building blocks for transfer trees constructed at runtime using the transfer rules

Acquisition Process
Automatic process for extracting syntax-driven rules and lexicons from sentence-parallel data (a control-flow sketch follows below):
1. Word-align the parallel corpus (GIZA++)
2. Parse the sentences independently for both languages
3. Run our new PFA Constituent Aligner over the parsed sentence pairs
4. Extract all aligned constituents from the parallel trees
5. Extract all derived synchronous transfer rules from the constituent-aligned parallel trees
6. Construct a "database" of all extracted parallel constituents and synchronous rules with their frequencies and model them statistically (assign them maximum-likelihood probabilities)
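
The six steps compose into a simple offline driver. The sketch below shows only the control flow; all names and signatures are assumptions made for illustration, and the actual tools (the GIZA++ wrapper, the parsers, the PFA aligner, the extractors) are supplied as callables.

from collections import Counter

def acquire_resources(parallel_corpus, word_align, parse_src, parse_tgt,
                      pfa_align, extract_constituents, extract_rules):
    # Steps 1-6 as a control-flow skeleton; every callable is an external component.
    constituent_db, rule_db = Counter(), Counter()
    alignments = word_align(parallel_corpus)                           # 1. word-align (GIZA++)
    for (src, tgt), links in zip(parallel_corpus, alignments):
        src_tree, tgt_tree = parse_src(src), parse_tgt(tgt)            # 2. parse both sides
        node_pairs = pfa_align(src_tree, tgt_tree, links)              # 3. PFA constituent aligner
        constituent_db.update(extract_constituents(node_pairs))        # 4. aligned constituents
        rule_db.update(extract_rules(src_tree, tgt_tree, node_pairs))  # 5. synchronous rules
    total = sum(rule_db.values()) or 1                                 # 6. relative-frequency scores
    rule_scores = {rule: n / total for rule, n in rule_db.items()}
    return constituent_db, rule_scores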

PFA Constituent Node Aligner
Input: a bilingual pair of parsed and word-aligned sentences
Goal: find all sub-sentential constituent alignments between the two trees which are translation equivalents of each other
Equivalence Constraint: a pair of constituents <S, T> are considered translation equivalents if:
– All words in the yield of S are aligned only to words in the yield of T (and vice versa)
– If S has a sub-constituent that is aligned to a constituent T', then T' must be a sub-constituent of T (and vice versa)
The algorithm is a bottom-up process starting from the word level, marking nodes that satisfy the constraints.

PFA Node Alignment Algorithm
– Each node stores a value; all nodes are initialized with the value 1.
– Each word-to-word alignment is assigned a unique prime number.

PFA Node Alignment Algorithm
For every word-to-word alignment, we do the following:
– Let p be the unique prime value assigned to the alignment.
– Let w_s and w_t be the aligned words on the source and target side.
– Assign the value p to the POS nodes corresponding to the words w_s and w_t.
Example: "Australia" gets value 2, "is" gets value 3.

PFA Node Alignment Algorithm
In case there are "one-to-many" alignments, they are treated as multiple "one-to-one" alignments, and all of these alignments are given the same prime value.
Example: "North Korea" is a single word on the Chinese side. That word is assigned the value 25, which is the product 5*5.

PFA Node Alignment Algorithm
Once all the lexical items have values, we propagate the values up the tree as follows:
– Work bottom-up.
– A node updates its value as the product of the values of its children.
Note: values can become large!

PFA Node Alignment Algorithm
Once all nodes have values, they can be aligned as follows:
– If a node on the Chinese side has the same value as a node on the English side, align them.
– If two nodes have equal values, take the node at the lowest level in the tree.

PFA Node Alignment Algorithm
Features of the algorithm:
– Aligned constituents can have different labels.
– The order of the sub-constituents does not matter in node alignment.
– Unaligned words in constituents are allowed, but we are conservative (attach low).
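
Pulling the last few slides together, here is a compact, runnable sketch of the prime-factorization alignment in Python. The Node class, the helper names, and the toy sentence pair ("le chat noir" / "the black cat") are invented for illustration, and a few details are simplified (one prime per source word shared across its links, ties resolved by keeping the lowest node); the real aligner operates on full parser output and a GIZA++ word alignment.

class Node:
    # Minimal parse-tree node; "word" is set only on preterminals.
    def __init__(self, label, children=(), word=None):
        self.label, self.children, self.word = label, list(children), word
        self.value = 1                        # every node starts at 1

    def leaves(self):
        if self.word is not None:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

    def compute_values(self):
        # Bottom-up: an internal node's value is the product of its children's values.
        if self.word is None:
            v = 1
            for c in self.children:
                v *= c.compute_values()
            self.value = v
        return self.value

def primes():
    # 2, 3, 5, 7, ... by trial division (plenty fast for one sentence pair).
    found, n = [], 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def lowest_by_value(root):
    # For each value > 1, keep the deepest node carrying it (only unary chains
    # that add no newly aligned words can share a value).
    best = {}
    def visit(node, depth):
        if node.value != 1 and depth >= best.get(node.value, (-1,))[0]:
            best[node.value] = (depth, node)
        for c in node.children:
            visit(c, depth + 1)
    visit(root, 0)
    return {value: node for value, (_, node) in best.items()}

def pfa_align(src_root, tgt_root, links):
    # links: source leaf index -> list of target leaf indices; every link of a
    # word shares one prime, so one-to-many cases multiply out as on the slides.
    src_leaves, tgt_leaves = src_root.leaves(), tgt_root.leaves()
    gen = primes()
    for s_idx, t_indices in links.items():
        p = next(gen)
        for t_idx in t_indices:
            src_leaves[s_idx].value *= p
            tgt_leaves[t_idx].value *= p
    src_root.compute_values()
    tgt_root.compute_values()
    src_nodes, tgt_nodes = lowest_by_value(src_root), lowest_by_value(tgt_root)
    return [(src_nodes[v], tgt_nodes[v]) for v in src_nodes if v in tgt_nodes]

# Invented toy pair with a crossing alignment: le<->the, chat<->cat, noir<->black.
src = Node("NP", [Node("DET", word="le"), Node("N", word="chat"), Node("ADJ", word="noir")])
tgt = Node("NP", [Node("DT", word="the"), Node("JJ", word="black"), Node("NN", word="cat")])
for s, t in pfa_align(src, tgt, {0: [0], 1: [2], 2: [1]}):
    print(s.label, s.word or [l.word for l in s.leaves()], "<->",
          t.label, t.word or [l.word for l in t.leaves()])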

PFA Node Alignment Algorithm
Extraction of phrases: get the yields of the aligned nodes and add them to a phrase table, tagged with the syntactic categories on the source and target sides.
Example: NP # NP :: 澳洲 # Australia

PFA Node Alignment Algorithm
All phrases from this tree:
1. IP # S :: 澳洲 是 与 北韩 有 邦交 的 少数 国家 之一 。 # Australia is one of the few countries that have diplomatic relations with North Korea.
2. VP # VP :: 是 与 北韩 有 邦交 的 少数 国家 之一 # is one of the few countries that have diplomatic relations with North Korea
3. NP # NP :: 与 北韩 有 邦交 的 少数 国家 之一 # one of the few countries that have diplomatic relations with North Korea
4. VP # VP :: 与 北韩 有 邦交 # have diplomatic relations with North Korea
5. NP # NP :: 邦交 # diplomatic relations
6. NP # NP :: 北韩 # North Korea
7. NP # NP :: 澳洲 # Australia
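
Reading off these phrases is then a small loop over the aligned node pairs. In this sketch the pairs are given directly as (category, tokens) tuples rather than tree nodes, and only two of the slide's entries are reproduced; the function name and input convention are assumptions for illustration.

def phrase_entries(aligned_pairs):
    # One phrase-table line per aligned constituent pair:
    #   SRC_CAT # TGT_CAT :: source yield # target yield
    return ["{} # {} :: {} # {}".format(s_cat, t_cat, " ".join(s_toks), " ".join(t_toks))
            for (s_cat, s_toks), (t_cat, t_toks) in aligned_pairs]

pairs = [(("NP", ["北韩"]), ("NP", ["North", "Korea"])),
         (("NP", ["澳洲"]), ("NP", ["Australia"]))]
for line in phrase_entries(pairs):
    print(line)   # NP # NP :: 北韩 # North Korea, etc.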

PFA Constituent Node Alignment Performance
Compare with manually-aligned constituent nodes:
– Selected 30 sentences from a Chinese-English parallel treebank
– A bilingual expert manually aligned the nodes in the trees
Main sources of disagreement:
– 1-to-many and many-to-many word alignments
– Errors or inconsistencies in the manual word alignments
Metrics reported: Precision, Recall, F-1

PFA Constituent Node Alignment Performance
Evaluation data: Chinese-English Treebank
– Parallel Chinese-English treebank with manual word alignments
– 3342 sentence pairs
Created a "Gold Standard" of constituent alignments using the manual word alignments and the treebank trees
– Node alignments: (about 12 per tree pair)
– NP-to-NP alignments: 5427
Evaluation: run the PFA Aligner with automatic word alignments on the same data and compare with the "Gold Standard" alignments

PFA Constituent Node Alignment Performance
Viterbi word alignments from the Chinese-to-English and reverse directions were merged using different algorithms, and the performance of node alignment was tested with each resulting alignment.
Viterbi combinations compared (each scored on Precision, Recall and F-1): Intersection, Union, Sym-1 (Thot Toolkit), Sym-2 (Thot Toolkit), Grow-Diag-Final, Grow-Diag-Final-and

Transfer Rule Learning
Input: constituent-aligned parallel trees
Idea: aligned nodes act as possible decomposition points of the parallel trees
– The sub-trees of any aligned pair of nodes can be broken apart at lower-level aligned nodes, creating an inventory of "tree-fragment" correspondences
– Synchronous "tree-frags" can be converted into synchronous rules
– Similar in nature to [Galley et al., 2004, 2006]
Algorithm (sketched in code below):
– Find all possible minimal tree-fragment decompositions from the node-aligned trees
– "Flatten" the tree fragments into Stat-XFER style synchronous CFG rules
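
The decomposition step can be sketched in a few lines: starting from each aligned node, descend until the next aligned node (or a lexical item) and stop, so the frontier of every minimal fragment consists of aligned constituents and words. The Node class, the helper names, and the heavily simplified toy tree below are invented for illustration; flattening into the Stat-XFER notation is shown further below.

class Node:
    # Toy tree node; "aligned" marks a constituent-aligned decomposition point.
    def __init__(self, label, children=(), aligned=False, word=None):
        self.label, self.children = label, list(children)
        self.aligned, self.word = aligned, word

def frontier(node):
    # Expand children until an aligned node or a lexical item is reached.
    items = []
    for child in node.children:
        if child.aligned or child.word is not None:
            items.append(child)            # cut point or word: stays on the frontier
        else:
            items.extend(frontier(child))  # unaligned internal node: keep descending
    return items

def minimal_fragments(root):
    # One minimal flat fragment (lhs label, frontier symbols) per aligned node.
    frags = []
    def visit(node):
        if node.aligned:
            symbols = [c.label if c.aligned else c.word for c in frontier(node)]
            frags.append((node.label, symbols))
        for c in node.children:
            visit(c)
    visit(root)
    return frags

# Invented, heavily simplified source-side tree: IP -> NP VP with the NPs and VP aligned.
tree = Node("IP", aligned=True, children=[
    Node("NP", aligned=True, children=[Node("NR", word="澳洲")]),
    Node("VP", aligned=True, children=[
        Node("VC", word="是"),
        Node("NP", aligned=True, children=[Node("NN", word="邦交")]),
    ]),
])
for lhs, rhs in minimal_fragments(tree):
    print(lhs, "->", rhs)
# IP -> ['NP', 'VP']   NP -> ['澳洲']   VP -> ['是', 'NP']   NP -> ['邦交']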

Rule Extraction Algorithm
Tree-fragment extraction:
– Extract sub-tree segments, including the synchronous alignment information in the target tree.
– All the sub-trees and the super-tree are extracted.

Rule Extraction Algorithm
Flat rule creation: each of the tree-fragment pairs is flattened to create a rule in the Stat-XFER formalism.
Four major parts to the rule:
1. Type of the rule: source- and target-side type information
2. Constituent sequence of the synchronous flat rule
3. Alignment information of the constituents
4. Constraints in the rule (currently not extracted)

Rule Extraction Algorithm
Flat rule creation – sample rule:
IP::S [NP VP .] -> [NP VP .]
(
 ;; Alignments
 (X1::Y1)
 (X2::Y2)
 ;; Constraints
)

Rule Extraction Algorithm
Flat rule creation – sample rule:
NP::NP [VP 的 CD 国家 之一] -> [one of the CD countries that VP]
(
 ;; Alignments
 (X1::Y7)
 (X3::Y4)
)
Note:
1. Any one-to-one aligned words are elevated to part-of-speech in the flat rule.
2. Any non-aligned words on either the source or target side remain lexicalized.
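
A tiny formatter sketch for this flat-rule notation reproduces the sample rule above. The function name and input conventions are assumptions made for illustration; the inputs are the already-flattened constituent sequences together with their 1-based constituent alignments.

def format_flat_rule(src_type, tgt_type, src_seq, tgt_seq, alignments, score=None):
    # Emit a rule in the flat Stat-XFER-style notation used on these slides.
    lines = ["{}::{} [{}] -> [{}]".format(src_type, tgt_type,
                                          " ".join(src_seq), " ".join(tgt_seq)),
             "("]
    if score is not None:
        lines.append("  (*score* {})".format(score))
    lines.append("  ;; Alignments")
    lines.extend("  (X{}::Y{})".format(i, j) for i, j in alignments)
    lines.append(")")
    return "\n".join(lines)

print(format_flat_rule(
    "NP", "NP",
    ["VP", "的", "CD", "国家", "之一"],
    ["one", "of", "the", "CD", "countries", "that", "VP"],
    alignments=[(1, 7), (3, 4)],
))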

Rule Extraction Algorithm
All rules extracted:

VP::VP [VC NP] -> [VBZ NP]
( ;; Alignments
 (X1::Y1)
 (X2::Y2)
)

NP::NP [NR] -> [NNP]
( ;; Alignments
 (X1::Y1)
)

VP::VP [与 NP VE NP] -> [VBP NP with NP]
( ;; Alignments
 (X2::Y4)
 (X3::Y1)
 (X4::Y2)
)

NP::NP [VP 的 CD 国家 之一] -> [one of the CD countries that VP]
( ;; Alignments
 (X1::Y7)
 (X3::Y4)
)

IP::S [NP VP] -> [NP VP]
( ;; Alignments
 (X1::Y1)
 (X2::Y2)
)

NP::NP ["北韩"] -> ["North" "Korea"]
( ; many-to-one alignment is a phrase
)

Chinese-English Rule Learning
Transfer rules:
– 61 manually developed transfer rules
– High-accuracy rules extracted from manually word-aligned parallel data

Translation Example
SrcSent 3: 澳洲是与北韩有邦交的少数国家之一。
Gloss: Australia is with north korea have diplomatic relations DE few country one-of .
Reference: Australia is one of the few countries that have diplomatic relations with North Korea.
Translation: Australia is one of the few countries that has diplomatic relations with north korea.
Overall: , Prob: , Rules: , TransSGT: , TransTGS: , Frag: , Length: , Words: 11,15
(0 10 "Australia is one of the few countries that has diplomatic relations with north korea" "澳洲 是 与 北韩 有 邦交 的 少数 国家 之一" "(S1, (S, (NP,2 (NB,1 (LDC_N,1267 'Australia') ) ) (VP, (MISC_V,1 'is') (NP, (LITERAL 'one') (LITERAL 'of') (NP, (NP, (NP,1 (LITERAL 'the') (NUMNB,2 (LDC_NUM,420 'few') (NB,1 (WIKI_N,62230 'countries') ) ) ) (LITERAL 'that') (VP, (LITERAL 'has') (FBIS_NP,11916 'diplomatic relations') ) ) (FBIS_PP,84791 'with north korea') ) ) ) ) ) ")
( "." " 。 " "(MISC_PUNC,20 '.')")

Example: XFER Rules

;;SL::(2,4) 对 台 贸易
;;TL::(3,5) trade to taiwan
;;Score::22
{NP, }
NP::NP [PP NP] -> [NP PP]
((*score* )
 (X2::Y1)
 (X1::Y2))

;;SL::(2,7) 直接 提到 伟 哥 的 广告
;;TL::(1,7) commercials that directly mention the name viagra
;;Score::5
{NP, }
NP::NP [VP "的" NP] -> [NP "that" VP]
((*score* )
 (X3::Y1)
 (X1::Y3))

;;SL::(4,14) 有 一 至 多 个 高 新 技术 项目 或 产品
;;TL::(3,14) has one or more new, high level technology projects or products
;;Score::4
{VP, }
VP::VP ["有" NP] -> ["has" NP]
((*score* 0.1)
 (X2::Y2))

Current and Future Work
Extraction based on trees on both sides, or on trees on one side (with projection)?
– Trees on both sides provide accurate constituent boundaries, but divergent parser representations result in large coverage gaps
– Compromise: trees on one side + low-level constituents (chunks) on the other side
Exploring the space of extracted rules:
– Binarize the rules or not?
– Collapse constituent categories (or refine some of them)?
– Rule filtering strategies (keep only count > 1?)
– Rule scoring strategies (currently only maximum-likelihood scores)
Refining word alignment errors
Merging of resources acquired from data with manual lexicons and transfer rules

Conclusions
– Stat-XFER is a promising general MT framework, suited to a variety of MT scenarios and languages
– Provides a complete solution for building end-to-end MT systems from parallel data, akin to phrase-based SMT systems (training, tuning, runtime system)
– Syntactic resources acquired from parallel corpora may be useful for other types of MT systems (high-quality phrase tables)
– A complex but highly interesting set of open research issues