MT for Languages with Limited Resources
11-731 Machine Translation, April 20, 2011
Based on joint work with: Lori Levin, Jaime Carbonell, Stephan Vogel, Shuly Wintner, Danny Shacham, Katharina Probst, Erik Peterson, Christian Monson, Roberto Aranovich and Ariadna Font-Llitjos
Why Machine Translation for Minority and Indigenous Languages?
– Commercial MT is economically feasible for only a handful of major languages with large resources (corpora, human developers)
– Is there hope for MT for languages with very limited resources? Benefits include:
  – Better government access to indigenous communities (epidemics, crop failures, etc.)
  – Better participation of indigenous communities in information-rich activities (health care, education, government) without giving up their languages
  – Language preservation
  – Civilian and military applications (disaster relief)
MT for Minority and Indigenous Languages: Challenges
– Minimal amount of parallel text
– Possibly competing standards for orthography/spelling
– Often relatively few trained linguists
– Access to native informants is possible
– Need to minimize development time and cost
MT for Low-Resource Languages: Possible Approaches
– Phrase-based SMT, with whatever small amounts of parallel data are available
– Rule-based MT, which requires bilingual experts and resources
– Hybrid approaches, such as the AVENUE Project (Stat-XFER) approach:
  – Incorporate manually acquired resources within a general statistical framework
  – Augment with targeted elicitation and resource acquisition from bilingual non-experts
CMU Statistical Transfer (Stat-XFER) MT Approach
Integrate the major strengths of rule-based and statistical MT within a common framework:
– Linguistically rich formalism that can express complex and abstract compositional transfer rules
– Rules can be written by human experts and also acquired automatically from data
– Easy integration of morphological analyzers and generators
– Word and syntactic-phrase correspondences can be automatically acquired from parallel text
– Search-based decoding from statistical MT adapted to find the best translation within the search space: multi-feature scoring, beam search, parameter optimization, etc.
– Framework suitable for both resource-rich and resource-poor language scenarios
Stat-XFER Main Principles
– Framework: statistical search-based approach with syntactic translation transfer rules that can be acquired from data but also developed and extended by experts
– Automatic word and phrase translation lexicon acquisition from parallel data
– Transfer-rule learning: apply ML-based methods to automatically acquire syntactic transfer rules for translation between the two languages
– Elicitation: use bilingual native informants to produce a small high-quality word-aligned bilingual corpus of translated phrases and sentences
– Rule refinement: refine the acquired rules via a process of interaction with bilingual informants
– XFER + Decoder:
  – XFER engine produces a lattice of possible transferred structures at all levels
  – Decoder searches and selects the best-scoring combination
Stat-XFER Framework
Source Input → Preprocessing → Morphology → Transfer Engine (Transfer Rules, Bilingual Lexicon) → Translation Lattice → Second-Stage Decoder (Language Model, Weighted Features) → Target Output
Transfer Engine (worked example)

Hebrew input: בשורה הבאה

Transfer rules:
{NP1,3}
NP1::NP1 [NP1 "H" ADJ] -> [ADJ NP1]
((X3::Y1) (X1::Y2)
 ((X1 def) = +)
 ((X1 status) =c absolute)
 ((X1 num) = (X3 num))
 ((X1 gen) = (X3 gen))
 (X0 = X1))

Translation lexicon:
N::N |: ["$WR"] -> ["BULL"]
((X1::Y1) ((X0 NUM) = s) ((Y0 lex) = "BULL"))
N::N |: ["$WRH"] -> ["LINE"]
((X1::Y1) ((X0 NUM) = s) ((Y0 lex) = "LINE"))

Translation output lattice (after preprocessing, morphology, and transfer):
(0 1 "IN" @PREP)
(1 1 "THE" @DET)
(2 2 "LINE" @N)
(1 2 "THE LINE" @NP)
(0 2 "IN LINE" @PP)
(0 4 "IN THE NEXT LINE" @PP)

Decoder (with English language model) → English output: "in the next line"
Transfer Rule Formalism
– Type information
– Part-of-speech/constituent information
– Alignments
– x-side constraints
– y-side constraints
– xy-constraints, e.g. ((Y1 AGR) = (X1 AGR))

; SL: the old man, TL: ha-ish ha-zaqen
NP::NP [DET ADJ N] -> [DET N DET ADJ]
(
 (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
 ((X1 AGR) = *3-SING)
 ((X1 DEF) = *DEF)
 ((X3 AGR) = *3-SING)
 ((X3 COUNT) = +)
 ((Y1 DEF) = *DEF)
 ((Y3 DEF) = *DEF)
 ((Y2 AGR) = *3-SING)
 ((Y2 GENDER) = (Y4 GENDER))
)
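To make the anatomy of such a rule concrete, here is a minimal sketch in Python of how a rule of this kind could be represented as a data structure. This is an illustration only, not the actual Stat-XFER implementation; all field names are invented for the example:

from dataclasses import dataclass, field

@dataclass
class TransferRule:
    src_type: str                 # type info on the source side, e.g. "NP"
    tgt_type: str                 # type info on the target side, e.g. "NP"
    src_rhs: list                 # source constituent sequence, e.g. ["DET", "ADJ", "N"]
    tgt_rhs: list                 # target constituent sequence, e.g. ["DET", "N", "DET", "ADJ"]
    alignments: list              # (x_index, y_index) pairs, 1-based as in the formalism
    x_constraints: list = field(default_factory=list)   # constraints on source features only
    y_constraints: list = field(default_factory=list)   # constraints on target features only
    xy_constraints: list = field(default_factory=list)  # cross-side agreement constraints

# The "old man" rule from the slide, transcribed into this representation:
rule = TransferRule(
    src_type="NP", tgt_type="NP",
    src_rhs=["DET", "ADJ", "N"],
    tgt_rhs=["DET", "N", "DET", "ADJ"],
    alignments=[(1, 1), (1, 3), (2, 4), (3, 2)],
    x_constraints=[("X1.AGR", "=", "3-SING"), ("X1.DEF", "=", "DEF"),
                   ("X3.AGR", "=", "3-SING"), ("X3.COUNT", "=", "+")],
    y_constraints=[("Y1.DEF", "=", "DEF"), ("Y3.DEF", "=", "DEF"),
                   ("Y2.AGR", "=", "3-SING")],
    xy_constraints=[("Y2.GENDER", "=", "Y4.GENDER")],
)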
Transfer Rule Formalism (II)
– Value constraints
– Agreement constraints

; SL: the old man, TL: ha-ish ha-zaqen
NP::NP [DET ADJ N] -> [DET N DET ADJ]
(
 (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
 ((X1 AGR) = *3-SING)
 ((X1 DEF) = *DEF)
 ((X3 AGR) = *3-SING)
 ((X3 COUNT) = +)
 ((Y1 DEF) = *DEF)
 ((Y3 DEF) = *DEF)
 ((Y2 AGR) = *3-SING)
 ((Y2 GENDER) = (Y4 GENDER))
)
Translation Lexicon: Hebrew-to-English Examples (semi-manually developed)

PRO::PRO |: ["ANI"] -> ["I"]
((X1::Y1)
 ((X0 per) = 1)
 ((X0 num) = s)
 ((X0 case) = nom))

PRO::PRO |: ["ATH"] -> ["you"]
((X1::Y1)
 ((X0 per) = 2)
 ((X0 num) = s)
 ((X0 gen) = m)
 ((X0 case) = nom))

N::N |: ["$&H"] -> ["HOUR"]
((X1::Y1)
 ((X0 NUM) = s)
 ((Y0 NUM) = s)
 ((Y0 lex) = "HOUR"))

N::N |: ["$&H"] -> ["hours"]
((X1::Y1)
 ((X0 NUM) = p)
 ((Y0 NUM) = p)
 ((Y0 lex) = "HOUR"))
Translation Lexicon: French-to-English Examples (automatically acquired)

DET::DET |: ["le"] -> ["the"]
((X1::Y1))

Prep::Prep |: ["dans"] -> ["in"]
((X1::Y1))

N::N |: ["principes"] -> ["principles"]
((X1::Y1))

N::N |: ["respect"] -> ["accordance"]
((X1::Y1))

NP::NP |: ["le respect"] -> ["accordance"]
( )

PP::PP |: ["dans le respect"] -> ["in accordance"]
( )

PP::PP |: ["des principes"] -> ["with the principles"]
( )
Hebrew-English Transfer Grammar: Example Rules (manually developed)

{NP1,2}
;; SL: $MLH ADWMH
;; TL: A RED DRESS
NP1::NP1 [NP1 ADJ] -> [ADJ NP1]
(
 (X2::Y1) (X1::Y2)
 ((X1 def) = -)
 ((X1 status) =c absolute)
 ((X1 num) = (X2 num))
 ((X1 gen) = (X2 gen))
 (X0 = X1)
)

{NP1,3}
;; SL: H $MLWT H ADWMWT
;; TL: THE RED DRESSES
NP1::NP1 [NP1 "H" ADJ] -> [ADJ NP1]
(
 (X3::Y1) (X1::Y2)
 ((X1 def) = +)
 ((X1 status) =c absolute)
 ((X1 num) = (X3 num))
 ((X1 gen) = (X3 gen))
 (X0 = X1)
)
French-English Transfer Grammar: Example Rules (automatically acquired)

{PP,24691}
;; SL: des principes
;; TL: with the principles
PP::PP ["des" N] -> ["with the" N]
(
 (X1::Y1)
)

{PP,312}
;; SL: dans le respect des principes
;; TL: in accordance with the principles
PP::PP [Prep NP] -> [Prep NP]
(
 (X1::Y1)
 (X2::Y2)
)
The Transfer Engine
– Input: source-language input sentence, or source-language confusion network
– Output: lattice representing the collection of translation fragments at all levels supported by the transfer rules
– Basic algorithm: a "bottom-up" integrated parsing-transfer-generation chart parser guided by the synchronous transfer rules
  – Start with translations of individual words and phrases from the translation lexicon
  – Create translations of larger constituents by applying applicable transfer rules to previously created lattice entries
  – Beam search controls the exponential combinatorics of the search space, using multiple scoring features
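A minimal, runnable sketch of the bottom-up idea, using the deck's B-H-$WRH example. This is a toy re-implementation for illustration, not the actual engine: it ignores unification constraints and scoring, handles binary rules only, and keeps just spans, labels, and target strings:

from collections import defaultdict

# Toy lexicon: (label, source word) -> target word
LEXICON = {("N", "$WRH"): "LINE", ("DET", "H"): "THE", ("PREP", "B"): "IN"}
# Toy rules: parent label, child label sequence, target-side order of the children
RULES = [("NP", ["DET", "N"], [0, 1]),
         ("PP", ["PREP", "NP"], [0, 1])]

def transfer_chart(words):
    # chart[(i, j)] holds (label, target string) entries covering words[i:j]
    chart = defaultdict(list)
    for i, w in enumerate(words):                      # seed with lexical translations
        for (lab, src), tgt in LEXICON.items():
            if src == w:
                chart[(i, i + 1)].append((lab, tgt))
    for span in range(2, len(words) + 1):              # build larger constituents bottom-up
        for i in range(len(words) - span + 1):
            j = i + span
            for parent, kids, order in RULES:
                for k in range(i + 1, j):              # split point for binary rules
                    for l1, t1 in chart[(i, k)]:
                        for l2, t2 in chart[(k, j)]:
                            if [l1, l2] == kids:
                                parts = [t1, t2]
                                tgt = " ".join(parts[x] for x in order)
                                chart[(i, j)].append((parent, tgt))
    return chart

chart = transfer_chart(["B", "H", "$WRH"])
print(chart[(0, 3)])   # -> [('PP', 'IN THE LINE')]

In the real engine, each chart entry also carries a feature structure, and a rule fires only if its x-, y-, and xy-constraints unify.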
The Transfer Engine: Some Unique Features
– Works with either learned or manually developed transfer grammars
– Handles rules with or without unification constraints
– Supports interfacing with servers for morphological analysis and generation
– Can handle ambiguous source-word analyses and/or SL segmentations represented in the form of lattice structures
Hebrew Example (from [Lavie et al., 2004])
Input word: B$WRH

0 1 2 3 4
|--------B$WRH--------|
|-----B-----|$WR|--H--|
|--B--|-H--|--$WRH---|
Hebrew Example (from [Lavie et al., 2004])

Y0: ((SPANSTART 0) (SPANEND 4) (LEX B$WRH) (POS N) (GEN F) (NUM S) (STATUS ABSOLUTE))
Y1: ((SPANSTART 0) (SPANEND 2) (LEX B) (POS PREP))
Y2: ((SPANSTART 1) (SPANEND 3) (LEX $WR) (POS N) (GEN M) (NUM S) (STATUS ABSOLUTE))
Y3: ((SPANSTART 3) (SPANEND 4) (LEX $LH) (POS POSS))
Y4: ((SPANSTART 0) (SPANEND 1) (LEX B) (POS PREP))
Y5: ((SPANSTART 1) (SPANEND 2) (LEX H) (POS DET))
Y6: ((SPANSTART 2) (SPANEND 4) (LEX $WRH) (POS N) (GEN F) (NUM S) (STATUS ABSOLUTE))
Y7: ((SPANSTART 0) (SPANEND 4) (LEX B$WRH) (POS LEX))
XFER Output Lattice
(28 28 "AND" -5.6988 "W" "(CONJ,0 'AND')")
(29 29 "SINCE" -8.20817 "MAZ" "(ADVP,0 (ADV,5 'SINCE'))")
(29 29 "SINCE THEN" -12.0165 "MAZ" "(ADVP,0 (ADV,6 'SINCE THEN'))")
(29 29 "EVER SINCE" -12.5564 "MAZ" "(ADVP,0 (ADV,4 'EVER SINCE'))")
(30 30 "WORKED" -10.9913 "&BD" "(VERB,0 (V,11 'WORKED'))")
(30 30 "FUNCTIONED" -16.0023 "&BD" "(VERB,0 (V,10 'FUNCTIONED'))")
(30 30 "WORSHIPPED" -17.3393 "&BD" "(VERB,0 (V,12 'WORSHIPPED'))")
(30 30 "SERVED" -11.5161 "&BD" "(VERB,0 (V,14 'SERVED'))")
(30 30 "SLAVE" -13.9523 "&BD" "(NP0,0 (N,34 'SLAVE'))")
(30 30 "BONDSMAN" -18.0325 "&BD" "(NP0,0 (N,36 'BONDSMAN'))")
(30 30 "A SLAVE" -16.8671 "&BD" "(NP,1 (LITERAL 'A') (NP2,0 (NP1,0 (NP0,0 (N,34 'SLAVE')))))")
(30 30 "A BONDSMAN" -21.0649 "&BD" "(NP,1 (LITERAL 'A') (NP2,0 (NP1,0 (NP0,0 (N,36 'BONDSMAN')))))")
The Lattice Decoder
– Stack decoder, similar to standard statistical MT decoders
– Searches for the best-scoring path of non-overlapping lattice arcs
– No reordering during decoding
– Scoring based on a log-linear combination of scoring features, with weights trained using Minimum Error Rate Training (MERT)
– Scoring components:
  – Statistical language model
  – Bi-directional MLE phrase and rule scores
  – Lexical probabilities
  – Fragmentation: how many arcs are needed to cover the entire translation?
  – Length penalty: how far from the expected target length?
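A minimal sketch of the log-linear path score. The feature set mirrors the list above, but the representation, feature names, and weights are illustrative, not the decoder's actual interface:

import math

def path_score(arcs, weights, lm, expected_length):
    """Score one path (a list of non-overlapping lattice arcs) log-linearly.
    Each arc is a dict with a 'words' list and per-arc model log-probs."""
    words = [w for arc in arcs for w in arc["words"]]
    feats = {
        "lm": lm(words),                                   # LM log-prob of the target string
        "rule": sum(arc["rule_logprob"] for arc in arcs),  # MLE rule/phrase scores
        "lex": sum(arc["lex_logprob"] for arc in arcs),    # lexical probabilities
        "frag": -len(arcs),                                # fewer arcs = less fragmentation
        "len": -abs(len(words) - expected_length),         # length penalty
    }
    return sum(weights[name] * value for name, value in feats.items())

# Toy usage: a unigram "language model" and hand-set weights.
unigram = {"in": 0.1, "the": 0.2, "next": 0.05, "line": 0.05}
lm = lambda ws: sum(math.log(unigram.get(w, 1e-6)) for w in ws)
arcs = [{"words": ["in"], "rule_logprob": -0.5, "lex_logprob": -0.7},
        {"words": ["the", "next", "line"], "rule_logprob": -1.2, "lex_logprob": -2.1}]
weights = {"lm": 1.0, "rule": 1.0, "lex": 1.0, "frag": 0.5, "len": 0.5}
print(path_score(arcs, weights, lm, expected_length=4))

In the real system the weights are not hand-set but tuned with MERT against a development set.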
XFER Lattice Decoder: Example Output
0 0 ON THE FOURTH DAY THE LION ATE THE RABBIT TO A MORNING MEAL
Overall: -8.18323, Prob: -94.382, Rules: 0, Frag: 0.153846, Length: 0, Words: 13,13
235 < 0 8 -19.7602: B H IWM RBI&I (PP,0 (PREP,3 'ON') (NP,2 (LITERAL 'THE') (NP2,0 (NP1,1 (ADJ,2 (QUANT,0 'FOURTH')) (NP1,0 (NP0,1 (N,6 'DAY')))))))>
918 < 8 14 -46.2973: H ARIH AKL AT H $PN (S,2 (NP,2 (LITERAL 'THE') (NP2,0 (NP1,0 (NP0,1 (N,17 'LION'))))) (VERB,0 (V,0 'ATE')) (NP,100 (NP,2 (LITERAL 'THE') (NP2,0 (NP1,0 (NP0,1 (N,24 'RABBIT')))))))>
584 < 14 17 -30.6607: L ARWXH BWQR (PP,0 (PREP,6 'TO') (NP,1 (LITERAL 'A') (NP2,0 (NP1,0 (NNP,3 (NP0,0 (N,32 'MORNING')) (NP0,0 (N,27 'MEAL')))))))>
Stat-XFER MT Systems
General Stat-XFER framework under development for the past nine years. Systems so far:
– Chinese-to-English
– French-to-English
– Hebrew-to-English
– Urdu-to-English
– German-to-English
– Hindi-to-English
– Dutch-to-English
– Turkish-to-English
– Mapudungun-to-Spanish
– Arabic-to-English
– Brazilian Portuguese-to-English
– English-to-Arabic
– Hebrew-to-Arabic
Learning Transfer Rules for Languages with Limited Resources
Rationale:
– Large bilingual corpora are not available
– Bilingual native informant(s) can translate and align a small pre-designed elicitation corpus, using an elicitation tool
– Elicitation corpus designed to be typologically comprehensive and compositional
– Transfer-rule engine and rule-learning approach support acquisition of generalized transfer rules from the data
Elicitation tool examples (screenshots in the original slides): English-Chinese, English-Hindi, Spanish-Mapudungun, and English-Arabic.
The Typological Elicitation Corpus
– Translated and aligned by a bilingual informant
– Corpus consists of linguistically diverse constructions
– Based on the elicitation and documentation work of field linguists (e.g. Comrie 1977, Bouquiaux 1992)
– Organized compositionally: elicit simple structures first, then use them as building blocks
– Goal: minimize size, maximize linguistic coverage
The Structural Elicitation Corpus
– Designed to cover the most common phrase structures of English, in order to learn how these structures map onto their equivalents in other languages
– Constructed using the constituent parse trees from the Penn TreeBank:
  – Extracted and frequency-ranked all rules in the parse trees
  – Selected the top ~200 rules, filtered idiosyncratic cases
  – Revised lexical choices within the examples
– Goal: minimize size, maximize linguistic coverage of structures
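The extraction step can be sketched in a few lines using NLTK's bundled treebank sample (NLTK is an assumption here for illustration; the original work used the full Penn TreeBank):

from collections import Counter
import nltk
from nltk.corpus import treebank

nltk.download("treebank", quiet=True)

counts = Counter()
for tree in treebank.parsed_sents():
    for prod in tree.productions():          # CFG rules of this parse tree
        if prod.is_nonlexical():             # skip lexical rules like NN -> 'dog'
            counts[prod] += 1

# Frequency-rank and keep the most common structures (top ~200 in the slides).
for rule, n in counts.most_common(10):
    print(n, rule)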
The Structural Elicitation Corpus: Examples

srcsent: in the forest
tgtsent: B H I&R
aligned: ((1,1),(2,2),(3,3))
context: C-Structure: ((PREP in-1) ((DET the-2) (N forest-3)))

srcsent: steps
tgtsent: MDRGWT
aligned: ((1,1))
context: C-Structure: ((N steps-1))

srcsent: the boy ate the apple
tgtsent: H ILD AKL AT H TPWX
aligned: ((1,1),(2,2),(3,3),(4,5),(5,6))
context: C-Structure: (((DET the-1) (N boy-2)) ((V ate-3) ((DET the-4) (N apple-5))))

srcsent: the first year
tgtsent: H $NH H RA$WNH
aligned: ((1,1 3),(2,4),(3,2))
context: C-Structure: ((DET the-1) ((ADJ first-2)) (N year-3))
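The record format above is simple enough to parse mechanically. A sketch (the field names follow the examples; the parsing code itself is illustrative, not the project's actual tooling):

def parse_elicitation(text):
    """Parse srcsent/tgtsent/aligned/context records from an elicitation file."""
    records, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line:                      # blank line separates records
            if current:
                records.append(current)
                current = {}
            continue
        key, _, value = line.partition(":")
        current[key.strip()] = value.strip()
    if current:
        records.append(current)
    return records

sample = """srcsent: in the forest
tgtsent: B H I&R
aligned: ((1,1),(2,2),(3,3))
context: C-Structure: ((PREP in-1) ((DET the-2) (N forest-3)))
"""
for r in parse_elicitation(sample):
    print(r["srcsent"], "->", r["tgtsent"])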
A Limited Data Scenario for Hindi-to-English
– Conducted during a DARPA "Surprise Language Exercise" (SLE) in June 2003
– Put together a scenario with "miserly" data resources:
  – Elicited data corpus: 17,589 phrases
  – Cleaned portion (top 12%) of the LDC dictionary: ~2,725 Hindi words (23,612 translation pairs)
  – Manually acquired resources during the SLE:
    – 500 manual bigram translations
    – 72 manually written phrase transfer rules
    – 105 manually written postposition rules
    – 48 manually written time expression rules
  – No additional parallel text!
Examples of Learned Rules (Hindi-to-English)

{NP,14244} ;; Score: 0.0429
NP::NP [N] -> [DET N]
(
 (X1::Y2)
)

{NP,14434} ;; Score: 0.0040
NP::NP [ADJ CONJ ADJ N] -> [ADJ CONJ ADJ N]
(
 (X1::Y1) (X2::Y2) (X3::Y3) (X4::Y4)
)

{PP,4894} ;; Score: 0.0470
PP::PP [NP POSTP] -> [PREP NP]
(
 (X2::Y1) (X1::Y2)
)
Manual Transfer Rules: Hindi Example

;; PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
;; passive of 43 (7b)
{VP,28}
VP::VP : [V V V] -> [Aux V]
(
 (X1::Y2)
 ((x1 form) = root)
 ((x2 type) =c light)
 ((x2 form) = part)
 ((x2 aspect) = perf)
 ((x3 lexwx) = 'jAnA')
 ((x3 form) = part)
 ((x3 aspect) = perf)
 (x0 = x1)
 ((y1 lex) = be)
 ((y1 tense) = past)
 ((y1 agr num) = (x3 agr num))
 ((y1 agr pers) = (x3 agr pers))
 ((y2 form) = part)
)
Manual Transfer Rules: Example

; NP1 ke NP2 -> NP2 of NP1
; Ex: jIvana ke eka aXyAya
; life of (one) chapter
; ==> a chapter of life

{NP,12}
NP::NP : [PP NP1] -> [NP1 PP]
(
 (X1::Y2) (X2::Y1)
 ; ((x2 lexwx) = 'kA')
)

{NP,13}
NP::NP : [NP1] -> [NP1]
(
 (X1::Y1)
)

{PP,12}
PP::PP : [NP Postp] -> [Prep NP]
(
 (X1::Y2) (X2::Y1)
)

(The slide also shows the corresponding parse trees: Hindi [NP [PP [NP jIvana] [P ke]] [NP1 [Adj eka] [N aXyAya]]] mapped to English [NP [NP1 one chapter] [PP [P of] [NP life]]].)
Manual Grammar Development
– Covers mostly NPs, PPs, and VPs (verb complexes)
– ~70 grammar rules, covering basic and recursive NPs and PPs, and verb complexes of the main tenses in Hindi (developed in two weeks)
Testing Conditions
– Tested on a section of the JHU-provided data: 258 sentences with four reference translations
– Systems compared:
  – SMT system (stand-alone)
  – EBMT system (stand-alone)
  – XFER system (naïve decoding)
  – XFER system with "strong" decoder:
    – No grammar rules (baseline)
    – Manually developed grammar rules
    – Automatically learned grammar rules
  – XFER+SMT with strong decoder (MEMT)
Results on JHU Test Set

System                           BLEU    M-BLEU   NIST
EBMT                             0.058   0.165    4.22
SMT                              0.093   0.191    4.64
XFER (naïve), manual grammar     0.055   0.177    4.46
XFER (strong), no grammar        0.109   0.224    5.29
XFER (strong), learned grammar   0.116   0.231    5.37
XFER (strong), manual grammar    0.135   0.243    5.59
XFER+SMT                         0.136   0.243    5.65
Effect of Reordering in the Decoder
(Chart in the original slide; as the next slides note, allowing reordering in the decoder yielded very significant score improvements.)
Observations and Lessons (I)
– XFER with the strong decoder outperformed SMT, even without any grammar rules, in the miserly data scenario:
  – SMT was trained on elicited phrases that are very short
  – SMT had insufficient data to train more discriminative translation probabilities
  – XFER takes advantage of morphology:
    – Token coverage without morphology: 0.6989
    – Token coverage with morphology: 0.7892
– The manual grammar was somewhat better than the automatically learned grammar:
  – Learned rules were very simple
  – Large room for improvement in rule learning
Observations and Lessons (II)
– MEMT (XFER and SMT) based on the strong decoder produced the best results in the miserly scenario
– Reordering within the decoder provided very significant score improvements:
  – Much room for more sophisticated grammar rules
  – The strong decoder can carry some of the reordering "burden"
Modern Hebrew
– Native language of about 3-4 million people in Israel
– Semitic language, closely related to Arabic and with similar linguistic properties:
  – Root+pattern word formation system
  – Rich verb and noun morphology
  – Particles attach as prefixes to the following word: definite article (H), prepositions (B, K, L, M), coordinating conjunction (W), relativizers ($, K$)...
– Unique alphabet and writing system:
  – 22 letters represent (mostly) consonants
  – Vowels represented (mostly) by diacritics
  – Modern texts omit the diacritic vowels, adding a further level of ambiguity to each "bare" word
  – Example: MHGR can be mehager, m+hagar, or m+h+ger
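The prefix-particle ambiguity can be appreciated with a tiny recursive splitter. A sketch using the particle set from the slide; a real analyzer validates the remaining stem against its lexicon, which this sketch fakes with a hypothetical stem list:

PREFIXES = ["H", "B", "K", "L", "M", "W", "$", "K$"]   # particles from the slide
STEMS = {"HGR", "GR", "MHGR"}                          # hypothetical stem lexicon

def segmentations(word, prefix=()):
    """Yield all splits of `word` into zero or more prefix particles plus a known stem."""
    if word in STEMS:
        yield prefix + (word,)
    for p in PREFIXES:
        if word.startswith(p) and len(word) > len(p):
            yield from segmentations(word[len(p):], prefix + (p + "+",))

for seg in segmentations("MHGR"):
    print("".join(seg))
# -> MHGR
# -> M+HGR
# -> M+H+GR

The three outputs correspond to the slide's mehager / m+hagar / m+h+ger readings.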
Modern Hebrew Spelling
– Two main spelling variants:
  – "KTIV XASER" (deficient): spelling that assumes the vowel diacritics; words reduce to bare consonants when the diacritics are removed
  – "KTIV MALEH" (full): words with I/O/U vowels are written with long vowels, which include an extra letter
– KTIV MALEH is predominant, but not strictly adhered to, even in newspapers and official publications, resulting in inconsistent spelling
– Example:
  – niqud (vowel pointing) can be spelled NIQWD, NQWD, or NQD
  – When written as NQD, the word could also be niqed, naqed, or nuqad
Challenges for Hebrew MT
– Paucity of existing language resources for Hebrew:
  – No publicly available broad-coverage morphological analyzer
  – No publicly available bilingual lexicons or dictionaries
  – No POS-tagged corpus or parse tree-bank corpus for Hebrew
  – No large Hebrew/English parallel corpus
– Scenario well suited for the CMU transfer-based MT framework for languages with limited resources
Morphological Analyzer
– We use a publicly available morphological analyzer distributed by the Technion's Knowledge Center, adapted for our system
– Coverage is reasonable (for nouns, verbs, and adjectives)
– Produces all analyses, or a disambiguated analysis, for each word
– Output format includes lexeme (base form), POS, and morphological features
– Output was adapted to our representation needs (POS and feature mappings)
Morphology Example
Input word: B$WRH

0 1 2 3 4
|--------B$WRH--------|
|-----B-----|$WR|--H--|
|--B--|-H--|--$WRH---|
Morphology Example

Y0: ((SPANSTART 0) (SPANEND 4) (LEX B$WRH) (POS N) (GEN F) (NUM S) (STATUS ABSOLUTE))
Y1: ((SPANSTART 0) (SPANEND 2) (LEX B) (POS PREP))
Y2: ((SPANSTART 1) (SPANEND 3) (LEX $WR) (POS N) (GEN M) (NUM S) (STATUS ABSOLUTE))
Y3: ((SPANSTART 3) (SPANEND 4) (LEX $LH) (POS POSS))
Y4: ((SPANSTART 0) (SPANEND 1) (LEX B) (POS PREP))
Y5: ((SPANSTART 1) (SPANEND 2) (LEX H) (POS DET))
Y6: ((SPANSTART 2) (SPANEND 4) (LEX $WRH) (POS N) (GEN F) (NUM S) (STATUS ABSOLUTE))
Y7: ((SPANSTART 0) (SPANEND 4) (LEX B$WRH) (POS LEX))
Translation Lexicon
– Constructed our own Hebrew-to-English lexicon, based primarily on the existing "Dahan" H-to-E and E-to-H dictionary made available to us, augmented by other public sources
– Coverage is not great, but not bad as a start:
  – Dahan H-to-E is about 15K translation pairs
  – Dahan E-to-H is about 7K translation pairs
– Base forms, with POS information on both sides
– Converted Dahan into our representation; added entries for missing closed-class items (pronouns, prepositions, etc.)
– Had to deal with spelling conventions
– Recently augmented with ~50K translation pairs extracted from Wikipedia (mostly proper names and named entities)
Manual Transfer Grammar (human-developed)
– Initially developed by Alon in a couple of days; extended and revised by Nurit over time
– Current grammar has 36 rules:
  – 21 NP rules
  – 1 PP rule
  – 6 verb-complex and VP rules
  – 8 higher-phrase and sentence-level rules
– Captures the most common (mostly local) structural differences between Hebrew and English
Transfer Grammar: Example Rules

{NP1,2}
;; SL: $MLH ADWMH
;; TL: A RED DRESS
NP1::NP1 [NP1 ADJ] -> [ADJ NP1]
(
 (X2::Y1) (X1::Y2)
 ((X1 def) = -)
 ((X1 status) =c absolute)
 ((X1 num) = (X2 num))
 ((X1 gen) = (X2 gen))
 (X0 = X1)
)

{NP1,3}
;; SL: H $MLWT H ADWMWT
;; TL: THE RED DRESSES
NP1::NP1 [NP1 "H" ADJ] -> [ADJ NP1]
(
 (X3::Y1) (X1::Y2)
 ((X1 def) = +)
 ((X1 status) =c absolute)
 ((X1 num) = (X3 num))
 ((X1 gen) = (X3 gen))
 (X0 = X1)
)
Hebrew-to-English MT Prototype
– Initial prototype developed within a two-month intensive effort
– Accomplished:
  – Adapted the available morphological analyzer
  – Constructed a preliminary translation lexicon
  – Translated and aligned the elicitation corpus
  – Learned XFER rules
  – Developed a (small) manual XFER grammar
  – System debugging and development
  – Evaluated performance on unseen test data using automatic evaluation metrics
Example Translation
Input: לאחר דיונים רבים החליטה הממשלה לערוך משאל עם בנושא הנסיגה
Gloss: After debates many decided the government to hold referendum in issue the withdrawal
Output: AFTER MANY DEBATES THE GOVERNMENT DECIDED TO HOLD A REFERENDUM ON THE ISSUE OF THE WITHDRAWAL
Noun Phrases: Construct State

החלטת הנשיא הראשון
HXL@T [HNSIA HRA$WN]
decision.3SF-CS the-president.3SM the-first.3SM
THE DECISION OF THE FIRST PRESIDENT

החלטת הנשיא הראשונה
[HXL@T HNSIA] HRA$WNH
decision.3SF-CS the-president.3SM the-first.3SF
THE FIRST DECISION OF THE PRESIDENT
Noun Phrases: Possessives

הנשיא הכריז שהמשימה הראשונה שלו תהיה למצוא פתרון לסכסוך באזורנו
HNSIA HKRIZ $HM$IMH HRA$WNH $LW THIH LMCWA PTRWN LSKSWK BAZWRNW
the-president announced that-the-task.3SF the-first.3SF of-him will.3SF to-find solution to-the-conflict in-region-POSS.1P

Without transfer grammar:
THE PRESIDENT ANNOUNCED THAT THE TASK THE BEST OF HIM WILL BE TO FIND SOLUTION TO THE CONFLICT IN REGION OUR
With transfer grammar:
THE PRESIDENT ANNOUNCED THAT HIS FIRST TASK WILL BE TO FIND A SOLUTION TO THE CONFLICT IN OUR REGION
Subject-Verb Inversion

אתמול הודיעה הממשלה שתערכנה בחירות בחודש הבא
ATMWL HWDI&H HMM$LH $T&RKNH BXIRWT BXWD$ HBA
yesterday announced.3SF the-government.3SF that-will-be-held.3PF elections.3PF in-the-month the-next

Without transfer grammar:
YESTERDAY ANNOUNCED THE GOVERNMENT THAT WILL RESPECT OF THE FREEDOM OF THE MONTH THE NEXT
With transfer grammar:
YESTERDAY THE GOVERNMENT ANNOUNCED THAT ELECTIONS WILL ASSUME IN THE NEXT MONTH
Subject-Verb Inversion (continued)

לפני כמה שבועות הודיעה הנהלת המלון שהמלון יסגר בסוף השנה
LPNI KMH $BW&WT HWDI&H HNHLT HMLWN $HMLWN ISGR BSWF H$NH
before several weeks announced.3SF management.3SF.CS the-hotel that-the-hotel.3SM will-be-closed.3SM at-end.3SM.CS the-year

Without transfer grammar:
IN FRONT OF A FEW WEEKS ANNOUNCED ADMINISTRATION THE HOTEL THAT THE HOTEL WILL CLOSE AT THE END THIS YEAR
With transfer grammar:
SEVERAL WEEKS AGO THE MANAGEMENT OF THE HOTEL ANNOUNCED THAT THE HOTEL WILL CLOSE AT THE END OF THE YEAR
Evaluation Results
Test set of 62 sentences from the Haaretz newspaper, with 2 reference translations.

System    BLEU     NIST     P        R        METEOR
No Gram   0.0616   3.4109   0.4090   0.4427   0.3298
Learned   0.0774   3.5451   0.4189   0.4488   0.3478
Manual    0.1026   3.7789   0.4334   0.4474   0.3617
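For reference, corpus-level BLEU with multiple references can be computed along these lines. This is a sketch using NLTK (an assumption for illustration; the slide's numbers come from the evaluation tools used at the time), with tiny toy data:

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# hypotheses: one tokenized system output per test sentence
# references: for each sentence, a list of tokenized reference translations (2 here)
hypotheses = [["the", "government", "announced", "elections"]]
references = [[["the", "government", "announced", "elections"],
               ["elections", "were", "announced", "by", "the", "government"]]]

bleu = corpus_bleu(references, hypotheses,
                   smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {bleu:.4f}")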
Current and Future Work
– Issues specific to the Hebrew-to-English system:
  – Coverage: further improvements to the translation lexicon and morphological analyzer
  – Manual grammar development
  – Acquiring/training word-to-word translation probabilities
  – Acquiring/training a Hebrew language model at a post-morphology level that can help with disambiguation
– General issues related to the XFER framework:
  – Discriminative language modeling for MT
  – Effective models for assigning scores to transfer rules
  – Improved grammar learning
  – Merging/integration of manual and acquired grammars
Conclusions
– Test case for the CMU XFER framework for rapid MT prototyping
– The preliminary system was a two-month, three-person effort; we were quite happy with the outcome
– The core concept of XFER + decoding is very powerful and promising for low-resource MT
– We experienced the main bottlenecks of knowledge acquisition for MT: morphology, translation lexicons, grammar...
Mapudungun-to-Spanish Example
Mapudungun: pelafiñ Maria
  pe-la-fi-ñ Maria
  see-neg-3.obj-1.subj.indicative Maria
Spanish: No vi a María
  neg see.1.subj.past.indicative acc Maria
English: I didn't see Maria
The original slides walk through this example with one tree-building step per slide; the steps are consolidated here.

Bottom-up analysis of "pe-la-fi-ñ Maria":
– V → pe
– VSuff → la; sets negation = +
– VSuffG over VSuff (la); pass all features up
– VSuff → fi; sets object person = 3
– VSuffG over VSuffG + VSuff (fi); pass all features up from both children
– VSuff → ñ; sets person = 1, number = sg, mood = ind
– VSuffG over VSuffG + VSuff (ñ); pass all features up from both children
– V over V (pe) + VSuffG; pass all features up from both children; check that (1) negation = + and (2) tense is undefined
– NP over N (Maria); sets person = 3, number = sg, human = +
– S over VP (containing V) + NP; pass features up from V; check that the NP is human = +

Transfer to Spanish, top-down:
– S → S; pass all features to the Spanish side
– VP [VBar NP] → [VBar "a" NP]; pass all features down, then pass the object features down; the accusative marker "a" on the object is introduced because human = +; the rule responsible:

VP::VP [VBar NP] -> [VBar "a" NP]
((X1::Y1) (X2::Y3)
 ((X2 type) = (*NOT* personal))
 ((X2 human) =c +)
 (X0 = X1)
 ((X0 object) = X2)
 (Y0 = X0)
 ((Y0 object) = (X0 object))
 (Y1 = Y0)
 (Y3 = (Y0 object))
 ((Y1 objmarker person) = (Y3 person))
 ((Y1 objmarker number) = (Y3 number))
 ((Y1 objmarker gender) = (Y3 gender)))

– V: pass the person, number, and mood features to the Spanish verb; assign tense = past
– "no" is introduced before the verb because negation = +
– V → ver, generated as vi (person = 1, number = sg, mood = indicative, tense = past)
– N → María; pass features over to the Spanish side
– Result: No vi a María — "I didn't see Maria"
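The key step above, inserting the accusative marker "a" only when the object is human, is just a feature-structure check plus feature passing. A small runnable sketch of that one rule application (an illustrative dict-based representation, not the engine's):

def apply_object_marker_rule(vbar, np):
    """Mirror of VP::VP [VBar NP] -> [VBar 'a' NP]: insert the Spanish accusative
    'a' and copy the object's agreement features onto the verb's objmarker."""
    if np.get("type") == "personal":      # ((X2 type) = (*NOT* personal))
        return None
    if np.get("human") != "+":            # ((X2 human) =c +) -- constraint must hold
        return None
    vbar = dict(vbar)                     # pass object agreement features to the verb
    vbar["objmarker"] = {f: np[f] for f in ("person", "number", "gender") if f in np}
    return [vbar, "a", np]                # target-side constituent sequence

verb = {"lex": "ver", "negation": "+"}
maria = {"lex": "Maria", "person": 1, "number": "sg", "gender": "f", "human": "+"}
maria["person"] = 3                       # Maria is 3rd person, as in the analysis above
print(apply_object_marker_rule(verb, maria))
# -> [{'lex': 'ver', ..., 'objmarker': {...}}, 'a', {...Maria's features...}]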