Statistical Translation Language Model
Maryam Karimzadehgan (mkarimz2@illinois.edu)
University of Illinois at Urbana-Champaign
Outline
Motivation & Background
–Language model (LM) for IR
–Smoothing methods for IR
Statistical Machine Translation – Cross-Lingual
–Motivation
–IBM Model 1
Statistical Translation Language Model – Monolingual
–Synthetic Queries
–Mutual Information-based approach
–Regularization of self-translation probabilities
Smoothing in Statistical Translation Language Model
The Basic LM Approach ([Ponte & Croft 98], [Hiemstra & Kraaij 98], [Miller et al. 99])
Each document is represented by its own language model: a text mining paper gives high probability to words such as "text", "mining", "association", "clustering"; a food nutrition paper gives high probability to "food", "nutrition", "healthy", "diet".
Given the query "data mining algorithms": which model would most likely have generated this query?
Ranking Docs by Query Likelihood
Estimate a language model for each document d_1, ..., d_N, then rank the documents by the query likelihood p(q|d_1), p(q|d_2), ..., p(q|d_N).
Retrieval as LM Estimation
Document ranking based on query likelihood: for q = w_1 w_2 ... w_n,
\log p(q|d) = \sum_{i=1}^{n} \log p(w_i|d)
where p(w_i|d) is the document language model. The retrieval problem reduces to the estimation of p(w_i|d). Smoothing is an important issue, and distinguishes different approaches.
How to Estimate p(w|d)?
Simplest solution: Maximum Likelihood Estimator
–p(w|d) = relative frequency of word w in d
–What if a word doesn't appear in the text? p(w|d) = 0
In general, what probability should we give a word that has not been observed? If we want to assign non-zero probabilities to such words, we'll have to discount the probabilities of observed words. This is what "smoothing" is about.
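Written out as a worked equation (a standard formulation of the estimate described above; the count notation c(w,d) and document length |d| are conventional, not taken from the slide):

```latex
p_{ml}(w \mid d) = \frac{c(w,d)}{|d|} = \frac{c(w,d)}{\sum_{w'} c(w',d)}
```

Any word with c(w,d) = 0 gets probability zero under this estimate, which is exactly the problem smoothing addresses.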
Language Model Smoothing
(Figure: P(w) plotted over words w; the smoothed LM sits below the maximum likelihood estimate on observed words and assigns non-zero probability to unseen words.)
Smoothing Methods for IR (Zhai & Lafferty 01)
Method 1 (Linear interpolation, Jelinek-Mercer):
p(w|d) = (1 - \lambda)\, p_{ml}(w|d) + \lambda\, p(w|C), with smoothing parameter \lambda and ML estimate p_{ml}(w|d)
Method 2 (Dirichlet Prior/Bayesian):
p(w|d) = \frac{c(w,d) + \mu\, p(w|C)}{|d| + \mu}, with smoothing parameter \mu
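As an illustration (not from the original slides), a minimal Python sketch of the two smoothing methods; the function names, default parameter values, and toy collection model are assumptions:

```python
from collections import Counter

def p_ml(w, doc_counts, doc_len):
    """Maximum likelihood estimate p_ml(w|d)."""
    return doc_counts[w] / doc_len if doc_len > 0 else 0.0

def p_jm(w, doc_counts, doc_len, p_coll, lam=0.5):
    """Jelinek-Mercer: p(w|d) = (1 - lambda) * p_ml(w|d) + lambda * p(w|C)."""
    return (1 - lam) * p_ml(w, doc_counts, doc_len) + lam * p_coll.get(w, 0.0)

def p_dirichlet(w, doc_counts, doc_len, p_coll, mu=2000):
    """Dirichlet prior: p(w|d) = (c(w,d) + mu * p(w|C)) / (|d| + mu)."""
    return (doc_counts[w] + mu * p_coll.get(w, 0.0)) / (doc_len + mu)

# Toy usage: a query word that does not occur in the document still gets
# non-zero probability from the collection model p(w|C).
doc = "text mining and text clustering".split()
doc_counts, doc_len = Counter(doc), len(doc)
p_coll = {"text": 0.01, "mining": 0.005, "data": 0.002, "clustering": 0.001, "and": 0.05}
print(p_jm("data", doc_counts, doc_len, p_coll))
print(p_dirichlet("data", doc_counts, doc_len, p_coll))
```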
Outline
Motivation & Background
–Language model (LM) for IR
–Smoothing methods for IR
Statistical Machine Translation – Cross-Lingual
–Motivation
–IBM Model 1
Statistical Translation Language Model – Monolingual
–Synthetic Queries
–Mutual Information-based approach
–Regularization of self-translation probabilities
Smoothing in Statistical Translation Language Model
A Brief History
Machine translation was one of the first applications envisioned for computers.
Warren Weaver (1949): "I have a text in front of me which is written in Russian but I am going to pretend that it is really written in English and that it has been coded in some strange symbols. All I need to do is strip off the code in order to retrieve the information contained in the text."
First demonstrated by IBM in 1954 with a basic word-for-word translation system.
Interest in Machine Translation
Commercial interest:
–U.S. has invested in MT for intelligence purposes
–MT is popular on the web; it is the most used of Google's special features
–EU spends more than $1 billion on translation costs each year
–(Semi-)automated translation could lead to huge savings
Interest in Machine Translation
Academic interest:
–One of the most challenging problems in NLP research
–Requires knowledge from many NLP sub-areas, e.g., lexical semantics, parsing, morphological analysis, statistical modeling, ...
–Being able to establish links between two languages allows for transferring resources from one language to another
Word-Level Alignments
Given a parallel sentence pair, we can link (align) words or phrases that are translations of each other. (The slide shows an example alignment between an English sentence and its French translation.)
Machine Translation – Concepts
We are trying to model P(e|f):
–I give you a French sentence
–You give me back English
How are we going to model this?
–The maximum likelihood estimate of P(e|f) is freq(e,f)/freq(f).
–Way too specific to get any reasonable frequencies! The vast majority of unseen data will have zero counts!
Machine Translation – Alternative Way
We could use Bayes' rule: p(e|f) = p(f|e) p(e) / p(f), which is proportional to p(f|e) p(e).
Why use Bayes' rule instead of estimating p(e|f) directly? It is important that our model for p(e|f) concentrates its probability as much as possible on well-formed English sentences, but it is not important that our model for p(f|e) concentrate its probability on well-formed French sentences. With the decomposition, the language model p(e) takes care of well-formedness, so the translation model p(f|e) does not have to. Given a French sentence f, we can then search for an e that maximizes p(e|f).
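The decomposition behind this, written out (standard noisy-channel formulation; p(f) can be dropped because it does not depend on e):

```latex
\hat{e} = \arg\max_{e} p(e \mid f)
        = \arg\max_{e} \frac{p(f \mid e)\, p(e)}{p(f)}
        = \arg\max_{e} \; \underbrace{p(e)}_{\text{language model}} \; \underbrace{p(f \mid e)}_{\text{translation model}}
```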
Statistical Machine Translation
The noisy channel model: a Language Model p(e), a Translation Model p(f|e), and a Decoder that searches for the best e. (e: English sentence, |e| = l; f: French sentence, |f| = m.)
Assumptions:
–An English word can be aligned with multiple French words, while each French word is aligned with at most one English word
–Independence of the individual word-to-word translations
Estimation of Probabilities – IBM Model 1
–Simplest of the IBM models (there are 5 models)
–Does not consider word order (bag-of-words approach)
–Does not model one-to-many alignments
–Computationally inexpensive
–Useful for parameter estimates that are passed on to more elaborate models
IBM Model 1
Three important components are involved:
–Language model: gives the probability p(e).
–Translation model: estimates the translation probability p(f|e).
–Decoder: searches for the English sentence e that maximizes p(e) p(f|e).
IBM Model 1 – Translation Model
Model 1 sums over all possible alignments a (a_j is the position of the English word that French word f_j is aligned with):
p(f|e) = \frac{\epsilon}{(l+1)^m} \prod_{j=1}^{m} \sum_{i=0}^{l} t(f_j | e_i)
where t(f_j | e_i) is the word translation probability, l = |e|, and m = |f|. The EM algorithm is used to estimate the translation probabilities.
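As an illustration (not from the original slides), a compact Python sketch of EM for Model 1 word translation probabilities t(f|e); the function name, the NULL-word convention, and the toy corpus are assumptions:

```python
from collections import defaultdict

def train_ibm_model1(corpus, iterations=10):
    """corpus: list of (english_tokens, french_tokens) pairs.
    Returns t[(f, e)] ~ p(f|e). A NULL word lets French words stay unaligned."""
    e_vocab = {e for es, _ in corpus for e in es} | {"NULL"}
    f_vocab = {f for _, fs in corpus for f in fs}
    t = {(f, e): 1.0 / len(f_vocab) for e in e_vocab for f in f_vocab}  # uniform init

    for _ in range(iterations):
        count = defaultdict(float)   # expected counts of (f, e) co-alignments
        total = defaultdict(float)   # expected counts of e
        for es, fs in corpus:
            es = ["NULL"] + list(es)
            for f in fs:
                z = sum(t[(f, e)] for e in es)      # normalizer over alignments of f
                for e in es:
                    c = t[(f, e)] / z               # posterior that f aligns to e
                    count[(f, e)] += c
                    total[e] += c
        for (f, e), c in count.items():             # M-step: renormalize
            t[(f, e)] = c / total[e]
    return t   # pairs never seen together keep their initial value (fine for a sketch)

# Toy parallel corpus (illustrative only).
corpus = [(["the", "house"], ["la", "maison"]),
          (["the", "book"], ["le", "livre"]),
          (["a", "book"], ["un", "livre"])]
t = train_ibm_model1(corpus)
print(round(t[("livre", "book")], 3))
```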
Outline
Motivation & Background
–Language model (LM) for IR
–Smoothing methods for IR
Statistical Machine Translation – Cross-Lingual
–Motivation
–IBM Model 1
Statistical Translation Language Model – Monolingual
–Synthetic Queries
–Mutual Information-based approach
–Regularization of self-translation probabilities
Smoothing in Statistical Translation Language Model
The Problem of Vocabulary Gap
Query = "auto wash". The collection contains a document with "auto wash ...", one with "car wash vehicle ...", and one with "auto buy ...". Scoring by exact matching, P("auto") P("wash"), only rewards documents containing the literal query words. How to support inexact matching, so that "car" and "vehicle" also count as matches for "auto"?
Translation Language Models for IR [Berger & Lafferty 99]
Idea: "translate" document words into query words, so that the query "auto wash" can match a document d3 that contains "car wash vehicle":
p("auto"|d3) = p("car"|d3) x p_t("auto"|"car") + p("vehicle"|d3) x p_t("auto"|"vehicle")
where p_t(w|u) is the probability of translating document word u into query word w. How to estimate p_t?
Estimation of Translation Model: p_t(w|u)
Basic translation model:
p(q|d) = \prod_{i=1}^{m} \sum_{u \in V} p_t(q_i|u)\, p_{ml}(u|d)
where p_t(q_i|u) is the translation model and p_{ml}(u|d) is the regular document LM.
When relevance judgments are available, (q,d) pairs serve as data to train the translation model. Without relevance judgments, we can use synthetic data [Berger & Lafferty 99], [Jin et al. 02].
Estimation of Translation Model – Synthetic Queries ([Berger & Lafferty 99])
Estimation of Translation Model – Synthetic Queries Algorithm ([Berger & Lafferty 99])
Synthetic (query, document) pairs serve as training data for the translation model.
Limitations:
1. Can't translate into words not seen in the training queries
2. Computational complexity
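A hedged sketch of the synthetic-query idea: sample short queries from each document's word distribution, pair each query with its source document, and train Model 1 on the pairs (e.g., with the EM sketch shown earlier). The sampling scheme, query length, and counts below are simplifications, not the exact procedure of [Berger & Lafferty 99]:

```python
import random
from collections import Counter

def sample_synthetic_queries(docs, queries_per_doc=5, query_len=3, seed=0):
    """docs: {doc_id: list of tokens}. Returns (doc_tokens, query_tokens) training
    pairs, sampling query words from each document's empirical word distribution."""
    rng = random.Random(seed)
    pairs = []
    for doc_id, tokens in docs.items():
        counts = Counter(tokens)
        words = list(counts)
        weights = [counts[w] for w in words]
        for _ in range(queries_per_doc):
            q = rng.choices(words, weights=weights, k=query_len)
            pairs.append((tokens, q))   # "English" = document, "French" = query
    return pairs

# Usage (illustrative): t[(q_word, doc_word)] ~ p_t(query word | document word)
# docs = {"d1": ["car", "wash", "vehicle"], "d2": ["auto", "buy"]}
# t = train_ibm_model1(sample_synthetic_queries(docs))
```

Note how limitation 1 above shows up here: a word can only receive translation probability mass if it appears in some sampled query.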
A simpler and more efficient method for estimating p_t(w|u) with higher coverage was proposed in:
M. Karimzadehgan and C. Zhai. Estimation of Statistical Translation Models Based on Mutual Information for Ad Hoc Information Retrieval. ACM SIGIR, pages 323-330, 2010.
Estimation of Translation Model Based on Mutual Information
1. Calculate mutual information for each pair of words in the collection (measuring co-occurrences):
I(w; u) = \sum_{X_w \in \{0,1\}} \sum_{X_u \in \{0,1\}} p(X_w, X_u) \log \frac{p(X_w, X_u)}{p(X_w)\, p(X_u)}
where X_w indicates the presence/absence of word w in a document.
2. Normalize the mutual information score to obtain a translation probability:
p_t(w|u) = \frac{I(w; u)}{\sum_{w'} I(w'; u)}
Computation Detail
The probabilities are estimated from presence/absence counts over the N documents in the collection, e.g.:

Doc   X_w   X_u
D1    0     0
D2    1     1
D3    1     0
...   ...   ...
D_N   0     0

Counting the documents with X_w = 1 and X_u = 1 (and the other combinations) gives the joint and marginal probabilities. Exploit the inverted index to speed up computation.
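As an illustration (not from the original slides), a Python sketch of the mutual-information estimate from document frequencies; the pseudo-count smoothing and all names are assumptions:

```python
import math

def mutual_information(df_w, df_u, df_wu, n_docs, eps=0.5):
    """I(X_w; X_u) from document frequencies: df_w docs contain w, df_u contain u,
    df_wu contain both, out of n_docs. eps is an assumed pseudo-count to avoid log(0)."""
    mi = 0.0
    for xw in (0, 1):
        for xu in (0, 1):
            if xw and xu:
                joint = df_wu
            elif xw:
                joint = df_w - df_wu
            elif xu:
                joint = df_u - df_wu
            else:
                joint = n_docs - df_w - df_u + df_wu
            p_joint = (joint + eps) / (n_docs + 2 * eps)
            p_w = ((df_w if xw else n_docs - df_w) + eps) / (n_docs + 2 * eps)
            p_u = ((df_u if xu else n_docs - df_u) + eps) / (n_docs + 2 * eps)
            mi += p_joint * math.log(p_joint / (p_w * p_u))
    return mi

def translation_probs(u, candidates, doc_freq, co_doc_freq, n_docs):
    """p_t(w|u) = I(w;u) / sum_{w'} I(w';u) over a candidate set of words w."""
    mi = {w: mutual_information(doc_freq[w], doc_freq[u],
                                co_doc_freq.get((w, u), 0), n_docs)
          for w in candidates}
    z = sum(mi.values()) or 1.0
    return {w: v / z for w, v in mi.items()}
```

The document frequencies df_w and co-occurrence frequencies df_wu are exactly the counts an inverted index makes cheap to obtain.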
Sample Translation Probabilities (AP90): p(q|w) for w = "everest"

Mutual Information          Synthetic Query
q          p(q|w)           q          p(q|w)
everest    0.079            everest    0.1051
climber    0.042            climber    0.0423
climb      0.0365           mount      0.0339
mountain   0.0359           028        0.0308
mount      0.033            expedit    0.0303
reach      0.0312           peak       0.0155
expedit    0.0314           himalaya   0.01532
summit     0.0253           nepal      0.015
whittak    0.016            sherpa     0.01431
peak       0.0149           hillari    0.01431
Regularizing Self-Translation Probability
Self-translation probability can be under-estimated: an exact match would be counted less than an inexact match.
Solution: interpolation with "1.0 self-translation", using an interpolation parameter \alpha:
p_t(w|u) = \alpha + (1 - \alpha)\, p_{mi}(w|u)   if w = u
p_t(w|u) = (1 - \alpha)\, p_{mi}(w|u)            otherwise
\alpha = 1: basic query likelihood model; \alpha = 0: original MI estimate.
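As a one-line sketch of the regularization above (the parameter name alpha and the dictionary layout are assumptions):

```python
def regularize_self_translation(p_t, w, u, alpha=0.5):
    """p'_t(w|u) = alpha * [w == u] + (1 - alpha) * p_t(w|u).
    alpha = 1 recovers basic query likelihood; alpha = 0 keeps the MI estimate."""
    return alpha * (1.0 if w == u else 0.0) + (1 - alpha) * p_t.get((w, u), 0.0)
```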
Query Likelihood and Translation Language Model
Document ranking based on query likelihood: \log p(q|d) = \sum_{i=1}^{n} \log p(w_i|d), with the document language model now given by the translation language model
p(w|d) = \sum_{u} p_t(w|u)\, p_{ml}(u|d)
Do you see any problem?
Further Smoothing of Translation Model for Computing Query Likelihood
The translation-based estimate \sum_u p_t(w|u) p_{ml}(u|d) replaces p_{ml}(w|d) in the standard smoothing methods:
Linear interpolation (Jelinek-Mercer):
p(w|d) = (1 - \lambda) \sum_{u} p_t(w|u)\, p_{ml}(u|d) + \lambda\, p(w|C)
Bayesian interpolation (Dirichlet prior):
p(w|d) = \frac{|d| \sum_{u} p_t(w|u)\, p_{ml}(u|d) + \mu\, p(w|C)}{|d| + \mu}
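Putting the pieces together, a sketch of query scoring with the Dirichlet-smoothed translation model above; the dictionary layout, parameter values, and toy data are assumptions:

```python
import math
from collections import Counter

def score_translation_lm(query_tokens, doc_tokens, p_t, p_coll, mu=2000):
    """log p(q|d) with p(w|d) = (|d| * sum_u p_t(w|u) p_ml(u|d) + mu p(w|C)) / (|d| + mu)."""
    counts = Counter(doc_tokens)
    dlen = len(doc_tokens)
    score = 0.0
    for w in query_tokens:
        # Translation-based document model: sum_u p_t(w|u) * p_ml(u|d)
        p_trans = sum(p_t.get((w, u), 0.0) * c / dlen for u, c in counts.items())
        p_w_d = (dlen * p_trans + mu * p_coll.get(w, 0.0)) / (dlen + mu)
        score += math.log(p_w_d) if p_w_d > 0 else float("-inf")
    return score

# Toy usage: the query word "auto" is reached through "car" and "vehicle".
p_t = {("auto", "car"): 0.3, ("auto", "vehicle"): 0.2, ("wash", "wash"): 0.8}
p_coll = {"auto": 0.001, "wash": 0.001}
print(score_translation_lm(["auto", "wash"], ["car", "wash", "vehicle"], p_t, p_coll))
```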
Experiment Design
MI vs. synthetic query estimation
–Data sets: Associated Press (AP90) and San Jose Mercury News (SJMN) + TREC topics 51-100
–Relatively small data sets in order to compare our results with synthetic queries in [Berger & Lafferty 99]
MI translation model vs. basic query likelihood
–Larger data sets: TREC7, TREC8 (plus AP90, SJMN)
–TREC topics 351-400 for TREC7 and 401-450 for TREC8
Additional issues
–Regularization of self-translation?
–Influence of smoothing on translation models?
–Translation model + pseudo feedback?
Mutual information outperforms synthetic queries in both MAP and P@10
(Figure: MAP and P@10 curves for Syn. Query vs. MI on AP90 + queries 51-100, Dirichlet prior smoothing.)
Upper Bound Comparison of Mutual Information and Synthetic Queries

Dirichlet Prior Smoothing
Data    MAP: Mutual Info   MAP: Syn. Query   P@10: Mutual Info   P@10: Syn. Query
AP90    0.264*             0.25              0.381               0.357
SJMN    0.197*             0.189             0.252               0.267

JM Smoothing
Data    MAP: Mutual Info   MAP: Syn. Query   P@10: Mutual Info   P@10: Syn. Query
AP90    0.272*             0.251             0.423               0.404
SJMN    0.2*               0.195             0.28                0.266
Mutual information translation model outperforms basic query likelihood

JM Smoothing
Data    MAP: Basic QL   MAP: MI Trans.   P@10: Basic QL   P@10: MI Trans.
AP90    0.248           0.272*           0.398            0.423
SJMN    0.195           0.2*             0.266            0.28
TREC7   0.183           0.187*           0.412            0.404
TREC8   0.248           0.249            0.452            0.456

Dirichlet Prior Smoothing
Data    MAP: Basic QL   MAP: MI Trans.   P@10: Basic QL   P@10: MI Trans.
AP90    0.246           0.264*           0.357            0.381
SJMN    0.188           0.197*           0.252            0.267
TREC7   0.165           0.172            0.354            0.362
TREC8   0.236           0.244*           0.428            0.436
Translation model appears to need less collection smoothing than basic QL
(Figure: retrieval performance vs. smoothing parameter for the translation model and the basic query likelihood model.)
Translation model and pseudo feedback exploit word co-occurrences differently

JM Smoothing (BL = baseline, PFB = pseudo feedback, PFB+TM = query model from pseudo feedback combined with the smoothed translation model)
Data    MAP: BL   MAP: PFB   MAP: PFB+TM   P@10: BL   P@10: PFB   P@10: PFB+TM
AP90    0.246     0.271      0.298         0.357      0.383       0.411
SJMN    0.188     0.229      0.234         0.252      0.316       0.313
TREC7   0.165     0.209      0.222         0.354      0.38        0.384
TREC8   0.236     0.240      0.281         0.428      0.4         0.452
Regularization of self-translation is beneficial
(Figure: retrieval performance vs. the self-translation regularization parameter on the AP data set, Dirichlet prior smoothing.)
Summary
–Statistical translation language models are effective for bridging the vocabulary gap.
–Mutual information is more effective and more efficient than synthetic queries for estimating translation probabilities.
–Regularization of self-translation is beneficial.
–The translation model outperforms basic query likelihood on small and large collections and is more robust.
–The translation model and pseudo feedback exploit word co-occurrences differently and can be combined to further improve performance.
References
[1] A. Berger and J. Lafferty. Information Retrieval as Statistical Translation. ACM SIGIR, pages 222-229, 1999.
[2] P. Brown, S. A. D. Pietra, V. J. D. Pietra, and R. Mercer. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263-311, 1993.
[3] M. Karimzadehgan and C. Zhai. Estimation of Statistical Translation Models Based on Mutual Information for Ad Hoc Information Retrieval. ACM SIGIR, pages 323-330, 2010.