
1 Statistical Language Models for Information Retrieval Tutorial at ACM SIGIR 2005 Aug. 15, 2005 ChengXiang Zhai Department of Computer Science University of Illinois, Urbana-Champaign http://www-faculty.cs.uiuc.edu/~czhai czhai@cs.uiuc.edu

2 © ChengXiang Zhai, 2005 2 Goal of the Tutorial Introduce the emerging area of applying statistical language models (SLMs) to information retrieval (IR). Targeted audience: –IR practitioners who are interested in acquiring advanced modeling techniques –IR researchers who are looking for new research problems in IR models Accessible to anyone with basic knowledge of probability and statistics

3 © ChengXiang Zhai, 2005 3 Scope of the Tutorial What will be covered –Brief background on IR and SLMs –Review of recent applications of unigram SLMs in IR –Details of some specific methods that are either empirically effective or theoretically important –A framework for systematically exploring SLMs in IR –Outstanding research issues in applying SLMs to IR What will not be covered –Traditional IR methods (see any IR textbook, e.g., [Baeza-Yates & Ribeiro-Neto 99, Grossman & Frieder 04]) –Implementation of IR systems (see [Witten et al. 99]) –Discussion of high-order or other complex SLMs (see [Manning & Schutze 99] and [Jelinek 98]) –Application of SLMs in supervised learning, e.g., TDT, text categorization… (see publications in Machine Learning, Speech Recognition, and Natural Language Processing)

4 © ChengXiang Zhai, 2005 4 Tutorial Outline 1.Introduction 2.The Basic Language Modeling Approach 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary

5 © ChengXiang Zhai, 2005 5 Part 1: Introduction 1.Introduction -Information Retrieval (IR) -Statistical Language Models (SLMs) -Applications of SLMs to IR 2.The Basic Language Modeling Approach 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary We are here

6 © ChengXiang Zhai, 2005 6 What is Information Retrieval (IR)? Narrow sense (= ad hoc text retrieval) –Given a collection of text documents (information items) –Given a text query from a user (information need) –Retrieve relevant documents from the collection A broader sense of IR may include –Retrieving non-textual information (e.g., images) –Other tasks (e.g., filtering, categorization or summarization) In this tutorial, IR ≈ ad hoc text retrieval Ad hoc text retrieval is fundamental to IR and has many applications (e.g., search engines, digital libraries, …)

7 © ChengXiang Zhai, 2005 7 IR “is Easy”? Easy queries –Try “ACM SIGIR 2005” (or just “SIGIR 2005”) with Google, and you’ll get the conference home page right on the top –Try “retrieval applications”, and you’ll be happy to see many pages mentioning “retrieval applications” IR CAN be perceived as being easy because –Queries can be specific and match the words in a page exactly –The user can’t easily judge the completeness of results -- you’ll be happy if Google returns 3 relevant pages on the top, even if there are 30 more relevant pages missing Harder queries: –“design philosophy of Microsoft windows XP”, “progress in developing new retrieval models”, “IR applications”, …

8 © ChengXiang Zhai, 2005 8 IR is Hard! Under/over-specified queries –Ambiguous: “buying CDs” (money or music?) –Incomplete: What kind of CDs? –What if “CD” is never mentioned in document? Vague semantics of documents –Ambiguity: e.g., word-sense, structural –Incomplete: Inferences required Even hard for people! –~ 80% agreement in human judgments(?)

9 © ChengXiang Zhai, 2005 9 Formalization of IR Tasks Vocabulary V = {w_1, w_2, …, w_N} of the language Query q = q_1,…,q_m, where q_i ∈ V Document d_i = d_{i1},…,d_{i m_i}, where d_{ij} ∈ V Collection C = {d_1, …, d_k} Set of relevant documents R(q) ⊆ C –Generally unknown and user-dependent –Query is a “hint” on which docs are in R(q) Task = compute R’(q), an approximation of R(q)

10 © ChengXiang Zhai, 2005 10 Computing R’(q): Doc Selection vs. Ranking Doc selection: R’(q) = {d ∈ C | f(d,q) = 1}, where f(d,q) ∈ {0,1} is an indicator function (classifier) Doc ranking: R’(q) = {d ∈ C | f(d,q) > θ}, where f(d,q) is a real-valued ranking function and θ is a cutoff implicitly set by the user [Figure: the true R(q) vs. R’(q) under selection (an unordered set of + and - docs) and under ranking (a scored list: 0.98 d1 +, 0.95 d2 +, 0.83 d3 -, 0.80 d4 +, 0.76 d5 -, 0.56 d6 -, 0.34 d7 -, 0.21 d8 +, 0.21 d9 -, with cutoff θ = 0.77)]

11 © ChengXiang Zhai, 2005 11 Problems with Doc Selection The classifier is unlikely to be accurate –“Over-constrained” query (terms are too specific): no relevant documents found –“Under-constrained” query (terms are too general): over-delivery –It is extremely hard to find the right position between these two extremes Even if the classifier is accurate, not all relevant documents are equally relevant Relevance is a matter of degree!

12 © ChengXiang Zhai, 2005 12 Ranking is often preferred A user can stop browsing anywhere, so the boundary/cutoff is controlled by the user –High-recall users would view more items –High-precision users would view only a few Theoretical justification: Probability Ranking Principle [Robertson 77], Risk Minimization [Zhai 02, Zhai & Lafferty 03] The retrieval problem is now reduced to defining a ranking function f such that, for all q, d_1, d_2: f(q,d_1) > f(q,d_2) iff p(Relevant|q,d_1) > p(Relevant|q,d_2) Function f is an operational definition of relevance Most IR research is centered on finding a good f…

13 © ChengXiang Zhai, 2005 13 Two Well-Known Traditional Retrieval Formulas [Singhal 01] [Two formulas shown as images in the original slide: pivoted length normalization and Okapi/BM25] Key retrieval heuristics: TF (Term Frequency), IDF (Inverse Doc Freq.), and length normalization Other heuristics: stemming, stop word removal, phrases Similar quantities will occur in the LMs…

14 © ChengXiang Zhai, 2005 14 Feedback in IR [Figure: a query goes to the retrieval engine, which searches the document collection and returns scored results (d1 3.5, d2 2.4, …, dk 0.5); feedback then produces an updated query] Relevance feedback: the user judges documents (judgments: d1 +, d2 -, d3 +, …, dk -) Pseudo feedback: assume the top 10 docs are relevant (judgments: d1 +, d2 +, d3 +, …, dk -) Either way, the system learns from examples and produces an updated query

15 © ChengXiang Zhai, 2005 15 Feedback in IR (cont.) An essential component in any IR method Relevance feedback is always desirable, but a user may not be willing to provide explicit judgments Pseudo/automatic feedback is always possible, and often improves performance on average through –Exploiting word co-occurrences –Enriching a query with additional related words –Indirectly addressing issues such as ambiguous words and synonyms

16 © ChengXiang Zhai, 2005 16 Evaluation of Retrieval Performance As a SET of results: precision and recall As a ranked list (e.g., relevant docs retrieved at ranks 1, 2, 4, and 10, out of 8 relevant docs in total): summarize with a precision-recall (PR) curve How do we compare different rankings? A PR curve may show A > C and B > C, but is A > B? Summarize a ranking with a single number: average precision –p_i = precision at the rank where the i-th relevant doc is retrieved –p_i = 0 if the i-th relevant doc is not retrieved –AvgPrec = (p_1 + … + p_k)/k, where k is the total # of relevant docs Avg. Prec. is sensitive to the position of each relevant doc! For the ranked list above: AvgPrec = (1/1 + 2/2 + 3/4 + 4/10 + 0 + 0 + 0 + 0)/8 = 0.394
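A minimal Python sketch (not part of the original slides) that reproduces this average-precision computation:

```python
def average_precision(relevance, total_relevant):
    """relevance[i] is True if the doc at rank i+1 is relevant."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    # Relevant docs that are never retrieved contribute p_i = 0.
    return precision_sum / total_relevant

# Slide example: relevant docs at ranks 1, 2, 4, 10; 8 relevant docs in total.
ranking = [True, True, False, True, False, False, False, False, False, True]
print(average_precision(ranking, total_relevant=8))  # 0.39375, i.e., ~0.394
```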

17 © ChengXiang Zhai, 2005 17 Part 1: Introduction (cont.) 1.Introduction -Information Retrieval (IR) -Statistical Language Models (SLMs) -Application of SLMs to IR 2.The Basic Language Modeling Approach 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary We are here

18 © ChengXiang Zhai, 2005 18 What is a Statistical LM? A probability distribution over word sequences –p(“Today is Wednesday”) ≈ 0.001 –p(“Today Wednesday is”) ≈ 0.0000000000001 –p(“The eigenvalue is positive”) ≈ 0.00001 Context/topic dependent! Can also be regarded as a probabilistic mechanism for “generating” text, thus also called a “generative” model

19 © ChengXiang Zhai, 2005 19 Why is a LM Useful? Provides a principled way to quantify the uncertainties associated with natural language Allows us to answer questions like: –Given that we see “ John ” and “ feels ”, how likely will we see “ happy ” as opposed to “ habit ” as the next word? (speech recognition) –Given that we observe “baseball” three times and “game” once in a news article, how likely is it about “sports”? (text categorization, information retrieval) –Given that a user is interested in sports news, how likely would the user use “baseball” in a query? (information retrieval)

20 © ChengXiang Zhai, 2005 20 Source-Channel Framework (Model of Communication System [Shannon 48]) A source emits X with probability p(X); the transmitter (encoder) sends it through a noisy channel p(Y|X); the receiver (decoder) recovers X’ from Y by computing p(X|Y) ∝ p(Y|X)p(X) (Bayes rule) and passes it to the destination When X is text, p(X) is a language model Many examples: –Speech recognition: X = word sequence, Y = speech signal –Machine translation: X = English sentence, Y = Chinese sentence –OCR error correction: X = correct word, Y = erroneous word –Information retrieval: X = document, Y = query –Summarization: X = summary, Y = document

21 © ChengXiang Zhai, 2005 21 The Simplest Language Model (Unigram Model) Generate a piece of text by generating each word independently Thus, p(w_1 w_2 … w_n) = p(w_1)p(w_2)…p(w_n) Parameters: {p(w_i)} with p(w_1)+…+p(w_N) = 1 (N is the vocabulary size) Essentially a multinomial distribution over words A piece of text can be regarded as a sample drawn according to this word distribution

22 © ChengXiang Zhai, 2005 22 Text Generation with Unigram LM A (unigram) language model θ specifies p(w|θ), e.g.: –Topic 1 (text mining): text 0.2, mining 0.1, association 0.01, clustering 0.02, …, food 0.00001, … –Topic 2 (health): food 0.25, nutrition 0.1, healthy 0.05, diet 0.02, … Sampling from θ generates a document D (a text mining paper, a food nutrition paper, …) Given θ, p(D|θ) varies according to D

23 © ChengXiang Zhai, 2005 23 Estimation of Unigram LM How do we estimate p(w|θ) from a document? Document word counts (total # words = 100): text 10, mining 5, association 3, database 3, algorithm 2, …, query 1, efficient 1, … Estimation by relative frequency: p(text|θ) = 10/100, p(mining|θ) = 5/100, p(database|θ) = 3/100, …, p(query|θ) = 1/100, … How good is the estimated model? It gives our document sample the highest prob, but it doesn’t generalize well… More about this later…
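A hedged Python sketch of this maximum likelihood estimator (the counts mirror the slide's example; only some of the 100 words are listed, so the dictionary is partial):

```python
from collections import Counter

# Partial word counts from the example document (100 words in total).
counts = Counter({"text": 10, "mining": 5, "association": 3, "database": 3,
                  "algorithm": 2, "query": 1, "efficient": 1})
total_words = 100

def p_ml(word):
    """Maximum likelihood estimate: relative frequency in the document."""
    return counts[word] / total_words  # Counter returns 0 for unseen words

print(p_ml("text"))        # 0.1
print(p_ml("clustering"))  # 0.0 -- unseen words get zero probability
```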

24 © ChengXiang Zhai, 2005 24 More Sophisticated LMs N-gram language models –In general, p(w_1 w_2 … w_n) = p(w_1)p(w_2|w_1)…p(w_n|w_1…w_{n-1}) –n-gram: conditioned only on the past n-1 words –E.g., bigram: p(w_1 … w_n) = p(w_1)p(w_2|w_1)p(w_3|w_2)…p(w_n|w_{n-1}) Remote-dependence language models (e.g., Maximum Entropy model) Structured language models (e.g., probabilistic context-free grammar) Will barely be covered in this tutorial. If interested, read [Jelinek 98, Manning & Schutze 99, Rosenfeld 00]

25 © ChengXiang Zhai, 2005 25 Why Just Unigram Models? Difficulty in moving toward more complex models –They involve more parameters, so need more data to estimate (A doc is an extremely small sample) –They increase the computational complexity significantly, both in time and space Capturing word order or structure may not add so much value for “topical inference” But, using more sophisticated models can still be expected to improve performance...

26 © ChengXiang Zhai, 2005 26 Evaluation of SLMs Direct evaluation criterion: How well does the model fit the data to be modeled? –Example measures: Data likelihood, perplexity, cross entropy, Kullback-Leibler divergence (mostly equivalent) Indirect evaluation criterion: Does the model help improve the performance of the task? –Specific measure is task dependent –For retrieval, we look at whether a model helps improve retrieval accuracy –We hope more “reasonable” LMs would achieve better retrieval performance

27 © ChengXiang Zhai, 2005 27 Part 1: Introduction (cont.) 1.Introduction -Information Retrieval (IR) -Statistical Language Models (SLMs) -Application of SLMs to IR 2.The Basic Language Modeling Approach 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary We are here

28 © ChengXiang Zhai, 2005 28 Representative LMs for IR (1998–2005; grouped from the timeline figure) Basic LMs –Query likelihood scoring: Ponte & Croft 98; Hiemstra & Kraaij 99; Miller et al. 99 –Parameter tuning: Ng 00 –Smoothing examined: Zhai & Lafferty 01a –Term-specific smoothing: Hiemstra 02 –Two-stage LMs: Zhai & Lafferty 02 –Bayesian query likelihood: Zaragoza et al. 03 Beyond unigram –Song & Croft 99; concept likelihood: Srikanth & Srihari 03; dependency LM: Gao et al. 04 Translation model –Berger & Lafferty 99; title LM: Jin et al. 02 Feedback LMs –Xu & Croft 99; relevance LM: Lavrenko & Croft 01; model-based FB: Zhai & Lafferty 01b; rel. query FB: Nallapati et al. 03 Priors –URL prior: Kraaij et al. 02; time prior: Li & Croft 03 Cluster LM –Liu & Croft 04; Kurland & Lee 04 Parsimonious LM –Hiemstra et al. 04 Framework & theoretical justification –Lafferty & Zhai 01a; Lafferty & Zhai 01b; Zhai & Lafferty 03; Lavrenko 04 Special IR tasks –Xu et al. 01; Lavrenko et al. 02; Zhang et al. 02; Cronen-Townsend et al. 02; Si et al. 02; Ogilvie & Callan 03; Zhai et al. 03; Shen et al. 05 Dissertations –Ponte 98; Berger 01; Hiemstra 01; Zhai 02; Kraaij 04; Lavrenko 04

29 © ChengXiang Zhai, 2005 29 Ponte & Croft’s Pioneering Work [Ponte & Croft 98] Contribution 1: –A new “query likelihood” scoring method: p(Q|D) –[Maron and Kuhns 60] had the idea of query likelihood, but didn’t work out how to estimate p(Q|D) Contribution 2: –Connecting LMs with text representation and weighting in IR –[Wong & Yao 89] had the idea of representing text with a multinomial distribution (relative frequency), but didn’t study the estimation problem Good performance is reported using the simple query likelihood method

30 © ChengXiang Zhai, 2005 30 Early Work (1998-1999) Slightly after SIGIR 98, in TREC 7, two groups explored similar ideas independently: BBN [Miller et al. 99] & Univ. of Twente [Hiemstra & Kraaij 99] In TREC-8, Ng from MIT motivated the same query likelihood method in a different way [Ng 00] All following the simple query likelihood method; methods differ in the way the model is estimated and the event model for the query All show promising empirical results Main problems: –Feedback is explored heuristically –Lack of understanding of why the method works…

31 © ChengXiang Zhai, 2005 31 Later Work (1999-) Attempt to understand why LMs work [Zhai & Lafferty 01a, Lafferty & Zhai 01a, Ponte 01, Greiff & Morgan 03, Sparck Jones et al. 03, Lavrenko 04] Further extend/improve the basic LMs [Song & Croft 99, Berger & Lafferty 99, Jin et al. 02, Nallapati & Allan 02, Hiemstra 02, Zaragoza et al. 03, Srikanth & Srihari 03, Nallapati et al. 03, Gao et al. 04, Li & Croft 03, Kurland & Lee 04, Hiemstra et al. 04] Explore alternative ways of using LMs for ad hoc IR [Xu & Croft 99, Lavrenko & Croft 01, Lafferty & Zhai 01a, Zhai & Lafferty 01b, Lavrenko 04] Explore the use of SLMs for special retrieval tasks [Xu & Croft 99, Xu et al. 01, Lavrenko et al. 02, Cronen-Townsend et al. 02, Zhang et al. 02, Ogilvie & Callan 03, Zhai et al. 03, Shen et al. 05]

32 © ChengXiang Zhai, 2005 32 Part 2: The Basic LM Approach 1.Introduction 2.The Basic Language Modeling Approach -Query Likelihood Document Ranking -Smoothing of Language Models -Why does it work? -Variants of the basic LM 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary We are here

33 © ChengXiang Zhai, 2005 33 The Basic LM Approach [Ponte & Croft 98] Estimate a language model for each document, e.g.: –Text mining paper: p(text|θ) = ?, p(mining|θ) = ?, p(association|θ) = ?, p(clustering|θ) = ?, …, p(food|θ) = ?, … –Food nutrition paper: p(food|θ) = ?, p(nutrition|θ) = ?, p(healthy|θ) = ?, p(diet|θ) = ?, … Query = “data mining algorithms” Which model would most likely have generated this query?

34 © ChengXiang Zhai, 2005 34 Ranking Docs by Query Likelihood For documents d_1, d_2, …, d_N, estimate a doc LM θ_{d_i} for each, then rank by the query likelihoods p(q|θ_{d_1}), p(q|θ_{d_2}), …, p(q|θ_{d_N})

35 © ChengXiang Zhai, 2005 35 Modeling Queries: Different Assumptions Multi-Bernoulli –Event: word presence/absence –Q = (x_1, …, x_{|V|}), x_i = 1 for presence of word w_i; x_i = 0 for absence –Parameters: {p(w_i=1|D), p(w_i=0|D)} with p(w_i=1|D) + p(w_i=0|D) = 1 Multinomial (unigram language model) –Event: word selection/sampling –Q = (n_1, …, n_{|V|}), n_i: frequency of word w_i, n = n_1 + … + n_{|V|} –Conditioned on fixed n, Q = q_1,…,q_n and p(Q|D) = p(q_1|D)…p(q_n|D) –Parameters: {p(w_i|D)} with p(w_1|D) + … + p(w_{|V|}|D) = 1 (both written out below) [Ponte & Croft 98] uses multi-Bernoulli; all other work uses multinomial Multinomial appears to work better [Song & Croft 99, McCallum & Nigam 98, Lavrenko 04]
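Written out (a reconstruction consistent with the definitions above), the two query likelihoods are:

```latex
% Multi-Bernoulli: one presence/absence event per vocabulary word
p(Q \mid D) \;=\; \prod_{i=1}^{|V|} p(w_i = x_i \mid D)
            \;=\; \prod_{i:\, x_i = 1} p(w_i = 1 \mid D) \prod_{i:\, x_i = 0} p(w_i = 0 \mid D)

% Multinomial: one word-selection event per query position (n fixed)
p(Q \mid D) \;=\; \prod_{j=1}^{n} p(q_j \mid D)
```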

36 © ChengXiang Zhai, 2005 36 Retrieval as LM Estimation Document ranking is based on the query likelihood log p(q|d) = Σ_i log p(q_i|d), computed from the document language model p(w|d) Retrieval problem → estimation of p(w_i|d) Smoothing is an important issue, and distinguishes different approaches

37 © ChengXiang Zhai, 2005 37 How to Estimate p(w|D) Simplest solution: Maximum Likelihood Estimator –P(w|D) = relative frequency of word w in D –What if a word doesn’t appear in the text? P(w|D)=0 In general, what probability should we give a word that has not been observed? If we want to assign non-zero probabilities to such words, we’ll have to discount the probabilities of observed words This is what “smoothing” is about …

38 © ChengXiang Zhai, 2005 38 Part 2: The Basic LM Approach (cont.) 1.Introduction 2.The Basic Language Modeling Approach -Query Likelihood Document Ranking -Smoothing of Language Models -Why does it work? -Variants of the basic LM 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary We are here

39 © ChengXiang Zhai, 2005 39 Language Model Smoothing (Illustration) [Figure: p(w) plotted over words w — the smoothed LM lies below the maximum likelihood estimate on seen words and assigns small non-zero probability to unseen words]

40 © ChengXiang Zhai, 2005 40 How to Smooth? All smoothing methods try to –discount the probability of words seen in a document –re-allocate the extra counts so that unseen words will have a non-zero count Method 1, additive (“add one”, Laplace) smoothing [Chen & Goodman 98]: add a constant δ to the counts of each word; e.g., with δ = 1, p(w|d) = (c(w,d) + 1) / (|d| + |V|), where c(w,d) is the count of w in d, |d| is the length of d (total counts), and |V| is the vocabulary size

41 © ChengXiang Zhai, 2005 41 Improve Additive Smoothing Should all unseen words get equal probabilities? We can use a reference model to discriminate unseen words: p(w|d) = p_s(w|d) if w is seen in d (a discounted ML estimate), and p(w|d) = α_d p(w|REF) otherwise, where p(w|REF) is a reference language model and the normalizer α_d controls the probability mass reserved for unseen words

42 © ChengXiang Zhai, 2005 42 Other Smoothing Methods Method 2, absolute discounting [Ney et al. 94]: subtract a constant δ from the count of each seen word, p(w|d) = (max(c(w,d) − δ, 0) + δ |d|_u p(w|REF)) / |d|, where |d|_u is the number of unique words in d Method 3, linear interpolation [Jelinek & Mercer 80]: “shrink” uniformly toward p(w|REF), p(w|d) = (1 − λ) p_ml(w|d) + λ p(w|REF), where p_ml is the ML estimate and λ ∈ [0,1] is a parameter

43 © ChengXiang Zhai, 2005 43 Other Smoothing Methods (cont.) Method 4, Dirichlet prior/Bayesian [MacKay & Peto 95, Zhai & Lafferty 01a, Zhai & Lafferty 02]: assume μ pseudo counts distributed as p(w|REF), giving p(w|d) = (c(w,d) + μ p(w|REF)) / (|d| + μ), where μ is a parameter Method 5, Good-Turing [Good 53]: assume the total count of unseen events to be n_1 (the number of singletons), and adjust the counts of seen events in the same way; heuristics are needed for counts whose adjusted frequency is undefined
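For concreteness, a minimal Python sketch (not from the tutorial) of Methods 1, 3, and 4 above; the parameter values are illustrative, not prescribed by the slides:

```python
def additive(w, doc_counts, doc_len, vocab_size, delta=1.0):
    """Method 1, additive/Laplace: add delta to every word's count."""
    return (doc_counts.get(w, 0) + delta) / (doc_len + delta * vocab_size)

def jelinek_mercer(w, doc_counts, doc_len, p_ref, lam=0.1):
    """Method 3, linear interpolation: shrink the ML estimate toward p(w|REF)."""
    p_ml = doc_counts.get(w, 0) / doc_len
    return (1 - lam) * p_ml + lam * p_ref(w)

def dirichlet(w, doc_counts, doc_len, p_ref, mu=2000.0):
    """Method 4, Dirichlet prior: mu pseudo-counts distributed as p(w|REF)."""
    return (doc_counts.get(w, 0) + mu * p_ref(w)) / (doc_len + mu)
```

Each function returns a proper distribution when summed over the vocabulary (assuming p_ref does); μ ≈ 2000 is a commonly used Dirichlet default in IR experiments.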

44 So, which method is the best? It depends on the data and the task! Cross validation is generally used to choose the best method and/or set the smoothing parameters… For retrieval, Dirichlet prior performs well… Backoff smoothing [Katz 87] doesn’t work well due to a lack of 2nd-stage smoothing… Note that many other smoothing methods exist; see [Chen & Goodman 98] and other publications in speech recognition…

45 © ChengXiang Zhai, 2005 45 Comparison of Three Methods [Zhai & Lafferty 01a] Comparison is performed on a variety of test collections

46 © ChengXiang Zhai, 2005 46 Part 2: The Basic LM Approach (cont.) 1.Introduction 2.The Basic Language Modeling Approach -Query Likelihood Document Ranking -Smoothing of Language Models -Why does it work? -Variants of the basic LM 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary We are here

47 © ChengXiang Zhai, 2005 47 Understanding Smoothing Use the general smoothing scheme: p(w|d) = p_s(w|d) (discounted ML estimate) for words seen in d, and α_d p(w|C) (reference language model) for unseen words. The key rewriting step turns the query likelihood into: log p(q|d) = Σ_{w ∈ q, w ∈ d} c(w,q) log [p_s(w|d) / (α_d p(w|C))] + m log α_d + Σ_{w ∈ q} c(w,q) log p(w|C), where m is the query length. Similar rewritings are very common when using LMs for IR…

48 © ChengXiang Zhai, 2005 48 Smoothing & TF-IDF Weighting [Zhai & Lafferty 01a] Plugging the general smoothing scheme into the query likelihood retrieval formula (the rewriting on the previous slide), we obtain: –a sum over words in both the query and the doc, where p_s(w|d) acts as TF weighting and 1/p(w|C) as IDF-like weighting –a doc length normalization term m log α_d (a long doc is expected to have a smaller α_d) –a last term that can be ignored for ranking Smoothing with p(w|C) ≈ TF-IDF + length normalization: smoothing implements traditional retrieval heuristics LMs with simple smoothing can be computed as efficiently as traditional retrieval models

49 © ChengXiang Zhai, 2005 49 The Dual-Role of Smoothing [Zhai & Lafferty 02] [Figure: retrieval performance as a function of the smoothing parameter for verbose vs. keyword queries, on long and short queries — verbose queries are much more sensitive to smoothing] Why does query type affect smoothing sensitivity?

50 © ChengXiang Zhai, 2005 50 Another Reason for Smoothing Query = “the algorithms for data mining” Unsmoothed doc models (p(the), p(algorithms), p(for), p(data), p(mining)): –d1: 0.04, 0.001, 0.02, 0.002, 0.003 –d2: 0.02, 0.001, 0.01, 0.003, 0.004 For the content words, p(algorithms|d1) = p(algorithms|d2), p(data|d1) < p(data|d2), p(mining|d1) < p(mining|d2); intuitively d2 should have a higher score, but p(q|d1) > p(q|d2) because d1 gives more probability to the common words “the” and “for” So we should make p(“the”) and p(“for”) less different across docs, and smoothing helps achieve this goal Reference model: p(the|REF) = 0.2, p(algorithms|REF) = 0.00001, p(for|REF) = 0.2, p(data|REF) = 0.00001, p(mining|REF) = 0.00001 Smoothing each p(w|d) as 0.1·p(w|d) + 0.9·p(w|REF): –Smoothed d1: 0.184, 0.000109, 0.182, 0.000209, 0.000309 –Smoothed d2: 0.182, 0.000109, 0.181, 0.000309, 0.000409 Now p(q|d2) > p(q|d1), matching the intuition
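A small Python check of this example (the 0.1/0.9 interpolation weights are taken from the slide's arithmetic):

```python
query = ["the", "algorithms", "for", "data", "mining"]
d1  = {"the": 0.04, "algorithms": 0.001,   "for": 0.02, "data": 0.002,   "mining": 0.003}
d2  = {"the": 0.02, "algorithms": 0.001,   "for": 0.01, "data": 0.003,   "mining": 0.004}
ref = {"the": 0.2,  "algorithms": 0.00001, "for": 0.2,  "data": 0.00001, "mining": 0.00001}

def likelihood(doc, lam=0.0):
    """Query likelihood with interpolation weight lam on the reference model."""
    p = 1.0
    for w in query:
        p *= (1 - lam) * doc[w] + lam * ref[w]
    return p

print(likelihood(d1) > likelihood(d2))            # True: d1 wins unsmoothed
print(likelihood(d1, 0.9) < likelihood(d2, 0.9))  # True: d2 wins after smoothing
```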

51 © ChengXiang Zhai, 2005 51 Two-stage Smoothing [Zhai & Lafferty 02] p(w|d) = (1 − λ) · (c(w,d) + μ p(w|C)) / (|d| + μ) + λ p(w|U) Stage 1 (Dirichlet prior, Bayesian, parameter μ): explains unseen words using the collection LM p(w|C) Stage 2 (two-component mixture, parameter λ): explains noise in the query using a user background model p(w|U), which can be approximated by p(w|C)

52 © ChengXiang Zhai, 2005 52 Estimating μ using leave-one-out [Zhai & Lafferty 02] Leave each word w_i out of the document in turn and compute p(w_i | d − w_i) under the stage-1 (Dirichlet) model; the leave-one-out log-likelihood Σ_i log p(w_i | d − w_i) is then maximized with respect to μ (a maximum likelihood estimator, computed with Newton’s method)

53 © ChengXiang Zhai, 2005 53 Why would “leave-one-out” work? Two 20-word documents: by author 1, “abc abc ab c d d abc cd d d abd ab ab ab ab cd d e cd e” (small vocabulary, words repeat); by author 2, “abc abc ab c d d abe cb e f acf fb ef aff abef cdc db ge f s” (large vocabulary, many singletons) Suppose we keep sampling and get 10 more words: which author is likely to “write” more new words? Author 2 Now suppose we leave “e” out: for author 2 it becomes unseen, so μ must be big (i.e., more smoothing); for author 1 it is still seen, so μ doesn’t have to be big The amount of smoothing is closely related to the underlying vocabulary size

54 © ChengXiang Zhai, 2005 54 Estimating λ using a Mixture Model [Zhai & Lafferty 02] The query Q = q_1 … q_m is modeled as generated from a mixture over the documents: each query word comes from (1 − λ) p(w|d_i) + λ p(w|U), where p(w|d_i) is the stage-1 (Dirichlet-smoothed) document model and p(w|U) is the user background model λ is set to its maximum likelihood estimate, computed with the Expectation-Maximization (EM) algorithm

55 © ChengXiang Zhai, 2005 55 Automatic 2-stage results ≈ optimal 1-stage results [Zhai & Lafferty 02] Average precision (3 DBs + 4 query types, 150 topics); * indicates significant difference Completely automatic tuning of parameters IS POSSIBLE!

56 © ChengXiang Zhai, 2005 56 The Notion of Relevance Three broad ways to formalize relevance: –Similarity between Rep(q) and Rep(d), with different representations & similarity measures: vector space model (Salton et al., 75); prob. distr. model (Wong & Yao, 89) –Probability of relevance P(r=1|q,d), r ∈ {0,1}: regression model (Fox 83), and generative models — doc generation gives the classical prob. model (Robertson & Sparck Jones, 76), query generation gives the basic LM approach (Ponte & Croft, 98), the first application of LMs to IR (later, LMs are used along the other lines too) –Probabilistic inference P(d → q) or P(q → d), with different inference systems: prob. concept space model (Wong & Yao, 95); inference network model (Turtle & Croft, 91)

57 © ChengXiang Zhai, 2005 57 Justification of Query Likelihood [Lafferty & Zhai 01a] The general probabilistic retrieval model: –Define P(Q,D|R) –Compute P(R|Q,D) using Bayes’ rule –Rank documents by the odds O(R|Q,D) Special cases: –Document “generation”: P(Q,D|R) = P(D|Q,R)P(Q|R), where P(Q|R) can be ignored for ranking D –Query “generation”: P(Q,D|R) = P(Q|D,R)P(D|R) Doc generation leads to the classic Robertson-Sparck Jones model; query generation leads to the query likelihood language modeling approach

58 © ChengXiang Zhai, 2005 58 Query Generation [Lafferty & Zhai 01a] Rank by P(Q|D,R=1) P(D|R=1), i.e., the query likelihood p(q|θ_d) times a document prior; assuming a uniform prior, ranking reduces to the query likelihood alone P(Q|D) = P(Q|D,R=1): the probability that a user who likes D would pose query Q — a relevance-based interpretation of the so-called “document language model” Computing P(Q|D,R=1) generally involves two steps: (1) estimate a language model based on D; (2) compute the query likelihood according to the estimated model

59 © ChengXiang Zhai, 2005 59 Part 2: The Basic LM Approach (cont.) 1.Introduction 2.The Basic Language Modeling Approach -Query Likelihood Document Ranking -Smoothing of Language Models -Why does it work? -Variants of the basic LM 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary We are here

60 © ChengXiang Zhai, 2005 60 Variants of the Basic LM Approach Different smoothing strategies –Hidden Markov Models (essentially linear interpolation) [Miller et al. 99] –Smoothing with an IDF-like reference model [Hiemstra & Kraaij 99] –Performance tends to be similar to the basic LM approach –Many other possibilities for smoothing [Chen & Goodman 98] Different priors –Link information as prior leads to significant improvement of Web entry page retrieval performance [Kraaij et al. 02] –Time as prior [Li & Croft 03] Passage retrieval [Liu & Croft 02]

61 © ChengXiang Zhai, 2005 61 Part 3: More Advanced LMs 1.Introduction 2.The Basic Language Modeling Approach 3.More Advanced Language Models -Improving the basic LM approach -Feedback and alternative ways of using LMs 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary We are here

62 © ChengXiang Zhai, 2005 62 Improving the Basic LM Approach Capturing limited dependencies –Bigrams/trigrams [Song & Croft 99]; grammatical dependency [Nallapati & Allan 02, Srikanth & Srihari 03, Gao et al. 04] –Generally insignificant improvement as compared with other extensions such as feedback Full Bayesian query likelihood [Zaragoza et al. 03] –Performance similar to the basic LM approach Translation model for p(Q|D,R) [Berger & Lafferty 99, Jin et al. 02] –Addresses polysemy and synonyms; improves over the basic LM methods, but computationally expensive Cluster-based smoothing/scoring [Liu & Croft 04, Kurland & Lee 04] –Improves over the basic LM, but computationally expensive Parsimonious LMs [Hiemstra et al. 04] –Using a mixture model to “factor out” non-discriminative words

63 © ChengXiang Zhai, 2005 63 Translation Models Directly model the “translation” relationship between words in the query and words in a doc: a basic translation model combines a word translation model with the regular doc LM (written out below) When relevance judgments are available, (q,d) pairs serve as data to train the translation model Without relevance judgments: –Synthetic data can be used [Berger & Lafferty 99] –Document-title pairs can be used as an approximation [Jin et al. 02]
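The basic translation model referred to above, as usually written (a reconstruction consistent with [Berger & Lafferty 99]; p_t is the translation model, p(w|D) the regular doc LM):

```latex
p(Q \mid D) \;=\; \prod_{i=1}^{m} \sum_{w \in V} p_t(q_i \mid w)\, p(w \mid D)
```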

64 © ChengXiang Zhai, 2005 64 Cluster-based Smoothing/Scoring Cluster-based smoothing: smooth a document LM with a cluster of similar documents, interpolating the “self” LM with the cluster LM [Liu & Croft 04]; improves over the basic LM method, but insignificantly Cluster-based query likelihood: similar to the translation model, but “translate” the whole document to the query through a set of clusters — the likelihood of Q given cluster C, weighted by how likely doc D belongs to C [Kurland & Lee 04]; only effective when interpolated with the basic LM scores

65 © ChengXiang Zhai, 2005 65 Part 3: More Advanced LMs (cont.) 1.Introduction 2.The Basic Language Modeling Approach 3.More Advanced Language Models -Improving the basic LM approach -Feedback and Alternative ways of using LMs 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary We are here

66 © ChengXiang Zhai, 2005 66 Feedback and Doc/Query Generation Generative models can be trained from (query, doc, judgment) triples such as (q1,d1,1), (q1,d4,0), (q3,d1,1), …: –Classic prob. model: rel. doc model P(D|Q,R=1) and non-rel. doc model P(D|Q,R=0) –Query likelihood (“language model”): “rel. query” model P(Q|D,R=1) Initial retrieval: query as rel doc vs. doc as rel query; P(Q|D,R=1) is more accurate Feedback: –P(D|Q,R=1) can be improved for the current query and future docs (doc-based feedback) –P(Q|D,R=1) can also be improved, but for the current doc and future queries (query-based feedback)

67 © ChengXiang Zhai, 2005 67 Difficulty in Feedback with Query Likelihood Traditional query expansion [Ponte 98, Miller et al. 99, Ng 00] –Improvement is reported, but there is a conceptual inconsistency –What’s an expanded query: a piece of text or a set of terms? Avoid expansion –Query term reweighting [Hiemstra 01, Hiemstra 02] –Translation models [Berger & Lafferty 99, Jin et al. 02] –Only achieving limited feedback Doing “relevant query” expansion instead [Nallapati et al. 03] The difficulty is due to the lack of a query/relevance model The difficulty can be overcome with alternative ways of using LMs for retrieval –Relevance model estimation [Lavrenko & Croft 01] –Query model estimation [Lafferty & Zhai 01b; Zhai & Lafferty 01b]

68 © ChengXiang Zhai, 2005 68 Two Alternative Ways of Using LMs Classic probabilistic model: doc generation as opposed to query generation –Natural for relevance feedback –Challenge: estimate p(D|Q,R=1) without relevance feedback [Lavrenko & Croft 01] (p(D|Q,R=0) can be approximated by p(D)) Probabilistic distance model: similar to the vector-space model, but with LMs as opposed to TF-IDF weight vectors –A popular distance function: Kullback-Leibler (KL) divergence, covering query likelihood as a special case (see below) –Retrieval is now to estimate query & doc models, and feedback is treated as query LM updating [Lafferty & Zhai 01b; Zhai & Lafferty 01b] Both methods provide a more principled way to do full feedback and are empirically effective
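The KL-divergence ranking function mentioned above, written out (a standard form; the final step drops the query-model entropy, which is constant across documents):

```latex
score(Q,D) \;=\; -D(\theta_Q \,\|\, \theta_D)
           \;=\; -\sum_{w \in V} p(w \mid \theta_Q) \log \frac{p(w \mid \theta_Q)}{p(w \mid \theta_D)}
\;\;\stackrel{\mathrm{rank}}{=}\;\; \sum_{w \in V} p(w \mid \theta_Q) \log p(w \mid \theta_D)
```

With the ML query model p(w|θ_Q) = c(w,Q)/m, this reduces to query likelihood, which is how it covers the basic approach as a special case.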

69 © ChengXiang Zhai, 2005 69 Relevance Model Estimation [Lavrenko & Croft 01] Question: how to estimate P(D|Q,R) (or p(w|Q,R)) without relevant documents? Key idea: –Treat the query as observations about p(w|Q,R) –Approximate the model space with all the document models Two methods for decomposing p(w,Q): –Independent (i.i.d.) sampling — Bayesian model averaging over document models; this is the original formula in [Lavrenko & Croft 01] (written out below) –Conditional sampling: p(w,Q) = p(w)p(Q|w)
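The independent-sampling decomposition written out (a reconstruction consistent with [Lavrenko & Croft 01]):

```latex
p(w \mid Q, R{=}1) \;\approx\; \frac{p(w, q_1, \ldots, q_m)}{p(q_1, \ldots, q_m)},
\qquad
p(w, q_1, \ldots, q_m) \;=\; \sum_{D \in C} p(D)\, p(w \mid \theta_D) \prod_{i=1}^{m} p(q_i \mid \theta_D)
```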

70 © ChengXiang Zhai, 2005 70 Kernel-based Allocation [Lavrenko 04] A general generative model for text: an infinite mixture model built from a kernel-based density function over training points Choices of the kernel function: –Delta kernel: the model reduces to the average probability of w_1 … w_n over all training points –Dirichlet kernel: allows a training point to “spread” its influence

71 © ChengXiang Zhai, 2005 71 Query Model Estimation [Lafferty & Zhai 01b, Zhai & Lafferty 01b] Question: How to estimate a better query model than the ML estimate based on the original query? “Massive feedback” (Markov Chain) [Lafferty & Zhai 01b]: –Improve a query model through co-occurrence pattern learned from a document-term Markov chain that outputs the query Model-based feedback (model interpolation) [ Zhai & Lafferty 01b]: –Estimate a feedback topic model based on feedback documents –Update the query model by interpolating the original query model with the learned feedback model

72 © ChengXiang Zhai, 2005 72 Feedback as Model Interpolation [Zhai & Lafferty 01b] The query Q gives a query model θ_Q; the feedback docs F = {d_1, d_2, …, d_n} give a feedback topic model θ_F (estimated with a generative model or by divergence minimization — see the next two slides); the updated query model interpolates the two (see below), and documents D are scored against it α = 0: no feedback; α = 1: full feedback
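The interpolation step written out (α is the feedback coefficient from the slide):

```latex
p(w \mid \theta_Q') \;=\; (1-\alpha)\, p(w \mid \theta_Q) \;+\; \alpha\, p(w \mid \theta_F),
\qquad \alpha \in [0,1]
```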

73 © ChengXiang Zhai, 2005 73 θ_F Estimation Method I: Generative Mixture Model Each word in the feedback docs F = {D_1, …, D_n} is generated from a two-component mixture: with probability λ from the background model p(w|C) (background words), and with probability 1 − λ from the topic model p(w|θ) (topic words); θ is fit by maximum likelihood The learned topic model is called a “parsimonious language model” in [Hiemstra et al. 04]
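A hedged Python sketch of the EM algorithm for this fixed-λ mixture (the function name and defaults are illustrative; p_bg must cover every word in the feedback counts):

```python
def estimate_feedback_model(fb_counts, p_bg, lam=0.5, iters=50):
    """Fit p(w|theta_F) so (1-lam)*theta_F + lam*p_bg explains fb_counts."""
    words = list(fb_counts)
    total = sum(fb_counts.values())
    theta = {w: fb_counts[w] / total for w in words}  # initialize with MLE
    for _ in range(iters):
        # E-step: posterior prob. that an occurrence of w came from the topic
        t = {w: (1 - lam) * theta[w] /
                ((1 - lam) * theta[w] + lam * p_bg[w]) for w in words}
        # M-step: re-estimate theta_F from the topic-attributed counts
        norm = sum(fb_counts[w] * t[w] for w in words)
        theta = {w: fb_counts[w] * t[w] / norm for w in words}
    return theta
```

Because common words are well explained by p(w|C), EM pushes their topic probabilities down, leaving θ_F concentrated on discriminative topic words.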

74 © ChengXiang Zhai, 2005 74 θ_F Estimation Method II: Empirical Divergence Minimization Choose θ to minimize an empirical divergence over F = {D_1, …, D_n}: θ should stay close to the feedback documents D_1, …, D_n while staying far from the collection background model C

75 © ChengXiang Zhai, 2005 75 Example of Feedback Query Model TREC topic 412: “airport security” [Figure: top words of the feedback model estimated with the mixture model approach from the top 10 docs of a Web database, shown for λ = 0.9 and λ = 0.7]

76 © ChengXiang Zhai, 2005 76 Model-based Feedback Improves over the Simple LM [Zhai & Lafferty 01b] Translation models, relevance models, and feedback-based query models have all been shown to improve performance significantly over the simple LMs (parameter tuning is necessary in many cases…)

77 © ChengXiang Zhai, 2005 77 Part 4: LMs for Special Retrieval Tasks 1.Introduction 2.The Basic Language Modeling Approach 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks -Cross-lingual IR -Distributed IR -Structured document retrieval -Personalized/context-sensitive search -Modeling redundancy -Predicting query difficulty -Subtopic retrieval 5.A General Framework for Applying SLMs to IR 6.Summary We are here

78 © ChengXiang Zhai, 2005 78 Cross-lingual IR Use a query in language A (e.g., English) to retrieve documents in language B (e.g., Chinese) Method 1 — cross-lingual p(Q|D,R) [Xu et al. 01]: generate the English query from a Chinese document through an English-Chinese word translation model, estimated with parallel corpora Method 2 — cross-lingual p(D|Q,R) [Lavrenko et al. 02]: a cross-lingual relevance model, estimated with a bilingual lexicon or parallel corpora

79 © ChengXiang Zhai, 2005 79 Distributed IR Retrieve documents from multiple collections The task is generally decomposed into two subtasks: collection selection and result fusion Using LMs for collection selection [Xu & Croft 99, Si et al. 02] –Treat collection selection as “retrieving collections” as opposed to “retrieving documents” –Estimate each collection model by maximum likelihood [Si et al. 02] or clustering [Xu & Croft 99] Using LMs for result fusion [Si et al. 02] –Assume query likelihood scoring for all collections, but on each collection a distinct reference LM is used for smoothing –Adjust the biased score p(Q|D,Collection) to recover the fair score p(Q|D)

80 © ChengXiang Zhai, 2005 80 Structured Document Retrieval [Ogilvie & Callan 03] A document D consists of parts D_1 (title), D_2 (abstract), D_3, …, D_k (body parts, …) Each query word is generated by first selecting a part D_j — the “part selection” probability serves as the weight for D_j and can be trained using EM — and then generating the word using D_j (see the formula below) –Want to combine different parts of a document with appropriate weights –Anchor text can be treated as a “part” of a document –Applicable to XML retrieval
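The resulting query likelihood, written out (a reconstruction consistent with the slide's description; λ_j is the part-selection probability):

```latex
p(q \mid D) \;=\; \prod_{i=1}^{m} \sum_{j=1}^{k} \lambda_j\, p(q_i \mid D_j),
\qquad \sum_{j=1}^{k} \lambda_j = 1
```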

81 © ChengXiang Zhai, 2005 81 Personalized/Context-Sensitive Search [Shen et al. 05] KL-divergence retrieval model: –Task1: estimating a query model –Task2: estimating a doc model User information and search context information can be used to estimate a better query model Refinement of this model leads to specific retrieval formulas Simple models often end up interpolating many unigram language models based on different sources of evidence [Shen et al. 05]

82 © ChengXiang Zhai, 2005 82 Modeling Redundancy Given two documents D_1 and D_2, decide how redundant D_1 (or D_2) is w.r.t. D_2 (or D_1) Redundancy of D_1 ≈ “to what extent can D_1 be explained by a model estimated based on D_2” Use a unigram mixture model [Zhai 02]: D_1 is generated from a mixture of the LM for D_2 and a reference LM; the mixing weight, fit by maximum likelihood with the EM algorithm, is the measure of redundancy [Zhang et al. 02] explored a more sophisticated (3-component) redundancy model

83 © ChengXiang Zhai, 2005 83 Predicting Query Difficulty [Cronen-Townsend et al. 02] Observations: –Discriminative queries tend to be easier –Comparison of the query model and the collection model can indicate how discriminative a query is Method: –Define “query clarity” as the KL-divergence between an estimated query model or relevance model and the collection LM (see the formula below) –An enriched query LM can be estimated by exploiting pseudo feedback (e.g., a relevance model) Correlation between the clarity scores and retrieval performance is found
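Written out (a standard form of the clarity score, consistent with the definition above):

```latex
\mathrm{clarity}(Q) \;=\; \sum_{w \in V} p(w \mid \theta_Q)\, \log \frac{p(w \mid \theta_Q)}{p(w \mid C)}
```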

84 © ChengXiang Zhai, 2005 84 Subtopic Retrieval [Zhai 02, Zhai et al 03] Subtopic retrieval –Assume existence of subtopics for a query, and aim at retrieving as many distinct subtopics as possible –E.g., Retrieve “different applications of robotics” –Need to go beyond independent relevance Two methods explored in [Zhai 02] –Maximal Marginal Relevance: Maximizing subtopic coverage indirectly through redundancy elimination LMs can be used to model redundancy –Maximal Diverse Relevance: Maximizing subtopic coverage directly through subtopic modeling Define a retrieval function based on subtopic representation of query and documents Mixture LMs can be used to model subtopics (essentially clustering)

85 © ChengXiang Zhai, 2005 85 Unigram Mixture Models Each subtopic is modeled with one unigram LM A document is treated as observations from a mixture model involving all subtopic LMs Two different sampling strategies to generate a document –Strategy 1: Document Clustering Choose a subtopic model and use the chosen model to generate all the words in a document A document is always generated from one single LM –Strategy 2: Aspect Models [Hofmann 99; Blei et al 02] Choose a (potentially) different subtopic model when generating each word in a document A document may be generated from multiple LMs For subtopic retrieval, we assume a document may have multiple subtopics, so strategy 2 is more appropriate Many other applications…

86 © ChengXiang Zhai, 2005 86 Aspect Models Subtopics 1…k are modeled with unigram LMs p(w|θ_1), …, p(w|θ_k); each word of document D = d_1 … d_n is generated by first choosing a subtopic according to a document-specific aspect distribution, then drawing the word from that subtopic’s LM Prob. LSI [Hofmann 99]: each D has its own set of aspect weights — a flexible aspect distribution, but it needs regularization Latent Dirichlet Allocation [Blei et al. 02, Minka & Lafferty 03]: the aspect weights are drawn from a common Dirichlet distribution, so the aspect distribution is now regularized

87 © ChengXiang Zhai, 2005 87 Part 5: A General Framework for Applying SLMs to IR 1.Introduction 2.The Basic Language Modeling Approach 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR -Risk minimization framework -Special cases 6.Summary We are here

88 © ChengXiang Zhai, 2005 88 Risk Minimization: Motivation Long-standing IR Challenges –Improve IR theory Develop theoretically sound and empirically effective models Go beyond the limited traditional notion of relevance (independent, topical relevance) –Improve IR practice Optimize retrieval parameters automatically SLMs are very promising tools … –How can we systematically exploit SLMs in IR? –Can SLMs offer anything hard/impossible to achieve in traditional IR?

89 © ChengXiang Zhai, 2005 89 Idea 1: Retrieval as Decision-Making (A more general notion of relevance) Given a query, –Which documents should be selected? (D) –How should these docs be presented to the user? (π) Choose a pair (D, π): e.g., a ranked list, an unordered subset, or a clustering

90 © ChengXiang Zhai, 2005 90 Idea 2: Systematic Language Modeling Three modeling components feed the retrieval decision: –QUERY MODELING: estimate a query language model from the query –DOC MODELING: estimate document language models from the documents –USER MODELING: encode the user’s preferences in a loss function

91 © ChengXiang Zhai, 2005 91 Generative Model of Document & Query [Lafferty & Zhai 01b] The user U (partially observed) generates the query q (observed); the source S (partially observed) generates the document d (observed); relevance R is inferred from the underlying query and document models

92 © ChengXiang Zhai, 2005 92 Applying Bayesian Decision Theory [Lafferty & Zhai 01b, Zhai 02, Zhai & Lafferty 03] Observed: query q, doc set C; hidden: user U, source S, and the models θ_q, θ_1, …, θ_N; each candidate choice (D_i, π_i) incurs a loss L RISK MINIMIZATION: choose the (D, π) that minimizes the Bayes risk, i.e., the expected loss over the posterior of the hidden variables

93 © ChengXiang Zhai, 2005 93 Special Cases Set-based models (choose D): Boolean model Ranking models (choose π): –Independent loss: relevance-based loss gives the probabilistic relevance model, Generative Relevance Theory [Lavrenko 04], and the two-stage LM; distance-based loss gives the vector-space model and the KL-divergence model –Dependent loss: MMR loss and MDR loss give subtopic retrieval models

94 © ChengXiang Zhai, 2005 94 Optimal Ranking for Independent Loss Decision space = {rankings}; assume the user browses sequentially and the loss is independent across documents Then the risk decomposes — independent risk = independent scoring — and ranking documents by their individual risk is optimal: the “risk ranking principle” [Zhai 02]

95 © ChengXiang Zhai, 2005 95 Automatic Parameter Tuning Retrieval parameters are needed to –model different user preferences –customize a retrieval model according to specific queries and documents Retrieval parameters in traditional models are –EXTERNAL to the model, and hard to interpret –Mostly introduced heuristically to implement our “intuition” –As a result, there are no principles to quantify them; so far, parameters have been set through empirical experimentation — lots of it — and optimality for new queries is not guaranteed Language models make it possible to estimate parameters…

96 © ChengXiang Zhai, 2005 96 Parameter Setting in Risk Minimization Query model parameters are estimated from the query; doc model parameters are estimated from the documents; user model parameters are set in the loss function

97 © ChengXiang Zhai, 2005 97 Generative Relevance Hypothesis [Lavrenko 04] Generative Relevance Hypothesis: –For a given information need, queries expressing that need and documents relevant to that need can be viewed as independent random samples from the same underlying generative model A special case of risk minimization when document models and query models are in the same space Implications for retrieval models: “the same underlying generative model” makes it possible to – Match queries and documents even if they are in different languages or media – Estimate/improve a relevant document model based on example queries or vice versa

98 © ChengXiang Zhai, 2005 98 Risk Minimization: Summary Risk minimization is a general probabilistic retrieval framework –Retrieval as a decision problem (=risk min.) –Separate/flexible language models for queries and docs Advantages –A unified framework for existing models –Automatic parameter tuning due to LMs –Allows for modeling complex retrieval tasks Lots of potential for exploring LMs… For more information, see [Zhai 02]

99 © ChengXiang Zhai, 2005 99 Part 6: Summary 1.Introduction 2.The Basic Language Modeling Approach 3.More Advanced Language Models 4.Language Models for Special Retrieval Tasks 5.A General Framework for Applying SLMs to IR 6.Summary –SLMs vs. traditional methods: Pros & Cons –What we have achieved so far –Challenges and future directions We are here

100 © ChengXiang Zhai, 2005 100 SLMs vs. Traditional IR Pros: –Statistical foundations (better parameter setting) –More principled way of handling term weighting –More powerful for modeling subtopics, passages,.. –Leverage LMs developed in related areas (e.g., speech recognition, machine translation) –Empirically as effective as well-tuned traditional models with potential for automatic parameter tuning Cons: –Limitation due to generative models in general (lack of discrimination) –Less robust in some cases (e.g., when queries are semi-structured) –Computationally complex –Empirically, performance appears to be inferior to well-tuned full- fledged traditional methods (at least, no evidence for beating them)

101 © ChengXiang Zhai, 2005 101 What We Have Achieved So Far Framework and justification for using LMs for IR Several effective models are developed –Basic LM with Dirichlet prior smoothing is a reasonable baseline –Basic LM with informative priors often improves performance –Translation model handles polysemy & synonyms –Relevance model incorporates LMs into the classic probabilistic IR model –KL-divergence model ties feedback with query model estimation –Mixture models can model redundancy and subtopics Completely automatic tuning of parameters is possible LMs can be applied to virtually any retrieval task with great potential for modeling complex IR problems

102 © ChengXiang Zhai, 2005 102 Challenges and Future Directions Challenge 1: Establish a robust and effective LM that –Optimizes retrieval parameters automatically –Performs as well as or better than well-tuned traditional retrieval methods with pseudo feedback –Is as efficient as traditional retrieval methods Challenge 2: Demonstrate consistent and substantial improvement by going beyond unigram LMs –Model limited dependency between terms –Derive more principled weighting methods for phrases Can LMs completely and convincingly beat traditional methods? Can we do much better by going beyond unigram LMs?

103 © ChengXiang Zhai, 2005 103 Challenges and Future Directions (cont.) Challenge 3: Develop LMs that can model document structures and subtopics –Recognize query-specific boundaries of relevant passages –Passage-based/subtopic-based feedback –Combine different parts of a document Challenge 4: Develop LMs to support personalized search –Infer and track a user’s interests with LMs –Model search context with LMs –Incorporate user’s preferences and search context in retrieval –Customize/organize search results according to user’s interests How can we exploit user information and search context to improve search? How can we break the document unit in a principled way?

104 © ChengXiang Zhai, 2005 104 Challenges and Future Directions (cont.) Challenge 5: Generalize LMs to handle relational data –Develop LMs for semi-structured data (e.g., XML) –Develop LMs to handle structured queries –Develop LMs for keyword search in relational databases Challenge 6: Develop LMs for retrieval with complex information need, e.g., –Subtopic retrieval –Readability constrained retrieval –Entity retrieval How can we exploit LMs to develop models for complex retrieval tasks? What role can LMs play when combining text with relational data?

105 © ChengXiang Zhai, 2005 105 References [Baeza-Yates & Ribeiro-Neto 99] R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval, Addison-Wesley, 1999. [Berger & Lafferty 99] A. Berger and J. Lafferty. Information retrieval as statistical translation. In Proceedings of ACM SIGIR 1999, pages 222-229. [Blei et al. 02] D. Blei, A. Ng, and M. Jordan. Latent dirichlet allocation. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press. [Carbonell & Goldstein 98] J. Carbonell and J. Goldstein, The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of SIGIR'98, pages 335-336. [Chen & Goodman 98] S. F. Chen and J. T. Goodman. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University. [Cronen-Townsend et al. 02] S. Cronen-Townsend, Y. Zhou, and W. B. Croft. Predicting query performance. In Proceedings of ACM SIGIR 2002. [Croft & Lafferty 03] W. B. Croft and J. Lafferty (eds), Language Modeling and Information Retrieval. Kluwer Academic Publishers, 2003. [Fox 83] E. Fox. Extending the Boolean and Vector Space Models of Information Retrieval with P-Norm Queries and Multiple Concept Types. PhD thesis, Cornell University, 1983. [Fuhr 01] N. Fuhr. Language models and uncertain inference in information retrieval. In Proceedings of the Language Modeling and IR workshop, pages 6-11. [Gao et al. 04] J. Gao, J. Nie, G. Wu, and G. Cao, Dependence language model for information retrieval. In Proceedings of ACM SIGIR 2004. [Good 53] I. J. Good. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3 and 4):237-264, 1953. [Greiff & Morgan 03] W. Greiff and W. Morgan, Contributions of Language Modeling to the Theory and Practice of IR. In W. B. Croft and J. Lafferty (eds), Language Modeling for Information Retrieval, Kluwer Academic Publishers, 2003. [Hiemstra & Kraaij 99] D. Hiemstra and W. Kraaij, Twenty-One at TREC-7: Ad-hoc and Cross-language track. In Proceedings of the Seventh Text REtrieval Conference (TREC-7), 1999. [Hiemstra 01] D. Hiemstra. Using Language Models for Information Retrieval. PhD dissertation, University of Twente, Enschede, The Netherlands, January 2001. [Hiemstra 02] D. Hiemstra. Term-specific smoothing for the language modeling approach to information retrieval: the importance of a query term. In Proceedings of ACM SIGIR 2002, pages 35-41.

106 © ChengXiang Zhai, 2005 106 References (cont.) [Hiemstra et al. 04] D. Hiemstra, S. Robertson, and H. Zaragoza. Parsimonious language models for information retrieval. In Proceedings of ACM SIGIR 2004. [Hofmann 99] T. Hofmann. Probabilistic latent semantic indexing. In Proceedings of ACM SIGIR 1999, pages 50-57. [Jelinek 98] F. Jelinek, Statistical Methods for Speech Recognition, Cambridge: MIT Press, 1998. [Jelinek & Mercer 80] F. Jelinek and R. L. Mercer. Interpolated estimation of Markov source parameters from sparse data. In E. S. Gelsema and L. N. Kanal, editors, Pattern Recognition in Practice, Amsterdam, North-Holland, 1980. [Jeon et al. 03] J. Jeon, V. Lavrenko, and R. Manmatha, Automatic Image Annotation and Retrieval using Cross-media Relevance Models. In Proceedings of ACM SIGIR 2003. [Jin et al. 02] R. Jin, A. Hauptmann, and C. Zhai, Title language models for information retrieval. In Proceedings of ACM SIGIR 2002. [Kalt 96] T. Kalt. A new probabilistic model of text classification and retrieval. University of Massachusetts Technical Report TR98-18, 1996. [Katz 87] S. M. Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-35:400-401. [Kraaij 04] W. Kraaij. Variations on Language Modeling for Information Retrieval. PhD thesis, University of Twente, 2004. [Kurland & Lee 04] O. Kurland and L. Lee. Corpus structure, language models, and ad hoc information retrieval. In Proceedings of ACM SIGIR 2004. [Lafferty & Zhai 01a] J. Lafferty and C. Zhai, Probabilistic IR models based on query and document generation. In Proceedings of the Language Modeling and IR workshop, pages 1-5. [Lafferty & Zhai 01b] J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In Proceedings of ACM SIGIR 2001, pages 111-119. [Lavrenko & Croft 01] V. Lavrenko and W. B. Croft. Relevance-based language models. In Proceedings of ACM SIGIR 2001, pages 120-127. [Lavrenko et al. 02] V. Lavrenko, M. Choquette, and W. B. Croft. Cross-lingual relevance models. In Proceedings of ACM SIGIR 2002, pages 175-182. [Lavrenko 04] V. Lavrenko, A Generative Theory of Relevance. PhD thesis, University of Massachusetts, 2004.

107 © ChengXiang Zhai, 2005 107 References (cont.) [Li & Croft 03] X. Li and W. B. Croft, Time-Based Language Models. In Proceedings of CIKM'03, 2003. [Liu & Croft 02] X. Liu and W. B. Croft. Passage retrieval based on language models. In Proceedings of CIKM 2002, pages 15-19. [Liu & Croft 04] X. Liu and W. B. Croft. Cluster-based retrieval using language models. In Proceedings of ACM SIGIR 2004. [MacKay & Peto 95] D. MacKay and L. Peto (1995). A hierarchical Dirichlet language model. Natural Language Engineering, 1(3):289-307. [Maron & Kuhns 60] M. E. Maron and J. L. Kuhns, On relevance, probabilistic indexing and information retrieval. Journal of the ACM, 7:216-244. [McCallum & Nigam 98] A. McCallum and K. Nigam (1998). A comparison of event models for Naïve Bayes text classification. In AAAI-1998 Learning for Text Categorization Workshop, pages 41-48. [Miller et al. 99] D. R. H. Miller, T. Leek, and R. M. Schwartz. A hidden Markov model information retrieval system. In Proceedings of ACM SIGIR 1999, pages 214-221. [Minka & Lafferty 03] T. Minka and J. Lafferty, Expectation-propagation for the generative aspect model. In Proceedings of UAI 2002, pages 352-359. [Nallapati & Allan 02] R. Nallapati and J. Allan, Capturing term dependencies using a language model based on sentence trees. In Proceedings of CIKM 2002, pages 383-390. [Nallapati et al. 03] R. Nallapati, W. B. Croft, and J. Allan, Relevant query feedback in statistical language modeling. In Proceedings of CIKM 2003. [Ney et al. 94] H. Ney, U. Essen, and R. Kneser. On structuring probabilistic dependencies in stochastic language modeling. Computer Speech and Language, 8(1):1-28. [Ng 00] K. Ng. A maximum likelihood ratio information retrieval model. In E. Voorhees and D. Harman, editors, Proceedings of the Eighth Text REtrieval Conference (TREC-8), pages 483-492, 2000. [Ogilvie & Callan 03] P. Ogilvie and J. Callan. Combining Document Representations for Known Item Search. In Proceedings of ACM SIGIR 2003, pages 143-150.

108 © ChengXiang Zhai, 2005 108 References (cont.) [Ponte & Croft 98] J. M. Ponte and W. B. Croft. A language modeling approach to information retrieval. In Proceedings of ACM SIGIR 1998, pages 275-281. [Ponte 98] J. M. Ponte. A Language Modeling Approach to Information Retrieval. PhD dissertation, University of Massachusetts, Amherst, MA, September 1998. [Ponte 01] J. Ponte. Is information retrieval anything more than smoothing? In Proceedings of the Workshop on Language Modeling and Information Retrieval, pages 37-41, 2001. [Robertson & Sparck Jones 76] S. Robertson and K. Sparck Jones (1976). Relevance Weighting of Search Terms. JASIS, 27, 129-146. [Robertson 77] S. E. Robertson. The probability ranking principle in IR. Journal of Documentation, 33:294-304, 1977. [Rosenfeld 00] R. Rosenfeld, Two decades of statistical language modeling: where do we go from here? Proceedings of the IEEE, volume 88. [Salton et al. 75] G. Salton, A. Wong, and C. S. Yang, A vector space model for automatic indexing. Communications of the ACM, 18(11):613-620. [Shannon 48] C. E. Shannon (1948). A mathematical theory of communication. Bell System Tech. J. 27, 379-423, 623-656. [Shen et al. 05] X. Shen, B. Tan, and C. Zhai. Context-sensitive information retrieval with implicit feedback. In Proceedings of ACM SIGIR 2005. [Si et al. 02] L. Si, R. Jin, J. Callan, and P. Ogilvie. A Language Model Framework for Resource Selection and Results Merging. In Proceedings of CIKM 2002. [Singhal 01] A. Singhal, Modern Information Retrieval: A Brief Overview. IEEE Data Engineering Bulletin 24(4), pages 35-43, 2001. [Song & Croft 99] F. Song and W. B. Croft. A general language model for information retrieval. In Proceedings of CIKM 1999.

109 © ChengXiang Zhai, 2005 109 References (cont.) [Sparck Jones et al. 00] K. Sparck Jones, S. Walker, and S. E. Robertson, A probabilistic model of information retrieval: development and comparative experiments - part 1 and part 2. Information Processing and Management, 36(6):779-808 and 809-840. [Sparck Jones et al. 03] K. Sparck Jones, S. Robertson, D. Hiemstra, and H. Zaragoza, Language Modeling and Relevance. In W. B. Croft and J. Lafferty (eds), Language Modeling for Information Retrieval, Kluwer Academic Publishers, 2003. [Srikanth & Srihari 03] M. Srikanth and R. K. Srihari. Exploiting Syntactic Structure of Queries in a Language Modeling Approach to IR. In Proceedings of CIKM'03. [Turtle & Croft 91] H. Turtle and W. B. Croft, Evaluation of an inference network-based retrieval model. ACM Transactions on Information Systems, 9(3):187-222. [van Rijsbergen 86] C. J. van Rijsbergen. A non-classical logic for information retrieval. The Computer Journal, 29(6). [Witten et al. 99] I. H. Witten, A. Moffat, and T. C. Bell. Managing Gigabytes - Compressing and Indexing Documents and Images. Academic Press, San Diego, 2nd edition, 1999. [Wong & Yao 89] S. K. M. Wong and Y. Y. Yao, A probability distribution model for information retrieval. Information Processing and Management, 25(1):39-53. [Wong & Yao 95] S. K. M. Wong and Y. Y. Yao. On modeling information retrieval with probabilistic inference. ACM Transactions on Information Systems, 13(1):69-99. [Kraaij et al. 02] W. Kraaij, T. Westerveld, and D. Hiemstra: The Importance of Prior Probabilities for Entry Page Search. In Proceedings of ACM SIGIR 2002, pages 27-34. [Xu & Croft 99] J. Xu and W. B. Croft. Cluster-based language models for distributed retrieval. In Proceedings of ACM SIGIR 1999, pages 15-19. [Xu et al. 01] J. Xu, R. Weischedel, and C. Nguyen. Evaluating a probabilistic model for cross-lingual information retrieval. In Proceedings of ACM SIGIR 2001, pages 105-110. [Zaragoza et al. 03] H. Zaragoza, D. Hiemstra, and M. Tipping, Bayesian extension to the language model for ad hoc information retrieval. In Proceedings of ACM SIGIR 2003, pages 4-9.

110 © ChengXiang Zhai, 2005 110 References (cont.) [Zhai & Lafferty 01a] C. Zhai and J. Lafferty. A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of ACM SIGIR 2001, pages 334-342. [Zhai & Lafferty 01b] C. Zhai and J. Lafferty. Model-based feedback in the language modeling approach to information retrieval. In Proceedings of CIKM 2001. [Zhai & Lafferty 02] C. Zhai and J. Lafferty. Two-stage language models for information retrieval. In Proceedings of ACM SIGIR 2002, pages 49-56. [Zhai & Lafferty 03] C. Zhai and J. Lafferty, A risk minimization framework for information retrieval. In Proceedings of the ACM SIGIR 2003 Workshop on Mathematical/Formal Methods in IR. [Zhai et al. 03] C. Zhai, W. Cohen, and J. Lafferty, Beyond Independent Relevance: Methods and Evaluation Metrics for Subtopic Retrieval. In Proceedings of ACM SIGIR 2003. [Zhai 02] C. Zhai, Language Modeling and Risk Minimization in Text Retrieval. PhD thesis, Carnegie Mellon University, 2002. [Zhang et al. 02] Y. Zhang, J. Callan, and T. Minka, Novelty and redundancy detection in adaptive filtering. In Proceedings of ACM SIGIR 2002, pages 81-88.

