1 Text Processing Rong Jin
2 Indexing Process
3 Processing Text
- Converting documents to index terms. Why?
  - Matching the exact string of characters typed by the user is too restrictive, i.e., it doesn't work very well in terms of effectiveness
  - Not all words are of equal value in a search
  - Sometimes not clear where words begin and end
  - Not even clear what a word is in some languages (e.g., Chinese, Korean)
4 Tokenizing
- Forming words from a sequence of characters
- Surprisingly complex in English; can be harder in other languages
- Early IR systems:
  - any sequence of alphanumeric characters of length 3 or more, terminated by a space or other special character
  - upper case changed to lower case
5 Tokenizing
- Example: "Bigcorp's 2007 bi-annual report showed profits rose 10%." becomes
  1. bigcorp s 2007 bi annual report showed profits rose 10
  2. bigcorp 2007 annual report showed profits rose
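The early-IR rule described above (alphanumeric runs of length 3 or more, lowercased) can be sketched in a few lines. This is a minimal illustration, not a production tokenizer; the function name is mine:

```python
import re

def early_ir_tokenize(text):
    """Early-IR tokenizing rule: keep runs of alphanumeric characters
    of length 3 or more, converted to lower case."""
    return [t.lower() for t in re.findall(r"[A-Za-z0-9]+", text) if len(t) >= 3]

print(early_ir_tokenize("Bigcorp's 2007 bi-annual report showed profits rose 10%."))
# → ['bigcorp', '2007', 'annual', 'report', 'showed', 'profits', 'rose']
```

Note how "Bigcorp's" loses its possessive, "bi-annual" loses "bi", and "10%" disappears entirely, reproducing the second output on the slide.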
6 Tokenizing
- Example: "Bigcorp's 2007 bi-annual report showed profits rose 10%." becomes "bigcorp 2007 annual report showed profits rose"
- Too simple for search applications or even large-scale experiments. Why?
  - Too much information lost
  - Small decisions in tokenizing can have a major impact on the effectiveness of some queries
7 Tokenizing Problems
- Small words can be important in some queries, usually in combinations
  - xp, ma, pm, ben e king, el paso, master p, gm, j lo, world war II
- Both hyphenated and non-hyphenated forms of many words are common
  - Sometimes the hyphen is not needed: e-bay, wal-mart, active-x, cd-rom, t-shirts
  - At other times, hyphens should be considered either part of the word or a word separator: winston-salem, mazda rx-7, e-cards, pre-diabetes, spanish-speaking
8 Tokenizing Problems
- Special characters are an important part of tags, URLs, and code in documents
- Capitalized words can have different meanings from lower-case words
  - Bush, Apple
- Apostrophes can be part of a word, part of a possessive, or just a mistake
  - rosie o'donnell, can't, don't, 80's, 1890's, men's straw hats, master's degree, england's ten largest cities, shriner's
9 Tokenizing Problems
- Numbers can be important, including decimals
  - nokia 3250, top 10 courses, united 93, quicktime 6.5 pro, 92.3 the beat, 288358
- Periods can occur in numbers, abbreviations, URLs, ends of sentences, and other situations
  - I.B.M., Ph.D., cs.umass.edu, F.E.A.R.
- Note: tokenizing steps for queries must be identical to steps for documents
10 Tokenizing: SMART System
An example of a configuration file for SMART. The tokenizing-related settings include:

    token_sect            index.token_sect.token_sect
    interior_char         "'.@!_"
    interior_hyphenation  false
    endline_hyphenation   true
    parse.single_case     false
    stemmer               index.stem.triestem
    stopword              index.stop.stop_dict
    stop_file             common_words

The full specification also sets defaults for indexing procedures, database files, document preparsing, retrieval, feedback, and interactive display, for example:

    ## GENERIC PREPARSER
    num_pp_sections 12
    pp_section.0.string " "
    pp_section.0.oneline_flag true
    pp_section.0.newdoc_flag true
    pp_section.0.action discard
    ...
    pp_section.8.string " "
    pp_section.8.section_name w
    pp_section.8.action copy

    ## PARSE INPUT
    index.num_sections 1
    index.section.0.name w
    index.section.0.method index.parse_sect.full
    index.section.0.word.ctype 0
    index.section.0.proper.ctype 0
    index.section.0.name.ctype 1
11 Tokenizing Process
- First step is to use the parser to identify appropriate parts of the document to tokenize
- Defer complex decisions to other components
  - word is any sequence of alphanumeric characters, terminated by a space or special character, with everything converted to lower case
  - everything indexed
  - example: Wal-Mart → Wal Mart, but search finds documents with Wal and Mart adjacent
  - incorporate some rules to reduce dependence on query transformation components
12 Stopping
- Function words (determiners, prepositions) have little meaning on their own
- High occurrence frequencies
- Treated as stopwords (i.e., removed)
  - reduce index space, improve response time, improve effectiveness
- Can be important in combinations
  - e.g., "to be or not to be"
13 Stopping
- Stopword list can be created from high-frequency words or based on a standard list
- Lists are customized for applications, domains, and even parts of documents
  - e.g., "click" is a good stopword for anchor text
- Best policy is to index all words in documents and make decisions about which words to use at query time
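Stopping itself is a simple filter. The sketch below uses a tiny, made-up stopword list, and also shows why stopping can hurt: the query "to be or not to be" vanishes completely.

```python
# Toy stopword list for illustration; real lists run to hundreds of words.
STOPWORDS = {"to", "be", "or", "not", "the", "of", "a", "in"}

def remove_stopwords(tokens):
    """Drop any token that appears in the stopword list."""
    return [t for t in tokens if t not in STOPWORDS]

print(remove_stopwords("the cat in a hat".split()))      # → ['cat', 'hat']
print(remove_stopwords("to be or not to be".split()))    # → [] -- the query is gone
```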
14 Stopwords in Web Search Engine
15 Stemming
- Many morphological variations of words
  - inflectional (plurals, tenses)
  - derivational (making verbs nouns, etc.)
- In most cases, these have the same or very similar meanings
- Stemmers attempt to reduce morphological variations of words to a common stem
  - usually involves removing suffixes
- Can be done at indexing time or as part of query processing (like stopwords)
16 Stemming
- Generally a small but significant effectiveness improvement
  - can be crucial for some languages
  - e.g., 5-10% improvement for English, up to 50% in Arabic
- Words with the Arabic root ktb
17 Stemming
- Two basic types
  - Dictionary-based: uses lists of related words
  - Algorithmic: uses a program to determine related words
- Algorithmic stemmers
  - e.g., remove 's' endings assuming plural
    - cats → cat, lakes → lake, wiis → wii
  - Many mistakes: UPS → up
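The naive "strip a trailing s" stemmer above fits in one line, and the sketch makes both its successes and its mistakes visible:

```python
def naive_plural_stem(word):
    """Strip a trailing 's', assuming it marks a plural."""
    return word[:-1] if word.endswith("s") else word

for w in ["cats", "lakes", "wiis", "ups"]:
    print(w, "->", naive_plural_stem(w))
# cats -> cat, lakes -> lake, wiis -> wii, but also the mistake ups -> up
```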
18 Porter Stemmer
- Algorithmic stemmer used in IR experiments since the 70s
- Most common algorithm for stemming English
- Produces stems, not words
- Makes a number of errors and is difficult to modify
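The full Porter algorithm applies five ordered groups of suffix rules; as a flavor, its first rule group (step 1a: SSES→SS, IES→I, SS→SS, S→"") can be sketched on its own. This is only the first step, not the whole stemmer:

```python
def porter_step_1a(word):
    """Porter stemmer, step 1a only (plural endings)."""
    if word.endswith("sses"):
        return word[:-2]   # caresses -> caress
    if word.endswith("ies"):
        return word[:-2]   # ponies -> poni
    if word.endswith("ss"):
        return word        # caress -> caress
    if word.endswith("s"):
        return word[:-1]   # cats -> cat
    return word

print(porter_step_1a("caresses"), porter_step_1a("ponies"))  # → caress poni
```

Note the output "poni": Porter deliberately produces stems, not words, as the slide says.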
19 Problems with Rule-based Stemming
- Sometimes too aggressive in conflation (false positives)
  - e.g., policy/police, execute/executive, university/universe
- Sometimes misses good conflations (false negatives)
  - e.g., European/Europe, matrices/matrix, machine/machinery
- Produces stems that are not words
  - e.g., iteration/iter, general/gener
- Corpus analysis can improve or replace a stemmer
20 Krovetz Stemmer
- Hybrid algorithmic-dictionary stemmer
- Word is checked in the dictionary
  - If present, either left alone or replaced with an "exception"
  - If not present, the word is checked for suffixes that could be removed
  - After removal, the dictionary is checked again
- Produces words, not stems
- Comparable effectiveness
- Lower false positive rate, somewhat higher false negative rate
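The dictionary-check flow above can be sketched with a toy lexicon and suffix rules. Everything here (the dictionary contents, the suffix list, the function name) is made up for illustration; the real Krovetz stemmer uses a large lexicon and many more rules:

```python
# Toy lexicon and suffix rules -- hypothetical, for illustration only.
DICTIONARY = {"marketing", "strategy", "chemical", "market", "carry", "company"}
SUFFIX_MAP = [("ies", "y"), ("ing", ""), ("s", "")]

def krovetz_like_stem(word):
    if word in DICTIONARY:            # present in dictionary: leave it alone
        return word
    for suffix, repl in SUFFIX_MAP:   # otherwise try removing a suffix...
        if word.endswith(suffix):
            candidate = word[: -len(suffix)] + repl
            if candidate in DICTIONARY:   # ...and check the dictionary again
                return candidate
    return word

print(krovetz_like_stem("strategies"))  # → strategy (a word, not a stem)
print(krovetz_like_stem("marketing"))   # → marketing (in dictionary, left alone)
```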
21 Stemming Examples
- Original text:
  marketing strategies carried out by U.S. companies for their agricultural chemicals, report predictions for market share of such chemicals, or report market statistics for agrochemicals.
- Porter stemmer (stopwords removed):
  market strateg carr compan agricultur chemic report predict market share chemic report market statist agrochem
- KSTEM (stopwords removed):
  marketing strategy carry company agriculture chemical report prediction market share chemical report market statistic
22 Corpus-Based Stemming
- Hypothesis: word variants that should be conflated co-occur in documents (text windows) in the corpus
- Initial equivalence classes generated by another source
  - e.g., Porter stemmer
  - e.g., common initial prefix or n-grams
- New equivalence classes are clusters formed using co-occurrence statistics between pairs of words in the initial classes
- Can be used for other languages
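The co-occurrence idea can be sketched on a toy corpus. Here an initial (Porter-style) class wrongly conflates policy/police/policies; counting which pairs actually co-occur in documents keeps policy/policies together and separates police. The corpus and threshold are invented for illustration:

```python
from collections import Counter
from itertools import combinations

docs = [
    "the police arrested a suspect",
    "police officers patrol the city",
    "the new policy takes effect today",
    "government policy and policies on trade",
]

def cooccurrence(docs):
    """Count, for every word pair, the documents in which both appear."""
    counts = Counter()
    for d in docs:
        for a, b in combinations(sorted(set(d.split())), 2):
            counts[(a, b)] += 1
    return counts

# Initial equivalence class produced by an aggressive rule-based stemmer.
initial_class = {"policy", "police", "policies"}
co = cooccurrence(docs)
# Keep only pairs from the class that actually co-occur in some document.
refined = [(a, b) for (a, b) in co if a in initial_class and b in initial_class]
print(refined)  # policy/policies co-occur; police never co-occurs with either
```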
23 Corpus-Based Stemming (Xu & Croft, 1998)
24 Phrases
- Many queries are 2-3 word phrases
- Phrases are
  - More precise than single words
    - e.g., documents containing "black sea" vs. the two words "black" and "sea"
  - Less ambiguous
    - e.g., "big apple" vs. "apple"
- In IR, phrases are restricted to noun phrases (a sequence of nouns, or adjectives followed by nouns)
25 Phrases
- Text processing issue: how are phrases recognized?
- Three possible approaches:
  - Identify syntactic phrases using a part-of-speech (POS) tagger
  - Use word n-grams
  - Store word positions in indexes and use proximity operators in queries
26 POS Tagging
- POS taggers use statistical models of text to predict syntactic tags of words
  - Example tags: NN (singular noun), NNS (plural noun), VB (verb), VBD (verb, past tense), VBN (verb, past participle), IN (preposition), JJ (adjective), CC (conjunction, e.g., "and", "or"), PRP (pronoun), MD (modal auxiliary, e.g., "can", "will")
- Phrases can then be defined as simple noun groups, for example
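Given a tagged sentence, extracting simple noun groups reduces to collecting maximal runs of adjective/noun tags. A minimal sketch (the tagged input is hand-written here; a real system would produce it with a POS tagger):

```python
def noun_phrases(tagged):
    """Extract maximal runs of adjectives/nouns (JJ, NN, NNS) as phrases."""
    phrases, current = [], []
    for word, tag in tagged:
        if tag in {"JJ", "NN", "NNS"}:
            current.append(word)
        else:
            if len(current) > 1:          # keep multi-word groups only
                phrases.append(" ".join(current))
            current = []
    if len(current) > 1:
        phrases.append(" ".join(current))
    return phrases

tagged = [("the", "DT"), ("big", "JJ"), ("apple", "NN"),
          ("is", "VBZ"), ("a", "DT"), ("city", "NN")]
print(noun_phrases(tagged))  # → ['big apple']
```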
27 POS Tagging Example
28 POS Tagging Example
29 Example Noun Phrases
30 Word N-Grams
- POS tagging too slow for large collections
- Simpler definition: a phrase is any sequence of n words, known as n-grams
  - bigram: 2-word sequence; trigram: 3-word sequence; unigram: single word
- N-grams also used at the character level for applications such as OCR
- N-grams typically formed from overlapping sequences of words
  - i.e., move an n-word "window" one word at a time
  - e.g., "Document will describe..." → "document will" and "will describe"
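The overlapping-window definition above is a one-liner:

```python
def word_ngrams(words, n):
    """Overlapping word n-grams: slide an n-word window one word at a time."""
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

print(word_ngrams("document will describe".split(), 2))
# → ['document will', 'will describe']
```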
31 N-Grams
- Frequent n-grams are more likely to be meaningful phrases
- N-grams form a Zipf distribution
  - Better fit than words alone
- Could index all n-grams up to a specified length
  - Much faster than POS tagging
  - Uses a lot of storage
    - e.g., a document containing 1,000 words would contain 3,990 instances of word n-grams of length 2 ≤ n ≤ 5
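The 3,990 figure follows from the window count: an L-word document has L − n + 1 overlapping n-grams of length n, so for L = 1,000 and 2 ≤ n ≤ 5 the total is 999 + 998 + 997 + 996:

```python
# L - n + 1 overlapping n-grams of length n in an L-word document
total = sum(1000 - n + 1 for n in range(2, 6))
print(total)  # → 3990
```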
32 Google N-Grams
- Web search engines index n-grams
- Google sample:
  - Most frequent trigram in English is "all rights reserved"
  - In Chinese, "limited liability corporation"
33 Document Structure and Markup
- Some parts of documents are more important than others
- Document parser recognizes structure using markup, such as HTML tags
  - Headers, anchor text, bolded text all likely to be important
  - Metadata can also be important
- Links used for link analysis
34 Example Web Page
35 Example Web Page
36 Link Analysis
- Links are a key component of the Web
- Important for navigation, but also for search
  - e.g., <a href="http://example.com">Example website</a>
  - "Example website" is the anchor text
  - "http://example.com" is the destination link
  - both are used by search engines
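Extracting (anchor text, destination link) pairs is exactly what a document parser does with `<a>` tags. A minimal sketch using Python's standard-library HTML parser (the class name is mine):

```python
from html.parser import HTMLParser

class AnchorExtractor(HTMLParser):
    """Collect (anchor text, destination link) pairs from HTML."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:      # only collect text inside an <a>
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

p = AnchorExtractor()
p.feed('<p>See the <a href="http://example.com">Example website</a>.</p>')
print(p.links)  # → [('Example website', 'http://example.com')]
```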
37 Anchor Text
- Used as a description of the content of the destination page
  - i.e., the collection of anchor text in all links pointing to a page is used as an additional text field
- Anchor text tends to be short, descriptive, and similar to query text
- Retrieval experiments have shown that anchor text has a significant impact on effectiveness for some types of queries
  - i.e., more than link analysis
38 Link Quality
- Link quality is affected by spam and other factors
  - e.g., link farms to increase PageRank
  - trackback links in blogs can create loops
39 Information Extraction
- Automatically extract structure from text
  - annotate the document using tags to identify extracted structure
- Named entity recognition
  - identify words that refer to something of interest in a particular application
  - e.g., people, companies, locations, dates, product names, prices, etc.
40 Named Entity Recognition
- Example showing semantic annotation of text using XML tags
- Information extraction also includes document structure and more complex features such as relationships and events
41 Named Entity Recognition
- Rule-based
  - Uses lexicons (lists of words and phrases) that categorize names
    - e.g., locations, people's names, organizations, etc.
  - Rules also used to verify or find new entity names
    - e.g., "<number> <word> street" for addresses
    - "<street address>, <city>" or "in <city>" to verify city names
    - "<street address>, <city>, <state>" to find new cities
    - "<title> <name>" to find new names
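Patterns like these map naturally onto regular expressions. The two toy recognizers below are hypothetical (the street-type list and the two-letter-state rule are simplifications, not a real NER system), but they show the verify/find mechanism:

```python
import re

# Toy rule-based recognizers -- patterns simplified for illustration.
ADDRESS = re.compile(r"\b\d+ \w+ (?:Street|Ave|Road)\b", re.IGNORECASE)
CITY_STATE = re.compile(r"\b([A-Z][a-z]+), ([A-Z]{2})\b")  # "<city>, <state>"

text = "Visit us at 120 Main Street in Amherst, MA."
print(ADDRESS.findall(text))      # → ['120 Main Street']
print(CITY_STATE.findall(text))   # → [('Amherst', 'MA')]
```

The second pattern illustrates how a known state abbreviation can be used to find a previously unseen city name.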
42 Named Entity Recognition
- Rules developed either manually by trial and error or using machine learning techniques
- Statistical
  - uses a probabilistic model of the words in and around an entity
  - probabilities estimated using training data (manually annotated text)
  - Hidden Markov Model (HMM) is one approach
43 HMM Sentence Model
- Probabilistic finite state automaton
- Each state is associated with a probability distribution over words (the output)
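Decoding such a model means finding the most likely state sequence for an observed word sequence, typically with the Viterbi algorithm. The sketch below uses a two-state toy model (one "person" entity state, one background state) with entirely made-up probabilities:

```python
import math

# Toy HMM -- states, transition, and emission probabilities are invented.
states = ["other", "person"]
start = {"other": 0.8, "person": 0.2}
trans = {"other": {"other": 0.7, "person": 0.3},
         "person": {"other": 0.6, "person": 0.4}}
emit = {"other": {"met": 0.5, "with": 0.4, "jim": 0.1},
        "person": {"met": 0.05, "with": 0.05, "jim": 0.9}}

def viterbi(words):
    """Most likely state sequence for the word sequence (log space)."""
    V = [{s: math.log(start[s]) + math.log(emit[s][words[0]]) for s in states}]
    back = []
    for w in words[1:]:
        row, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            row[s] = V[-1][best] + math.log(trans[best][s]) + math.log(emit[s][w])
            ptr[s] = best
        V.append(row)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):        # follow backpointers to recover the path
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["met", "with", "jim"]))  # → ['other', 'other', 'person']
```

"jim" is tagged as a person because the person state is far more likely to emit it, even though transitions into that state are rarer.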
44 Internationalization
- About 2/3 of the Web is in English
- About 50% of Web users do not use English as their primary language
- Many (maybe most) search applications have to deal with multiple languages
  - monolingual search: search in one language, but with many possible languages
  - cross-language search: search in multiple languages at the same time
45 Internationalization
- Many aspects of search engines are language-neutral
- Major differences:
  - Text encoding (converting to Unicode)
  - Tokenizing (many languages have no word separators)
  - Stemming
- Cultural differences may also impact interface design and features provided