Prof. Paolo Ferragina, Algoritmi per "Information Retrieval"


Index Construction
Paolo Ferragina, Dipartimento di Informatica, Università di Pisa

Sec. 3.1 Basics

The memory hierarchy

CPU registers → L1/L2 cache → RAM → hard disk → network:
- Cache: a few MBs, some nanoseconds per access, a few words fetched
- RAM: a few GBs, tens of nanoseconds per access, some words fetched
- Hard disk: a few TBs, a few milliseconds per access, pages of B = 32K fetched
- Network: many TBs, even seconds per access, packets fetched

Algorithms should exploit spatial locality and temporal locality.

Pay attention to disk... if sorting needs to manage strings

Key observations:
- Array A is an "array of pointers to objects" (the strings): you sort A, not the strings themselves.
- Each object-to-object comparison A[i] vs A[j] costs 2 random accesses to the memory locations of A[i] and A[j].
- Hence Θ(n log n) random memory accesses (random I/Os??).

Again caching helps, but how much? A common fix: map strings to IDs.

SPIMI: Single-pass in-memory indexing

- Key idea #1: generate a separate dictionary for each block (no need for a term → termID mapping).
- Key idea #2: accumulate postings in lists as they occur (in internal memory), generating an inverted index for each block.
  - More space available for postings
  - Compression is possible
- Finally, merge the block indexes into one big index. With one file per postings list the merge is an easy append (docIDs are increasing within a block); otherwise use a multi-way merge.
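The two key ideas can be sketched as follows. This is a minimal in-memory illustration, not the textbook's SPIMI-Invert pseudocode: the token-stream format and the `block_size` cutoff are assumptions for the example.

```python
from collections import defaultdict

def spimi_invert(token_stream, block_size):
    """Build one block index from (term, docID) pairs.

    SPIMI idea: no term -> termID mapping; postings are
    accumulated per term as tokens arrive.
    """
    index = defaultdict(list)
    n = 0
    for term, doc_id in token_stream:
        postings = index[term]
        # docIDs arrive in increasing order within a block,
        # so appending keeps each postings list sorted
        if not postings or postings[-1] != doc_id:
            postings.append(doc_id)
        n += 1
        if n >= block_size:
            break
    # sort terms so blocks can later be merged sequentially
    return sorted(index.items())

block = spimi_invert(
    [("brutus", 1), ("caesar", 1), ("caesar", 2), ("noble", 2)], 10)
```

Because each block is emitted with terms in sorted order, merging two blocks is a linear scan, exactly like the merge step of merge-sort.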

SPIMI-Invert

Overall, SPIMI

Dictionary & postings: how do we construct them?
- Scan the texts
- Find the proper tokens-list
- Append the docID to that tokens-list

Open questions:
- Find? ... a time issue ...
- Append? ... a space issue ...
- Postings' size? Dictionary size? ... in-memory issues ...

What about one single index?

Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious

Some issues

- Assign termIDs and create <termID, docID> pairs (1 pass)
- Sort the pairs by termID
- Sort in-place by docID (within each term)

Sec. 1.2 Dictionary & Postings

- Multiple term entries in a single document are merged.
- Split into Dictionary and Postings.
- Document frequency information is added.

Sorting on disk: multi-way merge-sort, aka BSBI (Blocked Sort-Based Indexing)

- The term → termID mapping must be kept in memory for constructing the pairs, so memory is consumed during pair creation.
- Needs two passes, unless you use hashing (and thus accept some probability of collision).
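The per-block part of BSBI can be sketched like this: build a term → termID map, emit the <termID, docID> pairs, then sort. The document format is an assumption for illustration; on disk each sorted block would then be written out and multi-way merged.

```python
def bsbi_block(docs):
    """One in-memory BSBI block: map terms to integer termIDs,
    emit (termID, docID) pairs, sort by termID then docID."""
    term_to_id = {}
    pairs = []
    for doc_id, text in docs:
        for term in text.split():
            # assign the next free termID on first sight
            tid = term_to_id.setdefault(term, len(term_to_id))
            pairs.append((tid, doc_id))
    pairs.sort()  # lexicographic on tuples: termID, then docID
    return term_to_id, pairs

term_to_id, pairs = bsbi_block(
    [(1, "caesar was killed"), (2, "caesar was ambitious")])
```

Note how the `term_to_id` dictionary must stay resident across the whole pass, which is exactly the memory cost the slide points out and the reason SPIMI drops the mapping.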

Binary Merge-Sort

Merge-Sort(A, i, j)
  if (i < j) then              // Divide
    m = (i + j) / 2
    Merge-Sort(A, i, m)        // Conquer
    Merge-Sort(A, m+1, j)      // Conquer
    Merge(A, i, m, j)          // Combine

Merge is linear in the number of items to be merged.

Multi-way Merge-Sort

Sort N items with main memory M and disk pages of B items:
- Pass 1: produce N/M sorted runs.
- Pass i: merge X = M/B − 1 runs at a time, using one main-memory buffer page of B items per input run plus one output page ⇒ log_X (N/M) merge passes.

How it works

- Run formation: the N/M runs, each sorted in internal memory, cost one read and one write of the whole input = 2 (N/B) I/Os.
- Each level of X-way merging likewise reads and writes all the data once = 2 (N/B) I/Os, over log_X (N/M) levels.
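The X-way merge at the heart of each pass can be sketched with the standard library's k-way heap merge. On disk each run would be read one page of B items at a time; here the runs are plain in-memory lists for illustration.

```python
import heapq

def multiway_merge(sorted_runs):
    """Merge X sorted runs into one sorted sequence, as in
    pass 2+ of multi-way merge-sort. heapq.merge keeps one
    'current item' per run, mirroring one buffer page per run."""
    return list(heapq.merge(*sorted_runs))

merged = multiway_merge([[1, 7, 13], [2, 9, 19], [5, 10]])
```

Each item is touched once per merge level, which is why every level costs one full read and one full write of the data.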

Cost of Multi-way Merge-Sort

- Number of passes = log_X (N/M) ≈ log_{M/B} (N/M)
- Total I/O cost is Θ( (N/B) log_{M/B} (N/M) ) I/Os

In practice M/B ≈ 10^5 ⇒ #passes = 1 ⇒ a few minutes. Tuning depends on disk features:
- A large fan-out (M/B) decreases the number of passes
- Compression would decrease the cost of a pass!

Distributed indexing

For web-scale indexing we must use a distributed computing cluster of PCs. Individual machines are fault-prone: they can unpredictably slow down or fail. How do we exploit such a pool of machines?

Distributed indexing

- Maintain a master machine directing the indexing job, considered "safe".
- Break up indexing into sets of (parallel) tasks.
- The master machine assigns tasks to idle machines.
- Other machines can play many roles during the computation.

Parallel tasks

We will use two sets of parallel tasks: parsers and inverters. The document collection can be broken up in two ways:
- Term-based partition: one machine handles a subrange of terms
- Doc-based partition: one machine handles a subrange of documents

Data flow: doc-based partitioning

The master assigns document sets (splits) to parsers; each parser feeds its own inverter, which produces one complete inverted list IL_i over that machine's documents. Consequence: each query term must go to many machines.

Data flow: term-based partitioning

The master assigns splits to parsers; each parser writes its postings into term-range buckets (e.g. a-f, g-p, q-z), and each inverter collects and inverts exactly one term range. Consequence: each query term goes to one machine.

MapReduce

This is a robust and conceptually simple framework for distributed computing... without having to write code for the distribution part. The Google indexing system (ca. 2002) consisted of a number of phases, each implemented in MapReduce.
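The parser/inverter split maps directly onto MapReduce's map and reduce phases. A toy in-memory sketch (not Google's implementation; the document format and function names are assumptions, and the shuffle that routes each term range to one inverter is simulated by a single sort):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(doc_id, text):
    """Parser / map task: emit (term, docID) pairs for one split."""
    return [(term, doc_id) for term in text.split()]

def reduce_phase(pairs):
    """Inverter / reduce task: group pairs by term into postings
    lists. In real MapReduce the shuffle would route each term
    range (e.g. a-f) to its own inverter machine."""
    pairs = sorted(pairs)
    return [(term, sorted({d for _, d in group}))
            for term, group in groupby(pairs, key=itemgetter(0))]

pairs = map_phase(1, "caesar was ambitious") + map_phase(2, "noble caesar")
index = reduce_phase(pairs)
```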

Data flow: term-based partitioning with MapReduce

Parsers (map phase) write 16 MB - 64 MB segment files on local disks, one per term range; inverters (reduce phase) read and invert them. Open question: how do we guarantee that each term range fits in one machine?

Dynamic indexing

Up to now we have assumed static collections. It now occurs more frequently that:
- Documents come in over time
- Documents are deleted and modified

This induces:
- Postings updates for terms already in the dictionary
- New terms added to / deleted from the dictionary

Simplest approach

- Maintain a "big" main index; new docs go into a "small" auxiliary index.
- Search across both, and merge the results.
- Deletions: keep an invalidation bit-vector for deleted docs, and filter search results (i.e. docs) through it.
- Periodically, re-index into one main index.
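A minimal sketch of querying under this scheme; the dicts stand in for the two on-disk indexes, and a set of docIDs stands in for the invalidation bit-vector (both representation choices are assumptions for the example):

```python
def search(term, main_index, aux_index, deleted):
    """Look up a term in both the main and the auxiliary index,
    merge the postings, and filter out invalidated docs."""
    hits = main_index.get(term, []) + aux_index.get(term, [])
    return sorted(d for d in set(hits) if d not in deleted)

main = {"caesar": [1, 2]}   # the "big" main index
aux = {"caesar": [5]}       # newly arrived docs
result = search("caesar", main, aux, deleted={2})
```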

Issues with 2 indexes

Poor merge performance. Merging the auxiliary index into the main index is efficient if we keep a separate file for each postings list: the merge is then a simple append (new docIDs are greater). But this needs a lot of files, which is inefficient for the O/S. In reality, a scheme somewhere in between is used (e.g., split very large postings lists, collect postings lists of length 1 in one file, etc.).

Logarithmic merge

- Maintain a series of indexes, each twice as large as the previous one: M, 2^1 M, 2^2 M, 2^3 M, ...
- Keep a small index Z in memory (of size up to M).
- Store I_0, I_1, I_2, ... on disk (of sizes M, 2M, 4M, ...).
- If Z gets too big (> M), write it to disk as I_0, or merge it with I_0 if I_0 already exists.
- Either write the merged Z to disk as I_1 (if there is no I_1), or merge it with I_1 to form a larger Z, and so on.
- The number of indexes is logarithmic.
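The cascade of merges can be sketched as follows. This is an illustrative toy: each "index" is just a sorted list of docIDs, `Z` is the in-memory index, and `disk[i]` plays the role of I_i (size 2^i · M) or `None` when that level is empty.

```python
def log_merge_insert(doc, Z, disk, M=4):
    """Insert one doc; when Z reaches size M, spill it down the
    disk levels, merging with each occupied I_i until a free
    slot is found (so the spill doubles at each occupied level)."""
    Z.append(doc)
    if len(Z) < M:
        return
    spill = sorted(Z)
    Z.clear()
    i = 0
    while True:
        if i == len(disk):
            disk.append(None)       # open a new, larger level
        if disk[i] is None:
            disk[i] = spill          # write spill as I_i
            return
        spill = sorted(disk[i] + spill)  # merge with I_i, try I_{i+1}
        disk[i] = None
        i += 1

Z, disk = [], []
for d in range(1, 13):               # insert 12 docs with M = 4
    log_merge_insert(d, Z, disk)
```

After 12 insertions the third spill lands in the freed level 0 while the first two spills sit merged at level 1, so at most log(C/M) levels ever exist.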

Some analysis (C = total collection size)

- Auxiliary and main index: each posting participates in at most C/M mergings, because the two indexes (small and large) are merged once every M-sized batch of insertions.
- Logarithmic merge: each posting participates in no more than log(C/M) mergings, because at each merge it moves to the next index, and there are at most log(C/M) indexes.
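The gap between the two bounds is easy to see numerically; the example values of C and M (in postings) are illustrative assumptions:

```python
import math

def aux_main_mergings(C, M):
    """Auxiliary+main scheme: a posting is rewritten in up to
    C/M merges of the auxiliary index into the main one."""
    return C // M

def log_merge_mergings(C, M):
    """Logarithmic merging: a posting moves through at most
    log2(C/M) levels, hence that many merges."""
    return math.ceil(math.log2(C / M))

# e.g. C = 10**9 postings, collection grows in batches of M = 10**6
naive = aux_main_mergings(10**9, 10**6)
logarithmic = log_merge_mergings(10**9, 10**6)
```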

Web search engines

Most search engines now support dynamic indexing: news items, blogs, new topical web pages. But (sometimes/typically) they also periodically reconstruct the index from scratch; query processing is then switched to the new index, and the old index is deleted.