The Nuts & Bolts of Hypertext Retrieval: Crawling; Indexing; Retrieval

Indexing and Retrieval Issues

Efficient Retrieval (1)
Document-term matrix:

        t1    t2   ...  tj   ...  tm    nf
  d1    w11   w12  ...  w1j  ...  w1m   1/|d1|
  d2    w21   w22  ...  w2j  ...  w2m   1/|d2|
  ...
  di    wi1   wi2  ...  wij  ...  wim   1/|di|
  ...
  dn    wn1   wn2  ...  wnj  ...  wnm   1/|dn|

wij is the weight of term tj in document di. Most wij's will be zero.

Naïve Retrieval
Consider a query q = (q1, q2, ..., qj, ..., qm), with nf = 1/|q|. How do we evaluate q, i.e., compute the similarity between q and every document?

Method 1: Compare q with every document directly.
- Document data structure: di: ((t1, wi1), (t2, wi2), ..., (tj, wij), ..., (tm, wim), 1/|di|)
  - Only terms with positive weights are kept.
  - Terms are in alphabetic order.
- Query data structure: q: ((t1, q1), (t2, q2), ..., (tj, qj), ..., (tm, qm), 1/|q|)

Naïve Retrieval
Method 1: Compare q with documents directly (continued).

Algorithm:
  initialize all sim(q, di) = 0;
  for each document di (i = 1, ..., n) {
    for each term tj (j = 1, ..., m)
      if tj appears in both q and di
        sim(q, di) += qj * wij;
    sim(q, di) = sim(q, di) * (1/|q|) * (1/|di|);
  }
  sort documents in descending order of similarity and display the top k to the user;
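A minimal Python sketch of Method 1, assuming documents and the query are given as term-to-weight dicts (the alphabetic term ordering above would support a merge-style comparison; a dict lookup is used here for brevity):

```python
import math

def naive_retrieval(query, docs, k=10):
    """Method 1: compare the query against every document directly.

    query: dict term -> weight; docs: dict doc_id -> dict term -> weight.
    Returns the top-k (doc_id, similarity) pairs, cosine-normalized.
    """
    nf_q = 1.0 / math.sqrt(sum(w * w for w in query.values()))
    results = []
    for doc_id, doc in docs.items():
        sim = sum(qw * doc[t] for t, qw in query.items() if t in doc)
        nf_d = 1.0 / math.sqrt(sum(w * w for w in doc.values()))
        results.append((doc_id, sim * nf_q * nf_d))
    results.sort(key=lambda pair: pair[1], reverse=True)
    return results[:k]
```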

Inverted Files
Observation: Method 1 is not efficient, as most non-zero entries in the document-term matrix need to be accessed.

Method 2: Use an Inverted File Index.

Several data structures:
1. For each term tj, create a list (the inverted file list) that contains all document ids of documents containing tj:
     I(tj) = { (d1, w1j), (d2, w2j), ..., (di, wij), ..., (dn, wnj) }
   - di is the document id number of the ith document.
   - Only entries with non-zero weights should be kept.

Inverted Files
Method 2: Use an Inverted File Index (continued).

Several data structures:
2. Normalization factors of documents are pre-computed and stored in an array: nf[i] stores 1/|di|.
3. Create a hash table over all terms in the collection, mapping each term tj to a pointer to I(tj).
   - Inverted file lists are typically stored on disk.
   - The number of distinct terms is usually very large.

How Inverted Files Are Created
[figure: the dictionary, with each term pointing to its postings list]
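A small sketch of the construction, under the same assumed dict representation; the index dict plays the role of the dictionary, and its lists play the role of the postings:

```python
import math
from collections import defaultdict

def build_inverted_index(docs):
    """Build the dictionary and postings from weighted documents.

    docs: dict doc_id -> dict term -> weight.
    Returns (index, nf), where index maps each term to its postings list
    of (doc_id, weight) pairs and nf[doc_id] stores the 1/|d| factor.
    """
    index = defaultdict(list)      # the dictionary: term -> postings
    nf = {}
    for doc_id, doc in docs.items():
        nf[doc_id] = 1.0 / math.sqrt(sum(w * w for w in doc.values()))
        for term, weight in doc.items():
            if weight != 0:        # keep only non-zero entries
                index[term].append((doc_id, weight))
    return index, nf
```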

Retrieval Using Inverted Files
Algorithm:
  initialize all sim(q, di) = 0;
  for each term tj in q {
    find I(tj) using the hash table;
    for each (di, wij) in I(tj)
      sim(q, di) += qj * wij;
  }
  for each document di
    sim(q, di) = sim(q, di) * (1/|q|) * nf[i];
  sort documents in descending order of similarity and display the top k to the user;

Use something like this as part of your project.
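A sketch of this algorithm, assuming the index and nf produced by the previous sketch:

```python
import math
from collections import defaultdict

def inverted_retrieval(query, index, nf, k=10):
    """Method 2: score only documents that share a term with the query.

    query: dict term -> weight; index: term -> [(doc_id, weight), ...];
    nf: doc_id -> 1/|d|. Returns the top-k (doc_id, similarity) pairs.
    """
    nf_q = 1.0 / math.sqrt(sum(w * w for w in query.values()))
    sim = defaultdict(float)
    for term, q_weight in query.items():
        for doc_id, d_weight in index.get(term, []):  # walk I(tj)
            sim[doc_id] += q_weight * d_weight
    ranked = sorted(((d, s * nf_q * nf[d]) for d, s in sim.items()),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:k]
```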

Retrieval Using Inverted Indices
Some observations about Method 2:
- If a document d contains none of the terms of a given query q, then d is never touched in the evaluation of q.
- Only the non-zero entries in the columns of the document-term matrix that correspond to the query terms are used to evaluate the query.
- It computes the similarities of multiple documents simultaneously (with respect to each query word).

Efficient Retrieval (8)
Example (Method 2): Suppose
  q = { (t1, 1), (t3, 1) }, 1/|q| = 0.7071

  d1 = { (t1, 2), (t2, 1), (t3, 1) }, nf[1] = 0.4082
  d2 = { (t2, 2), (t3, 1), (t4, 1) }, nf[2] = 0.4082
  d3 = { (t1, 1), (t3, 1), (t4, 1) }, nf[3] = 0.5774
  d4 = { (t1, 2), (t2, 1), (t3, 2), (t4, 2) }, nf[4] = 0.2774
  d5 = { (t2, 2), (t4, 1), (t5, 2) }, nf[5] = 0.3333

  I(t1) = { (d1, 2), (d3, 1), (d4, 2) }
  I(t2) = { (d1, 1), (d2, 2), (d4, 1), (d5, 2) }
  I(t3) = { (d1, 1), (d2, 1), (d3, 1), (d4, 2) }
  I(t4) = { (d2, 1), (d3, 1), (d4, 1), (d5, 1) }
  I(t5) = { (d5, 2) }

Efficient Retrieval (9)
After t1 is processed:
  sim(q, d1) = 2, sim(q, d2) = 0, sim(q, d3) = 1, sim(q, d4) = 2, sim(q, d5) = 0
After t3 is processed:
  sim(q, d1) = 3, sim(q, d2) = 1, sim(q, d3) = 2, sim(q, d4) = 4, sim(q, d5) = 0
After normalization:
  sim(q, d1) = 0.87, sim(q, d2) = 0.29, sim(q, d3) = 0.82, sim(q, d4) = 0.78, sim(q, d5) = 0
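These numbers can be reproduced with a short self-contained check of Method 2's arithmetic on the example data:

```python
import math

docs = {
    "d1": {"t1": 2, "t2": 1, "t3": 1},
    "d2": {"t2": 2, "t3": 1, "t4": 1},
    "d3": {"t1": 1, "t3": 1, "t4": 1},
    "d4": {"t1": 2, "t2": 1, "t3": 2, "t4": 2},
    "d5": {"t2": 2, "t4": 1, "t5": 2},
}
query = {"t1": 1, "t3": 1}

nf_q = 1.0 / math.sqrt(sum(w * w for w in query.values()))   # 0.7071
for doc_id, doc in docs.items():
    dot = sum(qw * doc.get(t, 0) for t, qw in query.items())
    nf_d = 1.0 / math.sqrt(sum(w * w for w in doc.values()))
    print(doc_id, round(dot * nf_q * nf_d, 2))
# prints: d1 0.87, d2 0.29, d3 0.82, d4 0.78, d5 0.0
```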

Efficiency versus Flexibility
Storing pre-computed document weights is good for efficiency but bad for flexibility:
- Recomputation is needed if the tf or idf weighting formulas change and/or the tf and df information changes.
Flexibility is improved by storing raw tf and df information, but efficiency suffers.
A compromise:
- Store the pre-computed tf weights of documents.
- Use idf weights with the query term tf weights instead of with the document term tf weights.
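A sketch of the compromise; the concrete tf and idf formulas here are assumptions, since the slide leaves them open:

```python
import math

def idf(term, df, num_docs):
    """Assumed idf formula; df maps each term to its document frequency."""
    return math.log(num_docs / df[term])

def score(query_tf, postings, df, num_docs):
    """postings[term] holds (doc_id, precomputed_tf_weight) pairs, so a
    change to the idf formula requires no re-indexing of documents."""
    scores = {}
    for term, q_tf in query_tf.items():
        if term not in postings:
            continue
        w = q_tf * idf(term, df, num_docs)   # idf applied on the query side
        for doc_id, d_tf_weight in postings[term]:
            scores[doc_id] = scores.get(doc_id, 0.0) + w * d_tf_weight
    return scores
```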

Main Issues
- General-purpose crawling
- Context-specific crawling
  - Building topic-specific search engines...

Web Crawling (Search) Strategy
- Starting location(s)
- Traversal order
  - Depth-first
  - Breadth-first
  - Or ???
- Cycles? Coverage? Load?
[figure: example link graph over pages b through j]
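A frontier sketch for the traversal choices above; fetch_links is a hypothetical hook standing in for the actual page fetcher and parser:

```python
from collections import deque

def crawl(seeds, fetch_links, max_pages=1000):
    """Breadth-first traversal from the starting location(s).

    fetch_links(url) -> iterable of out-link URLs. The visited set
    breaks cycles; max_pages bounds the load placed on the web.
    """
    frontier = deque(seeds)          # FIFO queue => breadth-first
    visited = set(seeds)
    crawled = []
    while frontier and len(crawled) < max_pages:
        url = frontier.popleft()     # frontier.pop() would give depth-first
        crawled.append(url)
        for link in fetch_links(url):
            if link not in visited:  # skip already-seen pages (cycles)
                visited.add(link)
                frontier.append(link)
    return crawled
```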

Lecture of October 7th
Today's agenda: complete crawling issues -- discussion of Google & Mercator.
Topics in queue: clustering; collaborative filtering.
Next class: clustering search results.
Read: the paper by Vipin Kumar et al.
Interesting link: commercial search-engine conference.

Robot (2)
Some specific issues:
1. What initial URLs should be used?
   The choice depends on the type of search engine to be built. For general-purpose search engines, use URLs that are likely to reach a large portion of the Web, such as the Yahoo home page. For local search engines covering one or several organizations, use the URLs of the home pages of these organizations; in addition, apply an appropriate domain constraint.

Robot (4)
2. How do we extract URLs from a web page?
   We need to identify all the tags and attributes that can hold URLs:
   - Anchor tag: <a href="url">...</a>
   - Option tag: <option value="url">...</option>
   - Map: <map><area href="url">...</map>
   - Frame: <frame src="url">
   - Link to an image: <img src="url">
   - Relative path vs. absolute path: relative URLs must be resolved against the page's own URL.
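A sketch of such extraction using Python's standard html.parser, with urljoin handling the relative-vs-absolute resolution:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect URLs from the tags/attributes listed above, resolving
    relative paths against the page's own URL."""
    URL_ATTRS = {"a": "href", "option": "value", "area": "href",
                 "frame": "src", "img": "src"}

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.urls = []

    def handle_starttag(self, tag, attrs):
        wanted = self.URL_ATTRS.get(tag)
        for name, value in attrs:
            if name == wanted and value:
                self.urls.append(urljoin(self.base_url, value))

extractor = LinkExtractor("http://example.org/dir/page.html")
extractor.feed('<a href="../other.html">x</a> <img src="/logo.png">')
print(extractor.urls)   # both URLs come back absolute
```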

Robot (7)
Several research issues about robots:
- Fetching more important pages first with limited resources.
  - Can use measures of page importance.
- Fetching web pages in a specified subject area, such as movies or sports, for creating domain-specific search engines.
  - Focused crawling.
- Efficiently re-fetching web pages to keep the web page index up to date.
  - Keeping track of the change rate of each page.

Focused Crawling
Classifier: is the crawled page P relevant to the topic?
- An algorithm that maps a page to relevant/irrelevant.
- Semi-automatic.
- Based on the page's vicinity.
Distiller: is the crawled page P likely to lead to relevant pages?
- An algorithm that maps a page to likely/unlikely.
- Could be just an authority/hub (A/H) computation, taking the hubs.
- The distiller determines the priority of following the links off of P, as sketched below.
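One plausible way to wire the two components into the crawl order is a priority frontier; classify, distill, and fetch_links are hypothetical hooks standing in for the trained models and the fetcher:

```python
import heapq

def focused_crawl(seeds, fetch_links, classify, distill, max_pages=500):
    """Priority crawl: the distiller score of page P sets the priority
    of the links found on P; the classifier filters what gets kept.
    classify(url) -> bool (relevant?); distill(url) -> float in [0, 1].
    """
    frontier = [(-1.0, seed) for seed in seeds]   # max-heap via negation
    heapq.heapify(frontier)
    seen, relevant = set(seeds), []
    while frontier and len(seen) < max_pages:
        _, url = heapq.heappop(frontier)
        if classify(url):
            relevant.append(url)
        priority = distill(url)   # how likely P leads to relevant pages
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-priority, link))
    return relevant
```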

Storing Summaries
Can't store the complete page text:
- The whole WWW doesn't fit on any server.
- Stop words
- Stemming
What (compact) summary should be stored?
- Per URL: title, snippet
- Per word: URL, word number
But, look at Google's "Cache" copy...

SPIDER CASE STUDY

Anatomy of Google (circa 1999)

Search Engine Size over Time
[chart: number of indexed pages over time, self-reported; Google: 50% of the web?]
The "Google" paper discusses Google's architecture circa 1999.

System Anatomy
[figure: high-level overview of the system]

Google Search Engine Architecture (source: Brin & Page)
- URL Server: provides URLs to be fetched
- Crawler: distributed over many machines
- Store Server: compresses and stores pages for indexing
- Repository: holds pages for indexing (the full HTML of every page)
- Indexer: parses documents; records words, positions, font size, and capitalization
- Lexicon: list of unique words found
- Hit list: efficient record of word locations + attributes
- Barrels: hold (docID, (wordID, hitList*)*)* entries, sorted; each barrel covers a range of words
- Anchors: keep information about links found in web pages
- URL Resolver: converts relative URLs to absolute
- Sorter: generates the doc index
- Doc Index: inverted index of all words in all documents (except stop words)
- Links: stores info about the links to each page (used for PageRank)
- PageRank: computes a rank for each page retrieved
- Searcher: answers queries

Major Data Structures
BigFiles:
- Virtual files spanning multiple file systems
- Addressable by 64-bit integers
- Handle allocation & deallocation of file descriptors, since the OS's support is not sufficient
- Support rudimentary compression

Major Data Structures (2)
Repository:
- Tradeoff between speed & compression ratio
- Chose zlib (3 to 1) over bzip (4 to 1)
- Requires no other data structure to access it

Major Data Structures (3)
Document Index:
- Keeps information about each document
- Fixed-width ISAM (Indexed Sequential Access Method) index
- Includes various statistics, a pointer into the repository and, if the document has been crawled, a pointer to its info list
- Compact data structure
- A record can be fetched in one disk seek during search

Major Data Structures (4)
URL-to-docID file:
- Used to convert URLs to docIDs
- A list of URL checksums with their docIDs, sorted by checksum
- Given a URL, a binary search is performed
- Conversion is done in batch mode
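A sketch of the lookup; the checksum function itself is an assumption (the slide does not pin one down):

```python
import hashlib
from bisect import bisect_left

def checksum(url):
    """Stand-in 64-bit checksum derived from MD5."""
    return int.from_bytes(hashlib.md5(url.encode()).digest()[:8], "big")

def url_to_docid(url, table):
    """table is a list of (checksum, docid) pairs sorted by checksum;
    a binary search recovers the docid, as described above."""
    c = checksum(url)
    i = bisect_left(table, (c, 0))
    if i < len(table) and table[i][0] == c:
        return table[i][1]
    return None   # URL not seen yet

# Batch mode: sort once, then answer lookups by binary search.
table = sorted((checksum(u), d) for d, u in
               enumerate(["http://a.com/", "http://b.com/x"]))
print(url_to_docid("http://b.com/x", table))   # -> 1
```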

Major Data Structures (5)
Lexicon:
- Can fit in memory for a reasonable price (currently 256 MB; contains 14 million words)
- Two parts:
  - A list of the words
  - A hash table

Major Data Structures (6)
Hit Lists:
- Include position, font & capitalization
- Account for most of the space used in the indexes
- Three encoding alternatives: simple, Huffman, hand-optimized
- The hand-optimized encoding uses 2 bytes for every hit

Major Data Structures (6): Hit Lists (continued)
[figure: bit layout of the hit encodings]
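The figure is not reproduced in this transcript; the sketch below packs a plain hit into the 2 bytes mentioned above, using the layout the Brin & Page paper describes (1 capitalization bit, 3 font-size bits, 12 position bits):

```python
def pack_hit(capitalized, font_size, position):
    """Pack one plain hit into 2 bytes: cap(1) | font(3) | position(12)."""
    assert 0 <= font_size < 8 and 0 <= position < 4096
    return (int(capitalized) << 15) | (font_size << 12) | position

def unpack_hit(hit):
    """Recover the three fields from the 16-bit encoding."""
    return bool(hit >> 15), (hit >> 12) & 0x7, hit & 0xFFF

hit = pack_hit(True, 3, 917)
print(hex(hit), unpack_hit(hit))   # 0xb395 (True, 3, 917)
```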

Major Data Structures (7)
Forward Index:
- Partially sorted
- Uses 64 barrels; each barrel holds a range of wordIDs
- Requires slightly more storage
- Each wordID is stored as a relative difference from the minimum wordID of its barrel
- Saves considerable time in the sorting

Major Data Structures (8)
Inverted Index:
- 64 barrels (the same as the forward index)
- For each wordID, the lexicon contains a pointer to the barrel that the wordID falls into
- The pointer points to a doclist of docIDs together with their hit lists
- The order of the docIDs is important: sorted by docID, or by a document word-ranking
- Two sets of inverted barrels: the short barrels and the full barrels

Major Data Structures (9)
Crawling the Web:
- Fast distributed crawling system
- The URLserver & crawlers are implemented in Python
- Each crawler keeps about 300 connections open
- At peak, the crawl rate is over 100 pages (roughly 600 KB of data) per second
- Uses an internal cached DNS lookup
- Synchronized IO to handle events; a number of queues
- Robust & carefully tested

Major Data Structures (10)
Indexing the Web:
- Parsing must know how to handle errors:
  - HTML typos
  - Kilobytes of zeros in the middle of a tag
  - Non-ASCII characters
  - HTML tags nested hundreds deep
- They developed their own parser:
  - It involved a fair amount of work
  - It did not cause a bottleneck

Major Data Structures (11)
Indexing Documents into Barrels:
- Turning words into wordIDs
- In-memory hash table: the lexicon
- New additions are logged to a file
- Parallelization:
  - A shared lexicon of 14 million words
  - A log of all the extra words

Major Data Structures (12)
Indexing the Web -- Sorting:
- Creating the inverted index produces two types of barrels:
  - Short barrels, for titles and anchors
  - Full barrels, for the full text
- Every barrel is sorted separately, with the sorters running in parallel
- The sorting is done in main memory
- Ranking looks at the short barrels first, and then at the full barrels

Searching
Algorithm:
1. Parse the query.
2. Convert the words into wordIDs.
3. Seek to the start of the doclist in the short barrel for every word.
4. Scan through the doclists until there is a document that matches all of the search terms.
5. Compute the rank of that document.
6. If we are at the end of the short barrels, continue in the doclists of the full barrels, unless we already have enough results.
7. If we are not at the end of every doclist, go to step 4.
8. Sort the matched documents by rank and return the top k. (May jump here after 40k pages.)
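A simplified sketch of steps 3-7 as a document-at-a-time intersection of docID-sorted doclists; rank() is a placeholder for the scoring described on the next slides, and the short-barrel/full-barrel staging is omitted:

```python
def conjunctive_search(word_ids, doclists, rank, k=10):
    """doclists[w] is the docID-sorted list for word w; advance cursors
    until one docID appears in every list, score it, and continue."""
    cursors = {w: 0 for w in word_ids}
    hits = []
    while all(cursors[w] < len(doclists[w]) for w in word_ids):
        current = {w: doclists[w][cursors[w]] for w in word_ids}
        top = max(current.values())
        if all(d == top for d in current.values()):
            hits.append((rank(top), top))    # step 5: compute the rank
            for w in word_ids:
                cursors[w] += 1
        else:
            for w in word_ids:               # catch lagging lists up
                while (cursors[w] < len(doclists[w])
                       and doclists[w][cursors[w]] < top):
                    cursors[w] += 1
    hits.sort(reverse=True)                  # step 8: sort by rank
    return hits[:k]
```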

The Ranking System
The information used:
- Position, font size, capitalization
- Anchor text
- PageRank
Hit types:
- title, anchor, URL, etc.
- small font, large font, etc.

The Ranking System (2)
- Each hit type has its own type weight.
- Count weights increase linearly with the counts at first but quickly taper off.
- This gives the IR score of the document (IDF weighting??).
- The IR score is combined with PageRank to give the final rank.
- For a multi-word query:
  - A proximity score is computed for every set of hits, with a proximity-type weight.
  - 10 grades of proximity.
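A hedged sketch of how these pieces might combine; the taper is modeled with a simple count cap, and the weights, the cap, and the combination function are assumptions (the slide does not give them):

```python
def ir_score(hit_counts, type_weights, cap=8):
    """hit_counts[t] = number of hits of type t (title, anchor, ...).
    Counts contribute linearly at first, then taper off (the cap)."""
    return sum(type_weights[t] * min(n, cap) for t, n in hit_counts.items())

def final_rank(hit_counts, type_weights, pagerank, alpha=0.5):
    """Combine the IR score with PageRank; alpha is an assumed knob."""
    return (alpha * ir_score(hit_counts, type_weights)
            + (1 - alpha) * pagerank)

weights = {"title": 5.0, "anchor": 4.0, "plain": 1.0}
print(final_rank({"title": 1, "plain": 12}, weights, pagerank=3.2))  # 8.1
```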

Feedback
- A trusted user may optionally evaluate the results.
- The feedback is saved.
- When modifying the ranking function, the impact of the change can be checked against all previous searches that were ranked.

Results
Produces better results than major commercial search engines for most searches.
Example: the query "bill clinton"
- Returns results from whitehouse.gov
- Email addresses of the president
- All the results are high-quality pages
- No broken links
- No "bill" without "clinton" & no "clinton" without "bill"

Storage Requirements
- Using compression on the repository, about 55 GB suffices for all the data used by the search engine.
- Most queries can be answered using just the short inverted index.
- With better compression, a high-quality search engine could fit on the 7 GB drive of a new PC.

Storage Statistics
[tables: storage statistics and web page statistics]

System Performance
- It took 9 days to download 26 million pages (48.5 pages per second).
- The indexer & crawler ran simultaneously.
- The indexer runs at 54 pages per second.
- The sorters run in parallel using 4 machines; the whole sorting process took 24 hours.