Introduction to Information Retrieval. Modified from Stanford CS276 slides, Chap. 20: Crawling and web indexes.

Presentation transcript:

Introduction to Information Retrieval. Modified from Stanford CS276 slides, Chap. 20: Crawling and web indexes

Introduction to Information Retrieval Today’s lecture  Crawling  Connectivity servers

Introduction to Information Retrieval Basic crawler operation  Begin with known “seed” URLs  Fetch and parse them  Extract URLs they point to  Place the extracted URLs on a queue  Fetch each URL on the queue and repeat Sec. 20.2
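A minimal sketch of this loop in Python (the seed list, the regex-based link extraction, and the max_pages cap are illustrative assumptions, not part of the original slides):

from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
import re

def basic_crawl(seed_urls, max_pages=100):
    frontier = deque(seed_urls)          # queue of URLs still to fetch
    crawled = set()                      # URLs already fetched and parsed
    while frontier and len(crawled) < max_pages:
        url = frontier.popleft()
        if url in crawled:
            continue
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except OSError:
            continue                     # skip unreachable pages
        crawled.add(url)
        # Extract href targets with a crude regex (a real crawler uses an HTML parser)
        for link in re.findall(r'href="([^"]+)"', html):
            frontier.append(urljoin(url, link))   # resolve relative URLs
    return crawled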

Introduction to Information Retrieval Crawling picture (figure): seed pages, the URLs already crawled and parsed, the URL frontier, and the unseen Web. Sec. 20.2

Introduction to Information Retrieval Simple picture – complications  Web crawling isn’t feasible with one machine  All of the above steps must be distributed  Malicious pages  Spam pages  Spider traps – including dynamically generated ones  Even non-malicious pages pose challenges  Latency/bandwidth to remote servers vary  Webmasters’ stipulations  How “deep” should you crawl a site’s URL hierarchy?  Site mirrors and duplicate pages  Politeness – don’t hit a server too often Sec

Introduction to Information Retrieval What any crawler must do  Be Polite: Respect implicit and explicit politeness considerations  Only crawl allowed pages  Respect robots.txt (more on this shortly)  Be Robust: Be immune to spider traps and other malicious behavior from web servers Sec

Introduction to Information Retrieval What any crawler should do  Be capable of distributed operation: designed to run on multiple distributed machines  Be scalable: designed to increase the crawl rate by adding more machines  Performance/efficiency: permit full use of available processing and network resources Sec

Introduction to Information Retrieval What any crawler should do  Fetch pages of “higher quality” first  Continuous operation: Continue fetching fresh copies of a previously fetched page  Extensible: Adapt to new data formats, protocols Sec

Introduction to Information Retrieval Updated crawling picture (figure): crawling threads pull from the URL frontier; URLs crawled and parsed accumulate, between the seed pages and the unseen Web. Sec

Introduction to Information Retrieval URL frontier  Can include multiple pages from the same host  Must avoid trying to fetch them all at the same time  Must try to keep all crawling threads busy Sec. 20.2

Introduction to Information Retrieval Explicit and implicit politeness  Explicit politeness: specifications from webmasters on what portions of site can be crawled  robots.txt  Implicit politeness: even with no specification, avoid hitting any site too often Sec. 20.2

Introduction to Information Retrieval Robots.txt  Protocol for giving spiders (“robots”) limited access to a website, originally from 1994   Website announces its request on what can(not) be crawled  For a URL, create a file URL/robots.txt  This file specifies access restrictions Sec

Introduction to Information Retrieval Robots.txt example  No robot should visit any URL starting with "/yoursite/temp/", except the robot called "searchengine":

User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:

Sec
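As an illustration (not from the slides), Python's standard urllib.robotparser evaluates exactly this policy; the host example.org and the agent name somebot are hypothetical:

from urllib.robotparser import RobotFileParser

rules = """User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())
print(rp.can_fetch("somebot", "http://example.org/yoursite/temp/page.html"))       # False
print(rp.can_fetch("searchengine", "http://example.org/yoursite/temp/page.html"))  # True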

Introduction to Information Retrieval Processing steps in crawling  Pick a URL from the frontier  Fetch the document at the URL  Parse the URL  Extract links from it to other docs (URLs)  Check if URL has content already seen  If not, add to indexes  For each extracted URL  Ensure it passes certain URL filter tests  Check if it is already in the frontier (duplicate URL elimination) E.g., only crawl.edu, obey robots.txt, etc. Which one? Sec

Introduction to Information Retrieval Basic crawl architecture WWW DNS Parse Content seen? Doc FP’s Dup URL elim URL set URL Frontier URL filter robots filters Fetch Sec

Introduction to Information Retrieval DNS (Domain Name Server)  A lookup service on the internet  Given a URL, retrieve its IP address  Service provided by a distributed set of servers – thus, lookup latencies can be high (even seconds)  Common OS implementations of DNS lookup are blocking: only one outstanding request at a time  Solutions  DNS caching  Batch DNS resolver – collects requests and sends them out together Sec
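A minimal sketch of DNS caching (the cache dict and the blocking socket call are assumptions; production crawlers typically use a custom asynchronous or batch resolver instead):

import socket

_dns_cache = {}   # hostname -> IP address

def resolve(hostname):
    """Resolve a hostname, caching results to avoid repeated blocking lookups."""
    if hostname not in _dns_cache:
        _dns_cache[hostname] = socket.gethostbyname(hostname)   # blocking call
    return _dns_cache[hostname]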

Introduction to Information Retrieval Parsing: URL normalization  When a fetched document is parsed, some of the extracted links are relative URLs  E.g., at http://en.wikipedia.org/wiki/Main_Page we have a relative link to /wiki/Wikipedia:General_disclaimer, which is the same as the absolute URL http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer  During parsing, must normalize (expand) such relative URLs Sec
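In Python, the standard library's urljoin performs this expansion (a sketch using the Wikipedia URLs above; dropping the fragment is an extra normalization step, an assumption beyond the slide):

from urllib.parse import urljoin, urldefrag

base = "http://en.wikipedia.org/wiki/Main_Page"
absolute = urljoin(base, "/wiki/Wikipedia:General_disclaimer")
absolute, _ = urldefrag(absolute)   # also drop any #fragment during normalization
print(absolute)   # http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer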

Introduction to Information Retrieval Content seen?  Duplication is widespread on the web  If the page just fetched is already in the index, do not further process it  This is verified using document fingerprints or shingles Sec
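A toy version of the check (an exact MD5 fingerprint over the page text is an illustrative stand-in; real systems use shingles or near-duplicate fingerprints such as simhash to catch pages that are only approximately identical):

import hashlib

seen_fingerprints = set()

def content_already_seen(page_text):
    """Return True if an identical page body has been indexed before."""
    fp = hashlib.md5(page_text.encode("utf-8")).hexdigest()
    if fp in seen_fingerprints:
        return True
    seen_fingerprints.add(fp)
    return False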

Introduction to Information Retrieval Filters and robots.txt  Filters – regular expressions for URLs to be crawled or not  Once a robots.txt file is fetched from a site, need not fetch it repeatedly  Doing so burns bandwidth and hits the web server unnecessarily  Cache robots.txt files Sec
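A sketch combining a URL filter with a per-host robots.txt cache (the .edu-only regex, the crawler name "mycrawler", and the cache dict are assumptions; network errors are not handled):

import re
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

URL_FILTER = re.compile(r"^https?://[^/]+\.edu/")   # example filter: only crawl .edu sites
_robots_cache = {}                                  # host -> parsed robots.txt

def allowed(url, user_agent="mycrawler"):
    if not URL_FILTER.match(url):
        return False
    host = urlparse(url).netloc
    if host not in _robots_cache:                   # fetch robots.txt at most once per host
        rp = RobotFileParser(f"http://{host}/robots.txt")
        rp.read()
        _robots_cache[host] = rp
    return _robots_cache[host].can_fetch(user_agent, url)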

Introduction to Information Retrieval Duplicate URL elimination  For a non-continuous (one-shot) crawl, test to see if an extracted+filtered URL has already been passed to the frontier  For a continuous crawl – see details of frontier implementation Sec

Introduction to Information Retrieval Distributing the crawler  Run multiple crawl threads, under different processes – potentially at different nodes  Geographically distributed nodes  Partition the hosts being crawled among the nodes  A hash of the host name is used for the partition  How do these nodes communicate? Sec
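A minimal illustration of hash-based partitioning by host (the node count and the choice of SHA-1 are assumptions):

import hashlib
from urllib.parse import urlparse

NUM_NODES = 4   # assumed number of crawler nodes

def node_for_url(url):
    """Assign every URL of a given host to the same crawler node."""
    host = urlparse(url).netloc
    digest = hashlib.sha1(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_NODES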

Introduction to Information Retrieval Communication between nodes  The output of the URL filter at each node is sent to the Duplicate URL Eliminator at all nodes (figure: the basic crawl architecture with a host splitter after the URL filter, sending URLs to other hosts and receiving URLs from other hosts). Sec

Introduction to Information Retrieval URL frontier: two main considerations  Politeness: do not hit a web server too frequently  Freshness: crawl some pages more often than others  E.g., pages (such as news sites) whose content changes often These goals may conflict with each other. (E.g., a simple priority queue fails – many links out of a page go to its own site, creating a burst of accesses to that site.) Sec

Introduction to Information Retrieval Politeness – challenges  Even if we restrict only one thread to fetch from a host, it can still hit that host repeatedly  Common heuristic: insert a time gap between successive requests to a host that is >> the time taken by the most recent fetch from that host Sec

Introduction to Information Retrieval URL frontier: Mercator scheme (figure): incoming URLs pass through a prioritizer into K front queues; a biased front queue selector and back queue router feed B back queues (a single host on each); the back queue selector serves crawl threads requesting URLs. Sec

Introduction to Information Retrieval Mercator URL frontier  URLs flow in from the top into the frontier  Front queues manage prioritization  Back queues enforce politeness  Each queue is FIFO Sec

Introduction to Information Retrieval Front queues (figure): the prioritizer appends incoming URLs to one of the K front queues (1 … K); the biased front queue selector pulls from them and hands URLs to the back queue router. Sec

Introduction to Information Retrieval Front queues  Prioritizer assigns to URL an integer priority between 1 and K  Appends URL to corresponding queue  Heuristics for assigning priority  Refresh rate sampled from previous crawls  Application-specific (e.g., “crawl news sites more often”) Sec

Introduction to Information Retrieval Biased front queue selector  When a back queue requests a URL (in a sequence to be described): picks a front queue from which to pull a URL  This choice can be round robin biased to queues of higher priority, or some more sophisticated variant  Can be randomized Sec

Introduction to Information Retrieval Back queues (figure): the biased front queue selector and back queue router feed the B back queues (1 … B); a heap over the back queues drives the back queue selector. Sec

Introduction to Information Retrieval Back queue invariants  Each back queue is kept non-empty while the crawl is in progress  Each back queue only contains URLs from a single host  Maintain a table from hosts to back queues (host name → back queue number, 1 … B) Sec

Introduction to Information Retrieval Back queue heap  One entry for each back queue  The entry is the earliest time t_e at which the host corresponding to the back queue can be hit again  This earliest time is determined from  Last access to that host  Any time buffer heuristic we choose Sec

Introduction to Information Retrieval Back queue processing  A crawler thread seeking a URL to crawl:  Extracts the root of the heap  Fetches URL at head of corresponding back queue q (look up from table)  Checks if queue q is now empty – if so, pulls a URL v from front queues  If there’s already a back queue for v’s host, append v to q and pull another URL from front queues, repeat  Else add v to q  When q is non-empty, create heap entry for it Sec
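A condensed sketch of this back-queue processing (the class layout, the fixed gap_factor politeness delay, and the bootstrap step are assumptions; Mercator's real implementation sets the gap from the duration of the last fetch and differs in other details):

import heapq, time
from collections import deque
from urllib.parse import urlparse

class MercatorFrontier:
    def __init__(self, num_front=3, num_back=9, gap_factor=10.0):
        # num_back ~ 3x the number of crawl threads, per the Mercator recommendation
        self.front = [deque() for _ in range(num_front)]   # front queues, index 0 = highest priority
        self.back = {q: deque() for q in range(num_back)}  # back queue id -> URLs of a single host
        self.host_of = {}                                  # back queue id -> host name
        self.queue_of_host = {}                            # host name -> back queue id
        self.heap = []                                     # (earliest next-fetch time t_e, back queue id)
        self.gap_factor = gap_factor                       # placeholder politeness gap in seconds

    def add_url(self, url, priority=0):
        self.front[priority].append(url)

    def _refill(self, q):
        """Pull URLs from front queues until empty back queue q is assigned to some host."""
        for fq in self.front:                              # biased: higher-priority front queues first
            while fq:
                url = fq.popleft()
                host = urlparse(url).netloc
                if host in self.queue_of_host:             # host already owns a back queue: append there
                    self.back[self.queue_of_host[host]].append(url)
                else:                                      # assign this host to the empty queue q
                    self.queue_of_host[host] = q
                    self.host_of[q] = host
                    self.back[q].append(url)
                    heapq.heappush(self.heap, (time.time(), q))
                    return True
        return False                                       # front queues exhausted

    def next_url(self):
        """Return the next URL that politeness allows a crawl thread to fetch."""
        if not self.heap:                                  # bootstrap: populate empty back queues
            for q, dq in self.back.items():
                if not dq and not self._refill(q):
                    break
        t_e, q = heapq.heappop(self.heap)                  # extract the root of the heap
        time.sleep(max(0.0, t_e - time.time()))            # wait until the host may be hit again
        url = self.back[q].popleft()                       # head of the corresponding back queue
        if self.back[q]:
            # Simplification: re-insert with a fixed gap instead of >> the last fetch duration
            heapq.heappush(self.heap, (time.time() + self.gap_factor, q))
        else:                                              # queue emptied: release host, refill from front
            del self.queue_of_host[self.host_of.pop(q)]
            self._refill(q)
        return url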

Introduction to Information Retrieval Number of back queues B  Keep all threads busy while respecting politeness  Mercator recommendation: three times as many back queues as crawler threads Sec

Connectivity servers

Introduction to Information Retrieval Connectivity Server  Support for fast queries on the web graph  Which URLs point to a given URL?  Which URLs does a given URL point to? Stores mappings in memory from  URL to outlinks, URL to inlinks  Applications  Crawl control  Web graph analysis  Connectivity, crawl optimization  Link analysis Sec. 20.4
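A toy in-memory version of these two mappings (the dict-of-lists representation is an illustrative assumption; real connectivity servers rely on the compressed representations described on the following slides):

from collections import defaultdict

class ConnectivityServer:
    def __init__(self):
        self.out = defaultdict(list)   # URL -> URLs it points to (outlinks)
        self.inc = defaultdict(list)   # URL -> URLs pointing to it (inlinks)

    def add_link(self, src, dst):
        self.out[src].append(dst)
        self.inc[dst].append(src)

    def outlinks(self, url):
        return self.out[url]

    def inlinks(self, url):
        return self.inc[url]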

Introduction to Information Retrieval Champion published work  Boldi and Vigna  Webgraph – a set of algorithms and a Java implementation  Fundamental goal – maintain node adjacency lists in memory  For this, compressing the adjacency lists is the critical component Sec. 20.4

Introduction to Information Retrieval Adjacency lists  The set of neighbors of a node  Assume each URL represented by an integer  E.g., for a 4 billion page web, need 32 bits per node  Naively, this demands 64 bits to represent each hyperlink Sec. 20.4

Introduction to Information Retrieval Adjacency list compression  Properties exploited in compression:  Similarity (between lists)  Locality (many links from a page go to “nearby” pages)  Use gap encodings in sorted lists  Distribution of gap values Sec. 20.4

Introduction to Information Retrieval Storage  Boldi/Vigna get down to an average of ~3 bits/link  (URL to URL edge)  How? Why is this remarkable? Sec. 20.4

Introduction to Information Retrieval Main ideas of Boldi/Vigna  Consider a lexicographically ordered list of all URLs Sec. 20.4

Introduction to Information Retrieval Boldi/Vigna  Each of these URLs has an adjacency list  Main idea: due to templates, the adjacency list of a node is similar to one of the 7 preceding URLs in the lexicographic ordering (why 7?)  Express its adjacency list in terms of one of these  E.g., consider these adjacency lists  1, 2, 4, 8, 16, 32, 64  1, 4, 9, 16, 25, 36, 49, 64  1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144  1, 4, 8, 16, 25, 36, 49, 64 – encode the last one as: reference the list 2 back (-2), remove 9, add 8 Sec. 20.4
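A sketch of that reference encoding applied to the example lists (the (offset, removed, added) tuple format is an illustrative assumption, not the exact WebGraph on-disk format):

def encode_with_reference(current, candidates):
    """Encode `current` relative to the most similar of up to 7 preceding lists.

    Returns (offset, removed, added): copy the list `offset` positions back,
    delete `removed`, insert `added`. offset 0 means "no reference used".
    """
    cur = set(current)
    best = (0, [], sorted(cur))                  # fallback: store the list explicitly
    best_cost = len(cur)
    for offset, ref in enumerate(reversed(candidates[-7:]), start=1):
        removed = sorted(set(ref) - cur)
        added = sorted(cur - set(ref))
        cost = len(removed) + len(added)
        if cost < best_cost:
            best, best_cost = (offset, removed, added), cost
    return best

lists = [
    [1, 2, 4, 8, 16, 32, 64],
    [1, 4, 9, 16, 25, 36, 49, 64],
    [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144],
]
print(encode_with_reference([1, 4, 8, 16, 25, 36, 49, 64], lists))
# -> (2, [9], [8]): copy the list two back, remove 9, add 8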

Introduction to Information Retrieval Gap encodings  Given a sorted list of integers x, y, z, …, represent by x, y-x, z-y, …  Compress each integer using a code  γ code – number of bits = 1 + 2⌊lg x⌋  δ code: …  Information-theoretic bound: 1 + ⌊lg x⌋ bits  ζ code: works well for integers drawn from a power law (Boldi & Vigna, DCC 2004) Sec. 20.4
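A sketch of the γ code for a positive integer x, plus gap-encoding of a sorted adjacency list (returning strings of bits is purely for illustration; real implementations pack bits, and the example list is an assumption):

def gamma_encode(x):
    """Elias gamma code: floor(lg x) zeros, then binary(x). Uses 1 + 2*floor(lg x) bits for x >= 1."""
    assert x >= 1
    binary = bin(x)[2:]                 # binary representation, e.g. 9 -> '1001'
    return "0" * (len(binary) - 1) + binary

def gamma_encode_gaps(sorted_ids):
    """Gap-encode a strictly increasing list of node ids, then gamma-code each gap."""
    gaps = [sorted_ids[0]] + [b - a for a, b in zip(sorted_ids, sorted_ids[1:])]
    return "".join(gamma_encode(g) for g in gaps)

print(gamma_encode(9))                      # 0001001 (7 bits = 1 + 2*floor(lg 9))
print(gamma_encode_gaps([3, 7, 11, 23]))    # gaps 3,4,4,12 -> 011 00100 00100 0001100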

Introduction to Information Retrieval Main advantages of BV  Depends only on locality in a canonical ordering  Lexicographic ordering works well for the web  Adjacency queries can be answered very efficiently  To fetch out-neighbors, trace back the chain of prototypes  This chain is typically short in practice (since similarity is mostly intra-host)  Can also explicitly limit the length of the chain during encoding  Easy to implement one-pass algorithm Sec. 20.4

Introduction to Information Retrieval Resources  IIR Chapter 20  Mercator: A scalable, extensible web crawler (Heydon et al. 1999)  A standard for robot exclusion  The WebGraph framework I: Compression techniques (Boldi et al. 2004)