CS 430: Information Discovery

Presentation transcript:

CS 430: Information Discovery
Lecture 19: Web Search 1

Course Administration

Web Search

Goal: Provide information discovery for large amounts of open-access material on the web.

Challenges
• Volume of material -- several billion items, growing steadily
• Items created dynamically or in databases
• Great variety -- length, formats, quality control, purpose, etc.
• Inexperience of users -- range of needs
• Economic models to pay for the service

Strategies

Subject hierarchies
• Yahoo! -- use of human indexing

Web crawling + automatic indexing
• General -- Google, Teoma, ...

Mixed models
• Human-directed web crawling and automatic indexing -- BBC News

Components of a Web Search Service
• Web crawler
• Indexing system
• Search system

Considerations
• Economics
• Scalability

Economic Models

Subscription
Monthly fee with logon provides unlimited access (introduced by InfoSeek).

Advertising
Access is free, with display advertisements (introduced by Lycos). Can lead to distortion of results to suit advertisers.

Licensing
Costs of the company are covered by fees, licensing of software, and specialized services.

What is a Web Crawler?

Web Crawler
• A program for downloading web pages.
• Given an initial set of seed URLs, it recursively downloads every page that is linked from pages in the set.
• A focused web crawler downloads only those pages whose content satisfies some criterion.

Also known as a web spider.

Simple Web Crawler Algorithm

Basic Algorithm
Let S be the set of URLs to pages waiting to be indexed. Initially S is the singleton, s, known as the seed.
Take an element u of S and retrieve the page, p, that it references.
Parse the page p and extract the set of URLs L that it has links to.
Update S = S + L - u
Repeat as many times as necessary.
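
A minimal sketch of this algorithm in Python, using only the standard library. The seed URL, page limit, and regular-expression link extraction are illustrative simplifications, not part of the original algorithm description.

```python
import re
import urllib.request

def simple_crawl(seed, max_pages=100):
    S = {seed}                  # URLs waiting to be fetched
    seen = set()                # URLs already processed
    while S and len(seen) < max_pages:
        u = S.pop()             # take an element u of S
        seen.add(u)
        try:
            with urllib.request.urlopen(u, timeout=10) as resp:
                p = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue            # broken link, timeout, etc.
        # Crude link extraction; a real crawler would use an HTML parser.
        L = set(re.findall(r'href="(http[^"]+)"', p))
        S |= (L - seen)         # update S = S + L - u
    return seen
```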

Not so Simple…

Performance -- How do you crawl 1,000,000,000 pages?
Politeness -- How do you avoid overloading servers?
Failures -- Broken links, time-outs, spider traps.
Strategies -- How deep do we go? Depth-first or breadth-first? (A sketch of the choice follows below.)
Implementations -- How do we store and update S and the other data structures needed?
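
The depth-first versus breadth-first choice comes down to how the waiting set S is stored. A sketch using a double-ended queue; the seed and link URLs are hypothetical.

```python
from collections import deque

frontier = deque(["http://example.com/"])   # hypothetical seed

# Breadth-first: treat the frontier as a FIFO queue.
next_url = frontier.popleft()

# Depth-first: treat the same structure as a LIFO stack.
# next_url = frontier.pop()

# Newly extracted links are appended at the tail either way.
frontier.append("http://example.com/page2")
```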

What to Retrieve

No web crawler retrieves everything. Most crawlers retrieve only:
• HTML (leaves and nodes in the tree)
• ASCII clear text (only as leaves in the tree)
Some also retrieve PDF, PostScript, ...

Indexing after the crawl:
• Some index only the first part of long files (e.g., Google indexes about 100K words).
• Do you keep the files (e.g., the Google cache)?
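
A sketch of how a crawler might restrict itself to HTML and clear text and keep only the first part of long files. The accepted types and the byte cap are illustrative assumptions (the 100K-word limit mentioned above is in words, not bytes).

```python
import urllib.request

ACCEPTED = {"text/html", "text/plain"}   # retrieve only HTML and clear text
MAX_BYTES = 500_000                      # keep only the first part of long files

def fetch_if_wanted(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        ctype = resp.headers.get_content_type()   # e.g., "text/html"
        if ctype not in ACCEPTED:
            return None                           # skip PDF, PostScript, ...
        return resp.read(MAX_BYTES)               # truncate long documents
```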

Crawling to build an historical archive

Internet Archive: http://www.archive.org

A not-for-profit organization in San Francisco, created by Brewster Kahle, to collect and retain digital materials for future historians. Services include the Wayback Machine.

Building a Web Crawler: Links are not Easy to Extract

• Relative/absolute URLs
• CGI parameters
• Dynamic generation of pages
• Server-side scripting
• Server-side image maps
• Links buried in scripting code
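
Resolving relative links against the page's base URL is the most common of these issues. A sketch using the standard urllib.parse helpers; the filtering rules shown are illustrative.

```python
from urllib.parse import urljoin, urldefrag, urlparse

def normalize_link(base_url, href):
    """Turn a raw href into an absolute, fragment-free URL, or None."""
    absolute = urljoin(base_url, href)          # resolve relative paths
    absolute, _fragment = urldefrag(absolute)   # drop any #fragment
    if urlparse(absolute).scheme not in ("http", "https"):
        return None                             # skip javascript:, mailto:, ...
    return absolute

# Example: a relative link found on http://www.example.com/a/page.html
print(normalize_link("http://www.example.com/a/page.html", "../b/other.html#top"))
# -> http://www.example.com/b/other.html
```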

Robots Exclusion

Example file: /robots.txt

# robots.txt for http://www.example.com/

User-agent: *
Disallow: /cyberworld/map/
Disallow: /tmp/ # these will soon disappear
Disallow: /foo.html

# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:

See: http://www.robotstxt.org/wc/robots.html

Extracts from: http://www.nytimes.com/robots.txt

# robots.txt, nytimes.com 4/10/2002
User-agent: *
Disallow: /2000
Disallow: /2001
Disallow: /2002
Disallow: /learning
Disallow: /library
Disallow: /reuters
Disallow: /cnet
Disallow: /archives
Disallow: /indexes
Disallow: /weather
Disallow: /RealMedia
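
A polite crawler checks these rules before fetching a URL. Python's standard urllib.robotparser handles the format directly; the user-agent string is illustrative, and the expected results assume the example.com rules shown earlier.

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()   # fetches and parses the live robots.txt file

# Forbidden for an ordinary crawler by the "User-agent: *" rules above.
print(rp.can_fetch("MyCrawler/1.0", "http://www.example.com/cyberworld/map/index.html"))
# The empty Disallow line lets cybermapper go anywhere.
print(rp.can_fetch("cybermapper", "http://www.example.com/cyberworld/map/index.html"))
```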

High Performance Web Crawling

The web is growing fast:
• To crawl a billion pages a month, a crawler must download about 400 pages per second.
• Internal data structures must scale beyond the limits of main memory.

Politeness:
• A web crawler must not overload the servers that it is downloading from.
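
The 400 pages per second figure is simple arithmetic, assuming a 30-day month:

\[
\frac{10^9 \ \text{pages}}{30 \times 24 \times 3600 \ \text{s}} \approx 386 \ \text{pages per second}
\]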

Typical Crawling Setting
• Multi-machine, clustered environment
• Multi-threaded, parallel retrieval
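
A minimal sketch of multi-threaded parallel retrieval with the standard library. The thread count and URLs are illustrative; a production crawler coordinates many such workers across a cluster of machines.

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return url, resp.read()
    except Exception as exc:
        return url, exc

urls = ["http://example.com/", "http://example.org/"]   # hypothetical batch
with ThreadPoolExecutor(max_workers=20) as pool:        # many downloads in flight at once
    for url, result in pool.map(fetch, urls):
        print(url, "->", len(result) if isinstance(result, bytes) else result)
```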

Example: Mercator Crawler

A high-performance, production crawler:
• Used by the Internet Archive and others
• Being used by Cornell computer science for experiments in selective web crawling (automated collection development)

Developed by Allan Heydon, Marc Najork, and colleagues at the Compaq Systems Research Center. (A continuation of the work of Digital's AltaVista group.)

Mercator Design Parameters

• Extensible. Many components are plugins that can be rewritten for different tasks.
• Distributed. A crawl can be distributed in a symmetric fashion across many machines.
• Scalable. The size of in-memory data structures is bounded.
• High performance. Performance is limited by the speed of the Internet connection (e.g., with a 160 Mbit/sec connection, it downloads 50 million documents per day).
• Polite. Options of weak or strong politeness.
• Continuous. Will support continuous crawling.

Mercator: Main Components

• Crawling is carried out by multiple worker threads, e.g., 500 threads for a big crawl.
• The URL frontier stores the list of absolute URLs to download.
• The DNS resolver resolves domain names into IP addresses.
• Protocol modules download documents using the appropriate protocol (e.g., HTTP).
• The link extractor extracts URLs from pages and converts them to absolute URLs.
• The URL filter and duplicate URL eliminator determine which URLs to add to the frontier.
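
A skeletal sketch of how these components fit together in one worker thread. Every name here (frontier, dns, downloader, extractor, url_filter, host_of) is an illustrative stub, not Mercator's actual API.

```python
def worker_loop(frontier, dns, downloader, extractor, url_filter, seen, host_of):
    """One of many crawling threads; all component names are hypothetical stubs."""
    while True:
        url = frontier.get()                    # URL frontier hands out work
        ip = dns.resolve(host_of(url))          # DNS resolver
        page = downloader.fetch(url, ip)        # protocol module (e.g., HTTP)
        for link in extractor.links(page, base=url):          # link extractor
            if url_filter.accept(link) and link not in seen:  # filter + duplicate elimination
                seen.add(link)
                frontier.add(link)
```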

The URL Frontier

A repository with two pluggable methods: add a URL, get a URL.

Most web crawlers use variations of breadth-first traversal, but ...
• Most URLs on a web page are relative (about 80%).
• A single FIFO queue, serving many threads, would send many simultaneous requests to a single server.

Weak politeness guarantee: only one thread is allowed to contact a particular web server at a time.

Stronger politeness guarantee: maintain n FIFO queues, each for a single host, which feed the queues for the crawling threads according to rules based on priority and politeness factors.
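
A sketch of the stronger politeness idea: one FIFO queue per host, with a URL handed out only when no other thread is currently contacting that host. The class and its bookkeeping are illustrative, not Mercator's design; thread synchronization (locking) is omitted for brevity.

```python
from collections import defaultdict, deque
from urllib.parse import urlparse

class PoliteFrontier:
    def __init__(self):
        self.queues = defaultdict(deque)   # one FIFO queue per host
        self.busy = set()                  # hosts a thread is currently contacting

    def add(self, url):
        self.queues[urlparse(url).netloc].append(url)

    def get(self):
        """Return (host, url) for some idle host, or None if nothing is available."""
        for host, queue in self.queues.items():
            if queue and host not in self.busy:
                self.busy.add(host)
                return host, queue.popleft()
        return None

    def done(self, host):
        self.busy.discard(host)            # release the host for other threads
```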

Duplicate URL Elimination

Duplicate URLs are not added to the URL frontier. This requires an efficient data structure to store all URLs that have been seen and to check each new URL against them.

In memory: represent each URL by an 8-byte checksum and maintain an in-memory hash table of URLs. Requires 5 gigabytes for 1 billion URLs.

Disk based: a combination of a disk file and an in-memory cache, with batch updating to minimize disk head movement.
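
A sketch of the in-memory variant: reduce each URL to an 8-byte checksum and keep the checksums in a set. Using the first 8 bytes of an MD5 digest is an illustrative choice, not Mercator's actual fingerprint function.

```python
import hashlib

seen_checksums = set()

def url_seen(url):
    """Return True if this URL was seen before; otherwise record it."""
    checksum = hashlib.md5(url.encode("utf-8")).digest()[:8]   # 8-byte checksum
    if checksum in seen_checksums:
        return True
    seen_checksums.add(checksum)
    return False
```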

Domain Name Lookup

Resolving domain names to IP addresses is a major bottleneck of web crawlers.

Approach:
• Separate DNS resolver and cache on each crawling computer.
• Create a multi-threaded version of the DNS code (BIND).

These changes reduced DNS lookup from 70% to 14% of each thread's elapsed time.
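
A sketch of a per-machine DNS cache in front of the resolver. Here socket.gethostbyname stands in for the customized, multi-threaded BIND code described above.

```python
import socket
import threading

_dns_cache = {}
_lock = threading.Lock()

def resolve(host):
    """Resolve a host name to an IP address, caching results in memory."""
    with _lock:
        if host in _dns_cache:
            return _dns_cache[host]
    ip = socket.gethostbyname(host)    # blocking lookup, done outside the lock
    with _lock:
        _dns_cache[host] = ip
    return ip
```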

Limitations of Web Crawling

• Time delay. Typically a monthly cycle. Crawlers are ineffective with sites that change rapidly, e.g., news.
• Pages not linked to. Crawlers find only those pages that are linked by paths from their seeds.
• Depth of crawl. Crawlers do not index every page on a site (algorithms to avoid crawler traps).

but ...

Creators of information are increasingly organizing it to be accessible to the web search services (e.g., Springer-Verlag).

Research Topics in Web Crawling

• How frequently to crawl and what strategies to use.
• Identification of anomalies and crawling traps.
• Strategies for crawling based on the content of web pages (focused and selective crawling).
• Duplicate detection.

Further Reading

Allan Heydon and Marc Najork, Mercator: A Scalable, Extensible Web Crawler. Compaq Systems Research Center, June 26, 1999.
http://www.research.compaq.com/SRC/mercator/papers/www/paper.html