Web Search Spidering

Spiders (Robots/Bots/Crawlers)
- Start with a comprehensive set of root URLs from which to begin the search.
- Follow all links on these pages recursively to find additional pages.
- Index each newly found page in an inverted index as it is encountered.
- May allow users to directly submit pages to be indexed (and crawled from).

Search Strategies
- Breadth-first search: explore all links on the current frontier before following links found on deeper pages.

Search Strategies (cont)
- Depth-first search: follow the first link on each newly found page immediately, backtracking when a path is exhausted.

Search Strategy Trade-Offs
- Breadth-first explores uniformly outward from the root page but requires memory of all nodes on the previous level (exponential in depth). This is the standard spidering method.
- Depth-first requires memory of only depth times branching factor (linear in depth) but gets "lost" pursuing a single thread.
- Both strategies can be implemented using the same queue of links (URLs); only where new links are inserted differs, as the sketch below shows.
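
A minimal sketch of that shared implementation, using java.util.ArrayDeque (the URLs are placeholders): appending new links to the back gives FIFO/breadth-first, pushing them onto the front gives LIFO/depth-first.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class FrontierDemo {
        public static void main(String[] args) {
            Deque<String> frontier = new ArrayDeque<>();

            // Breadth-first: append new links to the back, pop from the front (FIFO).
            frontier.addLast("http://example.com/a");
            frontier.addLast("http://example.com/b");
            System.out.println(frontier.pollFirst());  // -> .../a

            // Depth-first: push new links onto the front instead (LIFO).
            frontier.addFirst("http://example.com/c");
            System.out.println(frontier.pollFirst());  // -> .../c
        }
    }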

Avoiding Page Duplication
- Must detect when revisiting a page that has already been spidered (the web is a graph, not a tree).
- Must index visited pages efficiently to allow a rapid recognition test:
  - tree indexing (e.g. a trie), or
  - a hashtable.
- Index the page using its URL as a key (see the sketch below):
  - Must canonicalize URLs (e.g. delete an ending "/").
  - Does not detect duplicated or mirrored pages.
- Index the page using its textual content as a key:
  - Requires downloading the page first.
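
A minimal sketch of the hashtable option, keyed on canonicalized URLs (the canonicalize helper here is a stub; a fuller version appears under "Link Canonicalization" below):

    import java.util.HashSet;
    import java.util.Set;

    public class VisitedSet {
        private final Set<String> visited = new HashSet<>();

        // Returns true only the first time a URL (after canonicalization) is seen,
        // so the spider can skip pages it has already fetched.
        public boolean firstVisit(String url) {
            return visited.add(canonicalize(url));
        }

        // Stub; see the fuller sketch under "Link Canonicalization".
        private String canonicalize(String url) {
            return url.endsWith("/") ? url.substring(0, url.length() - 1) : url;
        }
    }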

Spidering Algorithm
Initialize queue (Q) with the initial set of known URLs.
Until Q is empty or a page or time limit is exhausted:
- Pop URL, L, from the front of Q.
- If L is not an HTML page (.gif, .jpeg, .ps, .pdf, .ppt, ...), continue loop.
- If L has already been visited, continue loop.
- Download page, P, for L.
- If P cannot be downloaded (e.g. 404 error, robot excluded), continue loop.
- Index P (e.g. add to an inverted index or store a cached copy).
- Parse P to obtain a list of new links, N.
- Append N to the end of Q.
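
A Java sketch of this loop. fetchPage, indexPage, and extractLinks are hypothetical placeholders standing in for pieces sketched elsewhere in these notes; the extension filter is deliberately crude.

    import java.util.ArrayDeque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Queue;
    import java.util.Set;

    public class SimpleSpider {
        public void crawl(Queue<String> q, int pageLimit) {
            Set<String> visited = new HashSet<>();
            int count = 0;
            while (!q.isEmpty() && count < pageLimit) {
                String url = q.poll();  // pop URL L from the front of Q
                // Crude non-HTML filter, standing in for the extension check above.
                if (url.endsWith(".gif") || url.endsWith(".jpeg") || url.endsWith(".pdf")) continue;
                if (!visited.add(url)) continue;       // already visited L
                String page = fetchPage(url);          // download page P for L
                if (page == null) continue;            // 404 error, robot excluded, ...
                indexPage(url, page);                  // add P to the inverted index
                q.addAll(extractLinks(url, page));     // append new links N to the end of Q
                count++;
            }
        }

        // Hypothetical helpers; sketches for them appear elsewhere in these notes.
        private String fetchPage(String url) { return null; }
        private void indexPage(String url, String page) { }
        private List<String> extractLinks(String baseUrl, String page) { return List.of(); }
    }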

Queueing Strategy
- How new links are added to the queue determines the search strategy.
- FIFO (append to the end of Q) gives breadth-first search.
- LIFO (add to the front of Q) gives depth-first search.
- Heuristically ordering Q gives a "focused crawler" that directs its search toward "interesting" pages (see the sketch below).
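
A sketch of the heuristic ordering using a priority queue; the scores and URLs are placeholders, and how the "interestingness" score is computed is left to the focused-spidering slides later.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class FocusedFrontier {
        // A link paired with a heuristic "interestingness" score.
        static class ScoredLink {
            final String url;
            final double score;
            ScoredLink(String url, double score) { this.url = url; this.score = score; }
        }

        public static void main(String[] args) {
            // Highest-scoring link comes out first.
            PriorityQueue<ScoredLink> frontier =
                new PriorityQueue<>(Comparator.comparingDouble((ScoredLink l) -> l.score).reversed());
            frontier.add(new ScoredLink("http://example.com/on-topic", 0.9));
            frontier.add(new ScoredLink("http://example.com/off-topic", 0.1));
            System.out.println(frontier.poll().url);  // -> the on-topic link
        }
    }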

Restricting Spidering
- Restrict the spider to a particular site: remove links to other sites from Q.
- Restrict the spider to a particular directory: remove links not in the specified directory.
- Obey page-owner restrictions (robot exclusion).

Link Extraction
- Must find all links in a page and extract their URLs:
  <a href="http://www.cs.utexas.edu/users/mooney/ir-course">
  <frame src="site-index.html">
- Must complete relative URLs using the current page's URL:
  <a href="proj3"> resolves to http://www.cs.utexas.edu/users/mooney/ir-course/proj3
  <a href="../cs343/syllabus.html"> resolves to http://www.cs.utexas.edu/users/mooney/cs343/syllabus.html

URL Syntax
- A URL has the following syntax:
  <scheme>://<authority><path>?<query>#<fragment>
- An authority has the syntax:
  <host>:<port-number>
- A query passes variable values from an HTML form and has the syntax:
  <variable>=<value>&<variable>=<value>...
- A fragment, also called a reference or a ref, is a pointer within the document to a point specified by an anchor tag of the form:
  <A NAME="<fragment>">
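
The standard library can pull these components apart; a small demonstration with java.net.URI (the URL is a placeholder):

    import java.net.URI;

    public class UrlParts {
        public static void main(String[] args) {
            URI u = URI.create("http://www.example.com:8080/docs/page.html?name=ir&year=2009#courses");
            System.out.println(u.getScheme());     // http
            System.out.println(u.getAuthority());  // www.example.com:8080
            System.out.println(u.getPath());       // /docs/page.html
            System.out.println(u.getQuery());      // name=ir&year=2009
            System.out.println(u.getFragment());   // courses
        }
    }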

Java Spider
- The spidering code is in the ir.webutils package.
- The generic spider is in the Spider class.
- It does a breadth-first crawl from a start URL and saves a copy of each page in a local directory.
- This directory can then be indexed and searched using the VSR InvertedIndex.
- Main method parameters:
  -u <start-URL>
  -d <save-directory>
  -c <page-count-limit>

Java Spider (cont.)
- Robot exclusion can be invoked to prevent crawling restricted sites/pages: -safe
- Specialized classes also restrict the search:
  - SiteSpider: restrict to the initial URL's host.
  - DirectorySpider: restrict to below the initial URL's directory.
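
A plausible invocation (hypothetical: the flags come from the slides above, but the exact command line assumes the course code is compiled and on the classpath; the start URL and save directory are placeholders):

    java ir.webutils.SiteSpider -u http://www.cs.utexas.edu/users/mooney/ -d ./cached-pages -c 100 -safe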

Spider Java Classes
(Flattened class diagram.) The main classes and their members: HTMLPage (fields: link, text, outLinks), HTMLPageRetriever (method: getHTMLPage()), Link (wraps a url as a URL/String), and LinkExtractor (field: page; method: extract()).

Link Canonicalization
- Equivalent variations of an ending directory are normalized by removing the ending slash:
  http://www.cs.utexas.edu/users/mooney/
  http://www.cs.utexas.edu/users/mooney
- Internal page fragments (refs) are removed:
  http://www.cs.utexas.edu/users/mooney/welcome.html#courses
  http://www.cs.utexas.edu/users/mooney/welcome.html
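
A minimal sketch covering just these two normalizations (real crawlers also lowercase the host name, resolve %-escapes, remove default ports, and so on):

    public class Canonicalizer {
        public static String canonicalize(String url) {
            int hash = url.indexOf('#');
            if (hash >= 0) url = url.substring(0, hash);  // drop internal page fragment
            if (url.endsWith("/")) url = url.substring(0, url.length() - 1);  // drop ending slash
            return url;
        }

        public static void main(String[] args) {
            System.out.println(canonicalize("http://www.cs.utexas.edu/users/mooney/welcome.html#courses"));
            // -> http://www.cs.utexas.edu/users/mooney/welcome.html
        }
    }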

Link Extraction in Java
- Java Swing contains an HTML parser.
- The parser uses "call-back" methods: pass it an object that implements these methods:
  handleText(char[] text, int position)
  handleStartTag(HTML.Tag tag, MutableAttributeSet attributes, int position)
  handleEndTag(HTML.Tag tag, int position)
  handleSimpleTag(HTML.Tag tag, MutableAttributeSet attributes, int position)
- When the parser encounters a tag or intervening text, it calls the appropriate method of this object.

Link Extraction in Java (cont.)
- In handleStartTag, if the tag is an "A" tag, take the HREF attribute value as the initial URL.
- Complete the URL using the base URL:
  new URL(URL baseURL, String relativeURL)
- This fails if baseURL ends in a directory name but that is not indicated by a final "/":
  - Append a "/" to baseURL if it does not end in a file name with an extension (and therefore is presumably a directory).
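
A self-contained sketch of this approach using the Swing parser directly (this is not the course's own LinkExtractor class; the sample HTML and base URL are placeholders):

    import java.io.IOException;
    import java.io.StringReader;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.List;
    import javax.swing.text.MutableAttributeSet;
    import javax.swing.text.html.HTML;
    import javax.swing.text.html.HTMLEditorKit;
    import javax.swing.text.html.parser.ParserDelegator;

    public class AnchorExtractor extends HTMLEditorKit.ParserCallback {
        private final URL baseUrl;
        private final List<URL> links = new ArrayList<>();

        AnchorExtractor(URL baseUrl) { this.baseUrl = baseUrl; }

        @Override
        public void handleStartTag(HTML.Tag tag, MutableAttributeSet attrs, int pos) {
            if (tag == HTML.Tag.A) {
                Object href = attrs.getAttribute(HTML.Attribute.HREF);
                if (href != null) {
                    try {
                        // Resolve the (possibly relative) HREF against the page's base URL.
                        links.add(new URL(baseUrl, href.toString()));
                    } catch (Exception e) { /* skip malformed URLs */ }
                }
            }
        }

        public static void main(String[] args) throws IOException {
            URL base = new URL("http://www.cs.utexas.edu/users/mooney/ir-course/");
            AnchorExtractor cb = new AnchorExtractor(base);
            new ParserDelegator().parse(
                new StringReader("<html><body><a href=\"proj3\">Project 3</a></body></html>"),
                cb, true);
            System.out.println(cb.links);  // [http://www.cs.utexas.edu/users/mooney/ir-course/proj3]
        }
    }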

Cached File with Base URL
- Store a copy of each page in a local directory for eventual indexing and retrieval.
- A BASE tag in the header section of an HTML file changes the base URL for all relative pointers:
  <BASE HREF="<base-URL>">
- This was specifically included in HTML for use in documents that have been moved from their original location.

Java Spider Trace
- As a simple demo, SiteSpider was used to collect 100 pages starting at the UT CS Faculty Page.
- See the trace at: http://www.cs.utexas.edu/users/mooney/ir-course/spider-trace.txt
- A larger crawl from the same page was used to assemble 800 pages that are cached at: /u/mooney/ir-code/corpora/cs-faculty/

Servlet Web Interface Demo
- A web interface for using VSR to search directories of cached HTML files is at: http://www.cs.utexas.edu/users/mooney/ir-course/search.html
- The Java Servlet code supporting this demo is at: /u/ml/servlets/irs/Search.java

Anchor Text Indexing
- Extract the anchor text (between <a> and </a>) of each link followed.
- Anchor text is usually descriptive of the document it points to.
- Add anchor text to the content of the destination page to provide additional relevant keyword indices.
- Used by Google:
  <a href="http://www.microsoft.com">Evil Empire</a>
  <a href="http://www.ibm.com">IBM</a>

Anchor Text Indexing (cont)
- Helps when the descriptive text of the destination page is embedded in image logos rather than in accessible text.
- Anchor text is often not useful, e.g. "click here".
- Adds content mostly to popular pages with many incoming links, increasing the recall of these pages.
- May even give higher weights to tokens from anchor text.

Robot Exclusion
- Web sites and pages can specify that robots should not crawl or index certain areas.
- Two components:
  - Robots Exclusion Protocol: site-wide specification of excluded directories.
  - Robots META tag: individual document tag to exclude indexing or following links.

Robots Exclusion Protocol
- The site administrator puts a "robots.txt" file at the root of the host's web directory, e.g.:
  http://www.ebay.com/robots.txt
  http://www.cnn.com/robots.txt
- The file is a list of excluded directories for a given robot (user-agent).
- To exclude all robots from the entire site:
  User-agent: *
  Disallow: /

Robot Exclusion Protocol Examples
- Exclude specific directories:
  User-agent: *
  Disallow: /tmp/
  Disallow: /cgi-bin/
  Disallow: /users/paranoid/
- Exclude a specific robot:
  User-agent: GoogleBot
  Disallow: /
- Allow a specific robot (an empty Disallow excludes nothing):
  User-agent: GoogleBot
  Disallow:

Robot Exclusion Protocol Details
- Use blank lines only to separate the disallowed directories of different User-agents.
- One directory per "Disallow" line.
- No regex patterns in directories.
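
A minimal sketch of enforcing these rules, assuming only the simple format above (a single "User-agent: *" group, literal directory prefixes, no extensions such as Allow lines or wildcards); a production crawler would handle far more cases:

    import java.util.ArrayList;
    import java.util.List;

    public class RobotsCheck {
        // Does any Disallow line in the "User-agent: *" group prefix-match the path?
        public static boolean isAllowed(String robotsTxt, String path) {
            List<String> disallowed = new ArrayList<>();
            boolean inStarGroup = false;
            for (String line : robotsTxt.split("\n")) {
                line = line.trim();
                if (line.toLowerCase().startsWith("user-agent:")) {
                    inStarGroup = line.substring(11).trim().equals("*");
                } else if (inStarGroup && line.toLowerCase().startsWith("disallow:")) {
                    String dir = line.substring(9).trim();
                    if (!dir.isEmpty()) disallowed.add(dir);  // empty Disallow excludes nothing
                }
            }
            for (String dir : disallowed)
                if (path.startsWith(dir)) return false;
            return true;
        }

        public static void main(String[] args) {
            String robots = "User-agent: *\nDisallow: /tmp/\nDisallow: /cgi-bin/\n";
            System.out.println(isAllowed(robots, "/tmp/x.html"));  // false
            System.out.println(isAllowed(robots, "/index.html"));  // true
        }
    }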

Robots META Tag
- Include a META tag in the HEAD section of a specific HTML document:
  <meta name="robots" content="none">
- The content value is a pair of values for two aspects:
  - index | noindex: allow/disallow indexing of this page.
  - follow | nofollow: allow/disallow following links on this page.

Robots META Tag (cont)
- Special values:
  all = index,follow
  none = noindex,nofollow
- Examples:
  <meta name="robots" content="noindex,follow">
  <meta name="robots" content="index,nofollow">
  <meta name="robots" content="none">

Robot Exclusion Issues
- The META tag is newer and less widely adopted than "robots.txt".
- These standards are conventions to be followed by "good robots."
- Companies have been prosecuted for "disobeying" these conventions and "trespassing" on private cyberspace.
- "Good robots" also try not to "hammer" individual sites with lots of rapid requests, which would amount to a "denial of service" attack.

Multi-Threaded Spidering
- The bottleneck is network delay in downloading individual pages.
- It is best to have multiple threads running in parallel, each requesting a page from a different host.
- Distribute URLs to threads so that requests are spread equitably across different hosts, maximizing throughput and avoiding overloading any single server.
- The early Google spider had multiple coordinated crawlers with about 300 threads each, together able to download over 100 pages per second.
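
A sketch of the thread-pool pattern using java.util.concurrent. fetchAndIndex is a hypothetical per-page worker, and a real crawler would also partition the URLs by host so that no single server is hammered:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ParallelSpider {
        public static void main(String[] args) throws InterruptedException {
            List<String> urls = List.of("http://example.com/a", "http://example.org/b");
            ExecutorService pool = Executors.newFixedThreadPool(8);  // 8 parallel downloaders
            for (String url : urls) {
                pool.submit(() -> fetchAndIndex(url));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }

        // Hypothetical worker: download one page and index it.
        private static void fetchAndIndex(String url) {
            System.out.println(Thread.currentThread().getName() + " fetching " + url);
        }
    }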

Directed/Focused Spidering
- Sort the queue to explore more "interesting" pages first.
- Two styles of focus:
  - Topic-Directed
  - Link-Directed

Topic-Directed Spidering
- Assume a desired topic description, or sample pages of interest, is given.
- Sort the queue of links by the similarity (e.g. cosine metric) of their source pages and/or anchor text to this topic description.
- Preferentially explores pages related to the specific topic.
- (Basis of the Robosurfer assignment in the AI course.)
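
A sketch of the cosine metric over simple bag-of-words term-frequency maps, usable to score a link's source page or anchor text against the topic description (the example terms and counts are placeholders; a real system would use TF-IDF weights):

    import java.util.Map;

    public class Cosine {
        public static double similarity(Map<String, Integer> a, Map<String, Integer> b) {
            double dot = 0, normA = 0, normB = 0;
            for (Map.Entry<String, Integer> e : a.entrySet()) {
                dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
                normA += e.getValue() * e.getValue();
            }
            for (int v : b.values()) normB += v * v;
            return (normA == 0 || normB == 0) ? 0 : dot / Math.sqrt(normA * normB);
        }

        public static void main(String[] args) {
            Map<String, Integer> topic = Map.of("web", 2, "crawler", 3);
            Map<String, Integer> page  = Map.of("web", 1, "crawler", 1, "java", 1);
            System.out.println(similarity(topic, page));  // ~0.80
        }
    }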

Link-Directed Spidering
- Monitor links and keep track of the in-degree and out-degree of each page encountered.
- Sort the queue to prefer popular pages with many incoming links (authorities), or
- sort the queue to prefer summary pages with many outgoing links (hubs).

Keeping Spidered Pages Up to Date
- The web is very dynamic: there are always new pages, updated pages, deleted pages, etc.
- Periodically check spidered pages for updates and deletions:
  - Just look at the header info (e.g. META tags on the last update) to determine whether the page has changed; only reload the entire page if needed (see the sketch below).
- Track how often each page is updated and preferentially return to pages that are historically more dynamic.
- Preferentially update pages that are accessed more often, to optimize the freshness of more popular pages.
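
One way to check header info without reloading the page: an HTTP HEAD request and the Last-Modified header (a sketch; the URL is a placeholder, and many servers omit the header, in which case we conservatively assume the page changed):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class FreshnessCheck {
        public static boolean changedSince(String url, long cachedTimestamp) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("HEAD");               // fetch headers only, no body
            long lastModified = conn.getLastModified();  // 0 if the header is absent
            conn.disconnect();
            // Without a Last-Modified header, assume the page changed.
            return lastModified == 0 || lastModified > cachedTimestamp;
        }
    }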