Googling
Hongjun Song, Computer Science, The University of Memphis
COMP 7100: Computers in the Information Society

Some searches...
- "David Evans"
- "Dave Evans"
- "idiot"
- "lawn lighting": tomorrow at 6pm (but Google doesn't know that!)

Building a Web Search Engine
- Database of web pages
  - Crawling the web: collecting pages and links
  - Indexing them efficiently
- Responding to searches
  - How to find documents that match a query
  - How to rank the "best" documents

Crawling

Crawler:

activeURLs = ["www.yahoo.com"]
while len(activeURLs) > 0:
    newURLs = []
    for URL in activeURLs:
        page = downloadPage(URL)
        newURLs += extractLinks(page)
    activeURLs = newURLs

Problems:
- Will keep revisiting the same pages
- Will take a very long time to get a good view of the web
- Will annoy web server admins
- downloadPage and extractLinks must be very robust
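
The slide takes downloadPage and extractLinks as given. A minimal sketch of what they might look like, using only the Python standard library: the function names match the pseudocode above, the base argument for resolving relative links is an addition of this sketch, and the error handling is deliberately coarse.

from urllib.request import urlopen
from urllib.parse import urljoin
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    # Collects the href attribute of every <a> tag, resolved against a base URL.
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base, value))

def downloadPage(URL):
    # Fetch a page and return its text; return "" if anything goes wrong.
    try:
        with urlopen(URL, timeout=10) as response:
            return response.read().decode("utf-8", errors="replace")
    except Exception:
        return ""

def extractLinks(page, base="http://www.yahoo.com"):
    parser = LinkExtractor(base)
    parser.feed(page)
    return parser.links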

Crawling

Crawler, avoiding repeat visits:

activeURLs = ["www.yahoo.com"]
visitedURLs = []
while len(activeURLs) > 0:
    newURLs = []
    for URL in activeURLs:
        visitedURLs += [URL]
        page = downloadPage(URL)
        newURLs += [u for u in extractLinks(page) if u not in visitedURLs]
    activeURLs = newURLs

What is the complexity?
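
As written, visitedURLs is a list, so every "not in visitedURLs" test costs time proportional to the number of pages already visited; over a whole crawl that is quadratic in the number of URLs seen. A hedged sketch of the same loop using a set, whose membership tests are expected O(1) (downloadPage and extractLinks as sketched above):

def crawl(seeds):
    visitedURLs = set()                        # set membership is expected O(1), vs O(n) for a list
    activeURLs = list(seeds)
    while len(activeURLs) > 0:
        newURLs = set()
        for URL in activeURLs:
            visitedURLs.add(URL)
            page = downloadPage(URL)
            newURLs.update(extractLinks(page, URL))
        activeURLs = list(newURLs - visitedURLs)   # set difference plays the role of "- visitedURLs"
    return visitedURLs

In principle, crawl(["http://www.yahoo.com"]) walks outward from the seed until no new URLs remain.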

Distributed Crawler

activeURLs = ["www.yahoo.com"]
visitedURLs = []
while len(activeURLs) > 0:
    newURLs = []
    parfor URL in activeURLs:            # parfor: process the URLs in parallel
        visitedURLs += [URL]
        page = downloadPage(URL)
        newURLs += [u for u in extractLinks(page) if u not in visitedURLs]
    activeURLs = newURLs

Is this as "easy" as distributing the search for aliens?
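
Python has no parfor, so one way to approximate the slide is a thread pool; the shared visitedURLs set then needs a lock, which is exactly what makes this harder than an embarrassingly parallel computation. A sketch under those assumptions (downloadPage and extractLinks as above):

from concurrent.futures import ThreadPoolExecutor
from threading import Lock

def crawl_parallel(seeds, workers=8):
    visitedURLs = set()
    lock = Lock()                                      # protects the shared visited set
    activeURLs = list(seeds)

    def visit(URL):
        with lock:
            visitedURLs.add(URL)
        page = downloadPage(URL)
        return extractLinks(page, URL)

    while len(activeURLs) > 0:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(visit, activeURLs))   # the "parfor"
        newURLs = set()
        for links in results:
            newURLs.update(links)
        activeURLs = list(newURLs - visitedURLs)
    return visitedURLs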

Building a Web Search Engine
- Database of web pages
  - Crawling the web: collecting pages and links
  - Indexing them efficiently
- Responding to searches
  - How to find documents that match a query
  - How to rank the "best" documents

Building an Index
- What if we just stored all the pages?
- Answering a query would then be Θ(size of the database): we would need to look at every character in the database
- For Google: about 4 billion pages (the actual size is now considered a corporate secret) * 60 KB (average web page size) = ~184 trillion characters
- Linear is not nearly good enough when n is in the trillions

Reverse Index

Word      Locations
...
"David"   [..., http://www.cs.virginia.edu/~evans/index.html:12, ...]
"Evans"   [..., http://www.cs.virginia.edu/~evans/index.html:19, ...]

What is the time complexity of search now?
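
A toy version of building such a reverse (inverted) index: the input format (URL mapped to page text) and whitespace tokenization are simplifying assumptions, but the word-to-locations mapping is the structure the slide shows.

from collections import defaultdict

def buildReverseIndex(pages):
    # pages: dict mapping URL -> page text
    # returns: dict mapping word -> list of (URL, position) pairs
    index = defaultdict(list)
    for url, text in pages.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((url, position))
    return index

pages = {"http://www.cs.virginia.edu/~evans/index.html": "David Evans Computer Science ..."}
index = buildReverseIndex(pages)
print(index["david"])    # [('http://www.cs.virginia.edu/~evans/index.html', 0)]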

Best Possible Searching

The searching problem:
- Input: a target key, and a list of n <key, value> pairs sorted by key using a comparison function cf
- Output: if key is in the list, the value associated with key; otherwise, "not found"

What is the best possible solution to the general searching problem?

The sorting problem is Ω(n log n)
- There are n! possible orderings
- Each comparison can eliminate at best half of them
- So the best possible sorting procedure is Ω(log2(n!))
- Stirling's approximation: n! = Ω((n/e)^n)
- So the best possible sorting procedure is Ω(log((n/e)^n)) = Ω(n log n)
- Recall that log turns multiplication into addition: log(mn) = log m + log n, so log(a^n) = n log a
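
A quick numeric sanity check of that bound: lgamma(n + 1) is ln(n!), so dividing by ln 2 gives log2(n!), which tracks n * log2(n) as n grows.

import math

for n in (10, 1000, 1_000_000):
    log2_factorial = math.lgamma(n + 1) / math.log(2)    # log2(n!)
    print(n, round(log2_factorial), round(n * math.log2(n)))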

The searching problem is Θ(log n)
- It is Ω(log n): each comparison can eliminate at best half of all the elements from consideration
- It is O(log n): we know a procedure (binary search) that solves it in Θ(log n)
- For Google: n is the number of distinct words on the web (hundreds of millions?)
- Θ(log n) is not good enough
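
The Θ(log n) procedure referred to is binary search over the sorted <key, value> list; a minimal sketch:

def binarySearch(key, pairs):
    # pairs: list of (key, value) tuples, sorted by key
    # returns the value associated with key, or None for "not found"
    lo, hi = 0, len(pairs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        k, v = pairs[mid]
        if k == key:
            return v
        elif k < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return None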

Faster Searching?
- The proof that searching is Ω(log n) relied on knowing that the best a comparison can do is eliminate half the entries
- Can we do better?
  - Knowing nothing beyond the comparison function: no
  - Knowing something about the keys themselves: yes
- What if one comparison can eliminate Θ(n) of the entries?

Bin Searching

First Letter   Items
a              [<"aardvark", [http://www.aardvarksareus.com, ...]>, ...]
b              [...]
...
z              [..., <"zweitgeist", [...]>]

def binsearch(key, table):
    return search(key, table[key[0]])

What is the time complexity of binsearch?

Searching in O(1)
- To do better than Θ(log n), the number of bins must scale with n
- The average number of elements in a bin must be O(1)
- One comparison must eliminate Θ(n) of the elements

Hash Tables
- Bin = H(key, number of bins)
- H is a hash function
- We've seen cryptographic hash functions, where H must be collision resistant
- Here we don't need that; we just need H to distribute the keys well across the bins
- Finding a good H is still difficult
- You can download Google's from http://goog-sparsehash.sourceforge.net/
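
A toy illustration of Bin = H(key, number of bins), using Python's built-in hash in place of a carefully chosen H (Google's real hash function is different, and the bin count here is arbitrary):

NBINS = 1024
table = [[] for _ in range(NBINS)]         # one bin per hash value

def H(key, nbins=NBINS):
    return hash(key) % nbins               # any hash that spreads keys evenly will do

def insert(key, value):
    table[H(key)].append((key, value))

def lookup(key):
    for k, v in table[H(key)]:             # expected O(1) if keys are well distributed
        if k == key:
            return v
    return None

insert("aardvark", 1024235)
print(lookup("aardvark"))                  # 1024235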

Google's Lexicon
- 1998: 14 million words (many more today)
- Look up a word with H(word, nbins); this maps it to a WordID

Key         Words
0           [<"aardvark", 1024235>, ...]
1           [<"aaa", 224155>, ..., <"zzz", 29543>]
...
nbins - 1   [<"abba", 25583>, ..., <"zeit", 50395>]

Google's Reverse Index (from the 1998 paper... may have changed some since then)

WordID      ndocs   pointer (into the inverted barrels)
00000000    3       ...
00000001    15      ...
...
16777215    105     ...

Lexicon: 293 MB (1998)
Inverted Barrels: 41 GB (1998)

Inverted Barrels

Each entry in a barrel:
- docid (27 bits), nhits (5 bits), followed by the hits (16 bits each)

Plain hit (16 bits):
- capitalized: 1 bit
- font size: 3 bits
- position: 12 bits (positions within the first 4095 characters; everything beyond that is lumped together)

Anchors and titles carry extra info (and use fewer position bits).
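
A rough illustration of packing one 16-bit "plain hit" (1 capitalization bit, 3 font-size bits, 12 position bits); the exact field order here is an assumption, not taken from the paper.

def packPlainHit(capitalized, font_size, position):
    assert 0 <= font_size < 8 and 0 <= position < 4096
    return (int(capitalized) << 15) | (font_size << 12) | position

def unpackPlainHit(hit):
    return bool(hit >> 15), (hit >> 12) & 0x7, hit & 0xFFF

hit = packPlainHit(True, 5, 130)
print(unpackPlainHit(hit))                 # (True, 5, 130)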

Building a Web Search Engine
- Database of web pages
  - Crawling the web: collecting pages and links
  - Indexing them efficiently
- Responding to searches
  - How to find documents that match a query
  - How to rank the "best" documents

Finding the "Best" Documents
- Humans rate them: "Jerry and David's Guide to the World Wide Web" (became Yahoo!)
- Machines rate them: count the number of occurrences of the keyword
  - Easy for sites to rig this
  - Machine language understanding is not good enough
- Business model: whoever pays you the most is listed first

Random Walk Model

Initialize all page ranks to 0
p = select a random URL
for as long as you feel like:
    p.rank = p.rank + 1
    p = select a random link from Links(p)

Eventually, the ranks measure the probability that a random web surfer would encounter a page.
Problems with this?
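
A small simulation of the random-walk model on a made-up three-page link graph; Links(p) from the slide corresponds to the adjacency lists below, and the number of steps is arbitrary.

import random

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}   # toy link graph: page -> pages it links to

rank = {p: 0 for p in links}
p = random.choice(list(links))                      # select a random URL
for _ in range(100000):                             # "for as long as you feel like"
    rank[p] += 1
    p = random.choice(links[p])                     # select a random link from Links(p)

total = sum(rank.values())
print({page: round(count / total, 3) for page, count in rank.items()})

One of the problems the slide is asking about shows up immediately in this sketch: a page with no outgoing links would leave random.choice with an empty list.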

Back Links
http://www.google.com/search?hl=en&lr=&q=link%3Awww.cs.virginia.edu%2F%7Eevans%2Findex.html&btnG=Search
= 219 backlinks

Counting Back Links
- link:http://www.deainc.com/ = 109 backlinks (hey, I should be first!)
- Back links are not a good measure:
  - Most of mine are from my own pages, but Google doesn't always know that
  - Some pages are more important than others

PageRank

Weight the back links by the popularity of the linking page:

def PageRank(u):
    rank = 0
    for b in BackLinks(u):
        rank = rank + PageRank(b) / Links(b)
    return rank

Would this work?

Converging PageRank
- Ranks of all pages depend on the ranks of all other pages
- Keep recalculating ranks until they converge

def CalculatePageRanks(urls):
    initially, every rank is 1
    for as many times as necessary:
        calculate a new rank for each page (using the old ranks of the other pages)
        replace the old ranks with the new ranks

How do the initial ranks affect the results?
How many iterations are necessary?
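
A hedged, concrete version of that pseudocode on a toy graph, using the simplified update from the previous slide, rank(p) = sum over back links b of rank(b) / Links(b); the real PageRank formula also includes a damping factor, which this sketch omits.

def calculatePageRanks(links, iterations=50):
    # links: dict mapping each page to the list of pages it links to
    ranks = {p: 1.0 for p in links}                       # initially, every rank is 1
    for _ in range(iterations):                           # "for as many times as necessary"
        newRanks = {}
        for page in links:
            backlinks = [b for b in links if page in links[b]]
            newRanks[page] = sum(ranks[b] / len(links[b]) for b in backlinks)
        ranks = newRanks                                  # replace the old ranks with the new ranks
    return ranks

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(calculatePageRanks(links))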

PageRank in Practice
- Crawlable web (1998): 150 million pages, 1.7 billion links
- A database of 322 million links converges in ~50 iterations
- Initialization matters:
  - All pages = 1: very democratic; models a browser equally likely to start on any random page
  - www.yahoo.com = 1, all others = 0: more like what Google probably uses

Query Work
- To respond to one query (2002): read ~100 MB of data, tens of billions of CPU cycles
- Google in 2002: 15,000 commodity PCs
  - Racks of 88 2-GB PCs, $278,000 per rack
  - Power: 10 MWh/month (~$1,500)
- With 15,000 PCs there will always be some with faults: load balancing, data partitioning

Building a Web Search Engine
- Database of web pages
  - Crawling the web: collecting pages and links
  - Indexing them efficiently
- Responding to searches
  - How to find documents that match a query
  - How to rank the "best" documents

Ready to go become the next Google?

Charge
Before becoming the next Google, you need to finish COMP 7100.