Network Science and the Web. Networked Life, CIS 112, Spring 2008. Prof. Michael Kearns.

The Web as Network
Consider the web as a network:
– vertices: individual (HTML) pages
– edges: hyperlinks between pages
– we will view it as both a directed and an undirected graph
What is the structure of this network?
– connected components
– degree distributions
– etc.
What does it say about the people building and using it?
– page and link generation
– visitation statistics
What are the algorithmic consequences?
– web search
– community identification

Graph Structure in the Web [Broder et al. paper]
Report on the results of two massive "web crawls"
Executed by AltaVista in May and October 1999
Details of the crawls:
– automated script following hyperlinks (URLs) from pages already found
– large set of starting points collected over time
– crawl implemented as breadth-first search
– must deal with web spam, infinite paths, timeouts, duplicates, etc.
May '99 crawl:
– 200 million pages, 1.5 billion links
Oct '99 crawl:
– 271 million pages, 2.1 billion links
Unaudited, self-reported Sep '03 stats:
– 3 major search engines claim > 3 billion pages indexed

Five Easy Pieces
The authors did two kinds of breadth-first search:
– ignoring link direction → weak connectivity
– following only forward links → strong connectivity
They then identify five different regions of the web:
– strongly connected component (SCC): any page in the SCC can reach any other by a directed path
– component IN: can reach any page in the SCC by a directed path, but not the reverse
– component OUT: can be reached from any page in the SCC by a directed path, but not the reverse
– TENDRILS: weakly connected to all of the above, but can neither reach the SCC nor be reached from it by a directed path (e.g. pages pointed to by IN)
– SCC + IN + OUT + TENDRILS form the weakly connected component (WCC)
– everything else is called DISC (disconnected from the above)
– here is a visualization of this structure
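
The two searches on the slide can be sketched as a single BFS routine run with two different neighbor functions; the toy graph below is a made-up miniature of the IN/SCC/OUT/DISC picture, not the crawl data itself:

```python
# BFS with pluggable neighbor function: forward links only (strong
# reachability) vs. links in either direction (weak reachability).
# Graph and page names are illustrative assumptions.
from collections import deque

def bfs(start, neighbors):
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

links = {"in": ["scc1"], "scc1": ["scc2"], "scc2": ["scc1", "out"],
         "out": [], "disc": []}

# strong reachability: follow forward links only
forward = lambda p: links[p]
# weak reachability: ignore link direction
undirected = lambda p: links[p] + [q for q in links if p in links[q]]

reach = bfs("scc1", forward)     # cannot reach "in" going forward
weak = bfs("scc1", undirected)   # everything except the disconnected page
```

Run from inside the SCC, the forward search misses IN entirely, while the undirected search picks up everything but DISC, mirroring the five-piece decomposition.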

Size of the Five
SCC: ~56M pages, ~28%
IN: ~43M pages, ~21%
OUT: ~43M pages, ~21%
TENDRILS: ~44M pages, ~22%
DISC: ~17M pages, ~8%
WCC > 91% of the web --- the giant component
One interpretation of the pieces:
– SCC: the heart of the web
– IN: newer sites not yet discovered and linked to
– OUT: "insular" pages such as corporate web sites

Diameter Measurements
Directed worst-case diameter of the SCC:
– at least 28
Directed worst-case diameter of IN → SCC → OUT:
– at least 503
Over 75% of the time, there is no directed path between a random start and finish page in the WCC
– when there is a directed path, its average length is 16
Average undirected distance in the WCC is 7
Moral:
– the web is a "small world" when we ignore direction
– otherwise the picture is more complex

Degree Distributions
They are, of course, heavy-tailed
Power-law distribution of component sizes
– consistent with the Erdos-Renyi model?
Undirected connectivity of the web is not reliant on "connectors"
– what happens as we remove high-degree vertices?

Digression: "Collective Intelligence Foo"
Sponsored by O'Reilly publishers; interesting history
Interesting attendees:
– Tim O'Reilly; Rod Brooks; Larry Page; many others
– lots of CI start-ups
Interesting topics:
– Web 2.0, Wikipedia, recommender systems
– prediction markets and corporate apps
– how to design such systems?
– how to "trick" people into working for "free"? (ESP Game and CAPTCHAs)
– decomposing more complex problems (see behavioral experiments to come)
– bad actors and malicious behavior
– ants

Beyond Macroscopic Structure
Such studies tell us the coarse overall structure of the web
Use and construction of the web are more fine-grained:
– people browse the web for certain information or topics
– people build pages that link to related or "similar" pages
How do we quantify and analyze this more detailed structure?
We'll examine two related examples:
– Kleinberg's hubs and authorities: automatic identification of "web communities"
– PageRank: automatic identification of "important" pages; one of the main criteria used by Google
– both rely mainly on the link structure of the web
– both have an algorithm and a supporting theory

Hubs and Authorities
Suppose we have a large collection of pages on some topic
– possibly the results of a standard web search
Some of these pages are highly relevant, others not at all
How can we automatically identify the important ones?
What's a good definition of importance?
Kleinberg's idea: there are two kinds of important pages:
– authorities: highly relevant pages
– hubs: pages that point to lots of relevant pages
If you buy this definition, it further stands to reason that:
– a good hub should point to many good authorities
– a good authority should be pointed to by many good hubs
– this logic is, of course, circular
We need some math and an algorithm to sort it out

The HITS System (Hyperlink-Induced Topic Search)
Given a user-supplied query Q:
– assemble a root set S of pages (e.g. the first 200 pages returned by AltaVista)
– grow S to a base set T by adding all pages linked (in either direction) to S
– might bound the number of links considered from each page in S
Now consider the directed subgraph induced on just the pages in T
For each page p in T, define its
– hub weight h(p); initialize all to 1
– authority weight a(p); initialize all to 1
Repeat "forever":
– a(p) := sum of h(q) over all pages q → p
– h(p) := sum of a(q) over all pages p → q
– renormalize all the weights
This algorithm always converges!
– the computed weights are related to eigenvectors of the connectivity matrix
– further substructure is revealed by different eigenvectors
Here are some examples
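
The iteration above can be sketched in a few lines; this is a minimal toy version, with a made-up four-page graph in which two pure hubs point at two pure authorities:

```python
# A minimal sketch of the HITS iteration; the graph, page names, and
# iteration count are illustrative assumptions, not Kleinberg's data.
import math

def hits(edges, pages, iterations=50):
    """edges: set of (q, p) pairs meaning page q links to page p."""
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # a(p) := sum of h(q) over all pages q -> p
        new_auth = {p: sum(hub[q] for (q, r) in edges if r == p) for p in pages}
        # h(p) := sum of a(q) over all pages p -> q
        new_hub = {p: sum(new_auth[q] for (r, q) in edges if r == p) for p in pages}
        # renormalize all the weights (L2 norm)
        na = math.sqrt(sum(v * v for v in new_auth.values()))
        nh = math.sqrt(sum(v * v for v in new_hub.values()))
        auth = {p: v / na for p, v in new_auth.items()}
        hub = {p: v / nh for p, v in new_hub.items()}
    return hub, auth

pages = ["hub1", "hub2", "auth1", "auth2"]
edges = {("hub1", "auth1"), ("hub1", "auth2"),
         ("hub2", "auth1"), ("hub2", "auth2")}
hub, auth = hits(edges, pages)
```

As expected, all the authority weight lands on auth1/auth2 and all the hub weight on hub1/hub2, reflecting the mutually reinforcing definition.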

The PageRank Algorithm
Let's define a measure of page importance we will call the rank
Notation: for any page p, let
– N(p) be the number of forward links (pages p points to)
– R(p) be the (to-be-defined) rank of p
Idea: important pages distribute their importance over their forward links
So we might try defining
– R(p) := sum of R(q)/N(q) over all pages q → p
– we can again define an iterative algorithm for computing the R(p)
– if it converges, the solution again has an eigenvector interpretation
– problem: cycles accumulate rank but never distribute it
The fix:
– R(p) := [sum of R(q)/N(q) over all pages q → p] + E(p)
– E(p) is some external or exogenous measure of importance
– some technical details omitted here (e.g. normalization)
Let's play with the PageRank calculator
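
The fixed-up recurrence can be sketched directly; here is a minimal toy version, taking E(p) to be uniform and scaling by a conventional damping factor of 0.85 (both are illustrative choices, not part of the slide):

```python
# A minimal sketch of the PageRank iteration; the three-page graph and
# the damping factor are made-up illustrative choices.
def pagerank(links, iterations=100, damping=0.85):
    """links: dict mapping each page to the list of pages it points to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # sum of R(q)/N(q) over all pages q -> p
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            # the E(p) term: uniform exogenous importance, keeping sum = 1
            new_rank[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
rank = pagerank(links)
# the ranks form a probability distribution; C, with two in-links,
# collects the most rank in this graph
```

Note how the E(p) term resolves the slide's cycle problem: rank constantly leaks back in from outside the cycle, so no subset of pages can hoard it.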

The "Random Surfer" Model
Let's suppose that E(p) sums to 1 (normalized)
Then the resulting PageRank solution R(p) will
– also be normalized
– can be interpreted as a probability distribution
R(p) is the stationary distribution of the following process:
– starting from some random page, just keep following random links
– if stuck in a loop, jump to a random page drawn according to E(p)
– so the surfer periodically gets "bored" and jumps to a new page
– E(p) can thus be personalized for each surfer
An important component of Google's search criteria
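
The stationary-distribution claim can be checked empirically: simulate the surfer and compare visit frequencies to the iteratively computed ranks. This is a toy simulation with a uniform E(p), a made-up boredom probability of 0.15, and a fixed seed for reproducibility:

```python
# Simulate the random surfer: usually follow a random forward link,
# occasionally "get bored" and jump to a page drawn from E(p).
# Graph, boredom probability, and step count are illustrative choices.
import random
from collections import Counter

def random_surfer(links, steps=200_000, boredom=0.15, seed=0):
    rng = random.Random(seed)
    pages = list(links)
    page = rng.choice(pages)
    visits = Counter()
    for _ in range(steps):
        visits[page] += 1
        if rng.random() < boredom or not links[page]:
            page = rng.choice(pages)        # jump per uniform E(p)
        else:
            page = rng.choice(links[page])  # follow a random forward link
    # empirical visit frequencies approximate the stationary distribution
    return {p: visits[p] / steps for p in pages}

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
freq = random_surfer(links)
```

With enough steps, the frequencies settle near the PageRank values for the same graph, which is exactly the stationary-distribution interpretation.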

But What About Content?
PageRank and Hubs & Authorities are
– both based purely on link structure
– often applied to a pre-computed set of pages filtered for content
So how do (say) search engines do this filtering?
This is the domain of information retrieval

Basics of Information Retrieval
Represent a document as a "bag of words":
– for each word in the English language, count its number of occurrences
– so d[i] is the number of times the i-th word appears in the document
– usually ignore common words (the, and, of, etc.)
– usually do some stemming (e.g. "washed" → "wash")
– vectors are very long (~100Ks of dimensions) but very sparse
– need a special representation exploiting sparseness
Note all that we ignore or throw away:
– the order in which the words appear
– the grammatical structure of sentences (parsing)
– the sense in which a word is used: firing a gun vs. firing an employee
– and much, much more…
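
A bag-of-words vector with the sparse representation the slide calls for is just a dictionary of counts; this toy sketch uses a tiny made-up stop-word list and a crude suffix-stripping stand-in for a real stemmer such as Porter's:

```python
# A minimal bag-of-words sketch: lowercase, drop common stop words,
# apply a crude "stemmer", and count what remains in a sparse dict.
# The stop-word list and stemming rule are toy assumptions.
import re
from collections import Counter

STOP_WORDS = {"the", "and", "of", "a", "an", "to", "in"}

def crude_stem(word):
    # toy stand-in for a real stemmer such as Porter's
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def bag_of_words(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(crude_stem(t) for t in tokens if t not in STOP_WORDS)

d = bag_of_words("The surfer washed the board and washed it again")
# sparse vector: only words that actually occur get an entry
```

The dictionary never stores the ~100K zero entries, which is the point of the sparse representation; word order and sentence structure are gone, exactly as the slide warns.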

Bag of Words Document Comparison
View documents as vectors in a very high-dimensional space
We can now import geometry and linear-algebra concepts
Similarity between documents d and e:
– Σ_i d[i]·e[i] over all words i
– may normalize d and e first
– this is their projection onto each other
Improve by using TF/IDF weighting of words:
– term frequency (TF): how frequent is the word in this document?
– document frequency (DF): how frequent is it across all documents?
– give high weight to words with high TF and low DF (i.e. high inverse document frequency)
Search engines:
– view the query as just another "document"
– look for similar documents as above
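
The weighted, normalized similarity can be sketched as TF-IDF plus cosine similarity; the three tiny "documents" below are made up, and the weighting uses the common tf · log(N/df) form (one of several standard variants):

```python
# TF-IDF weighting and cosine similarity over bag-of-words counts.
# The three miniature "documents" are illustrative assumptions.
import math
from collections import Counter

docs = {
    "d1": Counter("the web is a big graph of pages".split()),
    "d2": Counter("pagerank ranks pages of the web graph".split()),
    "d3": Counter("cooking pasta with garlic and oil".split()),
}

def tfidf(doc, docs):
    n = len(docs)
    weights = {}
    for word, tf in doc.items():
        df = sum(1 for d in docs.values() if word in d)
        # high weight for frequent-in-this-document, rare-across-documents
        weights[word] = tf * math.log(n / df)
    return weights

def cosine(u, v):
    # normalized projection of one weight vector onto the other
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

w = {name: tfidf(doc, docs) for name, doc in docs.items()}
# d1 and d2 share web/graph/pages vocabulary; d3 shares nothing
```

A query would be pushed through the same pipeline as "just another document" and compared against every stored vector the same way.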

Looking Ahead: Left Side vs. Right Side
So far we have been discussing the "left-hand" search results on Google
– a.k.a. "organic" search
"Right-hand" or "sponsored" search: paid advertisements in a formal market
– we will spend a lecture on these markets later in the term
The same two types of search results appear on Yahoo!, MSN, …
Common perception:
– organic results are "objective", based on content, importance, etc.
– sponsored results are subjective advertisements
But both sides are subject to "gaming" (strategic behavior)…
– organic: invisible terms in the HTML, link farms and web spam, reverse engineering
– sponsored: bidding behavior, "jamming"
– optimization of each side has its own industry: SEO and SEM
… and perhaps to outright fraud
– organic: typo squatting
– sponsored: click fraud
More later…