Link Analysis in Web Mining: Hubs and Authorities, Spam Detection

Problem formulation (1998)
- Suppose we are given a collection of documents on some broad topic (e.g., stanford, evolution, iraq), perhaps obtained through a text search.
- Can we organize these documents in some manner?
  - PageRank offers one solution.
  - HITS (Hypertext-Induced Topic Selection), proposed at approximately the same time, is another.

HITS Model
- Interesting documents fall into two classes:
  1. Authorities are pages containing useful information (e.g., course home pages, home pages of auto manufacturers).
  2. Hubs are pages that link to authorities (e.g., a course bulletin, a list of US auto manufacturers).

Idealized view
[Figure: hubs on the left linking to authorities on the right]

Mutually recursive definition
- A good hub links to many good authorities.
- A good authority is linked from many good hubs.
- Model this using two scores for each node: a hub score and an authority score, represented as vectors h and a.

Transition Matrix A
- HITS uses a matrix A, where A[i, j] = 1 if page i links to page j, and 0 if not.
- Aᵀ, the transpose of A, is similar to the PageRank matrix M, but Aᵀ has 1's where M has fractions.
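
A minimal sketch of this distinction in Python with NumPy, building both A and M from the same link list (the three-page graph here is the Yahoo/Amazon/M'soft example of the next slide):

```python
import numpy as np

# Three-page example graph: 0 = Yahoo, 1 = Amazon, 2 = M'soft
links = {0: [0, 1, 2], 1: [0, 2], 2: [1]}   # page -> pages it links to
N = len(links)

A = np.zeros((N, N))   # HITS matrix: A[i, j] = 1 iff i links to j
M = np.zeros((N, N))   # PageRank matrix: M[i, j] = 1/|O(j)| iff j links to i
for i, outs in links.items():
    for j in outs:
        A[i, j] = 1.0
        M[j, i] = 1.0 / len(outs)

print(A.T)  # same zero/nonzero pattern as M ...
print(M)    # ... but M holds fractions 1/|O(j)| instead of 1's
```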

Example

Consider a three-page web: Yahoo links to Yahoo, Amazon, and M'soft; Amazon links to Yahoo and M'soft; M'soft links to Amazon.

         y  a  m
    y  [ 1  1  1 ]
A = a  [ 1  0  1 ]
    m  [ 0  1  0 ]

Hub and Authority Equations
- The hub score of page P is proportional to the sum of the authority scores of the pages it links to: h = λAa, where the constant λ is a scale factor.
- The authority score of page P is proportional to the sum of the hub scores of the pages it is linked from: a = μAᵀh, where the constant μ is a scale factor.

Iterative algorithm
- Initialize h and a to all 1's.
- h = Aa; scale h so that its max entry is 1.0.
- a = Aᵀh; scale a so that its max entry is 1.0.
- Continue until h and a converge (a sketch follows).
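
A minimal sketch of this loop in Python with NumPy, using the example matrix A from the previous slide:

```python
import numpy as np

A = np.array([[1, 1, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

h = np.ones(3)                       # initialize h, a to all 1's
a = np.ones(3)
for _ in range(100):
    h_new = A @ a                    # h = A a
    h_new /= h_new.max()             # scale so the max entry is 1.0
    a_new = A.T @ h_new              # a = A^T h
    a_new /= a_new.max()             # scale so the max entry is 1.0
    if np.allclose(h_new, h) and np.allclose(a_new, a):
        break                        # h and a have converged
    h, a = h_new, a_new

print(h)  # -> approximately [1.0, 0.732, 0.268]
print(a)  # -> approximately [1.0, 0.732, 1.0]
```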

Example

         1 1 1          1 1 0
A =      1 0 1    Aᵀ =  1 0 1
         0 1 0          1 1 0

h(yahoo)  = 1    1     1      ...   1
h(amazon) = 1    2/3   0.71   ...   0.732
h(m'soft) = 1    1/3   0.29   ...   0.268

a(yahoo)  = 1    1     1      ...   1
a(amazon) = 1    4/5   3/4    ...   0.732
a(m'soft) = 1    1     1      ...   1

Existence and Uniqueness

h = λAa
a = μAᵀh
h = λμ AAᵀ h
a = λμ AᵀA a

Under reasonable assumptions about A, the dual iterative algorithm converges to vectors h* and a* such that:
- h* is the principal eigenvector of the matrix AAᵀ
- a* is the principal eigenvector of the matrix AᵀA
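
A quick NumPy check, on the same example matrix, that the iteration's fixed points match these principal eigenvectors:

```python
import numpy as np

A = np.array([[1, 1, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

for name, m in [("h* from AA^T", A @ A.T), ("a* from A^T A", A.T @ A)]:
    vals, vecs = np.linalg.eigh(m)   # both matrices are symmetric
    v = vecs[:, np.argmax(vals)]     # eigenvector of the largest eigenvalue
    print(name, np.abs(v) / np.abs(v).max())  # rescale as the iteration does
```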

Bipartite cores
[Figure: hubs and authorities forming two bipartite cores; the most densely connected core is the primary core, a less densely connected core is a secondary core]

Secondary cores
- A single topic can have many bipartite cores, corresponding to different meanings or points of view:
  - abortion: pro-choice, pro-life
  - evolution: Darwinian, intelligent design
  - jaguar: auto, Mac, NFL team, Panthera onca
- How do we find such secondary cores?

Non-primary eigenvectors
- AAᵀ and AᵀA have the same set of eigenvalues.
  - An eigenpair is the pair of eigenvectors (one of each matrix) with the same eigenvalue.
  - The primary eigenpair (largest eigenvalue) is what we get from the iterative algorithm.
- Non-primary eigenpairs correspond to other bipartite cores.
  - The eigenvalue is a measure of the density of links in the core.

Finding secondary cores
- Once we find the primary core, we can remove its links from the graph.
- Repeat the HITS algorithm on the residual graph to find the next bipartite core.
- Technically, this is not exactly equivalent to the non-primary eigenpair model.
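
A hedged sketch of this "peel off the primary core" idea; the membership threshold of 0.9 is an arbitrary illustrative choice, not part of the original formulation:

```python
import numpy as np

def hits(A, iters=100):
    """Scaled HITS iteration; returns hub and authority score vectors."""
    h = np.ones(A.shape[0])
    a = np.ones(A.shape[0])
    for _ in range(iters):
        a = A.T @ h
        a /= max(a.max(), 1e-12)   # avoid division by zero on empty graphs
        h = A @ a
        h /= max(h.max(), 1e-12)
    return h, a

def peel_primary_core(A, threshold=0.9):
    """Remove the primary core's links, then re-run HITS on the residual graph."""
    h, a = hits(A)
    hubs = np.where(h >= threshold)[0]    # strongest hubs
    auths = np.where(a >= threshold)[0]   # strongest authorities
    A2 = A.copy()
    A2[np.ix_(hubs, auths)] = 0           # delete the core's links
    return hits(A2)                       # scores now highlight a secondary core
```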

Creating the graph for HITS
- We need a well-connected graph of pages for HITS to work well.
- In Kleinberg's original formulation, a "root set" of pages is obtained from a text search, then expanded into a "base set" by adding pages that link to, or are linked from, pages in the root set.

PageRank and HITS
- PageRank and HITS are two solutions to the same problem: what is the value of an inlink from S to D?
  - In the PageRank model, the value of the link depends on the links into S.
  - In the HITS model, it depends on the value of the other links out of S.
- The destinies of PageRank and HITS post-1998 were very different. Why?

Web Spam
- Search has become the default gateway to the web.
- There is a very high premium on appearing on the first page of search results (e.g., for e-commerce sites and advertising-driven sites).

What is web spam?
- Spamming = any deliberate action whose sole purpose is to boost a web page's position in search engine results, incommensurate with the page's real value.
- Spam = web pages that are the result of spamming.
- This is a very broad definition; the SEO (search engine optimization) industry might disagree!
- Approximately 10-15% of web pages are spam.

Web Spam Taxonomy
- We follow the treatment by Gyongyi and Garcia-Molina [2004].
- Boosting techniques: techniques for achieving high relevance/importance for a web page.
- Hiding techniques: techniques to hide the use of boosting from humans and web crawlers.

Boosting techniques
- Term spamming: manipulating the text of web pages in order to appear relevant to queries.
- Link spamming: creating link structures that boost PageRank or hubs-and-authorities scores.

Term Spamming
- Repetition of one or a few specific terms (e.g., free, cheap, sale, promotion, ...).
- The goal is to subvert tf-idf ranking schemes.
  - The tf-idf weight (term frequency-inverse document frequency) is a weight often used in information retrieval and text mining. It is a statistical measure of how important a word is to a document in a collection or corpus. The importance increases proportionally to the number of times the word appears in the document, but is offset by the frequency of the word in the corpus. Variations of the tf-idf weighting scheme are often used by search engines to score and rank a document's relevance given a user query.
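
As a small illustration of the weight being subverted, here is a minimal tf-idf computation (using the common log-scaled idf variant; the toy corpus is made up):

```python
import math
from collections import Counter

docs = [
    "free cheap sale free promotion free",          # repetition-style term spam
    "quarterly sales report for the auto industry",
    "course home page for data mining",
]

def tf_idf(term, doc, corpus):
    words = doc.split()
    tf = Counter(words)[term] / len(words)               # term frequency
    df = sum(1 for d in corpus if term in d.split())     # document frequency
    idf = math.log(len(corpus) / df) if df else 0.0      # inverse document freq.
    return tf * idf

print(tf_idf("free", docs[0], docs))  # repetition inflates tf, hence tf-idf
```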

Term Spamming
- Repetition (as above).
- Dumping a large number of unrelated terms (e.g., copying entire dictionaries).
- Weaving: copying legitimate pages and inserting spam terms at random positions.
- Phrase stitching: gluing together sentences and phrases from different sources.

Term spam targets
- Body of web page
- Title
- URL
- HTML meta tags
- Anchor text

Link spam
- Three kinds of web pages from a spammer's point of view:
  - Inaccessible pages
  - Accessible pages (e.g., blog comment pages, where the spammer can post links to his pages)
  - Own pages (completely controlled by the spammer; may span multiple domain names)

Link Farms
- Spammer's goal: maximize the PageRank of target page t.
- Technique:
  - Get as many links as possible from accessible pages to target page t.
  - Construct a "link farm" to get a PageRank multiplier effect.

Link Farms
[Figure: inaccessible pages, accessible pages linking to target page t, and M "own" farm pages 1, 2, ..., M linked to and from t. This is one of the most common and effective organizations for a link farm.]

Analysis

[Figure: the link farm of the previous slide, with inaccessible pages, accessible pages, target page t, and M farm pages]

- Suppose the rank contributed by the accessible pages is x, and let the PageRank of the target page t be y. With damping factor β and N pages in total:
- Rank of each farm page = βy/M + (1-β)/N
- y = x + βM[βy/M + (1-β)/N] + (1-β)/N = x + β²y + β(1-β)M/N + (1-β)/N
- The last term (1-β)/N is very small; ignoring it and solving for y:
- y = x/(1-β²) + cM/N, where c = β/(1+β)

Analysis  y = x/(1- 2 ) + cM/N where c = /(1+)  For  = 0.85, 1/(1- 2 )= 3.6 Multiplier effect for “acquired” page rank By making M large, we can make y as large as we want Inaccessibl e t Accessible Own 1 2 M

Hiding techniques
- Content hiding: use the same color for text and page background.
- Cloaking: return different pages to crawlers and browsers.
- Redirection: an alternative to cloaking; redirects are followed by browsers but not crawlers.

Detecting Spam
- Term spamming:
  - Analyze text using statistical methods, e.g., naive Bayes classifiers, similar to email spam filtering.
  - Also useful: detecting approximately duplicate pages.
- Link spamming:
  - An open research area.
  - One approach: TrustRank.
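
A hedged sketch of the statistical approach with scikit-learn; the tiny labeled corpus is invented for illustration, and a real detector would train on many labeled crawled pages:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

pages = [
    "free cheap sale free free promotion buy now",      # term spam
    "cheap pills online casino free free free",         # term spam
    "syllabus and lecture notes for the data mining course",
    "official home page of a major auto manufacturer",
]
labels = ["spam", "spam", "good", "good"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(pages, labels)
print(model.predict(["free cheap promotion casino sale"]))  # -> ['spam']
```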

TrustRank idea
- Basic principle: approximate isolation. It is rare for a "good" page to point to a "bad" (spam) page.
- Sample a set of "seed pages" from the web.
- Have an oracle (human) identify the good pages and the spam pages in the seed set.
  - This is an expensive task, so we must make the seed set as small as possible.

Trust Propagation
- Call the subset of seed pages that are identified as "good" the "trusted pages".
- Set the trust of each trusted page to 1.
- Propagate trust through links; each page gets a trust value between 0 and 1.
- Use a threshold value and mark all pages below the trust threshold as spam (see the sketch below).
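
A minimal sketch of threshold-based propagation over a small link graph; the decay factor beta and the threshold are assumed parameters (the attenuation and splitting rules follow on the next slides):

```python
def propagate_trust(links, trusted, beta=0.85, iters=20, threshold=0.05):
    """links: {page: [pages it links to]}; trusted: oracle-approved seeds."""
    trust = {p: 1.0 if p in trusted else 0.0 for p in links}
    for _ in range(iters):
        new = {p: 1.0 if p in trusted else 0.0 for p in links}
        for p, outs in links.items():
            for q in outs:                        # split p's trust over outlinks
                new[q] += beta * trust[p] / len(outs)
        trust = {p: min(1.0, t) for p, t in new.items()}   # keep trust in [0, 1]
    return {p for p, t in trust.items() if t < threshold}  # marked as spam

# Hypothetical 5-page graph; page 0 is the trusted seed, page 4 has no inlinks
links = {0: [1, 2], 1: [2], 2: [3], 3: [], 4: [3]}
print(propagate_trust(links, trusted={0}))  # -> {4}: no trust reaches page 4
```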

Example
[Figure: example web graph with good and bad (spam) pages]

Rules for trust propagation
- Trust attenuation: the degree of trust conferred by a trusted page decreases with distance.
- Trust splitting: the larger the number of outlinks from a page, the less scrutiny the page author gives each outlink, so trust is "split" across outlinks.

Simple model
- Suppose the trust of page p is t(p), and its set of outlinks is O(p).
- For each q in O(p), p confers the trust β·t(p)/|O(p)| on q, for some 0 < β < 1.
- Trust is additive: the trust of p is the sum of the trust conferred on p by all pages that link to it.
- Note the similarity to Topic-Specific PageRank: within a scaling factor, TrustRank = biased PageRank with the trusted pages as the teleport set (a sketch follows).
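
To make the similarity concrete, here is a sketch of TrustRank as biased PageRank with the trusted pages as the teleport set; the graph and parameters are assumptions for illustration:

```python
import numpy as np

def trustrank(A, trusted, beta=0.85, iters=50):
    """A[i, j] = 1 iff page i links to page j; trusted: seed page indices."""
    N = A.shape[0]
    out = A.sum(axis=1)                        # out-degrees |O(i)|
    M = (A / out[:, None]).T                   # column-stochastic transitions
    v = np.zeros(N)
    v[list(trusted)] = 1.0 / len(trusted)      # teleport only to trusted pages
    t = v.copy()
    for _ in range(iters):
        t = beta * (M @ t) + (1 - beta) * v    # biased PageRank iteration
    return t

A = np.array([[0, 1, 1, 0],        # no dead ends in this small example
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(trustrank(A, trusted={0}))   # trust concentrates around page 0
```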

Picking the seed set
- Two conflicting considerations:
  - A human has to inspect each seed page, so the seed set must be as small as possible.
  - We must ensure every "good" page gets an adequate trust rank, so we need to make all good pages reachable from the seed set by short paths.

Approaches to picking the seed set
- Suppose we want to pick a seed set of k pages.
- PageRank approach: pick the top k pages by PageRank.
  - The assumption is that pages with high PageRank are close to other highly ranked pages.
  - We care more about high-PageRank "good" pages anyway.
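
A sketch of this PageRank-based selection with networkx; the edge list is hypothetical:

```python
import networkx as nx

def pick_seed_set(edges, k):
    """Return the top-k pages by PageRank as candidate seeds for the oracle."""
    g = nx.DiGraph(edges)
    ranks = nx.pagerank(g, alpha=0.85)   # alpha is the damping factor
    return sorted(ranks, key=ranks.get, reverse=True)[:k]

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 2), (4, 2)]
print(pick_seed_set(edges, k=2))
```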