Ch 19 Web Search Basics
Modified by Dongwon Lee from slides by Christopher Manning and Prabhakar Raghavan

Lab #6 (DUE: Nov. 6, 11:55PM) https://online.ist.psu.edu/ist516/labs Tasks: individual lab / 3 tasks; search-engine-related hands-on lab; URL Frontier exercise question. Turn-in: solution document to ANGEL.

Brief (non-technical) history Early keyword-based engines, ca. 1995-1997: Altavista, Excite, Infoseek, Inktomi, Lycos. Paid search ranking: Goto (morphed into Overture.com, later acquired by Yahoo!). CPM vs. CPC. Your search ranking depended on how much you paid. Auction for keywords: casino was expensive!

Brief (non-technical) history 1998+: Link-based ranking pioneered by Google Blew away all early engines Great user experience in search of a business model Meanwhile Goto/Overture’s annual revenues were nearing $1 billion Result: Google added paid search “ads” to the side, independent of search results Yahoo followed suit, acquiring Overture (for paid placement) and Inktomi (for search) 2005+: Google gains search share, dominating in Europe and very strong in North America 2009: Yahoo! and Microsoft propose combined paid search offering

Paid Search Ads vs. Algorithmic results (screenshot of a results page showing ads alongside the algorithmic results).

Web search basics (Sec. 19.4.1) Diagram: the User issues a Search against the Indexes (ads come from separate Ad indexes); a Web spider crawls The Web and feeds the Indexer, which builds the Indexes.

User Needs (Sec. 19.4.1) Need [Brod02, RL04]: Informational – want to learn about something (~40% / 65%), e.g., low hemoglobin. Navigational – want to go to that page (~25% / 15%), e.g., United Airlines. Transactional – want to do something, web-mediated (~35% / 20%): access a service (Seattle weather, car rental Brasil), downloads (Mars surface images), shop (Canon S410). Gray areas: find a good hub; exploratory search, "see what's there".

How far do people look for results? (Source: iprospect.com WhitePaper_2006_SearchEngineUserBehavior.pdf)

Users’ empirical evaluation of results Quality of pages varies widely; relevance is not enough. Other desirable qualities (non-IR!!): Content: trustworthy, diverse, non-duplicated, well maintained. Web readability: displays correctly and fast. No annoyances: pop-ups, etc. Precision vs. recall: on the web, recall seldom matters. What matters: precision at 1? Precision within top-K? Comprehensiveness – must be able to deal with obscure queries; recall matters when the number of matches is very small. User perceptions may be unscientific, but are significant over a large aggregate.

Users’ empirical evaluation of engines Relevance and validity of results. UI: simple, no clutter, error tolerant. Trust: results are objective. Coverage of topics for polysemic queries. Pre/post-process tools provided: mitigate user errors (auto spell check, search assist, …); explicit: search within results, more like this, refine …; anticipative: related searches, instant search (next slide). Impact on stemming, spell-check, etc.; web addresses typed in the search box.

2010: Instant Search

The Web document collection (Sec. 19.2) No design/coordination. Distributed content creation, linking, democratization of publishing. Content includes truth, lies, obsolete information, contradictions … Unstructured (text, html, …), semi-structured (XML, annotated photos), structured (databases) … Scale much larger than previous text collections … but corporate records are catching up. Growth – slowed down from initial “volume doubling every few months” but still expanding. Content can be dynamically generated.

The trouble with paid search ads … (Sec. 19.2.2) It costs money. What’s the alternative? Search Engine Optimization (SEO): “tuning” your web page to rank highly in the algorithmic search results for select keywords. An alternative to paying for placement; thus, intrinsically a marketing function. Performed by companies, webmasters and consultants (“search engine optimizers”) for their clients. Some perfectly legitimate, some very shady.

Search engine optimization (Spam) (Sec. 19.2.2) Motives: commercial, political, religious, lobbies; promotion funded by advertising budget. Operators: contractors (search engine optimizers) for lobbies and companies; web masters; hosting services. Forums, e.g., Webmaster World (www.webmasterworld.com).

Simplest forms: Keyword Stuffing (Sec. 19.2.2) First-generation engines relied heavily on tf/idf: the top-ranked pages for the query maui resort were the ones containing the most maui’s and resort’s. SEOs responded with dense repetitions of chosen terms, e.g., maui resort maui resort maui resort. Often the repetitions were in the same color as the background of the web page: the repeated terms got indexed by crawlers but were not visible to humans on browsers. Pure word density cannot be trusted as an IR signal.

E.g., the Heaven's Gate web site: http://www.ariadne.ac.uk/issue10/search-engines/

Cloaking (Sec. 19.2.2) Serve fake content to the search engine spider. DNS cloaking: switch IP address; impersonate. Diagram: is this request from a search engine spider? If yes, serve the SPAM page; if no, serve the real document.

The war against spam Quality signals: prefer authoritative pages based on votes from authors (linkage signals) and votes from users (usage signals); policing of URL submissions (anti-robot test); limits on meta-keywords; robust link analysis: ignore statistically implausible linkage (or text), use link analysis to detect spammers (guilt by association). Spam recognition by machine learning: training set based on known spam. Family-friendly filters: linguistic analysis, general classification techniques, etc.; for images: flesh-tone detectors, source text analysis, etc. Editorial intervention: blacklists, top queries audited, complaints addressed, suspect pattern detection.

Size of the web We covered some related topics in Week #1 Refer to web measuring problems in the following slide: http://pike.psu.edu/classes/ist516/latest/slides/web-size.ppt More related topics in Ch 19 of the IIR book

Relative Size from Overlap (Sec. 19.5) Given two engines A and B: sample URLs randomly from A and check if they are contained in B, and vice versa. Suppose A ∩ B = (1/2) × Size A and A ∩ B = (1/6) × Size B. Then (1/2) × Size A = (1/6) × Size B, so Size A / Size B = (1/6) / (1/2) = 1/3. Each test involves: (i) sampling, (ii) checking.
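
The overlap arithmetic above generalizes to any sampled fractions; a minimal sketch in Python (the function name and argument names are illustrative, not from the slides):

```python
def relative_size(frac_a_in_b, frac_b_in_a):
    """Estimate Size A / Size B from sampled overlap fractions.

    frac_a_in_b = |A intersect B| / Size A  (fraction of A's sample found in B)
    frac_b_in_a = |A intersect B| / Size B  (fraction of B's sample found in A)

    Both products equal |A intersect B|, so
        frac_a_in_b * Size A = frac_b_in_a * Size B
        =>  Size A / Size B = frac_b_in_a / frac_a_in_b
    """
    return frac_b_in_a / frac_a_in_b

# The slide's numbers: half of A's sample is in B, a sixth of B's sample in A.
ratio = relative_size(1/2, 1/6)   # 1/3: engine A is a third the size of B
```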

Sampling URLs (Sec. 19.5) Ideal strategy: generate a random URL and check for containment in each index. Problem: random URLs are hard to find! Four approaches are discussed in Ch 19.

1. Random searches (Sec. 19.5) Choose random searches extracted from a local log. Use only queries with small result sets. Count normalized URLs in result sets. Use ratio statistics.

1. Random searches (Sec. 19.5) 575 & 1050 queries from the NEC RI employee logs; 6 engines in 1998, 11 in 1999. Implementation: restricted to queries with < 600 results in total; counted URLs from each engine after verifying query match; computed size ratio & overlap for individual queries; estimated index size ratio & overlap by averaging over all queries.

2. Random IP addresses (Sec. 19.5) Generate random IP addresses. Find a web server at the given address, if there is one. Collect all pages from that server; from these, choose a page at random.

2. Random IP addresses (Sec. 19.5) HTTP requests to random IP addresses. Ignored: empty, authorization required, or excluded. [Lawr99] estimated 2.8 million IP addresses running crawlable web servers (16 million total) from observing 2500 servers; they exhaustively crawled those 2500 servers and extrapolated, estimating the size of the web to be 800 million pages. OCLC, using IP sampling, found 8.7M hosts in 2001. Netcraft [Netc02] accessed 37.2 million hosts in July 2002.

3. Random walks (Sec. 19.5) View the Web as a directed graph and build a random walk on it. Includes various “jump” rules back to visited sites: does not get stuck in spider traps, and can follow all links. Converges to a stationary distribution: must assume the graph is finite and independent of the walk; these conditions are not satisfied (cookie crumbs, flooding), and the time to convergence is not really known. Sample from the stationary distribution of the walk; use the “strong query” method to check coverage by the search engine.

4. Random queries (Sec. 19.5) Generate a random query: how? Lexicon: 400,000+ words from a web crawl (not an English dictionary). Form a conjunctive query from 6-8 low-frequency terms: w1 AND w2, e.g., vocalists AND rsi. Get the top-100 result URLs from engine A; choose a random URL as the candidate to check for presence in engine B.
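
A sketch of the query-generation step, assuming a term-frequency table built from a crawl (the toy lexicon, the frequency cutoff, and the function name are illustrative):

```python
import random

def random_conjunctive_query(freqs, k=2, max_freq=100, seed=None):
    """Pick k low-frequency lexicon terms and join them conjunctively,
    in the spirit of the slide's 'vocalists AND rsi' example.

    freqs: dict mapping crawl-lexicon words to their document frequency.
    """
    rng = random.Random(seed)
    # Sort for determinism, then sample k distinct rare terms.
    rare = sorted(w for w, f in freqs.items() if f <= max_freq)
    return " AND ".join(rng.sample(rare, k))

# Toy lexicon; a real one would hold 400,000+ words from a web crawl.
freqs = {"the": 9_000_000, "web": 5_000_000,
         "vocalists": 42, "rsi": 17, "ornithopter": 3}
query = random_conjunctive_query(freqs, k=2, seed=7)
```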

Duplicate documents (Sec. 19.6) The web is full of duplicated content. Strict duplicate detection = exact match; not as common. But there are many, many cases of near duplicates, e.g., the Last-Modified date is the only difference between two copies of a page.

E.g., near-duplicate videos. <Original Video> vs. near-duplicate copies produced by: contrast, brightness, crop, color enhancement, color change, TV size, multi-editing, low resolution, noise/blur, small logo.

E.g., near-duplicate videos: original video vs. an elongated copied video.

Duplicate/Near-Duplicate Detection (Sec. 19.6) Duplication: exact match can be detected with fingerprints. Near-duplication: approximate match. Compute syntactic similarity with an edit-distance measure and use a similarity threshold to detect near-duplicates, e.g., similarity > 80% => documents are “near duplicates”. Not transitive, though sometimes used transitively.

Computing Similarity (Sec. 19.6) Features: segments of a document (natural or artificial breakpoints); shingles (word N-grams). E.g., “a rose is a rose is a rose” yields a_rose_is_a, rose_is_a_rose, is_a_rose_is; “my rose is a rose is yours” yields my_rose_is_a, rose_is_a_rose, is_a_rose_is, a_rose_is_yours. Similarity measure between two docs (= sets of shingles): set intersection, specifically (Size_of_Intersection / Size_of_Union).
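
The shingling and Jaccard computation can be sketched directly (function names are my own):

```python
def shingles(text, k=4):
    """Set of word k-grams ('shingles') of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Size_of_Intersection / Size_of_Union of two shingle sets."""
    return len(a & b) / len(a | b)

d1 = shingles("a rose is a rose is a rose")
d2 = shingles("my rose is a rose is yours")
# d1 has 3 distinct shingles (the repeated 'a rose is a' collapses in the set);
# jaccard(d1, d2) = 2/5 = 0.4: shared are 'rose is a rose' and 'is a rose is'.
```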

Shingles + Set Intersection Issue: computing the exact set intersection of shingles between all pairs of documents is expensive. Solution: approximate using a cleverly chosen subset of shingles from each document (called a sketch), and estimate the Jaccard similarity (size_of_intersection / size_of_union) from the short sketches. Diagram: Doc A → shingle set A → sketch A; Doc B → shingle set B → sketch B; compare sketches.

Sketch of a document (Sec. 19.6) Create a “sketch vector” (of size ~200) for each document. Documents that share ≥ t (say 80%) of corresponding vector elements are near duplicates. For doc D, sketchD[i] is computed as follows: let f map all shingles in the universe to 0..2^m (e.g., f = fingerprinting); let π_i be a random permutation on 0..2^m; pick sketchD[i] = MIN { π_i(f(s)) } over all shingles s in D.
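
A sketch-vector computation along these lines; here the random permutation π_i is approximated by a random affine map modulo a large prime, a common practical stand-in (that substitution, and all names, are assumptions, not from the slides):

```python
import hashlib
import random

def fingerprint(shingle, m=64):
    """f: map a shingle to an m-bit integer (deterministic fingerprint)."""
    digest = hashlib.sha1(shingle.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") & ((1 << m) - 1)

def sketch(shingle_set, num_perms=200, seed=0):
    """sketchD[i] = MIN over shingles s of pi_i(f(s)).

    pi_i(x) = (a*x + b) mod p is a permutation of Z_p, standing in
    for a true random permutation on the fingerprint range.
    """
    rng = random.Random(seed)
    p = (1 << 61) - 1  # Mersenne prime
    perms = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_perms)]
    fps = [fingerprint(s) % p for s in shingle_set]  # reduce f(s) into Z_p
    return [min((a * f + b) % p for f in fps) for a, b in perms]

def estimated_similarity(sk1, sk2):
    """Fraction of sketch positions that agree ~ Jaccard similarity."""
    return sum(x == y for x, y in zip(sk1, sk2)) / len(sk1)
```

Identical shingle sets get identical sketches; with 200 permutations the agreement fraction is an unbiased estimate of the Jaccard similarity, with standard error about sqrt(J(1-J)/200).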

Computing Sketch[i] for Doc1 (Sec. 19.6) Start with the 64-bit values f(shingles) on the number line 0..2^64; permute them with π_i; pick the minimum value.

Test if Doc1.Sketch[i] = Doc2.Sketch[i] (Sec. 19.6) For Document 1 and Document 2, compare the minimum permuted values A = Doc1.Sketch[i] and B = Doc2.Sketch[i] on the number line 0..2^64: are these equal? Repeat the test for 200 random permutations: π1, π2, … π200.

Sketch example (|shingle| = 4, |U| = 5, |sketch| = 2) (Sec. 19.6) Doc 1, “a rose is a rose is a rose”: shingles a_rose_is_a, rose_is_a_rose, is_a_rose_is, a_rose_is_a. Doc 2, “my rose is a rose is yours”: shingles my_rose_is_a, rose_is_a_rose, is_a_rose_is, a_rose_is_yours.

Min-Hash Technique: Theory behind the “Sketch vector” idea

Set Similarity of sets Ci, Cj (Sec. 19.6) View sets as columns of a matrix A, with one row for each element in the universe; aij = 1 indicates presence of item i in set j. Example:
C1 C2
0  1
1  0
1  1
0  0
1  1
0  1
Jaccard(C1, C2) = 2/5 = 0.4

Key Observation (Sec. 19.6) For columns Ci, Cj, there are four types of rows:
   Ci Cj
A:  1  1
B:  1  0
C:  0  1
D:  0  0
Claim: Jaccard(Ci, Cj) = |A| / (|A| + |B| + |C|), where |A|, |B|, |C| count the rows of each type.

“Min” Hashing (Sec. 19.6) Randomly permute the rows. Hash h(Ci) = index of the first row with a 1 in column Ci. Surprising property: P[h(Ci) = h(Cj)] = Jaccard(Ci, Cj). Why? Look down columns Ci, Cj until the first non-type-D row: h(Ci) = h(Cj) exactly when that row is of type A. So both quantities equal |A| / (|A| + |B| + |C|).
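
The surprising property can be checked empirically: shuffle the rows many times and count how often the two columns hash to the same first 1-row. A quick simulation on the 2/5 = 0.4 example from the Set Similarity slide (the simulation itself is mine, not from the slides):

```python
import random

def minhash_collision_rate(c1, c2, universe, trials=5000, seed=1):
    """Empirical P[h(C1) = h(C2)] under random row permutations."""
    rng = random.Random(seed)
    rows = list(universe)
    hits = 0
    for _ in range(trials):
        rng.shuffle(rows)
        h1 = next(r for r in rows if r in c1)  # first row with a 1 in C1
        h2 = next(r for r in rows if r in c2)  # first row with a 1 in C2
        hits += (h1 == h2)
    return hits / trials

c1 = {2, 3, 5}       # rows where column C1 has a 1
c2 = {1, 3, 5, 6}    # rows where column C2 has a 1
rate = minhash_collision_rate(c1, c2, range(1, 7))
# rate should be close to Jaccard(C1, C2) = 0.4
```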

Example (Sec. 19.6) Task: find near-duplicate pairs among D1, D2, and D3 using the similarity threshold >= 0.5.
Index  C1 (D1)  C2 (D2)  C3 (D3)
R1        1        0        1
R2        0        1        1
R3        1        0        0
R4        1        0        1
R5        0        1        0

Example (Sec. 19.6) Signatures (h(C) = original index of the first 1-row when scanning the rows in the permuted order):
                     S1  S2  S3
Perm 1 = (12345):     1   2   1
Perm 2 = (54321):     4   5   4
Perm 3 = (34512):     3   5   4
Similarities:
         1-2   1-3   2-3
Col-Col  0.00  0.50  0.25
Sig-Sig  0.00  0.67  0.00
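
The signature table can be reproduced in a few lines, reading each Perm as the scan order of the rows (helper names are my own):

```python
def minhash_signature(column, scan_order):
    """Original index of the first row, in permuted scan order, with a 1."""
    return next(r for r in scan_order if r in column)

# Rows R1..R5 containing a 1, per column of the previous slide's matrix:
C1, C2, C3 = {1, 3, 4}, {2, 5}, {1, 2, 4}
perms = [(1, 2, 3, 4, 5), (5, 4, 3, 2, 1), (3, 4, 5, 1, 2)]

sigs = {name: [minhash_signature(col, p) for p in perms]
        for name, col in (("S1", C1), ("S2", C2), ("S3", C3))}
# sigs: S1 = [1, 4, 3], S2 = [2, 5, 5], S3 = [1, 4, 4]

def agreement(x, y):
    """Fraction of signature positions that agree."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

# S1 and S3 agree in 2 of 3 positions (0.67), estimating Jaccard 0.50;
# with the threshold >= 0.5, only the D1-D3 pair is flagged.
```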

More resources IIR Chapter 19 http://en.wikipedia.org/wiki/Locality-sensitive_hashing http://people.csail.mit.edu/indyk/vldb99.ps