INF 2914 Information Retrieval and Web Search
Lecture 1: Overview
These slides are adapted from Stanford's CS276 / LING 286 class, Information Retrieval and Web Mining.



Search use … (iProspect Survey, 4/04)

Without search engines the web wouldn't scale
1. No incentive to create content unless it can be easily found
   – Other finding methods haven't kept pace (taxonomies, bookmarks, etc.)
2. The web is both a technology artifact and a social environment
   – "The Web has become the 'new normal' in the American way of life; those who don't go online constitute an ever-shrinking minority." [Pew Foundation report, January 2005]
3. Search engines make aggregation of interest possible
   – Create incentives for very specialized niche players
   – Economic: specialized stores, providers, etc.
   – Social: narrow interests, specialized communities, etc.
4. The acceptance of search interaction makes "unlimited selection" stores possible: Amazon, Netflix, etc.
5. Search turned out to be the best mechanism for advertising on the web, a $15+B industry
   – Growing very fast, but the entire US advertising industry is $250B – huge room to grow
   – Sponsored search marketing is about $10B

Classical IR vs. Web IR

Basic assumptions of Classical Information Retrieval
– Corpus: fixed document collection
– Goal: retrieve documents whose information content is relevant to the user's information need

Classic IR Goal
– Classic relevance
  – For each query Q and stored document D in a given corpus, assume there exists a relevance Score(Q, D)
  – Score is an average over users U and contexts C
– Optimize Score(Q, D), as opposed to Score(Q, D, U, C)
– That is, usually:
  – Context ignored
  – Individuals ignored
  – Corpus predetermined
– These are bad assumptions in the web context
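The averaging step above can be stated compactly. In notation not present in the original slide, the classic score marginalizes users and contexts out of the full relevance function:

```latex
\mathrm{Score}(Q, D) \;=\; \mathbb{E}_{U,\,C}\!\left[\,\mathrm{Score}(Q, D, U, C)\,\right]
```

Optimizing the left-hand side discards exactly the user and context dependence that web search later tries to recover.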

Web IR

The coarse-level dynamics: content creators → content aggregators (via feeds and crawls) → content consumers

Brief (non-technical) history
– Early keyword-based engines: Altavista, Excite, Infoseek, Inktomi, ca.
– Paid placement ranking: Goto.com (morphed into Overture.com, later Yahoo!)
  – Your search ranking depended on how much you paid
  – Auction for keywords: "casino" was expensive!

Brief (non-technical) history
– 1998+: Link-based ranking pioneered by Google
  – Blew away all early engines save Inktomi
  – Great user experience in search of a business model
– Meanwhile, Goto/Overture's annual revenues were nearing $1 billion
– Result: Google added paid-placement "ads" to the side, independent of search results
– Yahoo followed suit, acquiring Overture (for paid placement) and Inktomi (for search)

Algorithmic results. Ads

Ads vs. search results
– Google has maintained that ads (based on vendors bidding for keywords) do not affect vendors' rankings in search results
– Search = miele

Ads vs. search results
– Other vendors (Yahoo, MSN) have made similar statements from time to time
– Any of them can change anytime
– We will focus primarily on search results, independent of paid-placement ads
  – Although the latter is a fascinating technical subject in itself

Web search basics (architecture diagram): the Web → web spider → indexer → indexes; user → search → indexes + ad indexes

User Needs [Brod02, RL04]
– Informational: want to learn about something (~40% / 65%)
  – e.g., low hemoglobin
– Navigational: want to go to that page (~25% / 15%)
  – e.g., United Airlines
– Transactional: want to do something (web-mediated) (~35% / 20%)
  – Access a service: Seattle weather, Mars surface images
  – Downloads
  – Shop: Canon S410
– Gray areas
  – Find a good hub: car rental Brasil
  – Exploratory search: "see what's there"

Web search users
– Make ill-defined queries
  – Short: AV 2001: 2.54 terms avg, 80% < 3 words; AV 1998: 2.35 terms avg, 88% < 3 words [Silv98]
  – Imprecise terms
  – Sub-optimal syntax (most queries without operators)
  – Low effort
– Wide variance in needs, expectations, knowledge, bandwidth
– Specific behavior
  – 85% look over one result screen only (mostly above the fold)
  – 78% of queries are not modified (one query/session)

Query Distribution Power law: few popular broad queries, many rare specific queries
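A minimal sketch of what a Zipf-like power-law query distribution implies, assuming a hypothetical exponent s = 1 and a made-up vocabulary of 10,000 distinct queries: a small head of popular queries accounts for a disproportionate share of all traffic.

```python
def zipf_shares(n, s=1.0):
    """Relative frequency of the query at each popularity rank r,
    proportional to 1/r**s (a power law), normalized to sum to 1."""
    weights = [1.0 / r**s for r in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_shares(10_000)
head = sum(shares[:100])  # traffic share of the 100 most popular queries
print(f"top 100 of 10,000 queries: {head:.0%} of traffic")
```

Under these assumptions, the top 1% of queries gets over half the traffic, while the long tail of rare, specific queries makes up the rest.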

How far do people look for results? (Source: iprospect.com, WhitePaper_2006_SearchEngineUserBehavior.pdf)

Users' empirical evaluation of results
– Quality of pages varies widely; relevance is not enough
– Other desirable qualities (non-IR!)
  – Content: trustworthy, new info, non-duplicates, well maintained
  – Web readability: displays correctly & fast
  – No annoyances: pop-ups, etc.
– Precision vs. recall
  – On the web, recall seldom matters
  – What matters: precision at 1? Precision above the fold?
  – Comprehensiveness: must be able to deal with obscure queries; recall matters when the number of matches is very small
– User perceptions may be unscientific, but are significant over a large aggregate
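Precision at rank k, the metric named above, is the fraction of the top-k results judged relevant. A minimal sketch, with made-up document IDs and relevance judgments:

```python
def precision_at_k(ranked_results, relevant, k):
    """Fraction of the top-k ranked results that are in the relevant set."""
    top_k = ranked_results[:k]
    if not top_k:
        return 0.0
    return sum(1 for doc in top_k if doc in relevant) / len(top_k)

ranked = ["d7", "d2", "d9", "d4"]  # hypothetical engine output, best first
relevant = {"d2", "d4"}            # hypothetical relevance judgments
print(precision_at_k(ranked, relevant, 1))  # 0.0 -- top hit is not relevant
print(precision_at_k(ranked, relevant, 4))  # 0.5
```

Precision@1 is brutal: it scores the whole ranking by its single top hit, which matches how most web users actually behave.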

Users' empirical evaluation of engines
– Relevance and validity of results
– UI: simple, no clutter, error tolerant
– Trust: results are objective
– Coverage of topics for polysemic queries
– Pre/post-process tools provided
  – Mitigate user errors (auto spell check, syntax errors, …)
  – Explicit: search within results, more like this, refine …
  – Anticipative: related searches
– Deal with idiosyncrasies
  – Web-specific vocabulary: impact on stemming, spell-check, etc.
  – Web addresses typed in the search box
  – …

Loyalty to a given search engine (iProspect Survey, 4/04)

The Web corpus
– No design/coordination
– Distributed content creation, linking; democratization of publishing
– Content includes truth, lies, obsolete information, contradictions …
– Unstructured (text, HTML, …), semi-structured (XML, annotated photos), structured (databases) …
– Scale much larger than previous text corpora … but corporate records are catching up
– Growth: slowed down from the initial "volume doubling every few months," but still expanding
– Content can be dynamically generated

The Web: dynamic content
– A page without a static HTML version
  – E.g., current status of flight AA129; current availability of rooms at a hotel
– Usually assembled at the time of a request from a browser
– Typically, the URL has a '?' character in it
(Diagram: browser → application server → back-end databases)
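The '?' heuristic above is easy to make concrete. A minimal sketch, with hypothetical example URLs, using the standard-library URL parser:

```python
from urllib.parse import urlparse

def looks_dynamic(url):
    """Heuristic from the slide: a '?' (i.e., a non-empty query string)
    usually signals a page assembled per request rather than a static file."""
    return bool(urlparse(url).query)

print(looks_dynamic("http://airline.example/status?flight=AA129"))  # True
print(looks_dynamic("http://airline.example/about.html"))           # False
```

It is only a heuristic: as the next slide notes, even "static" URLs are often assembled on the fly, and many dynamic pages hide their parameters in the path.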

Dynamic content
– Most dynamic content is ignored by web spiders
  – Many reasons, including malicious spider traps
– Some dynamic content (e.g., news stories from subscriptions) is sometimes delivered as dynamic content
– Application-specific spidering
– Spiders commonly view web pages just as Lynx (a text browser) would
– Note: even "static" pages are typically assembled on the fly (e.g., headers are common)

The web: size
– What is being measured?
  – Number of hosts
  – Number of (static) HTML pages
  – Volume of data
– Number of hosts: Netcraft survey – a monthly report on how many web hosts & servers are out there
– Number of pages: numerous estimates (will discuss later)

Netcraft Web Server Survey

The web: evolution
– All of these numbers keep changing
– Relatively few scientific studies of the evolution of the web [Fetterly & al., 2003] pubs/p97-fetterly/p97-fetterly.pdf
– Sometimes possible to extrapolate from small samples (fractal models) [Dill & al., 2001]

Rate of change
– [Cho00] 720K pages from 270 popular sites, sampled daily from Feb 17 – Jun 14, 1999
  – Any changes: 40% weekly, 23% daily
– [Fett02] Massive study: 151M pages checked over a few months
  – Significant changes: 7% weekly
  – Small changes: 25% weekly
– [Ntul04] 154 large sites re-crawled from scratch weekly
  – 8% new pages/week; 8% die
  – 5% new content; 25% new links/week

Static pages: rate of change
– Fetterly et al. study (2002): several views of the data; 150 million pages over 11 weekly crawls
– Bucketed into 85 groups by extent of change

Other characteristics
– Significant duplication
  – Syntactic: 30%–40% (near-)duplicates [Brod97, Shiv99b, etc.]
  – Semantic: ???
– High linkage: more than 8 links/page on average
– Complex graph topology: not a small world; bow-tie structure [Brod00]
– Spam: billions of pages
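Syntactic near-duplicates of the kind cited above are commonly detected by comparing shingle sets (contiguous token k-grams) under Jaccard similarity. A minimal sketch, with a hypothetical shingle size k = 3 and made-up documents (real systems like [Brod97] additionally use min-hash sketches to avoid pairwise comparison):

```python
def shingles(text, k=3):
    """Set of k-token shingles (contiguous token sequences) of a document."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity |a & b| / |a | b| of two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

d1 = "the web is big really big and very dynamic"   # hypothetical docs
d2 = "the web is big really big and quite dynamic"
sim = jaccard(shingles(d1), shingles(d2))
print(f"{sim:.2f}")  # 0.56
```

Two documents are declared near-duplicates when their shingle similarity exceeds a chosen threshold (e.g., 0.9).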

Answering "the need behind the query"
– Semantic analysis
  – Query language determination
  – Auto filtering
  – Different ranking (if the query is in Japanese, do not return English)
– Hard & soft (partial) matches
  – Personalities (triggered on names)
  – Cities (travel info, maps)
  – Medical info (triggered on names and/or results)
  – Stock quotes, news (triggered on stock symbol)
  – Company info
  – Etc.
– Natural language reformulation
– Integration of search and text analysis

Yahoo!: britney spears

Ask Jeeves: las vegas

Yahoo!: salvador hotels

Yahoo shortcuts Various types of queries that are “understood”

Google: andrei broder new york

Answering "the need behind the query": Context
– Context determination
  – Spatial (user location/target location)
  – Query stream (previous queries)
  – Personal (user profile)
  – Explicit (user choice of a vertical search, …)
  – Implicit (using Google from France → use google.fr)
– Context use
  – Result restriction: kill inappropriate results
  – Ranking modulation: use a "rough" generic ranking, but personalize later

Google: dentists bronx

Yahoo!: dentists (bronx)

Query expansion

Web Search Components
– Crawler: stores raw documents, along with per-document and per-server metadata, in a database
– Parser/tokenizer: processes the raw documents to generate tokenized documents; handles different file types (HTML, PDF, etc.)
– Store: storage for the tokenized version of each document

Web Search Components
– Index: inverted text index over the Store
– Global analysis: duplicate detection, ranks, and anchor-text processing
– Runtime: query processing; ranking (dynamic)
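A minimal sketch of the Index/Runtime split above, with a made-up tokenized Store and hypothetical doc IDs: building the inverted index happens offline; a conjunctive query is then answered at runtime by intersecting postings sets.

```python
from collections import defaultdict

def build_inverted_index(store):
    """Offline: map each term to the set of doc IDs containing it (its postings)."""
    index = defaultdict(set)
    for doc_id, tokens in store.items():
        for term in tokens:
            index[term].add(doc_id)
    return index

def and_query(index, terms):
    """Runtime: docs containing every query term, via postings intersection."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

store = {                          # hypothetical tokenized Store
    1: ["web", "search", "engine"],
    2: ["web", "crawler"],
    3: ["search", "engine", "ranking"],
}
index = build_inverted_index(store)
print(sorted(and_query(index, ["search", "engine"])))  # [1, 3]
```

Production indexes keep postings as sorted, compressed lists rather than sets, and merge them smallest-first, but the intersection idea is the same.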

(Offline) Search Engine Data Flow
1. Crawler: fetches web pages
2. Parse & tokenize: parse, tokenize, per-page analysis → tokenized web pages
3. Global analysis (runs in the background): dup detection, static rank computation, anchor-text processing, … → dup table, rank table, anchor text
4. Index build: scan tokenized web pages, anchor text, etc.; generate the inverted text index

Class Schedule
– Lecture 1: Overview
– Lecture 2: Crawler
– Lecture 3: Parsing, Tokenization, Storage
– Lecture 4: Link Analysis (static ranking, anchor text)
– Lecture 5: Other Global Analysis (duplicate detection, web spam)
– Lectures 6 & 7: Indexing
– Lectures 8 & 9: Query Processing & Ranking
– Lecture 10: Evaluation (IR metrics)
– Lectures 11–15: Student projects
– Potential extra lectures: Advertising/XML Retrieval, Machine Learning, Compression

Projects
– Each class has a list of papers that students can select for a written paper, implementation, and lecture
– Students have to discuss the implementation projects with the teachers
– Students have until May 3rd to select a project topic

Resources: IIR Chapter 19

Project 1 – Web measurements
References:
– Sampling: Ziv Bar-Yossef, Maxim Gurevich: Random sampling from a search engine's index. WWW 2006.
– Index size: Andrei Z. Broder et al.: Estimating corpus size via queries. CIKM 2006.
– Brazilian Web df