Web Characterization
LBSC 690 Information Technology, Week 9
Outline
– What is the Web?
– What’s on the Web?
– What is the nature of the Web?
– Preserving the Web
Defining the Web
– HTTP, HTML, or URL?
– Static, dynamic, or streaming?
– Public, protected, or internal?
Economics of the Web in 1995
– Affordable storage: 300,000 words/$
– Adequate backbone capacity: 25,000 simultaneous transfers
– Adequate “last mile” bandwidth: 1 second/screen
– Display capability: 10% of US population
– Effective search capabilities: Lycos (now Google), Yahoo!
Nature of the Web
– Over one billion pages by 1999
  – Growing at 25% per month!
  – Google indexed about 3 billion pages in 2003
– Unstable
  – Changing at 1% per week
– Redundant
  – 30–40% (near) duplicates (e.g., the Unix man page tree)
[Figure omitted. Source: Michael Lesk, “How Much Information Is There in the World?”]
[Figure: Number of Web Sites]
[Figure: Web Sites by Country, 2002]
What’s a Web “Site”?
– OCLC counts any server at port 80
  – Misses many servers at other ports
– Some servers host unrelated content (e.g., GeoCities)
– Some content requires specialized servers (e.g., RTSP streaming)
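To make the counting method concrete, here is a minimal sketch (not from the original slide) of an OCLC-style probe: a host counts as a “site” if anything accepts a TCP connection on port 80. The hostnames in the loop are placeholders.

```python
import socket

def answers_on_port_80(host, timeout=3.0):
    """Return True if the host accepts a TCP connection on port 80."""
    try:
        with socket.create_connection((host, 80), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder hosts; a real survey would probe a random sample of addresses.
for host in ["www.umd.edu", "example.com"]:
    print(host, answers_on_port_80(host))
```

Note the method’s blind spots from the slide: servers on other ports never answer, while one machine hosting many unrelated “sites” counts only once.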
[Figure: World Trade in 2001. Source: World Trade Organization]
[Figure: Global Internet User Population, English vs. Chinese. Source: Global Reach]
[Figure: Widely Spoken Languages. Source: …]
[Figure omitted. Source: James Crawford, …]
[Figure: Web Page Languages. Source: Jack Xu, 1999]
[Figure: European Web Size: Exponential Growth. Source: extrapolated from Grefenstette and Nioche, RIAO 2000]
[Figure: European Web Content. Source: European Commission, Evolution of the Internet and the World Wide Web in Europe, 1997]
Live Streams
– Almost 2,000 Internet-accessible radio and television stations
– Source: Feb 2000
Streaming Media
– SingingFish indexes 35 million streams
– 60% of queries are for music
  – Then movies
  – Then sports
  – Then news
Crawling the Web
Web Crawl Challenges
– Temporary server interruptions
– Discovering “islands” and “peninsulas” (pages few or no other pages link to)
– Duplicate and near-duplicate content
– Dynamic content
– Link rot
– Server and network loads
– Have I seen this page before?
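A minimal breadth-first crawler sketch, using only Python’s standard library; the seed URL is a placeholder. It illustrates the visited-set question (“have I seen this page before?”), skipping temporarily unreachable servers, and a politeness delay to limit server load. By construction it cannot discover “islands” that no crawled page links to.

```python
import time
import urllib.error
import urllib.parse
import urllib.request
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href targets of <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=50, delay=1.0):
    frontier = deque([seed])   # pages waiting to be fetched
    seen = {seed}              # answers "have I seen this page before?"
    while frontier and max_pages > 0:
        url = frontier.popleft()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except (urllib.error.URLError, OSError):
            continue           # temporary interruption or link rot: skip
        max_pages -= 1
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            link = urllib.parse.urljoin(url, href)
            link, _ = urllib.parse.urldefrag(link)  # fragments cause trivial duplicates
            if link.startswith("http") and link not in seen:
                seen.add(link)
                frontier.append(link)
        time.sleep(delay)      # politeness: limit server and network load
    return seen

# pages = crawl("http://example.com/")  # placeholder seed
```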
Duplicate Detection
– Structural
  – Identical directory structure (e.g., mirrors, aliases)
– Syntactic
  – Identical bytes
  – Identical markup (HTML, XML, …)
– Semantic
  – Identical content
  – Similar content (e.g., with a different banner ad)
  – Related content (e.g., translated)
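Two of these layers are easy to sketch in code (an illustration, not from the slides): a cryptographic hash catches syntactic duplicates (identical bytes), while word shingles compared with Jaccard similarity catch semantic near-duplicates such as the banner-ad case. The shingle length k and the flagging threshold are conventional choices, not values from the lecture.

```python
import hashlib

def exact_fingerprint(page_bytes):
    """Syntactic duplicates: identical bytes yield identical digests."""
    return hashlib.sha1(page_bytes).hexdigest()

def shingles(text, k=5):
    """The set of all runs of k consecutive words (word k-shingles)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Fraction of shingles two pages share (1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 1.0

page1 = "ls lists the contents of a directory sorted alphabetically ..."
page2 = "ls lists the contents of a directory sorted alphabetically (mirror) ..."
sim = jaccard(shingles(page1), shingles(page2))
print(f"near-duplicate similarity: {sim:.2f}")  # flag pairs above, say, 0.9
```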
Robots Exclusion Protocol
– Based on voluntary compliance by crawlers
– Exclusion by site
  – Create a robots.txt file at the server’s top level
  – Indicate which directories not to crawl
– Exclusion by document (in the HTML head)
  – Not implemented by all crawlers
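Python’s standard library includes a parser for this protocol; a short sketch, with example.com standing in for a real site and the robots.txt content shown in comments as an illustration:

```python
import urllib.robotparser

# A robots.txt at the server's top level might read:
#   User-agent: *
#   Disallow: /cgi-bin/
#   Disallow: /private/
# Per-document exclusion instead puts
#   <meta name="robots" content="noindex,nofollow">
# in the HTML head, which not all crawlers honor.

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://example.com/robots.txt")  # placeholder site
rp.read()
print(rp.can_fetch("MyCrawler", "http://example.com/private/report.html"))
```

Because compliance is voluntary, this check protects a site only if the crawler chooses to call it before each fetch.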
Link Structure of the Web
The Deep Web
– Dynamic pages, generated from databases
– Not easily discovered using crawling
– Perhaps 400–550 times larger than the surface Web (BrightPlanet’s 2001 estimate)
– Fastest growing source of new information
[Figure: Content of the Deep Web]
Deep Web: 60 Deep Sites Exceed Surface Web by 40 Times

Name | Type | URL | Web Size (GBs)
National Climatic Data Center (NOAA) | Public | http://…urces.html | 366,000
NASA EOSDIS | Public | http://harp.gsfc.nasa.gov/~imswww/pub/imswelcome/plain.html | 219,600
National Oceanographic (combined with Geophysical) Data Center (NOAA) | Public/Fee | http://… | 32,940
Alexa | Public (partial) | … | …
Right-to-Know Network (RTK Net) | Public | http://… | …
MP3.com | Public | http://… | …
Hands on: The Wayback Machine
– Internet Archive: has stored Alexa.com Web crawls since 1997
– Check out Maryland’s Web site in 1997
– Check out the history of your favorite site
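The Internet Archive also exposes a machine-readable availability endpoint (a later addition, not mentioned in the slides); a sketch using Python’s standard library to find the stored snapshot closest to a given date:

```python
import json
import urllib.parse
import urllib.request

def closest_snapshot(url, timestamp="19970101"):
    """Ask the Wayback Machine for the snapshot closest to YYYYMMDD."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    with urllib.request.urlopen(
            "https://archive.org/wayback/available?" + query) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

print(closest_snapshot("www.umd.edu", "19970101"))  # Maryland's site in 1997
```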
Discussion Point
– Can we save everything? Should we?
– Do people have a right to remove things?