Overhaul: Extending HTTP to Combat Flash Crowds
Jay A. Patel & Indranil Gupta
Distributed Protocols Research Group
Department of Computer Science
University of Illinois at Urbana-Champaign (UIUC)
Urbana, Illinois, USA
Introduction
- Flash crowd: a stampede of unexpected visitors
- Occurs regularly due to linkage from popular news feeds, web logs, etc.
- Popularly termed the "Slashdot effect"
- Victim sites become unresponsive, creating a perception of dysfunction
Example: MSNBC
- MSNBC home page, December 14, 2003
Motivation
Problem:
- Unpredictable, yet frequent
- Lasts only a brief period of time
- Thousand-fold increase in traffic
Two naïve solutions:
- Over-provision resources
- Shut down the web site
Current Solutions
- Architectural changes: SEDA, Capriccio, ESI
- Protocol modifications: DHTTP, Web Booster
- Cooperative sharing: Squirrel, Kache, Backslash, BitTorrent
HTTP: Regular Interaction
- Client sends a GET request to the server
- Server responds with a header and the document
Overhaul: Overview
Protocol change:
- HTTP extension, not modification
- 5 new tags added, 1 slightly modified
- Backwards compatible
Key concept: chunking
- A characteristic of the web, applied to individual documents
- m chunks per document
P2P distribution framework:
- Voluntary
- Ad hoc, not DHT-based
- Key benefit: parallel resource discovery
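The chunking concept above can be sketched as follows. This is an illustrative sketch, not the protocol's specified format: the equal-size split and the SHA-1 signature scheme are assumptions.

```python
import hashlib

def chunk_document(data: bytes, m: int):
    """Split a document into m roughly equal chunks and compute a
    per-chunk signature, in the spirit of Overhaul's chunking concept.
    SHA-1 is an assumed signature scheme for illustration only."""
    size = -(-len(data) // m)  # ceiling division: bytes per chunk
    chunks = [data[i * size:(i + 1) * size] for i in range(m)]
    sigs = [hashlib.sha1(c).hexdigest() for c in chunks]
    return chunks, sigs

chunks, sigs = chunk_document(b"x" * 10240, 12)
print(len(chunks), len(chunks[0]))  # → 12 854
```

A peer that receives a chunk from another peer can recompute its signature and compare it against the list the server sent, verifying the chunk without contacting the server again.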
Overhaul: Design
- Client sends an HTTP request with the Overhaul support tag
- Server returns a chunked response with Overhaul headers
- Peers exchange chunks over an ad hoc peer network to fetch the complete document
Details: Client/Server Interaction
Initial request by client:
- Supports: Overhaul $port $speed
Response by server in Overhaul mode:
- The i-th chunk, transmitted in sequential order
- Signatures of the other m-1 chunks, for verification
- An initial Overhaul network membership list: the n most recent Overhaul clients
- The list is maintained at the server and updated with every request
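The initial request described above might look like this on the wire. The "Supports: Overhaul $port $speed" tag is taken from the slide; the surrounding request layout and parameter values are assumptions for illustration.

```python
def overhaul_request(host: str, path: str, port: int, speed: int) -> str:
    # Advertise Overhaul support with the client's listening port and
    # connection speed, per the "Supports: Overhaul $port $speed" tag.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Supports: Overhaul {port} {speed}\r\n"
        "\r\n"
    )

print(overhaul_request("victim.example", "/index.html", 8311, 512))
```

A server without Overhaul support simply ignores the unknown header and answers with a regular response, which is what makes the extension backwards compatible.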
Details: Peer Clients' Interaction
Clients contact other peer members:
- To fetch remaining chunks
- To discover new peers
- Membership lists are aggregated by swapping information (a 1-hop random walk discovery process)
Resource discovery:
- Look up documents on a busy Overhaul server
- Contact peers randomly from the membership list: INFO $host.tld
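The membership swapping and 1-hop random walk above can be sketched as follows; the address format and data structures are assumptions for illustration.

```python
import random

def merge_membership(mine, theirs, self_addr):
    # Aggregate membership lists by swapping information with a peer,
    # never listing ourselves as a candidate peer.
    merged = set(mine) | set(theirs)
    merged.discard(self_addr)
    return merged

def next_hop(members):
    # One step of the 1-hop random walk: pick a random known peer to
    # contact for missing chunks or for its membership list.
    return random.choice(sorted(members))

members = merge_membership({"10.0.0.1:8311", "10.0.0.2:8311"},
                           {"10.0.0.2:8311", "10.0.0.3:8311", "10.0.0.9:8311"},
                           "10.0.0.9:8311")
print(sorted(members))  # the three other peers, without ourselves
```

Because every exchange both transfers chunks and merges membership lists, each contact widens the set of peers a client can draw on, which is the parallel resource discovery the overview slide highlights.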
Implementation
Server:
- Apache/2.0 HTTP server, via the module mod_overhaul
Client:
- Java HTTP proxy
- Cross-platform, universal client support
Testing Methodology: Server
- Server machine: 2.5 GHz AMD Athlon XP+, 1 GB RAM
- Client machine: 650 MHz Pentium III, 320 MB RAM
- Same network equipment
- 25 concurrent fetches
- ApacheBench utility
Results: Chunking (Fixed Size)
- Document: 10 KB; concurrency: 25
- Compared: regular HTTP, 512-byte chunks, 2048-byte chunks
- Overhaul mode requires the server to send only a single chunk
Results: Chunking (Maximum Count)
- Document: 50 KB; concurrency: 25
- Compared: regular HTTP, 6 chunks, 12 chunks, 24 chunks
Results: Overhaul vs. Regular
- Concurrency: 25; minimum chunk size: 512 bytes
- Compared: regular HTTP, 6 chunks, 12 chunks
Testing Methodology: Client
Cluster of workstations:
- 25 homogeneous PCs (2.8 GHz Intel Pentium 4, 1 GB RAM)
- Same network equipment
Two experiments:
- Concurrent: single document
- Staggered: multiple documents
Results: Single Document
- Large document: 50 KB (12 chunks)
- Server condition: 150-250 concurrent fetches + competition
- Overhaul requests issued concurrently, using only 24 Overhaul-aware clients

          Regular requests   Overhaul mode
Fastest   1 sec              6 secs
Slowest   32 secs            9 secs
Average   9 secs             7 secs

- Server bandwidth usage in Overhaul mode: 1/12th of regular requests
Results: Multiple Documents
- 8 documents: 110 KB total (12 chunks)
- Server condition: 150-250 concurrent fetches + competition
- Overhaul requests staggered:
  - 1st stage: 12 concurrent fetches, fetch all documents
  - 2nd stage: 12 concurrent fetches, fetch index document only

          Regular requests   Overhaul mode
Fastest   1 sec              14 secs
Slowest   ∞                  28 secs
Average   23 secs *          18 secs

- Server bandwidth usage in Overhaul mode: 1/18th of regular requests
- * indicates completed requests only
Limitations
- Both client and server must be Overhaul-aware
- A critical mass must be maintained to remain effective: n clients > m chunks
- More responsibilities for the client
- Possible security implications
Conclusion
Saves resources:
- Bandwidth: the bigger the crowd, the lower the per-capita usage
- Response time: faster turnaround for both server and client
Gaining widespread acceptance:
- Marginal cost
- The protocol extension requires an industry and standards push
Overhaul vs. BitTorrent
Overhaul is specifically intended for flash crowds:
- Feasible for short durations and small document sizes
- Tightly integrated with HTTP: no additional server or software required
- Resource discovery: a built-in notion of related documents
Regarding Greedy Clients
- Voluntary network
- Clients must grow their membership list to fetch document(s) faster, which forces communication and sharing
Future work:
- A trust score matrix based on sharing
Heterogeneous Networks
Problem:
- Client connections are heterogeneous
Solutions:
- Clustering of clients
- Super nodes
Document Selection
- Only a partial set of documents is affected by a flash crowd: a large collection of documents resides on the server, but only some are fetched by the crowd
- Must implement a selective Overhaul mode
- Automatic selection via active monitoring at the server
Dynamic Documents
- Flash crowds are especially frequent during big events and news
- Characteristic: rapidly changing data
Solutions:
- Time stamping
- Expiration of chunks
- Inter-network refresh from peers
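A minimal sketch of the time-stamping and chunk-expiration idea above; the TTL parameter and function shape are assumptions, not part of the protocol.

```python
import time

def chunk_is_fresh(fetched_at, ttl_secs, now=None):
    # A chunk expires ttl_secs after it was fetched; a stale chunk
    # must be refreshed from the server or from peers.
    if now is None:
        now = time.time()
    return (now - fetched_at) < ttl_secs

print(chunk_is_fresh(fetched_at=100.0, ttl_secs=30.0, now=120.0))  # → True
print(chunk_is_fresh(fetched_at=100.0, ttl_secs=30.0, now=140.0))  # → False
```

Refreshing an expired chunk from a peer that fetched it more recently (the "inter-network refresh" bullet) keeps load off the server even when the document changes during the flash crowd.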