Dynamic parallel access to replicated content in the Internet Pablo Rodriguez and Ernst W. Biersack IEEE/ACM Transactions on Networking, August 2002.

Introduction Popular content is frequently replicated on multiple servers or caches. Even if the fastest server is selected, its performance can fluctuate during a download session. Users can experience better and more uniform performance by connecting to several servers in parallel: the client downloads different parts of the same document from each of the servers.

Parallel-access schemes History-based TCP parallel access: clients specify a priori which part of a document must be delivered by each mirror server; the portion delivered by a server should be proportional to its service rate. Dynamic TCP parallel access: a client partitions a document into many small blocks and requests a different block from each server.

History-based parallel access The client divides a document of size S into M disjoint blocks, one per mirror server. μi is the transmission rate of server i, and αiS is the size of the block delivered by server i, giving a download time Ti = αiS/μi. To achieve the maximum speedup, all servers must finish at the same time, Ti = Tj for 1 ≤ i, j ≤ M, which implies αi = μi / (μ1 + … + μM).
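The equal-finish-time condition translates directly into a proportional split. A minimal sketch (Python; the function name and units are illustrative, not from the paper):

```python
def history_based_split(size, rates):
    """Split a document of `size` bytes among M mirror servers in
    proportion to their measured transmission rates mu_i, so that every
    server finishes at the same time T = S / sum(mu_i)."""
    total_rate = sum(rates)
    # alpha_i = mu_i / sum_j(mu_j); server i delivers alpha_i * S bytes
    return [size * r / total_rate for r in rates]

shares = history_based_split(763000, [100, 300])
# → [190750.0, 572250.0]; each share divided by its rate gives the
# same download time, 1907.5 s in this toy example
```

The scheme only works as well as its rate estimates: if the measured μi no longer match the actual rates during the download, the split is wrong and the slowest mismatch dominates.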

Experimental setup

Performance (1/2) S=763KB, M=2

Performance (2/2) S=763KB, M=4

Dynamic parallel access The document is divided into B blocks of equal size, and the client uses the HTTP/1.1 byte-range header to request each block. Dynamic parallel-access scheme: the client first requests one block from every server; every time it has completely received a block from a server, it requests another block from that server; when all blocks have been received, it reassembles them into the whole document.
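The scheme can be simulated without any network. In this hedged sketch (Python; the servers, delays, and function names are invented for illustration), each thread plays one mirror server that repeatedly pulls the next outstanding block, so faster servers automatically deliver more blocks:

```python
import queue
import threading
import time

def dynamic_parallel_download(document, num_blocks, server_delays):
    """Simulate dynamic parallel access: one thread per entry in
    server_delays acts as a mirror server that repeatedly requests the
    next outstanding block, waits its per-block delay, and stores the
    block.  Returns the reassembled document and per-server block counts."""
    block_size = (len(document) + num_blocks - 1) // num_blocks
    todo = queue.Queue()
    for b in range(num_blocks):
        todo.put(b)
    parts = {}
    counts = [0] * len(server_delays)
    lock = threading.Lock()

    def server(i, delay):
        while True:
            try:
                b = todo.get_nowait()       # request the next block
            except queue.Empty:
                return                      # no blocks left: server idle
            time.sleep(delay)               # simulated transfer time
            chunk = document[b * block_size:(b + 1) * block_size]
            with lock:
                parts[b] = chunk
                counts[i] += 1

    threads = [threading.Thread(target=server, args=(i, d))
               for i, d in enumerate(server_delays)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # reassemble the blocks in order to reconstruct the whole document
    return b"".join(parts[b] for b in range(num_blocks)), counts
```

Running it with one fast and one slow server shows the self-balancing property the paper relies on: no a priori rate estimate is needed, because the fast server simply keeps asking for more blocks.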

Block request

Idle time Interblock idle time: each server is idle between two consecutive block transfers; this idle time corresponds to one RTT and can be avoided by pipelining requests. Termination idle time: not all servers terminate at the same time; it can be reduced by requesting still-missing blocks from servers that have already finished.
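The cost of interblock idle time can be quantified with a back-of-the-envelope model (illustrative Python; it ignores TCP slow start and assumes a fixed transfer rate): without pipelining each of the B requests pays one RTT of idle time, while with pipelining only the first request does.

```python
def single_server_time(size, blocks, rate, rtt, pipelined):
    """Idealized time to fetch `size` bytes in `blocks` equal block
    requests from one server at `rate` bytes/s with round-trip time
    `rtt` seconds.  A toy model, not the paper's analysis."""
    transfer = size / rate
    # one RTT of idle time per request, or a single RTT if pipelined
    idle = rtt if pipelined else blocks * rtt
    return transfer + idle
```

For a 256 KB document split into 20 blocks at 50 KB/s with a 100 ms RTT, the non-pipelined transfer spends 2 s idle versus 0.1 s pipelined, which is why the penalty matters most for small blocks (S/B comparable to RTT × rate).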

Performance (1/2)

Performance (2/2)

Performance of parallel access with large documents

Performance of parallel access with small documents

Shared bottleneck A single server may already consume the full bandwidth of the bottleneck link. When a client adds more servers in parallel there is no residual network bandwidth, so packets from the different servers interfere and compete for it. The download time with dynamic parallel access is then always slightly higher than that of the fastest server alone, because of the interblock idle time.

Performance (1/2) S = 256KB, B = 20, M = 3

Performance (2/2) S = 763KB, B = 30, M = 2

With pipelining (1/2) S = 256KB, B = 20, M = 3 S / B ≥ RTT

With pipelining (2/2) S = 763KB, B = 30, M = 2

Discussion Overhead: one extra server access to find out S, plus the block request messages. Limitations: only beneficial for large documents. Benefits: avoids the risk of selecting a slow or unstable server; well suited to peer-to-peer applications and content-distribution networks.