On Scheduling of Peer-to-Peer Video Services


On Scheduling of Peer-to-Peer Video Services
CS587x Lecture
Department of Computer Science, Iowa State University

Server Design
- Each server maintains a waiting queue Q, and every request (Ci, Vi) is first appended to the queue
- When a server has sufficient resources, it schedules a pending request for service
- Requests in the queue are served in arrival order, i.e., FIFO
- Given a request (Ci, Vi) in the queue of a server, we can estimate its remaining waiting time Ti = f(B, D, R), where
  - B: the server bandwidth, i.e., the number of concurrent video streams it can sustain
  - D: the remaining data that still needs to be sent for the requests currently being served
  - R: the requests that arrived ahead of (Ci, Vi) and are still waiting for service in the queue
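The slide leaves f(B, D, R) abstract. Below is a minimal Python sketch of one plausible estimate, assuming a constant per-stream rate and that each queued request ahead of (Ci, Vi) occupies a slot for its full video length; the function name and these modeling assumptions are not from the slides.

```python
# Hypothetical sketch of the waiting-time estimate Ti = f(B, D, R).
# Assumptions (not from the slides): every stream is delivered at the same
# rate `stream_rate`, and each waiting request ahead of (Ci, Vi) occupies
# one of the B slots for the full length of its video.

def estimate_wait(B, remaining_data, ahead_video_sizes, stream_rate):
    """Estimate how long request (Ci, Vi) waits before its service starts.

    B                 -- number of concurrent streams the server sustains
    remaining_data    -- bytes still to send for each in-service stream (D)
    ahead_video_sizes -- video sizes (bytes) of queued requests ahead (R)
    stream_rate       -- bytes/second delivered to a single stream
    """
    # Times at which each of the B slots becomes free.
    slot_free = sorted(d / stream_rate for d in remaining_data)
    slot_free += [0.0] * (B - len(slot_free))   # idle slots are free now
    slot_free = sorted(slot_free)[:B]

    # Assign each queued request ahead of us (FIFO) to the earliest slot.
    for size in ahead_video_sizes:
        slot_free[0] += size / stream_rate
        slot_free.sort()

    # (Ci, Vi) starts when the earliest slot next becomes free.
    return slot_free[0]

# Example: 2 slots, one in-service stream with 300 MB left at 10 MB/s,
# one queued request ahead whose video is 600 MB.
print(estimate_wait(2, [300e6], [600e6], 10e6))  # -> 30.0 seconds
```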

Service Scheduling Problem
- Problem: given a video request, which server should be used to serve it?
- Goal: minimize service latency, defined as the period from the time a client submits its request to the time the client starts to download the video

Naïve Solution
- Find a set of server candidates
  - A client c requesting a video v sends command Search(c, v), which returns a set of hosts that cache the video
  - The underlying file-lookup mechanism determines which servers can be found
- Choose a server
  - From the set of candidates, the client checks each server's current workload
  - The client selects the server that can provide the service at the earliest time
- Submit the request to the server
  - The client sends command Request(c, v) to the server
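A minimal sketch of this naive client loop. Search and Request follow the slide's naming; query_wait() is a hypothetical probe of a server's current workload and is not defined on the slides.

```python
# Naive server selection, as a sketch.  search()/request() mirror the
# slide's Search and Request commands; query_wait() is a hypothetical
# helper that asks a server when it could start serving the video.

def naive_select(client, video, search, query_wait, request):
    candidates = search(client, video)          # hosts caching the video
    if not candidates:
        return None
    # Probe each candidate and pick the one with the earliest service time.
    best = min(candidates, key=lambda s: query_wait(s, video))
    request(best, client, video)                # submit Request(c, v)
    return best
```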

Naïve Solution Falls Short
- Problem 1: a video may be cached by many hosts in the system, but a client requesting the video may find only a few of them, typically one
  - The hosts that are found may not be able to serve the client at the earliest possible time
  - Cause: an inherent limitation of lookup services
- Problem 2: given a series of client requests, which servers they should choose depends on their arrival order
  - Different matchings of clients to servers may result in significantly different performance
  - Cause: the set of videos cached by a server only partially overlaps with those cached by other servers

Naïve Solution Falls Short
- Two free servers: S1 caches {V1, V2}, S2 caches {V1, V3}
- Two client requests, in order: (C1, V1), then (C2, V2)
- Case I: C1 chooses S1
  - Result: C1 is served immediately, but C2 must wait until C1 finishes, since only S1 caches V2
- Case II: C1 chooses S2
  - Result: both C1 and C2 can be served immediately

A Possible Solution
- Basic idea: dynamically adjust the service schedule as requests arrive
  - Before a client submits its request to a server S, the client contacts S for all of its pending requests
  - For each pending request (Ci, Vi), the client tries to find another server that can serve Ci at an earlier time
  - If there is such a server S', request (Ci, Vi) is removed from server S and submitted to server S'
- Why this works
  - For problem 1: clients looking for the same video may find different sets of servers, since their lookups are launched from different spots in the system
  - For problem 2: a mismatch can now be corrected as new requests arrive

Challenge: Chain of Rescheduling
- To reschedule a pending request (Ci, Vi), a client needs to find a server that can serve the request at an earlier time
- The servers found by the client may be busy as well, and none of them may be able to serve (Ci, Vi) earlier
- The client may then try to reschedule the pending requests on each of those servers
- This may result in a chain of rescheduling
- Example: S1 caches [V1, V2] with pending request (C1, V1); S2 caches [V1, V3] with pending request (C2, V3)
  - Client C requesting V2 finds S1, but S1 needs to serve (C1, V1) first
  - C tries to reschedule (C1, V1) and finds S2, but S2 needs to serve (C2, V3) first
  - C tries to reschedule (C2, V3), and so on

Client Design
1. Building the closure set
2. Shaking the closure set
3. Executing the shaking plan

Building Closure Set
- Set ClosureSet = NULL, ServerSet = NULL, and VideoSet = NULL
- Send command Search(v); for each server found, add it to ServerSet
- While ServerSet != NULL:
  - Remove a server, say S, from ServerSet and add S to ClosureSet
  - Contact S for its waiting queue Q
  - For each request (Ci, Vi) in Q, if Vi is not in VideoSet:
    - Send command Search(Vi)
    - For each server found that is in neither ServerSet nor ClosureSet, add it to ServerSet
    - Add Vi to VideoSet
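A runnable sketch of this closure-set construction. The helpers search(video) and waiting_queue(server), which stand in for the Search command and the "contact S for its waiting queue Q" step, are hypothetical names, not the lecture's API.

```python
# Sketch of closure-set construction.  `search(video)` and
# `waiting_queue(server)` are illustrative stand-ins for the Search
# command and the queue lookup described on the slide.

def build_closure_set(v, search, waiting_queue):
    closure_set = set()
    server_set = set(search(v))        # servers caching the requested video
    video_set = set()

    while server_set:
        s = server_set.pop()
        closure_set.add(s)
        for (ci, vi) in waiting_queue(s):       # pending requests on s
            if vi in video_set:
                continue
            video_set.add(vi)
            for other in search(vi):
                # only enqueue servers we have not already visited or queued
                if other not in server_set and other not in closure_set:
                    server_set.add(other)
    return closure_set
```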

Building Closure Set (illustration)
- [Diagram: servers S1, S2, S3, and S4 are collected into the closure set as the queued videos are looked up]

Shaking Closure Set
- Given a set of servers and their service queues, the client tries to find a shaking plan that minimizes its service latency
- A shaking plan is an ordered list of action items, each denoted T([C, V], S, S'), meaning "transfer [C, V] from S to S'"
- The client can try each server as the starting point of shaking
  - For each request (Ci, Vi), try to find another server that can serve it at an earlier time
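A greedy sketch of how such a plan might be assembled, assuming helpers queue_of(server) and expected_start(server, request); these names, and the simplification of not re-evaluating queues after each tentative move, are assumptions for illustration rather than the lecture's algorithm.

```python
# Greedy construction of a shaking plan.  A plan is a list of tuples
# ((C, V), S, S_prime) mirroring the slide's T([C, V], S, S') notation.
# `expected_start(server, req)` is a hypothetical estimate of when `req`
# would begin service on `server`; queues are not re-evaluated after each
# tentative move, which is a simplification.

def build_shaking_plan(closure_set, expected_start, queue_of):
    plan = []
    for s in closure_set:
        for req in list(queue_of(s)):            # pending requests on s
            best, best_t = None, expected_start(s, req)
            for other in closure_set:
                if other is s:
                    continue
                t = expected_start(other, req)
                if t < best_t:                   # strictly earlier elsewhere
                    best, best_t = other, t
            if best is not None:
                plan.append((req, s, best))      # T([C, V], S, S')
    return plan
```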

An Example
- [Figures stepping through the shaking of an example closure set]

Shaking([C, V], S)

Executing Shaking Plan
- For each action T([C, V], S, S'), the client sends server S a message Transfer([C, V], S')
- Upon receiving such a request, S first checks whether [C, V] is still in its queue
  - If it is not, S sends an Abort message to the client
  - Otherwise, S sends a message Add([C, V], L) to S', where L is the expected service latency of [C, V] at S
- When S' receives Add([C, V], L), it checks whether it can serve [C, V] within the next L time units
  - If not, it sends an Abort message to S, which forwards it to the client
  - Otherwise, S' adds [C, V] to its queue and sends an OK message to S
- S then removes [C, V] from its queue and informs the client
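The handshake can be summarized in a small sketch. Only the message names Transfer, Add, Abort, and OK come from the slides; the Server class, its methods, and the queue-position stand-in for latency are illustrative assumptions.

```python
# Sketch of the transfer handshake between the old server S and the new
# server S'.  The class and method names are illustrative; only the
# message names (Transfer, Add, Abort, OK) appear on the slides.

class Server:
    def __init__(self, name):
        self.name = name
        self.queue = []                     # pending (client, video) requests

    def expected_latency(self, req):
        # Placeholder: FIFO queue position as a stand-in for service latency.
        return self.queue.index(req) if req in self.queue else len(self.queue)

    def on_transfer(self, req, target):
        """Client asked us (S) to move `req` to `target` (S')."""
        if req not in self.queue:
            return "Abort"                  # request already served or removed
        L = self.expected_latency(req)
        if target.on_add(req, L) == "OK":   # send Add([C, V], L) to S'
            self.queue.remove(req)          # S drops the request...
            return "OK"                     # ...and informs the client
        return "Abort"                      # S' cannot do better; forward Abort

    def on_add(self, req, L):
        """S' decides whether it can serve `req` within the next L units."""
        if self.expected_latency(req) < L:  # strictly earlier than at S
            self.queue.append(req)
            return "OK"
        return "Abort"
```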

Advantages
- For the shaking client
  - By rescheduling earlier pending requests, the current client itself can be served at an earlier time
  - Each time a service request is rescheduled, that request will also be served at an earlier time
- For the entire system
  - Each shake may balance the server workload only within a small region, but the combined shaking effort of all clients can balance the workload of the entire system
- What factors determine the shaking scope?

Performance Study
- Performance metric
  - Average service latency
- Factors to investigate
  - Video request rate
  - Number of servers
- Default parameters
  - 100 videos, each 1 hour long
  - The popularity of the videos follows a Zipf distribution with skew = 0.5
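As an illustration of this workload model, a short sketch of Zipf-distributed video popularity with skew 0.5; the normalization below is a common convention, and the slides do not spell out the exact formula used in the study.

```python
# Zipf-like popularity for N videos with skew theta = 0.5:
# p(i) proportional to 1 / i**theta for rank i = 1..N.  The normalization
# is a common convention; the exact formula used in the study is not given.

def zipf_popularity(n_videos=100, theta=0.5):
    weights = [1.0 / (i ** theta) for i in range(1, n_videos + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = zipf_popularity()
print(round(probs[0], 4), round(probs[-1], 4))  # most vs. least popular video
```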


Shaking Improvement
- The shaking discussed above is greedy: a request is rescheduled only if it can be served at an earlier time
- Does greedy shaking result in the best performance?

Motivation Example
- Request (C1, V1) is queued at S1 with expected service time T1; at S2 its expected service time would be T2
- If (C1, V1) is transferred from S1 to S2, C1 suffers when T1 < T2
- However, the service requests behind (C1, V1) at S1 all benefit: the service latency of each of them is reduced by |V1|

Observation
- The average service latency is not minimized if we try to serve each request at the earliest possible time!

Grace Shaking
- To decide whether to transfer a request (C1, V1) from S1 (expected service time T1) to S2 (expected service time T2), calculate:
  - Cost: (T2 - T1)^a
  - Gain: N * |V1|, where N is the number of requests waiting behind (C1, V1) at S1
- If Cost < Gain, perform the transfer
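A one-function sketch of the grace-shaking test. Reading N as the number of requests queued behind (C1, V1), naming the exponent alpha, and clamping a negative cost to zero are assumptions drawn from the motivation slide, not statements from the lecture.

```python
# Grace-shaking decision: transfer (C1, V1) from S1 to S2 only if the
# aggregate gain outweighs the cost imposed on C1.  Interpreting N as the
# number of requests queued behind (C1, V1) on S1, and clamping a negative
# (T2 - T1) to zero, are assumptions; `alpha` is the slide's exponent `a`.

def should_transfer(t1, t2, video_len, n_behind, alpha=1.0):
    cost = max(t2 - t1, 0.0) ** alpha      # extra wait imposed on C1
    gain = n_behind * video_len            # latency saved by requests behind
    return cost < gain

# Example: moving the request delays C1 by 5 minutes, but 3 queued
# requests each start a 60-minute video slot earlier.
print(should_transfer(t1=10.0, t2=15.0, video_len=60.0, n_behind=3))  # True
```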