Multicast with Cache (Mcache): An Adaptive Zero-Delay Video-on-Demand Service
Sridhar Ramesh, Injong Rhee, and Katherine Guo
INFOCOM 2001
Abstract
A demand-driven approach towards VoD services is proposed.
Techniques: prefix caching, batching, and patching.
Categories of VoD schemes
Closed-loop: a demand-driven approach; the server allocates channels and schedules transmission of video streams based on client requests.
Open-loop: the server bandwidth usage is independent of the request arrival rate. An open-loop approach wastes bandwidth when the request frequency is low.
Multicast Cache (Mcache) properties
Closed-loop scheme.
Clients do not experience any playout delay.
The amount of disk space at clients and at caches has comparatively little impact on its performance.
It does not require any a priori knowledge about client request rates or client disk space.
System environment
Video: the prefix is the first few minutes of each video; the body is the rest of the clip after the prefix.
Server: stores the video clips and transmits the body to the client upon request.
Cache (proxy): a local prefix cache storing the prefix of each video.
Client: receives transmissions from at most two channels.
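As a rough illustration of the client-side constraint, a client can be modeled as something that listens to at most two multicast channels at once (for example, an object stream plus a patch stream). This is only a sketch; the class and method names are hypothetical, not from the paper.

```python
# Toy model of the client constraint: at most two concurrent channels.
class Client:
    MAX_CHANNELS = 2

    def __init__(self):
        self.channels = set()

    def join(self, channel_id) -> bool:
        """Join a multicast channel; refuse if the two-channel limit is reached."""
        if len(self.channels) >= self.MAX_CHANNELS:
            return False
        self.channels.add(channel_id)
        return True

    def leave(self, channel_id) -> None:
        self.channels.discard(channel_id)
```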
Mcache scheme
Transmissions:
Object transmission – multicasts the entire body.
Patch transmission – multicasts the portion of the clip right after the prefix.
Client actions:
Request the prefix from the cache.
Request the clip body from the video server.
Mcache scheme (Cont'd)
Server schedule: for each request, the server either schedules a patch transmission and instructs the client to join both the existing object transmission and the patch transmission, or it schedules a new object transmission.
Cutoff threshold (y): if the existing object transmission has been running for more than y time units, a new object transmission is created; otherwise, a patch is used.
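A minimal sketch of the cutoff rule, assuming `elapsed` is the time since the existing object transmission started (the function name is illustrative; the full request handling is sketched after the algorithm slides below).

```python
def should_start_new_object_transmission(elapsed: float, y: float) -> bool:
    """Cutoff rule: if the existing object transmission has been running for
    more than y time units, start a new one; otherwise serve with a patch."""
    return elapsed > y
```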
Mcache(u, x, y, L) constants
u – the request time
x – the prefix length
y – the cutoff threshold
L – the length of the movie body
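The quantities above can be grouped as follows; this is only a hedged illustration of the notation (the class is not part of the paper), and the later sketches simply pass these quantities as plain function arguments.

```python
from dataclasses import dataclass

@dataclass
class McacheRequest:
    """Notation of the Mcache(u, x, y, L) scheme, per request and per clip."""
    u: float  # request time
    x: float  # prefix length
    y: float  # cutoff threshold
    L: float  # length of the movie body
```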
Algorithm
Batching:
If there is an object transmission scheduled to start in [u, u+x), the client simply joins this multicast when it starts.
If there is no object transmission that has started in [u-y, u) or is scheduled to start in [u, u+x), the server schedules a new object transmission at the latest possible time, u+x.
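A sketch of the batching branch, under the assumption that `object_starts` lists the start times (past and already scheduled) of object transmissions for this clip; the names and the return convention are illustrative.

```python
def batching_decision(u, x, y, object_starts):
    """Return ('join', start) when the client can wait for a scheduled object
    transmission, ('new', u + x) when a fresh one must be scheduled at the
    latest possible time, or None when the patching case (next slides) applies."""
    upcoming = [s for s in object_starts if u <= s < u + x]
    if upcoming:
        return ('join', min(upcoming))   # batching: join the multicast when it starts
    recent = [s for s in object_starts if u - y <= s < u]
    if not recent:
        return ('new', u + x)            # no usable transmission: schedule a new one
    return None                          # an object transmission started in [u-y, u): patch
```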
Algorithm (Cont'd)
Patching:
When there is an object transmission that started at t ∈ [u-y, u), the client joins it at u+x instead of u (to facilitate batching together requests for the same patch).
The client then needs a patch for the first u+x-t units of the clip body.
Scheduling the patch transmission: if there is a patch transmission scheduled to start before the client finishes receiving the prefix from the cache, the client can join that patch channel when it starts.
Algorithm (Cont'd)
Otherwise, the server has to schedule a patch transmission to start before u+x. Because the existing object transmission started at t, this patch should start no later than t+y (y is the cutoff threshold). So the starting time s of the patch is set to s = min{t+y, u+x}, and the patch length is u+x-t.
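Continuing the sketch above for the patching case, assuming the most recent object transmission started at t ∈ [u-y, u) and `patch_starts` lists already scheduled patch transmissions. Batching several requests onto one patch (and growing its length accordingly) is glossed over here; the names are illustrative.

```python
def patching_decision(u, x, y, t, patch_starts):
    """The client joins the object transmission at u + x and needs a patch for
    the first u + x - t units of the body."""
    patch_length = u + x - t
    # Join an existing patch if one starts before the prefix finishes playing at u + x.
    usable = [s for s in patch_starts if u <= s < u + x]
    if usable:
        return ('join_patch', min(usable), patch_length)
    # Otherwise schedule a new patch, no later than t + y and no later than u + x.
    s = min(t + y, u + x)
    return ('new_patch', s, patch_length)
```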
Patching (figure): a timeline marked at u-y, t, u, and u+x, with a playout table for channels C0 (object stream) and C1 (patch). At time t, C0 is at x; at time u, C0 is at x+u-t and C1 is at 0; at time u+x, C0 is at 2x+u-t and C1 is at x. Labels mark the prefix, the patch, and the shared object stream.
Segmented Mcache (SMcache)
The body of the video clip is broken into N segments; L1, L2, ..., LN are the lengths of segments 1, 2, ..., N respectively. Let x1 be the length of the prefix and y1 be the cutoff parameter of segment 1.
Virtual prefix: after the patch transmission, the client virtually makes the request for the next segment. The server can delay serving the request for the second segment by up to L1 - y1.
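A toy calculation of the scheduling slack the virtual prefix creates for segment 2; the numbers below are made up purely for illustration.

```python
# Illustrative values only: segment-1 length and cutoff, in minutes.
L1 = 10.0   # length of segment 1
y1 = 4.0    # cutoff parameter of segment 1

# Once segment 1 is being received, the server may defer serving the
# (virtual) request for segment 2 by up to L1 - y1 time units.
max_deferral_for_segment_2 = L1 - y1
print(max_deferral_for_segment_2)   # 6.0
```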
Algorithm at the server
SMcache with limited client disk space
Partitioned SMcache Each regional cache stores the first n segments of the body. The main server stores the remaining N-n segments.
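A minimal sketch of the partitioning rule (function and argument names are illustrative, not from the paper).

```python
def serving_node(segment_index: int, n: int) -> str:
    """Segments are numbered 1..N: the regional cache stores the first n body
    segments, the main server the remaining N - n."""
    return "regional cache" if segment_index <= n else "main server"
```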
Other schemes
Greedy Disk-conserving Broadcasting (GDB): requires fewer resources than Skyscraper Broadcasting while guaranteeing the same quality. Both the server resources and the client storage space required by GDB are close to the minimum achievable by any disk-conserving broadcast scheme.
Controlled multicast (threshold-based multicast): the most efficient “client-pull” technique for delivering “cold” video objects.
Other schemes (Cont'd)
Catching: clients catch up with the current broadcast cycle by retrieving the missing frames from the server via a unicast channel.
Selective catching: determines when to apply catching and when to apply controlled multicast.
Dynamic Skyscraper.
Server channel usage vs request rate
Bursty arrivals
Server load vs prefix size
Partitioned SMcache: network load at the server and cache (compared with Selective Catching)
Main server and Regional cache costs
Conclusions
A closed-loop scheme, called Mcache, for providing zero-delay video-on-demand services is proposed.
SMcache is a generalized and improved version of Mcache in which the clip is partitioned into several segments.
SMcache performs significantly better than dynamic skyscraper when the prefix is very small.