Video On Demand
NUS.SOC.CS5248-2010, Roger Zimmermann (based in part on slides by Ooi Wei Tsang)
Video on Demand
One video server. Lots of video data. Many clients. Clients want to watch at any time.
In this lecture, we will look at some representative research in video-on-demand protocols. Imagine you have a video server loaded with tons of videos. Clients can send a request to the server asking to watch a particular video, and the server has to respond by streaming the requested video to the client. We want to be able to support a large number of clients, and clients should be able to start watching at any time.
Assumptions
Constant-bitrate stream. Perfect network transport.
All of the work we will study today makes simplifying assumptions about the data and the network. First, it assumes that the video is encoded at a constant bitrate. Second, it assumes that the network is perfect: no packet loss, no jitter, no fluctuation in the available bandwidth. These assumptions are not reasonable for the Internet, but they are suitable for a CATV network.
Unicast Solution
One channel per client. No start-up latency. No client buffer. Low client bandwidth. Large server bandwidth. Not scalable.
Even with these assumptions, the problem of serving video to a large number of clients remains challenging. Let's look at the most straightforward solution first. Suppose the server sends a new stream in response to each new client request. Under this scheme there is little start-up latency: the client only has to wait one round-trip time to receive the stream. The client bandwidth requirement is also small, since it only needs to download at the rate the video is played back, and no buffer is required at the client. The server, however, needs a large amount of bandwidth, proportional to the number of clients, so the scheme is not scalable at all.
Multicast Solution
Batching: aggregate client requests, serve them using multicast, clients have to wait. No client buffer. Low client bandwidth.
"Scheduling Policies for an On-Demand Video Server with Batching", Dan, Sitaram, Shahabuddin, IBM.
The other obvious solution is to use multicast to serve clients requesting the same video. Client requests that arrive within a short time period (say, 10 minutes) can be served together using a single multicast stream. The drawback is that the clients have to wait; the obvious trade-off here is between client waiting time and server bandwidth. Just like the unicast solution, the client does not need any buffering or any additional bandwidth. This approach is called batching, because clients are served in batches.
Multicast Solution
User-centered approach: schedule data based on user requests. Data-centered approach: don't care about individual users, just broadcast popular videos.
Batching is considered a user-centered approach: data streams are scheduled based on user requests, and if no client requests a stream, no stream is sent. The other approach, which we will focus on today, is the data-centered approach. Under this approach the server just multicasts videos on specific channels and does not care about individual users; a user simply tunes in to watch the video she desires. This is similar to TV or in-flight VoD, and it is useful for broadcasting hot (popular) videos.
Multicast Solution
Batching. Staggered Broadcast.
Let's begin by looking at the simplest way to implement data-centered VoD: staggered broadcast.
Staggered Broadcast
[Figure: channels C0, C1, C2, ... each carry a copy of the video, with staggered start times.]
Staggered broadcast simply begins broadcasting the video on each channel periodically, at regular intervals. A client tunes in to a channel and waits for the beginning of the video.
Staggered Broadcast
2-hour video, 5 minutes waiting time.
Number of channels = 2 x 60 / 5 = 24. Required bandwidth = 1.5 Mbps x 24 = 36 Mbps.
The server needs a large amount of bandwidth to support staggered broadcast. Suppose we have a 2-hour video and we set the maximum client waiting time to 5 minutes. The server will need to send the video on 24 different channels, which translates to 36 Mbps per video (assuming a 1.5 Mbps VCD-quality stream). The advantage is that the client does not require a buffer.
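To make the arithmetic easy to replay, here is a minimal Python sketch of the same calculation; the 120-minute length, 5-minute wait, and 1.5 Mbps rate are the numbers from this slide, and the function name is just for illustration.

```python
import math

# Staggered broadcast: a new copy of the video starts every max_wait minutes,
# so the number of channels is video_length / max_wait (rounded up), and the
# server bandwidth per video is that many copies of the stream.
def staggered_broadcast_cost(video_minutes, max_wait_minutes, video_rate_mbps):
    channels = math.ceil(video_minutes / max_wait_minutes)
    return channels, channels * video_rate_mbps

channels, server_bw = staggered_broadcast_cost(120, 5, 1.5)
print(channels, server_bw)   # 24 channels, 36.0 Mbps
```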
Multicast Solution Batching Staggered Broadcast clients have to wait No client buffer Low client bandwidth Huge server bandwidth
Multicast Solution Batching Staggered Broadcast Periodic Broadcast
Periodic Broadcast
[Figure: the video is divided into segments; each segment is broadcast repeatedly on its own channel C0, C1, C2, ...]
Periodic broadcast uses a technique that is very similar to staggered broadcast. However, instead of broadcasting the entire video on each channel, periodic broadcast divides the video into segments, and each segment is broadcast on a different channel. The client tunes in to different channels at different times to receive the different segments of the video. Because the client has to wait for the beginning of the first segment before it can start watching, the waiting time is equal to the length of the first segment.
Pyramid Broadcast
[Figure: the video is divided into segments of geometrically increasing size, one per channel C0, C1, C2, ...]
If we make the first segment smaller, we can reduce the waiting time! This is the basic idea behind pyramid broadcast. Pyramid broadcast divides a video into segments whose sizes form an increasing geometric series. The client receives data from two channels at a time.
Pyramid Broadcast
Let's look at pyramid broadcast in greater detail. We want to analyze the server bandwidth requirement, the client waiting time, the client buffer requirement, and the client bandwidth requirement. First, note that if we send each segment on its channel at the same rate as the video, we will encounter "hiccups", i.e. discontinuities in video playback. We now analyze how much bandwidth the server needs to broadcast one video.
Analysis of Pyramid Broadcast
Notation:
B : total available bandwidth
Bv : bandwidth of the video
Tv : total length of each video
K : number of segments per video
Ti : length of segment i
α : factor of the geometric series
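Written out with this notation (assuming the standard geometric form in which each segment is α times longer than the previous one), the segment lengths and the first-segment length are:

$$
T_i = \alpha^{\,i-1}\, T_1, \qquad \sum_{i=1}^{K} T_i = T_v \;\;\Longrightarrow\;\; T_1 = \frac{(\alpha - 1)\, T_v}{\alpha^{K} - 1}.
$$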
Channel Bandwidth
Playback time of segment i = Ti. Download time of segment i+1 = Ti+1 Bv / Bi. If the download finishes only after segment i has ended, the client just misses it ("D'oh!"). The download time for segment i+1 therefore needs to be smaller than Ti for it to arrive in time.
Channel Bandwidth
Since Ti+1 = α Ti, the condition Ti+1 Bv / Bi ≤ Ti gives Bi ≥ α Bv: each channel must transmit at least α times faster than the video's playback rate.
Start-up Latency
Worst case waiting time = time until the next broadcast of segment 1 begins = T1 Bv / B1, the transmission time of the first segment; with B1 = α Bv this is T1 / α.
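A small Python sketch of these formulas, assuming one video per channel and channels running at the minimum rate Bi = α Bv derived above; the parameter values (K = 6, α = 2.5) are illustrative only.

```python
# Pyramid broadcast: segment lengths and worst-case start-up latency.
# Assumes one video per channel and channel rate Bi = alpha * Bv (the minimum),
# so the worst-case wait is the transmission time of segment 1, T1 / alpha.
def pyramid_parameters(Tv_minutes, K, alpha):
    T1 = Tv_minutes * (alpha - 1) / (alpha ** K - 1)   # length of the first segment
    segments = [T1 * alpha ** i for i in range(K)]     # geometric series of lengths
    worst_wait = T1 / alpha                            # worst-case start-up latency
    return segments, worst_wait

segments, wait = pyramid_parameters(Tv_minutes=120, K=6, alpha=2.5)
print([round(t, 2) for t in segments])   # [0.74, 1.85, 4.63, 11.57, 28.92, 72.3]
print(round(wait, 2))                    # about 0.3 minutes of waiting
```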
Optimal T1
[Figure: the optimal choice of T1; the value 2.5 is indicated on the slide.]
Storage Requirements
Pyramid Broadcast Large client bandwidth (KBv) Huge client buffer (70–80% Tv)
Permutation-based Pyramid Broadcast
Permutation-based pyramid broadcast (PPB) is a combination of staggered broadcast and pyramid broadcast (PB). It has three main differences from PB: (1) each segment is staggered and broadcast on p channels; (2) the client downloads from one channel at a time; (3) the client can pause while downloading a segment and resume later from another channel.
Channel Bandwidth
Playback time of segment i = Ti. Download time of segment i+1 = Ti+1 Bv / Bi. If the client just misses a staggered broadcast of segment i+1, it must wait for the next one; the resulting interval X (waiting plus downloading) needs to be smaller than Ti for segment i+1 to arrive in time.
Channel Bandwidth
Client Latency
Storage Requirement One channel at a time Can pause and wait
Storage Requirement
[Figure: the client pauses while downloading segment k-1 and resumes with segment k from another channel.] Within the time X between pausing and resuming, the client had better not consume all of the data in its buffer.
Storage Requirement
Comparisons
Scheme  | Storage | Server's BW | Client's BW
Pyramid | 70%     | -           | KBv
PPB     | 20%     | (+p)KBv     | 2-3 Bv
PPB is a huge improvement in client access latency, storage requirement, and bandwidth, at the expense of larger server bandwidth.
Carter, Long and Paris, "Video on Demand Broadcasting Protocols"
Pyramid Broadcasting
Skyscraper Broadcasting
Observation: the storage requirement is affected by the size of the largest chunk. So, let's limit the size of the largest chunk!
The next broadcasting scheme we will see today is skyscraper broadcasting. It differs from the previous schemes in that (1) it limits the size of the largest chunk, (2) the series is designed to reduce the channel bandwidth and the client buffer, (3) the channel bandwidth is the same as the video bandwidth, and (4) the client has to download from two channels at a time.
Pyramid vs. Skyscraper
[Figure: segment sizes grow without bound under pyramid broadcasting but are capped at W under skyscraper broadcasting.]
Skyscraper Broadcasting
Uses the series 1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52, ..., with segment sizes capped at W (so the tail of the series becomes ..., W, W, W).
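A short sketch that reproduces this series, assuming the usual skyscraper recurrence (which generates exactly the numbers listed above) and the cap W on the largest chunk; the function name is illustrative.

```python
# Skyscraper broadcasting: relative segment sizes 1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52, ...
# capped at W. Recurrence: f(1)=1, f(2)=f(3)=2, then
#   f(n) = 2*f(n-1) + 1  if n % 4 == 0
#   f(n) = 2*f(n-1) + 2  if n % 4 == 2
#   f(n) = f(n-1)        otherwise
def skyscraper_series(K, W):
    sizes = []
    for n in range(1, K + 1):
        if n == 1:
            f = 1
        elif n in (2, 3):
            f = 2
        elif n % 4 == 0:
            f = 2 * sizes[-1] + 1
        elif n % 4 == 2:
            f = 2 * sizes[-1] + 2
        else:
            f = sizes[-1]
        sizes.append(min(f, W))   # limit the size of the largest chunk
    return sizes

print(skyscraper_series(K=12, W=52))
# [1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52, 52]
```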
Skyscraper Example
Comparisons
Scheme     | Storage | Server's BW | Client's BW
Pyramid    | 70%     | -           | KBv
PPB        | 20%     | (+p)KBv     | 2-3 Bv
Skyscraper | 10%     | KBv         | 1-2 Bv
Carter, Long and Paris, "Video on Demand Broadcasting Protocols"
Other schemes
Pagoda Broadcasting: uses the series 1, 3, 5, 15, 25, 75, 125, ...
Harmonic Broadcasting: equal segment sizes, varies the bandwidth instead!
There are many other broadcasting schemes. To name just a few: pagoda broadcasting, which uses a different series, and harmonic broadcasting, which uses equal segment sizes but varies the channel bandwidth instead.
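As a rough illustration of the harmonic idea (a sketch of the standard harmonic allocation, not taken from these slides): if the video is split into K equal segments and segment i is broadcast on its own channel at rate Bv/i, the total server bandwidth is Bv times the K-th harmonic number, which grows very slowly with K.

```python
# Harmonic broadcasting sketch: K equal segments, segment i broadcast at rate Bv / i.
# Total server bandwidth = Bv * H(K), where H(K) = 1 + 1/2 + ... + 1/K.
def harmonic_server_bandwidth(Bv_mbps, K):
    return Bv_mbps * sum(1.0 / i for i in range(1, K + 1))

print(round(harmonic_server_bandwidth(Bv_mbps=1.5, K=30), 2))
# about 6 Mbps, with a start-up delay of roughly one segment (Tv / 30)
```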
Multicast Solution Batching Staggered Broadcast Periodic Broadcast Sending rate ≥ playback rate May need multiple channels Need additional client buffer Need to wait
Multicast Solution Batching Staggered Broadcast Periodic Broadcast Patching
Patching
[Figure: timeline with an ongoing multicast stream and a unicast patch stream starting at the client's request.]
All periodic broadcast protocols require the client to wait before viewing the video; this is sometimes called "near" VoD. Patching is a "true" VoD protocol: a client can start watching immediately after its request, provided the server has enough channels. The server multicasts a new video based on a previous request. Suppose a client wants to join and watch the same video. The server can unicast a patch stream to that client immediately. The client downloads both the patch stream and the multicast stream: the patch stream is played back immediately, while the multicast stream is buffered. When the patch stream is done playing, the client continues playback from the buffered multicast stream.
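A minimal sketch of the client-side behaviour just described, under the assumption of reliable, in-order delivery; the class and method names are purely illustrative, not from any real streaming API.

```python
# Patching client sketch: play the unicast patch immediately while buffering the
# multicast stream, then continue playback from the buffer once the patch ends.
class PatchingClient:
    def __init__(self, offset_minutes):
        self.offset = offset_minutes   # how far the multicast is ahead (= patch length)
        self.buffer = []               # multicast data saved for later playback

    def on_unicast_chunk(self, chunk):
        self.play(chunk)               # patch carries video[0 .. offset], played at once

    def on_multicast_chunk(self, chunk):
        self.buffer.append(chunk)      # multicast carries video[offset .. end]

    def on_patch_finished(self):
        for chunk in self.buffer:      # switch over to the buffered multicast data
            self.play(chunk)

    def play(self, chunk):
        print("playing", chunk)        # stand-in for the actual decoder/renderer
```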
Patching Window: W
[Figure: timeline with multicast streams and client requests arriving within the patching window.]
The patching window W is a threshold: if a client request arrives within the patching window, a patch stream is sent. If the patching window is large, the client may, in the worst case, have to buffer a lot of data. If W is larger than the client buffer B, the server has to start a new multicast stream.
Grace Patching
If a client arrives W minutes into the current multicast and W < B, the client buffers video[W .. end] from the multicast while playing a unicast patch. This is called grace patching.
Exercise: 30-minute video, 1 client arrival per minute. Total data delivered = ?
Scenario 1: B = 15 mins. 30-minute video, 1 client arrival per minute. Total data delivered = ?
Scenario 2: B = 5 mins. 30-minute video, 1 client arrival per minute. Total data delivered = ?
Scenario 3: B = 2 mins. 30-minute video, 1 client arrival per minute. Total data delivered = ?
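The three scenarios above can be replayed with a short script. This is only a sketch under assumed modelling choices that are not spelled out on the slides: clients arrive at minutes 0, 1, ..., 29; the first client (and any client whose offset from the latest multicast exceeds B) triggers a full multicast; every other client receives a unicast patch equal to its offset. Treat the printed totals as illustrating the trend (a larger buffer B means fewer multicasts but longer patches), not as the official answers.

```python
# Grace patching sketch: total minutes of video data sent by the server for a
# 30-minute video with one client arrival per minute over a 30-minute window.
# Assumption: a client whose offset from the latest multicast exceeds the client
# buffer B triggers a fresh full multicast; otherwise it gets a patch of length
# equal to its offset.
def grace_patching_total(video_min=30, buffer_min=15, horizon_min=30):
    total = 0
    last_mcast = None
    for t in range(horizon_min):                  # one client arrives at each minute t
        if last_mcast is None or t - last_mcast > buffer_min:
            last_mcast = t
            total += video_min                    # start a new full multicast stream
        else:
            total += t - last_mcast               # unicast patch covering video[0 .. offset]
    return total

for B in (15, 5, 2):
    print(f"B = {B:2d} min: {grace_patching_total(buffer_min=B)} minutes of data")
```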
Summary Batching (User Centered) Staggered Broadcast (Data Centered) Periodic Broadcast (Data Centered) Patching (True VOD)