Distributed Servers Architecture for Networked Video Services
S.-H. Gary Chan, Member, IEEE, and Fouad Tobagi, Fellow, IEEE
Contents
- Introduction
- System Architecture
- Caching Schemes for Video-on-Demand Services
- Analytical Model
- Results
- Conclusion
Introduction (1)
- In a VoD system, the central server has limited streaming capacity.
- Distributed servers architecture: a hierarchy of servers in which local servers cache the videos, with some servers placed close to a user cluster.
- Goal: determine which movies, and how much video data of each, should be stored locally so as to minimize the total system cost.
Introduction (2)
- Three models of the distributed servers architecture: uncooperative with unicast delivery, uncooperative with multicast delivery, and cooperative (communicating servers).
- This paper studies a number of caching schemes, all employing circular buffers and partial caching: all requests arriving within the cache window duration are served from the local cache.
- Claim: partial caching on temporary storage can lower the system cost by an order of magnitude.
System Cost (1)
- The cost of storing a movie in a local server depends on how much storage is used and for how long.
- Example: a 1-GB disk costs about $200, amortized over one year; $200/(365 × 6 h) ≈ $0.091/h per GB.
- At a streaming rate of 5 Mb/s, one minute of video is 0.0375 GB, so holding it in storage costs 0.091 × 0.0375 / 60 ≈ $5.71 × 10⁻⁵ per minute.
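The slide's arithmetic can be checked with a short script (a sketch; the 365 × 6 figure is taken from the slide as the amortization period in hours):

```python
# Storage-cost arithmetic from the slide, spelled out step by step.
DISK_COST = 200.0                 # $ for a 1-GB disk (slide's example)
AMORT_HOURS = 365 * 6             # amortization period in hours, per the slide

rate_per_gb_hour = DISK_COST / AMORT_HOURS        # ≈ $0.091 per GB per hour

STREAM_RATE_MBPS = 5.0            # streaming rate in Mb/s
gb_per_min = STREAM_RATE_MBPS * 60 / 8 / 1000     # 0.0375 GB per minute of video

# Cost of holding one minute of video in storage for one minute of time
# (the per-hour rate divided by 60):
cost_per_min = rate_per_gb_hour * gb_per_min / 60  # ≈ $5.71e-5
print(f"{cost_per_min:.3g} $/min")
```

The division by 60 converts the per-GB-hour rate into a per-minute holding cost, which is the step the slide leaves implicit.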
System Cost (2)
- The cost of streaming a movie from the central server ranges from low (Internet) to quite high (satellite channels); e.g., streaming a 100-min movie may cost $3.
- γ = ratio of storage cost to network cost; a small γ means storage is relatively cheap.
- The design trades off network cost against storage cost.
Video Popularity
- Movie popularity is skewed: high-demand movies should be cached locally, low-demand movies streamed directly from the central server, and the intermediate class partially cached.
- Geometric video popularity: "r/m" means r% of the requests ask for m% of the movies.
- Example system: 500 movies, 4000 requests/h.
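The skew can be illustrated with a geometric popularity profile; the decay parameter q below is a hypothetical choice, since the slide only fixes the catalogue size and request rate:

```python
# Geometric video popularity: movie i is requested with probability
# proportional to q**i (q is a hypothetical skewness value).
M = 500                               # number of movies (from the slide)
q = 0.98                              # geometric decay, chosen for illustration
weights = [q ** i for i in range(M)]
total = sum(weights)
probs = [w / total for w in weights]

# "r/m" summary: what fraction of requests go to the top 20% of movies?
share_top20 = sum(probs[: M // 5])
print(f"top 20% of movies attract {share_top20:.0%} of requests")
```

With q = 0.98 the top fifth of the catalogue draws most of the load, which is the regime where local caching of popular titles pays off.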
Caching Scheme (1): Unicast Delivery
- A new request for a video opens a network channel for T_h minutes to stream the video from the central server.
- The local server caches W minutes of data in a circular buffer.
- All requests arriving within the window W form a group served from that buffer; a request arriving more than W minutes later starts a new stream.
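The grouping rule above can be sketched as a simulation (parameter values below are assumed for illustration, not taken from the paper):

```python
import random

def count_streams(arrivals, W):
    """Count central-server streams: the first request in a group opens a
    stream; requests within W minutes of the group opener share its buffer."""
    streams, group_start = 0, None
    for t in arrivals:                       # arrivals must be sorted
        if group_start is None or t - group_start > W:
            streams += 1                     # opener: new unicast channel
            group_start = t
    return streams

random.seed(1)
lam, W, horizon = 2.0, 5.0, 1000.0           # req/min, window, sim length (min)
t, arrivals = 0.0, []
while True:
    t += random.expovariate(lam)             # Poisson process: exponential gaps
    if t >= horizon:
        break
    arrivals.append(t)

# Prediction: roughly one stream every W + 1/lam minutes.
print(count_streams(arrivals, W), horizon / (W + 1 / lam))
```

The simulated count should sit near the horizon/(W + 1/λ) prediction used in the analysis slides.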
Caching Scheme (2): Multicast Delivery with Pre-storing
- The local server permanently stores the leading portion ("leader") of each movie, of size W minutes.
- The first W minutes of video are served by the local server; the remaining T_h − W minutes are multicast from the central server.
Caching Scheme (3): Multicast Delivery with Pre-caching
- The local server decides whether or not to cache a multicast video.
- Two schemes, depending on the multicast schedule: periodic multicasting with pre-caching, and request-driven pre-caching.
Caching Scheme (4)
- Periodic multicasting with pre-caching: the movie is multicast at regular intervals of W minutes; the local server pre-caches W minutes of video data; the multicast stream is held for T_h minutes, or aborted at the end of W minutes if no request arrives.
- Request-driven pre-caching: the start of each multicast stream is driven by a request arrival; requests arriving within the next W minutes can be served by the local server.
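One consequence of the periodic scheme can be sketched numerically: with Poisson arrivals, a multicast interval attracts at least one request with probability 1 − e^(−λW), so the expected hold time of a periodic stream (held for T_h on a hit, aborted after W otherwise — a simplifying assumption) is:

```python
import math

def expected_hold(lam, W, Th):
    """Expected hold time of one periodic multicast stream: kept for the
    full movie (Th) if any request arrives in its W-minute window,
    otherwise aborted after W minutes (simplified model)."""
    p_hit = 1 - math.exp(-lam * W)     # Poisson: P(at least one arrival in W)
    return p_hit * Th + (1 - p_hit) * W

print(expected_hold(lam=0.1, W=10, Th=90))   # ≈ 60.6 minutes
```

Under request-driven pre-caching every stream corresponds to a hit by construction, which is the source of its cost advantage at low arrival rates.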
Caching Scheme (5): Communicating Servers ("Server Chain")
- Video data can be transferred from one local server to another over the server network.
- LS1 receives data from the central server on a unicast stream and buffers W minutes of video data; requests at LS2 arriving within W minutes are served by LS1.
- The chain is broken when two successive buffer allocations are separated by more than W minutes.
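The chain-breaking rule differs from the unicast grouping rule in that the gap is measured between successive requests, not from the group opener; a minimal sketch:

```python
def central_channels(arrival_times, W):
    """Channels to the central server under chaining: a new channel is
    needed whenever successive requests are more than W minutes apart."""
    channels, prev = 0, None
    for t in arrival_times:              # sorted arrival times, in minutes
        if prev is None or t - prev > W:
            channels += 1                # chain broken (or first request)
        prev = t
    return channels

# Requests at minutes 0, 3, 7, 20, 24 with W = 5: the 13-minute gap
# before t=20 breaks the chain, so two central-server channels are used.
print(central_channels([0, 3, 7, 20, 24], 5))   # → 2
```

Under the unicast grouping rule the same trace would need three streams (0, 7, and 20 would each be openers), which is why chaining uses fewer central-server channels at the same W.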
Scheme Analysis
- Assumptions: movie length T_h min, streaming rate b_0 MB/min, requests form a Poisson process.
- Quantities of interest: the average number of network channels S, the average buffer size B, and the total system cost, combining network-channel cost and γ-weighted storage cost.
Analysis: Unicast
- Average interarrival time between streams: W + 1/λ.
- By Little's formula, the average number of channels (equivalently, of buffers allocated) is S = T_h / (W + 1/λ).
- This yields a system cost C(W) ∝ (1 + γW) · T_h / (W + 1/λ), up to a normalization of the cost units.
- To minimize the cost, either cache the whole movie or nothing: if λ < γ, then B = W = 0 (no caching); if λ > γ, then B = T_h (cache the entire movie).
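The all-or-nothing result can be checked numerically with the reconstructed per-movie cost C(W) = (1 + γW) · T_h/(W + 1/λ); the normalization (one cost unit per channel-minute, γ per buffered minute) is an assumption made here:

```python
def cost(W, lam, gamma, Th=90.0):
    """Reconstructed unicast cost: T_h/(W + 1/lam) concurrent channels,
    each paying 1 per unit time for the network plus gamma*W for its
    W-minute circular buffer."""
    return (1 + gamma * W) * Th / (W + 1 / lam)

grid = [w / 10 for w in range(901)]        # candidate W from 0 to T_h = 90

best_cheap = min(grid, key=lambda w: cost(w, lam=1.0, gamma=0.1))
best_dear = min(grid, key=lambda w: cost(w, lam=0.05, gamma=0.1))
print(best_cheap, best_dear)               # 90.0 (cache all), 0.0 (cache nothing)
```

The derivative of C(W) has the constant sign of γ/λ − 1, so the minimum always sits at an endpoint of [0, T_h], never in the interior — the all-or-nothing rule.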
Analysis: Multicast with Pre-storing
- The buffer size depends on whether any request arrives within a multicast interval.
- For Poisson arrivals of rate λ, the probability of at least one arrival within W is 1 − e^(−λW).
Analysis: Multicast with Pre-caching
- Periodic multicasting with pre-caching.
- Request-driven pre-caching: the analysis uses the probability that the stream has not yet been started by the server.
Analysis: Communicating Servers
- With many local servers, a subsequent request is likely to come from a different server.
- The average number of concurrent requests is λT_h.
- The analysis derives: the average interval between successive channel allocations, the average time data is kept in the server network, and the average request interarrival time given that successive requests are within W of each other.
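The first of these quantities can be estimated by simulation; the closed form e^(λW)/λ below is a reconstruction via Wald's identity (a gap exceeds W with probability e^(−λW), so a channel allocation occurs after a geometric number of exponential gaps), stated here as an assumption rather than the paper's formula:

```python
import math, random

random.seed(0)
lam, W, n = 0.5, 2.0, 200_000            # req/min, chain window, sample size
gaps = [random.expovariate(lam) for _ in range(n)]

# A new central-server channel is allocated whenever a gap exceeds W.
intervals, acc = [], 0.0
for g in gaps:
    acc += g
    if g > W:                            # chain broken: channel allocated
        intervals.append(acc)
        acc = 0.0

estimate = sum(intervals) / len(intervals)
closed_form = math.exp(lam * W) / lam    # Wald: E[N] * E[gap] = e^(lam*W)/lam
print(estimate, closed_form)
```

The two numbers should agree to within Monte-Carlo noise, supporting the reconstructed expression.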
Results: Unicast
- For unicast, given λ, the tradeoff between S and B is linear with slope −λ.
- The optimal caching strategy is all or nothing.
- The determining factors for caching a movie are its popularity skewness and the cheapness of storage.
Results: Multicast with Pre-storing
- There is an optimal W that minimizes the cost.
- The storage component of the curve becomes steeper as γ increases.
- Parameters: λ = 5 req/min, N_s = 20, γ = 0.002/min, T_h = 90 min.
Results: Multicast
- Cost comparison of the pre-storing and pre-caching schemes.
- Parameters: N_s = 20, γ = 0.002/min, T_h = 90 min.
Results: Communicating Servers
- At a low arrival rate, a large W is needed to form a chain; the higher the request rate, the easier it is to chain.
- Parameters: γ = 0.002/min, T_h = 90 min.
Results
- Multicast achieves its cost reduction only within a certain window of arrival rates.
- Chaining should not cost more than the other systems unless local communication costs are very high.
- Parameters: γ = 0.002/min, T_h = 90 min.
System Using Batching and Multicast
- Requests arriving within a period of time are grouped together and served by one multicast stream.
- Batching allows fewer multicast streams to be used, thus lowering the associated cost, at the price of some start-up delay.
- Users are assumed to tolerate a maximum delay D_max.
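A minimal sketch of the batching rule, with requests joining a pending batch within D_max of its opener (the arrival times below are hypothetical):

```python
def batch(arrivals, d_max):
    """Group sorted arrival times into batches: the first pending request
    opens a batch, requests within d_max minutes of it join, and one
    multicast stream then serves the whole batch."""
    batches, start = [], None
    for t in arrivals:
        if start is None or t - start > d_max:
            batches.append([t])          # open a new batch
            start = t
        else:
            batches[-1].append(t)        # join the pending batch
    return batches

groups = batch([0, 2, 5, 10, 12], d_max=6)
print(len(groups), [len(g) for g in groups])   # 2 batches: sizes [3, 2]
```

Every request in a batch waits at most d_max minutes, matching the tolerated-delay assumption on the slide.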
Results: Request Batching
- When N_s is low enough, distributed servers based on unicast delivery can achieve a lower cost.
- Combined system: batching is used below a threshold arrival rate λ′, and the local server above it (λ > λ′ ⇒ local server).
- Parameters: D_max = 6 min, T_h = 90 min.
Results
- The unicast and combined schemes require similar cost.
- Multicast can reduce the cost further, but not significantly.
- Chaining achieves the greatest cost saving.
- Parameters: γ = 0.002/min, T_h = 90 min.
Conclusion
- A distributed servers architecture provides video-on-demand service with different local caching schemes for video streaming.
- Given a cost function, one can determine which videos, and how much video data, should be cached.
- The more skewed the video popularity, the more saving a distributed servers architecture can achieve.