An Analysis of Chaining Protocols for Video-on-Demand. J.-F. Pâris, University of Houston; Thomas Schwarz, S. J., Universidad Católica del Uruguay
Introduction: Video-on-demand lets different customers watch different videos at different times. It has very high bandwidth requirements.
Solutions (I): Distributing the server workload among several sites (content-delivery networks, local caches, …), or letting the server broadcast the same video data to all customers watching the same video, which is not possible on today's Internet.
Solutions (II): Let customers participate in the video distribution (a P2P solution). The available distribution bandwidth grows linearly with the demand; it is cheap and easy to deploy. It requires everyone to cooperate, so selfish customers must be penalized.
Chaining: One of the oldest VOD solutions; it involves the clients in the video distribution process. S. Sheu, K. A. Hua, and W. Tavanapong, "Chaining: A Generalized Batching Technique for Video-on-Demand Systems," Proc. ICMCS, June 1997.
Assumptions: Customers have enough upstream bandwidth to forward the video to the next client. Customer buffer sizes do not allow them to store entire videos; they can only store the last β minutes of video, a reasonable assumption in 1997.
Basic chaining: Customer requests form a chain. The first customer in the chain receives its data from the server; subsequent customers receive their data from their immediate predecessor. The chain is broken each time two consecutive requests are more than β minutes apart.
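A minimal sketch of this admission rule, assuming a hypothetical Request record and choose_source helper (names are illustrative, not from the paper); times are in minutes:

# Sketch of the basic-chaining rule: a new request is served by its
# immediate predecessor if it arrived within the last beta minutes,
# otherwise the server starts a fresh stream (and a new chain).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    customer: str
    arrival: float          # arrival time in minutes

def choose_source(new: Request, predecessor: Optional[Request], beta: float) -> str:
    """Return 'predecessor' or 'server' for the new request."""
    if predecessor is not None and new.arrival - predecessor.arrival <= beta:
        return "predecessor"   # gap still fits in the predecessor's beta-minute buffer
    return "server"            # chain is broken: the server streams the whole video

# Example matching the slide that follows: beta = 10 min, arrivals at t = 0, 6, 25
a, b, c = Request("A", 0.0), Request("B", 6.0), Request("C", 25.0)
print(choose_source(b, a, beta=10))   # predecessor
print(choose_source(c, b, beta=10))   # server (gap of 19 minutes > beta)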
An example: (diagram) Customer A receives its stream from the server, Customer B receives its stream from Customer A, and Customer C, arriving too late, again receives its stream from the server.
Expanded chaining: Assumes that customers have enough buffer space to cache the whole contents of the video (which also helps with the rewind command) and that customers will disconnect once they have finished playing the video, a realistic assumption.
How it works: (diagram) The server streams the video to Customer A, who forwards it to Customer B, who forwards it to Customer C; a customer arriving after the chain has ended is again served by the server.
Server bandwidth requirements (2-hour video)
Accelerated chaining: Has clients forward their video data to the next client in the chain at a slightly higher rate than the video consumption rate. The acceleration factor f varies between 1.01 and 1.1.
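A small worked example with illustrative values, assuming the expanded-chaining rule that a customer stays connected until it finishes playing: with D = 120 min, f = 1.05 and an interarrival gap Δt = 6 min, the predecessor can forward
\[
\min\bigl(D,\; f\,(D - \Delta t)\bigr) \;=\; \min(120,\; 1.05 \times 114) \;=\; 119.7 \text{ min},
\]
so the server only supplies the remaining 0.3 min of video instead of the 6 min it would supply under expanded chaining.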
How it works: (diagram) The server streams the video to Customer A, who forwards it to Customer B at the accelerated rate; B forwards it to Customer C in the same way.
Server bandwidth requirements (2-hour video)
Motivation for further work: All previous results were obtained through discrete-event simulation and are therefore mere numerical values. Could we not use analytical methods instead? We would obtain algebraic solutions, from which we could derive maxima and minima.
Our assumptions: D is the video duration, β is the buffer size (in minutes of video), λ is the customer arrival rate, and f is the video acceleration factor. The time between arrivals is governed by the exponential distribution with probability density function p(t) = λ e^{-λt}.
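The only property of the exponential distribution needed in the derivations below is its tail probability:
\[
\Pr\{\Delta t > x\} \;=\; \int_x^{\infty} \lambda e^{-\lambda t}\,dt \;=\; e^{-\lambda x}.
\]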
Basic chaining (I): Two cases to consider. If the interarrival time is less than β, the previous customer forwards the video and there is no server workload. If the interarrival time is more than β, the server transmits the whole video.
Basic chaining (II): The average server workload per video is D e^{-λβ} and the average server bandwidth is λ D e^{-λβ}, expressed in multiples of the video consumption rate.
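A one-line derivation of both quantities, using the tail probability above (bandwidth in multiples of the video consumption rate):
\[
E[W_{\mathrm{basic}}] \;=\; D \cdot \Pr\{\Delta t > \beta\} \;=\; D\,e^{-\lambda\beta},
\qquad
B_{\mathrm{basic}} \;=\; \lambda\,E[W_{\mathrm{basic}}] \;=\; \lambda D\,e^{-\lambda\beta}.
\]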
Expanded chaining (I): Two cases to consider. If the interarrival time Δt is less than D, the previous customer forwards part of the video (D − Δt) and the server transmits the remaining part (Δt). If Δt is more than D, the server transmits the whole video.
Expanded chaining (II): (diagram) First case: Customer B receives most of the video from Customer A and only the first Δt minutes from the server. Second case: Customer C arrives more than D minutes after its predecessor (Δt > D) and receives the whole video from the server.
Expanded chaining (III): The average server workload per video is (1 − e^{-λD})/λ and the average server bandwidth is 1 − e^{-λD}, in multiples of the video consumption rate.
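The corresponding derivation, conditioning on the two cases of the previous slide:
\[
E[W_{\mathrm{exp}}] \;=\; \int_0^{D} t\,\lambda e^{-\lambda t}\,dt \;+\; D\,e^{-\lambda D}
\;=\; \frac{1 - e^{-\lambda D}}{\lambda},
\qquad
B_{\mathrm{exp}} \;=\; \lambda\,E[W_{\mathrm{exp}}] \;=\; 1 - e^{-\lambda D}.
\]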
Accelerated chaining (I): Two cases to consider. If the interarrival time Δt is less than D, the previous customer forwards part of the video, min(D, f(D − Δt)), and the server transmits the remaining part. If Δt is more than D, the server transmits the whole video.
Accelerated chaining (II): The result is a fairly complicated expression in ρ = 1/f.
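Since the closed-form expression is not reproduced here, the following Monte Carlo sketch estimates the accelerated-chaining server bandwidth directly from the per-request rule of the previous slide; it ignores any higher-order interactions along the chain, and the function and parameter names are illustrative, not from the paper.

# Monte Carlo estimate of the accelerated-chaining server bandwidth, using
# only the per-request rule from the slide: the predecessor forwards
# min(D, f*(D - dt)), the server supplies the rest, and the server sends
# the whole video when dt > D. Bandwidth is reported in multiples of the
# video consumption rate.
import random

def accelerated_bandwidth(lam, D, f, n=200_000, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        dt = rng.expovariate(lam)           # exponential interarrival time
        if dt > D:
            total += D                      # chain broken: whole video from server
        else:
            forwarded = min(D, f * (D - dt))
            total += D - forwarded          # server fills in what the chain cannot
    return lam * total / n                  # lambda * average workload per request

if __name__ == "__main__":
    # 2-hour video, 10 arrivals per hour, 5 percent acceleration
    print(accelerated_bandwidth(lam=10/60, D=120, f=1.05))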
Comparing analytical results with simulation results (I)
Comparing analytical results with simulation results (II)
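A comparison of this kind can be reproduced for the two closed forms derived earlier with a short script; this is only a sketch with illustrative parameter values, not the authors' simulator.

# Compare the analytical bandwidth formulas for basic and expanded chaining
# with straightforward Monte Carlo estimates over exponential interarrivals.
# Bandwidth is in multiples of the video consumption rate.
import math
import random

def simulate(workload_per_request, lam, n=200_000, seed=1):
    rng = random.Random(seed)
    total = sum(workload_per_request(rng.expovariate(lam)) for _ in range(n))
    return lam * total / n

def compare(lam, D, beta):
    basic_sim = simulate(lambda dt: D if dt > beta else 0.0, lam)
    basic_ana = lam * D * math.exp(-lam * beta)
    expanded_sim = simulate(lambda dt: D if dt > D else dt, lam)
    expanded_ana = 1.0 - math.exp(-lam * D)
    print(f"basic:    simulation {basic_sim:.4f}  analytical {basic_ana:.4f}")
    print(f"expanded: simulation {expanded_sim:.4f}  analytical {expanded_ana:.4f}")

if __name__ == "__main__":
    compare(lam=10/60, D=120, beta=10)   # 10 arrivals/hour, 2-hour video, 10-min buffer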
Conclusion: Very good agreement between analytical and simulation results; the two techniques validate each other. Analytical results provide a better investigation tool than simulation results, since we can compute bandwidth maxima, …
Future work: Add an incentive mechanism to penalize freeloaders and investigate how that mechanism interacts with the protocol; implement fast forward/jump; develop a test-bed implementation.
Handling early termination: Original schedule. (diagram) The server streams to Customer A, who forwards to Customer B, who forwards to Customer C.
Handling early termination: After Customer B leaves. (diagram) Customer A takes over the stream that Customer C was receiving from B; the portion B had already played is no longer needed.