1
Periodic Broadcast and Patching Services - Implementation, Measurement, and Analysis in an Internet Streaming Video Testbed Michael K. Bradshaw, Bing Wang, Subhabrata Sen, Lixin Gao, Jim Kurose, Prashant Shenoy, and Don Towsley ACM Multimedia 2001
2
Introduction Multimedia streaming places significant load on both server and network resources. Multicast-based approaches: Batching, Periodic Broadcast, Patching. Issues: control/signaling overhead, the interaction between disk and CPU scheduling, and multicast join/leave times.
3
Batching The server batches requests that arrive close together in time and multicasts the stream to the set of batched clients. A drawback is that client playback latency increases with an increasing amount of client request aggregation.
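As a rough illustration of the batching idea, the sketch below groups requests that arrive within one aggregation window onto a single multicast stream. This is a hedged sketch, not the paper's implementation; the 5-second window and the helper name are assumptions chosen only for illustration.

```python
# Illustrative sketch of batching: requests that arrive within one aggregation
# window are served by a single multicast stream. The window length and function
# name are assumptions, not values from the paper.

BATCH_WINDOW = 5.0  # seconds (hypothetical aggregation window)

def batch_requests(request_times, window=BATCH_WINDOW):
    """Group request arrival times into batches; each batch gets one multicast stream.

    Returns a list of (stream_start_time, batched_request_times) pairs. The earliest
    client in a batch waits up to `window` seconds -- the latency drawback noted above.
    """
    batches, current = [], []
    for t in sorted(request_times):
        if current and t - current[0] > window:
            batches.append((current[0] + window, current))
            current = []
        current.append(t)
    if current:
        batches.append((current[0] + window, current))
    return batches

# Three close-together requests share one stream; the late fourth request gets its own.
print(batch_requests([0.0, 1.2, 4.8, 20.0]))
# -> [(5.0, [0.0, 1.2, 4.8]), (25.0, [20.0])]
```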
4
Periodic Broadcast The server divides a video object into multiple segments and continuously broadcasts the segments over a set of multicast addresses. Earlier portions are broadcast more frequently than later ones to limit playback startup latency. Clients listen to multiple addresses simultaneously, storing future segments for later playback.
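To make the repetition pattern concrete, here is a small sketch (not the testbed code) that lists when each segment's transmission restarts on its own multicast address; the segment lengths used are those of the 30-GDB scheme from the Testbed (3) slide below.

```python
# Sketch of the periodic-broadcast repetition pattern: each segment loops
# continuously on its own multicast address, so earlier (shorter) segments
# restart far more often than later ones, bounding playback startup latency.

def broadcast_starts(segment_lengths, horizon):
    """For each segment, return the times in [0, horizon) at which a fresh
    transmission of that segment begins (segment i restarts every
    segment_lengths[i] seconds on its own address)."""
    schedule = []
    for length in segment_lengths:
        starts, t = [], 0.0
        while t < horizon:
            starts.append(t)
            t += length
        schedule.append(starts)
    return schedule

# 30-GDB segment lengths (seconds) for the 15-min Blade2 video, from the testbed slides.
for i, starts in enumerate(broadcast_starts([30, 60, 120, 240, 450.9], 480), start=1):
    print(f"segment {i} restarts at: {starts}")
```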
5
Patching (stream tapping) The server streams the entire video sequentially to the very first client. Client-side workahead buffering allows a later-arriving client to receive its future playback data by listening to an existing, ongoing transmission of the same video. The server need only additionally transmit the earlier frames that the later-arriving client missed.
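A minimal sketch of the data plan for a late-arriving client, following the description above; the function name and return format are illustrative, not the paper's API.

```python
# Sketch of patching: a client arriving `arrival_offset` seconds after an ongoing
# full transmission started taps that stream for the remainder of the video and
# needs only the first `arrival_offset` seconds as a separately transmitted patch.

def plan_patch(arrival_offset, video_length):
    """Return (patch_range, shared_range) in video-time seconds.

    patch_range: frames transmitted additionally for this late client.
    shared_range: frames received by listening to the existing ongoing stream,
                  buffered ahead of playback (client-side workahead buffering).
    """
    patch_range = (0.0, arrival_offset)
    shared_range = (arrival_offset, video_length)
    return patch_range, shared_range

# A client arriving 40 s into the 15-min (900 s) video needs only a 40 s patch.
print(plan_patch(40.0, 900.0))   # -> ((0.0, 40.0), (40.0, 900.0))
```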
6
Server and Client Architecture
7
Server Architecture Server Control Engine (SCE): one listener thread, a pool of free scheduler threads, one transmission schedule per video. Server Data Engine (SDE): a global buffer cache manager, a disk thread (DT) with round length δ, and a network thread (NT) with round length τ.
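The thread organization can be sketched as below. This is a structural sketch only: thread bodies, the queue mechanics, and the pool size are assumptions, not the testbed's code; the 33 ms and 1 s round lengths match those reported on the Component Benchmarks slide.

```python
# Structural sketch of the server threads named above (SCE listener + scheduler
# pool, SDE disk thread and network thread running in fixed-length rounds).
# Thread bodies are placeholders, not the testbed's implementation.
import queue
import threading
import time
from concurrent.futures import ThreadPoolExecutor

TAU = 0.033    # network-thread (NT) round length tau; 33 ms as on the benchmark slide
DELTA = 1.0    # disk-thread (DT) round length delta; 1 s as on the benchmark slide

request_q = queue.Queue()
scheduler_pool = ThreadPoolExecutor(max_workers=4)   # "pool of free scheduler threads"

def handle_request(req):
    """Scheduler thread: update the per-video transmission schedule for this request."""
    pass  # placeholder

def listener():
    """SCE listener thread: accept client requests and hand each to a free scheduler."""
    while True:
        req = request_q.get()
        scheduler_pool.submit(handle_request, req)

def disk_thread():
    """SDE disk thread: each round of length delta, read the data that the
    transmission schedule needs soon into the global buffer cache."""
    while True:
        time.sleep(DELTA)   # placeholder for one disk round

def network_thread():
    """SDE network thread: each round of length tau, transmit buffered data
    on the videos' multicast addresses."""
    while True:
        time.sleep(TAU)     # placeholder for one network round

for fn in (listener, disk_thread, network_thread):
    threading.Thread(target=fn, daemon=True).start()
```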
8
Schedule Data Structure
9
Signaling between Server and Client
10
Testbed (1) 100 Mbps switched Ethernet LAN. Three machines (server, workload generator, and client), each with a Pentium-II 400 MHz CPU and 400 MB RAM, running Linux. The workload generator issues background client requests according to a Poisson process and logs timing information for each request that is served.
11
Testbed (2) Periodic broadcast: L. Gao, J. Kurose, and D. Towsley, "Efficient schemes for broadcasting popular videos" (Greedy Disk-conserving Broadcasting, GDB, segmentation scheme). l-GDB: the initial segment is l seconds long; subsequent segment i has length 2^(i-1) · l, for 1 < i ≤ ⌈log₂ L⌉.
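A small sketch of this segmentation rule, assuming (as the segment-length table on the next slide suggests) that the final segment is truncated so the lengths sum to the video length.

```python
# Sketch of l-GDB segment sizing: segment 1 is l seconds and segment i is
# 2**(i-1) * l seconds, with the last segment truncated at the end of the video
# (the truncation is inferred from the segment-length table on the next slide).

def gdb_segments(l, video_length):
    segments, i, remaining = [], 1, video_length
    while remaining > 0:
        nominal = l * 2 ** (i - 1)
        segments.append(min(nominal, remaining))
        remaining -= nominal
        i += 1
    return segments

# For the 15-min (~900 s) Blade2 video this reproduces the segment counts on the
# next slide (9, 7, and 5 segments); the final lengths differ slightly because the
# actual video length is not exactly 900 s.
for l in (3, 10, 30):
    print(f"{l}-GDB:", gdb_segments(l, 900))
```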
12
Testbed (3) Sample videos for the experiments:

Video  | Format | Length (min) | Frame rate (fps) | Bandwidth (Mbps) | File size (MB) | # of RTP pkts
Blade1 | MPEG-1 | 12           | 30               | 1.99             | 180.1          | 155146
Blade2 | MPEG-1 | 15           | 30               | 3                | 337            | 296706
Demo   | MPEG-2 | 2.7          | 30               | 2                | 40.6           | 35138

Segmentation of the 3 Mbps, 15 min MPEG-1 Blade2 video:

Scheme | Segs. | Segment lengths (sec)
3-GDB  | 9     | 3, 6, 12, 24, 48, 96, 192, 384, 134.5 (768)
10-GDB | 7     | 10, 20, 40, 80, 160, 320, 270.9 (640)
30-GDB | 5     | 30, 60, 120, 240, 450.9 (480)
13
Testbed (4) Patching algorithm: L. Gao and D. Towsley, "Supplying instantaneous video-on-demand services using controlled multicast" (Threshold-based Controlled Multicast scheme). When client arrivals for a video are Poisson with rate λ and the video is L seconds long, the threshold is chosen to be (sqrt(2Lλ + 1) − 1) / λ seconds.
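Evaluating the quoted threshold for the testbed's 15-min (L = 900 s) video at the request rates used in the later end-to-end experiments; the conversion of per-minute rates to requests per second is the only step added here.

```python
# Threshold of the Controlled Multicast (patching) scheme quoted above:
# T = (sqrt(2*L*lam + 1) - 1) / lam, with L in seconds and lam in requests/sec.
from math import sqrt

def patching_threshold(L, lam):
    """Seconds after a full multicast stream starts during which later arrivals
    are patched onto it; beyond the threshold a new full stream is started."""
    return (sqrt(2 * L * lam + 1) - 1) / lam

L = 900.0  # 15-min Blade2 video
for per_minute in (1, 5):          # request rates from the end-to-end experiments
    lam = per_minute / 60.0        # requests per second
    print(f"{per_minute} req/min -> threshold ~{patching_threshold(L, lam):.0f} s")
```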
14
Performance Metrics Server side: System Read Load (SRL), Server Network Throughput (SNT), Deadline Conformance Percentage (DCP). Client side: Client Frame Interarrival Time (CFIT), Reception Schedule Latency (RSL).
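As its name suggests, CFIT is derived from consecutive frame arrival times at the client; a trivial sketch follows (the timestamps are made up for illustration).

```python
# Client Frame Interarrival Time (CFIT): gaps between consecutive frame arrivals
# at the client; the paper reports these as histograms.

def cfit(arrival_times):
    return [b - a for a, b in zip(arrival_times, arrival_times[1:])]

# For a 30 fps stream, interarrivals near 33 ms indicate smooth reception;
# the larger last gap here stands in for a delayed frame.
print(cfit([0.000, 0.033, 0.067, 0.133]))   # -> approximately [0.033, 0.034, 0.066]
```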
15
Caching Implications (1): PB
16
Caching Implications (2): Patching
17
Caching Implications (3) SRL for patching and 10-GDB with LFU caching
18
Component Benchmarks

Configuration | # Videos | # Addresses per Video | Bandwidth per Video | NT Completion Time | DT Completion Time
I             | 3        | 8                     | 16 Mbits            | 1.60 ms / 33 ms    | 6.16 ms / 1 sec
II            | 12       | 4                     | 48 Mbits            | 5.08 ms / 33 ms    | 8.39 ms / 1 sec
19
End-End Performance (1) PB: Client Frame Interarrival Time (CFIT) histogram under 3-GDB, 10-GDB, and 30-GDB at 600 requests per minute.
20
End-End Performance (2) Patching:

Request Rate | Network Load      | CFIT               | DCP
1 per minute | 20.85 Mbps        | Similar to 30-GDB  | 99.9%
5 per minute | 55.27 Mbps        | Similar to 30-GDB  | 99.9%
Higher rates | Bottleneck occurs | --                 | --
21
Scheduling Among Videos
22
Conclusions Network bandwidth, rather than server resources, is likely to be the bottleneck: PB handles 600 requests per minute, and patching fully loads a 100 Mbps network. An initial client startup delay of less than 1.5 sec is sufficient to handle startup signaling and absorb data jitter. Dramatic reductions in the system read load can be gained via application-level data caching using an LFU replacement policy.