Design and Implementation of a Caching System for Streaming Media over the Internet
Ethendranath Bommaiah, Katherine Guo, Markus Hofmann, and Sanjoy Paul
IEEE Real-Time Technology and Applications Symposium, 2000
Outline
- Focus on the design and implementation issues
- Protocols
  - RTSP as the control protocol
  - RTP as the data protocol
- Performance
  - Network load
  - Server load
  - Client start-up latency
Application-layer-aware helper in the network
The streaming cache design
- Helper: a caching and data-forwarding proxy
- Each client is associated with one helper.
- Client requests are redirected to the client's helper.
- The helper serves the request itself if possible; otherwise it forwards the request to the most appropriate helper or to the server.
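The forwarding rule above can be pictured with a minimal sketch, assuming each helper knows which objects its neighbors hold and a rough distance to each of them; the Helper class and route_request method are hypothetical names, not the paper's code.

```python
# Hypothetical sketch of a helper's request-routing decision; class and
# method names are assumptions, not the paper's implementation.

class Helper:
    def __init__(self, name):
        self.name = name
        self.cache = set()       # URLs (or object segments) cached locally
        self.neighbors = []      # list of (neighbor_helper, estimated_distance)

    def route_request(self, url, origin_server):
        """Serve locally if possible; otherwise forward to the closest
        neighboring helper that holds the object; otherwise go to the server."""
        if url in self.cache:
            return self
        candidates = [(dist, h) for h, dist in self.neighbors if url in h.cache]
        if candidates:
            return min(candidates, key=lambda c: c[0])[1]
        return origin_server
```

A client's redirected request would be handed to route_request(), and the returned node is where the helper pulls the stream from.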
The streaming cache design (Cont'd)
- Segmentation of streaming objects
- Client request aggregation (sketched below)
  - Temporal distance
  - Ring buffer
- Data transfer rate control
  - Reduce start-up latency
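One way to picture request aggregation by temporal distance is the check below, a sketch under the assumption that each ring buffer retains a fixed span of the stream; the RING_BUFFER_SPAN value and all names are made up for illustration.

```python
# Illustrative check for aggregating a new client onto an ongoing stream;
# the threshold and all names are assumptions for illustration.

RING_BUFFER_SPAN = 10.0   # seconds of data the ring buffer retains (assumed)

class ActiveStream:
    def __init__(self, url, start_time):
        self.url = url
        self.start_time = start_time   # time the first client's playback began

def can_aggregate(active, url, now):
    """A new request can share the existing ring buffer only if it asks for the
    same object and its temporal distance from the ongoing stream fits the buffer."""
    return url == active.url and (now - active.start_time) <= RING_BUFFER_SPAN
```

If the check fails, the helper allocates a new buffer for the request, as noted on the implementation slide.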
Startup latency
- Without a helper: L0 = 2(d1 + d2) + K
- With a helper: L1 = d2 + max(K1·r/b, 2d) + d2 + (K - K1)·r/min(a, b)
- The client does not start playing until its playout buffer is filled.
Startup latency (Cont'd)
- With a helper:
  - Deliver K1 seconds of data to the client.
  - Request the remaining K - K1 seconds of data from either its local disk, another helper, or the server.
Start-up latency when getting data from different sources
- Data already available at the helper, so d = 0 and a ≥ b:
  L1 = d2 + K1·r/b + d2 + (K - K1)·r/b = 2·d2 + K·r/b
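As a worked illustration of the two latency formulas above, here is a small Python sketch that evaluates L0 and L1. The symbol interpretation is my assumption based on the structure of the formulas: d1 = helper-to-server delay, d2 = client-to-helper delay, d = helper-to-source delay, K = playout buffer size in seconds, K1 = seconds of the object held at the helper, r = playback rate, a = source-to-helper bandwidth, b = helper-to-client bandwidth. The numbers in the usage example are made up.

```python
# Sketch of the start-up latency formulas from the slides.
# Symbol meanings and example numbers are assumptions for illustration only.

def latency_without_helper(d1, d2, K):
    # L0 = 2(d1 + d2) + K : round trip to the server plus filling
    # K seconds of playout buffer at the playback rate.
    return 2 * (d1 + d2) + K

def latency_with_helper(d2, d, K, K1, r, a, b):
    # L1 = d2 + max(K1*r/b, 2d) + d2 + (K - K1)*r/min(a, b)
    # The request reaches the helper (d2), the K1 seconds held locally are
    # pushed at bandwidth b while the remainder is requested (round trip 2d),
    # then the remaining K - K1 seconds arrive at the bottleneck rate min(a, b).
    return d2 + max(K1 * r / b, 2 * d) + d2 + (K - K1) * r / min(a, b)

if __name__ == "__main__":
    # Made-up example: 50 ms helper-server delay, 5 ms client-helper delay,
    # K = 5 s buffer, 2 s cached prefix, 300 kbps stream, 1.5/3 Mbps links.
    print(latency_without_helper(d1=0.05, d2=0.005, K=5.0))
    print(latency_with_helper(d2=0.005, d=0.05, K=5.0, K1=2.0,
                              r=300e3, a=1.5e6, b=3e6))
```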
Main module of a helper
Implementation
- RTSP/RTP client and server
- Buffer management
  - Attach a new request to an existing buffer
  - Allocate a new buffer
- Cache management
  - Map URLs to local filenames
  - Manage the disk space allocated for caching
- Scheduler
  - Manage the global queue of events
  - Producer and consumer events
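As a rough sketch of the scheduler described above, assuming a single global priority queue keyed by firing time; the class and method names are mine, not the paper's.

```python
# Minimal sketch of a global event queue ordered by firing time.
# Event kinds (a producer pulls data from upstream, a consumer pushes data
# to a client) follow the slide's description; all names here are assumed.
import heapq
import itertools

class Scheduler:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()   # tie-breaker for equal times

    def schedule(self, fire_at, callback):
        heapq.heappush(self._queue, (fire_at, next(self._counter), callback))

    def run_until(self, now):
        """Fire every event whose time has come, in timestamp order."""
        while self._queue and self._queue[0][0] <= now:
            _, _, callback = heapq.heappop(self._queue)
            callback()

# Usage: producer events read the next packet from the server or disk into a
# ring buffer; consumer events send the next packet of a buffer to one client.
```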
Buffer organization
Buffer management
- Modeled by producer and consumer events
- Garbage collection
  - The buffer's temporal distance is statically chosen, but the number of packets within the ring may vary.
  - Solution: associate a reference count with each RTP packet and use a garbage-collection event to free packets after they have been forwarded by the last consumer.
- Outgoing stream composition
  - RTP SSRC (synchronization source identifier)
  - Timestamp
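A minimal sketch of the reference-counting scheme for garbage collection, assuming one reference per attached consumer and in-order forwarding; all names are illustrative.

```python
# Sketch of reference-counted packets in a ring buffer; a packet is freed
# only after every attached consumer has forwarded it. Names are assumed.
from collections import deque

class Packet:
    def __init__(self, seq, payload, refcount):
        self.seq = seq
        self.payload = payload
        self.refcount = refcount      # one reference per attached consumer

class RingBuffer:
    def __init__(self, num_consumers):
        self.num_consumers = num_consumers
        self.packets = deque()

    def produce(self, seq, payload):
        self.packets.append(Packet(seq, payload, self.num_consumers))

    def consume(self, packet):
        """Called after a consumer has forwarded `packet` to its client."""
        packet.refcount -= 1

    def garbage_collect(self):
        """Free packets from the tail once the last consumer is done with them."""
        while self.packets and self.packets[0].refcount == 0:
            self.packets.popleft()
```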
Timestamp translation
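Timestamp translation is only named on this slide, so the following is a guess at its shape: when the helper splices data from different sources (its cache, another helper, the server) into one outgoing stream, it presents a single SSRC and keeps sequence numbers and timestamps continuous. All names and the dict-based packet representation are assumptions.

```python
# Simplified sketch of rewriting SSRC, sequence number, and timestamp when
# splicing packets from different sources into one outgoing RTP stream.
# Field names mirror RTP header fields; everything else is assumed.

class OutgoingStream:
    def __init__(self, ssrc, first_seq, first_ts):
        self.ssrc = ssrc              # single SSRC presented to the client
        self.next_seq = first_seq
        self.next_ts = first_ts
        self.ts_offset = 0            # per-source timestamp offset

    def new_source(self, source_first_ts):
        """Recompute the offset when the data source changes (cache/helper/server)."""
        self.ts_offset = self.next_ts - source_first_ts

    def translate(self, pkt):
        """Rewrite a packet from the current source for the outgoing stream."""
        pkt["ssrc"] = self.ssrc
        pkt["seq"] = self.next_seq
        pkt["timestamp"] = pkt["timestamp"] + self.ts_offset
        self.next_seq = (self.next_seq + 1) % 65536
        self.next_ts = pkt["timestamp"]
        return pkt
```

The helper would call new_source() whenever the feed switches (for example, from the locally cached prefix to data arriving from the server) and then translate() on each outgoing packet.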
Experimental results
- Server: RealServer on a Sun Ultra-4 workstation with 4 processors and 1GB main memory
- Helper: 400MHz Pentium II with 250MB main memory
- Client: 300MHz Pentium Pro with 250MB main memory
- Network: 10Mbps Ethernet
Traffic reduction ratio
- R = (Dout - Din) / Dout
- Dout: data transferred from the helper to the client
- Din: data transferred from the server to the helper
- A larger value of R indicates a larger reduction in server load and network traffic.
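A quick worked example of R with made-up numbers:

```python
# Worked example of the traffic reduction ratio R = (D_out - D_in) / D_out.
# The byte counts below are made up purely for illustration.

def traffic_reduction_ratio(d_out_bytes, d_in_bytes):
    return (d_out_bytes - d_in_bytes) / d_out_bytes

# 10 clients watch the same 30 MB clip: the helper sends 300 MB downstream
# but fetches the clip from the server only once.
print(traffic_reduction_ratio(300e6, 30e6))   # 0.9 -> 90% of server traffic avoided
```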
Prefix caching benefits (no cache replacement)
Buffer request aggregation benefits
Improvement in startup latency (K = 5 sec)