Video Streaming
Ali Saman Tosun, Computer Science Department
Broadcast to True Media-on-Demand
- Broadcast (No-VoD): traditional, no control
- Pay-per-view (PPV): paid, specialized service
- Near Video-on-Demand (N-VoD): same media distributed at regular time intervals; simulated forward/backward
- True Video-on-Demand (T-VoD): full control over the presentation, VCR capabilities; bi-directional connection
Streaming Stored Video
- Media stored at source, transmitted to client
- Streaming: client playout begins before all data has arrived
- Timing constraint for still-to-be-transmitted data: must arrive in time for playout
Streaming Video
- Constant-bit-rate video transmission, variable network delay, then constant-bit-rate playout at the client after a playout delay
- Client-side buffering and playout delay compensate for network-added delay and delay jitter
(Figure: cumulative data vs. time for transmission, reception, and playout, with buffered video in between.)
Smoothing Stored Video
For prerecorded video streams:
- All video frames stored in advance at the server
- Prior knowledge of all frame sizes (f_i, i = 1, 2, ..., n)
- Prior knowledge of client buffer size (b)
- Workahead transmission into the client buffer (b bytes)
Smoothing Constraints
Given frame sizes {f_i} and buffer size b:
- Buffer underflow constraint: L_k = f_1 + f_2 + ... + f_k
- Buffer overflow constraint: U_k = min(L_k + b, L_n)
- Find a schedule S_k between the constraints
- Algorithm minimizes the peak rate and the number of rate changes
(Figure: number of bytes vs. time in frames, with curves U, S, L.)
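The two constraint curves above are easy to compute from the frame-size trace. A minimal sketch (helper names are my own, not from the slides) that builds the underflow and overflow curves and checks whether a cumulative transmission schedule lies between them:

```python
def smoothing_bounds(frame_sizes, b):
    """L[k]: cumulative bytes the client has consumed through frame k (underflow bound).
    U[k] = min(L[k] + b, L[n]): most the client buffer can hold (overflow bound)."""
    L, total = [], 0
    for f in frame_sizes:
        total += f
        L.append(total)
    U = [min(lk + b, L[-1]) for lk in L]
    return L, U

def is_feasible(schedule, frame_sizes, b):
    """A cumulative schedule S is feasible iff L[k] <= S[k] <= U[k] for all k."""
    L, U = smoothing_bounds(frame_sizes, b)
    return all(lk <= sk <= uk for lk, sk, uk in zip(L, schedule, U))
```

For example, with frames of 5, 1, 9, 2 bytes and b = 6, the near-constant-rate schedule 5, 10, 15, 17 stays between L and U, so it is feasible.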
Proxy-based Video Distribution
- Proxy adapts video
- Proxy caches video
(Figure: server, proxy, and clients.)
Proxy Operations
- Frame dropping: drop B and P frames if there is not enough bandwidth
- Quality adaptation / transcoding: change the quantization value; most current systems don't support this
- Video staging, caching, patching:
  - Staging: store partial frames in the proxy
  - Prefix caching: store the first few minutes of a movie
  - Patching: multiple users share the same video stream
Online Smoothing
Source or proxy can delay the stream by w time units. A larger window w reduces burstiness, but means:
- Larger buffer at the source/proxy
- Larger processing load to compute the schedule
- Larger playback delay at the client
Online Smoothing Model
- Arrival of A_i bits to the proxy by time i (in frames)
- Smoothing buffer of B bits at the proxy
- Smoothing window (playout delay) of w frames
- Transmission of S_i bits by the proxy by time i
- Playout buffer of b bits at the client
- Playout of D_{i-w} bits by the client by time i
Online Smoothing
max{D_{i-w}, A_i - B} <= S_i <= min{D_{i-w} + b, A_i}
- Must send enough to avoid underflow at the client: S_i must be at least D_{i-w}
- Cannot send more than the client can store: S_i must be at most D_{i-w} + b
- Cannot send more than the data that has arrived: S_i must be at most A_i
- Must send enough to avoid overflow at the proxy: S_i must be at least A_i - B
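The four conditions collapse into one feasible window per time step. A small sketch (function name is my own) that evaluates it and reports infeasibility:

```python
def online_send_window(A_i, D_prev_w, B, b):
    """Feasible range for cumulative bytes S_i sent by the proxy at time i:
    max(D_{i-w}, A_i - B) <= S_i <= min(D_{i-w} + b, A_i).
    Returns (lo, hi), or None when the constraints conflict (no feasible S_i)."""
    lo = max(D_prev_w, A_i - B)   # client underflow / proxy overflow
    hi = min(D_prev_w + b, A_i)   # client overflow / data availability
    return (lo, hi) if lo <= hi else None
```

For instance, with A_i = 100, D_{i-w} = 40, B = 80, b = 30, the proxy may have sent anywhere between 40 and 70 bytes; if arrivals outpace what the proxy buffer and client buffer can absorb, the function returns None.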
Online Smoothing Constraints
- Source/proxy has w frames ahead of the current time t, but does not know the future beyond t + w - 1
- Modified smoothing constraints as more frames arrive
(Figure: number of bytes vs. time in frames; U and L known only up to t + w - 1.)
Smoothing Star Wars
- MPEG-1 Star Wars trace, 12-frame group-of-pictures
- Max frame 23,160 bytes; mean frame 1,950 bytes
- Client buffer b = 512 kbytes
(Figure: GOP averages vs. smoothed schedules for 2-second and 30-second windows.)
Prefix Caching to Avoid Start-Up Delay
- Avoid start-up delay for prerecorded streams:
  - Proxy caches the initial part of popular video streams
  - Proxy starts satisfying the client request more quickly
  - Proxy requests the remainder of the stream from the server
  - Smooth over a large window without a large delay
- Use prefix caching to hide other Internet delays:
  - TCP connection from browser to server
  - TCP connection from player to server
  - Dejitter buffer at the client to tolerate jitter
  - Retransmission of lost packets
- Applies to "point-and-click" Web video streams
Changes to Smoothing Model
- Separate parameter s for client start-up delay
- Prefix cache stores the first w - s frames
- Arrival vector A_i includes cached frames
- Prefix buffer does not empty after transmission
- Send the entire prefix before the prefix buffer overflows
- Frame sizes may be known in advance (cached)
(Figure: arrivals A_i, transmissions S_i, playout D_{i-s}, with buffers b_s, b_c, b_p.)
Scalable Coding
- Goal: best possible quality at the achievable sending rate
- Typically used as layered coding:
  - A base layer: provides basic quality; must always be transferred
  - One or more enhancement layers: improve quality; transferred if possible
(Figure: quality vs. sending rate for base and enhancement layers.)
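The sender-side decision is a simple greedy choice: always ship the base layer, then add enhancement layers while the rate budget holds. A sketch under that assumption (function and parameter names are my own):

```python
def layers_to_send(layer_rates, available_rate):
    """Greedily pick the base layer plus as many enhancement layers as fit.
    layer_rates[0] is the mandatory base layer; returns the number of layers sent."""
    if available_rate < layer_rates[0]:
        return 0                      # cannot even deliver basic quality
    used, count = 0, 0
    for r in layer_rates:             # layers in dependency order
        if used + r > available_rate:
            break
        used += r
        count += 1
    return count
```

With layers of 300, 200, and 200 kbit/s and 550 kbit/s available, the base plus one enhancement layer are sent.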
Temporal Scalability
- Frames can be dropped in a controlled manner
- Frame dropping does not violate dependencies
- Low-gain example: B-frame dropping in MPEG-1
Spatial Scalability
- Base layer: downsample the original image; send it as a lower-resolution version (less data to code)
- Enhancement layer: subtract the base-layer pixels from the original pixels; send the residual as a normal-resolution version (better compression due to low values)
- If the enhancement layer arrives at the client: decode both layers and add them
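The downsample/residual split can be sketched in one dimension (the image case works the same per row/column). This is a minimal illustration with my own helper names, assuming 2x averaging for downsampling and nearest-neighbour upsampling:

```python
def encode_spatial(pixels):
    """Base layer: 2x downsample by averaging (assumes an even-length row).
    Enhancement layer: residual against the upsampled base; the small residual
    values are what compress well."""
    base = [(pixels[i] + pixels[i + 1]) // 2 for i in range(0, len(pixels), 2)]
    up = [v for v in base for _ in range(2)]   # nearest-neighbour upsample
    enh = [p - u for p, u in zip(pixels, up)]
    return base, enh

def decode_spatial(base, enh=None):
    """Base-only decode gives the low-resolution version; adding the
    enhancement residual restores the original pixels exactly."""
    up = [v for v in base for _ in range(2)]
    if enh is None:
        return up
    return [u + e for u, e in zip(up, enh)]
```

Note that because the residual is taken against the actual upsampled base, the two-layer reconstruction is lossless even though the base layer rounds.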
SNR Scalability
- SNR: signal-to-noise ratio
- Base layer: regularly DCT encoded; a lot of data is removed by quantization
- Enhancement layer: run the inverse DCT on the quantized base layer, subtract the result from the original, and DCT encode the residual
- If the enhancement layer arrives at the client: add the base and enhancement layers before running the inverse DCT
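Because the DCT is linear, the enhancement coefficients are exactly the quantization error of the base layer, so adding the two coefficient sets before the inverse DCT recovers the original. A self-contained sketch (orthonormal 1-D DCT written out by the textbook formula; function names and the scalar quantizer are my own):

```python
import math

def dct(x):   # orthonormal DCT-II
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
            * math.sqrt((1 if k == 0 else 2) / n)
            for k in range(n)]

def idct(X):  # orthonormal DCT-III, the inverse of dct() above
    n = len(X)
    return [sum(X[k] * math.sqrt((1 if k == 0 else 2) / n)
                * math.cos(math.pi * (i + 0.5) * k / n) for k in range(n))
            for i in range(n)]

def snr_encode(x, q):
    """Base: coarsely quantized DCT coefficients (step q).
    Enhancement: DCT of the residual between the original and the decoded base."""
    base = [round(c / q) for c in dct(x)]
    base_pixels = idct([c * q for c in base])
    enh = dct([a - b for a, b in zip(x, base_pixels)])
    return base, enh

def snr_decode(base, enh, q):
    """Client adds dequantized base and enhancement coefficients, then inverts."""
    return idct([c * q + e for c, e in zip(base, enh)])
```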
Multiple Description Coding
- Idea: encode data in two streams
  - Each stream alone has acceptable quality
  - Both streams combined have good quality
  - The redundancy between the two streams is low
- Problem: the same relevant information must exist in both streams
- Old problem: originated in audio coding for telephony; currently a hot topic
Delivery Systems Developments
- Several programs or timelines delivered over the network
- Saving network resources: stream scheduling
Patching
- The first client receives the movie as a multicast stream from the central server
- A later client joins the ongoing multicast and stores it in a cyclic buffer, while receiving the missed beginning as a unicast patch stream
- Server resource optimization is possible
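The server-side saving is easy to quantify: a client arriving t seconds after the multicast started only needs a t-second unicast patch instead of a full-length unicast stream. A sketch of that comparison (function name and the seconds-of-stream unit are my own simplification):

```python
def server_unicast_load(arrival_offsets, movie_len):
    """Unicast stream-seconds the server must send, without and with patching.
    Each offset is a client's arrival time after the multicast started."""
    without_patching = len(arrival_offsets) * movie_len  # full stream per client
    with_patching = sum(arrival_offsets)                 # only the missed prefix
    return without_patching, with_patching
```

For three clients arriving 10, 30, and 45 seconds into a 120-second movie, patching cuts the unicast load from 360 to 85 stream-seconds.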
Proxy Prefix Caching
- Split the movie into a prefix and a suffix
- Operation:
  - Store the prefix in the prefix cache (coordination necessary!)
  - On demand: deliver the prefix immediately (unicast to the client) and prefetch the suffix from the central server (unicast)
- Goal:
  - Reduce startup latency
  - Hide bandwidth limitations, delay and/or jitter in the backbone
  - Reduce load in the backbone
Interval Caching (IC)
- Caches data in the interval between two consecutive requests for the same clip
- Following requests are thus served from the cache
- Sort intervals by length; cache the shortest first
(Figure: intervals I11, I12, I21, I31, I32, I33 between request streams for three video clips.)
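The selection step can be sketched directly: each pair of consecutive requests for the same clip defines an interval whose length is the cache cost of serving the later request from memory, and the shortest intervals are admitted first. A minimal sketch (names and the seconds-as-bytes simplification are my own):

```python
def intervals_to_cache(request_times, cache_size):
    """request_times: {clip_id: sorted request start times}.
    Returns the (clip_id, interval_length) pairs chosen for the cache,
    shortest intervals first, within the cache budget."""
    intervals = []
    for clip, times in request_times.items():
        for a, b in zip(times, times[1:]):
            intervals.append((b - a, clip))   # consecutive-request gap
    intervals.sort()                          # shortest (cheapest) first
    cached, used = [], 0
    for length, clip in intervals:
        if used + length <= cache_size:
            cached.append((clip, length))
            used += length
    return cached
```

With requests at times 0, 5, 30 for clip 1 and 0, 12 for clip 2 and a budget of 20, the 5-second and 12-second intervals are cached while the 25-second one is not.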
Receiver-driven Layered Multicast (RLM)
- Requires:
  - IP multicast
  - Layered video codec (preferably exponential layer thickness)
- Operation:
  - Each video layer is one IP multicast group
  - Receivers join the base layer and extension layers
  - If they experience loss, they drop layers (leave IP multicast groups)
  - To add layers, they perform "join experiments"
- Advantages:
  - Receiver-only decision
  - Congestion affects only sub-tree quality
  - Multicast trees are pruned; sub-trees carry only necessary traffic