1
Cost-Effective Video Streaming Techniques
Kien A. Hua, School of EE & Computer Science, University of Central Florida, Orlando, FL 32816-2362, U.S.A.
2
Server Channels
Videos are delivered to clients as continuous streams. Server bandwidth determines the number of video streams that can be supported simultaneously. Server bandwidth can be organized and managed as a collection of logical channels, which can be scheduled to deliver the various videos.
3
Using Dedicated Channels
(Diagram: the video server delivers a dedicated stream to each client.) Too expensive!
4
Batching
- FCFS (First Come First Served)
- MQL (Maximum Queue Length First)
- MFQ (Maximum Factored Queue Length)
Can multicast provide true VoD?
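The batching policies above differ only in how the server picks which waiting queue to serve when a channel frees up. A minimal sketch of the MFQ selection rule, assuming the commonly cited "factored queue length" of queue length divided by the square root of the video's relative access frequency (the exact weighting in the cited work may differ):

```python
import math

def select_queue_mfq(queues, access_freq):
    """Pick the video whose waiting queue has the largest factored
    queue length: len(queue) / sqrt(relative access frequency).
    Dividing by sqrt(freq) keeps popular videos from starving cold ones."""
    best, best_score = None, -1.0
    for video, waiting in queues.items():
        if not waiting:
            continue
        score = len(waiting) / math.sqrt(access_freq[video])
        if score > best_score:
            best, best_score = video, score
    return best

# Hypothetical queues: A is popular but B's queue is relatively long.
queues = {"A": ["c1", "c2", "c3"], "B": ["c4", "c5"], "C": []}
freq = {"A": 0.6, "B": 0.1, "C": 0.3}
# A: 3/sqrt(0.6) ≈ 3.87, B: 2/sqrt(0.1) ≈ 6.32, so B is served first
print(select_queue_mfq(queues, freq))
```

Plain MQL would pick A here (longest queue); the factoring is what lets the less popular video B get served.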
5
Challenges – conflicting goals
- Low Latency: requests must be served immediately
- Highly Efficient: each multicast must still be able to serve a large number of clients
6
Some Solutions
- Patching [Hua98]
- Range Multicast [Hua02]
7
Patching
(Diagram: a regular multicast delivering Video A.)
8
Proposed Technique: Patching
(Diagram: client B arrives t time units after the regular multicast of Video A began. A patching stream delivers the missed prefix to B's video player while B's buffer captures the regular multicast from the skew point.)
9
Proposed Technique: Patching
The skew point is absorbed by the client buffer. (Diagram: the same scenario at time 2t; client B now plays from its buffer of the regular multicast of Video A.)
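The two slides above can be summarized as a simple delivery plan: the short patching stream covers only the missed prefix, and the client buffers the ongoing regular multicast for the rest. A minimal sketch (the function name and time units are illustrative, not from the source):

```python
def patching_plan(skew, video_len):
    """A client arriving `skew` minutes after a regular multicast started
    joins it immediately: a patching stream delivers the missed first
    `skew` minutes for instant playback, while the client buffers the
    regular multicast and plays the remainder from that buffer."""
    patch = ("patching stream", 0, skew)              # video[0:skew], played now
    regular = ("regular multicast", skew, video_len)  # buffered, played after the patch
    return [patch, regular]

# A client 5 minutes late to a 90-minute video costs the server only
# 5 extra minutes of data instead of a whole new 90-minute stream.
for stream, start, end in patching_plan(skew=5, video_len=90):
    print(f"{stream}: minutes {start}-{end}")
```

This is why patching resolves the low-latency vs. efficiency conflict: service is immediate, yet the per-client server cost shrinks to the skew.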
10
Client Design
(Diagram: the video server sends a regular multicast and a patching multicast. Client A's data loader feeds only the regular stream Lr to its video player; clients B and C receive both Lr and Lp, playing the patching stream while buffering the regular stream.)
11
Server Design
The server must decide when to schedule a regular stream and when to schedule a patching stream. (Diagram: over time, streams A(r), B(p), C(p), D(p) form one multicast group and E(r), F(p), G(p) the next, where r marks a regular stream and p a patching stream.)
12
Two Simple Approaches
If no regular stream for the same video exists, a new regular stream is scheduled. Otherwise, two policies can be used to make the decision: Greedy Patching and Grace Patching.
13
Greedy Patching
A patching stream is always scheduled. (Diagram: clients A, B, C, D over time; each patching client shares the regular stream's data only up to its buffer size, while the regular stream spans the full video length.)
14
Grace Patching
If the client buffer is large enough to absorb the skew, a patching stream is scheduled; otherwise, a new regular stream is scheduled. (Diagram: clients A and B share data within the buffer size; client C, arriving too late, receives a new regular stream.)
15
Performance Study
- Compared with conventional batching; Maximum Factored Queue (MFQ) is used
- Performance metric is average service latency
16
Simulation Parameters
- Request rate (requests/min): default 50, range 10-90
- Client buffer (min of data): default 5, range 0-10
- Server bandwidth (streams): default 1,200, range 400-1,800
- Video length (minutes): 90
- Number of videos: 100
- Video access skew factor: 0.7
- Number of requests: 200,000
17
Effect of Server Bandwidth
18
Effect of Client Buffer
19
Effect of Request Rate
20
Optimal Patching
What is the optimal patching window? (Diagram: streams A(r), B(p), C(p), D(p) form one multicast group; the patching window opens at regular stream A and closes before the next regular stream E.)
21
Optimal Patching Window
D is the mean total amount of data transmitted by a multicast group. Minimize the server bandwidth requirement, D/W, over various values of the window W. (Diagram: regular stream A spans the full video length; patches are bounded by the buffer size within window W.)
22
Optimal Patching Window
- Compute D, the mean amount of data transmitted by each multicast group
- Determine Δ, the average time duration of a multicast group
- The server bandwidth requirement is D/Δ, which is a function of the patching period
- Find the patching period that minimizes the bandwidth requirement
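The steps above can be carried out numerically under one common simplified model (an assumption here, not taken from the slides): Poisson arrivals at rate λ, video length L, each arrival in the window w patched with a stream of mean length w/2, so D = L + λw·(w/2), and a group lasts on average Δ = w + 1/λ (the window plus the mean wait for the arrival that starts the next regular stream):

```python
def bandwidth_requirement(w, lam, L):
    """Mean data per multicast group divided by mean group duration,
    under the simplified Poisson model described in the lead-in."""
    D = L + lam * w * (w / 2.0)   # one regular stream + expected patch data
    delta = w + 1.0 / lam         # window + mean gap to next regular stream
    return D / delta

def optimal_window(lam, L, step=0.1):
    """Grid-search the window w that minimizes the bandwidth requirement."""
    candidates = [step * i for i in range(1, int(L / step))]
    return min(candidates, key=lambda w: bandwidth_requirement(w, lam, L))

lam, L = 1.0, 90.0   # e.g. 1 request/min for a 90-minute video
w_opt = optimal_window(lam, L)
print(f"optimal patching window ≈ {w_opt:.1f} min")
```

Setting the derivative of (L + λw²/2)/(w + 1/λ) to zero gives w* = (√(1 + 2λL) − 1)/λ, so for λ = 1 and L = 90 the optimum sits near 12.5 minutes, far shorter than the 90-minute video.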
23
Candidates for Optimal Patching Window
24
Piggybacking [Golubchik96]
Slow down an earlier service and speed up the new one (e.g. by -5% and +5% of the playback rate) to merge them into one stream. (Diagram: streams A, B, C with new arrivals and departures.)
- Limited stream sharing due to long catch-up delay
- Implementation is complicated
25
Concluding Remarks
- Unlike conventional multicast, requests can be served immediately under patching
- Patching makes multicast more efficient by dynamically expanding the multicast tree
- Patching streams usually deliver only the first few minutes of video data
- Patching is very simple and requires no specialized hardware
26
Patching on the Internet
Problem:
- The current Internet does not support multicast
A solution:
- Deploy an overlay of software routers on the Internet
- Implement multicast on this overlay using only IP unicast
27
Content Routing Each router forwards its Find messages to other routers in a round-robin manner.
28
Removal of An Overlay Node Inform the child nodes to reconnect to the grandparent
29
Failure of Parent Node
- Data stop arriving from the parent
- Reconnect to the server
30
Slow Incoming Stream Reconnect upward to the grandparent
31
Downward Reconnection
When a reconnection reaches the server, future reconnections of this link go downward. Downward reconnection is done through a sibling node selected in a round-robin manner. When a downward reconnection reaches a leaf node, future reconnections of this link go upward again.
32
Limitation of Patching The performance of Patching is limited by the server bandwidth. Can we scale the application beyond the physical limitation of the server ?
33
Chaining [Hua97]
- Uses a hierarchy of multicasts
- Clients multicast data to other clients downstream
- Demand on server bandwidth is substantially reduced
34
Chaining
- Highly scalable and efficient
- Implementation is complex
(Diagram: the video server streams to Client A, which caches data on its disk while displaying it and forwards the stream to Client B, which in turn forwards it to Client C.)
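The chaining idea above can be sketched as a grouping rule: a client arriving while the previous client's disk buffer still holds the start of the video extends the chain and is fed by that client; otherwise the server must open a new chain. A minimal sketch (the window parameter and names are illustrative assumptions):

```python
def chain_streams(arrivals, chain_window):
    """Group arrival times (sorted, in minutes) into chains. A client
    arriving within `chain_window` of the previous client is fed from
    that client's buffer; each chain costs exactly one server stream."""
    chains = []
    for t in arrivals:
        if chains and t - chains[-1][-1] <= chain_window:
            chains[-1].append(t)   # fed by the previous client, not the server
        else:
            chains.append([t])     # new chain: consumes one server stream
    return chains

arrivals = [0, 2, 3, 20, 21, 50]
chains = chain_streams(arrivals, chain_window=5)
print(len(chains), chains)   # 3 chains: 6 clients served with 3 server streams
```

The server-bandwidth saving is the ratio of clients to chains; with dense arrivals, long chains form and the server cost approaches one stream per burst.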
35
Range Multicast [Hua02]
- Deploy an overlay of software routers on the Internet
- Video data are transmitted to clients through these software routers
- Each router caches a prefix of the video streams passing through it
- This buffer may be used to provide the entire video content to subsequent clients arriving within a buffer-size period
36
Range Multicast Group
- Four clients join the same server stream at different times without delay
- Each client sees the entire video
- Buffer size: each router can cache 10 time units of video data
- Assumption: no transmission delay
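The admission rule in this example is simple: a new client can join an existing stream as long as some router still holds that stream's prefix, i.e. the stream started within the last 10 time units; otherwise the server must open a new stream. A sketch of that rule (function names are illustrative):

```python
CACHE_UNITS = 10  # each router caches the first 10 time units of a stream

def serve(request_time, stream_start_times):
    """Join an existing range-multicast stream if its cached prefix is
    still within reach of the new client; otherwise ask the server for
    a new stream. Returns (action, stream start time)."""
    for start in stream_start_times:
        if 0 <= request_time - start <= CACHE_UNITS:
            return ("join stream", start)
    return ("new server stream", request_time)

streams = [0]                 # one server stream starts at time 0
for t in [3, 7, 10, 11]:      # hypothetical client arrival times
    action, start = serve(t, streams)
    if action == "new server stream":
        streams.append(t)
    print(t, action, start)
```

Clients at times 3, 7, and 10 all join the time-0 stream and still see the entire video (the router cache replays the prefix they missed); the client at time 11 is one unit too late, so the server opens a second stream.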
37
Multicast Range
- All members of a conventional multicast group share the same play point at all times: they must join at the multicast time
- Members of a range multicast group can have a range of different play points: they can join at their own time
Multicast range at time 11: [0, 11]
38
Network Cache Management Initially, a cache chunk is free. When a free chunk is dispatched for a new stream, the chunk becomes busy. A busy chunk becomes hot if its content matches a new service request.
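The chunk lifecycle above (free, then busy, then hot) is a three-state machine. A minimal sketch, assuming each transition is triggered exactly as the slide describes:

```python
class CacheChunk:
    """Free → busy → hot lifecycle of a router cache chunk."""

    def __init__(self):
        self.state = "free"            # initially, a cache chunk is free

    def dispatch(self):
        """A free chunk dispatched to hold a new stream's prefix becomes busy."""
        if self.state != "free":
            raise RuntimeError(f"cannot dispatch a {self.state} chunk")
        self.state = "busy"

    def match(self):
        """A busy chunk whose content matches a new service request becomes hot."""
        if self.state != "busy":
            raise RuntimeError(f"cannot match a {self.state} chunk")
        self.state = "hot"

chunk = CacheChunk()
chunk.dispatch()   # a stream's prefix is cached here
chunk.match()      # a later request is served from this chunk
print(chunk.state)
```

A hot chunk is the interesting case for replacement policy: evicting it would cut off clients being served from the cache, so hot chunks are the ones worth protecting.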
39
RM vs. Proxy Servers
Proxy Servers:
- Popular data are heavily duplicated if we cache long videos, so caching long videos is not advisable
- Much of the data must still be obtained from the server
Range Multicast:
- RM routers cache only a small leading portion of each video passing through
- The majority of the data are obtained from the network
40
2-Phase Service Model (2PSM) [Hua99] Browsing Videos in a Low Bandwidth Environment
41
Search Model
1. Use similarity matching or keyword search to find candidate videos
2. Preview some of the candidates to identify the desired video
3. Apply VCR-style functions to search for the desired video segments
42
Conventional Approach
1. Download S0
2. Download S1 while playing S0
3. Download S2 while playing S1
...
Advantage: reduced wait time
Disadvantage: unsuitable for searching video
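The pipelined download above overlaps each segment's transfer with the previous segment's playback, so the viewer only waits for the first segment. A sketch of the schedule (segment names follow the slide's S0, S1, S2 notation):

```python
def pipelined_schedule(num_segments):
    """Conventional segment pipelining: download S0 up front, then
    download S(i+1) while playing S(i); only S0's download is waited on."""
    steps = ["download S0"]
    for i in range(num_segments - 1):
        steps.append(f"download S{i + 1} while playing S{i}")
    steps.append(f"play S{num_segments - 1}")
    return steps

for step in pipelined_schedule(3):
    print(step)
```

The weakness for search is visible in the schedule itself: segments arrive strictly in order, so jumping ahead to S7 means either waiting for S1..S6 or breaking the pipeline, which is what motivates 2PSM.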
43
Search Techniques
- Use extra preview files to support the preview function
  - Requires more storage space
  - Downloading the preview file adds delay
- Use separate fast-forward and fast-reverse files to provide the VCR-style operations
  - Requires more storage space
  - Server can become a bottleneck
44
Challenges
- How to download the preview frames for FREE?
  - No additional delay
  - No additional storage requirement
- How to support VCR operations without VCR files?
  - No overhead for the server
  - No additional storage requirement
45
2PSM – Preview Phase
46
2PSM – Playback Phase
47
Remarks
1. It requires no extra files to provide the preview feature.
2. Downloading the preview frames is free.
3. It requires no extra files to support the VCR functionality.
4. Each client manages its own VCR-style interaction; the server is not involved.
48
2PSM Video Browser