1 Presentation of M.Sc. Thesis Work. Presented by: S. M. Farhad [040405056P], Department of Computer Science and Engineering, BUET. Supervised by: Dr. Md. Mostofa Akbar, Associate Professor, Department of Computer Science and Engineering, BUET
2 Thesis Title Multicast Video-on-Demand Service in Enterprise Networks with Client Assisted Patching
3 Video-on-Demand Service Architectures
Centralized: star-bus topology [diagram: a single server with clients C1-C6]
Enterprise network: multiple servers [diagram: servers S1-S6, each serving nearby clients]
4 Video-on-Demand Service Architectures
Internet: overlay topology [diagram: servers S1-S6 and clients C1-C3 connected over the Internet]
5 Characteristics of VoD Services
Long-lived sessions (90-120 minutes)
High bandwidth requirements: MPEG-1 (1.5 Mbps) and MPEG-2 (3-10 Mbps)
VCR-like functionality: pause, fast-forward, rewind, slow motion, etc.
Quality of service (QoS): jitter-free delivery, no service interruption
6 Classification of VoD Services
True VoD (TVoD): user has complete control; high quality of service; full-function VCR capabilities; not scalable
Near VoD (NVoD): stream sharing; service latency; service interruption; scalable
7 Video-on-Demand Service Techniques
Unicast: a dedicated channel for each client; easier to implement; expensive to operate; not scalable
Multicast: one-to-many data transmission; complex system; cost-effective; scalable
Broadcast: periodic time-shifted multicast over some fixed channels; suitable only for popular movies; increases initial service latency
8 Unicast VoD Service in an Enterprise Network by M. M. Islam et al. [2005]
Designs an admission controller
Batching; several media servers
K-shortest path (MD); SLA (multi-choice)
Considers both network and server parameters
Profit maximization via MMKP (heuristics)
[diagram: servers S1-S6, clients C1-C4, and the admission controller (ADC)]
9 Limitations of the Unicast Model
Not scalable
Expensive to operate
Profit maximization, hence possibly unfair to some clients
10 Multicast VoD Service by S. Deering [1989]
Avoids transmitting the same packet more than once on each link
Branch routers duplicate the packet and send it over multiple downstream branches
Still not very scalable on its own
[diagram: a multicast tree over servers S1-S6 with clients C1-C5 and the ADC]
11 Batching Technique by A. Dan et al. [1994]
Multiple client requests for the same movie arriving within a short time can be batched together and serviced using a single stream
More scalable than simple multicasting
Batching incurs a service latency (NVoD)
A longer batch duration increases the reneging probability but also increases the probability of forming a larger group
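As a rough sketch (a hypothetical helper, not from the thesis), batching can be simulated by anchoring each batch at its first request:

```python
from collections import defaultdict

def batch_requests(requests, batch_interval):
    """Group (arrival_time, movie_id) requests so that requests for the same
    movie arriving within one batch interval of the batch's first request
    share a single multicast stream. Returns one (movie_id, arrivals) tuple
    per stream that must be opened."""
    open_batches = defaultdict(list)  # movie_id -> list of batches (arrival lists)
    for t, movie in sorted(requests):
        for batch in open_batches[movie]:
            if t - batch[0] <= batch_interval:  # joins an existing batch
                batch.append(t)
                break
        else:
            open_batches[movie].append([t])    # opens a new batch, i.e. a new stream
    return [(m, b) for m, batches in open_batches.items() for b in batches]
```

With a batch interval of 2, requests for movie A at times 0, 1, and 3 need two streams (batches [0, 1] and [3]), while a lone request for movie B needs its own.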
12 Dynamic Multicasting
The multicast tree is expanded dynamically to accommodate new clients
Eliminates the service latency incurred by the batching technique, but requires a client-side cache
Some works: Adaptive Piggybacking [1996], Stream Tapping [1997], Chaining [1997], Patching [1998], and several variants of patching
13 Patching by K. Hua et al. [1998]
A late client joins the ongoing regular (multicast) stream and receives the missed initial portion over a separate patching stream; after the missing portion is patched, the patching channel is released
[diagram: server multicasting a regular stream to clients C1-C7, with patching streams for late arrivals; time marks 0, 5, and 10]
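The timing in the diagram can be reproduced with a small sketch (a hypothetical helper, assuming the patch is delivered at the normal playback rate):

```python
def patch_schedule(session_start, arrival):
    """A client joining at `arrival` missed the first (arrival - session_start)
    seconds of the movie. The patching stream carries exactly that prefix and,
    delivered at playback rate, finishes at arrival + missing, at which point
    the patching channel is released and only the regular stream remains."""
    missing = arrival - session_start
    release_time = arrival + missing
    return missing, release_time
```

A client arriving 5 seconds into a session needs a 5-second patch and releases the patching channel at t = 10, matching the 0/5/10 marks on the slide.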
14 Shortcoming of Patching
Every late client draws its own patching stream from the server, which incurs a heavy server load (the patching window effect); reducing this load is one of our objectives
[diagram: server serving clients C1-C8, with patching streams at offsets 5, 6, 7, and 8]
15 Multicast VoD in an Enterprise Network with Client Assisted Patching (our proposal)
Features: multicast with batching; client assisted patching; fair scheduling; several media servers; an admission controller
[diagram: servers S1-S6, clients C1-C5, and the ADC]
16 Using Multicast with Batching
Multicast on the shortest path tree rooted at each server
Batching introduces service latency but increases the possibility of forming a larger group
[diagram: a shortest path multicast tree from a server to clients C1-C5]
17 Client Assisted Patching: A New Patching Technique
Each client maintains a buffer and caches an initial part of the movie
When a new request arrives shortly afterward, the Admission Controller selects a nearby client to supply the patching stream
The newly admitted client can in turn supply the patching stream to later clients
A client serves at most one other client
Benefits: significantly decreases server load; increases the scalability of the system; eliminates the service latency incurred by batching
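A minimal sketch of the parent-selection rule above (hypothetical data layout; the real Admission Controller also weighs network distance and bandwidth):

```python
def pick_patching_parent(session_start, session_clients, req_time, window):
    """session_clients: list of dicts {'id', 'child'}. Admit via patching only
    if the newcomer is within the patching window of the session start; then
    pick any member whose single serving slot is free (every client caches the
    movie's initial part and serves at most one other client).
    Returns the chosen parent dict, or None if patching is impossible."""
    if req_time - session_start > window:
        return None
    return next((c for c in session_clients if c['child'] is None), None)
```

A request arriving outside the patching window returns None, i.e. it must be batched instead of patched.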
18 Client Assisted Patching: A New Patching Technique
[diagram: server multicasting the regular stream to clients C1-C6, with earlier clients supplying patching streams to later arrivals; time marks 0, 5, and 10]
The outcome is an alleviated server load
19 Multicast with VCR Functionality
A VCR action displaces the client's playback point; if another multicast session of the same movie exists whose actual start time lies within [displaced start time, displaced start time + threshold], the client is switched to it
VCR functionality is granted only if resources are available
[diagram: servers S1-S6 and clients C1-C7]
20 Admission Policy: Maximum Factored Queue Length First (MFQLF)
A request queue is maintained for each movie
The pending batch with the largest queue size divided by the factor √(associated access frequency) is served next
A profit-maximizing yet fair policy: the queue size maximizes profit, while factoring by access frequency ensures fairness
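The selection rule can be sketched as follows (the dict-based interface is a hypothetical simplification):

```python
import math

def mfqlf_next(queue_sizes, access_freq):
    """queue_sizes: movie -> number of pending requests; access_freq: movie ->
    access frequency. MFQLF serves the batch with the largest factored queue
    length q / sqrt(f): the raw queue size favours popular (profitable)
    movies, while dividing by sqrt(frequency) keeps rarely requested movies
    from starving."""
    return max(queue_sizes, key=lambda m: queue_sizes[m] / math.sqrt(access_freq[m]))
```

For example, a movie with 4 pending requests and access frequency 1 beats one with 10 requests and frequency 100 (factored lengths 4 vs. 1), which is where the fairness comes from.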
21 Workflow of the Admission Controller (ADC)
The servers advertise their available multimedia data and other resources to the ADC
Users submit their requests to the ADC
The ADC accepts or rejects each client request according to an admission control principle: patch the request; if patching is not possible, batch the request
The client is notified of the acceptance of its request
Clients cache data received from the server and forward it to another client on request
22 Database of the Admission Controller
The ADC maintains a central database containing: the resources of the EN (server and network bandwidths); the network topology; the shortest path multicast trees rooted at each server
It also maintains detailed information about ongoing sessions: the multicast tree of each session; each client's information (client source); whether a client is serving another client (its patching parent)
23 Architectural Environment
Connectivity: switch to switch, Gigabit Ethernet; switch to server, Gigabit Ethernet; switch to clients, LAN or ADSL
Switch nodes: layer-3 switches with a capacity of several million pps
Servers: network-attached storage (NAS) workstations with capacity sufficient to play movies and related software
Admission Controller: a high-performance machine with NAS attached
24 The Architecture [architecture diagram not captured]
25 Procedure Admission-Control
Initialize the Admission Controller: create the shortest path trees rooted at each server node (Dijkstra's algorithm)
Online request-processing thread: process movie requests; process VCR requests; process session-end requests; process patch-parent requests
Batched-requests processing thread
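The initialization step builds one shortest path tree per server node; a standard Dijkstra sketch (the adjacency-list layout is an assumption):

```python
import heapq

def shortest_path_tree(adj, root):
    """adj: node -> list of (neighbour, link_cost). Returns (parent, dist):
    the parent pointers define the shortest path tree rooted at `root`, used
    as the multicast tree along which streams are forwarded."""
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (d + w, v))
    return parent, dist
```

Running this once per server at startup gives the ADC the multicast trees it stores in its database.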
26 Procedure Process-Movie-Request
Select a patchable session (within the patching window) and a patching parent
If both a patchable session and a patching parent are available, admit the client into the session
Else, enqueue the request for future processing
27 Procedure Select-Patching-Parent
Form a shortest path tree rooted at the requesting client's node
Among the clients of the session, find the one at the shortest distance from the requesting node
Thus the total complexity is [complexity expression not captured]
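Given the distances from one Dijkstra run rooted at the requesting client's node, the parent choice itself is a single scan (a sketch; the node naming is hypothetical):

```python
def nearest_session_client(session_clients, dist_from_requester):
    """dist_from_requester: node -> shortest-path distance from the requesting
    client's node. Among the session's clients, return the nearest one (the
    candidate patching parent), or None if none is reachable."""
    reachable = [c for c in session_clients if c in dist_from_requester]
    return min(reachable, key=lambda c: dist_from_requester[c], default=None)
```

The Dijkstra run dominates the cost; the scan over the session's clients is linear in their number.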
28 Batched-Requests Processing Thread
Sort the batched lists in descending order of the factor queue size / √(associated access frequency)
For each movie in the batches, admit each request
Thus the complexity of the thread is [complexity expression not captured]
29 Client Buffer Requirement, Case 1: [timing diagram not captured]
30 Client Buffer Requirement Contd.
Patching window: [symbol not captured] seconds
Case 1:
The session starts at t0
The client requests the movie at t1 [condition not captured]
The missing portion is made up at time [expression not captured]
The initial portion of the buffer is not needed for patching after time [expression not captured]
Thus the buffer requirement is at most [expression not captured], where M is the stream rate of each stream
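The exact expressions on this slide did not survive transcription; as an illustrative sketch of the standard patching bound only (assuming the client buffers the regular stream, at stream rate M, while the patch stream is being played):

```python
def patching_buffer_bound(t0, t1, M):
    """Session starts at t0, client joins at t1. While the patch replays the
    missing (t1 - t0) seconds, the regular stream arrives unwatched and is
    buffered, so at most M * (t1 - t0) of data accumulates from patching."""
    return M * (t1 - t0)
```

With the simulation's MPEG-2 rate of 5 Mbps and a 5-minute patch window, this gives at most 5 * 300 = 1500 Mbit (about 188 MB) of client buffer from the patch alone.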
31 Client Buffer Requirement Contd., Case 2: [timing diagram not captured]
32 Client Buffer Requirement Contd.
Case 2:
The session starts at t0
The client requests the movie at t1 [condition not captured]
The initial portion of the buffer is not needed for patching after time [expression not captured]
The missing portion is made up at time [expression not captured]
Thus the buffer requirement is at most [expression not captured]
33 Simulation Parameters
No. of switch nodes: 20
No. of links: 32
Total clients: 400-1200
No. of servers: 5-10
No. of movies: 30
Batch interval: 1-5 minutes
Movie length: 1 hour
34 Simulation Parameters
Replication of popular movies: 1-3
Bandwidth per link: 1 Gbps
Server I/O bandwidth: 1 Gbps
Patch window: 5-10 minutes
Movie stream type: MPEG-2 (5 Mbps)
Simulation language: PERSEC
35 Probability Distributions
Client requests are generated in our simulation according to a Poisson process
Movies are requested with frequencies following a Zipf-like distribution
We consider different rates of interactivity in the system
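The workload generator can be sketched as follows (the skew parameter theta = 0.271 is a value commonly used in VoD studies, assumed here rather than taken from the thesis):

```python
import random

def zipf_weights(n_movies, theta=0.271):
    """Zipf-like popularity: the movie ranked i gets weight proportional to
    1 / i**(1 - theta), normalized so the weights sum to 1."""
    w = [1.0 / i ** (1.0 - theta) for i in range(1, n_movies + 1)]
    s = sum(w)
    return [x / s for x in w]

def generate_requests(rate, horizon, n_movies, seed=42):
    """Poisson arrivals (exponential inter-arrival times at `rate` requests
    per second) over `horizon` seconds; each request picks a movie according
    to its Zipf-like popularity. Returns a list of (time, movie_index)."""
    rng = random.Random(seed)
    weights = zipf_weights(n_movies)
    t, reqs = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            break
        reqs.append((t, rng.choices(range(n_movies), weights=weights)[0]))
    return reqs
```

Feeding such a trace to the admission controller reproduces the skewed demand that makes batching and patching worthwhile for the popular titles.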
36 Comparison: Proxy-Prefix Caching
The server accepts or rejects the requests
Proxies cache the initial portion of ongoing movies
Proxy servers serve the missing portion
[diagram: servers S1-S6, clients C1-C5, and a proxy]
[Slides 37-52: simulation result graphs not captured]
53 Observations
Client Assisted Patching outperforms Patching in the following ways:
Significantly alleviates the server load
More scalable
Cheaper to operate
Reduces the service latency introduced by the batching technique
54 Comparison with Conventional Patching
Relative to conventional patching, Client Assisted Patching achieves:
Server bandwidth requirement: 20-30% decreased
Percentage of requests served: 10-20% increased
Revenue income: 10-20% increased
Patch window: 40-50% increased
VCR action blocking: 10-20% decreased
55 Questions?