Analytic Evaluation of Quality of Service for On-Demand Data Delivery
Hongfei Guo (guo@cs.wisc.edu), Haonan Tan (haonan@cs.wisc.edu)
05/09/01 CS747 Project Presentation

Outline
- Background
- Two Multicast Protocols
- Customized MVA Analysis
- Validation
- Model Improvement (Interpolation)
- Evaluation of Different Multicast Protocols
- Conclusion & Future Work
Background
Eager et al. reasoned about minimum bandwidth requirements. But what about quality of service?
- Balking probability
- Waiting time
Given:
- server bandwidth
- multicast protocol
Two Multicast Protocols
Grace Patching
- Shared multicast stream (current data)
- Unicast "patch" stream (missed data)
- Average required server bandwidth
Two Multicast Protocols (cont'd)
Hierarchical Multicast Stream Merging
- Each data transmission stream is multicast
- Clients accumulate data faster than the file play rate
- Clients are merged into larger and larger groups
- Once merged, clients listen to the same streams
- Average required server bandwidth
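The deck mentions "average required server bandwidth" for both protocols without showing the formulas (the equation images were not captured). As a point of reference, the closed forms commonly quoted from the Eager et al. analysis this work builds on are sketched below; the exact constants, in particular η_i ≈ 1.62, are taken from that literature as an assumption, not from these slides. Here N_i = λ_i T_i is the expected number of request arrivals per file playback time:

```latex
B_{\mathrm{patch},i} = \sqrt{2N_i + 1} - 1,
\qquad
B_{\mathrm{merge},i} \approx \eta_i \ln\!\left(\frac{N_i}{\eta_i} + 1\right),
\qquad
N_i = \lambda_i T_i .
```

The qualitative point survives even if the constants differ: optimized patching bandwidth grows with the square root of the request rate, while hierarchical merging grows only logarithmically.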
CMVA Analysis
Customer Balking Model
- Fixed number of streams in the server
- An arriving customer leaves if no stream is available
Customer Waiting Model
- Fixed number of streams in the server
- An arriving customer waits until it is served
- Customers with the same request coalesce in the waiting queue
Input Parameters
- C: server capacity
- λ: external customer arrival rate
- M: number of file categories
For i = 1, 2, …, M:
- K_i: total number of distinct files in category i
- T_i: mean duration of the entire file in category i
- θ_i: Zipfian parameter for category i
- P_i: probability of accessing category i files
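The per-category Zipfian popularity and category access probabilities combine into per-file request probabilities p_i. A minimal sketch of that combination (the parameter values below are illustrative assumptions, not the values used in the project):

```python
# Sketch: per-category Zipf(theta) rank popularities, weighted by the
# category access probabilities P_i, yield per-file request probabilities.

def zipf_probs(k, theta):
    """Normalized Zipf popularities r**(-theta) over k files ranked 1..k."""
    weights = [r ** -theta for r in range(1, k + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def file_request_probs(categories):
    """categories: list of (K_i, theta_i, P_i) tuples.
    Returns the request probability of every file on the server."""
    probs = []
    for k, theta, p_cat in categories:
        probs.extend(p_cat * q for q in zipf_probs(k, theta))
    return probs

# Illustrative example: two categories with hypothetical parameters.
probs = file_request_probs([(3, 1.0, 0.7), (2, 0.5, 0.3)])
```

Within a category, lower-ranked (more popular) files get a larger share of that category's probability mass, and the whole vector sums to one.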
Output Parameters (Balking)
- S_1: average service time at center 1
- R_0: mean residence time at center 0
- X: system throughput
For i = 1, 2, …, #files on the server:
- p_i: fraction of customer requests for file i
- C_i′: average bandwidth for file i
- S_1i: mean service time of file-i streams at center 1
- S_0: mean service time at center 0
- Q_0: mean queue length at center 0
- X_i: throughput of streams serving file i
- P_B: mean incoming customer balking probability
Output Parameters (Waiting)
- W: mean waiting time for a request (not coalesced)
- U: system utilization
- S: overall mean stream duration estimate
For i = 1, 2, …, #files on the server:
- p_i: fraction of customer requests for file i
- S_i: mean stream duration for file i
- Q_i: mean number of waiting requests (not coalesced) for file i
- X_i: mean throughput of requests (not coalesced) for file i
- R_i: mean residence time of a request (not coalesced) for file i
- C_i′: average number of active streams for file i
- R_i′: mean residence time adjusted for coalescing
- W_i′: mean waiting time adjusted for coalescing
(1) Customer Balking Model
Center 0
- SSFR center
- Represents the waiting state of a stream
Center 1
- Delay center
- Represents the active state of a stream
(Diagram: a closed cycle of C streams between Center 0 and Center 1, with system throughput X.)
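A fixed pool of C streams where an arriving customer leaves immediately when none is free is the setting of the classic Erlang loss system. The sketch below computes the Erlang-B blocking probability as a simpler analogue of the balking probability P_B; it is not the customized MVA of the slides (which accounts for per-file sharing), but it shows the qualitative C-versus-blocking trade-off:

```python
def erlang_b(c, offered_load):
    """Blocking probability for c servers, Poisson arrivals, no queueing.
    offered_load = arrival rate * mean service time (in Erlangs).
    Uses the numerically stable recursion:
        B(0) = 1,  B(k) = a*B(k-1) / (k + a*B(k-1)).
    """
    b = 1.0
    for k in range(1, c + 1):
        b = offered_load * b / (k + offered_load * b)
    return b
```

For example, with one Erlang of offered load, two streams block 20% of arrivals, and adding streams drives the blocking probability down quickly.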
CMVA Equations
(Equation images not captured; the slide labeled them "protocol result" and "interarrival time".)
(2) Waiting Model
Center 0: a multi-channel server with C streams
Two kinds of measurements (from two perspectives):
- The server sees only non-coalesced customer requests
- Customers count both coalesced and non-coalesced requests
(Diagram: arrival stream X entering Center 0 with its C streams.)
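The two perspectives differ because requests that coalesce never appear as separate work to the server. A toy sketch of the coalescing rule: a request either starts a new pending stream for its file (serving begins after some wait) or joins an already-pending stream for the same file. Using a fixed wait is an assumption for illustration; in the actual model the wait W_i comes out of the MVA solution:

```python
def coalesced_fraction(arrivals, wait):
    """Toy sketch of request coalescing in the waiting model.
    arrivals: iterable of (time, file_id); wait: fixed delay before a
    pending stream starts serving (an illustrative assumption).
    Returns the fraction of requests that coalesced into an existing
    pending stream instead of creating a new one."""
    pending = {}            # file_id -> time its pending stream starts serving
    coalesced = total = 0
    for t, f in sorted(arrivals):
        total += 1
        if f in pending and t < pending[f]:
            coalesced += 1          # joins the waiting group: no new stream
        else:
            pending[f] = t + wait   # becomes a new pending stream
    return coalesced / total
```

Longer waits and more skewed popularity both raise the coalesced fraction, which is exactly why the server-side and customer-side measurements diverge.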
CMVA Equations
Measurements for the server
CMVA Equations (cont'd)
Measurements for the customers
Validation (1)
Validation (2)
Validation (3)
Comparison of Patching Results
Average Stream Duration (model vs. simulation). Big error here!

Capacity | File1 Model | File1 Sim | File2 Model | File2 Sim | File3 Model | File3 Sim
     100 |       0.249 |     0.957 |       0.253 |     0.947 |       0.258 |     0.846
     125 |       0.195 |     0.560 |       0.201 |     0.498 |       0.208 |     0.455
     150 |       0.152 |     0.327 |       0.161 |     0.283 |       0.170 |     0.265
     175 |       0.117 |     0.175 |       0.130 |     0.168 |       0.141 |     0.171
     200 |       0.089 |     0.096 |       0.106 |     0.112 |       0.120 |     0.126
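The "big error" claim is easy to quantify from the table. The sketch below computes the model's relative error against simulation for the File1 column; the numbers are copied from the table above:

```python
# Relative error of model vs. simulation for average stream duration
# (patching protocol, File1 column of the table above).
capacity = [100, 125, 150, 175, 200]
model = [0.249, 0.195, 0.152, 0.117, 0.089]
sim   = [0.957, 0.560, 0.327, 0.175, 0.096]

rel_err = [abs(m - s) / s for m, s in zip(model, sim)]
```

At capacity 100 the model is off by roughly 74%, while at capacity 200 the error falls to about 7%: the stream-duration estimate degrades badly as the server becomes more heavily loaded, which is what motivates the interpolation on the next slide.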
Interpolation of Stream Duration
- g(N_i): threshold for patching
- Exact for the two extreme cases: W_i → ∞ and W_i → 0
- Exact for other cases? Unknown.
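The slide gives only the two exact endpoints, not the interpolation rule itself, so the blend below is hypothetical: a weighting w/(w + w_ref) that recovers the W_i → 0 value at zero wait and the W_i → ∞ value in the limit. The weighting function and the reference scale w_ref are assumptions for illustration, not the slides' g(N_i)-based rule:

```python
def interpolated_duration(w, w_ref, s_at_zero, s_at_inf):
    """Hypothetical interpolation of mean stream duration between the two
    exact extremes. alpha -> 0 as w -> 0 (duration s_at_zero) and
    alpha -> 1 as w -> infinity (duration s_at_inf). The weighting
    w / (w + w_ref) is an illustrative assumption."""
    alpha = w / (w + w_ref)
    return (1 - alpha) * s_at_zero + alpha * s_at_inf
```

Any monotone weighting with the same limits would serve; the point is that matching both exact extremes constrains, but does not determine, behavior in between, which is why the slide leaves the middle cases an open question.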
Evaluation of Two Protocols (1)
Evaluation of Two Protocols (2)
Evaluation of Two Protocols (3)
Evaluation of Two Protocols (4)
Conclusion
- The balking model has a large relative error when utilization is low.
- The waiting model is good for HSMS, but underestimates Patching when utilization is high.
- Interpolation helps!
- C* is a good trade-off between QoS and server utilization.
- HSMS is always better than Patching.
Future Work
- Further investigate the discrepancy between model results and simulation results
- Use the models to evaluate QoS of stream servers with multiple categories
Comparison of Patching Results (1)
Coalesce Fraction (model vs. simulation)

Capacity | File1 Model | File1 Sim | File2 Model | File2 Sim | File3 Model | File3 Sim
     100 |       0.956 |     0.983 |       0.916 |     0.967 |       0.880 |     0.952
     125 |       0.923 |     0.962 |       0.858 |     0.928 |       0.802 |     0.896
     150 |       0.868 |     0.916 |       0.769 |     0.847 |       0.691 |     0.790
     175 |       0.770 |     0.813 |       0.631 |     0.693 |       0.537 |     0.608
     200 |       0.587 |     0.582 |       0.427 |     0.444 |       0.337 |     0.363