Distribution – Part I 20/10 – 2006 INF 5071 – Performance in Distributed Systems.


1 Distribution – Part I 20/10 – 2006 INF 5071 – Performance in Distributed Systems

2 2006 Carsten Griwodz & Pål Halvorsen, INF 5071 – performance in distributed systems
ITV Network Architecture Approaches
- Wide-area network backbones: ATM, SONET
- Local distribution network: HFC (Hybrid Fiber Coax), ADSL (Asymmetric Digital Subscriber Line), FTTC (Fiber To The Curb), FTTH (Fiber To The Home), EPON (Ethernet-based Passive Optical Networks), IEEE 802.11
(Figure: server and ATM backbone connected through distribution nodes to end systems via wireless ATM, ADSL, cable network, and EPON)

3 Delivery Systems Developments (figure: network)

4 Delivery Systems Developments
- Saving network resources: stream scheduling
(Figure: network carrying several programs or timelines)

5 From Broadcast to True Media-on-Demand [Little, Venkatesh 1994]
- Broadcast (No-VoD): traditional, no control
- Pay-per-view (PPV): paid specialized service
- Quasi Video on Demand (Q-VoD): distinction into interest groups; temporal control by group change
- Near Video on Demand (N-VoD): same media distributed at regular time intervals; simulated forward/backward
- True Video on Demand (T-VoD): full control over the presentation, VCR capabilities, bi-directional connection

6 Optimized Delivery Scheduling
Background/assumptions:
- Performing all delivery steps for each user wastes resources
- A scheme to reduce network and server load is needed
Terms:
- Stream: a distinct multicast stream at the server
- Channel: allocated server resources for one stream
- Segment: a non-overlapping piece of a video
Approach: combine several user requests into one stream.
Mechanisms:
- Type I: delayed on-demand delivery
- Type II: prescheduled delivery
- Type III: client-side caching

7 Type I: Delayed On Demand Delivery

8 Optimized Delivery Scheduling: Delayed On-Demand Delivery [Dan, Sitaram, Shahabuddin 1994]
- Collecting requests, joining requests: batching
- Delayed response; collect requests for the same title
Batching features:
- Simple decision process
- Can consider popularity
Drawbacks:
- Obvious service delays
- Limited savings
(Figure: central server multicasts one stream to the 1st, 2nd, and 3rd client)
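The batching step above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original slides; the function name and the fixed collection window are made up for the example:

```python
# Minimal batching sketch: requests for the same title that arrive
# within a collection window are all served by one multicast stream.
# Names and parameters here are illustrative, not from any real API.

def batch_requests(arrivals, window):
    """Group sorted request arrival times (seconds) into batches.

    The first request opens a batch; the stream starts `window`
    seconds later, and every request arriving in between joins it.
    Returns a list of (stream_start_time, [request_times]) tuples.
    """
    batches = []
    current = []
    for t in arrivals:
        if current and t - current[0] <= window:
            current.append(t)            # join the pending batch
        else:
            if current:
                batches.append((current[0] + window, current))
            current = [t]                # open a new batch
    if current:
        batches.append((current[0] + window, current))
    return batches

arrivals = [0.0, 2.0, 4.5, 30.0, 31.0, 90.0]
streams = batch_requests(arrivals, window=10.0)
print(len(streams))  # 3
```

Six requests collapse into three multicast streams; the price, as the slide notes, is that every client may wait up to `window` seconds before playout starts.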

9 Optimized Delivery Scheduling: Delayed On-Demand Delivery
- Collecting and joining requests: batching with a delayed response
- Content insertion, e.g. an advertisement loop
- Piggybacking: "catch-up streams" via display speed variations
Typical:
- Penalty on the user experience
- Single point of failure
(Figure: central server multicasts one stream to the 1st, 2nd, and 3rd client)

10 Graphics Explained
- Y axis: the current position in the movie, i.e. the temporal position of the data that is leaving the server
- X axis: the current actual time
(Figure: time vs. position in movie (offset); a stream's slope shows whether it leaves faster or slower than playback speed)

11 Piggybacking [Golubchik, Lui, Muntz 1995]
Save resources by joining streams:
- Server resources
- Network resources
Approach:
- Exploit limited user perception: change the playout speed
- Up to +/- 5% is considered acceptable
- Only the minimum and maximum speeds make sense, i.e. the two playout speeds differ by up to 10%
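The merge time implied by the speed bounds above is easy to work out: the trailing stream plays fast, the leading one slow, and the gap closes at the sum of the two adjustments. A minimal sketch (illustrative names, assuming the slide's +/- 5% bounds):

```python
# Piggybacking sketch: the trailing stream runs at (1 + speedup) and
# the leading stream at (1 - slowdown) times normal playback speed
# until their movie positions coincide; then one stream serves both.

def merge_time(gap, speedup=0.05, slowdown=0.05):
    """Seconds until two streams `gap` playback-seconds apart merge."""
    closing_rate = speedup + slowdown    # relative speed difference
    return gap / closing_rate

# Two requests 30 s apart, +/-5% speed adjustment:
print(merge_time(30.0))  # 300.0 -> the streams merge after 5 minutes
```

This is why piggybacking saves little for widely spaced requests: a 30-second gap already takes 5 minutes of adjusted playout to close.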

12 Piggybacking
(Figure: time vs. position in movie (offset); at each request arrival a fast stream catches up with a slow one)

13 Piggybacking
(Figure: time vs. position in movie (offset); the streams merge)

14 Adaptive Piggybacking [Aggarwal, Wolf, Yu 1996]
(Figure: time vs. position in movie (offset))

15 Performance
(Figure after Aggarwal, Wolf and Yu (1996): percentage of bandwidth savings vs. interarrival time in seconds, comparing original piggybacking, "snapshot" (optimize at every arrival), and greedy odd-even merging)

16 Type II: Prescheduled Delivery

17 Optimized Delivery Scheduling: Prescheduled Delivery
- No back-channel
- Non-linear transmission
- Client buffering and re-ordering
- Video segmentation
- Examples: staggered broadcasting, pyramid b., skyscraper b., fast b., pagoda b., harmonic b., …
Typical:
- Good theoretical performance
- High resource requirements
- Single point of failure

18 Optimized Delivery Scheduling
- Cut the movie into segments (begin … end)
- Reserve channels for the segments
- Determine a broadcast transmission schedule
(Figure: central server repeatedly broadcasts segments 1–4 on separate channels to the 1st, 2nd, and 3rd client)

19 Prescheduled Delivery
- Arrivals are not relevant: users can start viewing at each interval start

20 Staggered Broadcasting [Almeroth, Ammar 1996]
Near Video-on-Demand:
- Applied in real systems
- Limited interactivity is possible (jump, pause) by changing the phase offset
- Popularity can be considered
(Figure: time vs. position in movie with phase-offset streams; jump forward, pause, continue)

21 Pyramid Broadcasting [Viswanathan, Imielinski 1996]
Idea:
- Variable-size segments a_1 … a_n
- One segment repeated per channel
- Fixed number of HIGH-bitrate channels C_i with bitrate B
- Several movies per channel, a total of m movies (constant bitrate 1)
- Segment length grows exponentially
Operation:
- Client waits for the next occurrence of segment a_1 (on average ½ len(a_1))
- Receives the following segments as soon as linearly possible
Segment length:
- Size of segment a_i: len(a_i) = α · len(a_(i-1))
- α is limited: α > 1 to build a pyramid, α ≤ B/m for sequential viewing
- α = 2.5 is considered a good value
Drawbacks:
- Client buffers more than 50% of the video
- Client receives all channels concurrently in the worst case
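The geometric segment growth above can be sketched numerically. This is an illustrative sketch (function name and the 2-hour movie length are made up); it shows how the first segment, and hence the average start-up wait of ½ len(a_1), shrinks as α stretches the later segments:

```python
# Pyramid broadcasting sketch: segment i is alpha times as long as
# segment i-1, so the segment lengths form a geometric series that
# sums to the movie length F.

def segment_lengths(movie_len, channels, alpha=2.5):
    """Split `movie_len` into `channels` segments growing by factor alpha."""
    first = movie_len * (alpha - 1) / (alpha ** channels - 1)
    return [first * alpha ** i for i in range(channels)]

segs = segment_lengths(movie_len=7200.0, channels=4, alpha=2.5)
print([round(s) for s in segs])   # segment lengths in seconds
print(round(segs[0] / 2))         # average start-up wait: len(a1)/2
```

With a 2-hour movie and only 4 channels, the first segment is already under 5 minutes, so the average wait drops to a few minutes even though most of the movie sits in the last segment.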

22 Pyramid Broadcasting
- Pyramid broadcasting with B=4, m=2, α=2
(Figure: movie a cut into segments a1–a4 on a time axis; the marked span is the time to play a1 back at normal speed)

23 Pyramid Broadcasting
- Pyramid broadcasting with B=4, m=2, α=2
- Each channel has bandwidth B (B normal playback speeds); time to send segment a_i: len(a_i)/B
- Several channels are sent in parallel
(Figure: segments a1–a4 repeated on channels 1–4)

24 Pyramid Broadcasting
- Pyramid broadcasting with B=4, m=2, α=2
- Segments of m different movies per channel: a and b
(Figure: channels 1–4 alternate segments a_i and b_i)

25 Pyramid Broadcasting
- Pyramid broadcasting with B=4, m=2, α=2
(Figure: a request for a arrives; the client starts receiving and playing a1, then a2; it starts receiving a3 and a4 ahead of playing them)


27 Pyramid Broadcasting
- Pyramid broadcasting with B=5, m=2, α=2.5
- Choosing m=1 needs less bandwidth at the client and in the multicast trees, at the cost of more multicast addresses
(Figure: channels 1–4 carrying segments a_i and b_i)

28 Skyscraper Broadcasting [Hua, Sheu 1997]
Idea:
- Fixed-size segments; more than one segment per channel
- Channel bandwidth equals playback speed
- Segments in a channel keep their order
- Channel allocation series: 1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52, …
- Client receives at most 2 channels
- Client buffers at most 2 segments
Operation:
- Client waits for the next occurrence of segment a1
- Receives the following segments as soon as linearly possible
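The channel allocation series above can be generated programmatically. The recurrence below is reverse-engineered to reproduce the values on the slide (it is one way to express the series, not necessarily the form used in the original paper):

```python
# Skyscraper broadcasting sketch: reproduce the channel allocation
# series 1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52, ...
# Recurrence reconstructed to match the slide's values: each value
# repeats once, and every other step roughly doubles it.

def skyscraper_series(n):
    """Return the first n elements of the skyscraper allocation series."""
    f = []
    for i in range(1, n + 1):
        if i == 1:
            f.append(1)
        elif i in (2, 3):
            f.append(2)
        elif i % 4 == 0:
            f.append(2 * f[-1] + 1)
        elif i % 4 == 2:
            f.append(2 * f[-1] + 2)
        else:                 # i % 4 in (1, 3): repeat the previous value
            f.append(f[-1])
    return f

print(skyscraper_series(11))  # [1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52]
```

The slower-than-exponential growth (compare fast broadcasting's 1, 2, 4, 8, …) is what lets the client get by with receiving at most 2 channels and buffering at most 2 segments.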

29 Skyscraper Broadcasting
(Figure: segments a1–a10 scheduled on channels 1–4 according to the allocation series; after a request for a arrives, the client picks up a1–a8 in order while receiving at most two channels at once)


31 Other Pyramid Techniques: Fast Broadcasting [Juhn, Tseng 1998]
- Many more, smaller segments: similar to the previous scheme, but sequences of fixed-size segments instead of different-sized segments
- Channel allocation series is exponential: 1, 2, 4, 8, 16, 32, 64, …
- Segments in a channel keep their order
- Shorter client waiting time for the first segment
- Channel bandwidth equals playback speed
- Client must receive all channels
- Client must buffer 50% of all data
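The exponential series makes the start-up wait easy to quantify: with k channels the movie is cut into 1 + 2 + 4 + … + 2^(k-1) = 2^k − 1 equal segments, and the worst-case wait is one segment length. A minimal sketch (illustrative names and movie length):

```python
# Fast broadcasting sketch: with k channels and the allocation series
# 1, 2, 4, ..., 2^(k-1), the movie is cut into 2^k - 1 equal segments,
# so the worst-case start-up wait is F / (2^k - 1).

def fast_broadcast_wait(movie_len, channels):
    """Worst-case client wait (seconds) for the first segment."""
    segments = 2 ** channels - 1
    return movie_len / segments

for k in (4, 6, 8):
    print(k, round(fast_broadcast_wait(7200.0, k), 1))
```

Each extra channel roughly halves the wait: for a 2-hour movie, 4 channels give an 8-minute worst case, 8 channels under half a minute. The cost, per the slide, is that the client must receive all channels and buffer half the data.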

32 Fast Broadcasting
(Figure: segments a1–a15 on channels 1–4: a1 repeats on channel 1, a2–a3 on channel 2, a4–a7 on channel 3, a8–a15 on channel 4; after a request arrives the client receives all channels and plays a1–a15 in order)


34 Pagoda Broadcasting [Paris, Carter, Long 1999]
- Channel allocation series: 1, 3, 5, 15, 25, 75, 125
- Segments are not broadcast linearly; consecutive segments appear on pairs of channels
- Client must receive up to 7 channels (for more channels, a different series is needed!)
- Client must buffer 45% of all data
Based on the following:
- Segment 1 is needed every round
- Segment 2 is needed at least every 2nd round
- Segment 3 is needed at least every 3rd round
- Segment 4 is needed at least every 4th round
- …

35 Pagoda Broadcasting
(Figure: segments a1–a19 interleaved non-linearly on channels C1–C4; after a request for a arrives the client plays a1–a19 in order)


37 Harmonic Broadcasting [Juhn, Tseng 1997]
Idea:
- Fixed-size segments; one segment repeated per channel
- Later segments can be sent at lower bitrates; the client receives all other segments concurrently
- The harmonic series determines the bitrates: Bitrate(a_i) = Playout-rate(a_i)/i, i.e. bitrates 1/1, 1/2, 1/3, 1/4, 1/5, 1/6, …
Considerations:
- The size of a_1 determines the client start-up delay
- A growing number of segments allows a smaller a_1
- The required server bitrate grows very slowly with the number of segments
Drawbacks:
- Client buffers about 37% of the video for ≥ 20 channels
- (Client must re-order small video portions)
- A complex memory cache for disk access is necessary
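The "server bitrate grows very slowly" claim follows directly from the harmonic series: the total server bandwidth for n segments is the harmonic number H_n = 1 + 1/2 + … + 1/n, which grows like ln(n). A short sketch (illustrative function name):

```python
# Harmonic broadcasting sketch: channel i repeats segment a_i at 1/i
# of the playout rate, so the total server bandwidth for n segments
# is the harmonic number H_n, which grows only logarithmically.
import math

def harmonic_bandwidth(n):
    """Total server bandwidth (in playout-rate units) for n segments."""
    return sum(1.0 / i for i in range(1, n + 1))

for n in (5, 20, 100):
    print(n, round(harmonic_bandwidth(n), 2), round(math.log(n), 2))
```

With n = 100 segments the start-up delay shrinks to 1/100 of the movie length, yet the server sends barely more than five streams' worth of bandwidth.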

38 Harmonic Broadcasting
(Figure: segments a1–a5 repeated on channels C1–C5 at decreasing bitrates; a request for a arrives)

39 Harmonic Broadcasting
(Figure: the same schedule, with the naive playout marked ERROR)

40 Harmonic Broadcasting
- Read a1 and consume it concurrently; read the rest of a2 and consume it concurrently
- The client consumes the 1st segment faster than it is received!
(Figure: same schedule as before)

41 Harmonic Broadcasting: Bugfix [by Paris, Long, …]
Delayed Harmonic Broadcasting:
- Wait until a_1 is fully buffered
- All segments will be completely cached before playout
- Fixes the bug in Harmonic Broadcasting
or Cautious Harmonic Broadcasting:
- Wait an additional a_1 time
- Starts the harmonic series with a_2 instead of a_1
- Fixes the bug in Harmonic Broadcasting

42 Prescheduled Delivery Evaluation
Techniques:
- Video segmentation
- Varying transmission speeds
- Re-ordering of data
- Client buffering
Advantage:
- Achieves server resource reduction
Problems:
- Tends to require complex client processing
- May require large client buffers
- Incapable of (or not proven to) working with user interactivity; current research aims to support VCR controls
- Guaranteed bandwidth required

43 Type III: Client Side Caching

44 Optimized Delivery Scheduling: Client-Side Caching
- On-demand delivery with client buffering
- Multicast the complete movie
- Unicast the start of the movie to latecomers (patch)
Examples:
- Stream Tapping, Patching, Hierarchical Multicast Stream Merging, …
Typical:
- Considerable client resources
- Single point of failure

45 Optimized Delivery Scheduling: Patching [Hua, Cai, Sheu 1998; also as Stream Tapping, Carter, Long 1997]
- Server resource optimization is possible
(Figure: the central server multicasts the full movie to the 1st client; the 2nd client joins the multicast, fills a cyclic buffer, and receives the missed start as a unicast patch stream)

46 Optimized Delivery Scheduling
(Figure: time vs. position in movie (offset); a full stream restarts periodically, later request arrivals receive patch streams; the (patch) window size determines the minimum buffer size)

47 Optimized Delivery Scheduling
(Figure: time vs. position in movie (offset); full and patch streams, showing interarrival and interdeparture times)

48 Optimized Delivery Scheduling
- The total number of concurrent streams is the sum of concurrent full streams and concurrent patch streams
- The average number of patch streams is constant if the arrival process is a Poisson process
(Figure: time vs. position in movie (offset), with the number of concurrent streams)

49 Optimized Delivery Scheduling
- The shown patch streams are just examples, but patch end times always lie on the edge of a triangle
- Compare the numbers of streams
(Figure: time vs. position in movie (offset), with the number of concurrent streams)


53 Optimized Delivery Scheduling: Minimization of Server Load
- Minimize the average number of concurrent streams
Depends on:
- F: movie length
- μ_U: expected interarrival time
- Δ: patching window size
- C_U: cost of a unicast stream at the server
- C_M: cost of a multicast stream at the server
- S_U: setup cost of a unicast stream at the server
- S_M: setup cost of a multicast stream at the server

54 Optimized Delivery Scheduling: Optimal Patching Window Size
- For identical multicast and unicast setup costs: servers can estimate μ_U and achieve massive savings
- For different multicast and unicast setup costs, an example comparison:
  Unicast: 7445 Mio $
  Greedy patching: 3722 Mio $
  λ-patching: 375 Mio $
(Figure: patching window size as a function of interarrival time and movie length)

55 HMSM: Hierarchical Multicast Stream Merging [Eager, Vernon, Zahorjan 2001]
Key ideas:
- Each data transmission uses multicast
- Clients accumulate data faster than their playout rate (multiple streams, accelerated streams)
- Clients are merged into large multicast groups
- Merged clients continue to listen to the same stream to the end
Combines:
- Dynamic skyscraper
- Piggybacking
- Patching

56 HMSM
- Always join the closest neighbour
- HMSM(n,1): clients can receive up to n streams in parallel
- HMSM(n,e): clients can receive up to n full-bandwidth streams in parallel, but streams are delivered at speeds of e, where e << 1
- Basically, HMSM(n,1) is another recursive application of patching

57 HMSM(2,1)
(Figure: time vs. position in movie (offset); at a request arrival the patch size is determined; the client fills a cyclic buffer during playout)

58 HMSM(2,1)
(Figure: closest-neighbour merging; the patch and the client's cyclic buffer are extended; some data is not received because n=2)

59 HMSM(2,1)
(Figure: closest-neighbour merging with patch extension)

60 Client-Side Caching Evaluation
Techniques:
- Video segmentation
- Parallel reception of streams
- Client buffering
Advantages:
- Achieves server resource reduction
- Achieves true VoD behaviour
Problems:
- The optimum cannot be achieved in the average case
- Needs combination with a prescheduled technique for high-popularity titles
- May require large client buffers
- Incapable of (or not proven to) working with user interactivity
- Guaranteed bandwidth required

61 Overall Evaluation
Advantage:
- Achieves server resource reduction
Problems:
- May require large client buffers
- Incapable of (or not proven to) working with user interactivity
- Guaranteed bandwidth required
Fixes:
- Introduce loss-resistant codecs and partial retransmission
- Introduce proxies to handle buffering
- Choose computationally simple variations

62 Optimized Delivery Scheduling
- The optimum depends on popularity: estimate the popularity of movies
- Frequently used: the Zipf distribution
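A minimal sketch of the Zipf popularity model mentioned above (illustrative; the slide does not fix the exponent, so s = 1 is an assumption here): the request probability of the i-th most popular of N titles is taken proportional to 1/i^s.

```python
# Zipf popularity sketch: probability of the i-th most popular of n
# movies is proportional to 1/i**s (s = 1 assumed for illustration).

def zipf_popularity(n, s=1.0):
    """Return normalized request probabilities for ranks 1..n."""
    weights = [1.0 / (i ** s) for i in range(1, n + 1)]
    norm = sum(weights)
    return [w / norm for w in weights]

pop = zipf_popularity(100)
print(round(pop[0], 3))          # share of requests for the top title
print(round(sum(pop[:10]), 3))   # share of requests for the top-10 titles
```

The heavy skew (the top handful of titles attracts over half of all requests) is what makes prescheduled delivery attractive for the few hot titles and patching attractive for the long tail.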

63 Optimized Delivery Scheduling
- Problem: being Zipf-distributed is only an observed property

64 Optimized Delivery Scheduling
- Density function of the Zipf distribution, compared to real-world data

65 Optimized Delivery Scheduling: Conclusion
- Major optimizations are possible
- Independent optimizations don't work
Centralized systems' problems:
- Scalability is limited
- Minimum latency is bounded by distance
- Single point of failure
Look at distributed systems:
- Clusters
- Distribution architectures

66 Some References
1. Thomas D.C. Little and Dinesh Venkatesh: "Prospects for Interactive Video-on-Demand", IEEE Multimedia 1(3), 1994, pp. 14-24
2. Asit Dan, Dinkar Sitaram and Perwez Shahabuddin: "Scheduling Policies for an On-Demand Video Server with Batching", IBM TR RC 19381, 1993
3. Rajesh Krishnan, Dinesh Venkatesh and Thomas D. C. Little: "A Failure and Overload Tolerance Mechanism for Continuous Media Servers", ACM Multimedia Conference, November 1997, pp. 131-142
4. Leana Golubchik, John C. S. Lui and Richard R. Muntz: "Adaptive Piggybacking: A Novel Technique for Data Sharing in Video-on-Demand Storage Servers", Multimedia Systems Journal 4(3), 1996, pp. 140-155
5. Charu Aggarwal, Joel Wolf and Philip S. Yu: "On Optimal Piggyback Merging Policies for Video-on-Demand Systems", ACM SIGMETRICS Conference, Philadelphia, USA, 1996, pp. 200-209
6. Kevin Almeroth and Mustafa Ammar: "On the Use of Multicast Delivery to Provide a Scalable and Interactive Video-on-Demand Service", IEEE JSAC 14(6), 1996, pp. 1110-1122
7. S. Viswanathan and T. Imielinski: "Metropolitan Area Video-on-Demand Service using Pyramid Broadcasting", Multimedia Systems Journal 4(4), 1996, pp. 197-208
8. Kien A. Hua and Simon Sheu: "Skyscraper Broadcasting: A New Broadcasting Scheme for Metropolitan Video-on-Demand Systems", ACM SIGCOMM Conference, Cannes, France, 1997, pp. 89-100
9. L. Juhn and L. Tseng: "Harmonic Broadcasting for Video-on-Demand Service", IEEE Transactions on Broadcasting 43(3), 1997, pp. 268-271
10. Carsten Griwodz, Michael Liepert, Michael Zink and Ralf Steinmetz: "Tune to Lambda Patching", ACM Performance Evaluation Review 27(4), 2000, pp. 202-206
11. Kien A. Hua, Ying Cai and Simon Sheu: "Patching: A Multicast Technique for True Video-on-Demand Services", ACM Multimedia Conference, Bristol, UK, 1998, pp. 191-200
12. Derek Eager, Mary Vernon and John Zahorjan: "Minimizing Bandwidth Requirements for On-Demand Data Delivery", Multimedia Information Systems Conference, 1999
13. Jehan-Francois Paris: http://www.cs.uh.edu/~paris/

