
Slide 1: Distribution – Part I
INF 5071 – Performance in Distributed Systems
Carsten Griwodz & Pål Halvorsen, University of Oslo, August 10, 2015

Slide 2: Distribution

- Why does distribution matter?
  - Digital replacement of earlier methods
- What can be distributed?

Slide 3: Distribution Network Approaches

- Wide-area network backbones
  - FDM/WDM
  - 10 Gbit Ethernet
  - ATM
  - SONET
- Local distribution networks
  - Wired: HFC (Hybrid Fiber Coax, i.e. cable), ADSL (Asymmetric Digital Subscriber Line), EPON (Ethernet-based Passive Optical Networks)
  - Wireless: IEEE 802.11, WiMAX

[Figure: a distribution node connects the server to end systems over ATM, wireless ATM, ADSL, cable network, and EPON]

Slide 4: Delivery Systems Developments

[Figure: network connecting a server to its clients]

Slide 5: Delivery Systems Developments

- Saving network resources: stream scheduling

[Figure: network carrying several programs or timelines]

Slide 6: From Broadcast to True Media-on-Demand [Little, Venkatesh 1994]

- Broadcast (No-VoD)
  - Traditional, no control
- Pay-per-view (PPV)
  - Paid, specialized service
- Quasi Video-on-Demand (Q-VoD)
  - Distinction into interest groups
  - Temporal control by changing groups
- Near Video-on-Demand (N-VoD)
  - Same media distributed at regular time intervals
  - Simulated forward/backward
- True Video-on-Demand (T-VoD)
  - Full control over the presentation, VCR capabilities
  - Bi-directional connection

Slide 7: Optimized delivery scheduling

- Background/assumption:
  - Performing all delivery steps for each user wastes resources
  - A scheme to reduce network and server load is needed
  - Terms:
    - Stream: a distinct multicast stream at the server
    - Channel: allocated server resources for one stream
    - Segment: a non-overlapping piece of a video
  - Combine several user requests into one stream
- Mechanisms:
  - Type I: Delayed on-demand delivery
  - Type II: Prescheduled delivery
  - Type III: Client-side caching

Slide 8: Type I: Delayed On-Demand Delivery

Slide 9: Optimized delivery scheduling [Dan, Sitaram, Shahabuddin 1994]

- Delayed on-demand delivery:
  - Collecting requests
  - Joining requests
  - Batching: delay the response and collect requests for the same title (see the sketch below)
- Features:
  - Simple decision process
  - Can consider popularity
- Drawbacks:
  - Obvious service delays
  - Limited savings

[Figure: a central server multicasts one stream to the 1st, 2nd, and 3rd client]
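Batching amounts to holding the first request for a title for a short window and answering every request collected in that window with one multicast stream. A minimal sketch of that decision process (the function names and the 5-second window are assumptions for illustration, not the scheme from the paper):

```python
# Requests for the same title that arrive within one batching window are
# answered by a single multicast stream.
BATCH_WINDOW = 5.0          # seconds to collect requests (assumed value)

pending = {}                # title -> (window start time, [clients])

def request(title, client, now):
    """Register a request; the first request for a title opens its window."""
    if title not in pending:
        pending[title] = (now, [])
    pending[title][1].append(client)

def due_batches(now):
    """Yield (title, clients) for each expired window; one multicast each."""
    for title, (start, clients) in list(pending.items()):
        if now - start >= BATCH_WINDOW:
            del pending[title]
            yield title, clients

request("movie-a", "client-1", now=0.0)
request("movie-a", "client-2", now=2.0)
print(list(due_batches(now=5.0)))   # [('movie-a', ['client-1', 'client-2'])]
```

The drawback named on the slide is visible here: client-1 waits the full window before its stream starts.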

Slide 10: Optimized delivery scheduling

- Delayed on-demand delivery:
  - Collecting requests, joining requests
  - Batching: delayed response
  - Content insertion: e.g. an advertisement loop
  - Piggybacking: "catch-up streams" through display speed variations
- Typical:
  - Penalty on the user experience
  - Single point of failure

[Figure: a central server multicasts one stream to the 1st, 2nd, and 3rd client]

Slide 11: Graphics Explained

- Y axis: the current position in the movie, i.e. the temporal position of the data that is currently leaving the server
- X axis: the current actual time

[Figure: position in movie (offset) over time; a line steeper than the diagonal is a stream leaving faster than playback speed, a flatter line one leaving slower]

Slide 12: Piggybacking [Golubchik, Lui, Muntz 1995]

- Save resources by joining streams:
  - Server resources
  - Network resources
- Approach:
  - Exploit limited user perception
  - Change the playout speed; up to +/- 5% is considered acceptable
  - Only the minimum and maximum speeds make sense, i.e. the two extremes of the acceptable range (see the sketch below)
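The gap between two streams closes at the rate of their speed difference, which is why only the extreme speeds are worth using. A small sketch with assumed values (+/- 5%, as on the slide):

```python
def merge_time(gap_seconds, slow=0.95, fast=1.05):
    """Seconds until a trailing stream played at `fast` catches a leading
    stream slowed to `slow`, starting `gap_seconds` apart in movie offset;
    the gap closes at (fast - slow) movie-seconds per second."""
    return gap_seconds / (fast - slow)

# Streams 30 s apart merge after ~300 s; any smaller speed difference
# would only stretch the catch-up phase.
print(merge_time(30.0))   # ~300.0
```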

Slide 13: Piggybacking

[Figure: position in movie (offset) over time; after a request arrival, the earlier stream plays slow and the later one fast until they merge]

Slide 14: Piggybacking

[Figure: position in movie (offset) over time after the streams have merged]

Slide 15: Adaptive Piggybacking [Aggarwal, Wolf, Yu 1996]

[Figure: position in movie (offset) over time; merge targets are chosen adaptively among the running streams]

Slide 16: Performance

[Figure: percentage of bandwidth savings (20-90%) vs. interarrival time in seconds (0-500), comparing original piggybacking, optimization for every arrival ("snapshot"), and greedy odd-even merging; drawing after Aggarwal, Wolf and Yu (1996)]

Slide 17: Type II: Prescheduled Delivery

Slide 18: Optimized delivery scheduling

- Prescheduled delivery:
  - No back-channel
  - Non-linear transmission
  - Client buffering and re-ordering
  - Video segmentation
  - Examples: staggered broadcasting, pyramid b., skyscraper b., fast b., pagoda b., harmonic b., ...
- Typical:
  - Good theoretical performance
  - High resource requirements
  - Single point of failure

Slide 19: Optimized delivery scheduling

- Cut the movie into segments (begin ... end)
- Reserve channels for the segments
- Determine a transmission (broadcasting) schedule

[Figure: a central server repeatedly broadcasts segments 1-4 on reserved channels; the 1st, 2nd, and 3rd clients tune in at different times]

Slide 20: Prescheduled Delivery

- Arrival times are not relevant: users can start viewing at the start of each interval

Slide 21: Staggered Broadcasting [Almeroth, Ammar 1996]

- Near Video-on-Demand:
  - Applied in real systems
  - Limited interactivity is possible (jump, pause) by changing the phase offset
  - Popularity can be considered

[Figure: position in movie (offset) over time for several copies started with a fixed phase offset; jump forward, pause, and continue correspond to switching between copies]

Slide 22: Pyramid Broadcasting [Viswanathan, Imielinski 1996]

- Idea:
  - Fixed number of high-bitrate channels C_i with bitrate B
  - Variable-size segments a_1 ... a_n, one segment repeated per channel
  - Segment length grows exponentially
  - Several movies per channel, a total of m movies (constant playout bitrate 1)
- Operation:
  - Client waits for the next occurrence of segment a_1 (on average 1/2 len(a_1))
  - Receives the following segments as soon as linearly possible
- Segment length:
  - Size of segment a_i: len(a_i) = α^(i-1) · len(a_1) (see the sketch below)
  - α is limited: α > 1 to build a pyramid, α ≤ B/m for sequential viewing; α = 2.5 is considered a good value
- Drawbacks:
  - Client buffers more than 50% of the video
  - Client receives all channels concurrently in the worst case
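A quick numeric sketch of the geometric segment sizes (first segment normalized to length 1; α = 2.5 as recommended on the slide). It also shows where the buffering drawback comes from: the last segment alone is more than half of the movie:

```python
ALPHA = 2.5   # growth factor considered a good value

def segment_lengths(n, alpha=ALPHA):
    """len(a_i) = alpha**(i-1) * len(a_1), with len(a_1) = 1."""
    return [alpha ** i for i in range(n)]

lengths = segment_lengths(4)
print(lengths)                     # [1.0, 2.5, 6.25, 15.625]
print(lengths[-1] / sum(lengths))  # ~0.62: the last segment is ~62% of the
                                   # movie, hence client buffers exceed 50%
```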

Slide 23: Pyramid Broadcasting

- Pyramid broadcasting with B = 4, m = 2, α = 2

[Figure: movie a cut into segments a1-a4 on a time axis; a bracket marks the time needed to play a1 back at normal speed]

Slide 24: Pyramid Broadcasting

- Pyramid broadcasting with B = 4, m = 2, α = 2
- Each channel has bandwidth for B normal playout speeds; the time to send a segment is len(a_n)/B
- Several channels are sent in parallel

[Figure: channels 1-4, each repeating one of the segments a1-a4]

Slide 25: Pyramid Broadcasting

- Pyramid broadcasting with B = 4, m = 2, α = 2
- Segments of m different movies per channel: a and b

[Figure: channels 1-4, each alternating the corresponding segments of movies a and b]

Slide 26: Pyramid Broadcasting

- Pyramid broadcasting with B = 4, m = 2, α = 2

[Figure: channel schedule; after the request for a arrives, the client starts receiving and playing a1, then a2, and starts receiving a3 and a4 ahead of their playout points]

Slide 27: Pyramid Broadcasting

- Pyramid broadcasting with B = 4, m = 2, α = 2

[Figure: the same channel schedule as on the previous slide]

Slide 28: Pyramid Broadcasting

- Pyramid broadcasting with B = 5, m = 2, α = 2.5
- In the Internet: choose m = 1
  - Less bandwidth at the client and in the multicast trees
  - At the cost of more multicast addresses

[Figure: channels 1-4 carrying segments a1/b1 through a4/b4]

Slide 29: Skyscraper Broadcasting [Hua, Sheu 1997]

- Idea:
  - Fixed-size segments
  - More than one segment per channel
  - Channel bandwidth equals the playback speed
  - Segments in a channel keep their order
  - Channel allocation series: 1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52, ... (see the generator sketch below)
  - Client receives at most 2 channels
  - Client buffers at most 2 segments
- Operation:
  - Client waits for the next occurrence of segment a1
  - Receives the following segments as soon as linearly possible
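The slide gives the allocation series without its generating rule. One common formulation (an assumption here, chosen to reproduce the numbers on the slide) repeats each width once and alternates between doubling-plus-one and doubling-plus-two:

```python
def skyscraper_series(n):
    """First n widths of the skyscraper channel-allocation series.
    Assumed recurrence: 1, 2, 2 to start; at even 1-based index i,
    w(i) = 2*w(i-1) + 1 if i % 4 == 0 else 2*w(i-1) + 2;
    odd indices repeat the previous width."""
    w = []
    for i in range(1, n + 1):
        if i == 1:
            w.append(1)
        elif i in (2, 3):
            w.append(2)
        elif i % 2 == 1:                  # odd index: repeat previous width
            w.append(w[-1])
        elif i % 4 == 0:
            w.append(2 * w[-1] + 1)
        else:                             # i % 4 == 2
            w.append(2 * w[-1] + 2)
    return w

print(skyscraper_series(11))  # [1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52]
```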

Slide 30: Skyscraper Broadcasting

[Figure: movie a cut into segments a1-a10; channel 1 repeats a1, channel 2 cycles a2-a3, channel 3 cycles a4-a5, channel 4 cycles a6-a10; after the request arrives, the client fetches a1-a8 while receiving at most two channels at a time]

Slide 31: Skyscraper Broadcasting

[Figure: the same schedule for a different request arrival time]

Slide 32: Other Pyramid Techniques: Fast Broadcasting [Juhn, Tseng 1998]

- Many more, smaller segments:
  - Similar to the previous schemes, but sequences of fixed-size segments instead of segments of different sizes
- Channel allocation series is the exponential series: 1, 2, 4, 8, 16, 32, 64, ...
- Segments in a channel keep their order
- Shorter client waiting time for the first segment (see the sketch below)
- Channel bandwidth equals the playback speed
- Client must receive all channels
- Client must buffer 50% of all data
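With k channels the exponential series covers 2^k - 1 fixed-size segments, so the worst-case wait for the first segment shrinks exponentially in the number of channels. A quick sketch (the 120-minute movie length is an assumed example):

```python
def fast_broadcast_wait(movie_minutes, channels):
    """Worst-case startup delay: the movie is split into 2**channels - 1
    equal segments and channel 1 repeats segment 1 back to back."""
    segments = 2 ** channels - 1
    return movie_minutes / segments

for k in (4, 5, 6):
    print(k, round(fast_broadcast_wait(120, k), 2))
# 4 -> 8.0 min, 5 -> 3.87 min, 6 -> 1.9 min for a 120-minute movie
```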

Slide 33: Fast Broadcasting

[Figure: movie a cut into segments a1-a15; channel 1 repeats a1, channel 2 cycles a2-a3, channel 3 cycles a4-a7, channel 4 cycles a8-a15; after the request arrives, the client receives all channels and plays a1-a15 in order]

Slide 34: Fast Broadcasting

[Figure: the same schedule for a different request arrival time]

Slide 35: Pagoda Broadcasting [Paris, Carter, Long 1999]

- Channel allocation series: 1, 3, 5, 15, 25, 75, 125
- Segments are not broadcast linearly; consecutive segments appear on pairs of channels
- Client must receive up to 7 channels (for more channels, a different series is needed!)
- Client must buffer 45% of all data
- Based on the following observation (checked in the sketch below):
  - Segment 1 is needed every round
  - Segment 2 is needed at least every 2nd round
  - Segment 3 is needed at least every 3rd round
  - Segment 4 is needed at least every 4th round
  - ...
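The schedule on the next slide can be checked against that observation: segment i must recur within every window of i consecutive slots of its channel. A small verification sketch (channel rows transcribed from the next slide; each row repeats cyclically):

```python
# One period of each channel's schedule, as segment numbers.
channels = [
    [1] * 16,
    [2, 4, 2, 5] * 4,
    [3, 6, 12, 3, 7, 13, 3, 6, 14, 3, 7, 15],
    [8, 9, 10, 11, 16, 17, 18, 19] * 2,
]

def recurs_in_time(row, seg, repeats=4):
    """True if `seg` appears in every window of `seg` slots of the cyclic row."""
    r = row * repeats
    idx = [k for k, s in enumerate(r) if s == seg]
    gaps = [b - a for a, b in zip(idx, idx[1:])]
    return bool(gaps) and max(gaps) <= seg

for row in channels:
    for seg in set(row):
        assert recurs_in_time(row, seg), seg
print("pagoda property holds for segments 1-19")
```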

Slide 36: Pagoda Broadcasting

[Figure: movie a cut into segments a1-a19; channel C1 repeats a1, C2 cycles a2 a4 a2 a5, C3 cycles a3 a6 a12 a3 a7 a13 a3 a6 a14 a3 a7 a15, C4 cycles a8 a9 a10 a11 a16 a17 a18 a19; after the request arrives, the client plays a1-a19 in order]

Slide 37: Pagoda Broadcasting

[Figure: the same schedule for a different request arrival time]

Slide 38: Harmonic Broadcasting [Juhn, Tseng 1997]

- Idea:
  - Fixed-size segments
  - One segment repeated per channel
  - Later segments can be sent at lower bitrates; all other segments are received concurrently
  - The harmonic series determines the bitrates: bitrate(a_i) = playout-rate(a_i)/i, i.e. bitrates 1/1, 1/2, 1/3, 1/4, 1/5, 1/6, ...
- Considerations:
  - The size of a_1 determines the client's start-up delay
  - A growing number of segments allows a smaller a_1
  - The required server bitrate grows very slowly with the number of segments (see the sketch below)
- Drawbacks:
  - Client buffers about 37% of the video for 20 or more channels
  - (Client must re-order small video portions)
  - A complex memory cache for disk access is necessary
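The total server bandwidth for n channels is the harmonic number H_n = 1/1 + 1/2 + ... + 1/n (in units of the playout rate), which is why it grows only logarithmically with the number of segments:

```python
def harmonic_bandwidth(n):
    """Total server bandwidth, in playout rates, for n harmonic channels."""
    return sum(1.0 / i for i in range(1, n + 1))

for n in (5, 20, 100, 1000):
    print(n, round(harmonic_bandwidth(n), 2))
# 5 -> 2.28, 20 -> 3.6, 100 -> 5.19, 1000 -> 7.49; roughly ln(n) + 0.58
```

Even a thousand segments, and thus a start-up delay of a thousandth of the movie, cost the server less than 8 playout rates.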

Slide 39: Harmonic Broadcasting

[Figure: channels C1-C5 repeat segments a1-a5 at decreasing bitrates; after the request arrives, the client receives all channels and plays a1-a5 in order]

Slide 40: Harmonic Broadcasting

[Figure: the same schedule with an ERROR marked: playout of a segment overtakes its reception]

Slide 41: Harmonic Broadcasting

- Read a1 and consume it concurrently; read the rest of a2 and consume it concurrently
- The client consumes the 1st segment faster than it is received!

[Figure: the same schedule, showing reception and consumption overlapping]

Slide 42: Harmonic Broadcasting: Bugfix [by Paris, Long, ...]

- Delayed Harmonic Broadcasting:
  - Wait until a_1 is fully buffered
  - All segments are then completely cached before playout
  - Fixes the bug in harmonic broadcasting
- or Cautious Harmonic Broadcasting:
  - Wait an additional a_1 time
  - Starts the harmonic series with a_2 instead of a_1
  - Fixes the bug in harmonic broadcasting

Slide 43: Prescheduled Delivery Evaluation

- Techniques:
  - Video segmentation
  - Varying transmission speeds
  - Re-ordering of data
  - Client buffering
- Advantage:
  - Achieves server resource reduction
- Problems:
  - Tends to require complex client processing
  - May require large client buffers
  - Incapable (or not proven capable) of supporting user interactivity; current research aims at VCR controls
  - Guaranteed bandwidth required

Slide 44: Type III: Client-Side Caching

Slide 45: Optimized delivery scheduling

- Client-side caching:
  - On-demand delivery
  - Client buffering
  - Multicast the complete movie
  - Unicast the start of the movie for latecomers (the patch)
- Examples: stream tapping, patching, hierarchical stream merging, ...
- Typical:
  - Considerable client resources
  - Single point of failure

Slide 46: Optimized delivery scheduling [Hua, Cai, Sheu 1998; also as stream tapping: Carter, Long 1997]

- Patching: a latecomer joins the ongoing multicast, fills its cyclic buffer from it, and receives the missed start of the movie as a unicast patch stream
- Server resource optimization is possible

[Figure: the central server multicasts to the 1st client; the 2nd client joins the multicast and additionally receives a unicast patch stream]

Slide 47: Optimized delivery scheduling

[Figure: position in movie (offset) over time; full streams restart at the end of each patching window, a request arriving inside the window is served by a patch stream, and the minimum client buffer size equals the arrival's offset into the window]

Slide 48: Optimized delivery scheduling

[Figure: full and patch streams over time; the interarrival time of requests vs. the interdeparture time of full streams]

Slide 49: Optimized delivery scheduling

- The total number of concurrent streams is the sum of the concurrent full streams and the concurrent patch streams
- The average number of patch streams is constant if the arrival process is a Poisson process

[Figure: number of concurrent streams over time and position in movie (offset)]

Slide 50: Optimized delivery scheduling

- The shown patch streams are just examples, but patch end times always lie on the edge of a triangle
- Compare the numbers of streams

[Figure: number of concurrent streams over time, with patch streams ending on the triangle's edge]

Slide 51: Optimized delivery scheduling

[Figure: number of concurrent streams over time and position in movie (offset)]

Slide 52: Optimized delivery scheduling

[Figure: number of concurrent streams over time and position in movie (offset), continued]

Slide 53: Optimized delivery scheduling

[Figure: number of concurrent streams over time and position in movie (offset), continued]

Slide 54: Optimized delivery scheduling

- Minimization of the server load: minimize the average number of concurrent streams (see the sketch below)
- Depends on:
  - F: movie length
  - Δ_U: expected interarrival time
  - Δ_M: patching window size
  - C_U: cost of a unicast stream at the server
  - C_M: cost of a multicast stream at the server
  - S_U: setup cost of a unicast stream at the server
  - S_M: setup cost of a multicast stream at the server
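The trade-off behind the optimal window can be sketched numerically. This is a simplified cost model built only from the slide's F and Δ_U, ignoring the per-stream setup and multicast/unicast cost weights; an illustration, not the lecture's exact formula. Per cycle, the server sends one full stream of length F plus, with Poisson arrivals, about Δ_M/Δ_U patches of average length Δ_M/2:

```python
def server_load(delta_m, F, delta_u):
    """Average number of concurrent streams for patching window delta_m."""
    cycle = delta_m + delta_u                        # window + wait for restart
    patches = (delta_m / delta_u) * (delta_m / 2.0)  # expected patch workload
    return (F + patches) / cycle

F, delta_u = 7200.0, 60.0     # 2-hour movie, one request per minute (assumed)
best = min(range(1, int(F)), key=lambda w: server_load(w, F, delta_u))
print(best, round(server_load(best, F, delta_u), 2))
# ~871 s window, ~14.5 concurrent streams, vs. F/delta_u = 120 for pure unicast
```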

Slide 55: Optimized delivery scheduling

- The optimal patching window size can be given in closed form, both for identical and for different multicast and unicast setup costs
- Servers can estimate Δ_U and achieve massive savings:

  Unicast:           7445 Mio $
  Greedy patching:   3722 Mio $
  λ-patching:         375 Mio $

  (Based on prices by Deutsche Telekom, IBM and Real Networks from 1997)

[Figure: optimal patching window size as a function of interarrival time and movie length]

Slide 56: Pull-Patching

- Combination of:
  - Adaptive segmented HTTP streaming
  - Patching

[Figure: quality over playout time for pull-patching]

Slide 57: Pull-Patching

- Experiment:
  - The client joins 100 segments into the stream
  - 4 video layers
- Results with enough resources (8 Mbps)

[Figure: received video layers over time at 8 Mbps bandwidth]

Slide 58: Pull-Patching

- Experiment:
  - The client joins 100 segments into the stream
  - 4 video layers
- Results with enough resources and with bandwidth limitations

[Figure: received video layers over time at 8, 4, and 2 Mbps bandwidth]

Slide 59: Pull-Patching

- Experiment:
  - The client joins 100 segments into the stream
  - 4 video layers
- Results with enough resources, with bandwidth limitations, with loss (and with delay and jitter)
- Pull-patching works:
  - Client-side decisions
  - Overcomes traditional patching limitations
  - Dynamic adaptation
  - Loss and delay handling
  - Patch server scalability

[Figure: received video layers over time at 8 Mbps with 0%, 1%, and 3% loss]

Slide 60: HMSM [Eager, Vernon, Zahorjan 2001]

- Hierarchical Multicast Stream Merging
- Key ideas:
  - Each data transmission uses multicast
  - Clients accumulate data faster than their playout rate (multiple streams, accelerated streams)
  - Clients are merged into large multicast groups
  - Merged clients continue to listen to the same stream to the end
- Combines:
  - Dynamic skyscraper
  - Piggybacking
  - Patching

Slide 61: HMSM

- Always join the closest neighbour (see the sketch below)
- HMSM(n, 1):
  - Clients can receive up to n streams in parallel
- HMSM(n, e):
  - Clients can receive up to n full-bandwidth streams in parallel, but streams are delivered at speeds of e, where e « 1
- Basically, HMSM(n, 1) is another recursive application of patching
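A sketch of the closest-neighbour rule for HMSM(2,1) (an illustrative model, not the paper's algorithm): a client receiving its own stream plus the stream of the closest earlier client accumulates 2 movie-seconds per second while consuming 1, so it closes a gap of g seconds after g seconds:

```python
def merge_events(arrivals):
    """Pair each arrival with its closest earlier neighbour and report when
    the pair would merge under HMSM(2,1)-style catch-up (gap seconds later).
    Hierarchical re-merging after that point is not modelled here."""
    events, streams = [], []
    for t in sorted(arrivals):
        if streams:
            gap = t - streams[-1]                     # closest earlier neighbour
            events.append((t + gap, t, streams[-1]))  # (merge time, who, into)
        streams.append(t)
    return events

print(merge_events([0, 10, 14]))
# [(20, 10, 0), (18, 14, 10)]: the client arriving at 14 reaches its
# neighbour first, and the merged group then chases the stream from 0
```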

Slide 62: HMSM(2,1)

- HMSM(2,1): at most 2 streams, each at 1x playout speed

[Figure: position in movie (offset) over time; at the request arrival the patch size is determined, and the client's cyclic buffer fills from the earlier stream during playout]

Slide 63: HMSM(2,1)

- HMSM(2,1): at most 2 streams, each at 1x playout speed

[Figure: the same scenario; data the client cannot receive because n = 2 leads to an extended patch and an extended buffer, merging with the closest neighbour first]

Slide 64: HMSM(2,1)

- HMSM(2,1): at most 2 streams, each at 1x playout speed

[Figure: closest neighbour first; the patch is extended when the neighbour itself merges with an earlier stream]

Slide 65: Client-Side Caching Evaluation

- Techniques:
  - Video segmentation
  - Parallel reception of streams
  - Client buffering
- Advantages:
  - Achieves server resource reduction
  - Achieves true VoD behaviour
- Problems:
  - The optimum cannot be achieved in the average case
  - Needs to be combined with a prescheduled technique for high-popularity titles
  - May require large client buffers
  - Incapable (or not proven capable) of supporting user interactivity
  - Guaranteed bandwidth required

Slide 66: Overall Evaluation

- Advantage:
  - Achieves server resource reduction
- Problems:
  - May require large client buffers
  - Incapable (or not proven capable) of supporting user interactivity
  - Guaranteed bandwidth required
- Fixes:
  - Introduce loss-resistant codecs and partial retransmission
  - Introduce proxies to handle buffering
  - Choose computationally simple variations

Slide 67: Zipf Distribution: the typical way of modeling access probability

Slide 68: Zipf distribution and features

- Popularity:
  - Estimate the popularity of movies (or any kind of product)
  - Frequently used: the Zipf distribution (see the sketch below)
- Danger: the Zipf distribution of a process
  - can only be applied while popularity doesn't change,
  - is only an observed property, and
  - a subset of a Zipf-distributed dataset is no longer Zipf-distributed

[Figure: access probability over objects sorted by popularity]
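For reference, the model behind the curve: object i of N gets access probability proportional to 1/i^s, normalized by the generalized harmonic number. The skew s = 1 and the catalogue size below are assumed for illustration:

```python
def zipf_probabilities(n, s=1.0):
    """p(i) = (1/i**s) / H(n, s) for objects ranked 1..n by popularity."""
    weights = [1.0 / i ** s for i in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

p = zipf_probabilities(1000)
print(round(p[0], 3), round(sum(p[:10]), 3))
# 0.134 0.391: the top 10 of 1000 titles draw ~39% of all requests,
# which is what makes stream-sharing schemes pay off for popular titles
```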

Slide 69: Optimized delivery scheduling

- The optimum depends on popularity:
  - Estimate the popularity of movies
  - Frequently used: the Zipf distribution

Slide 70: Optimized delivery scheduling

- Problem: being Zipf-distributed is only an observed property

[Figure: probability vs. frequency]

Slide 71: Optimized delivery scheduling

- Density function of the Zipf distribution, compared to real-world data

[Figure: Zipf density function plotted against real-world access data]

Slide 72: Optimized delivery scheduling

- Conclusion:
  - Major optimizations are possible
  - Independent optimizations don't work
- Problems of centralized systems:
  - Scalability is limited
  - Distance imposes a minimum latency
  - Single point of failure
- Look at distributed systems:
  - Clusters
  - Distribution architectures

Slide 73: Some References

1. Thomas D. C. Little and Dinesh Venkatesh: "Prospects for Interactive Video-on-Demand", IEEE Multimedia 1(3), 1994, pp. 14-24
2. Asit Dan, Dinkar Sitaram and Perwez Shahabuddin: "Scheduling Policies for an On-Demand Video Server with Batching", IBM TR RC 19381, 1993
3. Rajesh Krishnan, Dinesh Venkatesh and Thomas D. C. Little: "A Failure and Overload Tolerance Mechanism for Continuous Media Servers", ACM Multimedia Conference, November 1997, pp. 131-142
4. Leana Golubchik, John C. S. Lui and Richard R. Muntz: "Adaptive Piggybacking: A Novel Technique for Data Sharing in Video-on-Demand Storage Servers", Multimedia Systems Journal 4(3), 1996, pp. 140-155
5. Charu Aggarwal, Joel Wolf and Philip S. Yu: "On Optimal Piggyback Merging Policies for Video-on-Demand Systems", ACM SIGMETRICS Conference, Philadelphia, USA, 1996, pp. 200-209
6. Kevin Almeroth and Mustafa Ammar: "On the Use of Multicast Delivery to Provide a Scalable and Interactive Video-on-Demand Service", IEEE JSAC 14(6), 1996, pp. 1110-1122
7. S. Viswanathan and T. Imielinski: "Metropolitan Area Video-on-Demand Service using Pyramid Broadcasting", Multimedia Systems Journal 4(4), 1996, pp. 197-208
8. Kien A. Hua and Simon Sheu: "Skyscraper Broadcasting: A New Broadcasting Scheme for Metropolitan Video-on-Demand Systems", ACM SIGCOMM Conference, Cannes, France, 1997, pp. 89-100
9. L. Juhn and L. Tseng: "Harmonic Broadcasting for Video-on-Demand Service", IEEE Transactions on Broadcasting 43(3), 1997, pp. 268-271
10. Carsten Griwodz, Michael Liepert, Michael Zink and Ralf Steinmetz: "Tune to Lambda Patching", ACM Performance Evaluation Review 27(4), 2000, pp. 202-206
11. Kien A. Hua, Ying Cai and Simon Sheu: "Patching: A Multicast Technique for True Video-on-Demand Services", ACM Multimedia Conference, Bristol, UK, 1998, pp. 191-200
12. Derek Eager, Mary Vernon and John Zahorjan: "Minimizing Bandwidth Requirements for On-Demand Data Delivery", Multimedia Information Systems Conference, 1999
13. Jehan-Francois Paris: http://www.cs.uh.edu/~paris/

