PROP: A Scalable and Reliable P2P Assisted Proxy Streaming System
Lei Guo, Songqing Chen, and Xiaodong Zhang
Computer Science Department, College of William and Mary
Media Streaming in the Internet
Rapidly growing applications
–Scientific data retrieval and processing
–Commercial applications
–Education and professional training
–Entertainment
Challenges
–Large size of media objects
–Real-time requirement of media content delivery
Existing Systems
Content Delivery Network (CDN)
–Effective but very expensive
–Needs dedicated hardware and administration
Proxy-based media caching
–Cost-effective but not scalable
–Limited storage and bandwidth; single point of failure
Client-based P2P collaboration
–Scalable and cost-effective, but no QoS guarantee
–Non-dedicated service
–Peers come and go frequently
PROP: Design Rationale and Objectives
Integrate proxy caching and P2P collaboration techniques
Coordinate the proxy and its P2P clients
The functions of the proxy and the clients are complementary
–The media proxy works as a backup server, providing a reliable and dedicated service
–Clients self-organize into a P2P system, providing a scalable and cost-effective service
Goal: build a scalable and reliable streaming proxy system for VoD in a cost-effective manner
Outline
Introduction
System architecture
Resource management
Performance evaluation
Conclusion
Infrastructure Overview
[figure: a media server on the Internet; behind the firewall, the media proxy and an intranet P2P overlay of clients, organized as a Content Addressable Network (CAN) DHT]
System Components
Streaming proxy
–Interface between the system and media servers
–Bootstrap site of the system
P2P overlay of users, in which each peer is
–A client
–A streaming server
–An index server and router
Media Proxy
–Bootstrap the system
–Fetch media data from the media server
–Cache media objects by segment
–Serve media data to clients
[figure: a new client joins; the proxy sits between the media server on the Internet and clients A and B]
Peer as a Client
–Receive data
–Playback
–Cache data locally
Peer as a Streaming Server
–Receive requests
–Stream media data from the local cache to other clients
[figure: a peer streams from its local cache to clients A and B]
Peer as an Index Server/Router
Each segment index entry, keyed by segment ID, holds the segment's meta data and pointers to its serving peers (including the proxy)
DHT routing: on a lookup, a peer checks whether the segment ID falls in its own key space; if yes, it answers with the index entry; if not, it forwards the request via its DHT routing table
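The routing step above can be sketched as follows. This is a minimal illustration only: a simple linear key space stands in for the real overlay (PROP uses CAN, a d-dimensional Content Addressable Network), and all class and field names are assumptions.

```python
import zlib

# Minimal sketch of the per-peer index lookup described above. A linear
# key space replaces CAN's d-dimensional routing, so this is illustrative.

class Peer:
    def __init__(self, low, high):
        self.low, self.high = low, high   # this peer's key space [low, high)
        self.neighbors = []               # DHT routing-table entries
        self.index = {}                   # segment_id -> (meta data, serving peers)

    def owns(self, key):
        return self.low <= key < self.high

    def lookup(self, segment_id):
        # hash the segment ID into the key space
        key = zlib.crc32(segment_id.encode()) % 2**16
        return self._route(key, segment_id)

    def _route(self, key, segment_id):
        if self.owns(key):                # "Is the segment ID in my key space?"
            return self.index.get(segment_id)
        # no: forward toward the neighbor whose key space is closest to the key
        nxt = min(self.neighbors,
                  key=lambda p: min(abs(key - p.low), abs(key - p.high)))
        return nxt._route(key, segment_id)
```

Whichever peer a client asks, the request is routed hop by hop until it reaches the peer owning the key, which answers with the index entry (or nothing, if no peer caches the segment).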
Basic Operations
Publishing and unpublishing media segments
–publish(segment_id, location)
–unpublish(segment_id, location)
Requesting and serving media segments
–request(segment_id, URL)
Getting and updating segment meta data
–update_info(segment_id, data)
–get_info(segment_id)
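The five operations above can be sketched against a plain dictionary standing in for the distributed index. This is a hypothetical illustration of the interface, not the paper's implementation; all names are assumptions.

```python
# Illustrative sketch of the five PROP index operations, with a local
# dictionary standing in for the DHT. Names and structure are assumptions.

class SegmentIndex:
    def __init__(self):
        self.entries = {}   # segment_id -> {"peers": set, "meta": dict}

    def _entry(self, segment_id):
        return self.entries.setdefault(segment_id, {"peers": set(), "meta": {}})

    def publish(self, segment_id, location):
        self._entry(segment_id)["peers"].add(location)

    def unpublish(self, segment_id, location):
        self._entry(segment_id)["peers"].discard(location)

    def request(self, segment_id, url):
        # return any peer currently serving the segment; in the real system
        # the proxy is asked to fetch from `url` (the origin server) on a miss
        peers = self._entry(segment_id)["peers"]
        return next(iter(peers), None)

    def update_info(self, segment_id, data):
        self._entry(segment_id)["meta"].update(data)

    def get_info(self, segment_id):
        return self._entry(segment_id)["meta"]
```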
Peer Serves Streaming
[figure: a client's request is routed through the DHT overlay to the index node; a serving peer is found ("ready? yes!") and the media is streamed point-to-point between peers, without involving the proxy or the media server]
Proxy Fetches Data
[figure: the overlay lookup returns NULL (no peer caches the segment), so the proxy is asked to fetch the data from the media server; the proxy then publishes the segment in the index and streams it to the client point-to-point]
Outline
Introduction
System architecture
Resource management
Performance evaluation
Conclusion
Streaming Server Selection
The index maintains a list of serving peers, including the proxy
The proxy works as a backup server and takes over media streaming when necessary
The peer with the largest available serving capacity (e.g., 1 Mbps rather than 100 Kbps) is selected as the serving peer
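A hypothetical sketch of this selection rule, preferring regular peers and falling back to the proxy only when no peer can serve (the slide presents the proxy as a backup). The `Server` tuple and the capacity figures are illustrative assumptions.

```python
from collections import namedtuple

# Illustrative serving-peer selection: pick the candidate with the largest
# available capacity; the proxy is only used as the backup of last resort.

Server = namedtuple("Server", ["name", "capacity_kbps", "is_proxy"])

def select_server(candidates):
    peers = [s for s in candidates if not s.is_proxy]
    pool = peers if peers else [s for s in candidates if s.is_proxy]
    return max(pool, key=lambda s: s.capacity_kbps) if pool else None

servers = [
    Server("proxy", 10_000, True),
    Server("peer-a", 100, False),    # the 100 Kbps peer from the slide
    Server("peer-b", 1_000, False),  # the 1 Mbps peer from the slide
]
print(select_server(servers).name)   # -> peer-b
```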
Data Management
Should each client maintain its cache separately (e.g., with LRU)?
–Not efficient: popular media data would have too many replicas
–Not effective: a single streaming session can flush all cached data (one hour of streaming video may consume more than 100 MB of cache)
–Even worse in some cases: many media objects are accessed only once (people are not interested in viewing the same movie repeatedly)
Instead, exploit the locality of all clients collectively
–Global cache: each peer manages its cached data based on the accesses of all clients, not just its own
–Keep both popular and unpopular media data
–Consider both the popularity and the replica count of media objects
Popularity of Media Segments
Segment meta data, kept in the segment index along with the pointers to serving peers:
–T0: first access time
–Tr: most recent access time
–Ssum: total accessed bytes
–S0: segment size
–n: number of requests
–r: number of copies
A segment's popularity combines its average access rate in the past with the probability of future access
–If the average access interval in the past is small compared with the time since the last access, the probability of future access is small
–The popularity of media objects fades over time
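One plausible formalization of the popularity described above, using the meta data fields on the slide. This is a sketch consistent with the slide's intuition, not necessarily the paper's exact formula.

```python
# A plausible segment-popularity function built from the slide's meta data
# (NOT necessarily the paper's exact formula). T is the current time;
# assumes T > T0 and n >= 1. All times are in seconds.

def popularity(T, T0, Tr, S_sum, S0, n):
    completeness = S_sum / (S0 * n)      # average fraction of the segment accessed
    past_rate = n / (T - T0)             # average access rate in the past
    avg_interval = (Tr - T0) / n         # average access interval in the past
    # probability of future access: small when the average past interval is
    # short relative to the time already elapsed since the last access
    future_prob = min(1.0, avg_interval / (T - Tr)) if T > Tr else 1.0
    return completeness * past_rate * future_prob
```

As `T` grows with no new accesses, both `past_rate` and `future_prob` decay, so popularity fades over time, matching the slide's observation.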
Utility of Cached Media Segments
Media data are cached as media accesses progress, so each segment may have multiple copies cached in the system
The popularity of media objects/segments
–follows a heavy-tailed distribution
–varies over time
Segments with small popularity but many replicas waste cache space; segments with large popularity warrant many replicas
Define a segment utility function over popularity and replica count; the segment with the smallest utility should be evicted
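An illustrative utility function in this spirit: utility grows with popularity and shrinks with the replica count, so unpopular, heavily replicated segments are evicted first. The exact form (`popularity**alpha / r**beta`) is an assumption, not the paper's formula.

```python
# Illustrative segment-utility function and eviction choice. The form
# popularity**alpha / replicas**beta is an assumption for the sketch.

def utility(popularity, replicas, alpha=1.0, beta=1.0):
    return (popularity ** alpha) / (replicas ** beta)

def pick_victim(segments):
    """segments: segment_id -> (popularity, replicas); return the eviction victim."""
    return min(segments, key=lambda s: utility(*segments[s]))

cache = {
    "hot-few":   (0.9, 2),   # popular, few copies: highest utility, keep
    "cold-many": (0.1, 8),   # unpopular, many copies: lowest utility, evict
    "hot-many":  (0.9, 8),
}
print(pick_victim(cache))    # -> cold-many
```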
Distribution of Segment Replicas
Resilience to peer failure
–Segments of an object should be distributed across multiple peers instead of concentrated on a single peer
Balancing the media-serving load
–Popular segments should have more copies in the system
Cache replacement operations are used to achieve such a distribution
[figure: segments 0-5 of a media object spread over the proxy and several peers, with the popular early segments replicated more than the unpopular late ones]
Cache Replacement
Proxy cache replacement
–popularity-based: segments with the smallest popularity are replaced
–performs better than LRU
Peer cache replacement
–utility-based: segments with the smallest utility value are replaced
[figure: replacement redistributes the segments of two objects across the proxy and the peers]
Fault Tolerance
Graceful degradation when the proxy fails
–The DHT still works (no single point of failure)
–Clients fetch data from the media server directly
Peer failure
–Each peer replicates its neighbors' DHT state and the index of their cached data
–When a peer and all of its neighbors fail (small probability), the peer that detects this situation initiates a broadcast so that all peers republish their cached content and validate the serving-peer lists in the index
Outline
Introduction
System architecture
Resource management
Performance evaluation
Conclusion
Performance Evaluation
Metrics
–Streaming jitter byte ratio
–Delayed startup ratio
–Byte hit ratio
Systems compared in simulation
–Proxy caching system
–Pure P2P system without proxy
–Our system (PROP)
Workload Summary
REAL: trace from HP Corporate Media Solutions
Synthetic workloads: WEB and PART
–Media object popularity follows a Zipf distribution
–Request arrivals follow a Poisson distribution

         # of req   # of obj   # of peers   size (GB)   length (min)   duration
  REAL       9000        403          800          20          6-131    10 days
  WEB       15188        400          800          51          2-120      1 day
  PART      15188        400          800          51          2-120      1 day
Simulation Results
–Overall performance
–Proxy load changes
–Replacement policies
–Routing hops
Streaming Jitter Byte Ratio [figure]
Delayed Startup Ratio [figure]
Byte Hit Ratio [figure]
Simulation Results
–Overall performance
–Proxy load changes
–Replacement policies
–Routing hops
Proxy Load [figure]
Simulation Results
–Overall performance
–Proxy load changes
–Replacement policies
–Routing hops
Streaming Jitter Byte Ratio [figure]
Delayed Startup Ratio [figure]
Byte Hit Ratio [figure]
Simulation Results
–Overall performance
–Proxy load changes
–Replacement policies
–Routing hops
Routing Hops [figure]
Outline
Introduction
System architecture
Resource management
Performance evaluation
Conclusion
Conclusion
Proposed a scalable and reliable P2P media streaming system
Addresses the limitations of
–the poor scalability of proxy caching systems
–the unreliable QoS of pure P2P systems
Proposed global replacement policies for
–the proxy, to maximize its cache utilization
–the peers, to optimize data distribution across the system
Thank you!