Popularity-Awareness in Temporal DHT for P2P-based Media Streaming Applications
Abhishek Bhattacharya, Zhenyu Yang & Deng Pan
IEEE International Symposium on Multimedia (ISM 2011), Dana Point, California, USA, December 5-7, 2011
Outline
– Introduction
– Background
– Popularity-Aware Search
– Estimation
– Results
– Summary
Introduction: Distributed Hash Tables (DHT)
DHT is a generic interface with several implementations:
– Chord [MIT]
– Pastry [Microsoft Research UK, Rice University]
– Tapestry [UC Berkeley]
– Content Addressable Network (CAN) [UC Berkeley]
– SkipNet [Microsoft Research US, University of Washington]
– Kademlia [New York University]
– Viceroy [Israel, UC Berkeley]
– P-Grid [EPFL, Switzerland]
– Freenet [Ian Clarke]
Introduction: Chord (DHT)
[Figure: Chord identifier circle showing a key x, its successor succ(x) and predecessor pred(x), and exponentially spaced finger pointers; routing from a source takes O(log n) hops]
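The exponentially spaced pointers are Chord's finger table: node n keeps a pointer to the first node at or after n + 2^k for each k, so every hop at least halves the remaining identifier distance. A minimal sketch of the resulting greedy lookup, assuming a flat `fingers` dict in place of real per-node network state (`route` and `build_fingers` are illustrative names, not Chord's published code):

```python
# Minimal sketch of Chord-style greedy routing over a sorted ring of node ids.

M = 6                          # identifier bits; ids live in [0, 2^M)
SPACE = 2 ** M

def between(x, a, b):
    """x in the circular half-open interval (a, b]."""
    return (a < x <= b) if a < b else (x > a or x <= b)

def build_fingers(node, ring):
    """Finger k points to the first node at or after node + 2^k (mod 2^M)."""
    def successor(x):
        return min((n for n in ring if n >= x), default=ring[0])
    return [successor((node + 2 ** k) % SPACE) for k in range(M)]

def route(src, key, fingers):
    """Greedy lookup: repeatedly jump to the closest preceding finger."""
    hops, cur = 0, src
    while not between(key, cur, fingers[cur][0]):   # fingers[cur][0] = successor
        nxt = cur
        for f in reversed(fingers[cur]):            # try the largest jump first
            if between(f, cur, key):
                nxt = f
                break
        if nxt == cur:                              # no closer finger: stop
            break
        cur, hops = nxt, hops + 1
    return fingers[cur][0], hops                    # key's owner, hop count

ring = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])
fingers = {n: build_fingers(n, ring) for n in ring}
owner, hops = route(src=1, key=54, fingers=fingers)
print(owner, hops)   # -> 56 3; hop count grows as O(log n)
```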
Introduction: Video on Demand (VoD)
[Figure: video chunks c1 … c8 spread across peers p1 … p5]
Content discovery: tracking server, or decentralized indexing structures
Content distribution: overlay tree / multi-tree / mesh
Outline
– Introduction
– Background
– Popularity-Aware Search
– Estimation
– Results
– Summary
Background: DHT-based VoD System
[Figure: peer p1 caches chunks c1, c2, and c6 and publishes the index entries c1:p1, c2:p1, and c6:p1 into the DHT so other peers can discover them]
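A minimal sketch of this chunk-index layer over a DHT put/get interface; the plain `dht` dict stands in for a real DHT, and `publish_chunk` / `lookup_chunk` are hypothetical names, not the system's actual API:

```python
import hashlib

dht = {}   # stands in for a real DHT's put/get

def chunk_key(video_id, chunk_no):
    """Hash (video, chunk) to a DHT key, as a Chord-style id would be."""
    return hashlib.sha1(f"{video_id}/{chunk_no}".encode()).hexdigest()

def publish_chunk(peer_id, video_id, chunk_no):
    """Peer announces that it caches this chunk (e.g. c1 -> p1)."""
    dht.setdefault(chunk_key(video_id, chunk_no), set()).add(peer_id)

def lookup_chunk(video_id, chunk_no):
    """Return the set of peers currently indexed as holding the chunk."""
    return dht.get(chunk_key(video_id, chunk_no), set())

# p1 holds chunks c1, c2 and c6 of video "v":
for c in (1, 2, 6):
    publish_chunk("p1", "v", c)
print(lookup_chunk("v", 2))   # {'p1'}
```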
Background: Temporal-DHT
Range Query Reformulation
[Figure: peer p_i advances through chunks c_i, c_{i+1}, c_{i+2}, …, c_{i+z} along the time axis T; a lookup for a single chunk is therefore reformulated as a range query over the chunks the peer may have reached since its last index update]
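A minimal sketch of the reformulation, under the assumption (following the Temporal-DHT idea) that a peer indexed at chunk c at time t0 with playback rate r is expected near chunk c + r·(now − t0); all names and the slack parameter are illustrative:

```python
import time

def expected_chunk(indexed_chunk, index_time, rate, now=None):
    """Predict a peer's playback position from its last index update."""
    now = time.time() if now is None else now
    return indexed_chunk + rate * (now - index_time)

def range_query(indexed_chunk, index_time, rate, slack_chunks, now=None):
    """Reformulate a point lookup as a chunk range around the prediction,
    with slack covering rate variation and index staleness."""
    center = expected_chunk(indexed_chunk, index_time, rate, now)
    return (max(0, int(center - slack_chunks)), int(center + slack_chunks))

# Peer indexed at chunk 100, 20 s ago, playing 1 chunk/s, +/-4 chunks slack:
lo, hi = range_query(100, index_time=0, rate=1.0, slack_chunks=4, now=20)
print(lo, hi)   # 116 124
```

The width of this range grows with the staleness allowed by the peer's update interval, which is exactly the knob the popularity-aware scheme tunes in the next section.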
Outline
– Introduction
– Background
– Popularity-Aware Search
– Estimation
– Results
– Summary
Popularity-Aware Search
Popularity-Aware Search (uniform update intervals)
Cost of five queries, each paying log N plus a range scan:
(1) log N = 4 + Range = 4
(2) log N = 4 + Range = 4
(3) log N = 4 + Range = 4
(4) log N = 4 + Range = 4
(5) log N = 4 + Range = 4
Total range cost: 20 (excluding the common log N part)
Popularity ratio: 3 : 1 : 1
Popularity-Aware Search (popularity-adapted update intervals)
Cost of the same five queries:
(1) log N = 4 + Range = 2
(2) log N = 4 + Range = 2
(3) log N = 4 + Range = 2
(4) log N = 4 + Range = 6
(5) log N = 4 + Range = 6
Total range cost: 18 (excluding the common log N part)
Popularity ratio: 3 : 1 : 1
Updating popular content more often shrinks its query range (4 → 2) at the price of a larger range (4 → 6) for unpopular content, cutting the total range cost from 20 to 18.
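The arithmetic behind the two slides as a small sketch; the 3 : 1 : 1 query mix and the range values are the slides' example numbers, and `total_range_cost` is a hypothetical helper:

```python
def total_range_cost(queries_per_content, range_per_content):
    """Sum the range part of the search cost over all queries.
    The common log N term is identical in both schemes and omitted."""
    return sum(q * r for q, r in zip(queries_per_content, range_per_content))

queries = [3, 1, 1]                              # popularity ratio 3 : 1 : 1

uniform = total_range_cost(queries, [4, 4, 4])   # every content updated alike
aware   = total_range_cost(queries, [2, 6, 6])   # popular content updated more

print(uniform, aware)                            # 20 18
```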
Outline
– Introduction
– Background
– Popularity-Aware Search
– Estimation
– Results
– Summary
Estimation: Centralized
[Figure: a central node collects per-content statistics for contents C1 … C7 to estimate their popularities]
Estimation: Decentralized
[Figure: nodes holding contents C1 … C7 exchange estimates pairwise]
Gossip-style averaging between a pair of nodes i and j:
1. Initialize the local value x_i
2. Read the partner's local value x_j
3. Update: x_j ← x_j + γ_j (x_i − x_j)
4. Update: x_i ← x_i − γ_j (x_i − x_j)
The symmetric updates preserve the sum x_i + x_j, so repeated pairwise exchanges drive every node toward the global average without a central server.
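A minimal simulation of this gossip averaging, assuming γ = 0.5 (plain pairwise averaging); the node values and function names are illustrative:

```python
import random

def gossip_round(x, gamma=0.5):
    """One pairwise exchange: pick two random nodes and apply the
    symmetric update, which preserves the sum of all values."""
    i, j = random.sample(range(len(x)), 2)
    delta = gamma * (x[i] - x[j])
    x[j] += delta                 # x_j <- x_j + gamma (x_i - x_j)
    x[i] -= delta                 # x_i <- x_i - gamma (x_i - x_j)

x = [9.0, 1.0, 4.0, 0.0, 6.0, 2.0, 6.0]   # local counts at C1 ... C7
for _ in range(200):
    gossip_round(x)

print([round(v, 2) for v in x])           # all values near the mean 4.0
```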
Outline
– Introduction
– Background
– Popularity-Aware Search
– Estimation
– Results
– Summary
Results: Simulation
– Network setting: GT-ITM topology with 15 transit domains, each connected to 10 stub domains of 15 stub nodes each
– Data setting: 256 to 4096 peers with randomly distributed outbound/inbound bandwidths in the range of 500~1000 Kbps
– User arrival model: Poisson process with λ = 1 per second
– Peer lifetime: exponential distribution with a mean of 30 minutes
– User request pattern: 50% of requests follow a Zipf distribution with different values of α; the remaining 50% make 6~7 initial random jumps followed by continuous playback (see the sketch below)
– Compared schemes: VMesh, TDHTM, TDHTM-PA (α = 0.4), TDHTM-PA (α = 2.0)
– Performance metrics: server stress, streaming quality, messaging overhead, seek latency
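A sketch of the request pattern described above; the generator names are illustrative, since the simulator's actual code is not shown on the slide:

```python
import random

def zipf_chunk(n_chunks, alpha, rng=random):
    """Sample a chunk index with P(k) proportional to 1 / k^alpha."""
    weights = [1.0 / (k ** alpha) for k in range(1, n_chunks + 1)]
    return rng.choices(range(n_chunks), weights=weights, k=1)[0]

def request_stream(n_chunks, alpha, rng=random):
    """Yield an infinite stream of chunk requests for one simulated user."""
    if rng.random() < 0.5:                  # 50%: Zipf-driven viewer
        while True:
            yield zipf_chunk(n_chunks, alpha, rng)
    else:                                   # 50%: jumpy-then-sequential viewer
        pos = 0
        for _ in range(rng.randint(6, 7)):  # 6~7 initial random jumps
            pos = rng.randrange(n_chunks)
            yield pos
        while True:                         # continuous playback afterwards
            pos = (pos + 1) % n_chunks
            yield pos

gen = request_stream(n_chunks=1000, alpha=0.4)
print([next(gen) for _ in range(10)])
```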
Results: Experiments
[Figure slides: plots of server stress, streaming quality, messaging overhead, and seek latency for the compared schemes]
Outline
– Introduction
– Background
– Popularity-Aware Search
– Estimation
– Results
– Summary
Summary
– We incorporated the notion of popularity-awareness within the framework of a Temporal-DHT based VoD system.
– Overall performance is improved by optimizing the search cost across the content set of the entire system.
– The update interval is adapted dynamically based on the popularity of the content.
– The popularities of the various contents are computed in a decentralized manner.
– Extensive simulation results demonstrate the effectiveness of the popularity-awareness mechanism.
Please send all your questions to: