Slide 1: Chunkyspread: Multi-tree Unstructured Peer-to-Peer Multicast
Vidhyashankar Venkataraman (Vidhya), Paul Francis (Cornell University), John Calandrino (University of North Carolina)
Slide 2: Introduction
- Increased interest in P2P live streaming in the recent past
- Existing multicast approaches:
  - Swarming style (tree-less)
  - Tree-based
Slide 3: Swarming-based protocols
- Data-driven, tree-less multicast (swarming) is gaining popularity
- Neighbors send notifications about data arrivals; nodes pull data from neighbors
- E.g. CoolStreaming, Chainsaw
- Simple, unstructured
- Latency-overhead tradeoff
- Not yet known whether these protocols can exert good control over heterogeneity (upload volume)
Slide 4: Tree-based solutions
- Low latency and low overhead
- Tree construction/repair considered complex, e.g. SplitStream (DHT, pushdown, anycast) [SRao]
- Tree repair takes time and requires buffering, resulting in delays
- Contribution: a multi-tree protocol that
  - is simple and unstructured
  - gives fine-grained control over load
  - has low latency and low overhead, and is robust to failures
[SRao] Sanjay Rao et al. The Impact of Heterogeneous Bandwidth Constraints on DHT-Based Multicast Protocols. IPTPS, February 2005.
Slide 5: Chunkyspread – Basic Idea
- Build a heterogeneity-aware, unstructured neighbor graph
- Tree building:
  - Sliced data stream: one tree per slice (as in SplitStream)
  - Simple and fast loop avoidance and detection
  - Parent/child relationships locally negotiated to optimize the criteria of interest: load, latency, tit-for-tat, node-disjointness, etc.
Slide 6: Heterogeneity-aware neighbor graph
- Neighbor graph built with simple random walks, using "Swaplinks", developed at Cornell [SWAP]
- Degree of a node in the graph is proportional to its desired transmit load
- This is the notion of heterogeneity-awareness: higher-capacity nodes get more children in the multicast trees (see the sketch below)
[SWAP] V. Vishnumurthy and P. Francis. On Heterogeneous Overlay Construction and Random Node Selection in Unstructured P2P Networks. INFOCOM, Barcelona, 2006.
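To make the capacity-proportional-degree idea concrete, here is a minimal sketch that biases neighbor selection by desired transmit load. It is not the Swaplinks random-walk algorithm (which achieves this with weighted walks over the existing overlay); the function name, the unit-load replication trick, and the `links_per_unit_load` parameter are illustrative assumptions.

```python
import random

def build_neighbor_graph(target_loads, links_per_unit_load=1):
    """Hypothetical stand-in for a Swaplinks-style graph builder.

    target_loads: {node_id: desired transmit load}. Each node ends up with a
    degree roughly proportional to its desired load, so high-capacity nodes
    are more likely to be picked as parents later on.
    """
    # Candidate pool with one entry per unit of desired load, so a uniform
    # random pick is biased toward high-capacity nodes.
    pool = [n for n, load in target_loads.items() for _ in range(load)]
    graph = {n: set() for n in target_loads}
    for node, load in target_loads.items():
        wanted = load * links_per_unit_load
        candidates = [p for p in pool if p != node]
        while len(graph[node]) < wanted and candidates:
            peer = random.choice(candidates)
            graph[node].add(peer)
            graph[peer].add(node)
            candidates = [p for p in candidates if p not in graph[node]]
    return graph
```

In the real system the graph is maintained incrementally as nodes join and leave; this one-shot construction is only meant to show the degree-to-load proportionality.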
Slide 7: Sliced Data Stream
[Diagram: the source splits the stream into slices (Slice 1, Slice 2, Slice 3) and hands each slice to a different node; that node acts as the slice source for its slice and multicasts it down one tree.]
- Source sends one slice to each chosen node, which acts as the slice source of one tree (a slicing sketch follows)
- Source selects these nodes at random among its neighbors using Swaplinks
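The slides do not spell out how packets are assigned to slices; a common choice, and the assumption in this sketch, is simple round-robin striping by sequence number.

```python
def split_into_slices(packets, num_slices=16):
    """Round-robin striping of stream packets into slices, one tree per slice.

    Illustrative assumption only; Chunkyspread just requires the stream to be
    split into some fixed number of slices (16 in the evaluation).
    """
    slices = [[] for _ in range(num_slices)]
    for seq, pkt in enumerate(packets):
        slices[seq % num_slices].append(pkt)
    return slices
```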
Slide 8: Building trees
- Initialized by flooding a control message
- Pick parents subject to capacity (load) constraints
- Produces loop-free but poor-quality trees
- Subsequently fine-tune the trees according to the criteria of interest (a flood-based construction sketch follows)
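A minimal sketch of the initial, flood-based construction for a single slice, assuming each node adopts as parent the first neighbor whose control message reaches it and that still has spare capacity; the actual message contents and tie-breaking rules are not specified on the slide.

```python
from collections import deque

def build_initial_tree(neighbor_graph, slice_source, max_load):
    """neighbor_graph: {node: set of neighbors}; max_load: {node: child limit}.

    BFS-style flood from the slice source: the first capacity-respecting
    offer a node hears becomes its parent, so the result is loop-free but
    generally far from optimal (it gets fine-tuned afterwards).
    """
    parent = {slice_source: None}
    children = {n: [] for n in neighbor_graph}
    queue = deque([slice_source])
    while queue:
        node = queue.popleft()
        for nbr in neighbor_graph[node]:
            if nbr not in parent and len(children[node]) < max_load[node]:
                parent[nbr] = node
                children[node].append(nbr)
                queue.append(nbr)
    return parent
```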
Slide 9: Simple and fast loop avoidance/detection
- Technique proposed by Whitaker and Wetherall [ICARUS]
- All data packets carry a Bloom filter; each node adds its mask to the filter
- Small probability of false positives
- Avoidance: advertise per-slice Bloom filters to neighbors
- Detection: the first packet that traverses a loop reveals it (sketch below)
[ICARUS] A. Whitaker and D. Wetherall. Forwarding without Loops in Icarus. OPENARCH, 2002.
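A small sketch of the per-packet Bloom-filter mechanism. The filter size, the number of bits per node mask, and the hash construction below are illustrative assumptions; only the OR-in-your-mask-and-test idea comes from the slide.

```python
import hashlib

FILTER_BITS = 128   # assumed filter size
MASK_BITS = 4       # assumed number of bits set per node mask

def node_mask(node_id: str) -> int:
    """Fixed bit mask derived from the node id."""
    mask = 0
    for i in range(MASK_BITS):
        h = hashlib.sha1(f"{node_id}:{i}".encode()).digest()
        mask |= 1 << (int.from_bytes(h[:4], "big") % FILTER_BITS)
    return mask

def forward(packet_filter: int, node_id: str):
    """OR this node's mask into the packet's filter before forwarding.

    If the mask is already fully present, the packet has (probably) looped
    back to this node; false positives are possible but rare.
    """
    mask = node_mask(node_id)
    loop_suspected = (packet_filter & mask) == mask
    return packet_filter | mask, loop_suspected
```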
Slide 10: Parent/child selection based on load & latency
[Diagram: a load axis from 0 up to ML (Maximum Load), with TL (Target Load) marked and a band from TL-δ to TL+δ around it. Below TL-δ (lower load, fewer children) a node adds children; above TL+δ (higher load, more children) it sheds children; inside the band it switches children to improve latency; the region above ML is never entered. A decision sketch follows.]
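The per-slice decision rule implied by the load diagram, written out as a small helper; the function name and string return values are illustrative.

```python
def load_action(load, target_load, max_load, delta):
    """Decide how a node adjusts its children for a slice, given its current load."""
    assert load <= max_load, "loads above ML must never occur"
    if load < target_load - delta:
        return "add children"        # below TL - delta: take on more children
    if load > target_load + delta:
        return "shed children"       # above TL + delta: give up children
    return "improve latency"         # within the band: swap children to cut latency
```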
Slide 11: Parent/Child Switch
[Diagram: an overloaded parent for slice k (load above its satisfactory threshold), one of its children, and two potential parents A and B whose loads are below the satisfactory threshold.]
1) The child sends the parent information about potential parents A and B
2) The parent gathers such information from all of its children
3) The parent chooses A and asks the child to switch
4) The child requests A as its new parent
5) A accepts if it is still underloaded
(A sketch of the parent's choice step follows.)
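A rough sketch of step 3 from the overloaded parent's point of view, picking which child to move and to which candidate. The report format and the least-loaded-candidate rule are assumptions; the slide only says the parent chooses among the candidates its children report.

```python
def plan_switch(children_reports, satisfactory_threshold):
    """children_reports: {child_id: [(candidate_parent_id, candidate_load), ...]}.

    Returns which child to move and which candidate parent to move it to,
    or None if no reported candidate is underloaded.
    """
    best = None  # (candidate_load, child_id, candidate_id)
    for child, candidates in children_reports.items():
        for cand, cand_load in candidates:
            if cand_load < satisfactory_threshold:
                if best is None or cand_load < best[0]:
                    best = (cand_load, child, cand)
    if best is None:
        return None
    _, child, cand = best
    return {"ask_child_to_switch": child, "new_parent": cand}
```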
Slide 12: Chunkyspread Evaluation
- Discrete event-based simulator implemented in C++
- Run over transit-stub topologies with 5K routers
- Heterogeneity: stream split into 16 slices; TL uniformly distributed between 4 and 28 slices; ML = 1.5 TL (enough capacity in the network)
- Two cases:
  - No latency improvement (δ = 0)
  - With latency improvement: δ = (2/16) TL, i.e. 12.5% of TL
[Diagram: the load axis again, with 0, TL-δ, TL+δ, and ML = 1.5 TL marked.]
Slide 13: Control over load
- Flash crowd scenario: 2.5K nodes join a 7.5K-node network at 100 joins/sec
- Nodes stay within 20% of TL even with latency reduction (δ = 12.5%)
- Peak of 40 control messages per node per second with latency reduction; median of 10 messages per node per second during the join period
- Trees are optimized ~95 s after all nodes have joined, with latency reduction
[Plot: distribution of relative load deviation, (Load-TL)/TL in %, for a snapshot of the system after nodes have finished fine-tuning the trees, with latency reduction.]
Slide 14: Latency Improvement
[Plot: latency results for the flash crowd scenario, with and without latency improvement.]
- 90th percentile network stretch of 9, which translates to a small buffer
- The maximum latency determines the buffer capacity needed in the absence of node failures
Slide 15: Burst Failure Recovery Time
- Failure burst: 1K nodes fail simultaneously in a 10K-node network
- Recovery time is within a few seconds
- Buffering dominates over the effects of latency
- Neighbor failure timeout set at 4 seconds
- FEC codes improve recovery time (see the redundancy sketch below)
[Plot: CDF of disconnect duration with latency reduction, shown for several redundancy levels (0, 1, and 3 redundant slices).]
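For intuition only, here is the simplest possible form of slice-level redundancy: one XOR parity slice that lets a receiver reconstruct any single missing slice. The evaluation uses FEC with 0, 1, or 3 redundant slices, but the specific code is not given on the slide, so treat this as an illustrative assumption.

```python
def make_parity_slice(slices):
    """slices: equal-length byte strings, one per data slice. Returns the XOR parity slice."""
    parity = bytearray(len(slices[0]))
    for s in slices:
        for i, b in enumerate(s):
            parity[i] ^= b
    return bytes(parity)

def recover_missing_slice(received_slices, parity):
    """Reconstruct the single missing data slice from the surviving slices plus parity."""
    missing = bytearray(parity)
    for s in received_slices:
        for i, b in enumerate(s):
            missing[i] ^= b
    return bytes(missing)
```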
Slide 16: Conclusion
- Chunkyspread is a simple multi-tree multicast protocol
- A design alternative to swarming-style protocols
- Achieves fine-grained control over load with good latencies
- Suited to non-interactive live streaming applications
- Future work: apples-to-apples comparisons with swarming protocols and SplitStream