SCOPE: Scalable Consistency in Structured P2P Systems


1 SCOPE: Scalable Consistency in Structured P2P Systems
Xin Chen1, Shansi Ren, Haining Wang, and Xiaodong Zhang
College of William and Mary
1. With AskJeeves, Piscataway, NJ

2 Overview: P2P Systems
P2P traffic: 50-70% of traffic in consumer ISPs; 95% of upstream traffic (MediaMetrix)
P2P users: 75% of broadband users (Jupiter Media); 6 million simultaneous users
P2P data size: 10 petabytes (10,000,000 GB)

3 Consistency in P2P Applications
Consistency: provide the most updated object to any peer.
No Consistency: file sharing, real-time media streaming
Weak Consistency: P2P-based Web caching, DNS systems
Partial Strong Consistency: wide-area file systems
Strong Consistency: publish/subscribe systems, directory services, online auctions (not well supported by existing systems)

4 Problems of Existing Solutions
Reliability: recoverable from node failures, e.g., eliminating single points of failure.
Scalability: maintaining an increasingly large system size.
Maintainability: low cost and low overhead.
[Figure: existing approaches plotted by reliability vs. scalability.]
Graph: high reliability, low scalability, high maintenance
Path-record: decent reliability, decent scalability, high maintenance
Centralized: low reliability, low scalability, low maintenance
Time-to-Live: low reliability, high scalability, low maintenance
SCOPE: highly reliable, highly scalable, low cost

5 Our Objectives
High Scalability: low overhead as the system scales to a large size.
High Reliability: recoverable from node failures.
Easy Management: low maintenance costs.
General Solution: deployable on all structured P2P systems.

6 Outline
Background
SCOPE: how SCOPE records the replica locations; how SCOPE operates efficiently
Performance
Conclusion

7 SCOPE: Scalable Consistency in Structured P2P Systems
Design targets:
High Scalability: distributed consistency maintenance among all nodes; low maintenance overhead.
High Reliability: easy to recover from frequent node failures; able to finish consistency operations despite node failures.
Design approach:
Partition the whole ID space into partitions.
Select a representative in every partition.
Construct a tree to record the replica locations.

8 SCOPE: ID Space Partitioning
[Figure: the 3-bit ID space [000,111] is split into Partition 0: [000,011] and Partition 1: [100,111]; each partition is recursively split the same way, so IDs in a partition share a common prefix and differ only in the remaining bits.]
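The split can be computed directly from an ID's bits. Below is a minimal sketch (not the paper's code), assuming binary partitioning as in this 3-bit example; the function and variable names are illustrative:

```python
# Minimal sketch of SCOPE-style recursive ID-space partitioning,
# assuming binary splits of an m-bit space as in the 3-bit example.

M_BITS = 3  # 3-bit ID space: [000, 111]

def partition_range(node_id: int, level: int, m_bits: int = M_BITS):
    """Return (start, end) of the partition holding node_id at a level.

    Level 0 is the whole space; each deeper level fixes one more
    leading bit of the ID.
    """
    width = m_bits - level               # free suffix bits at this level
    start = (node_id >> width) << width  # zero out the suffix bits
    end = start + (1 << width) - 1
    return start, end

# At level 1 the 3-bit space splits into [000,011] and [100,111]:
assert partition_range(0b010, 1) == (0b000, 0b011)  # Partition 0
assert partition_range(0b101, 1) == (0b100, 0b111)  # Partition 1
```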

9 Key Mapping
[Figure: key mapping in the 3-bit ID space [000,111]. A key (e.g., 101) is projected into each partition by combining that partition's prefix with the key's remaining bits; the node responsible for the projected ID (e.g., 001 in Partition 0, 101 in Partition 1) serves as that partition's representative, and the node responsible for the key itself is the root.]
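A sketch of this projection (binary splits and helper names are illustrative assumptions): the key's leading bits are replaced by each partition's prefix, yielding the ID whose owner acts as that partition's representative.

```python
# Sketch of SCOPE key mapping: project a key into a same-level
# partition by swapping in that partition's prefix.

M_BITS = 3

def map_key_into_partition(key: int, prefix: int, level: int,
                           m_bits: int = M_BITS) -> int:
    """Replace the top `level` bits of `key` with `prefix`."""
    width = m_bits - level              # bits kept from the key
    suffix = key & ((1 << width) - 1)   # the key's low-order bits
    return (prefix << width) | suffix

# Key 101 projected into both level-1 partitions of the 3-bit space:
assert map_key_into_partition(0b101, 0b0, 1) == 0b001  # rep in Partition 0
assert map_key_into_partition(0b101, 0b1, 1) == 0b101  # root in Partition 1
```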

10 Replica Partition Tree (RPT)
[Figure: the RPT for key 101 in the 3-bit ID space. Each level of the tree corresponds to a level of partitioning, and a Y/N flag per subpartition records whether it contains a replica.]
Every ID is represented by a bit in the RPT.
Height: O(log M).

11 RPT Optimization: Leaf Node
[Figure: the RPT for key 101 with leaf branches pruned.]
Partitioning is necessary only when #node > 1.

12 RPT Optimization: Intermediate Node
[Figure: the RPT for key 101 with single-branch intermediate levels collapsed.]
Partitioning is necessary only when #subpartition > 1.
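Putting the tree and both pruning rules together, here is a toy in-memory sketch (binary splits, illustrative names; the real RPT is distributed across the representatives rather than held in one structure):

```python
# Toy sketch of a Replica Partition Tree with both optimizations:
# a range is split only while it holds more than one node (leaf rule),
# and only populated subpartitions are expanded (intermediate rule).
from dataclasses import dataclass, field
from typing import List

M_BITS = 3

@dataclass
class RPTNode:
    start: int                                       # range [start, end]
    end: int
    bits: List[bool] = field(default_factory=list)   # Y/N per subpartition
    children: List["RPTNode"] = field(default_factory=list)

def build_rpt(replica_ids, start=0, end=(1 << M_BITS) - 1) -> RPTNode:
    node = RPTNode(start, end)
    ids = [i for i in replica_ids if start <= i <= end]
    if len(ids) <= 1 or start == end:     # leaf rule: stop when #node <= 1
        return node
    mid = (start + end) // 2
    halves = [(start, mid), (mid + 1, end)]
    node.bits = [any(s <= i <= e for i in ids) for s, e in halves]
    # intermediate rule: only subpartitions marked Y are expanded
    node.children = [build_rpt(ids, s, e)
                     for (s, e), y in zip(halves, node.bits) if y]
    return node

# Replicas at 000, 010, and 110 in the 3-bit space:
root = build_rpt([0b000, 0b010, 0b110])
print(root.bits)  # [True, True]: both level-1 partitions hold replicas
```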

13 New Operations -- Subscribe, Unsubscribe, and Update
1. Subscriber: informs its immediate upper-level representative.
2. Intermediate representative: updates the records for its lower-level representatives; informs the next upper-level representative of the subscriber.
3. Root node: updates the records for the lower-level representatives.
How to find the upper-level representatives?
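To make the flow concrete, here is a sketch that simulates the three steps with direct calls (the class and method names are illustrative, not the paper's API):

```python
# Sketch of the subscribe flow: each representative records which
# lower-level partition the request came from, then forwards the
# notification to its own upper-level representative until the root.

class Representative:
    def __init__(self, node_id, parent=None):
        self.node_id = node_id
        self.parent = parent        # next upper-level representative
        self.records = set()        # lower-level sources with subscribers

    def subscribe(self, lower_level_id):
        self.records.add(lower_level_id)      # steps 2-3: update records
        if self.parent is not None:           # step 2: inform upper level
            self.parent.subscribe(self.node_id)

# Step 1: a subscriber at 010 informs its immediate representative.
root = Representative(0b101)               # root for key 101
rep = Representative(0b001, parent=root)   # Partition 0 representative
rep.subscribe(0b010)
print(rep.records, root.records)           # {2} {1}
```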

14 Upper Level Representatives
1. Find the partition start address. Example: predecessor 111, node address 000, so the partition start is 000.
2. Find the partition end address. Example: partition start 000, successor 100, so the partition end is 011.
3. Try a smaller partition if the end address is larger than the successor.

15 Upper Level Representatives (cont.)
1. Find the partition start address. Example: partition predecessor 101, partition start address 100, so the upper partition start is 100.
2. Find the partition end address. Example: upper partition start 100, partition successor 100, so the upper partition end is 100.
3. Try a smaller partition if the end address is larger than the successor.
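A sketch of steps 1-3 follows (binary splits assumed, ring wrap-around handling elided, names illustrative): the node searches for the largest partition in which it is the only live node, which makes it that partition's representative.

```python
# Sketch of locating the largest partition in which a node is the only
# live node (and hence its representative), using its predecessor and
# successor as in steps 1-3 of the slides.

M_BITS = 3

def own_partition(node_id, predecessor, successor, m_bits=M_BITS):
    for level in range(1, m_bits + 1):        # try big partitions first
        width = m_bits - level
        start = (node_id >> width) << width   # step 1: partition start
        end = start + (1 << width) - 1        # step 2: partition end
        # The node is alone in [start, end] iff neither neighbor falls
        # inside it; otherwise try the next smaller partition (step 3).
        if not (start <= predecessor <= end) and \
           not (start <= successor <= end):
            return level, start, end
    return m_bits, node_id, node_id           # degenerate: node alone

# Slide example: node 000 with predecessor 111 and successor 100
# represents partition [000, 011] at level 1.
print(own_partition(0b000, 0b111, 0b100))     # (1, 0, 3)
```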

16 Level Index
The level index records which level of partitioning is needed to identify a node; each node maintains its own level index.
Fast operations: no contact with the predecessor and successor is needed.
Easy updates: only O(1) nodes update their indices when a node joins or leaves.
[Figure: example level indices on the 3-bit ring, e.g., [1,0,0] and [1,0,3].]
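The exact encoding of the index vectors in the figure (e.g., [1,0,3]) is not fully recoverable from this transcript; as one plausible reading, here is a sketch in which each node caches its subpartition number at every level:

```python
# Sketch of a per-node level index as a vector of subpartition numbers,
# one entry per partition level. FANOUT=2 matches the 3-bit example;
# the paper's experiments use 16 partitions per level. This encoding is
# an illustrative assumption, not necessarily the paper's.

M_BITS = 3
FANOUT = 2   # subpartitions per level

def level_index(node_id: int, m_bits: int = M_BITS, fanout: int = FANOUT):
    """Subpartition number of node_id at each level, top level first."""
    bits = (fanout - 1).bit_length()          # bits consumed per level
    return [(node_id >> (m_bits - lvl * bits)) & (fanout - 1)
            for lvl in range(1, m_bits // bits + 1)]

# With the index cached locally, a node can name its partition at any
# level without contacting its predecessor or successor.
print(level_index(0b101))   # [1, 0, 1]
```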

17 Outline
Background
SCOPE
Performance: system scalability; operation effectiveness; maintenance costs
Conclusion

18 Experimental Environment
Simulation setup:
ID space: 160-bit
Hash function: SHA-1
Partitions: 16 at each level
Routing tables: Pastry with 40 levels, each with 15 entries; 32 entries in each node's leaf set
Performance metrics:
System scalability: load distribution, RPT height
Operation effectiveness: routing path length of operations
Maintenance costs: node joining/leaving, recovery process

19 System Scalability
Most nodes hold fewer than 3 records; very few nodes hold more than 3.
Left: record distribution in a 10^4-node network. Right: average RPT height as the number of nodes grows.

20 Operation Effectiveness
Left: operation path length comparison when #subscribers = 1. Right: average path length as the number of subscribers changes, up to 200 subscribers (2% of nodes).

21 Maintenance Costs
On average, less than 1 node's level index changes per join/leave; failure recovery messages are reduced by 90%.
Left: level index update costs. Right: failure recovery costs.

22 Conclusion
SCOPE provides scalable consistency support for structured P2P systems:
Scalable structure
Effective operations
Minimal maintenance overhead
Compared with existing solutions, SCOPE achieves:
Better load balance
Better fault tolerance
Better consistency support

