
A Self-Repairing Peer-to-Peer System Resilient to Dynamic Adversarial Churn
Fabian Kuhn, Stefan Schmid, Roger Wattenhofer
Distributed Computing Group, ETH Zurich
IPTPS 2005, Cornell University, Ithaca, New York, USA

Dynamic Peer-to-Peer Systems
Properties compared to the centralized client/server approach:
– Availability
– Reliability
– Efficiency
However, P2P systems are
– composed of unreliable desktop machines
– under control of individual users
=> Peers may join and leave the network at any time!

Churn
Churn: permanent joins and leaves
How to maintain desirable properties such as
– connectivity,
– network diameter,
– peer degree?

Road-Map
– Motivation for adversarial (worst-case) churn
– Components of our system
– Assembling the components
– Results
– Outlook and open problems

Motivation
Why permanent churn?
Saroiu et al., "A Measurement Study of P2P File Sharing Systems":
– Peers join the system for one hour on average
– Hundreds of changes per second with millions of peers in the system!
Why adversarial (worst-case) churn?
– E.g., a crawler takes down neighboring machines rather than randomly chosen peers!

The Adversary
Model worst-case faults with an adversary ADV(J, L, λ):
– ADV(J, L, λ) has complete visibility of the entire state of the system
– It may add at most J and remove at most L peers in any time period of length λ
Note: The adversary is not Byzantine!

Synchronous Model
Our system is synchronous, i.e., our algorithms run in rounds
– One round: receive messages, local computation, send messages
However: Real distributed systems are asynchronous!
But: A notion of time is necessary to bound the adversary

A First Approach
Fault-tolerant hypercube?
– But what if the number of peers is not 2^i?
– How to prevent degeneration?
– Where to store data?
Idea: Simulate the hypercube!

Simulated Hypercube System
Simulation: a node consists of several peers!
Basic components:
– Route peers to sparse areas => token distribution algorithm!
– Adapt dimension => information aggregation algorithm!

Components: Peer Distribution and Information Aggregation
Peer distribution (tackled next!)
– Goal: Distribute peers evenly among all hypercube nodes in order to balance biased adversarial churn
– Basically a token distribution problem
Counting the total number of peers (information aggregation)
– Goal: Estimate the total number of peers in the system and adapt the dimension accordingly

Peer Distribution (1)
Algorithm: Cycle over dimensions and balance!
Perfectly balanced after d steps!
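The dimension-exchange idea above can be sketched in a few lines of Python (an illustrative model with fractional tokens, not the paper's code): in step i, every node averages its load with its neighbor across dimension i, and after d sweeps every node holds the global average.

```python
# Illustrative sketch of dimension-exchange balancing on a d-dimensional
# hypercube with fractional tokens; node ids are the integers 0 .. 2^d - 1
# and the neighbor across dimension `dim` is found by flipping one bit.
def balance(load, d):
    for dim in range(d):
        load = {v: (load[v] + load[v ^ (1 << dim)]) / 2 for v in load}
    return load

d = 3
load = {v: float(v * v % 7) for v in range(2 ** d)}   # arbitrary initial loads
avg = sum(load.values()) / len(load)
load = balance(load, d)
assert all(abs(x - avg) < 1e-9 for x in load.values())  # balanced after d steps
```

Because each averaging step halves the load difference across one dimension, a full sweep over all d dimensions equalizes the whole cube.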

Peer Distribution (2)
But peers are not fractional!
And an adversary inserts at most J and removes at most L peers per step!
Theorem 1: Given adversary ADV(J, L, 1), the discrepancy never exceeds 2J + 2L + d!
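Theorem 1 can be probed with a toy simulation (a hedged sketch: integer tokens split floor/ceiling, and a random rather than worst-case adversary): the observed discrepancy should stay within the 2J + 2L + d bound.

```python
# Toy simulation of Theorem 1's setting: integer tokens, one balancing
# dimension per step, and a (random, not worst-case) adversary that adds
# at most J and removes at most L peers per step. The discrepancy is
# checked against the bound 2J + 2L + d.
import random

def step(load, dim, J, L, rng):
    load = dict(load)
    nodes = list(load)
    for _ in range(J):                      # adversarial joins
        load[rng.choice(nodes)] += 1
    for _ in range(L):                      # adversarial leaves
        v = rng.choice(nodes)
        if load[v] > 0:
            load[v] -= 1
    new = {}
    for v in load:                          # balance across dimension `dim`
        u = v ^ (1 << dim)
        s = load[v] + load[u]
        new[v] = (s + 1) // 2 if v < u else s // 2   # lower id keeps the ceiling
    return new

d, J, L = 4, 2, 2
rng = random.Random(0)
load = {v: 10 for v in range(2 ** d)}
for t in range(200):
    load = step(load, t % d, J, L, rng)
    assert max(load.values()) - min(load.values()) <= 2 * J + 2 * L + d
```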

Information Aggregation (1)
Goal: Provide the same (and good!) estimate of the total number of peers presently in the system to all nodes
– Thresholds for expansion and reduction
Means: Exploit again the recursive structure of the hypercube!

Information Aggregation (2)
Algorithm: Count the peers in every sub-cube by exchanging with the corresponding neighbor!
Correct number after d steps!
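The counting step has the same flavor as the balancing sweep (a minimal sketch, not the paper's pipelined version): each node starts with its local peer count and, in step i, adds the partial sum of its neighbor across dimension i; after d exchanges every node knows the global total.

```python
# Sketch of hypercube information aggregation: after exchanging and adding
# partial sums along one dimension per step, every node's value covers a
# sub-cube twice as large; after d steps it is the global total.
def aggregate(counts, d):
    for dim in range(d):
        counts = {v: counts[v] + counts[v ^ (1 << dim)] for v in counts}
    return counts

d = 3
counts = {v: v + 1 for v in range(2 ** d)}    # local peer counts 1..8
result = aggregate(counts, d)
assert all(x == 36 for x in result.values())  # every node learns the total
```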

Information Aggregation (3)
But again, we have a concurrent adversary!
Solution: Pipelined execution!
Theorem 2: The information aggregation algorithm yields the same estimate to all nodes. Moreover, this number represents the correct state of the system d steps ago!

Composing the Components
Our system permanently runs
– the peer distribution algorithm to balance biased churn
– the information aggregation algorithm to estimate the total number of peers and change the dimension accordingly
But: How are peers connected inside a node, and how are the edges of the hypercube represented?
And: Where is the data of the DHT stored?

Distributed Hash Table
A hash function determines the node where a data item is replicated.
Problem: A peer which has to move to another node must copy all of that node's data items.
Idea: Divide the peers of a node into core and periphery
– Core peers store data,
– Peripheral peers are used for peer distribution
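For concreteness, the hash-based mapping might look as follows (an illustrative sketch; the function name and the choice of SHA-1 are assumptions, not taken from the paper): the top d bits of a key's hash select the hypercube node whose core peers replicate the item.

```python
# Hypothetical helper: map a data key to one of the 2^d hypercube nodes by
# taking the top d bits of its 160-bit SHA-1 hash. Only that node's core
# peers would store the item; peripheral peers stay free to be moved by
# the peer distribution algorithm.
import hashlib

def responsible_node(key: str, d: int) -> int:
    h = int.from_bytes(hashlib.sha1(key.encode()).digest(), "big")
    return h >> (160 - d)   # top d bits: a node id in [0, 2^d)

node = responsible_node("song.mp3", 4)
assert 0 <= node < 16
```

Because the node id depends only on the key, any peer can locate the responsible node without coordination, and only peers entering or leaving the core need to transfer data.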

Intra- and Interconnections
Peers inside a node are completely connected.
Peers are connected to all core peers of all neighboring nodes.
– May be improved: Lower the peer degree by using a matching.

Maintenance Algorithm
The maintenance algorithm runs in phases
– Phase = 6 rounds
In phase i:
– Snapshot of the state of the system in round 1
– One exchange to estimate the number of peers in the sub-cubes (information aggregation)
– Token balancing in dimension i mod d
– Dimension change if necessary
All based on the snapshot made in round 1, ignoring the changes that have happened in between!

Results
Given an adversary ADV(d+1, d+1, λ)...
=> peer discrepancy at most 5d + 4 (Theorem 1)
=> total number of peers known with delay d (Theorem 2)
... we have, in spite of ADV(O(log n), O(log n), 1):
– always at least one core peer per node (no data lost!),
– at most O(log n) peers per node,
– network diameter O(log n).
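The first result is plain arithmetic from Theorem 1: substituting J = L = d + 1 into the discrepancy bound 2J + 2L + d gives 2(d+1) + 2(d+1) + d = 5d + 4. A one-line check:

```python
# Substituting J = L = d + 1 into Theorem 1's discrepancy bound 2J + 2L + d.
def discrepancy_bound(J, L, d):
    return 2 * J + 2 * L + d

assert all(discrepancy_bound(d + 1, d + 1, d) == 5 * d + 4 for d in range(1, 32))
```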

Summary
Dynamics is a crucial part of every P2P system, but research is only emerging.
Simulated topology: A simple blueprint for dynamic P2P systems!
– Requires algorithms for token distribution and information aggregation on the topology.
– Straightforward for skip graphs
– Also possible for pancake graphs!

Open Problems
– Asynchrony: Message complexity!
– Data: Copying and load balancing?
– Byzantine peers?
Thank you for your attention!

Questions and Feedback?

Related Work
Fiat, Saia, Gribble, Karlin, Saroiu
– "Censorship Resistant Peer-to-Peer Content Addressable Networks" (SODA 2002)
– "Dynamically Fault-Tolerant Content Addressable Networks" (IPTPS 2002)
– First papers to study worst-case failures
– (1-ε)-fractions of peers and data survive the deletion of half of all nodes
– Static model (rebuild from scratch)
Abraham, Awerbuch, Azar, Bartal, Malkhi, Pavlov
– "A Generic Scheme for Building Overlay Networks in Adversarial Scenarios" (IPDPS 2003)
– Maintaining a balanced network in the presence of concurrent faults
– Requires times of quiescence
Li, Misra, Plaxton
– "Active and Concurrent Topology Maintenance" (DISC 2004)
– Concurrent worst-case joins and leaves
– Asynchronous
– Weaker failure model: no crashes