Dynamic Hypercube Topology Stefan Schmid URAW 2005 Upper Rhine Algorithms Workshop University of Tübingen, Germany.

Stefan Schmid, ETH URAW

Static vs. Dynamic Networks (1)
Network graph G=(V,E)
– V = set of vertices (“nodes”, machines, peers, …)
– E = set of edges (“connections”, wires, links, pointers, …)
“Traditional”, static networks
– Fixed set of vertices, fixed set of edges
– E.g., interconnection network of parallel computers
[Figure: fat tree topology of a parallel computer]

Static vs. Dynamic Networks (2)
Dynamic networks
– Set of nodes and/or set of edges is dynamic
– Here: nodes may join and leave
– E.g., peer-to-peer (P2P) systems (Napster, Gnutella, …)
[Figure: dynamic Chord topology]

Dynamic Peer-to-Peer Systems
Peer-to-peer systems
– cooperation of many machines (to share files, CPU cycles, etc.)
– usually desktop computers under control of individual users
– user may turn machine on and off at any time
– => Churn
How to maintain desirable properties such as connectivity, network diameter, node degree, ...?

Talk Overview
– Model
– Ingredients: basic algorithms on the hypercube graph
– Assembling the components
– Results for the hypercube
– Conclusion, generalization, and open problems
– Discussion

Model (1): Network Model
Typical P2P overlay network
– Vertices v ∈ V: peers (dynamic: may join and leave)
– Directed edges (u,v) ∈ E: u knows IP address of v (static)
Assumption: the overlay network builds upon the complete Internet graph
– Sending a message over an overlay edge => routing in the underlying Internet graph

Model (2): Worst-Case (Adversarial) Dynamics
Model worst-case faults with an adversary ADV(J, L, λ)
– ADV(J, L, λ) has complete visibility of the entire state of the system
– May add at most J and remove at most L peers in any time period of length λ

Model (3): Communication Rounds
Our system is synchronous, i.e., our algorithms run in rounds
– One round: receive messages, local computation, send messages
However: real distributed systems are asynchronous!
But: a notion of time is necessary to bound the adversary

Overview of Dynamic Hypercube System
Idea: Arrange peers into a simulated hypercube where each node consists of several (logarithmically many) peers!
– Gives a certain redundancy and thus time to react to changes.
– But still guarantees diameter D = O(log n) and degree Δ = O(log n), as in the normal hypercube (n = total number of peers)!
[Figure: normal hypercube topology vs. simulated hypercube topology – how to connect the peers?]

Ingredients for Fault-Tolerant Hypercube System
Basic components:
– Simulation: a node consists of several peers!
– Route peers to sparse areas => token distribution algorithm!
– Adapt the dimension => information aggregation algorithm!

Components: Peer Distribution and Information Aggregation
Peer distribution (tackled next!)
– Goal: Distribute peers evenly among all hypercube nodes in order to balance biased adversarial churn
– Basically a token distribution problem
Counting the total number of peers (information aggregation)
– Goal: Estimate the total number of peers in the system and adapt the dimension accordingly

Dynamic Token Distribution Algorithm (1)
Algorithm: Cycle over the dimensions and balance!
Perfectly balanced after d steps!
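The cycle-and-balance idea can be sketched as follows. This is an illustrative fractional version (the function name and setup are mine, not the talk's): in step i every node averages its load with its neighbor across dimension i, and after d steps the load is perfectly balanced.

```python
def balance_fractional(tokens, d):
    """Dimension-exchange balancing on a d-dimensional hypercube.

    tokens: list of 2**d fractional loads, indexed by node id.
    In step i, every node averages its load with its neighbor across
    dimension i; after d steps all loads equal the global average.
    """
    tokens = list(tokens)
    for i in range(d):                  # cycle over the d dimensions
        for u in range(2 ** d):
            v = u ^ (1 << i)            # neighbor across dimension i
            if u < v:                   # touch each edge only once
                avg = (tokens[u] + tokens[v]) / 2
                tokens[u] = tokens[v] = avg
    return tokens

load = [8, 0, 4, 0, 2, 0, 2, 0]         # 16 tokens on the 2^3 nodes
print(balance_fractional(load, 3))      # -> [2.0, 2.0, ..., 2.0]
```

After the first step the imbalance is halved along dimension 0, after the second along dimension 1, and so on; since every pair of nodes is connected by flipping at most d bits, d steps suffice.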

Dynamic Token Distribution Algorithm (2)
Problem 1: Peers are not fractional!
However, by induction, the integer discrepancy is at most d larger than the fractional discrepancy.

Dynamic Token Distribution Algorithm (3)
Problem 2: An adversary inserts at most J and removes at most L peers per step!
Fortunately, these dynamic changes are balanced quite fast (geometric series). Thus:
Theorem 1: Given adversary ADV(J, L, 1), the discrepancy never exceeds 2J + 2L + d!
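To get a feel for Theorem 1, here is a tiny simulation. It is purely illustrative: the adversary strategy (dump J tokens on node 0, delete L from the fullest node, every step) and all names are my assumptions, not the talk's. The observed fractional discrepancy stays below the 2J + 2L + d bound.

```python
def churn_and_balance(d, J, L, steps):
    """Fractional token balancing under per-step churn; returns the
    worst max-min discrepancy observed over the whole run."""
    n = 2 ** d
    tokens = [0.0] * n
    worst = 0.0
    for t in range(steps):
        tokens[0] += J                            # adversary: J joins at node 0
        big = max(range(n), key=lambda u: tokens[u])
        tokens[big] = max(0.0, tokens[big] - L)   # adversary: L leaves, fullest node
        i = t % d                                 # balance one dimension per step
        for u in range(n):
            v = u ^ (1 << i)
            if u < v:
                avg = (tokens[u] + tokens[v]) / 2
                tokens[u] = tokens[v] = avg
        worst = max(worst, max(tokens) - min(tokens))
    return worst

d, J, L = 3, 2, 1
print(churn_and_balance(d, J, L, 200), "vs. bound", 2 * J + 2 * L + d)
```

The geometric-series effect is visible here: the excess a single injection leaves behind is halved each subsequent step, so the injections of different steps sum to a bounded backlog rather than growing without limit.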

Excursion: Randomized Token Distribution
Again the static case, but this time assign the “dangling” token to one of the edge’s vertices at random (“randomized rounding”)
Dangling tokens are binomially distributed => Chernoff lower tail
Theorem 2: The expected discrepancy is constant (~3)!
[Figure: coin flip, p = 0.5]
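The randomized-rounding step can be sketched like this (setup and names invented for illustration): whenever the two endpoints of an edge hold an odd total, the dangling token goes to either endpoint with probability 1/2.

```python
import random

def balance_randomized(tokens, d, rng=None):
    """Integer dimension-exchange balancing with randomized rounding:
    a dangling token is assigned to either endpoint by a coin flip."""
    rng = rng or random.Random(0)
    tokens = list(tokens)
    for i in range(d):
        for u in range(2 ** d):
            v = u ^ (1 << i)
            if u < v:
                total = tokens[u] + tokens[v]
                low, odd = divmod(total, 2)
                if odd and rng.random() < 0.5:
                    tokens[u], tokens[v] = low + 1, low   # u gets the dangler
                else:
                    tokens[u], tokens[v] = low, total - low
    return tokens

print(balance_randomized([7, 0, 0, 0, 0, 0, 0, 0], 3))
```

With 7 tokens on 8 nodes, every run ends with seven nodes holding one token and one node holding none, whichever way the coin flips land.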


Information Aggregation Algorithm (1)
Goal: Provide the same (and good!) estimate of the total number of peers presently in the system to all nodes
– Thresholds for expansion and reduction
Means: Exploit again the recursive structure of the hypercube!

Information Aggregation Algorithm (2)
Algorithm: Count the peers in every sub-cube by exchange with the corresponding neighbor!
Correct number after d steps!

Information Aggregation Algorithm (3)
But again, we have a concurrent adversary!
Solution: Pipelined execution!
Theorem 3: The information aggregation algorithm yields the same estimate to all nodes. Moreover, this number represents the correct state of the system d steps ago!

Composing the Components
Our system permanently runs
– the peer distribution algorithm to balance biased churn
– the information aggregation algorithm to estimate the total number of peers and change the dimension accordingly
But: How are peers connected inside a node, and how are the edges of the hypercube represented?

Intra- and Interconnections
Peers inside the same hypercube vertex are completely connected (clique).
Moreover, there is a matching between the peers of neighboring vertices.
[Figure: clique inside a vertex, matching between neighboring vertices]
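One way to write this edge set down (function and peer names are invented for illustration): a clique inside each vertex plus a positional matching across each hypercube edge.

```python
from itertools import combinations

def simulated_edges(node_peers, d):
    """node_peers: dict mapping hypercube vertex id -> list of peer ids.
    Returns the overlay edges: intra-vertex cliques plus a matching
    between the peers of neighboring vertices."""
    edges = set()
    for peers in node_peers.values():
        edges.update(combinations(peers, 2))       # clique inside a vertex
    for u, peers_u in node_peers.items():
        for i in range(d):
            v = u ^ (1 << i)                       # neighbor vertex
            if u < v:                              # each cube edge once
                # positional matching (zip truncates if sizes differ)
                edges.update(zip(peers_u, node_peers[v]))
    return edges

nodes = {0: ["a", "b"], 1: ["c", "d"]}             # a 1-dimensional cube
print(sorted(simulated_edges(nodes, 1)))
```

With logarithmically many peers per vertex, the clique and the d matchings give each peer O(log n) neighbors, matching the degree bound claimed for the simulated topology.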

Maintenance Algorithm
The maintenance algorithm runs in phases
– Phase = 6 rounds
In phase i:
– Snapshot of the state of the system in round 1
– One exchange to estimate the number of peers in the sub-cubes (information aggregation)
– Token balancing in dimension i mod d
– Dimension change if necessary
All based on the snapshot made in round 1, ignoring the changes that have happened in between!

Results for Hypercube Topology
Given an adversary ADV(d+1, d+1, λ) ...
=> peer discrepancy at most 5d + 4 (Theorem 1)
=> total number of peers known with delay d (Theorem 3)
... we have, in spite of ADV(O(log n), O(log n), 1):
– always at least one peer per node,
– peer degree bounded by O(log n) (asymptotically optimal!),
– network diameter O(log n).

A Blueprint for Many Graphs?
Conclusion: We have achieved asymptotically optimal fault-tolerance for an O(log n)-degree and O(log n)-diameter topology.
Generalization? We could apply the same tricks to general graphs G=(V,E), given the ingredients (on G):
– a token distribution algorithm
– an information aggregation algorithm
For instance: Easy for skip graphs (Δ = D = O(log n)), possible for pancake graphs (Δ = D = O(log n / log log n)).

Open Problems
– Experiences with other graphs?
– Other models for graph dynamics?
– Fewer messages?
Thank you for your attention!

Discussion
Questions? Inputs? Feedback?