Jörg Liebeherr, 2001 HyperCast - Support of Many-To-Many Multicast Communications Jörg Liebeherr University of Virginia

Jörg Liebeherr, 2001 Acknowledgements Collaborators: –Bhupinder Sethi –Tyler Beam –Burton Filstrup –Mike Nahas –Dongwen Wang –Konrad Lorincz –Neelima Putreeva This work is supported in part by the National Science Foundation

Jörg Liebeherr, 2001 Many-to-Many Multicast Applications [Chart: applications plotted by group size vs. number of senders (values up to 1,000,000) — Streaming, Content Distribution, Collaboration Tools, Games, Distributed Systems, Peer-to-Peer Applications]

Jörg Liebeherr, 2001 Need for Multicasting? Maintaining individual unicast connections is not feasible The infrastructure or services need to support a “send to group”

Jörg Liebeherr, 2001 Problem with Multicasting Feedback Implosion: A node is overwhelmed with traffic or state –One-to-many multicast with feedback (e.g., NAK implosion in reliable multicast) –Many-to-one multicast (Incast)

Jörg Liebeherr, 2001 Multicast support in the network infrastructure (IP Multicast) Reality Check (after 10 years of IP Multicast): –Deployment has encountered severe scalability limitations in both the size and number of groups that can be supported –IP Multicast is still plagued with concerns pertaining to scalability, network management, deployment and support for error, flow and congestion control

Jörg Liebeherr, 2001 Host-based Multicasting Logical overlay resides on top of the Layer-3 network Data is transmitted between neighbors in the overlay No IP Multicast support needed Overlay topology should match the Layer-3 infrastructure

Jörg Liebeherr, 2001 Host-based multicast approaches (all after 1998) Build an overlay mesh network and embed trees into the mesh: –Narada (CMU) –RMX/Gossamer (UCB) Build a shared tree: –Yallcast/Yoid (NTT) –Banana Tree Protocol (UMich) –AMRoute (Telcordia, UMD – College Park) –Overcast (MIT) Other: Gnutella

Jörg Liebeherr, 2001 Our Approach Build the virtual overlay as a graph with known properties –N-dimensional (incomplete) hypercube –Delaunay triangulation Advantages: –Routing in the overlay is implicit –Achieve good load-balancing –Exploit symmetry Claim: Can improve scalability of multicast applications by orders of magnitude over existing solutions

Jörg Liebeherr, 2001 Multicasting with Overlay Network Approach: Organize group members into a virtual overlay network. Transmit data using trees embedded in the virtual network.

Jörg Liebeherr, 2001 Introducing the Hypercube An n-dimensional hypercube has N = 2^n nodes, where –Each node is labeled k_n k_{n-1} … k_1 (k_i ∈ {0, 1}) –Two nodes are connected by an edge if and only if their labels differ in exactly one position.

Jörg Liebeherr, 2001 We Use a Gray Code for Ordering Nodes A Gray code, denoted by G(·), is defined by –G(i) and G(i+1) differ in exactly one bit. –G(2^d − 1) and G(0) differ in only one bit.
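
The binary-reflected Gray code satisfies both properties. As a minimal sketch (my own illustration, not part of the slides), G(i) can be computed as i XOR (i >> 1):

```java
public class GrayCode {
    // Binary-reflected Gray code: G(i) = i XOR (i >> 1).
    static int gray(int i) { return i ^ (i >>> 1); }

    public static void main(String[] args) {
        int d = 3;  // labels of a 3-dimensional hypercube
        for (int i = 0; i < (1 << d); i++) {
            System.out.printf("G(%d) = %3s%n", i, Integer.toBinaryString(gray(i)));
        }
        // Both defining properties hold: consecutive codes differ in exactly one bit,
        // and so do G(2^d - 1) and G(0).
        System.out.println(Integer.bitCount(gray((1 << d) - 1) ^ gray(0)) == 1);
    }
}
```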

Jörg Liebeherr, 2001 Nodes are added in a Gray order

Jörg Liebeherr, 2001 Tree Embedding Algorithm
Input: G(i) := I = I_n … I_2 I_1, G(r) := R = R_n … R_2 R_1
Output: Parent of node I in the embedded tree rooted at R.
Procedure Parent(I, R)
  If (G^-1(I) < G^-1(R))
    Parent := I_n I_{n-1} … I_{k+1} (1 - I_k) I_{k-1} … I_2 I_1, with k = min { i : I_i ≠ R_i }
  Else
    Parent := I_n I_{n-1} … I_{k+1} (1 - I_k) I_{k-1} … I_2 I_1, with k = max { i : I_i ≠ R_i }
  Endif
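
The procedure flips one bit of I: the lowest-order differing bit if I precedes R in Gray order, otherwise the highest-order one. Below is a compact executable rendering (a sketch of my own; it assumes labels are stored as ints with bit positions counted from the least significant bit, and reuses the Gray-code helper from above):

```java
public class HypercubeTreeEmbedding {
    // Binary-reflected Gray code and its inverse.
    static int gray(int i) { return i ^ (i >>> 1); }

    static int grayInverse(int g) {
        int i = 0;
        for (; g != 0; g >>>= 1) i ^= g;
        return i;
    }

    // Parent of the node with Gray-coded label I in the tree rooted at label R.
    static int parent(int I, int R) {
        if (I == R) throw new IllegalArgumentException("the root has no parent");
        int diff = I ^ R;                                   // bit positions where I and R differ
        int k = (grayInverse(I) < grayInverse(R))
                ? Integer.numberOfTrailingZeros(diff)       // k = min i with I_i != R_i
                : 31 - Integer.numberOfLeadingZeros(diff);  // k = max i with I_i != R_i
        return I ^ (1 << k);                                // flip bit k of I
    }

    public static void main(String[] args) {
        // Reproduce the embedding with node 000 as root in a 3-dimensional hypercube.
        int root = 0b000;
        for (int i = 1; i < 8; i++) {
            int label = gray(i);
            System.out.printf("parent(%3s) = %3s%n",
                    Integer.toBinaryString(label),
                    Integer.toBinaryString(parent(label, root)));
        }
    }
}
```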

Jörg Liebeherr, 2001 Tree Embedding Node 000 is root:

Jörg Liebeherr, 2001 Another Tree Embedding Node 111 is root:

Jörg Liebeherr, 2001 Performance Comparison (Part 1) Compare tree embeddings of: –Shared Tree –Hypercube

Jörg Liebeherr, 2001 Comparison of Hypercubes with Shared Trees (Infocom 98)
–T_l is a control tree with root l
–w_k(T_l): the number of children of node k in tree T_l
–v_k(T_l): the number of descendants in the sub-tree below node k in tree T_l
–p_k(T_l): the path length from node k in tree T_l to the root of the tree
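
To make the three metrics concrete, the following sketch (my own illustration) computes w_k, v_k, and p_k for the embedding rooted at 000 shown earlier, given the tree as child-to-parent pointers:

```java
import java.util.*;

public class TreeMetrics {
    public static void main(String[] args) {
        // The embedding rooted at 000 from the earlier slide, as child -> parent
        // (the root points to itself).
        Map<String, String> parent = Map.of(
                "000", "000", "001", "000", "010", "000", "100", "000",
                "011", "001", "101", "001", "110", "010", "111", "011");

        Map<String, Integer> w = new HashMap<>(), v = new HashMap<>(), p = new HashMap<>();
        for (String k : parent.keySet()) { w.put(k, 0); v.put(k, 0); }

        for (String k : parent.keySet()) {
            if (k.equals(parent.get(k))) { p.put(k, 0); continue; }    // the root
            w.merge(parent.get(k), 1, Integer::sum);                   // k is a child of its parent
            int hops = 0;
            for (String n = k; !n.equals(parent.get(n)); n = parent.get(n)) {
                v.merge(parent.get(n), 1, Integer::sum);               // k is a descendant of every ancestor
                hops++;
            }
            p.put(k, hops);                                            // path length from k to the root
        }
        parent.keySet().stream().sorted().forEach(k ->
                System.out.printf("node %s: w=%d  v=%d  p=%d%n", k, w.get(k), v.get(k), p.get(k)));
    }
}
```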

Jörg Liebeherr, 2001 Average number of descendants in a tree

Jörg Liebeherr, 2001 HyperCast Protocol Goal: Organize the members of a multicast group in a logical hypercube. Design criteria for scalability: –Soft-State (state is not permanent) –Decentralized (every node is aware only of its neighbors in the cube) –Must handle dynamic group membership efficiently

Jörg Liebeherr, 2001 HyperCast Protocol The HyperCast protocol maintains a stable hypercube: Consistent: No two nodes have the same logical address Compact: The dimension of the hypercube is as small as possible Connected: Each node knows the physical address of each of its neighbors in the hypercube
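
One way to read the three conditions operationally (my own framing, under the assumption that a compact hypercube occupies the first N positions in Gray order, consistent with the earlier slide):

```java
import java.util.*;

public class StableCubeCheck {
    static int gray(int i) { return i ^ (i >>> 1); }

    static boolean isStable(Map<String, Integer> logicalAddr,
                            Map<String, Set<Integer>> knownNeighbors) {
        Collection<Integer> addrs = logicalAddr.values();

        // Consistent: no two nodes share a logical address.
        if (new HashSet<>(addrs).size() != logicalAddr.size()) return false;

        // Compact: the N nodes hold the logical addresses G(0) ... G(N-1).
        Set<Integer> expected = new HashSet<>();
        for (int i = 0; i < logicalAddr.size(); i++) expected.add(gray(i));
        if (!expected.equals(new HashSet<>(addrs))) return false;

        // Connected: every node knows each hypercube neighbor that is present
        // (i.e., each present label differing from its own in exactly one bit).
        for (Map.Entry<String, Integer> node : logicalAddr.entrySet()) {
            for (int other : addrs) {
                boolean isNeighbor = Integer.bitCount(other ^ node.getValue()) == 1;
                if (isNeighbor && !knownNeighbors.get(node.getKey()).contains(other)) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Integer> addr = Map.of("a", 0b00, "b", 0b01, "c", 0b11);
        Map<String, Set<Integer>> nbrs = Map.of(
                "a", Set.of(0b01), "b", Set.of(0b00, 0b11), "c", Set.of(0b01));
        System.out.println(isStable(addr, nbrs));   // true: addresses G(0), G(1), G(2)
    }
}
```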

Jörg Liebeherr, 2001 Hypercast Protocol 1. A new node joins 2. A node fails

Jörg Liebeherr, 2001 The node with the highest logical address (HRoot) sends out beacons The beacon message is received by all nodes Other nodes use the beacon to determine if they are missing neighbors

Jörg Liebeherr, 2001 Each node sends ping messages to its neighbors periodically

Jörg Liebeherr, 2001 A node that wants to join the hypercube will send a beacon message announcing its presence

Jörg Liebeherr, 2001 The HRoot receives the beacon and responds with a ping containing the new logical address (111)

Jörg Liebeherr, 2001 The joining node takes on the logical address given to it (111) and adds “110” as its neighbor

Jörg Liebeherr, 2001 The new node responds with a ping.

Jörg Liebeherr, 2001 The new node is now the new HRoot, and so it will beacon

Jörg Liebeherr, 2001 Upon receiving the beacon, some nodes realize that they have a new neighbor and ping in response

Jörg Liebeherr, 2001 The new node responds to a ping from each new neighbor

Jörg Liebeherr, 2001 The join operation is now complete, and the hypercube is once again in a stable state
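
Compressing the walkthrough into one rule: the HRoot answers a join beacon by handing out the next logical address in Gray order. A minimal sketch of just that step (field and method names are mine, not HyperCast protocol fields):

```java
public class JoinSketch {
    static int gray(int i) { return i ^ (i >>> 1); }

    int logicalAddress;   // this node's Gray-coded logical address
    int groupSize;        // number of nodes currently in the cube
    boolean isHRoot;      // true if this node holds the highest logical address

    // Reaction of the HRoot to a beacon from a node that is not yet in the cube:
    // assign it the next position in Gray order (delivered inside a ping).
    int onJoinBeacon() {
        if (!isHRoot) return -1;    // only the HRoot answers a join beacon
        return gray(groupSize);     // next unused Gray-code label
    }

    public static void main(String[] args) {
        JoinSketch hroot = new JoinSketch();
        hroot.logicalAddress = 0b110;   // HRoot of a 5-node cube, as in the walkthrough
        hroot.groupSize = 5;
        hroot.isHRoot = true;
        // With nodes G(0)..G(4) = 000..110 present, the joining node receives G(5) = 111.
        System.out.println(Integer.toBinaryString(hroot.onJoinBeacon()));
    }
}
```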

Jörg Liebeherr, 2001 Hypercast Protocol 1. A new node joins 2. A node fails

Jörg Liebeherr, 2001 A node may fail unexpectedly

Jörg Liebeherr, 2001 Holes in the hypercube fabric are discovered via lack of response to pings

Jörg Liebeherr, 2001 The HRoot and nodes which are missing neighbors send beacons

Jörg Liebeherr, 2001 Nodes which are missing neighbors can move the HRoot to a new (lower) logical address with a ping message (here: 001)

Jörg Liebeherr, 2001 The HRoot sends leave messages as it leaves the 111 logical address

Jörg Liebeherr, 2001 The HRoot takes on the new logical address and pings back

Jörg Liebeherr, 2001 The “old” 111 is now node 001, and takes its place in the cube

Jörg Liebeherr, 2001 The new HRoot and the nodes which are missing neighbors will beacon

Jörg Liebeherr, 2001 Upon receiving the beacons, the neighboring nodes ping each other to finish the repair operation

Jörg Liebeherr, 2001 The repair operation is now complete, and the hypercube is once again stable

Jörg Liebeherr, 2001 State Diagram of a Node

Jörg Liebeherr, 2001 Experiments Experimental Platform: Centurion cluster at UVA (cluster of Linux PCs) –2 to 1024 hypercube nodes (on 32 machines) –current record: 10,080 nodes on 106 machines Experiment: –Add N new nodes to a hypercube with M nodes Performance measures: Time until hypercube reaches a stable state Rate of unicast transmissions Rate of multicast transmissions

Jörg Liebeherr, 2001 Experiment 1: Time for Join Operation vs. Size of Hypercube (M) and Number of Joining Nodes (N) [surface plot; axes: log2(# of nodes in hypercube), log2(# of nodes joining hypercube); time in units of t_heartbeat]

Jörg Liebeherr, 2001 Experiment 1: Unicast Traffic for Join Operation vs. Size of Hypercube (M) and Number of Joining Nodes (N) [surface plot; axes: log2(# of nodes in hypercube), log2(# of nodes joining hypercube); unicast byte rate in bytes/t_heartbeat]

Jörg Liebeherr, 2001 Experiment 1: Multicast Traffic for Join Operation vs. Size of Hypercube (M) and Number of Joining Nodes (N) [surface plot; axes: log2(# of nodes in hypercube), log2(# of nodes joining hypercube); multicast byte rate in bytes/t_heartbeat]

Jörg Liebeherr, 2001 Work in progress: Triangulations The hypercube topology does not consider geographical proximity A triangulation can ensure that logical neighbors are “close by”

Jörg Liebeherr, 2001 Voronoi Regions The Voronoi region of a node is the region of the plane that is closer to this node than to any other node. (Example nodes: (12,0), (10,8), (5,2), (4,9), (0,6))
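
Equivalently, the node whose Voronoi region contains a point is simply the node closest to that point. A small sketch (my own, using the coordinates from the example figure):

```java
import java.util.List;

public class VoronoiOwner {
    record Point(double x, double y) {
        double dist2(Point p) { double dx = x - p.x, dy = y - p.y; return dx * dx + dy * dy; }
    }

    // The node whose Voronoi region contains q is the node nearest to q.
    static Point owner(List<Point> nodes, Point q) {
        Point best = nodes.get(0);
        for (Point n : nodes) if (n.dist2(q) < best.dist2(q)) best = n;
        return best;
    }

    public static void main(String[] args) {
        List<Point> nodes = List.of(new Point(12, 0), new Point(10, 8),
                new Point(5, 2), new Point(4, 9), new Point(0, 6));
        // Prints the node (5,2): the point (8,4) used in the join example below
        // falls into (5,2)'s Voronoi region.
        System.out.println(owner(nodes, new Point(8, 4)));
    }
}
```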

Jörg Liebeherr, 2001 Delaunay Triangulation The Delaunay triangulation has edges between nodes in neighboring Voronoi regions.
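
An equivalent characterization from computational geometry (not stated on the slide, but standard): a triangulation is Delaunay exactly when no node lies inside the circumcircle of any triangle. The usual determinant test, as a sketch:

```java
public class InCircleTest {
    // Positive result: point d lies inside the circumcircle of triangle (a, b, c),
    // with a, b, c given in counterclockwise order.
    static double inCircle(double ax, double ay, double bx, double by,
                           double cx, double cy, double dx, double dy) {
        double adx = ax - dx, ady = ay - dy;
        double bdx = bx - dx, bdy = by - dy;
        double cdx = cx - dx, cdy = cy - dy;
        double ad = adx * adx + ady * ady;
        double bd = bdx * bdx + bdy * bdy;
        double cd = cdx * cdx + cdy * cdy;
        return adx * (bdy * cd - bd * cdy)
             - ady * (bdx * cd - bd * cdx)
             + ad  * (bdx * cdy - bdy * cdx);
    }

    public static void main(String[] args) {
        // Is (10,8) inside the circumcircle of the triangle (12,0), (8,4), (5,2)?
        // It is not, so this prints false.
        System.out.println(inCircle(12, 0, 8, 4, 5, 2, 10, 8) > 0);
    }
}
```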

Jörg Liebeherr, 2001 A Leader is a node with a Y-coordinate higher than any of its neighbors. The Leader periodically broadcasts a beacon.

Jörg Liebeherr, 2001 Each node sends ping messages to its neighbors periodically

Jörg Liebeherr, 2001 A node that wants to join the triangulation will send a beacon message announcing its presence (here, the new node is (8,4))

Jörg Liebeherr, 2001 The new node is located in one node’s Voronoi region. This node (5,2) updates its Voronoi region, and the triangulation

Jörg Liebeherr, 2001 (5,2) sends a ping which contains info for contacting its clockwise and counterclockwise neighbors

Jörg Liebeherr, 2001 (8,4) contacts these neighbors...

Jörg Liebeherr, 2001 … which update their respective Voronoi regions.

Jörg Liebeherr, 2001 (4,9) and (12,0) send pings and provide info for contacting their respective clockwise and counterclockwise neighbors.

Jörg Liebeherr, 2001 (8,4) contacts the new neighbor (10,8)...

Jörg Liebeherr, 2001 …which updates its Voronoi region...

Jörg Liebeherr, 2001 …and responds with a ping

Jörg Liebeherr, 2001 This completes the update of the Voronoi regions and the Delaunay Triangulation

Jörg Liebeherr, 2001 Problem with Delaunay Triangulations The Delaunay triangulation considers the locations of nodes, but not the network topology Two heuristics to achieve a better mapping:

Jörg Liebeherr, 2001 Hierarchical Delaunay Triangulation 2-level hierarchy of Delaunay triangulations The node with the lowest x-coordinate in a domain DT is a member of 2 triangulations

Jörg Liebeherr, 2001 Multipoint Delaunay Triangulation Different (“implicit”) hierarchical organization “Virtual nodes” are positioned to form a “bounding box” around a cluster of nodes. All traffic to nodes in a cluster goes through one of the virtual nodes

Jörg Liebeherr, 2001 Work in Progress: Evaluation of Overlays Simulation: –Network with 1024 routers (“Transit-Stub” topology) – hosts Performance measures for trees embedded in an overlay network: –Degree of a node in an embedded tree –“Relative Delay Penalty”: Ratio of delay in overlay to shortest path delay –“Stress”: Number of overlay links over a physical link
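
A toy illustration of the last two measures (the definitions come from the slide; the three-host example and its numbers are made up purely for illustration):

```java
import java.util.*;

public class OverlayMetrics {
    public static void main(String[] args) {
        // Underlay (shortest-path) delays between hosts A, B, C.
        Map<String, Integer> delay = Map.of("A-B", 3, "B-C", 2, "A-C", 4);

        // Relative delay penalty: overlay path A -> B -> C versus the direct shortest path A -> C.
        int overlayDelay = delay.get("A-B") + delay.get("B-C");     // 5
        double rdp = (double) overlayDelay / delay.get("A-C");      // 5 / 4 = 1.25
        System.out.println("Relative delay penalty A->C: " + rdp);

        // Stress: number of overlay links routed over each physical link.
        // Each overlay link is listed with the physical links its unicast route uses.
        Map<String, List<String>> overlayToPhysical = Map.of(
                "A-B", List.of("A-r1", "r1-B"),
                "B-C", List.of("r1-B", "r1-C"),
                "A-C", List.of("A-r1", "r1-C"));
        Map<String, Integer> stress = new HashMap<>();
        overlayToPhysical.values().forEach(path ->
                path.forEach(link -> stress.merge(link, 1, Integer::sum)));
        System.out.println("Stress per physical link: " + stress);
    }
}
```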

Jörg Liebeherr, 2001 Transit-Stub Network Transit-Stub GA Tech graph generator 4 transit domains 4 × 16 stub domains 1024 total routers 128 hosts on a stub domain

Jörg Liebeherr, 2001 Overlay Topologies Delaunay Triangulation and variants –Hierarchical DT –Multipoint DT Degree-6 Graph –Similar to graphs generated in Narada Degree-3 Tree –Similar to graphs generated in Yoid Logical MST –Minimum Spanning Tree Hypercube

Jörg Liebeherr, 2001 Maximum Relative Delay Penalty

Jörg Liebeherr, 2001 Maximum Node Degree

Jörg Liebeherr, 2001 Maximum Average Node Degree (over all trees)

Jörg Liebeherr, 2001 Max Stress (Single node sending)

Jörg Liebeherr, 2001 Maximum Average Stress (all nodes sending)

Jörg Liebeherr, 2001 Time To Stabilize

Jörg Liebeherr, 2001 …th Percentile Relative Delay Penalty

Jörg Liebeherr, 2001 Work in progress: Provide an API Goal: Provide a Socket like interface to applications [Architecture diagram components: OLCast Application, OLSocket Interface, OLSocket, Group Manager, Forwarding Engine, ARQ_Engine, Stats, Receive Buffer, Unconfirmed Datagram Transmission, Confirmed Datagram Transmission, TCP Transmission, OL_Node Interface, Overlay Protocol]
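
Purely to illustrate what a socket-like overlay interface could look like, here is a hypothetical sketch; the type and method names below are placeholders of my own, not the actual OLSocket API:

```java
import java.util.concurrent.*;

// Hypothetical interface: join a group, multicast to it, read from a receive buffer.
interface OverlaySocketSketch {
    void joinGroup();
    void sendToAll(byte[] payload);                 // multicast along the embedded tree
    byte[] receive() throws InterruptedException;   // next datagram from the receive buffer
    void leaveGroup();
}

public class ApiSketch {
    // Loopback stand-in so the usage below runs; a real implementation would
    // forward payloads to overlay neighbors instead of echoing them locally.
    static class LoopbackSocket implements OverlaySocketSketch {
        private final BlockingQueue<byte[]> receiveBuffer = new LinkedBlockingQueue<>();
        public void joinGroup() { }
        public void sendToAll(byte[] payload) { receiveBuffer.add(payload); }
        public byte[] receive() throws InterruptedException { return receiveBuffer.take(); }
        public void leaveGroup() { }
    }

    public static void main(String[] args) throws InterruptedException {
        OverlaySocketSketch socket = new LoopbackSocket();
        socket.joinGroup();
        socket.sendToAll("hello group".getBytes());
        System.out.println(new String(socket.receive()));
        socket.leaveGroup();
    }
}
```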

Jörg Liebeherr, 2001 Summary Overlay network for many-to-many multicast applications using Hypercubes and Delaunay Triangulations Performance evaluation of trade-offs via analysis, simulation, and experimentation “Socket like” API is close to completion Proof-of-concept applications: –MPEG-1 streaming –file transfer –interactive games Scalability to groups with > 100,000 members appears feasible