CS 269: Lecture 11: Multicast
Scott Shenker and Ion Stoica
Computer Science Division, Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720-1776

This Week: A Tale of Two Failures
Multicast and QoS: the lost decade

History
- Multicast and QoS dominated the research literature in the ’90s
- Both failed in their attempt to become pervasively available
- Both now available in enterprises, but not in the public Internet
- Both now scorned as research topics

Irony
- The biggest critics of QoS were the multicast partisans
  - And the QoS advocates envied the hipness of mcast…
- They complained about QoS being unscalable
  - Among other complaints…
- Irony #1: multicast is no more scalable than QoS
- Irony #2: scaling did not cause either of their downfalls
  - Many now think economics was the problem
  - Revenue model did not fit delivery model

Lectures
- Today: multicast
  - Focus on multicast as a state of mind, not on details
- Wednesday: QoS
  - More “why” than “what”

Agenda
- Preliminaries
- Multicast routing
- Using multicast
- Reliable multicast
- Multicast’s philosophical legacy

Motivation
- Often want to send data to many machines at once
  - Video distribution (TV broadcast)
  - Teleconferences, etc.
  - News updates
- Using unicast to reach each individual is hard and wasteful
  - Sender state: ~O(n) and highly dynamic
  - Total load: ~O(nd), where d is the network diameter
  - Hotspot load: ~O(n) on the host and first link
- Multicast: sender state O(1), total load O(d log n), hotspot load O(1)

Multicast Service Model
- Send to a logical group address
  - Location-independent
- Delivery limited by specified scope
  - Can reach “nearby” members
- Best-effort delivery

Open Membership Model
- Anyone, anywhere, can join
  - Dynamic membership: join and leave at will
- Anyone can send at any time
  - Even nonmembers

Other Design Choices?

Division of Responsibilities
- Host’s responsibility to register interest with the network
  - IGMP (see the sketch below)
- Network’s responsibility to deliver packets to the host
  - Multicast routing protocol
- Left unspecified:
  - Address assignment (random, MASC, etc.)
  - Application-to-group mapping (session directory, etc.)
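The host side of this split is visible through the ordinary sockets API: joining a group is an application-level request, and the kernel turns it into an IGMP membership report on the local network. A minimal Python sketch, with an arbitrary example group address and port:

```python
import socket
import struct

GROUP = "239.1.2.3"   # example administratively scoped group address
PORT = 5007           # example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel to join GROUP on the default interface; this is what
# triggers an IGMP membership report toward the local router.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(1500)   # receive packets sent to the group
```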

Target Environment
- LANs connected in an arbitrary topology
- LANs support local multicast
- Host network cards filter multicast traffic

Multicast Routing Algorithms

Routing Performance Goals
- Roughly equivalent to unicast best-effort service in terms of drops/delays
- Efficient tree
- No complicated forwarding machinery, etc.
- Low join/leave latency

Two Basic Routing Approaches
- Source-based trees (e.g., DVMRP, PIM-DM)
  - A tree from each source to the group
  - State: O(G*S) (see the state sketch below)
  - Good for dense groups (all routers involved)
- Shared trees (e.g., CBT, PIM-SM)
  - A single tree per group, shared by all sources
  - State: O(G)
  - Better for sparse groups (only routers on the path involved)
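One way to see the difference is in the shape of the forwarding state each approach needs. The entries below are hypothetical, just to contrast the table keys:

```python
# Hypothetical forwarding-state shapes (interface and address values are made up).
source_tree_state = {
    # (source, group) -> outgoing interfaces: one entry per source per group
    ("10.0.0.1", "239.1.2.3"): {"eth1", "eth2"},
    ("10.0.9.7", "239.1.2.3"): {"eth3"},
}   # grows as O(G * S)

shared_tree_state = {
    # group -> outgoing interfaces on the single shared tree
    "239.1.2.3": {"eth1", "eth3"},
}   # grows as O(G)
```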

DVMRP
- Developed as a sequence of protocols:
  - Reverse Path Flooding (RPF)
  - Reverse Path Broadcasting (RPB)
  - Truncated Reverse Path Broadcasting (TRPB)
  - Reverse Path Multicast (RPM)
- General philosophy: multicast = pruned broadcast
  - Don’t construct a new tree, merely prune the old one
- Observation: unicast routing state tells a router the shortest path to S
  - Reversing direction sends packets from S without forming loops

Basic Forwarding Rule
- Routing state: to reach S, send along link L
- Flooding rule: if a packet from S is received along link L, forward on all other links (sketch below)
- This works fine for symmetric links
  - Ignore asymmetry today
- This works fine for point-to-point links
  - Can result in multiple packets sent on LANs
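A minimal sketch of that rule, assuming symmetric point-to-point links; `unicast_next_link` stands in for a lookup into the router's existing unicast routing table (the names are illustrative, not DVMRP's actual interfaces):

```python
def rpf_flood(source, arrival_link, links, unicast_next_link):
    """Reverse Path Flooding check: accept a packet from `source` only if it
    arrived on the link this router would itself use to reach the source,
    then forward it on all other links."""
    if arrival_link != unicast_next_link(source):
        return []                                   # off the reverse path: drop
    return [link for link in links if link != arrival_link]
```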

Example
- Flooding can cause a given packet to be sent multiple times over the same link
- (Figure omitted: a topology with source S and routers x, y, z, showing a duplicate packet on a shared link.)

Broadcasting Extension
- For each link and each source S, define a parent and children
  - Parent: the router with the shortest path to S (ties broken arbitrarily)
  - All other routers on the link are children
- Broadcasting rule: only the parent forwards a packet from S onto the link
- Problem fixed, but this is still broadcast, not multicast!

Multicast = Pruned Broadcast
- Start with full broadcast (RPB)
- If a leaf has no members, prune state
  - Send a non-membership report (NMR)
- If all children of router R prune, then R sends an NMR to its parent
- New joins send a graft to undo pruning (sketch below)
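Extending the earlier RPF sketch, a DVMRP-style per-packet decision might look like the following once parent election and prune state are added. The data structures and helper names are invented for illustration and are not the protocol's actual message formats:

```python
def rpm_forward(source, group, arrival_link, links,
                unicast_next_link, is_parent_on, pruned):
    """Reverse Path Multicast sketch: the RPF check, the parent rule for shared
    links, and suppression of links pruned for this (source, group).
    - unicast_next_link(source): link this router uses to reach the source
    - is_parent_on(link, source): is this router the elected parent for source on link?
    - pruned: set of (link, source, group) triples learned from NMR/prune messages
    """
    if arrival_link != unicast_next_link(source):
        return []                          # fails the reverse-path check: drop
    out = []
    for link in links:
        if link == arrival_link:
            continue
        if not is_parent_on(link, source):
            continue                       # only the parent forwards onto this link
        if (link, source, group) in pruned:
            continue                       # downstream reported no group members
        out.append(link)
    return out
```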

Problems with Approach
- Starting with broadcast means that all first packets go everywhere
  - If the group has members on most networks, this is OK
  - But if the group is sparse, this is a lot of wasted traffic
- What about a different approach?
  - Source-specific tree vs. shared tree
  - Pruned broadcast vs. explicitly constructed tree

Core Based Trees (CBT)
- Ballardie, Francis, and Crowcroft, “Core Based Trees (CBT): An Architecture for Scalable Inter-Domain Multicast Routing”, SIGCOMM ’93
- Similar to Deering’s single-spanning-tree approach
- Unicast packet to the core, where it is forwarded to the multicast group
- Tree construction by receiver-based “grafts”
- One tree per group; only nodes on the tree are involved
- Reduces routing table state from O(S × G) to O(G)

Example
- Group members: M1, M2, M3; M1 sends data
- (Figure omitted: shared tree rooted at the core, showing control (join) messages and the data path.)

Disadvantages
- Sub-optimal delay
- Single point of failure (the core)
- Small, local groups may have a non-local core
- Need good core selection
  - Optimal choice (computing the topological center) is NP-complete

Why Isn’t Multicast Pervasive?
- Sound technology
- Implemented in most routers
- Used by many enterprises
- But not available on the public Internet

Possible Explanation [Holbrook & Cheriton ’99]
- Violates the ISP input-rate-based billing model
  - No incentive for ISPs to enable multicast!
  - No indication of group size (needed for billing)
- Hard to implement sender control
  - Any multicast app can be subject to a simple DoS attack!
- Multicast address scarcity
  - Global allocation required
- Awkward interdomain issues with “cores”

Solution: Single-Source Multicast
- Each group has only one source
- Use both the source and destination IP fields to define a group
  - Each source can allocate 16 million “channels”
- Use the RPM algorithm
- Add a counting mechanism
  - Use a recursive CountQuery message
- Use app-level relays to support multiple sources

Discussion
- Does multicast belong in the network layer?
  - Why not implement it at the end hosts?
- How important is economic analysis in protocol design?
  - Should the design drive economics, or the other way around?
- Multicast addresses are “flat”
  - Doesn’t that make it hard for routers to scale?
  - Address allocation and aggregation?
- Should everything be multicast?
  - What other delivery models are needed?

Using Multicast

A Few Examples
- Video distribution
- Digital fountain
- Web caching

Distributing Video
- Send the stream to a multicast address at a particular rate
- Paths to different receivers have different capacities
- At what rate do you send?
  - At the lowest capacity rate?
  - At the highest capacity rate?
  - Somewhere in between?
- None of these are good answers…

Layered Video
- Break video into layers
  - Low-rate video: only the lowest layer
  - High-rate video: all layers combined
  - Intermediate video: only some of the layers

Example of Rate/Quality Tradeoff
(Figure omitted: images encoded at 784, 1208, 1900, 3457, 5372, and 30274 bytes, illustrating the rate/quality tradeoff.)

Receiver-driven Layered Multicast
- Each layer is sent to a separate multicast group address
- Receivers join as many layers as their capacity can handle
- But how do receivers know the capacity of their path?
  - They can measure the loss rate
  - They can experiment with joins

Basic Algorithm
- Join a new layer when there is no congestion
  - Joining may cause congestion
  - Join infrequently when the join is likely to fail
- Drop the highest layer when there is congestion
  - Congestion detected through drops
  - Could use explicit feedback, delay
- Delicate design decisions:
  - How frequently to attempt a join?
  - How to scale up to large groups?

Join Timer
- Keep a join timer for each layer (sketch below)
- (Figure omitted: layers 1–4 with per-layer join timers j2, j3, j4, whose values grow after failed join experiments.)
- Use randomization to prevent synchronization
- Join timer expires → join the next larger layer (a “join experiment”)
- Congestion detected → drop the layer, increase its join timer
- Detection timer (d) expires → decrease the join timer for this layer
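A rough sketch of this state machine, with made-up constants rather than the paper's recommended settings; it only models the per-layer timers and the subscribed layer count, not packet handling:

```python
import random

class RlmReceiver:
    """Rough sketch of RLM's per-layer join timers (constants are illustrative)."""

    def __init__(self, num_layers, base=5.0, backoff=2.0, relax=0.5, cap=600.0):
        self.num_layers = num_layers
        self.subscribed = 1                      # always keep the base layer
        # join_timer[k]: current timer (seconds) governing attempts to join layer k
        self.join_timer = {k: base for k in range(2, num_layers + 1)}
        self.base, self.backoff, self.relax, self.cap = base, backoff, relax, cap

    def next_join_delay(self):
        """Randomized delay before the next join experiment; randomization
        keeps receivers from synchronizing their experiments."""
        nxt = self.subscribed + 1
        if nxt > self.num_layers:
            return None                          # already at the top layer
        t = self.join_timer[nxt]
        return random.uniform(0.5 * t, 1.5 * t)

    def on_join_timer_expired(self):
        """Join timer fired: run a join experiment by adding the next layer."""
        if self.subscribed < self.num_layers:
            self.subscribed += 1

    def on_congestion(self):
        """Congestion detected (e.g. through drops): drop the highest layer
        and back off that layer's join timer."""
        if self.subscribed > 1:
            dropped = self.subscribed
            self.subscribed -= 1
            self.join_timer[dropped] = min(self.cap,
                                           self.join_timer[dropped] * self.backoff)

    def on_detection_timer_expired(self):
        """A detection period passed with no congestion: relax the join timer
        for the next layer so joins become more frequent again."""
        nxt = self.subscribed + 1
        if nxt <= self.num_layers:
            self.join_timer[nxt] = max(self.base,
                                       self.join_timer[nxt] * self.relax)
```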

Scaling Problem
- Having receivers join independently does not scale
  - # of joins increases with group size → constant congestion
  - Joins interfere with each other → unfairness
    - Every time I join, I see congestion caused by you
  - Just reducing the join rate would slow convergence
- How do we coordinate join experiments?

Scaling Solution: Shared Learning
- Multicast the join announcement
- A node initiates a join only if any ongoing join is for a higher layer
- Congestion during a join → back off its own timer for that layer
  - It shares the bottleneck with the joiner
  - As if it had run its own experiment
- No congestion → a node joins the new layer only if it was the original joiner
  - It may not share the bottleneck with the joiner
- Convergence could still be slow
- (Figure omitted: source S and receivers R0–R5, showing a join of L4, congestion, a backoff, and a join of L2.)

RLM Paper
- Simulation validation weak
- Not clear how well it works at large scale in the real world
  - Large latencies a big problem
- Makes points about whether or not to use priority dropping
  - Isn’t it obvious that priority dropping would be better?

Priority-drop and Uniform-drop
- Uniform drop: drop packets randomly from all layers
- Priority drop: drop packets from higher layers first
- Sending rate ≤ bottleneck: no loss, no difference in performance
- Sending rate > bottleneck: important low-layer packets may be dropped → uniform-drop performance decreases
- Convex utility curve → users encouraged to remain at the maximum [McCanne, Jacobson 1996]
- Conclusion: uniform drop has the same performance and better incentives!

Later Work Contradicts [McCanne, Jacobson 1996]
- [Bajaj et al. 1998]: burstiness of traffic results in better performance for priority drop
  - 50–100% better performance
- Neither has good incentive properties
  - With n flows: P(drop own packet) = 1/n, P(drop another flow’s packet) = (n−1)/n
- Need Fair Queueing for good incentive properties
- Congestion collapse for large n

Assumptions
- High drop rates lower video quality
  - Is this always true?
- A new class of codes leads to the “digital fountain”
  - Quality is a function only of received bandwidth, not drop rate
  - No need for layering (except to be friendly to the network)

The Cache Consistency Problem
- Client downloads data from a server, and a copy is kept in some cache(s) along the way
- When other clients ask for the same data, if the request passes through one of these caches, the copy is returned
- Benefits: reduced load (server) and latency (client)
- Problem: stale data
  - How does a cache know when to get rid of data?
  - TTL is only a guess, not good enough
  - Need explicit cache invalidation

Traditional Cache Invalidation
- Server keeps track of all caches with a copy of the data
- Whenever the data is changed, the server contacts the relevant caches
- Can apply to a hierarchy of caches
- What is the problem with this?

Multicast Cache Invalidation
- Construct a hierarchy of caches
- Each cache is associated with a multicast group
  - All children of that cache join the group
- Caches send out periodic heartbeats
  - Invalidations are piggy-backed on heartbeats (sketch below)
- If a cache has an entry for that file, it then propagates the invalidation to its children
- Advantage: caches keep no state about their children
  - Delivery efficiency less important than state reduction
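A sketch of how one cache in such a hierarchy might process a parent's heartbeat; the message shape (a sequence number plus a list of invalidated URLs) and the class name are invented for illustration:

```python
class HierarchicalCache:
    """Toy cache node: it listens on its parent's multicast group for heartbeats
    and re-multicasts relevant invalidations to its own children's group. The
    parent keeps no per-child state; a child can detect missed heartbeats from
    gaps in the sequence numbers."""

    def __init__(self, send_to_children):
        self.entries = {}                  # url -> cached object
        self.last_heartbeat_seq = None
        self.send_to_children = send_to_children   # callable(seq, urls): multicast to child group

    def on_heartbeat(self, seq, invalidated_urls):
        if self.last_heartbeat_seq is not None and seq != self.last_heartbeat_seq + 1:
            pass                           # missed heartbeat(s): a real cache would resynchronize
        self.last_heartbeat_seq = seq
        relevant = [u for u in invalidated_urls if u in self.entries]
        for url in relevant:
            del self.entries[url]          # drop the stale copy
        if relevant:
            # Propagate only the invalidations this subtree might actually hold.
            self.send_to_children(seq, relevant)
```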

Reliable Multicast

How to Make Multicast Reliable?
- FEC can help, but isn’t perfect
- Must have retransmissions
- But the sender can’t keep state about each receiver
  - It has to be told when someone needs a packet

SRM Design Approach
- Let receivers detect lost packets
  - By holes in the sequence numbers
- They send a NACK when a loss is detected
- Any node can respond to a NACK
- NACK/response implosion averted through suppression
  - Send NACKs at random times
  - If you hear a NACK for the same data, reset your NACK timer
- If a node has the data, it resends it, using a similar randomized algorithm

Repair Request Timer Randomization
- Timer chosen from the uniform distribution on 2^i · [C1·d(S,A), (C1+C2)·d(S,A)] (sketch below)
  - A: node that lost the packet
  - S: source
  - C1, C2: algorithm parameters
  - d(S,A): latency between S and A
  - i: iteration of repair request tries seen
- Algorithm:
  - Detect loss → set timer
  - Receive a request for the same data → cancel timer, set new timer
  - Timer expires → send repair request
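A small sketch of that timer choice; the C1 and C2 values below are placeholders, not recommended settings:

```python
import random

def request_timer(d_sa, attempt, c1=2.0, c2=2.0):
    """SRM-style repair request delay: drawn uniformly from
    2**attempt * [c1 * d_sa, (c1 + c2) * d_sa], where d_sa is the estimated
    latency from the source S to this node A and `attempt` counts prior tries
    for the same data."""
    lo = c1 * d_sa
    hi = (c1 + c2) * d_sa
    return (2 ** attempt) * random.uniform(lo, hi)
```

Scaling the interval by the source-to-receiver latency lets nearby receivers request first, and the 2^i factor backs off repeated requests for the same data.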

Suppression
- Two kinds:
  - Deterministic suppression
  - Randomized suppression
- Subject of extensive but incomplete scaling analysis

Local Recovery
- Large groups with low loss correlation
  - Multicasting requests and repairs to the entire group wastes bandwidth
- Separate recovery multicast groups
  - E.g., hash the sequence number to a multicast group address (sketch below)
  - Only nodes experiencing loss join the group
  - Recovery delay sensitive to join latency
- TTL-based scoping
  - Send the request/repair with a limited TTL
  - How to set the TTL to reach a host that can retransmit?
  - How to make sure the retransmission reaches every host that heard the request?
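A toy version of the hashing idea; the number of recovery groups and the 239.192.0.0/24 range used here are arbitrary choices for the sketch, not part of any specific protocol:

```python
import hashlib

NUM_RECOVERY_GROUPS = 256      # arbitrary choice for this sketch

def recovery_group(seqno):
    """Map a lost packet's sequence number to one of a fixed set of
    administratively scoped recovery groups, so only receivers missing that
    packet need to join the corresponding group."""
    h = int(hashlib.md5(str(seqno).encode()).hexdigest(), 16) % NUM_RECOVERY_GROUPS
    return "239.192.0.%d" % h
```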

Application Layer Framing (ALF)
- Application should define the Application Data Unit (ADU)
- The ADU is the unit of error recovery
  - App can recover from whole-ADU loss
  - App treats partial ADU loss/corruption as whole loss
- App can process ADUs out of order

Multicast’s True Legacy

Benefits of Multicast
- Efficient delivery to multiple hosts (initial focus)
  - Addressed by SSM and other simple mechanisms
- Logical addressing (pleasant byproduct)
  - Provides a layer of indirection
  - Now the focus of much architecture research
  - Provided by DHTs and other kinds of name resolution mechanisms