Scaling Internet Routers Using Optics: Producing a 100 Tb/s Router. Ashley Green and Brad Rosen, February 16, 2004.

Similar presentations
1 EE384Y: Packet Switch Architectures Part II Load-balanced Switch (Borrowed from Isaac Keslassys Defense Talk) Nick McKeown Professor of Electrical Engineering.

1 Maintaining Packet Order in Two-Stage Switches Isaac Keslassy, Nick McKeown Stanford University.
Lecture 4. Topics covered in last lecture Multistage Switching (Clos Network) Architecture of Clos Network Routing in Clos Network Blocking Rearranging.
SDN + Storage.
1 IK1500 Communication Systems IK1330 Lecture 3: Networking Anders Västberg
Dynamic Topology Optimization for Supercomputer Interconnection Networks Layer-1 (L1) switch –Dumb switch, Electronic “patch panel” –Establishes hard links.
Configuring a Load-Balanced Switch in Hardware Srikanth Arekapudi, Shang-Tse (Da) Chuang, Isaac Keslassy, Nick McKeown Stanford University.
Jaringan Komputer Lanjut Packet Switching Network.
Clean Slate Design for the Internet Designing a Predictable Backbone Network with Valiant Load Balancing NSF 100 x 100 Clean.
Playback-buffer Equalization For Streaming Media Using Stateless Transport Prioritization By Wai-tian Tan, Weidong Cui and John G. Apostolopoulos Presented.
1 EL736 Communications Networks II: Design and Algorithms Class3: Network Design Modeling Yong Liu 09/19/2007.
Priority Scheduling and Buffer Management for ATM Traffic Shaping Authors: Todd Lizambri, Fernando Duran and Shukri Wakid Present: Hongming Wu.
Router Architecture : Building high-performance routers Ian Pratt
Submitters: Erez Rokah Erez Goldshide Supervisor: Yossi Kanizo.
Differentiated Services. Service Differentiation in the Internet Different applications have varying bandwidth, delay, and reliability requirements How.
Nick McKeown CS244 Lecture 6 Packet Switches. What you said The very premise of the paper was a bit of an eye- opener for me, for previously I had never.
Frame-Aggregated Concurrent Matching Switch Bill Lin (University of California, San Diego) Isaac Keslassy (Technion, Israel)
A Load-Balanced Switch with an Arbitrary Number of Linecards Isaac Keslassy, Shang-Tse Chuang, Nick McKeown.
Web Caching Schemes1 A Survey of Web Caching Schemes for the Internet Jia Wang.
Routers with a Single Stage of Buffering Sundar Iyer, Rui Zhang, Nick McKeown High Performance Networking Group, Stanford University,
Scaling Internet Routers Using Optics UW, October 16 th, 2003 Nick McKeown Joint work with research groups of: David Miller, Mark Horowitz, Olav Solgaard.
Isaac Keslassy, Shang-Tse (Da) Chuang, Nick McKeown Stanford University The Load-Balanced Router.
A Scalable Switch for Service Guarantees Bill Lin (University of California, San Diego) Isaac Keslassy (Technion, Israel)
Making Parallel Packet Switches Practical Sundar Iyer, Nick McKeown Departments of Electrical Engineering & Computer Science,
The Concurrent Matching Switch Architecture Bill Lin (University of California, San Diego) Isaac Keslassy (Technion, Israel)
1 Architectural Results in the Optical Router Project Da Chuang, Isaac Keslassy, Nick McKeown High Performance Networking Group
1 OR Project Group II: Packet Buffer Proposal Da Chuang, Isaac Keslassy, Sundar Iyer, Greg Watson, Nick McKeown, Mark Horowitz
Using Load-Balancing To Build High-Performance Routers Isaac Keslassy, Shang-Tse (Da) Chuang, Nick McKeown Stanford University.
1 ENTS689L: Packet Processing and Switching Buffer-less Switch Fabric Architectures Buffer-less Switch Fabric Architectures Vahid Tabatabaee Fall 2006.
048866: Packet Switch Architectures Dr. Isaac Keslassy Electrical Engineering, Technion The.
048866: Packet Switch Architectures Dr. Isaac Keslassy Electrical Engineering, Technion Scaling.
A Load-Balanced Switch with an Arbitrary Number of Linecards Isaac Keslassy, Shang-Tse (Da) Chuang, Nick McKeown Stanford University.
In-Band Flow Establishment for End-to-End QoS in RDRN Saravanan Radhakrishnan.
Scaling Internet Routers Using Optics Isaac Keslassy, Shang-Tse Da Chuang, Kyoungsik Yu, David Miller, Mark Horowitz, Olav Solgaard, Nick McKeown Department.
The Crosspoint Queued Switch Yossi Kanizo (Technion, Israel) Joint work with Isaac Keslassy (Technion, Israel) and David Hay (Politecnico di Torino, Italy)
1 EE384Y: Packet Switch Architectures Part II Load-balanced Switches Nick McKeown Professor of Electrical Engineering and Computer Science, Stanford University.
Fundamental Complexity of Optical Systems Hadas Kogan, Isaac Keslassy Technion (Israel)
1 Achieving 100% throughput Where we are in the course… 1. Switch model 2. Uniform traffic  Technique: Uniform schedule (easy) 3. Non-uniform traffic,
Optimal Load-Balancing Isaac Keslassy (Technion, Israel), Cheng-Shang Chang (National Tsing Hua University, Taiwan), Nick McKeown (Stanford University,
Algorithms for Self-Organization and Adaptive Service Placement in Dynamic Distributed Systems Artur Andrzejak, Sven Graupner,Vadim Kotov, Holger Trinks.
Pipelined Two Step Iterative Matching Algorithms for CIOQ Crossbar Switches Deng Pan and Yuanyuan Yang State University of New York, Stony Brook.
Localized Asynchronous Packet Scheduling for Buffered Crossbar Switches Deng Pan and Yuanyuan Yang State University of New York Stony Brook.
Load Balanced Birkhoff-von Neumann Switches
Nick McKeown CS244 Lecture 7 Valiant Load Balancing.
Merits of a Load-Balanced AAPN 1.Packets within a flow are transported to their correct destinations in sequence. This is due to the 1:1 logical connection.
ATM SWITCHING. SWITCHING A Switch is a network element that transfer packet from Input port to output port. A Switch is a network element that transfer.
1 Copyright © Monash University ATM Switch Design Philip Branch Centre for Telecommunications and Information Engineering (CTIE) Monash University
1 Flow Identification Assume you want to guarantee some type of quality of service (minimum bandwidth, maximum end-to-end delay) to a user Before you do.
Advance Computer Networking L-8 Routers Acknowledgments: Lecture slides are from the graduate level Computer Networks course thought by Srinivasan Seshan.
Designing Packet Buffers for Internet Routers Friday, October 23, 2015 Nick McKeown Professor of Electrical Engineering and Computer Science, Stanford.
Applied research laboratory 1 Scaling Internet Routers Using Optics Isaac Keslassy, et al. Proceedings of SIGCOMM Slides:
ISLIP Switch Scheduler Ali Mohammad Zareh Bidoki April 2002.
Winter 2006EE384x1 EE384x: Packet Switch Architectures I a) Delay Guarantees with Parallel Shared Memory b) Summary of Deterministic Analysis Nick McKeown.
Jon Turner Resilient Cell Resequencing in Terabit Routers.
Buffered Crossbars With Performance Guarantees Shang-Tse (Da) Chuang Cisco Systems EE384Y Thursday, April 27, 2006.
1 How scalable is the capacity of (electronic) IP routers? Nick McKeown Professor of Electrical Engineering and Computer Science, Stanford University
Reduced Rate Switching in Optical Routers using Prediction Ritesh K. Madan, Yang Jiao EE384Y Course Project.
Block-Based Packet Buffer with Deterministic Packet Departures Hao Wang and Bill Lin University of California, San Diego HSPR 2010, Dallas.
Virtual-Channel Flow Control William J. Dally
A Load Balanced Switch with an Arbitrary Number of Linecards I.Keslassy, S.T.Chuang, N.McKeown ( CSL, Stanford University ) Some slides adapted from authors.
A Load-Balanced Switch with an Arbitrary Number of Linecards Offense Anwis Das.
Input buffered switches (1)
scheduling for local-area networks”
CS 740: Advance Computer Networks Hand-out on Router Design
Advance Computer Networking
On-time Network On-chip
EE 122: Lecture 7 Ion Stoica September 18, 2001.
Chapter 3 Part 3 Switching and Bridging
Chapter 2 Switching.
Switching Chapter 2 Slides Prepared By: -
Presentation transcript:

Scaling Internet Routers Using Optics: Producing a 100 Tb/s Router Ashley Green and Brad Rosen February 16, 2004

Presentation Outline Motivation Avi's "Black Box" Black Box: Load-Balanced Switch Conclusion

Motivation 1. How can the capacity of Internet routers scale to keep up with growth in Internet traffic? 2. Why do we need optical technology to be introduced inside routers to help increase their capacity?

Motivation Q: How can the capacity of Internet routers scale to keep up with growth in Internet traffic? A: Since the underlying demand for network capacity continues to double every year, router capacity must increase correspondingly. My notes: The alternative would be for Internet providers to double the number of routers in their network each year, but this is impractical: the number of central offices would also have to double each year, and doubling the number of locations would require enormous capital investment and ever-growing support and maintenance infrastructure.

Motivation Q: Why do we need optical technology to be introduced inside routers to help increase their capacity? A: Each generation of router consumes more power than the last, and we have reached the limit for single-rack routers. The move toward multi-rack systems spreads power over multiple racks, but such systems have unpredictable performance and poor scalability. Optics?

Motivation As we saw with multi-rack routers, throughput is often unpredictable. Predictable throughput is limited by switching capacity (2.5 Tb/s) and by power consumption. Hence optics: virtually zero power consumption and 100 Tb/s in a single rack.

Presentation Outline Motivation Avi's "Black Box" Black Box: Load-Balanced Switch Conclusion

Avi’s Black Box Input Output ? Bits [packets] 1. Good scalability % throughput 3. Low power consumption 4. Sequenced packets 5. Fault tolerance (missing or failed line cards) …where they should be …in order [mostly]...on time [mostly]

Presentation Outline Motivation Avi's "Black Box" Black Box: Load-Balanced Switch Conclusion

Black Box: Load-Balanced Switch The most promising architecture to fulfill the role of the magic black box: 1. Has 100% throughput 2. Scalable (no central scheduler) 3. Amenable to optics

Black Box: Load-Balancing Switch Step 1: when a packet arrives at the first stage, the first switch transfers it to a VOQ. The intermediate input the packet goes to depends on the current configuration of the load-balancer; the packet is placed into the VOQ corresponding to its eventual output.

Black Box: Load-Balancing Switch Step 2: the VOQ is served by the second fixed, equal-rate switch.

Black Box: Load-Balancing Switch Step 3: the packet is transferred across the second switch to its output and then departs the system.
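A minimal Python sketch of the three steps above, assuming cyclic-shift permutations for both stages (the value of N, the function names, and the packet format are illustrative choices, not from the talk):

```python
from collections import deque

N = 4  # number of linecards (illustrative)

# voq[m][j]: VOQ at intermediate linecard m holding packets for output j
voq = [[deque() for _ in range(N)] for _ in range(N)]
out = [[] for _ in range(N)]  # packets delivered at each output

def step1_load_balance(t, inp, pkt_output, pkt):
    """Step 1: input `inp` sends to intermediate (inp + t) mod N,
    regardless of the packet's destination; the packet is queued in the
    VOQ corresponding to its eventual output."""
    mid = (inp + t) % N
    voq[mid][pkt_output].append(pkt)

def step2_3_serve(t):
    """Steps 2-3: intermediate m is connected to output (m + t) mod N;
    it serves the head of the matching VOQ and the packet departs."""
    for mid in range(N):
        dst = (mid + t) % N
        if voq[mid][dst]:
            out[dst].append(voq[mid][dst].popleft())

# One packet from input 0 to output 2, injected at t = 0:
step1_load_balance(0, inp=0, pkt_output=2, pkt="p0")
for t in range(N):           # run the second stage until it departs
    step2_3_serve(t)
assert "p0" in out[2]        # delivered when the shift lines up
```

Note that neither stage makes a scheduling decision: both follow a fixed, pre-determined sequence of permutations, which is what removes the central scheduler.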

Black Box: Load-Balancing Switch Why does the architecture guarantee the following expected outputs of the black-box scenario? 1. Scalability 2. 100% throughput 3. Low power consumption 4. Sequenced packets 5. Fault tolerance

Black Box: Load-Balancing Switch Scalability Achieves scalability via quantity: the 100 Tb/s router is accomplished via 640 linecards. Pushes the complexity into the mesh [this is where optics are used].

Black Box: Load-Balancing Switch 100% Throughput If packet arrivals are uniform, a fixed, equal-rate switch with virtual output queues has a guaranteed throughput of 100%. Real network traffic, however, is not uniform. So an extra load-balancing stage is needed to spread out non-uniform traffic, making it uniform and thereby achieving 100% throughput. The load-balancing device spreads packets evenly over the inputs of the second stage: fixed, equal-rate switching.

Black Box: Load-Balancing Switch 100% Throughput [proof outline] The load-balanced switch has 100% throughput for non-uniform arrivals for the following reason. Consider the arrival process a(t), with N × N traffic matrix Λ, to the switch. This process is transformed by the sequence of permutations in the load balancer, π1(t), into the arrival process to the second stage, b(t) = π1(t)·a(t). The VOQs are served by the sequence of permutations in the switching stage, π2(t). If the inputs and outputs are not over-subscribed, then the long-term service opportunities exceed the number of arrivals, and hence the system achieves 100% throughput.
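A small simulation illustrating the claim, under an extreme non-uniform pattern: a single flow from input 0 to output 1 at full line rate. A fixed, equal-rate switch alone would serve this flow only 1/N of the time; with the load-balancing first stage it departs at essentially full rate. The cyclic shifts for π1 and π2 and all parameters are illustrative assumptions:

```python
from collections import deque

N, T = 4, 400
voq = [[deque() for _ in range(N)] for _ in range(N)]
departures = 0

for t in range(T):
    # pi_1(t): load balancer is a cyclic shift; input 0 sends its packet
    # (destined to output 1) to intermediate linecard (0 + t) mod N
    voq[t % N][1].append(t)
    # pi_2(t): switching stage is also a cyclic shift; intermediate m is
    # connected to output (m + t) mod N and serves that VOQ if non-empty
    for m in range(N):
        dst = (m + t) % N
        if voq[m][dst]:
            voq[m][dst].popleft()
            departures += 1

print(departures / T)  # close to 1.0: the VOQ backlogs stay bounded
```

Because the load balancer splits the rate-1 flow into N sub-flows of rate 1/N, each VOQ sees arrivals at exactly the rate at which π2 serves it, which is the uniformization argument of the proof outline in miniature.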

Switch Reconfigurations

Black Box: Load-Balancing Switch Low Power Consumption The switch fabric of this architecture enables low power consumption because: 1. it is essentially transparent, 2. it consumes no power, and 3. it eliminates power-hungry conversions between the electrical and optical domains. Replace the fixed, equal-rate switch with N² fixed channels at rate R/N. Replace the two switches with a single switch running twice as fast. … [similar logic for meshes]
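A back-of-the-envelope check of the mesh replacement described above. The per-linecard rate R below is an assumption on my part (roughly 100 Tb/s divided over the 640 linecards); only N = 640 and the 100 Tb/s aggregate appear in the talk:

```python
N = 640                 # linecards (from the talk)
R = 160e9               # assumed per-linecard rate, bits/s

total = N * R           # aggregate capacity of the router
channel = 2 * R / N     # rate of each of the N^2 fixed channels once the
                        # two stages are combined into one at rate 2R/N

print(total / 1e12)     # -> 102.4  (Tb/s, i.e. the "100 Tb/s" class)
print(channel / 1e9)    # -> 0.5    (Gb/s per fixed channel)
```

The point of the arithmetic: each fixed channel runs slowly enough (hundreds of Mb/s) that it needs no reconfiguration and no scheduler, which is what makes a passive optical mesh feasible.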

Black Box: Load-Balancing Switch Sequenced Packets The load-balancer spreads packets without regard to their final destination or when they depart. E.g., if two packets arrive at the same time, they are spread to different intermediate linecards, and it is possible their departure order will be reversed. Solution: Full Ordered Frames First (FOFF). Geared towards this 100 Tb/s router. Bounds the difference in lengths of the second-stage VOQs. Resequencing happens in a third-stage buffer [because the number of packets out of sequence is bounded]. FOFF is run locally at each linecard, using only the information available at that linecard.

FOFF Input i maintains N FIFO queues Q1 … QN; an arriving packet destined to output j is placed in Qj. Every N time slots, the input selects one queue to serve for the next N time slots. First, it picks round-robin from among the queues holding N or more packets; if there are no such queues, it picks round-robin from among the non-empty queues. Up to N packets from the same queue, and hence destined to the same output, are transferred to different intermediate linecards over the next N time slots. A pointer keeps track of the last intermediate linecard each flow sent a packet to; the next packet is always sent to the next intermediate linecard. If there is always at least one queue with N packets, the packets will be uniformly spread over the second stage and there will be no mis-sequencing: all the VOQs that receive packets belonging to a flow receive the same number of packets, so they all face the same delay and the packets are not mis-sequenced. Mis-sequencing only arises when no queue has N packets; but the amount of mis-sequencing is then bounded, and is corrected in the third stage using a FIXED-LENGTH resequencing buffer.
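A sketch of the FOFF input scheduling just described, for one input linecard. The class and variable names are mine, and this models only the queue-selection and spreading logic, not the timing of the mesh:

```python
from collections import deque

class FoffInput:
    def __init__(self, N):
        self.N = N
        self.q = [deque() for _ in range(N)]  # one FIFO per output (Q1..QN)
        self.rr = 0                           # round-robin pointer over queues
        self.spread = [0] * N                 # per-flow pointer: next
                                              # intermediate linecard to use

    def arrive(self, pkt, output):
        self.q[output].append(pkt)

    def pick_queue(self):
        """Every N slots: prefer a full frame (>= N packets), else any
        non-empty queue; both choices are round-robin from self.rr."""
        full = [j for j in range(self.N) if len(self.q[j]) >= self.N]
        cand = full or [j for j in range(self.N) if self.q[j]]
        if not cand:
            return None
        j = min(cand, key=lambda c: (c - self.rr) % self.N)
        self.rr = (j + 1) % self.N
        return j

    def serve_frame(self):
        """Send up to N packets of the chosen queue, each to the next
        intermediate linecard after the flow's spread pointer."""
        j = self.pick_queue()
        sent = []
        if j is None:
            return sent
        for _ in range(self.N):
            if not self.q[j]:
                break
            mid = self.spread[j]
            sent.append((self.q[j].popleft(), mid))
            self.spread[j] = (mid + 1) % self.N
        return sent

inp = FoffInput(N=4)
for k in range(4):
    inp.arrive("p%d" % k, output=2)   # a full frame for output 2
print(inp.serve_frame())  # -> [('p0', 0), ('p1', 1), ('p2', 2), ('p3', 3)]
```

A full frame is spread over all N intermediate linecards exactly once, which is why the resulting VOQ lengths, and hence the delays seen by the frame's packets, stay equal.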

Black Box: Load-Balancing Switch Fault Tolerance No centralized scheduler (no single point of failure). Flexible linecard placement (failure of one linecard will not make the whole system fail). Because the switching fabric does not rely on hard-coded linecard placement, the MEMS devices can accommodate dynamic linecard configurations. [If there were a constant "churn" of linecards, we would not expect the power savings normally achieved from keeping the MEMS configurations relatively static.]

Partitioned Switch Fabric

Partitioned Switch Thm: We need at most M = L + G - 1 static paths, where each path can support a rate of up to 2R, to spread traffic uniformly over any set of n <= N = G x L linecards that are present, so that each pair of linecards is connected at rate 2R/n.
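Plugging numbers into the bound, for a few illustrative partitions of a 640-linecard router (the groupings and the per-linecard rate R are my own assumptions, not values from the talk):

```python
R = 160e9                 # assumed per-linecard rate, bits/s

def paths_needed(L, G):
    """M = L + G - 1 static paths, each supporting rate up to 2R."""
    return L + G - 1

for (L, G) in [(16, 40), (40, 16), (64, 10)]:
    N = L * G             # 640 linecards in every case
    M = paths_needed(L, G)
    n = N                 # suppose all linecards are present
    pair_rate = 2 * R / n # rate connecting each pair of linecards
    print(L, G, M, pair_rate / 1e6)  # M paths; pair rate in Mb/s
```

Note how M grows only as L + G - 1 rather than as N = L x G: the static-path count stays small even as the linecard count scales, which is what lets the MEMS configuration remain (mostly) fixed.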

Hybrid Optical and Electrical

Hybrid Electro-Optical Switch Thm: There is a polynomial-time algorithm that finds a static configuration for each MEMS switch, and a fixed-length sequence of permutations for the electronic crossbars, to spread packets over the paths.

Optical Switch Fabric

Presentation Outline Motivation Avi's "Black Box" Black Box: Load-Balanced Switch Conclusion

Given optical technology, we can implement a router that 1. scales, 2. has 100% throughput, 3. consumes low power, 4. delivers ordered packets, and 5. is fault tolerant. Future considerations include further power reduction: replacing the hybrid electro-optical switch with an all-optical fabric.