A Proposed Architecture for the GENI Backbone Platform

Presentation transcript:

A Proposed Architecture for the GENI Backbone Platform
Jon Turner
jon.turner@wustl.edu
http://www.arl.wustl.edu/~jst/

GENI Backbone Platform
- Flexible infrastructure for experimental networks
- Implements two primary abstractions:
  - metalinks – abstraction of physical links
  - metarouters – abstraction of physical network devices
- Metalinks
  - point-to-point or multipoint
  - point-to-point links may have provisioned bandwidth
  - built on top of substrate links
- Metarouters
  - substrate platform provides generic resources
  - variety of resource types with minimal limitations on use
  - functionality defined by researchers
  - may forward packets, switch TDM circuits, or implement multimedia processing functions
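The two abstractions described above can be illustrated with a minimal data model. This is only a sketch: the class names and fields are illustrative, not taken from any SPP or GENI codebase.

```python
from dataclasses import dataclass, field

@dataclass
class Metalink:
    """Abstraction of a physical link, built on top of a substrate link."""
    link_id: int
    endpoints: list          # two for point-to-point, more for multipoint
    bandwidth_mbps: int = 0  # provisioned bandwidth (point-to-point only)

    def is_multipoint(self):
        return len(self.endpoints) > 2

@dataclass
class Metarouter:
    """Abstraction of a physical network device; its functionality is
    defined by the researcher, not by the substrate platform."""
    router_id: int
    resources: dict = field(default_factory=dict)  # e.g. {"NPE": 2, "GPE": 1}
    metalinks: list = field(default_factory=list)

# A point-to-point metalink with 100 Mb/s provisioned between nodes 1 and 2,
# attached to a metarouter that was allocated one NPE
ml = Metalink(link_id=1, endpoints=[1, 2], bandwidth_mbps=100)
mr = Metarouter(router_id=7, resources={"NPE": 1}, metalinks=[ml])
```

The point is only that the substrate hands out generic resources; what a metarouter does with them is left entirely to the researcher.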

GENI Backbone Overview
[Diagram: metarouters and metalinks layered over substrate platforms and substrate links, with a metanet protocol stack above]
- substrate links may run over Ethernet, IP, MPLS, . . .

High Level Objectives
- Enable experimental nets and minimize obstacles
  - focus on providing resources – architectural neutrality
  - enable use by real end users
- Stability and reliability
  - reliable core platform
  - effective isolation of experimental networks
- Ease of use
  - enable researchers to be productive without heroic efforts
  - toolkits that facilitate use of high-performance elements
- Scalable performance
  - enable >100K users, wide range of metarouter capacities
  - high ratio of processing to IO
- Technology diversity and adaptability
  - variety of processing resources – add more types later

Advanced Telecom Computing Architecture
- New industry standard
  - defines standard packaging
  - enables assembly of multi-supplier systems
- Standard 14 slot chassis
  - high bandwidth serial links
  - variety of processing blades
  - redundant switch blades
  - integrated management
- Relevance to GENI
  - flexible, open subsystems
  - compelling research platform
  - faster transition of research ideas into practice

[Diagram: carrier card with optional mezzanine cards, power connector, fabric connector, and optional Rear Transition Module]

Speaker notes: So, let me say a little more about this, because I think it's a really important development for the networking research community, and one that many of us have not been aware of. What is ATCA? It's a packaging standard that defines standard printed circuit board formats, common connector definitions, and standard backplanes and chassis. Why, you ask, should networking researchers care about a packaging standard? Because it has led to the creation of a new market for intermediate board-level subsystems that can be purchased from different suppliers and assembled into systems with tremendous flexibility. The key thing to understand is that because these subsystems are produced and sold by subsystem vendors to multiple systems companies, they must be flexible and open. This means that networking researchers can buy these components and assemble them into novel systems that they can configure and program. That is, for the first time, we have access to experimental platforms that are no less powerful than the hardware platforms produced by major systems companies, but these are under our control.

Virtualized Line Card Architecture
- Similar to conventional router architecture
  - line cards connected by a switch fabric
  - traffic makes a single pass through the switch fabric
- Requires fine-grained virtualization
  - line cards must support multiple meta line cards
  - requires intra-component resource sharing and traffic isolation
- Mismatch for current device technologies
  - multi-core NPs lack memory protection mechanisms
  - lack of tools and protection mechanisms for independent, partial FPGA designs
- Hard to vary ratio of processing to IO

[Diagram: input line cards ILC1..ILCn and output line cards OLC1..OLCn, each with substrate and processing resources, connected by a switch fabric]

Speaker notes: So, how can we best use technology components like these to construct a diversified router? There are several approaches one can take. The first, shown here, is similar to a conventional router architecture, in the sense that it consists of line cards connected by a switch fabric, with packets passing from input line cards through the switch fabric to output line cards. To enable multiple metarouters to co-exist on such a system, we need to diversify the line cards by equipping them with generic processing resources that can be divided up among the different meta line cards. One way to do this is to allocate the different processor cores of a network processor to different meta line cards. Unfortunately, this kind of fine-grained diversification is difficult to achieve with current network processors, which lack the protection mechanisms needed to isolate different meta line cards from one another.

Processing Pool Architecture
- Processing Engines (PEs) implement metarouters
  - variety of types
- Line Cards terminate external links and mux/demux metalinks
- Shared PEs include a substrate component
- Dedicated PEs need not include substrate
  - use switch and Line Cards for protection and isolation
- PEs in larger metarouters linked by a metaswitch
- Larger metarouters may own Line Cards
  - allows metanet to define transmission format/framing
  - configured by lower-level transport network

[Diagram: pool of PEs and Line Cards interconnected by a switch]

Ensuring Metarouter Isolation
[Diagram: two metarouters, each spanning several PEs and an LC, sharing one switch; constrained routing plus a nonblocking switch fabric prevent interference]
- Constrain routing on a switch-port basis
  - use switch with VLAN support for constrained routing
  - substrate controls VLAN configuration
- Nonblocking switch fabric ensures traffic isolation
  - congestion at one port does not affect traffic to another
  - traffic within clusters cannot interfere

Speaker notes: Here's an example that illustrates the issue. We have two metarouters, each of which involves several processing engines. The PEs within each metarouter communicate through the switch fabric. To isolate these PE-to-PE traffic flows from one another, we need to do two things. First, we need to constrain the routing. The 10 GE switch blades that are becoming available over the next year can route packets based on VLAN tags, and these VLAN tags can be configured through an administrative interface available only to the substrate, not the metarouters. This makes it possible to ensure that a PE in one metarouter cannot send traffic to a PE in another. The second thing we need to do is to isolate the traffic flows; because the different metarouters occupy distinct physical ports on the switch fabric, we get this property for free, so long as the switch fabric is nonblocking. There is one last thing I've glossed over until now, and that has to do with the outgoing traffic streams sent by different metarouters to shared line cards. With current technology components, the router substrate cannot rate-limit the PEs at the switch fabric inputs. However, we can require, as a matter of policy, that metarouters rate-limit these flows, and the outgoing line cards can monitor the metarouters' traffic flows to ensure that the rate limits are being observed. Metarouters that violate the limits can then be disabled by the substrate, in order to protect other metarouters.
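The constrained-routing side of this isolation story can be sketched in a few lines. This is a software model only: the hypothetical `SubstrateSwitch` class stands in for the VLAN configuration that the substrate would actually apply through a switch blade's administrative interface.

```python
class SubstrateSwitch:
    """Sketch of VLAN-based constrained routing: the substrate assigns
    each metarouter's switch ports to a private VLAN, so a PE in one
    metarouter cannot address a PE in another."""

    def __init__(self):
        self.port_vlan = {}  # switch port -> VLAN tag

    def assign(self, port, metarouter_id):
        # Only the substrate (never a metarouter) writes this table,
        # mirroring the administrative-interface restriction.
        self.port_vlan[port] = metarouter_id

    def can_forward(self, src_port, dst_port):
        # A frame is forwarded only within a single VLAN.
        return self.port_vlan.get(src_port) == self.port_vlan.get(dst_port)

sw = SubstrateSwitch()
sw.assign(1, metarouter_id=100)  # metarouter 1 owns ports 1 and 2
sw.assign(2, metarouter_id=100)
sw.assign(3, metarouter_id=200)  # metarouter 2 owns port 3
```

With this configuration, traffic between ports 1 and 2 is allowed, while any frame from port 1 toward port 3 is dropped; the nonblocking-fabric property (congestion at one port not affecting another) is what the model deliberately leaves out.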

Current Development System
- Network Processor blades
  - dual IXP 2850 NPs
  - 3x RDRAM, 3x SRAM, TCAM
  - dual 10GE interfaces
  - 10x 1GE IO interfaces
- General purpose blades
  - dual Xeons, 4x GigE, disk
- 10 Gb/s Ethernet switch
  - VLANs for traffic isolation

Prototype Operation
[Diagram: GPE, NPE and LC blades attached to the switch; the LC runs an ingress pipeline (ExtRx, Key Extract, Lookup (2 ME), Hdr Format (1 ME), Queue Manager, IntTx) and an egress pipeline (IntRx, Rate Monitor, Lookup, Queue Manager, ExtTx), backed by TCAM, DRAM and SRAM; the NPE runs Rx, Key Extract, Lookup (1 ME), Hdr Format, Queue Manager (2 ME), Tx]
- One NP blade (with RTM) implements the Line Card
  - separate ingress/egress pipelines
- Second NP hosts multiple metarouter fast-paths
  - multiple static code options for diverse metarouters
  - configurable filters and queues
- GPEs host a conventional OS with virtual machines

Line Card
[Diagram: ingress pipeline (ExtRx, Key Extract, Lookup (2 ME), Hdr Format (1 ME), Queue Manager, IntTx) and egress pipeline (IntRx, Rate Monitor, Lookup, Queue Manager, ExtTx) between the external and switch interfaces, backed by TCAM, DRAM and SRAM]
- Ingress side demuxes with TCAM filters (port #s)
- Egress side provides traffic isolation per interface
- Target: 10 Gb/s line rate for 80 byte packets
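The ingress demux on port numbers can be sketched as a first-match ternary lookup. This is a software stand-in only; the real line card performs this match in the hardware TCAM, and the filter values below are hypothetical.

```python
def tcam_demux(filters, pkt):
    """First-match TCAM-style lookup: each filter is (value, mask, result).
    A field matches a filter when (field & mask) == (value & mask)."""
    for value, mask, result in filters:
        if pkt["udp_dst"] & mask == value & mask:
            return result
    return None  # no metalink matched; packet goes to a default/drop path

# Hypothetical filters mapping destination UDP port ranges to metarouters
filters = [
    (0x8000, 0xFF00, "metarouter-A"),  # ports 0x8000-0x80FF
    (0x9000, 0xF000, "metarouter-B"),  # ports 0x9000-0x9FFF
]
```

A packet arriving on port 0x8042 would be steered to metarouter A, one on 0x9ABC to metarouter B, and anything outside both ranges falls through unmatched. A real TCAM returns the highest-priority match in a single cycle; the loop here just models that priority order.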

NPE Hosting Multiple Metarouters
[Diagram: Rx (2 ME), Substrate Decap, Parse, Lookup (1 ME), Hdr Format, Queue Manager and Tx blocks, backed by TCAM, DRAM and SRAM]
- Parse and Hdr Format include MR-specific code
  - Parse extracts header fields to form the lookup key
  - Hdr Format makes required changes to header fields
- Lookup block uses an opaque key for the TCAM lookup and returns an opaque result for use by Hdr Format
- Multiple static code options can be supported
  - multiple metarouters per code option
  - each has its own filters, queues and block of private memory
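The Parse → Lookup → Hdr Format flow can be sketched as below. The pipeline shape follows the slide; everything else (the function names, the dict standing in for the TCAM, the sample "code option") is an illustrative assumption.

```python
def npe_fast_path(packet, parse, lookup_table, hdr_format):
    """Sketch of the NPE fast path: Parse and Hdr Format carry the
    metarouter-specific code; the substrate's Lookup block never
    interprets the key or the result, it only matches and returns them."""
    key = parse(packet)                # MR-specific: build lookup key from header fields
    result = lookup_table.get(key)     # opaque key -> opaque result (TCAM stand-in)
    return hdr_format(packet, result)  # MR-specific: rewrite header fields

# Hypothetical metarouter code option: forward on the destination address
parse = lambda pkt: pkt["dst"]
table = {"10.1.2.3": ("port3", 63)}    # opaque result = (output port, decremented TTL)

def hdr_format(pkt, result):
    out_port, new_ttl = result
    return {**pkt, "ttl": new_ttl, "out": out_port}

out = npe_fast_path({"dst": "10.1.2.3", "ttl": 64}, parse, table, hdr_format)
```

Because the key and result are opaque to the Lookup block, a second code option (say, one that keys on an MPLS-style label instead of an address) could share the same Lookup machinery while keeping its own filters and queues.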

Possible Additional PE Types
- ATCA carrier card
  - 10 GE switch with connections to each switch blade
  - 4 mezzanine card slots with 10 GE external IO interface to RTM connector
- FPGA mezzanine card
  - Xilinx Virtex-5 LX330
  - over 200K flip-flops and LUT6s
  - over 1 MB of on-chip SRAM
  - on-board SDRAM and SRAM chips
- Cavium NP card
  - up to 16 MIPS processor cores
  - 600 MHz, dual issue per core
  - L1 cache, shared L2 (2 MB)
  - more conventional SMP programming style

[Diagram: carrier card with FPGA PEs (FPE) plus SRAM/SDRAM, a 10 GE switch, glue logic (GLU), SPI-4, and dual Cavium NPs with DRAM]

Scaling Up
- Baseline config (1+1)
  - 14 slot ATCA chassis
  - separate blade server for GPEs
  - 14 GPEs + 9 NPEs + 3 LCs
  - 10 GE inter-chassis connection
- Multi-chassis direct
  - up to seven chassis pairs
  - 98 GPEs + 63 NPEs + 21 LCs
  - 2-hop forwarding as needed
- Multi-chassis indirect
  - up to 24 chassis pairs
  - 336 GPEs + 216 NPEs + 72 LCs
  - six 24-port 10GE switches interconnect the chassis pairs

[Diagram: chassis pairs (ATCA chassis plus blade server) connected directly, or indirectly through six 24-port 10 GE switches]
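The component counts in the three configurations are just linear multiples of the baseline chassis pair (14 GPEs, 9 NPEs, 3 LCs); a quick check of the arithmetic:

```python
def scaled(pairs, gpes=14, npes=9, lcs=3):
    """Each chassis pair contributes the baseline (1+1) complement:
    14 GPEs, 9 NPEs and 3 LCs."""
    return pairs * gpes, pairs * npes, pairs * lcs

# baseline, multi-chassis direct (7 pairs), multi-chassis indirect (24 pairs)
configs = {1: scaled(1), 7: scaled(7), 24: scaled(24)}
```

This reproduces the slide's figures: (14, 9, 3), (98, 63, 21) and (336, 216, 72) respectively.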

Summary
- GENI requires a capable backbone platform
  - to enable a wide range of experimental research
  - to support production use of experimental networks by large numbers of non-research users
- Required hardware building blocks are at hand
  - ATCA provides a useful framework
  - powerful server blades and NP blades
  - high performance switching components
- What's still to do?
  - software to manage/configure resources for users
  - sample metarouter code for NPs
  - FPGA-based processing engines
  - tools to speed up metarouter development
  - demonstration of multi-PE metarouters