
A Proposed Architecture for the GENI Backbone Platform




1 A Proposed Architecture for the GENI Backbone Platform
Jon Turner

2 GENI Backbone Platform
Flexible infrastructure for experimental networks
Implements two primary abstractions
- metalinks – abstraction of physical links
- metarouters – abstraction of physical network devices
Metalinks
- point-to-point or multipoint
- point-to-point links may have provisioned bandwidth
- built on top of substrate links
Metarouters
- substrate platform provides generic resources
- variety of resource types with minimal limitations on use
- functionality defined by researchers
- may forward packets, switch TDM circuits or implement multimedia processing functions
(a data-model sketch follows)
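A minimal sketch of the two abstractions as a substrate data model, in Python. The class and field names (Metalink, Metarouter, bandwidth_mbps) are hypothetical illustrations, not part of the proposal:

```python
from dataclasses import dataclass, field

@dataclass
class Metalink:
    """Abstraction of a physical link; rides on a substrate link."""
    endpoints: list[str]                  # metarouter names; 2 for point-to-point
    bandwidth_mbps: float | None = None   # provisioned rate, if any

    @property
    def is_multipoint(self) -> bool:
        return len(self.endpoints) > 2

@dataclass
class Metarouter:
    """Abstraction of a network device; functionality is researcher-defined."""
    name: str
    resources: dict[str, int] = field(default_factory=dict)  # e.g. {"NPE": 2}
    metalinks: list[Metalink] = field(default_factory=list)

# A point-to-point metalink with 1 Gb/s provisioned between two metarouters
mr_a, mr_b = Metarouter("mr-a"), Metarouter("mr-b")
link = Metalink(endpoints=["mr-a", "mr-b"], bandwidth_mbps=1000)
mr_a.metalinks.append(link)
mr_b.metalinks.append(link)
```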

3 GENI Backbone Overview
[Figure: metanet protocol stack – metarouters and metalinks built on substrate platforms and substrate links; substrate links may run over Ethernet, IP, MPLS, . . .]

4 High Level Objectives
Enable experimental nets and minimize obstacles
- focus on providing resources – architectural neutrality
- enable use by real end users
Stability and reliability
- reliable core platform
- effective isolation of experimental networks
Ease of use
- enable researchers to be productive without heroic efforts
- toolkits that facilitate use of high performance elements
Scalable performance
- enable >100K users, wide range of metarouter capacities
- high ratio of processing to IO
Technology diversity and adaptability
- variety of processing resources – add more types later

5 Advanced Telecom Computing Architecture
New industry standard
- defines standard packaging
- enables assembly of multi-supplier systems
Standard 14 slot chassis
- high bandwidth serial links
- variety of processing blades
- redundant switch blades
- integrated management
Relevance to GENI
- flexible, open subsystems
- compelling research platform
- faster transition of research ideas into practice
[Figure: ATCA carrier card with optional mezzanine cards, power connector, fabric connector, and optional Rear Transition Module]

Speaker notes: Let me say a little more about this, because I think it is a really important development for the networking research community, and one that many of us have not been aware of. So what is ATCA? It is a packaging standard that defines standard printed circuit board formats, common connector definitions, and standard backplanes and chassis. Why should networking researchers care about a packaging standard? Because it has led to the creation of a new market for intermediate board-level subsystems that can be purchased from different suppliers and assembled into systems with tremendous flexibility. The key thing to understand is that because these subsystems are produced and sold by subsystem vendors to multiple systems companies, they must be flexible and open. This means that networking researchers can buy these components and assemble them into novel systems that they can configure and program. That is, for the first time, we have access to experimental platforms that are no less powerful than the hardware platforms produced by major systems companies, and these are under our control.

6 Virtualized Line Card Architecture
Similar to conventional router architecture
- line cards connected by a switch fabric
- traffic makes a single pass through the switch fabric
Requires fine-grained virtualization
- line cards must support multiple meta line cards
- requires intra-component resource sharing and traffic isolation
Mismatch for current device technologies
- multi-core NPs lack memory protection mechanisms
- lack of tools and protection mechanisms for independent, partial FPGA designs
Hard to vary ratio of processing to IO
[Figure: input line cards ILC1..ILCn and output line cards OLC1..OLCn connected through a switch fabric; each line card includes substrate and processing resources]

Speaker notes: So, how can we best use technology components like these to construct a diversified router? There are several approaches one can take. The first, shown here, is similar to a conventional router architecture in that it consists of line cards connected by a switch fabric, with packets passing from input line cards through the switch fabric to output line cards. To enable multiple metarouters to co-exist on such a system, we need to diversify the line cards by equipping them with generic processing resources that can be divided up among the different meta line cards. One way to do this is to allocate the different processor cores of a network processor to different meta line cards. Unfortunately, this kind of fine-grained diversification is difficult to achieve with current network processors, which lack the protection mechanisms needed to isolate meta line cards from one another.

7 Processing Pool Architecture
Processing Engines (PEs) implement metarouters
- variety of types
Line Cards terminate ext. links, mux/demux metalinks
Shared PEs include substrate component
Dedicated PEs need not include substrate
- use switch and Line Cards for protection and isolation
PEs in larger metarouters linked by metaswitch
Larger metarouters may own Line Cards
- allows metanet to define transmission format/framing
- configured by lower-level transport network
[Figure: pool of PEs and Line Cards interconnected by a switch]
(a PE-allocation sketch follows)
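A minimal sketch of how a substrate might dedicate PEs of different types from the shared pool to metarouters; the PEPool class and its interface are assumptions for illustration only:

```python
from collections import defaultdict

class PEPool:
    """Tracks free PEs by type and which metarouter owns each (sketch)."""
    def __init__(self):
        self.free = defaultdict(list)   # PE type -> available PE ids
        self.owner = {}                 # PE id -> metarouter name

    def add_pe(self, pe_type: str, pe_id: str) -> None:
        self.free[pe_type].append(pe_id)

    def allocate(self, metarouter: str, pe_type: str, count: int) -> list[str]:
        """Dedicate `count` PEs of `pe_type` to a metarouter, or fail whole."""
        if len(self.free[pe_type]) < count:
            raise RuntimeError(f"not enough free {pe_type} PEs")
        pes = [self.free[pe_type].pop() for _ in range(count)]
        for pe in pes:
            self.owner[pe] = metarouter
        return pes

pool = PEPool()
for i in range(9):
    pool.add_pe("NPE", f"npe-{i}")
print(pool.allocate("mr-a", "NPE", 2))   # e.g. ['npe-8', 'npe-7']
```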

8 Ensuring Metarouter Isolation
[Figure: two metarouters, each spanning several PEs and LCs; constrained routing plus a nonblocking switch fabric means no interference]
Constrain routing on switch port basis
- use switch with VLAN support for constrained routing
- substrate controls VLAN configuration
Nonblocking switch fabric ensures traffic isolation
- congestion at one port does not affect traffic to another
- traffic within clusters cannot interfere

Speaker notes: Here is an example that illustrates the issue. We have two metarouters, each of which involves several processing engines. The PEs within each metarouter communicate through the switch fabric. To isolate these PE-to-PE traffic flows from one another, we need to do two things. First, we need to constrain the routing. The 10 GE switch blades becoming available over the next year can route packets based on VLAN tags, and these VLAN tags can be configured through an administrative interface available only to the substrate, not the metarouters. This makes it possible to ensure that a PE in one metarouter cannot send traffic to a PE in another. Second, we need to isolate the traffic flows; because the different metarouters occupy distinct physical ports on the switch, we get this property for free, so long as the switch fabric is nonblocking. One last thing I have glossed over until now has to do with the outgoing traffic streams sent by different metarouters to shared line cards. With current technology components, the substrate cannot rate limit the PEs at the switch fabric inputs. However, we can require, as a matter of policy, that metarouters rate limit these flows, and the outgoing line cards can monitor the metarouters' traffic flows to ensure the rate limits are being observed. Metarouters that violate the limits can then be disabled by the substrate, in order to protect other metarouters (a sketch of such a monitor follows).
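A minimal sketch of that egress-side policy: a token-bucket meter per metarouter, with a callback standing in for the substrate's administrative disable action. The rate, burst size, and callback hook are assumptions, not part of the proposal's specification:

```python
import time

class RateMonitor:
    """Token-bucket meter per metarouter on an egress line card (sketch).

    Hypothetical: a real line card would meter in hardware; `disable_cb`
    stands in for the substrate's administrative disable action.
    """
    def __init__(self, rate_bps: float, burst_bytes: float, disable_cb):
        self.rate = rate_bps / 8.0          # bytes per second
        self.burst = burst_bytes
        self.disable_cb = disable_cb
        self.buckets = {}                   # metarouter -> (tokens, last_time)

    def observe(self, metarouter: str, pkt_bytes: int) -> bool:
        """Record a packet; returns False and disables the MR on violation."""
        now = time.monotonic()
        tokens, last = self.buckets.get(metarouter, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if pkt_bytes > tokens:
            self.disable_cb(metarouter)     # substrate disables the violator
            return False
        self.buckets[metarouter] = (tokens - pkt_bytes, now)
        return True

mon = RateMonitor(rate_bps=1e9, burst_bytes=64_000,
                  disable_cb=lambda mr: print(f"disabling {mr}"))
mon.observe("mr-a", 1500)   # within the 1 Gb/s contract -> True
```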

9 Current Development System
Network Processor blades
- dual IXP 2850 NPs
- 3xRDRAM, 3xSRAM, TCAM
- dual 10GE interfaces
- 10x1GE IO interfaces
General purpose blades
- dual Xeons, 4xGigE, disk
10 Gb/s Ethernet switch
- VLANs for traffic isolation

10 Prototype Operation
[Figure: GPE, NPE and LC blades attached to the switch; the LC's RTM provides the external interface. LC ingress pipeline: ExtRx, Key Extract, Lookup (2 ME), Hdr Format (1 ME), Queue Manager, IntTx; egress pipeline: IntRx, Rate Monitor, Queue Manager, ExtTx. NPE pipeline: Rx, Key Extract, Lookup (1 ME), Hdr Format, Queue Manager (2 ME), Tx. Both use TCAM, DRAM and SRAM.]
One NP blade (with RTM) implements Line Card
- separate ingress/egress pipelines
Second NP hosts multiple metarouter fast-paths
- multiple static code options for diverse metarouters
- configurable filters and queues
GPEs host conventional OS with virtual machines

11 Line Card
[Figure: ingress pipeline (ExtRx, Key Extract, Lookup (2 ME), Hdr Format (1 ME), Queue Manager, IntTx) and egress pipeline (IntRx, Rate Monitor, Queue Manager, ExtTx) between the external and switch interfaces, with TCAM, DRAM and SRAM]
Ingress side demuxes with TCAM filters (port #s)
Egress side provides traffic isolation per interface
Target 10 Gb/s line rate for 80 byte packets
(a demux sketch follows)
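A minimal sketch of the ingress demux step, assuming metalinks are identified by port numbers in the TCAM key. The (value, mask) ternary-match representation is the usual TCAM idiom; the table contents and metarouter names here are hypothetical:

```python
# Each TCAM entry is (value, mask, result): a key matches when
# key & mask == value & mask; the first matching entry wins.
TCAM = [
    # (value,  mask,   result = destination metarouter)
    (0x3E80, 0xFFFF, "mr-a"),   # exactly port 16000
    (0x3E90, 0xFFF0, "mr-b"),   # ports 16016-16031
    (0x0000, 0x0000, "drop"),   # default: no metalink configured
]

def demux(dst_port: int) -> str:
    """Return the metarouter (or 'drop') for an arriving packet's port."""
    for value, mask, result in TCAM:
        if dst_port & mask == value & mask:
            return result
    return "drop"

assert demux(16000) == "mr-a"
assert demux(16020) == "mr-b"
assert demux(9999) == "drop"
```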

12 NPE Hosting Multiple Metarouters
[Figure: NPE pipeline – Rx, Substr. Decap, Parse, Lookup (1 ME), Hdr Format, Queue Manager (2 ME), Tx – with TCAM, DRAM and SRAM]
Parse and Header Format include MR-specific code
- parse extracts header fields to form lookup key
- Hdr Format makes required changes to header fields
Lookup block uses opaque key for TCAM lookup and returns opaque result for use by Hdr Format
Multiple static code options can be supported
- multiple metarouters per code option
- each has own filters, queues and block of private memory
(a pipeline sketch follows)
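A minimal sketch of the opaque-key contract between the MR-specific blocks and the shared Lookup block: Parse produces a byte-string key, Lookup matches it against the metarouter's own filters without interpreting it, and Hdr Format interprets the equally opaque result. The exact-match filters, packet layout, and names are hypothetical (a real NPE would match in TCAM):

```python
class NPEPipeline:
    """Shared Lookup; Parse/HdrFormat supplied per metarouter (sketch)."""
    def __init__(self):
        self.filters = {}   # metarouter -> list of (key, result) exact filters

    def add_filter(self, mr: str, key: bytes, result: bytes) -> None:
        self.filters.setdefault(mr, []).append((key, result))

    def lookup(self, mr: str, key: bytes) -> bytes | None:
        # The substrate never interprets key or result; it only matches.
        for k, r in self.filters.get(mr, []):
            if k == key:
                return r
        return None

    def forward(self, mr: str, pkt: bytes, parse, hdr_format) -> bytes | None:
        key = parse(pkt)                  # MR-specific: build opaque key
        result = self.lookup(mr, key)     # shared: opaque TCAM-style match
        return hdr_format(pkt, result) if result else None

# Hypothetical IPv4-style metarouter: key is the 4-byte destination address
pipe = NPEPipeline()
pipe.add_filter("mr-a", bytes([10, 0, 0, 1]), b"\x02")   # result: out port 2
pkt = bytes([10, 0, 0, 1]) + b"payload"
out = pipe.forward("mr-a", pkt,
                   parse=lambda p: p[:4],
                   hdr_format=lambda p, r: r + p)        # prepend port tag
print(out)   # b'\x02\n\x00\x00\x01payload'
```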

13 Possible Additional PE Types
ATCA carrier card
- 10 GE switch with connections to each switch blade
- 4 mezzanine card slots with 10 GE
- ext. IO interface to RTM connector
FPGA mezzanine card
- Xilinx Virtex-5 LX330
- over 200K flip flops and LUT6s
- over 1 MB of on-chip SRAM
- on-board SDRAM and SRAM chips
Cavium NP card
- up to 16 MIPS processor cores
- 600 MHz, dual issue per core
- L1 cache, shared L2 (2 MB)
- more conventional SMP prog. style
[Figure: carrier card with two FPGA PEs (each with SDRAM and SRAM), two Cavium NPs with DRAM, glue logic (GLU), SPI-4 links, a 10 GE switch, power and flash]

14 Scaling Up
Baseline config (1+1)
- 14 slot ATCA chassis
- separate blade server for GPEs
- 14 GPEs + 9 NPEs + 3 LCs
- 10 GE inter-chassis connection
Multi-chassis direct
- up to seven chassis pairs
- 98 GPEs + 63 NPEs + 21 LCs
- 2-hop forwarding as needed
Multi-chassis indirect
- up to 24 chassis pairs
- 336 GPEs + 216 NPEs + 72 LCs
[Figure: baseline ATCA chassis plus blade server; direct interconnection of chassis pairs; indirect interconnection through six 24-port 10GE switches]
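The counts above scale linearly from the baseline of 14 GPEs, 9 NPEs and 3 LCs per chassis pair; a quick arithmetic check:

```python
def capacity(pairs: int) -> tuple[int, int, int]:
    """(GPEs, NPEs, LCs) for a given number of chassis pairs."""
    return 14 * pairs, 9 * pairs, 3 * pairs

print(capacity(7))    # multi-chassis direct:   (98, 63, 21)
print(capacity(24))   # multi-chassis indirect: (336, 216, 72)
```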

15 Summary
GENI requires capable backbone platform
- to enable wide range of experimental research
- to support production use of experimental networks by large numbers of non-research users
Required hardware building blocks are at hand
- ATCA provides useful framework
- powerful server blades and NP blades
- high performance switching components
What's still to do?
- software to manage/configure resources for users
- sample metarouter code for NPs
- FPGA-based processing engines
- tools to speed-up metarouter development
- demonstration of multi-PE metarouters




