
1 Workshop della Commissione Calcolo e Reti dell'INFN: Data Center Networking. Genoa, 28 May 2013
Fabio Bellini, Network Sales Engineer, EMEA WER

2 Active Fabric Solutions

3 What is Active Fabric? Active Fabric is a family of high-performance, cost-effective interconnect products purpose-built for stitching together server, storage, and software elements in virtualized and cloud data centers. Multiple chassis; fully redundant meshed fabric; active L2/L3 multipath; Spanning Tree-free architecture; scales out next-generation DC infrastructure; networking within and between data centers.

4 Networking that offers choice and flexibility
Distributed servers, mainframe, chassis core, distributed cores.

5 Traditional Network Design: Introduction
Layer 3 Core; Layer 2 or 3 Aggregation; Layer 2 Access.

6 Traditional Network Design: Introduction
Core: Layer 3. Aggregation: Layer 2 or 3. Access: Layer 2. VRRP removes half of the uplink bandwidth; Spanning Tree disables half of the uplinks.

7 Traditional Network Design vs Active Fabric
Traditional: Layer 3 Core, Layer 2 or 3 Aggregation, Layer 2 Access. Active Fabric: spine (2x) and leaf (16x) switches, 768 server ports, L2/L3 boundary at the leaf, L2 to the servers.

8 Scale-out Layer 3 Leaf/Spine Fabric
Fabric Manager with Dell design templates: automates documentation, automates config & deployment, validates deployment.
Spine nodes (2, 8, or 16) with leaf nodes (16, 64, or 128) and the L3/L2 boundary at the leaf: 768, 1980, 3072, or 6144 server ports.
Workflow: fabric design, documentation, CLI configuration, deployment validation, expansion & changes. The port totals follow from simple leaf-count arithmetic, sketched below.
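A minimal sketch of that arithmetic, assuming 48 server-facing 10G ports and 4 x 40G uplinks per leaf; those per-leaf numbers are inferred from the slide's 768/3072/6144 totals, not official Dell figures.

```python
def fabric_size(leaves, server_ports_per_leaf=48, uplinks_per_leaf=4,
                uplink_gbps=40, server_gbps=10):
    """Return total server ports and the leaf oversubscription ratio."""
    servers = leaves * server_ports_per_leaf
    oversub = (server_ports_per_leaf * server_gbps) / (uplinks_per_leaf * uplink_gbps)
    return servers, oversub

for leaves in (16, 64, 128):
    ports, ratio = fabric_size(leaves)
    print(f"{leaves} leaves -> {ports} server ports, {ratio:.1f}:1 oversubscription")
```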

9 Active Fabric Solutions Layer 3 Network Design - TODAY
Spine layer, leaf layer; L3 above the leaf, L2 below. Implement a Layer 3 protocol: OSPF, IS-IS, or BGP (plus an IGP). No Spanning Tree needed. Full bandwidth usage via Equal-Cost Multipath (ECMP). Fast failover via Bidirectional Forwarding Detection (BFD). A rough sketch of per-flow ECMP hashing follows.
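As an illustration of how ECMP keeps every uplink active, here is a minimal Python sketch of per-flow hashing; switch ASICs use their own hardware hash functions, and the addresses and spine names here are made up.

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    # Hash the flow's 5-tuple so every packet of a flow takes the same
    # spine (no reordering), while distinct flows spread across all paths.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    index = int.from_bytes(hashlib.md5(key).digest()[:4], "big") % len(next_hops)
    return next_hops[index]

spines = ["spine-1", "spine-2", "spine-3", "spine-4"]
print(ecmp_next_hop("10.0.0.5", "10.1.2.9", 49152, 443, "tcp", spines))
```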

10 Active Fabric Solutions Layer 3 Network Design - FUTURE
Spine layer, NVO gateway, leaf layer; L3 fabric with L2 to the servers. NVO = Network Virtualization Overlay: VXLAN for VMware, NVGRE for Microsoft Hyper-V.

11 Virtual Layer 2
Diagram: VMs on two hosts attached to vSwitches, grouped into Segment ID 10 and Segment ID 20 across the leaf/spine fabric.
In this model we abstract the virtual network from the physical network, just as we have abstracted the virtual server from the physical server: through encapsulation. Encapsulation is the fundamental enabler of virtualization. Just as a virtual machine is encapsulated into a file, we encapsulate the virtual network traffic into an IP header as it traverses the physical network from source host to destination host. This model is referred to as "network virtualization", or "overlays". In this model the physical network (underlay) provides an I/O fabric for the overlay. Setting up the network is a one-time operation. I don't need multi-tenancy resources from the network (no VLANs). I don't need forwarding-table entries for every VM instance (no MAC forwarding). I don't need to provision the physical network for every new service or tenant. The network orchestration tools only need to provision one network, the virtual network, which keeps the orchestration logic and its implementation simple. Network Virtualization Overlay: tenant subnet = software VLAN.
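To make the encapsulation concrete, here is a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348; the placeholder frame and VNI are illustrative, and a real VTEP performs this in the hypervisor vSwitch or gateway hardware rather than in software like this.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # 8-byte VXLAN header: flags byte 0x08 marks the VNI as valid; the
    # 24-bit VNI occupies the upper bits of the final 4 bytes (RFC 7348).
    header = struct.pack("!B3xI", 0x08, vni << 8)
    # The result becomes the payload of an ordinary UDP/IP packet that
    # the L3 underlay forwards like any other traffic.
    return header + inner_frame

frame = b"\x00" * 64                                # placeholder Ethernet frame
print(vxlan_encapsulate(frame, vni=10)[:8].hex())   # Segment ID 10 from the slide
```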

12 Active Fabric Solutions Layer 3 Network Design - FUTURE
Spine layer, Layer 2 overlay, NVO gateway, leaf layer; L3 fabric with L2 to the servers. NVO = Network Virtualization Overlay. Use the existing L3 Active Fabric technology we have today and build a virtual L2 infrastructure on top of it. Hypervisors and the gateway operate together; virtual servers believe they are on an L2 network.

13 Active Fabric Solutions Layer 2 Network Design - TODAY
Spine layer and leaf layer connected by LAG/LACP, with the L2/L3 boundary at the spine.
VLT = Virtual Link Trunking: multi-chassis LAG.
Dual control plane: L2/L3 active-active multipath.
Standard 802.3ad LAG (LACP).
Spanning Tree free: fast convergence relying on LACP.
Node redundancy: no SPOF from access to core.
Scale out via mVLT (multiple VLT): scale based on product selection.
New products with higher port densities improve scalability.

14 Virtual Link Trunking (VLT) The key to our Layer 2 Active Fabric
Diagram: VLTi links pair the spine switches and each leaf pair; LAG/LACP connections from rack servers, access switches, blade servers, and iSCSI storage; L2/L3 at the spine, L2 below.

15 Active Fabric Solutions Layer 2 Network Design – Converged -
VLT spine layer; leaf layer with converged switches (Ethernet/FCoE/FC).

16 Active Fabric Solutions Layer 2 Network Design – Converged -
VLT spine layer; leaf layer with converged switches (Ethernet/FCoE/FC); SAN fabric (FC or FCoE) and iSCSI. Unified storage capabilities built into the design: iSCSI/FC/FCoE.

17 Active Fabric Solutions Layer 2 Network Design – Converged -
VLT spine layer; leaf layer with converged switches (Ethernet/FCoE/FC); SAN fabric (FC or FCoE) and iSCSI. Unified storage capabilities built into the design: iSCSI/FC/FCoE. If you need dense 40Gb and DCB for iSCSI at the spine…

18 Active Fabric Solutions Layer 2/3 Network Design – Scale out Server Farm -
VLT spine layer; leaf layer; LAG/LACP up to 240G. Scale-out server farm with blade switches (Ethernet/FCoE/FC): scale out computational density without compromise; half the infrastructure cost (chassis & switches); reduced cabling to ToR switches; lower power per node.

19 Active Fabric Solutions Layer 2/3 Network Design – Scale out Server Farm -
VLT Spine Layer Leaf Layer

20 Active Fabric Solutions Layer 2/3 Network Design – Scale out Server Farm -
VLT Spine Layer Leaf Layer

21 Active Fabric Solutions Layer 2/3 Network Design – Scale out Server Farm -
VLT spine layer; leaf layer; LAG/LACP up to 240G. Theoretical oversubscription 1.3:1, operational 1:1 (arithmetic sketched below). VLT domains at spine and leaf; LAG/LACP 2 x 10G from the servers.
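The 1.3:1 figure is plain bandwidth arithmetic. A minimal sketch, assuming the 32 internal 10G ports of the blade switch (from the MXL slide later in this deck) feed the 240G LACP uplink:

```python
def oversubscription(server_ports, port_speed_gbps, uplink_gbps):
    # Ratio of server-facing bandwidth to uplink bandwidth at the leaf.
    return (server_ports * port_speed_gbps) / uplink_gbps

print(f"{oversubscription(32, 10, 240):.2f}:1")  # ~1.33:1, the 'theoretical 1.3:1'
```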

22 Active Fabric Solutions Layer 2/3 Network Design – Scale out Server Farm -
Spine layer and leaf layer joined by VLTi, with LAG/LACP up to 240G per chassis; L2/L3 at the spine, L2 below. 32 blades, 64 CPUs, 512 cores; thousands of servers/VMs.

23 Active Fabric ingredients Maximum functionality, maximum programmability
Layer 3 multipath: OSPF/BGP/IS-IS on Z9000, S4810, S4820T, MXL, S5000.
Layer 2 multipath: VLT/mVLT on Z9000, S4810, S4820T, MXL/IOA (9.2), S5000.
Converged LAN/SAN: Ethernet/iSCSI/FCoE/FC on S4810, S4820T, MXL/IOA, S5000.
Software programmability: OpenFlow, REST/XML, Perl, Python on Z9000, S4810, S4820T, MXL/IOA, S5000.

24 Dell Networking S4810 switch SFP+ 10/40G top-of-rack switch
Proven 10/40G top-of-rack performance: low-latency 10/40 GbE; 64 x 10GbE, or 48 x 10GbE + 4 x 40GbE; Layer 2 multipathing (VLT) support; stacking (up to 6 nodes); DCB support, EqualLogic and Compellent certified; built-in automation support (bare-metal provisioning, scripting, programmatic management); built-in virtualization support (VMware, Citrix). Dell Force10 S4810. Better together with Dell servers & storage.

25 Dell Networking S4820T switch 1/10/40G 10GBase-T top-of-rack switch
Accelerate the 1G-to-10G migration with a fully featured FTOS-powered top-of-rack switch: 48 x 1/10G 10GBASE-T ports; 4 x 40G fabric uplinks (or 16 x 10G); built-in virtualization support (VMware, Citrix); DCB support for SAN/LAN convergence (iSCSI, FCoE); integrated automation, scripting, and programmatic management. Dell Force10 S4820T. Better together with Dell servers & storage.

26 NEW! Dell Networking S5000 converged LAN/SAN switch
First-of-its-kind modular 1RU top-of-rack and fabric switch: pay-as-you-grow, customizable modularity powered by FTOS; 10GbE and 40GbE; 2/4/8G Fibre Channel; future-proof, multi-stage design for next-gen I/O without rip & replace; unified storage networking with complete support for iSCSI, RoCE, and FCoE with FC fabric services; reduced management complexity through integrated automation, scripting, and software programmability; easy integration and strong interoperability with major adapter, switch, and storage solutions. Dell Networking S5000: 1.5x higher port density per RU than the Cisco Nexus 5548, 3x the Brocade VDX.

27 Dell Networking Z9000 High-density 10/40G fabric switch
Scaling the data center core up, down, and out. Dell Force10 Z9000: 2.5 Tbps in a 2RU footprint; high-density networking with 32 line-rate 40GbE or 128 line-rate 10GbE ports; low power consumption: 800 W max (6.25 W per 10GbE), 600 W typical (4.68 W per 10GbE; arithmetic below). Internet Telephony Distributed Core Architecture "Product of the Year" award.
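The per-port power figures follow from dividing the chassis draw across the 128 line-rate 10GbE ports; a quick sketch:

```python
def watts_per_10gbe_port(chassis_watts, line_rate_10g_ports=128):
    return chassis_watts / line_rate_10g_ports

print(watts_per_10gbe_port(800))  # 6.25 W per 10GbE at maximum draw
print(watts_per_10gbe_port(600))  # 4.6875 W typical (the slide rounds to 4.68)
```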

28 Dell Networking MXL blade switch
High-performance, full-featured 1/10/40GbE Layer 2 & Layer 3 switch blade with Flex I/O (Dell Force10 MXL).
Internal server-facing ports: up to 32 x GbE/10GbE. External ports: 2 x 40GbE fixed ports plus two optional Flex I/O modules; the 40GbE ports support stacking. OS: FTOS.
Flex I/O modules: 4-port SFP+ module (1GbE & 10GbE; 10GbE optical & DAC copper twinax); 4-port 10GBASE-T module (2x more than the M8024-k); 2-port QSFP+ module (2 x 40GbE ports, with 10GbE support using breakout cables).
Robust and scalable I/O performance with low latency and high bandwidth; support for native 40GbE ports; open-standards-based, feature-rich enterprise FTOS; converged Ethernet and Fibre Channel support.
40GbE QSFP+ transceivers & cables: QSFP+ transceivers, QSFP+ to 4 x SFP+ direct-attach breakout cable, QSFP+ direct-attach cable.

29 Build an Active Fabric that fits your needs
Active Fabric design options: fabrics for any size data center. Build an Active Fabric that fits your needs (capacity arithmetic sketched below).

Design option           Small              Medium             Large
Spine node              S4810              Z9000              Z9000
Leaf node
Node count              4 spine/12 leaf    4 spine/32 leaf    16 spine/32 leaf
Fabric interconnect     10 GbE             40 GbE             40 GbE
Fabric capacity         3.84 Tbps          10.24 Tbps         40.96 Tbps
Available 10GbE ports   3:1 oversub.       6:1 oversub.       non-blocking
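The fabric-capacity row reproduces with simple arithmetic. A minimal sketch, where the uplinks-per-leaf counts (16 x 10G, 4 x 40G, 16 x 40G) are inferred from the table's totals rather than taken from a Dell spec:

```python
def fabric_capacity_tbps(leaves, uplinks_per_leaf, link_gbps):
    # Count each leaf-to-spine link in both directions (full duplex).
    return leaves * uplinks_per_leaf * link_gbps * 2 / 1000

print(fabric_capacity_tbps(12, 16, 10))  # small:  3.84 Tbps
print(fabric_capacity_tbps(32, 4, 40))   # medium: 10.24 Tbps
print(fabric_capacity_tbps(32, 16, 40))  # large:  40.96 Tbps
```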

30 Active Fabric key take away
Interfaces: supports various interface types: copper 100/1000/10000BASE-T; fiber 1G, 10G, 40G.
Data: supports different types of data on the same fabric: Ethernet, FCoE, iSCSI.
Growth: build an Active Fabric and grow from tens to hundreds of thousands of end devices with the same equipment models.

31 IEEE 802.3ba Ethernet 40/100Gbps

32 Ethernet 40/100G: 802.3ba-2010, IEEE 802.3 approved motions
40 and 100 Gbps. At least 100 m on OM3 multimode fiber; at least 150 m on OM4 multimode fiber; at least 10 km on single-mode fiber; at least 40 km on single-mode fiber (100G only); at least 7 m on copper cable assembly; at least 2 km on single-mode fiber (40G only).
Key project dates: study group formed in July 2006; project authorization in December 2007; Task Force in January 2008; 40/100G standard completed in July 2010 (802.3ba-2010).

33 40G Optical Transceiver: OM3/OM4

34 40G Ethernet Parallel Optics

35 Ethernet 40G and 100G: OM3, OM4

36 40G eSR4 QSFP+ QSFP+ eSR4 modules meet the link-distance specifications for 40G Ethernet applications. 40G eSR4 parallel optics, extended reach over OM3/OM4: 300 m / 400 m.

37 SFP/SFP+/QSFP+ modules and DAC cables
Cable examples: passive twinax SFP+ DAC (7 m); 40GE QSFP+ MTP active fiber (50 m); 40GE QSFP+ passive copper (5 m); 40GE QSFP+ to 4 x SFP+ passive copper breakout (5 m); MTP to 4 x LC optical breakout cable (5 m, plus 100 m over OM3 or 150 m over OM4). DAC = Direct Attach Cable; AOC = Active Optical Cable.
Multi-mode vs. single-mode fiber: a "mode" in fiber-optic cable refers to the path in which light travels. Multi-mode cables have a larger core diameter than single-mode cables; this larger core allows multiple pathways and several wavelengths of light to be transmitted. Multi-mode fiber is available in two sizes, 50 micron and 62.5 micron. Single-mode fiber is a type of fiber-optic cable through which only one light signal can travel at a time. Because single-mode fiber is more resistant to attenuation than multi-mode fiber, it can be used in significantly longer cable runs; its core is normally 9 microns wide (a micron is one millionth of a meter). Single-mode fiber can support Gigabit Ethernet over distances as long as 10 kilometers.
50 micron vs. 62.5 micron fiber: both use an LED or laser light source and are used in the same networking applications. The main difference is that 50 micron fiber can support three times the bandwidth of 62.5 micron fiber; it also supports longer cable runs than 62.5 micron cable.
Fiber-optic connectors: there are a variety of fiber-optic connectors; the most common are SC and LC. SC (Subscriber Connector, Standard Connector, or Siemon Connector) is a snap-in connector widely used for its excellent performance, also available in a duplex configuration; its mnemonic is "Square Connector". LC (Lucent Connector or Local Connector), sometimes called the "Little Connector", is a small-form-factor connector that uses a 1.25 mm ferrule, half the size of the SC; otherwise it is a standard ceramic-ferrule connector, easily terminated with any adhesive, with good performance and highly favored for single-mode. LC connectors are used on SFP (small form-factor pluggable, also known as Mini-GBIC) transceivers: compact, hot-pluggable modules.
If you are utilizing 10GBASE-CX4 or InfiniBand, you are distance-limited to a maximum of 15 m. The following chart summarizes the distances for 10 Gb/s applications and their associated cabling systems:

Application   Media                 Classification              Max. distance   Wavelength
10GBASE-T     Twisted-pair copper   Category 6/Class E UTP      up to 55 m
10GBASE-T     Twisted-pair copper   Category 6A/Class EA UTP    100 m
10GBASE-T     Twisted-pair copper   Category 6A/Class EA F/UTP  100 m
10GBASE-T     Twisted-pair copper   Class F/Class FA            100 m
10GBASE-CX4   Manufactured cable    N/A                         10-15 m
10GBASE-SR    62.5 µm MMF           160/500                     28 m            850 nm
10GBASE-SR    62.5 µm MMF           200/500                     28 m            850 nm
10GBASE-SR    50 µm MMF             500/500                     86 m            850 nm
10GBASE-SR    50 µm MMF             2000 MHz·km (OM3)           300 m           850 nm
10GBASE-LR    SMF                                               10 km           1310 nm
10GBASE-ER    SMF                                               40 km           1550 nm
10GBASE-LRM   All MMF                                           220 m           1300 nm
10GBASE-LX4   All MMF                                           300 m           1310 nm
10GBASE-LX4   SMF                                               10 km           1310 nm

38 Use Case Examples

39 Small Active Fabric
Use case: customer has about 300 servers; needs high availability, as the servers are used 24/7; needs ISSU to support an SLA of % uptime.
Scale: 48 servers per rack; redundant connections from servers; 6 racks today, expanding to 20; 2 x 20G uplink connections.
VLT spine layer; leaf layer.

40 Medium Active Fabric
Use case: an enterprise customer with HPC pods; requires large numbers of servers and cores; high-availability uptime for servers; a large upstream pipe for data transfers; shrink the number of cables in the data center.
Scale: 10G to the servers (active-standby today, active-active in the future); 80G uplink connections; NFS storage system; 4 M1000e chassis per rack = 32 chassis; 16 blades per M1000e chassis = 512 blades; 12 cores per blade = 6144 cores.
VLT spine layer; leaf layer.

41 Large Active Fabric customer example
Use case: customer has a large L3 network; requires tens of thousands of servers with room for growth; needs to support the smallest oversubscription possible; 1G and 10G servers.
Scale: capable of supporting the 1G-to-10G migration; start with 10,000 servers, growing to 100,000 servers; expand with little or no impact.
Spine layer; leaf layer.

42

43 Fabio Bellini, Network Sales Engineer. Mobile:

