
Data Center Networking
Workshop della Commissione Calcolo e Reti dell'INFN, Genoa, 28 May 2013
Fabio Bellini, Network Sales Engineer EMEA WER, +39 335 7781550, Fabio_Bellini@dell.com

Active Fabric Solutions

What is Active Fabric? Active Fabric is a family of high-performance, cost-effective interconnect products purpose-built for stitching together server, storage, and software elements in virtualized and cloud data centers:
- Multiple chassis
- Fully redundant meshed fabric
- Active L2/L3 multipath
- Spanning-tree-free architecture
- Scales out next-generation DC infrastructure
- Networking within and between data centers

Networking that offers choice and flexibility. Deployment models range from distributed servers and mainframes to a single chassis core or distributed cores.

Traditional Network Design: Introduction. The classic three-tier design: a Layer 3 core, a Layer 2 or 3 aggregation layer, and a Layer 2 access layer.

Traditional Network Design: Introduction. In the three-tier model (Layer 3 core, Layer 2 or 3 aggregation, Layer 2 access), VRRP removes half of the uplink bandwidth and Spanning Tree disables half of the uplinks.
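To make that bandwidth penalty concrete, here is a minimal sketch (not from the original slides) comparing usable uplink capacity when half the links are blocked by Spanning Tree versus an active-active fabric; the link counts and speeds are illustrative assumptions.

```python
# Illustrative only: usable uplink bandwidth with STP/VRRP vs. active-active multipath.
# Link counts and speeds are assumptions, not figures from the presentation.

def usable_uplink_gbps(num_uplinks: int, link_gbps: float, blocked_fraction: float) -> float:
    """Return the aggregate uplink bandwidth that actually carries traffic."""
    active_links = num_uplinks * (1.0 - blocked_fraction)
    return active_links * link_gbps

uplinks, speed = 4, 10.0  # e.g. 4 x 10GbE uplinks per access switch

stp_design = usable_uplink_gbps(uplinks, speed, blocked_fraction=0.5)      # STP blocks half
fabric_design = usable_uplink_gbps(uplinks, speed, blocked_fraction=0.0)   # ECMP/VLT uses all

print(f"STP/VRRP design: {stp_design:.0f} Gbps usable")     # 20 Gbps
print(f"Active fabric  : {fabric_design:.0f} Gbps usable")  # 40 Gbps
```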

Traditional Network Design vs Active Fabric. On one side, the traditional three-tier design (Layer 3 core, Layer 2 or 3 aggregation, Layer 2 access); on the other, a leaf/spine Active Fabric (L2/L3 spine, L2 leaf) with 2 spine switches and 16 leaf switches providing 768 server ports.

Scale-out Layer 3 Leaf/Spine Fabric. Dell Fabric Manager provides design templates and automates documentation, configuration & deployment, and deployment validation, covering the full workflow: fabric design, documentation, CLI configuration, deployment validation, and expansion & changes. The L3 spine / L2 leaf designs scale from 2 spines with 16 leaves up to 16 spines with 128 leaves, with design points at 768, 1,980, 3,072 and 6,144 server ports depending on spine and leaf counts.
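The sizing arithmetic behind those design points can be sketched in a few lines of Python. This is an illustrative calculator, not a tool from the presentation; the spine/leaf pairings and the per-leaf port split (48 server-facing 10GbE ports plus one 40G uplink per spine) are assumptions based on typical S4810-class leaf switches.

```python
# Illustrative leaf/spine sizing helper (assumptions, not slide data):
# each leaf exposes `server_ports_per_leaf` server ports and one 40G uplink per spine.

def fabric_size(spines: int, leaves: int, server_ports_per_leaf: int = 48,
                server_port_gbps: int = 10, uplink_gbps: int = 40) -> dict:
    server_ports = leaves * server_ports_per_leaf
    downlink_bw = server_ports * server_port_gbps   # toward servers
    uplink_bw = leaves * spines * uplink_gbps       # leaf-to-spine capacity
    return {"server_ports": server_ports,
            "oversubscription": downlink_bw / uplink_bw}

for spines, leaves in [(2, 16), (8, 64), (16, 128)]:
    info = fabric_size(spines, leaves)
    print(f"{spines} spines / {leaves} leaves -> "
          f"{info['server_ports']} server ports, "
          f"{info['oversubscription']:.1f}:1 oversubscription")
# 2/16   ->  768 server ports, 6.0:1
# 8/64   -> 3072 server ports, 1.5:1
# 16/128 -> 6144 server ports, 0.8:1
```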

Active Fabric Solutions: Layer 3 Network Design - TODAY. Spine and leaf layers connected at Layer 3, with Layer 2 below the leaves. Implement a Layer 3 protocol: OSPF, IS-IS, or BGP (plus an IGP). No Spanning Tree is needed; full bandwidth usage comes from Equal-Cost Multipath (ECMP), and fast link-layer failover from Bidirectional Forwarding Detection (BFD).
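With ECMP, each leaf hashes a flow's header fields to pick one of the equal-cost spine next hops, so all uplinks stay active while packets of a given flow keep their ordering. The sketch below is a simplified illustration of that idea; real switches use hardware-specific hash algorithms and configurable field sets, and the spine names and flow tuples here are made up.

```python
# Simplified ECMP illustration: hash a flow's 5-tuple to choose one of the
# equal-cost spine next hops. Not a vendor algorithm, only the principle.
import hashlib

SPINE_NEXT_HOPS = ["spine-1", "spine-2", "spine-3", "spine-4"]  # hypothetical names

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
    flow = f"{src_ip},{dst_ip},{proto},{src_port},{dst_port}".encode()
    digest = hashlib.md5(flow).digest()
    index = int.from_bytes(digest[:4], "big") % len(SPINE_NEXT_HOPS)
    return SPINE_NEXT_HOPS[index]

# Packets of the same flow always take the same spine (ordering preserved),
# while different flows spread across all uplinks.
print(ecmp_next_hop("10.0.1.10", "10.0.9.20", "tcp", 40522, 443))
print(ecmp_next_hop("10.0.1.11", "10.0.9.20", "tcp", 51877, 443))
```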

Active Fabric Solutions: Layer 3 Network Design - FUTURE. The same L3 spine / L2 leaf fabric, with an NVO gateway at the leaf layer in front of the servers. NVO = Network Virtualization Overlay: VXLAN for VMware, NVGRE for Microsoft Hyper-V.

Virtual Layer 2. VMs attached to vSwitches on different hosts share virtual segments (e.g. Segment ID 10 and Segment ID 20) across the leaf/spine fabric. In this model we abstract the virtual network from the physical network, just as we have abstracted the virtual server from the physical server: through encapsulation. Encapsulation is the fundamental enabler of virtualization. Just as a virtual machine is encapsulated into a file, we encapsulate the virtual network traffic into an IP header as it traverses the physical network from source host to destination host. This model is referred to as "Network Virtualization" or "Overlays". Here the physical network (the underlay) provides an I/O fabric for the overlay. Setting up the physical network is a one-time operation: it needs no multi-tenancy resources (no VLANs), no forwarding-table entries for every VM instance (no MAC forwarding), and no re-provisioning for every new service or tenant. The network orchestration tools only need to provision one network, the virtual network, which keeps the orchestration logic and its implementation simple. In a Network Virtualization Overlay, a tenant subnet becomes a software VLAN.
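As a concrete example of that encapsulation, the sketch below builds the 8-byte VXLAN header that carries the 24-bit segment ID (VNI) in front of the original Ethernet frame. The outer Ethernet/IP/UDP headers added by the sending host are omitted, and the VNI value 10 simply mirrors the "Segment ID 10" label from the diagram.

```python
# Minimal VXLAN header construction (RFC 7348): 8 bytes carrying a 24-bit VNI.
# The outer headers (UDP destination port 4789, outer IP between the hosts)
# that a VTEP would add are omitted; this only shows how the segment ID travels.
import struct

def vxlan_header(vni: int) -> bytes:
    flags = 0x08                    # "I" flag set: VNI field is valid
    word0 = flags << 24             # flags + 24 reserved bits
    word1 = (vni & 0xFFFFFF) << 8   # 24-bit VNI + 8 reserved bits
    return struct.pack("!II", word0, word1)

# Dummy inner frame: destination MAC, source MAC, truncated payload.
inner_frame = b"\x02\x00\x00\x00\x00\x01" + b"\x02\x00\x00\x00\x00\x02" + b"..."
encapsulated = vxlan_header(vni=10) + inner_frame   # VNI 10 = "Segment ID 10"
print(encapsulated[:8].hex())                        # 0800000000000a00
```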

Active Fabric Solutions: Layer 3 Network Design - FUTURE. A Layer 2 overlay runs across the L3 spine / L2 leaf fabric and is terminated by an NVO (Network Virtualization Overlay) gateway at the leaf layer. Use the existing L3 Active Fabric technology we have today, build a virtual L2 infrastructure on top of it, let hypervisors and the gateway operate together, and the virtual servers believe they are on an L2 network.

Active Fabric Solutions: Layer 2 Network Design - TODAY. L2/L3 VLT spine layer, L2 leaf layer, LACP LAGs between them.
- VLT = Virtual Link Trunk: multi-chassis LAG
- Dual control plane: L2/L3 active-active multipath
- Standard 802.3ad LAG (LACP)
- Spanning tree free: fast convergence relying on LACP
- Node redundancy: no SPOF from access to core
- Scale out via mVLT (multiple VLT): scale based on product selection
- New products and higher port densities improve scalability
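The point of VLT is that two physical switches present themselves as a single 802.3ad/LACP partner, so a downstream server or switch can dual-home a standard LAG and use both links actively. The toy sketch below is illustrative only (the port names, peer names and hash are made up): it shows frames of different flows being spread across LAG members that physically land on two different VLT peers.

```python
# Toy illustration of an active-active LAG toward a VLT pair: the downstream
# device sees one logical LACP partner, but its LAG members terminate on two
# different switches. The hash and names are illustrative, not a vendor algorithm.
import zlib

LAG_MEMBERS = [
    ("te0/1", "vlt-peer-A"),   # hypothetical port -> physical VLT peer
    ("te0/2", "vlt-peer-B"),
]

def pick_member(src_mac: str, dst_mac: str):
    h = zlib.crc32(f"{src_mac}->{dst_mac}".encode())
    return LAG_MEMBERS[h % len(LAG_MEMBERS)]

for dst in ("00:aa:00:00:00:01", "00:aa:00:00:00:02", "00:aa:00:00:00:03"):
    port, peer = pick_member("00:bb:00:00:00:10", dst)
    print(f"flow to {dst}: sent on {port} (terminates on {peer})")
# Both links carry traffic; losing either one only reconverges the LAG via LACP.
```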

Virtual Link Trunking (VLT): the key to our Layer 2 Active Fabric. VLT interconnect (VLTi) links pair the spine switches (L2/L3) and the leaf switches (L2), and LACP LAGs dual-home rack servers, access switches, blade servers, and iSCSI storage into the VLT pairs.

Active Fabric Solutions: Layer 2 Network Design, Converged. VLT spine layer; converged switches (Ethernet/FCoE/FC) at the leaf layer.

Active Fabric Solutions: Layer 2 Network Design, Converged. VLT spine layer; converged switches (Ethernet/FCoE/FC) at the leaf layer, attached to an FC or FCoE SAN fabric and iSCSI storage. This brings unified storage capabilities into the design: iSCSI/FC/FCoE.

Active Fabric Solutions: Layer 2 Network Design, Converged. The same converged design (VLT spine layer, converged Ethernet/FCoE/FC leaf switches, FC or FCoE SAN fabric, iSCSI, unified iSCSI/FC/FCoE storage capabilities) also applies if you need dense 40Gb and DCB for iSCSI at the spine…

Active Fabric Solutions: Layer 2/3 Network Design, Scale-out Server Farm. VLT spine layer with LACP LAGs of up to 240G down to the blade-switch leaf layer (Ethernet/FCoE/FC). Scale out computational density without compromise: half the infrastructure costs (chassis & switches), reduced cabling to ToR switches, lower power per node.

Active Fabric Solutions: Layer 2/3 Network Design, Scale-out Server Farm (build-out diagrams of the VLT spine layer and leaf layer).

Active Fabric Solutions: Layer 2/3 Network Design, Scale-out Server Farm. VLT spine layer, LACP LAGs of up to 240G to the leaf layer, multiple VLT domains, and 2 x 10G LACP LAGs toward the servers. Theoretical oversubscription 1.3:1, operational 1:1.
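A quick way to sanity-check that oversubscription figure is to compare server-facing bandwidth with uplink bandwidth on a blade leaf switch. The sketch below assumes an MXL-style configuration (32 internal 10GbE ports, 240G of LAG uplink), which is an assumption rather than a slide value, and reproduces roughly the 1.3:1 theoretical ratio.

```python
# Oversubscription sanity check for a blade leaf switch.
# Assumed figures: 32 internal 10GbE server ports, 240G of LAG uplink to the VLT spine.

def oversubscription(server_ports: int, server_gbps: int, uplink_gbps: int) -> float:
    return (server_ports * server_gbps) / uplink_gbps

theoretical = oversubscription(server_ports=32, server_gbps=10, uplink_gbps=240)
print(f"Theoretical oversubscription: {theoretical:.2f}:1")   # ~1.33:1

# "Operational 1:1" simply reflects that real server traffic rarely drives
# all 32 ports at line rate at the same time.
```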

Active Fabric Solutions: Layer 2/3 Network Design, Scale-out Server Farm. VLTi-paired spine (L2/L3) and leaf (L2) layers, with LACP LAGs of up to 240G per blade chassis group: 32 blades, 64 CPUs, 512 cores, scaling out to thousands of servers/VMs.

Active Fabric ingredients: maximum functionality, maximum programmability.
- Layer 3 multipath (OSPF/BGP/IS-IS): Z9000, S4810, S4820T, MXL, S5000
- Layer 2 multipath (VLT/mVLT): Z9000, S4810, S4820T, MXL/IOA (9.2), S5000
- Converged LAN/SAN (Ethernet/iSCSI/FCoE/FC): S4810, S4820T, MXL/IOA, S5000
- Software programmability (OpenFlow, REST/XML/Perl, Python): Z9000, S4810, S4820T, MXL/IOA, S5000

Dell Networking S4810 switch: SFP+ 10/40G top-of-rack switch with proven 10/40G top-of-rack performance.
- Low-latency 10/40 GbE: 64 x 10GbE, or 48 x 10GbE + 4 x 40GbE
- Layer 2 multipathing (VLT) support
- Stacking (up to 6 nodes)
- DCB support; EqualLogic and Compellent certified
- Built-in automation support (bare-metal provisioning, scripting, programmatic management)
- Built-in virtualization support (VMware, Citrix)
Better together with Dell servers & storage.

Dell Networking S4820T switch: 1/10/40G 10GBASE-T top-of-rack switch. Accelerate the 1G to 10G migration with a fully featured, FTOS-powered top-of-rack switch.
- 48 x 1/10G 10GBASE-T ports
- 4 x 40G fabric uplinks (or 16 x 10G)
- Built-in virtualization support (VMware, Citrix)
- DCB support for SAN/LAN convergence (iSCSI, FCoE)
- Integrated automation, scripting and programmatic management
Better together with Dell servers & storage.

NEW! Dell Networking S5000 converged LAN/SAN switch: a first-of-its-kind modular 1RU top-of-rack and fabric switch.
- Pay-as-you-grow, customizable modularity powered by FTOS: 10GbE, 40GbE; 2/4/8G Fibre Channel
- Future-proof, multi-stage design for next-gen I/O without rip & replace
- Unified storage networking, with complete support for iSCSI, RoCE, and FCoE with FC fabric services
- Reduced management complexity: integrated automation, scripting and software programmability
- Easy integration and strong interoperability with major adapter, switch, and storage solutions
- 1.5x higher port density per RU than the Cisco Nexus 5548; 3x the Brocade VDX 6720-24

Dell Networking Z9000: high-density 10/40G fabric switch for scaling the data center core up, down and out.
- 2.5 Tbps in a 2RU footprint
- High-density networking: 32 line-rate 40GbE or 128 line-rate 10GbE ports
- Low power consumption: 800 W max (6.25 W per 10GbE), 600 W typical (4.68 W per 10GbE)
- Distributed core architecture: read the Tolly report at www.force10networks.com/tollyreport
- Internet Telephony product of the year award
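The per-port power figures follow directly from dividing chassis power by the 128 usable 10GbE ports, which the short check below reproduces.

```python
# Reproduce the per-10GbE-port power figures for a 128-port configuration.
ports_10gbe = 128          # 32 x 40GbE split into 128 x 10GbE
max_watts, typical_watts = 800, 600

print(f"Max:     {max_watts / ports_10gbe:.2f} W per 10GbE port")      # 6.25 W
print(f"Typical: {typical_watts / ports_10gbe:.2f} W per 10GbE port")  # 4.69 W (slide rounds to 4.68)
```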

Dell Networking MXL blade switch: high-performance, full-featured 1/10/40GbE Layer 2 & Layer 3 switch blade with Flex I/O, running FTOS.
- Internal server-facing ports: up to 32 x GbE/10GbE
- External ports: 2 x 40GbE fixed ports (usable for stacking) plus two optional Flex I/O modules
- Flex I/O options: 4-port SFP+ module (1GbE & 10GbE, optical & DAC twinax copper), 4-port 10GBASE-T module (2x more than the M8024-k), 2-port QSFP+ module (2 x 40GbE, with 10GbE support using breakout cables)
- Robust, scalable I/O performance with low latency and high bandwidth; native 40GbE ports; open, standards-based, feature-rich enterprise FTOS; converged Ethernet and Fibre Channel support
40GbE QSFP+ transceivers & cables: QSFP+ transceivers, QSFP+ to 4 x SFP+ direct-attach breakout cables, and QSFP+ direct-attach cables.

Active Fabric Design Options: fabrics for any size data center. Build an Active Fabric that fits your needs.
- Small: S4810 spine nodes, 4 spine / 12 leaf, 10 GbE fabric interconnect, 3.84 Tbps fabric capacity, 576 available 10GbE ports at 3:1 oversubscription
- Medium: Z9000 spine nodes, 4 spine / 32 leaf, 40 GbE fabric interconnect, 10.24 Tbps fabric capacity, 1,536 available 10GbE ports at 6:1 oversubscription
- Large: Z9000 spine nodes, 16 spine / 32 leaf, 40 GbE fabric interconnect, 40.96 Tbps fabric capacity, 2,048 available 10GbE ports, non-blocking
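The "available 10GbE ports" column is simply leaf count times server-facing ports per leaf. The short check below reproduces it under the assumption (not stated on the slide) that small and medium leaves expose 48 x 10GbE server ports and large, Z9000-class leaves expose 64.

```python
# Check the "available 10GbE ports" column of the design-options table.
# Assumption: small/medium leaves expose 48 x 10GbE server ports,
# large leaves expose 64 x 10GbE (Z9000-class in 10GbE mode).
designs = {
    "Small":  {"leaves": 12, "ports_per_leaf": 48},
    "Medium": {"leaves": 32, "ports_per_leaf": 48},
    "Large":  {"leaves": 32, "ports_per_leaf": 64},
}
for name, d in designs.items():
    print(f"{name:6s}: {d['leaves'] * d['ports_per_leaf']} available 10GbE ports")
# Small : 576, Medium: 1536, Large: 2048 -- matching the table.
```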

Active Fabric key takeaways.
- Interfaces: supports various interface types: copper (100/1000/10000BASE-T) and fiber (1G, 10G, 40G)
- Data: supports different types of traffic on the same fabric: Ethernet, FCoE, iSCSI
- Growth: an Active Fabric can grow from tens to 100,000s of end devices using the same equipment models

IEEE 802.3ba Ethernet 40/100Gbps

Ethernet 40/100G, 802.3ba-2010. IEEE 802.3 approved objectives for 40 and 100 Gbps:
- At least 100 m on OM3 multimode fiber
- At least 150 m on OM4 multimode fiber
- At least 10 km on single-mode fiber
- At least 40 km on single-mode fiber (100G only)
- At least 7 m on copper cable assembly
- At least 2 km on single-mode fiber (40G only)
Key project dates: study group formed in July 2006; project authorization in December 2007; task force started in January 2008; 40/100G standard (802.3ba-2010) completed in July 2010.

40G Optical Transceiver: OM3/OM4

40G Ethernet Parallel Optics:

Ethernet 40G and 100G: OM3, OM4

40G eSR4 QSFP+. QSFP+ eSR4 modules meet the link-distance specifications for 40G Ethernet applications: 40G eSR4 parallel optics with extended reach over OM3/OM4 of 300 m / 400 m.

Modules SFP/SFP+/QSFP and DAC cables: passive twinax SFP+ DAC (7 m), passive copper 40GE QSFP+ DAC (5 m), active-fiber (AOC) 40GE QSFP+ (50 m), QSFP+ MTP, 40GE QSFP+ to 4 x SFP+ passive copper breakout (5 m), and MTP to 4 x LC optical breakout cable (5 m, plus 100 m over OM3 or 150 m over OM4). DAC = Direct Attach Cable; AOC = Active Optical Cable.

Multi-mode vs. single-mode fiber: a "mode" in fiber-optic cable refers to the path in which light travels. Multi-mode cables have a larger core diameter than single-mode cables; this larger core allows multiple pathways and several wavelengths of light to be transmitted. Multi-mode fiber is available in two sizes, 50 micron and 62.5 micron. Single-mode fiber is a type of fiber-optic cable through which only one light signal can travel at a time. Because single-mode fiber is more resistant to attenuation than multi-mode fiber, it can be used in significantly longer cable runs. The core of a single-mode fiber is normally 9 microns wide (a micron is one millionth of a meter). Single-mode fiber can support Gigabit Ethernet over distances as long as 10 kilometers.

50 micron vs. 62.5 micron fiber: both use an LED or laser light source and are used in the same networking applications. The main difference is that 50 micron fiber can support 3 times the bandwidth of 62.5 micron fiber, and it also supports longer cable runs.

Fiber-optic connectors: there are a variety of connectors; the most common are SC and LC. SC (Subscriber Connector, Standard Connector or Siemon Connector, mnemonic "Square Connector") is a snap-in connector widely used for its excellent performance, also available in a duplex configuration. LC (Lucent Connector or Local Connector, sometimes called "Little Connector") uses a 1.25 mm ferrule, half the size of the SC; otherwise it is a standard ceramic-ferrule connector, easily terminated with any adhesive, with good performance and highly favored for single-mode. LC connectors are used with SFP (small form-factor pluggable, also known as Mini-GBIC) compact, hot-pluggable transceivers.

If you are using 10GBASE-CX4 or InfiniBand, you are limited to a maximum distance of 15 m. The following chart summarizes the distances for 10 Gb/s applications and their associated cabling systems:
Application | Media | Classification | Max. distance | Wavelength
10GBASE-T | twisted-pair copper | Category 6 / Class E UTP | up to 55 m | -
10GBASE-T | twisted-pair copper | Category 6A / Class EA UTP | 100 m | -
10GBASE-T | twisted-pair copper | Category 6A / Class EA F/UTP | 100 m | -
10GBASE-T | twisted-pair copper | Class F / Class FA | 100 m | -
10GBASE-CX4 | manufactured cable | N/A | 10-15 m | -
10GBASE-SX | 62.5 micron MMF | 160/500 | 28 m | 850 nm
10GBASE-SX | 62.5 micron MMF | 200/500 | 28 m | 850 nm
10GBASE-SX | 50 micron MMF | 500/500 | 86 m | 850 nm
10GBASE-SX | 50 micron MMF | 2000/500 | 300 m | 850 nm
10GBASE-LX | SMF | - | 10 km | 1310 nm
10GBASE-EX | SMF | - | 40 km | 1550 nm
10GBASE-LRM | all MMF | - | 220 m | 1300 nm
10GBASE-LX4 | all MMF | - | 300 m | 1310 nm
10GBASE-LX4 | SMF | - | 10 km | 1310 nm

Use Case Examples

Small Active Fabric. Use case: the customer has about 300 servers, needs high availability because the servers are used 24/7, and needs ISSU to support an SLA of 99.999% uptime. Scale: 48 servers per rack with redundant connections from the servers, 6 racks today with expansion to 20, and 2 x 20G uplink connections. Design: VLT spine layer and leaf layer.

Medium Active Fabric. Use case: an enterprise customer with HPC pods requires large numbers of servers and cores, high-availability uptime for the servers, a large upstream pipe for data transfers, and fewer cables in the data center. Scale: 10G to the servers (active-standby today, active-active in the future), 80G uplink connections, an NFS storage system, 4 M1000e chassis per rack for 32 chassis total, 16 blades per M1000e chassis = 512 blades, 12 cores per blade = 6,144 cores. Design: VLT spine layer and leaf layer.
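The blade and core totals follow directly from the chassis counts; the quick check below reproduces them, with the rack count of 8 inferred from "4 M1000e chassis per rack = 32 chassis".

```python
# Sanity-check the medium use-case scale numbers.
# The rack count (8) is inferred, not stated on the slide.
racks = 8
chassis = racks * 4            # 32 M1000e chassis
blades = chassis * 16          # 512 blades
cores = blades * 12            # 6144 cores
print(chassis, blades, cores)  # 32 512 6144
```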

Large Active Fabric customer example. Use case: the customer has a large L3 network, requires tens of thousands of servers with room for growth, needs the smallest possible oversubscription (O.S.), and has both 1G and 10G servers. Scale: capable of supporting the 1G to 10G migration, starting with 10,000 servers and growing to 100,000, expanding with little or no disruption. Design: spine layer and leaf layer.

Fabio Bellini, Network Sales Engineer. Mobile: +39 335 7781550. Email: fabio_bellini@dell.com