100GbE Switches - Do the Math

Presentation transcript:

100GbE Switches - Do the Math Open Compute Project Engineering Workshop June 2016

What’s Good For Hyperscale Works at Smaller Scale
- Monolithic proprietary chassis → leaf-spine: no reason to pay more for proprietary, vendor-specific hardware. Six-pack? That is a leaf-spine in a box.
- Switch-to-switch mLAG → OSPF or BGP (depending on scale): the ability to scale without changing the network design, across multiple vendors.
- Stacking → managing switches just like servers: the ability to scale without depending on a vendor’s proprietary features.
- Want to reduce the control plane and avoid “noisy” OSPF? Use BGP.
- Hard to manage without many Linux developers? A variety of tools make it very simple.

What’s Good For Hyperscale Works at Smaller Scale (continued)
- Single-speed cables → break-out cables. One port down takes four servers down? Passive DACs have MTBFs of 10–1000s of years, and mLAG within the rack adds redundancy.
- More expensive? Actually not: break-out cables save money.
- Many different switch models → one or two building blocks: a 32 x 100GbE switch serves as a multi-purpose switch, saving money compared to a 48+4/6/8-port ToR.
- 10GbE → 40GbE or 25GbE → 100GbE collapses into 10GbE → 100GbE: 10GbE servers with 100GbE uplinks save money.
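The break-out arithmetic above can be sketched as a quick back-of-envelope calculation; the port counts below follow the slide, but the split of uplink vs. server-facing ports is an illustrative assumption, not a vendor recommendation:

```python
# Back-of-envelope: how many server ports a 32 x 100GbE ToR yields
# when QSFP28 ports are split with 4x25GbE break-out cables.
ports_100g = 32
split_factor = 4        # one QSFP28 port breaks out into 4 x SFP28
uplinks = 8             # assumed: reserve 8 x 100GbE ports for the spine

server_ports = (ports_100g - uplinks) * split_factor
print(server_ports)     # 96 x 25GbE server-facing ports from one switch
```

The same one- or two-SKU building block can then serve as a ToR, an aggregation switch, or a split-cable server switch, which is the point the slide makes.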

Similar Infrastructure: From 10-40GbE to 10-100GbE
[Diagram: two leaf-spine networks of 28 racks each, with 10GbE compute nodes; one uses 40GbE uplinks, the other 100GbE]
- 25% higher bandwidth with similar connectors, similar infrastructure, and similar cost/power
- Fewer spine switches and fewer uplinks, with better bandwidth

How Does 100GbE Save Money When Running 10GbE Hosts?
- Why does it work? 100GbE is priced at ~1.5x 40GbE, so a 10GbE → 100GbE ToR saves money.
- Want to optimize? Use 16 x QSFP ports with split cables.
- Save on leaf-spine AoCs (priced at <2.5x).
- Opex: less space and power, and 50% fewer optics.
- Additional savings at higher scale, and with lower oversubscription.
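The ~1.5x pricing claim above translates directly into cost per bit; the absolute 40GbE port price below is a placeholder, only the 1.5x ratio comes from the slide:

```python
# If a 100GbE port costs ~1.5x a 40GbE port, compare $/Gb/s.
price_40g = 1000.0              # placeholder price per 40GbE port
price_100g = 1.5 * price_40g    # the slide's ~1.5x ratio

per_gig_40 = price_40g / 40     # $/Gb/s at 40GbE
per_gig_100 = price_100g / 100  # $/Gb/s at 100GbE
print(per_gig_40, per_gig_100)  # 25.0 vs 15.0: 40% less per Gb/s at 100GbE
```

The ratio is what matters: 1.5x the price for 2.5x the bandwidth always lands at 60% of the per-gigabit cost, regardless of the placeholder price chosen.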

100GbE is Happening!!!

Medallia Runs 40GbE Servers to 100GbE Fabrics in Production "For us, Cumulus is about choice and transparency. There is a large number of whitebox and britebox vendors of Cumulus compatible switches, which allows us to pick the best and most capable hardware that fits our needs and our budget. For the 100GbE generation, that choice was clear. As we run distributed real-time analytics, we need predictable and repeatable port-to-port performance without any surprises or exceptions. After extensive testing, the clear winner for us was the Mellanox Spectrum." Thorvald Natvig, Medallia lead architect

University of Cambridge Runs 40100GbE in Production University of Cambridge has done the math and selected the Mellanox Spectrum SN2700 Ethernet switches for its OpenStack-based scientific research cloud. Why? 100GbE is there to unleash the capabilities of the NexentaEdge Software Defined Storage solution which can easily stress a 10/40GbE network.

SysEleven Runs 100GbE "But what is a switch or router nowadays? Actually, it is pretty much a PC with many network interfaces. Don't you believe me? Here is a picture of the interior of exactly such a modern Mellanox switch." SysEleven successfully ran 32K routes over Spectrum + Cumulus Linux, using Bird, their preferred L3 routing stack.

Performance Considerations
- Buffering is key at these speeds: huge buffers come at a high price and can add unnecessary networking delay; the buffer should be adequate for data-center workloads.
- A single shared buffer eliminates fairness issues.
- Storage is getting faster every day, with fast SSD and post-SSD technologies: the ToR should deliver reasonable cut-through latency so that it does not become the bottleneck.
- Spine switches handle a lot of traffic and must not slow down the network: the spine should run at full line rate when there is no congestion.
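To see why cut-through latency matters at these speeds, consider how quickly a standard frame serializes onto the wire:

```python
# Serialization time of a 1500-byte frame at different link speeds.
# At 1 Gb/s, one bit takes exactly 1 ns, so bits / (Gb/s) = nanoseconds.
frame_bits = 1500 * 8   # 12000 bits

for gbps in (10, 40, 100):
    ns = frame_bits / gbps
    print(f"{gbps}GbE: {ns:.0f} ns")   # 1200 ns, 300 ns, 120 ns
```

At 100GbE an entire 1500-byte frame occupies the wire for only 120 ns, so per-hop store-and-forward delay and buffering decisions quickly dominate end-to-end latency.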

Do The Math and Don’t Forget…
- Compare the entire 10/40GbE fabric cost to the 10/100GbE alternative. Cost includes all components: cables, optics, support, licenses.
- Break-out cables reduce cost by 25-35%: a QSFP-to-4xSFP cable costs less than four standard cables.
- 100GbE optics cost less than 2.5x 40GbE optics; this holds for transceivers as well as AoCs.
- Measure the value you get from 100% standard protocols; avoid “vendor specific” features.

Do The Math and Don’t Forget…
- Not all 100GbE switch solutions are equal: understand the differences between the various 100GbE switch solutions in the market. The following performance analysis provides a good view of the available options.
- $/GbE is the best way to measure network efficiency.
- 100GbE is significantly more cost effective, and hyperscale network design concepts make sense even in small-scale deployments.
- Web-scale IT is a force multiplier, regardless of data center footprint.
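The $/GbE metric from the slide can be computed for a whole fabric, summing every cost component the previous slide lists; all figures below are illustrative placeholders:

```python
# $/GbE for a fabric: total cost of all components divided by total
# server-facing bandwidth. Every number here is a placeholder.

def dollars_per_gbe(switch_cost, cable_cost, optics_cost,
                    support_cost, server_ports, gbe_per_port):
    total_cost = switch_cost + cable_cost + optics_cost + support_cost
    total_bandwidth_gbe = server_ports * gbe_per_port
    return total_cost / total_bandwidth_gbe

# e.g. a hypothetical leaf layer with 96 x 25GbE server ports
print(dollars_per_gbe(20000, 3000, 5000, 2000, 96, 25))  # 12.5
```

Running the same function over a 10/40GbE design and a 10/100GbE design makes the "compare the entire fabric cost" advice a single-line comparison.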

10GbE is Common, But 25/50GbE is Happening!
- Mellanox testing (10/25/40/50GbE): 2x more throughput and 1.8x more IOPS with 25GbE
- Quanta testing (Mellanox 40GbE vs. Intel 10GbE): up to 5x better reads with 40GbE
- Up to 70 Gb/s with 100GbE

More Software Options Every Day…
[Slide lists supported operating systems, with more coming soon (public)]

Switch is Indeed a Server with Many Ports!!!
- Basics (kernel 4.4): MAC addresses, IP addresses, MTU, admin status (UP/DOWN), operational status (RUNNING), port splitter, ethtool (port counters, FW version, LEDs), promiscuous mode
- Layer 2 (kernel 4.5): bridges and flooding control, FDB with learning and ageing time, 802.1Q bridges and PVID, 802.1D bridges, bond offload, team offload, STP, LLDP, IGMP snooping
- System (kernel 4.6): introducing devlink; temperature and fans
- QoS (kernel 4.7): VLAN filtering (vconfig), DCB and port QoS mapping
- L3 (kernel 4.8/4.9): IPv4/v6 routing, ECMP

Open Ethernet Spectrum Switch Portfolio
- SN2700 – 32 x 100GbE (64 x 50GbE): ideal 100GbE ToR / aggregation
- SN2410 – 8 x 100GbE + 48 x 25GbE: 25GbE → 100GbE ToR
- SN2100 – 16 x 100GbE: ideal storage/database switch, with the highest 25GbE density per rack unit
Our switch portfolio is built for agility and performance. We can build the world’s largest and strongest data center solutions with our HW and SW capabilities, while allowing the easiest price entry point into the network with the SX1012 and nearly unlimited growth potential with the 4Tb/s capacity of our ASIC.

Thank You