Marrying OpenStack and Bare-Metal Cloud


Marrying OpenStack and Bare-Metal Cloud (Pete Lumbis & David Iles, May 2018)

Our Speakers: Pete Lumbis, Technical Marketing Engineer, Cumulus Networks; David Iles, Senior Director of Ethernet Switching, Mellanox Technologies

Multi-Tenant Networks
- Single-tenant private cloud is easy: single ownership, single security policy
- Multi-tenant is more challenging: tenants may be competitors (mixing their traffic is a real problem), shared infrastructure, multiple security policies

Multi-Tenant Networks: VLANs
- Isolate traffic with network tags (802.1Q) over shared physical links
- Only Layer 3 routers can cross VLAN boundaries
- ML2 can provision VLANs on compute nodes or on physical switches
(Diagram: ML2 VLAN provisioning)
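As a rough illustration (not from the talk), a minimal sketch of the Neutron ml2_conf.ini for VLAN tenant networks; the physnet1 label and the VLAN range are hypothetical values:

    [ml2]
    type_drivers = flat,vlan
    tenant_network_types = vlan
    mechanism_drivers = openvswitch

    [ml2_type_vlan]
    # physnet1 is a placeholder provider-network label;
    # ML2 hands out tenant VLANs from this range
    network_vlan_ranges = physnet1:100:299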

VLANs Are Not Your Friend
Pros:
- Solve multi-tenancy networking
Cons:
- Single network path
- Low scalability: scale-up networking, not scale-out; limited VLAN range (4,096 at best)
- Large blast radius: a single server NIC failure can impact the whole broadcast domain
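(The 4,096 cap comes from the 12-bit VLAN ID field in the 802.1Q header: 2^12 = 4096. By contrast, the 24-bit VxLAN VNI allows 2^24, roughly 16.7 million, segments.)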

Beyond VLANs: VxLAN
- Same L2 extension and isolation as VLANs (VxLAN ID)
- Operates over an L3 network
- Small blast radius
- Enables scale-out networking
- Extremely high-bandwidth network
- Smaller ML2 footprint
(Diagram: ML2 VxLAN provisioning)
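For comparison with the VLAN configuration above, a minimal sketch of ml2_conf.ini using the VxLAN type driver; the VNI range is a hypothetical value:

    [ml2]
    type_drivers = vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,l2population

    [ml2_type_vxlan]
    # 24-bit VNIs give far more room than 4,096 VLAN IDs
    vni_ranges = 1:100000

    [agent]
    tunnel_types = vxlan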

Compute-Based VxLAN
- VxLAN from compute node to compute node; no VxLAN-based switches
- BGP to the server for simplicity: same configuration and troubleshooting for both network and compute, and no Layer 2
- BGP advertises the compute endpoint (VTEP)
- ML2 provisions VxLANs per tenant
(Diagram: ML2 VxLAN provisioning)
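A minimal sketch of what "BGP to the server" can look like with Free Range Routing (FRR) on a compute node; the ASN, interface name, and loopback address are hypothetical:

    router bgp 65101
     ! BGP unnumbered peering to the top-of-rack switch
     neighbor eth0 interface remote-as external
     address-family ipv4 unicast
      ! advertise the loopback address used as this host's VTEP
      network 10.1.1.1/32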

Compute-Based VxLAN: The Ugly
- VxLAN-capable NICs required: most servers have them, but not all; there is a performance hit without NIC offload
- BGP to the server is scary: it's really not, but not everyone likes the idea; network teams don't want server folks touching their network, and server folks don't want to learn BGP
- A complicated solution for smaller deployments
- Difficult for Ironic (bare metal)

Network Design Recap
- VLAN-based: bad, don't do it, even at a few racks
- VxLAN, compute + network based: great, but still requires network ML2; loss of a switch == loss of a lot of Neutron state; still requires VxLAN NICs
- VxLAN, compute only: the best solution if you are okay with BGP on servers; not the easiest for Ironic

Network Design Needs
- Stability: it isn't cloud if one host brings down the others
- Simplicity: I don't need the Avengers to run the infrastructure
- Flexibility: works for bare-metal and virtual workloads
- Scalability: maybe not 1000s, but 100s of tenants can be required
- Affordability: VxLAN NICs may not be an option

The Answer… VLAN-Based Compute + VxLAN-Based Network
- Not the same as Hierarchical Port Binding: HPB manages both VxLANs and VLANs; here ML2 only drives compute VLANs, and the network is pre-provisioned
- Offload VxLAN to the network: no need for special NICs
- Localized VLANs easily and safely scale to 100s of tenants; larger scale requires more complex solutions
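From the operator's side the workflow stays plain Neutron. As a hedged sketch (network and subnet names are hypothetical), creating a tenant network simply pulls the next VLAN from the ML2 range while the VxLAN fabric underneath is already in place:

    # ML2 allocates the next free VLAN from the configured range;
    # the matching VNI is already provisioned on the switches
    openstack network create blue-net
    openstack subnet create --network blue-net --subnet-range 192.168.10.0/24 blue-subnet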

VLAN + VxLAN
- VxLANs pre-configured between all switches
- Compute-facing network ports have VLAN trunks configured
- ML2 provisions VLANs on compute nodes as needed
- Every VLAN is mapped to an identical VxLAN tunnel (VNI)
(Diagram: ML2 VLAN provisioning; VxLANs between switches, VLANs down to compute)
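A sketch of the pre-provisioned switch side in Cumulus Linux ifupdown2 syntax, following the talk's one-to-one VLAN-to-VNI mapping; the interface names, tunnel IP, and ID ranges are hypothetical:

    # VxLAN interface for VNI 100, mapped into VLAN 100
    auto vni100
    iface vni100
        vxlan-id 100
        vxlan-local-tunnelip 10.0.0.11
        bridge-access 100

    # VLAN-aware bridge: compute-facing trunk plus the VxLAN interface
    auto bridge
    iface bridge
        bridge-vlan-aware yes
        bridge-ports swp1 vni100
        bridge-vids 100-199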

VxLAN Without a Controller?
Switches need VxLAN knowledge:
- WHO else is a VxLAN endpoint (VTEP)
- WHAT VxLAN tunnels exist on each VTEP
- WHERE MAC addresses live
Option 1: VxLAN controllers (OpenStack ML2, OpenContrail/Tungsten Fabric, OpenDaylight)
Option 2: EVPN: switches exchange this data without a controller; relies on extensions to BGP; multi-vendor (Cumulus, Cisco, Arista, Juniper)
(Diagram: a controller pushing VxLAN information vs. switches exchanging it directly)

EVPN: A Closer Look
- Switches build BGP relationships
- Each switch learns MAC addresses from its servers, just like a normal switch ("I know about the MAC for server A!" / "I know about the MAC for server B!")
- MAC information is exchanged via BGP ("Come to me to reach MAC A" / "Come to me to reach MAC B")
- Data is sent via VxLAN based on the BGP information (outer header: From Switch A, To Switch B (VxLAN); inner frame: From Server A, To Server B)
- Since BGP is used, there are no shared failures like in L2
(Diagram: servers A and B, each behind a switch, with the switches exchanging EVPN routes)
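A minimal sketch of the EVPN control plane on a leaf switch running FRR (as shipped on Cumulus Linux); the ASN and uplink interface are hypothetical:

    router bgp 65011
     ! BGP unnumbered peering toward the spine
     neighbor swp51 interface remote-as external
     !
     address-family l2vpn evpn
      neighbor swp51 activate
      ! advertise locally learned MACs (EVPN type-2 routes) for every local VNI
      advertise-all-vni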

BaGPipe: A Potential Future
- Neutron work with BGP, called BaGPipe, with two goals: inter-DC VPNs and Layer 3 to the compute node
- Layer 3 to the compute node can be done today with Free Range Routing + Neutron networking (what was described earlier), but in today's solution Neutron only controls VxLAN
- BaGPipe would have Neutron control BGP + VxLAN, nearly identical to EVPN on the server
- Extremely early days for BaGPipe, but the value is clear

How Do We Make It Work with Next-Gen Workloads? Machine Learning and NVMe Fabrics
- Next-generation storage: all-flash, PCIe-attached NVMe drives, RDMA over Ethernet (RoCE)
- Machine learning applications: GPU-accelerated, PCIe-attached GPUs, RDMA over Ethernet (RoCE)
- Both must run over an overlay network: RoCE + VxLAN
(Diagram: ML2 VxLAN provisioning)

EVPN Gotchas
- License-free features: BGP, VxLAN, ZTP, EVPN
- VxLAN routing in hardware: no loopback cables
- VTEP scale: many switches max out at 128 VTEPs
- RoCE over VxLAN: needed for NVMe over Fabrics and machine learning

Questions??