VXLAN Nexus 9000 Essentials for the Data Center Karim Afifi
Karim Afifi, 3xCCIE N [DC, Sec, R&S] Network Automation, Device Programmability, Embedded Management, and DevOps Enthusiast
VXLAN Nexus 9000 Agenda

Module 1: Introduction
- 1-01 Today's Data Center Challenges and the Need for Change
- 1-02 Market Transitions
- 1-03 Industry Offerings
- 1-04 Application Requirements

Module 2: Hardware Architecture
- 2-01 Modular Switch Portfolio
- 2-02 Chassis Components and Line Cards
- 2-03 Fixed Switch Portfolio

Module 3: NX-OS Features and Enhancements
- 3-01 Features

Module 4: VXLAN Primer
- 4-01 Overlay Benefits
- 4-02 Designing Overlays
- 4-03 VXLAN Operation
- VXLAN Implementation

Module 5: VXLAN Deep Dive
- 5-01 VXLAN Routing
- 5-02 VXLAN Bridging
- 5-03 VXLAN Packet Flow
- Lab-01 Configure VXLAN Bridging

Module 6: VXLAN New Enhancements/Features
- 6-01 BGP EVPN Control Plane for VXLAN
- Lab-02 VXLAN with MP-BGP EVPN Control Plane
- 6-02 VXLAN Multihoming
- 6-03 Distributed Anycast Gateway
- Lab-03 Distributed Anycast Gateway
- 6-04 ARP Suppression
- Lab-04 ARP Suppression and Port-Local VLAN
- 6-05 Ingress Replication
- 6-06 Local Scoping of VLANs
- 6-07 VXLAN Inter-Pod
- 6-08 VXLAN DCI Integration
- Lab-05 VXLAN Inter-Tenant Connectivity

Module 7: VXLAN Use Cases/Designs
- 7-01 Hardware-Based VTEP for VXLAN
- 7-02 Software-Based VTEP for VXLAN
- 7-03 Interoperability Between Software and Hardware VTEPs

Module 8: Solution Whiteboarding
Module 9: Portfolio Positioning
Module 10: Competitive Positioning
Module 11: NX-OS Mode Roadmap

Reviewer questions: (1) Do the Introduction topics look good, or do they need changes? (3) Does the NX-OS Features module need any additions?
Agenda
- Understand today's data center challenges
- Overlays in the DC
- VXLAN overview
- VXLAN deep dive
- VXLAN MP-BGP EVPN control plane
- VXLAN features of the Cisco Nexus 9000
- VXLAN designs
Data Center Challenges and Evolution
Data Center Architecture – Life Used to Be Easy
- The data center switching design was based on the hierarchical switching model we used everywhere
- Three tiers: Access, Aggregation, and Core (in the classic diagram: Core at Layer 3, Aggregation at the L2/L3 boundary with services attached, Access at Layer 2)
- L2/L3 boundary at the aggregation layer
- Add in services and you were done
What has changed? Almost everything:
- Hypervisors
- Cloud: IaaS, PaaS, SaaS
- MSDC
- Ultra low latency
Layer 2 Requires a Loop-Free Topology
- Spanning Tree Protocol (STP) is typically used to build this loop-free tree (in the diagram, 11 physical links are reduced to 5 logical forwarding links across switches S1, S2, and S3)
- A tree topology implies wasted bandwidth and therefore increased oversubscription
- Timer-based convergence
- Risk of protocol failure
Layer 2 in the Data Center
- Layer 2 is required by data center applications: clusters, server virtualization (VM mobility)
- Ideally a Layer 2 segment should be available everywhere, within the same DC or across DCs
- Layer 2 has severe limitations in how it operates
Inadequate Number of VLANs
- VLANs are used to provide isolation and/or limit the broadcast domain
- 12 bits (4096 values) identify the VLAN in the 802.1Q header, leaving 4094 usable VLANs per Layer 2 domain: too few for a multi-tenant environment (see the sketch below)
- 802.1Q tag format: a 4-byte tag (TPID 0x8100) inserted between the source MAC and the Type/Length field, carrying a 3-bit CoS field, a 1-bit format indicator, and the 12-bit VLAN ID
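To make the scaling gap concrete, here is a small illustrative Python sketch (not part of the original deck) comparing the 12-bit 802.1Q VLAN ID space with the 24-bit VXLAN VNI space:

```python
# Illustrative only: compare the 802.1Q VLAN ID space with the VXLAN VNI space.

VLAN_ID_BITS = 12          # 802.1Q VLAN ID field width
VNI_BITS = 24              # VXLAN Network Identifier field width

total_vlans = 2 ** VLAN_ID_BITS            # 4096 values
usable_vlans = total_vlans - 2             # IDs 0 and 4095 are reserved -> 4094 usable
total_vnis = 2 ** VNI_BITS                 # 16,777,216 segments

print(f"Usable VLANs per Layer-2 domain : {usable_vlans}")
print(f"VXLAN segments (VNIs)           : {total_vnis:,}")
print(f"Scale factor                    : ~{total_vnis // total_vlans}x")
```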
MAC Address Scaling
- MAC addresses encode no location or network hierarchy
- Default forwarding behaviour in a bridged network is to flood
- This does not scale: every switch learns every MAC (in the diagram, every switch in the Layer 2 domain ends up with MAC A in its table)
- The problem grows exponentially worse with VM sprawl
Spine-Leaf Architecture: Responding to Increasing Application Demands
- Moving to a spine/leaf construct
- East/west traffic is handled effectively
- No longer limited to two aggregation boxes
- Layer 2 can be anywhere
- Routed paths are created between the "access" (leaf) and "core" (spine) layers
- Services are bolted on to leaf switches
- Automation/orchestration removes human error
- Fast convergence
As Layer 3 pushes to the edge, so (potentially) do the services that used to sit at the aggregation layer. The leaf becomes the L2/L3 boundary and the access-layer connectivity point: STP root and loop-free features, service insertion point, and network policy control point (default gateway, DHCP relay, ACLs). The spine forms the routed domain, and the L2 domain terminates at the leaf.
Trend: Flexible Data Center Fabrics
Create virtual networks on top of an efficient IP network:
- Workload mobility and placement
- East/west traffic considerations
- Multi-tenancy / segmentation
- Scale
- Automation and programmability
- L2 + L3 connectivity
- Physical + virtual, open (covering both VM-based and physical hosts)
Data Center Architecture: There Is No 'Single Design' Anymore
A spectrum of design evolution: there is no single design any longer that fits every scenario. Typically each application has its own set of requirements for both the network and the servers. Every time a new application has to be placed into the data center, a customer has to look at the options, run a proof of concept and, after months of testing, finally procure the equipment and install it. This increases the demand on the staff and takes them away from other important day-to-day activities. Pod designs such as FlexPod help alleviate this by providing an infrastructure with bolt-on components that can be added with confidence that the design has been tested and approved by the various vendors.
- Ultra low latency: Nexus 9k/7k/3k and UCS; 10G edge moving to 40G
- HPC/grid: Nexus 2k, 3k, 5k, 6k, 7k, 9k and UCS; 10G moving to 40G/100G
- Virtualized data center: Nexus 2k, 5k, 6k, 7k, 9k and UCS; 1G edge moving to 10G
- MSDC: Nexus 2k, 3k, 5k, 6k, 7k, 9k and UCS; 1G edge moving to 10G
Overlays in the DC
Overlay Networks
An overlay network is a computer network built on top of another network (the underlay). Nodes in the overlay can be thought of as being connected by virtual or logical links, each of which corresponds to a path, possibly through many physical links, in the underlying network.
Take the Internet as an example. The Internet itself is essentially an "overlay network" on top of a solid optical infrastructure. The majority of Internet paths are now formed over a DWDM infrastructure that creates a virtual (wavelength-based) topology between routers and uses several forms of switching to interconnect them, and a large number of paths are still overlaid on SONET/SDH TDM networks that provide TDM circuits between routers. In other words, pretty much every router path in the Internet is an "overlaid" path.
Source: Wikipedia
Physical Switch Network
You start with a physical switch network: physical devices and physical connections.
Overlay Network
Then you add an overlay. The overlay provides the base for a logical network.
Overlay Network (Cont'd)
- Logical "switch" devices overlay the physical network and define their own topology
- The underlying physical network carries the data traffic for the overlay network (e.g., Overlay Network #1 in the diagram)
Multiple Overlay Networks
- Multiple "overlay" networks can co-exist at the same time
- Overlays provide logical network constructs for different tenants (customers)
Overlay examples: FabricPath, OTV, LISP
Benefits of Overlays
- Simplified management: use a single point of management to provide network resources for multi-tenant clouds without the need to change the physical network.
- Multi-tenancy at scale: provide scalable Layer 2 networks for a multi-tenant cloud that extend beyond 4000 VLANs. This capability is very important for private and public cloud-hosted environments.
- Workload-anywhere capability (mobility and reachability): optimally use server resources by placing the workload anywhere and moving it anywhere in the server farm as needed.
- Forwarding-topology flexibility: add arbitrary forwarding topologies on top of a fixed routed underlay topology.
Why Do We Need Overlays? Location and Identity Separation
Traditional behaviour: location and identity carry an "overloaded" semantic. A device's IPv4 or IPv6 address represents both its identity and its location in the IP core, so when the device moves it gets a new address for its new identity and location.
Overlay behaviour: location and identity are split. The device's IPv4 or IPv6 address represents its identity only, and the overlay tracks its location separately. When the device moves it keeps its address and identity; only the location changes (see the sketch below).
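As a minimal illustration of the location/identity split, the following Python sketch (hypothetical; the host and VTEP addresses are made up) keeps a host's identity fixed while only its location mapping changes when it moves:

```python
# Minimal sketch of a location/identity mapping database.
# The host's identity (its IP address) never changes; only its location (VTEP) does.

mapping_db = {}  # identity -> current location (e.g., the VTEP's underlay IP)

def register(host_ip: str, vtep_ip: str) -> None:
    """Record (or update) where a host identity is currently attached."""
    mapping_db[host_ip] = vtep_ip

def locate(host_ip: str) -> str:
    """Resolve a host identity to its current location."""
    return mapping_db[host_ip]

# Host 10.1.1.10 attaches behind leaf VTEP 192.0.2.1 ...
register("10.1.1.10", "192.0.2.1")
assert locate("10.1.1.10") == "192.0.2.1"

# ... then live-migrates behind VTEP 192.0.2.2: same identity, new location.
register("10.1.1.10", "192.0.2.2")
assert locate("10.1.1.10") == "192.0.2.2"
```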
Modern DC Fabric: Robust Underlay, Flexible Overlay
Seek well-integrated, best-in-class overlays and underlays.
Robust underlay/fabric:
- High-capacity, resilient fabric
- Intelligent packet handling
- Programmable and manageable
Flexible overlay virtual network:
- Mobility: track end-point attachment at the edges
- Scale: reduce core state; distribute and partition state to the network edge
- Flexibility/programmability
- Reduced number of touch points
Overlay Attributes
An overlay can be characterised along three dimensions:
- Service: Layer 2 service or Layer 3 service
- Edge device: host overlays or network overlays
- Signalling: data-plane learning or control-plane learning
Types of Overlay Service
Layer 2 overlays:
- Emulate a LAN segment
- Transport Ethernet frames (IP and non-IP)
- Single-subnet mobility (L2 domain)
- Exposure to open L2 flooding
- Useful for emulating physical topologies
Layer 3 overlays:
- Abstract IP-based connectivity
- Transport IP packets
- Full mobility regardless of subnets
- Contain network-related failures (floods)
- Useful for abstracting connectivity and policy
- Introduce L2 and L3 gateways
Hybrid L2/L3 overlays offer the best of both domains.
Overlay Edge Devices
The same attribute framework applies (service, edge device, signalling); the next slides focus on the edge-device dimension: host overlays versus network overlays.
Overlay Network Evolution: Edge Devices
- Network overlays: router/switch tunnel end-points; protocols for resiliency/loop avoidance; traditional VPNs; OTV, VPLS, LISP, FabricPath
- Host overlays: virtual end-points only; single administrative domain; VXLAN, NVGRE, STT
- Hybrid overlays: physical and virtual tunnel end-points; resiliency plus scale; cross-organization/federation; open standards
Host vs. Network-Based Overlays
Host-based overlays:
- Centralized control and point of management
- VTEP is initiated at the server
- The server performs encapsulation and forwarding
- End points are virtual
- Examples: Nexus 1000V, CSR 1000V, OVS
Network-based overlays:
- Distributed control
- VTEP is initiated at the edge of the network
- The edge switch performs encapsulation and forwarding
- End points are physical switches/routers
- Examples: Nexus 3K/5K/6K/7K, Nexus 9K standalone, ASR 1K/9K
Hybrid overlays: a combination of both host- and network-based overlays, where the virtual and physical worlds interconnect.
Overlay Signalling Evolution
Again within the same attribute framework, the focus now shifts to the signalling dimension: data-plane learning versus control-plane learning.
Overlay Signalling: Functions
- Auto-provisioning / service discovery: edge devices in an overlay need to discover each other
- Address advertising and tunnel mapping: edge devices must exchange host reachability information and map each end-point to its location
- Tunnel management: maintain and manage the connections between edge devices
Signalling can be performed in the data plane or in the control plane.
Overlay Signalling: Data Plane Learning
- Based on gleaning information from data-plane events (ARP, DHCP, etc.)
- Example: source-MAC learning on bridges (sketched below)
- Requires a flood facility for data-plane events to propagate: either a multicast tree or a unicast replication group at the head end
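The following toy Python sketch (illustrative only; the VTEP addresses and MACs are made up) mimics this flood-and-learn behaviour: source addresses are learned against the tunnel they arrived on, and unknown destinations are flooded to every remote tunnel endpoint:

```python
# Toy flood-and-learn forwarder for one overlay segment (illustrative only).

remote_vteps = {"192.0.2.1", "192.0.2.2", "192.0.2.3"}   # assumed flood list
mac_table = {}                                           # MAC -> remote VTEP

def receive(src_mac: str, dst_mac: str, ingress_vtep: str) -> set:
    """Learn the source, then forward: unicast if known, flood if unknown."""
    mac_table[src_mac] = ingress_vtep            # data-plane learning
    if dst_mac in mac_table:
        return {mac_table[dst_mac]}              # known unicast
    return remote_vteps - {ingress_vtep}         # unknown unicast -> flood

# First frame from A: destination B is unknown, so it is flooded.
print(receive("AA:AA", "BB:BB", "192.0.2.1"))    # floods to .2 and .3
# B replies: A was already learned, so the reply goes to one VTEP only.
print(receive("BB:BB", "AA:AA", "192.0.2.2"))    # {'192.0.2.1'}
```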
Overlay Signalling: Control Plane
The control plane provides:
- Service discovery
- Address advertising/mapping
- Tunnel management
- Extensions for multihoming and advanced services
Protocol or controller:
- Routing protocol among the edge devices: BGP, IS-IS, LISP
- Central database on a controller: distributed virtual switches (OVS, N1Kv/VSM)
Push or pull (contrasted in the sketch below):
- Push all information to all edge devices: BGP, IS-IS, controllers
- Pull and cache on the edge device: LISP, DNS, controllers
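A hedged Python sketch of the push vs. pull models described above (illustrative classes, not a real protocol or controller implementation): the push model distributes every mapping to every edge device up front, while the pull model resolves on demand and caches the answer:

```python
# Illustrative contrast of push vs. pull control-plane distribution.

class CentralRegistry:
    """Stands in for a route reflector or an overlay controller."""
    def __init__(self):
        self.mappings = {}          # host address -> VTEP
        self.edges = []             # edge devices subscribed for pushes

    def advertise(self, host, vtep):
        self.mappings[host] = vtep
        for edge in self.edges:     # push model: tell everyone immediately
            edge.table[host] = vtep

    def lookup(self, host):         # pull model: answer on demand
        return self.mappings.get(host)

class EdgeDevice:
    def __init__(self, registry, mode="push"):
        self.registry, self.mode, self.table = registry, mode, {}
        if mode == "push":
            registry.edges.append(self)

    def resolve(self, host):
        if host not in self.table and self.mode == "pull":
            self.table[host] = self.registry.lookup(host)   # pull and cache
        return self.table.get(host)

registry = CentralRegistry()
pusher, puller = EdgeDevice(registry, "push"), EdgeDevice(registry, "pull")
registry.advertise("10.1.1.10", "192.0.2.1")
print(pusher.resolve("10.1.1.10"))   # already in its table (pushed)
print(puller.resolve("10.1.1.10"))   # fetched on first use, then cached
```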
Which Encapsulation? VXLAN, NVGRE, LISP, MPLS, FabricPath
VXLAN
Why VXLAN?
- A "standards"-based overlay (RFC 7348)
- Leverages Layer 3 routing: proven, stable and scalable
- ECMP: all links forwarding (see the sketch below)
- Increases the Layer 2 namespace to 16 million identifiers
- Integration of physical and virtual
- Multi-tenancy
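The "all links forwarding" point depends on the underlay hashing the outer 5-tuple; because the outer UDP source port is derived from the inner frame, different inner flows can land on different equal-cost paths. A small illustrative Python sketch follows (the four-uplink topology and CRC32 hash are assumptions for the example, not a vendor algorithm):

```python
# Illustrative ECMP path selection on the outer 5-tuple (not a vendor hash).
import zlib

UPLINKS = ["spine1", "spine2", "spine3", "spine4"]   # assumed 4-way ECMP

def entropy_port(inner_headers: bytes) -> int:
    """Derive the outer UDP source port from a hash of the inner frame headers."""
    return 49152 + (zlib.crc32(inner_headers) % 16384)   # ephemeral range

def pick_uplink(src_ip, dst_ip, sport, dport=4789, proto=17):
    """Hash the outer 5-tuple to choose one of the equal-cost uplinks."""
    key = f"{src_ip}{dst_ip}{sport}{dport}{proto}".encode()
    return UPLINKS[zlib.crc32(key) % len(UPLINKS)]

# Two different inner flows between the same pair of VTEPs get different
# outer source ports, so they can be spread across different spines.
flow_a = entropy_port(b"inner flow A headers")
flow_b = entropy_port(b"inner flow B headers")
print(pick_uplink("192.0.2.1", "192.0.2.2", flow_a))
print(pick_uplink("192.0.2.1", "192.0.2.2", flow_b))
```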
DC Overlays: Virtual eXtensible LAN (VXLAN, RFC 7348)
Virtual eXtensible LAN (VXLAN) is a Layer 2 overlay scheme over a Layer 3 network. A 24-bit VXLAN Segment ID, or VXLAN Network Identifier (VNI), is included in the encapsulation to provide up to 16M VXLAN segments for traffic isolation/segmentation, in contrast to the 4K segments achievable with VLANs. Each of these segments represents a unique Layer 2 broadcast domain and can be administered so that it uniquely identifies a given tenant's address space or subnet.
Packet format: the original Ethernet frame (inner Ethernet header plus payload) is wrapped in an 8-byte VXLAN header (flags, reserved fields, and the 24-bit VNI), an outer UDP header, an outer IP header, and an outer Ethernet header, with a new FCS. The outer UDP destination port is 4789 (VXLAN); the outer UDP source port is a hash of the inner frame headers. A packing sketch follows below.
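As a concrete illustration of the 8-byte header described above, here is a minimal Python sketch (standard library only, illustrative rather than a production encapsulator) that packs the VXLAN flags and 24-bit VNI:

```python
# Pack the 8-byte VXLAN header defined in RFC 7348 (illustrative sketch).
import struct

VXLAN_PORT = 4789            # IANA-assigned outer UDP destination port
FLAG_VNI_VALID = 0x08        # 'I' flag: the VNI field carries a valid value

def vxlan_header(vni: int) -> bytes:
    """Return flags(1) + reserved(3) + VNI(3) + reserved(1) = 8 bytes."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!I", FLAG_VNI_VALID << 24) + struct.pack("!I", vni << 8)

hdr = vxlan_header(10010)            # 10010 is just an example VNI
assert len(hdr) == 8
print(hdr.hex())                     # '0800000000271a00' -> I flag set, VNI 0x00271a

# The header is then carried inside an outer UDP datagram:
#   destination port = 4789 (VXLAN), source port = hash of the inner
#   frame headers (which gives the underlay the ECMP entropy noted earlier).
```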
What Is VXLAN?
- VXLAN is a Layer 2 overlay scheme over a Layer 3 network
- Provides a means to "stretch" a Layer 2 network across an IP core (e.g., VLAN 10 and VLAN 20 extended across a Layer 3 network)
- Provides 24 bits for the L2 segment ID, which equates to 16 million unique L2 segments
Data Center "Fabric" Journey
The fabric has evolved in stages, each interconnected across the MAN/WAN: from STP and vPC, to FabricPath and VXLAN, to FabricPath with BGP, and now to VXLAN with EVPN.
Cisco VXLAN Portfolio
Goals: scale, secure multi-tenancy, workload mobility, workload anywhere.
Platforms: ASR 1000, CSR 1000V, Nexus 1000V, Nexus 2000, Nexus 3100, Nexus 5600, Nexus 7000, Nexus 9000, ASR 9000.
Capabilities: L2 gateway, L3 gateway, BGP EVPN control plane, anycast gateway, head-end replication.
Thank You