
1 Open-Source Multi-Purpose Leaf-Spine Fabric Saurav Das

2 Open-Source Multi-Purpose Leaf-Spine Fabric: white-box, bare-metal switches running open-source software, under SDN-based control by an ONOS controller cluster running a fabric control app

3 Value Proposition (1/2): a unique combination of these properties
L2/L3 leaf-spine DC fabric | bare-metal hardware | classic-SDN control | completely open-source software
Benefits of {classic SDN + bare-metal switches + open source}:
- Simpler and more flexible control: rapidly introduce new functions and features for customization
- Lower Total Cost of Ownership (TCO): eliminate commercial licenses & vendor lock-in; leverage the growing open-source community

4 Value Proposition (2/2): Trellis, a unique combination of these properties
Leaf-spine DC fabric underlay | virtual network overlay | unified SDN control over underlay & overlay
Trellis benefits: common SDN control over the underlay and overlay networks enables simple, efficient implementations of:
- Distributed virtual routing for tenant networks
- Optimized delivery of multicast traffic streams
- Many more resource optimizations and new capabilities (to be demonstrated in the near future)
Trellis is an important component of the CORD platform (http://opencord.org); it has been demonstrated as part of various CORD POCs and will rapidly evolve with the CORD platform and its various domains of use.

5 Outline
- Fabric Rationale: why DC fabrics? why bare-metal? why SDN? open-source components
- Fabric Operation: basic features; challenges overcome
- Fabric Application: introducing Trellis
- Fabric Roadmap

6 Why DC Leaf-Spine Fabrics? The traditional design (ToR switches → aggregation → core, with spanning tree and 1+1 redundancy) is poorly suited to east-west traffic, which dominates in modern data centers.
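The east-west argument can be made concrete with a little arithmetic: in a two-tier leaf-spine fabric every leaf connects to every spine, so any two servers on different leaves are exactly three switch-hops apart and have one equal-cost path per spine, all usable at once via ECMP. A minimal sketch (the topology size here is a hypothetical example, though the 8 x 40G uplinks match the switch specs later in the deck):

```python
# Sketch: why leaf-spine suits east-west traffic (illustrative numbers only).

def leaf_spine_paths(num_spines: int) -> int:
    """Equal-cost paths between servers on two different leaves:
    one path through each spine."""
    return num_spines

def bisection_bw_gbps(num_leaves: int, num_spines: int, link_gbps: int) -> int:
    """Bisection bandwidth when every leaf has one uplink per spine:
    cutting the fabric in half severs (num_leaves / 2) * num_spines links."""
    return (num_leaves // 2) * num_spines * link_gbps

# Hypothetical fabric: 16 leaves, 8 spines, 40G links.
print(leaf_spine_paths(8))           # 8 equal-cost paths, vs. 1 active
                                     # path under spanning tree
print(bisection_bw_gbps(16, 8, 40))  # 2560 Gbps
```

Under spanning tree, by contrast, all but one of those paths would be blocked, which is why the old tree design wastes most of its cross-sectional capacity.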

7 Why Bare-Metal Switches?
Hardware: CPU, DRAM, power supply + fans, merchant-silicon switch chip, ONIE boot loader; vendors include Accton, Delta, Dell, Wedge, Quanta, Interface Masters, Celestica
Software: Linux-based NOS (ONL, SwitchLight, PicOS, FBOSS, SONiC, OpenSwitch, Cumulus, Pigeon, FastPath, SnapRoute) with an API/driver layer (OF-DPA, OpenNSL, SAI, P4) and an optional remote API
Cost: 400,000 servers → 20,000 switches; at $5k per vendor switch = $100M; at $1k per bare-metal switch = $20M; savings across 10 data centers = $800M
Control: tailor the network to applications, add proprietary behavior, debug quickly, no vendor lock-in
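The slide's cost numbers can be reproduced directly (the 20:1 server-to-switch ratio and per-switch prices are the slide's own figures):

```python
# Reproduce the slide's cost arithmetic: 400,000 servers -> 20,000 switches,
# priced per switch for vendor vs. bare-metal hardware.

def fleet_cost(num_switches: int, unit_price: int) -> int:
    """Total switch capex for one data center."""
    return num_switches * unit_price

vendor     = fleet_cost(20_000, 5_000)   # $100M per data center
bare_metal = fleet_cost(20_000, 1_000)   # $20M per data center

savings_per_dc = vendor - bare_metal     # $80M per data center
print(savings_per_dc * 10)               # $800M across 10 data centers
```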

8 Why SDN? Benefits of Classic SDN
1. Simpler control with greater flexibility: networks work because we can master complexity, but what we should be doing is extracting simplicity, with the right abstractions
2. Speed of innovation, ease of service insertion & faster time to market: does not involve changing or creating a fully distributed protocol
3. Lower Total Cost of Ownership (TCO): lower opex (easier to manage, troubleshoot, emulate); lower capex (replaces proprietary hardware)

9 Open Hardware & Software Stacks
Leaf switch: Accton 6712 white-box SDN switch; 24 x 40G ports downlink to servers via 4 x 10G breakout DAC; 8 x 40G ports uplink to different spine switches; ECMP across all uplink ports; GE management port
Spine switch: Accton 6712 white-box SDN switch; 32 x 40G ports downlink to leaf switches via 40G QSFP+/DAC; GE management port
Software stack (leaf and spine): OCP bare-metal hardware → ONIE → OCP software (ONL) → BRCM SDK API → OF-DPA API → Indigo OF agent → OpenFlow 1.3 to the controller
OCP: Open Compute Project; ONL: Open Network Linux; ONIE: Open Network Install Environment; BRCM: Broadcom merchant-silicon ASICs; OF-DPA: OpenFlow Datapath Abstraction
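The leaf's "ECMP across all uplink ports" amounts to hashing each flow's 5-tuple to pick one of the 8 uplinks, so packets of one flow stay in order while different flows spread across all spines. A toy sketch (real ASICs use their own hardware hash; the field choices and hash function here are illustrative):

```python
import hashlib

UPLINKS = 8  # the leaf's 8 x 40G uplinks, one per spine

def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port):
    """Hash the 5-tuple so one flow always takes one uplink (no packet
    reordering), while the flow population spreads across all uplinks."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % UPLINKS

# Same flow -> same uplink, deterministically.
a = pick_uplink("10.0.1.5", "10.0.2.9", "tcp", 43123, 80)
b = pick_uplink("10.0.1.5", "10.0.2.9", "tcp", 43123, 80)
assert a == b and 0 <= a < UPLINKS
```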

10 ONF's Project Atrium Stack
ONOS running a peering application and a fabric control application, with Quagga BGP handling E-BGP sessions (VLANs x, y, z); an OFDPA driver speaks OpenFlow 1.3 down to the switch stack (Indigo OF agent → OF-DPA API → BRCM SDK API → Broadcom ASIC) on OCP bare-metal hardware with ONIE and OCP software (ONL)

11 Outline
- Fabric Rationale: why DC fabrics? why bare-metal? why SDN? open-source components
- Fabric Operation: basic features; challenges overcome
- Fabric Application: introducing Trellis
- Fabric Roadmap

12 Fabric Operation: L2 bridging at the edge, IPv4 unicast routing across the fabric using MPLS segment routing (SR)
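The "IPv4 unicast / MPLS SR" mode of operation can be sketched as follows: each leaf is assigned a node segment (an MPLS label), the ingress leaf pushes the label of the destination leaf, and the spines forward on that label alone, so they hold one entry per leaf rather than per-host routes. The switch names, labels, and subnets below are hypothetical:

```python
# Sketch of MPLS segment-routing forwarding in a leaf-spine fabric.
# Each leaf gets a globally known node-segment label (hypothetical values).
NODE_SID = {"leaf1": 101, "leaf2": 102, "leaf3": 103}

# Ingress leaf: map destination subnet -> destination leaf (hypothetical).
SUBNET_TO_LEAF = {"10.0.2.0/24": "leaf2", "10.0.3.0/24": "leaf3"}

def ingress_leaf_forward(dst_subnet: str) -> int:
    """Return the MPLS label the ingress leaf pushes toward the spines."""
    return NODE_SID[SUBNET_TO_LEAF[dst_subnet]]

def spine_forward(label: int) -> str:
    """Spines forward purely on the node label: one entry per leaf."""
    for leaf, sid in NODE_SID.items():
        if sid == label:
            return leaf
    raise KeyError(label)

label = ingress_leaf_forward("10.0.2.0/24")
assert spine_forward(label) == "leaf2"
# The egress leaf pops the label and L2-bridges to the destination server.
```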

13 Challenges: Data Plane Scale & Performance
Why OF-DPA? It allows programming of all flow tables and port groups via OpenFlow 1.3; OpenFlow 1.0's single table and per-port rules cannot do this. Using the full multi-table pipeline achieves dataplane scale.
OF-DPA pipeline (simplified view): Ingress Port table → VLAN and VLAN 1 tables → Termination MAC table → routing stage (Multicast Routing, Unicast Routing, MPLS, MPLS L2 Port, and Bridging tables) → ACL Policy table → group tables (L2 Flood, L3 ECMP, MPLS Label, L3 Multicast, L2 Interface) → physical ports
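The scale argument for multi-tables and port groups can be made concrete with a rule count: in a single flat table, matching N routes crossed with M ECMP next-hops costs on the order of N x M entries, whereas separate tables plus one shared group cost roughly N + M. The numbers below are illustrative, not measurements from this fabric:

```python
# Illustrative rule-count comparison: single flat table (OF 1.0 style)
# vs. multi-table pipeline with shared group entries (OF-DPA / OF 1.3).

def flat_table_entries(num_routes: int, num_ecmp_ports: int) -> int:
    """One rule per (route, output-port) combination."""
    return num_routes * num_ecmp_ports

def pipeline_entries(num_routes: int, num_ecmp_ports: int) -> int:
    """One unicast-routing entry per route, all pointing at a single
    shared L3 ECMP group that holds one bucket per port."""
    return num_routes + num_ecmp_ports

routes, ports = 10_000, 8
print(flat_table_entries(routes, ports))  # 80000 entries
print(pipeline_entries(routes, ports))    # 10008 entries
```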

14 Challenges: Control Plane HA, Scale & Performance
Classic-SDN myths:
1. "Dataplane packets need to go to the controller." Reality: the application designs the mode of operation; the fabric control application is designed so that dataplane packets never have to go to the controller.
2. "Controllers are out-of-the-network, like management stations." Reality: controllers are Network Elements (NEs). Almost all NE redundancy is 1:1 or 1+1 (2-way); ONOS does much, much more: 3-way, 5-way, 7-way redundancy. Bonus: it scales the same way.

15 ONOS N-Way Redundancy
Five ONOS instances; for each switch one instance is master (M) and the others are backups (B).
- Switches simultaneously connect to several controller instances; only one instance is master for a given switch, several other instances are backups
- Mastership is decided by the controllers; switches have no say
- Controller instances simultaneously connect to several switches; any instance can be master or backup for any switch
- Spreading mastership over the controller instances contributes to scale

16 ONOS N-Way Redundancy (failure case)
When controller instances are lost, switch mastership is redistributed among the surviving instances (M = master, B = backup, R = retry).
- Losing controller instances redistributes switch mastership
- Switches continue to retry lost connections
- A management watchdog can reboot lost controller instances
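The mastership behavior on these two slides can be sketched as a toy model: spread masters across instances, and on failure reassign only the orphaned switches to survivors. The real ONOS election runs over a distributed consensus store; the names and assignment policy here are illustrative only:

```python
# Toy model of N-way mastership: every switch has one master instance;
# losing an instance promotes a survivor for just that instance's switches.

def assign_masters(switches, instances):
    """Spread mastership round-robin so no one instance owns everything."""
    return {sw: instances[i % len(instances)] for i, sw in enumerate(switches)}

def fail_instance(masters, instances, dead):
    """Redistribute only the switches whose master died."""
    survivors = [i for i in instances if i != dead]
    return {sw: (m if m != dead else survivors[hash(sw) % len(survivors)])
            for sw, m in masters.items()}

switches = [f"s{i}" for i in range(10)]
instances = ["onos1", "onos2", "onos3"]
masters = assign_masters(switches, instances)
masters = fail_instance(masters, instances, "onos2")
assert "onos2" not in masters.values()  # its switches got new masters
assert set(masters) == set(switches)    # every switch still has a master
```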

17 Outline
- Fabric Rationale: why DC fabrics? why bare-metal? why SDN? open-source components
- Fabric Operation: basic features; challenges overcome
- Fabric Application: introducing Trellis
- Fabric Roadmap

18 What is Trellis?
1. DC leaf-spine fabric underlay
2. Virtual network overlay
3. Unified SDN control over underlay & overlay (ONOS controller cluster & apps)
The first DC network infrastructure of its kind, due to a unique combination of these three components. Trellis is the enabling network infrastructure for CORD (next slide). Common control over the underlay and overlay networks enables simple, efficient implementations of:
- Distributed virtual routing for tenant networks (slide 19)
- Optimized delivery of multicast traffic streams (slide 20)

19 CORD Architecture
Residential, enterprise, and mobile access links connect through the leaf-spine fabric (white-box, open-source, SDN-based, bare-metal) to metro routers. The ONOS controller cluster runs vRouter control, vOLT control, overlay control, underlay control, and multicast control; XOS orchestrates vSG VNFs running on OVS.

20 CORD System: vRouter
The ONOS controller cluster (with XOS) runs underlay control, overlay control, and the vRouter app, which peers via OSPF and I-BGP through Quagga (or another routing stack) with the upstream metro router. Traffic flows from the OLT (residential subscribers) through VNFs on OVS and out to the metro network.
- CORD vRouter is implemented as a big distributed router
- It presents the entire CORD infrastructure as a single router to the outside world

21 CORD System: Multicast
The ONOS controller cluster (with XOS) runs underlay control and multicast control, handling IGMP snooping toward subscribers and PIM-SSM toward the upstream metro router. IPv4 multicast video arrives from the metro network and is delivered through the fabric and OLT to residential subscribers via the vSG.
- Multicast video streams never need to go through any software switch or VNF

22 Outline
- Fabric Rationale: why DC fabrics? why bare-metal? why SDN? open-source components
- Fabric Operation: basic features; challenges overcome
- Fabric Application: introducing Trellis
- Fabric Roadmap

23 Fabric Roadmap
- Dual homing: servers to 2 ToRs; ToRs to access equipment; ToRs to upstream routers
- VXLAN hardware VTEP: overlay network support for bare-metal servers & appliances
- Support for E-Line services using MPLS pseudowires
- Other features: ISSU, ARP delegation, sFlow support, IPv6 support, scale tests, troubleshooting tools, new platforms

24 Collaborations & Community
Open-source collaboration: ONF, ON.Lab, Broadcom, Edge-core
Get involved:
- https://github.com/onfsdn/atrium-docs/wiki/ONOS-Based-Atrium-Leaf-Spine-Fabric-16A (Feb)
- https://wiki.onosproject.org/display/ONOS/CORD%3A+Leaf-Spine+Fabric+with+Segment+Routing (July)
More collaborators coming on board: Ciena, Dell, SnapRoute, other vendors & carriers
Contribute: download and test for lab and production trials; suggest or add new features; add new platforms

25 Summary
- L2/L3 leaf-spine fabric built on bare-metal hardware and open-source software
- Preferred architecture in modern DCs: horizontal scaling & high bisection bandwidth
- Designed to scale up to 16 racks (~40 hardware switches); low cost, ~$5k per switch
- SDN control plane: no distributed protocols; proactive mode of operation
- Highly available & scalable: ONOS N-way redundancy & scale
- Modern ASIC data plane: 1.28 Tbps switching bandwidth per switch
- Broadcom's OF-DPA + OpenFlow 1.3: use of multi-tables & port groups = dataplane scale
- Trellis: DC fabric underlay + virtual network overlay + unified SDN control; the first open-source DC network infrastructure of its kind
- Common SDN control provides optimized services (vRouter & multicast); more examples to follow
- Trellis is the enabling network infrastructure for CORD

