Stanford University Software Defined Networks and OpenFlow SDN CIO Summit 2010 Nick McKeown & Guru Parulkar In collaboration with Martin Casado and Scott Shenker And contributions by many others
Executive Summary
- The network industry is starting to restructure
- The trend: "Software Defined Networks"
  - Separation of control from datapath
  - Faster evolution of the network
- It has started in large data centers
- It may spread to WAN, campus, enterprise, home and cellular networks
- GENI is putting SDN into the hands of researchers
What’s the problem?
Cellular Industry
- Recently made the transition to IP
- Billions of mobile users
- Need to securely extract payments and hold users accountable
- IP sucks at both, yet is hard to change
- How can they fix IP to meet their needs?
Telco Operators
- Global IP traffic is growing 40-50% per year
- The end-customer's monthly bill remains unchanged
- Therefore CAPEX and OPEX need to fall 40-50% per Gb/s per year, but in practice they fall by only ~20% per year
- How can they stay in business? How can they differentiate their service?
Trend #1 (Logical) centralization of control
Already happening
- Enterprise WiFi: set power and channel centrally; route flows centrally, cache decisions in APs (CAPWAP etc.)
- Telco backbone networks: calculate routes centrally; cache routes in routers
Experiment: Stanford campus
How hard is it to centrally control all flows? In 2006:
- 35,000 users
- 10,000 new flows/sec
- 137 network policies
- 2,000 switches (2,000 switch CPUs)
How many $400 PCs to centralize all routing and all 137 policies?
[Diagram: a small set of controllers managing a network of Ethernet switches connecting Host A and Host B]
[Ethane, Sigcomm '07]
Answer: less than one
If you can centralize control, eventually you will. With replication for fault-tolerance and performance scaling.
How will the network be structured?
The Current Network
- Many features (routing, management, mobility management, access control, VPNs, ...) run on a closed Operating System atop Specialized Packet Forwarding Hardware
- Millions of lines of source code; 5,900 RFCs; a barrier to entry
- Billions of gates; bloated; power hungry
- Vertically integrated: many complex functions baked into the infrastructure (OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, ...)
- Looks like the mainframe industry in the 1980s
Restructured Network
[Diagram: many Features running on one Network OS, which controls several boxes of "Operating System / Specialized Packet Forwarding Hardware"]
Trend #2 Software-Defined Network
The "Software-Defined Network"
1. Open interface to packet forwarding (OpenFlow)
2. At least one Network OS, probably many; open- and closed-source
3. Well-defined open API for features
[Diagram: Features atop a Network OS, controlling many Packet Forwarding elements via OpenFlow]
OpenFlow Basics
A narrow, vendor-agnostic interface to control switches, routers, APs and base stations.
Step 1: Separate Control from Datapath
[Diagram: a Network OS controlling several OpenFlow Switches]
Step 2: Cache flow decisions in the datapath
The Network OS installs rules into each switch's flow table, e.g.:
- "If header = x, send to port 4"
- "If header = y, overwrite header with z, send to ports 5,6"
- "If header = ?, send to me"
[Diagram: Network OS above OpenFlow Switches, each holding a Flow Table]
Plumbing Primitives
Match: arbitrary bits in the packet header, e.g. 1000x01xx0101001x
- Match on any header, or a user-defined header
- Allows any flow granularity
Actions:
- Forward to port(s), drop, send to controller
- Overwrite header with mask; push or pop
- Forward at a specific bit-rate
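The match/action model above can be sketched in a few lines. This is an illustrative toy, not the OpenFlow wire format: patterns, header strings and action names are invented for the example, with 'x' playing the wildcard role shown on the slide.

```python
def matches(pattern: str, header_bits: str) -> bool:
    """True if every non-wildcard bit of the pattern equals the header bit."""
    return all(p == "x" or p == b for p, b in zip(pattern, header_bits))

# Flow table: ordered (match pattern, action) pairs. The first rule uses
# the example pattern from the slide.
flow_table = [
    ("1000x01xx0101001x", ("forward", [4])),
    ("1001xxxxxxxxxxxxx", ("rewrite_and_forward", [5, 6])),
]

def lookup(header_bits: str):
    """Return the action for the first matching rule; a miss goes to the controller."""
    for pattern, action in flow_table:
        if matches(pattern, header_bits):
            return action
    return ("send_to_controller", [])
```

A cache miss falling through to the controller is exactly the "If header = ?, send to me" rule from the previous slide: the controller decides once, then caches the decision as a new flow-table entry.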
Ethernet Switch/Router
Control Path (Software) Data Path (Hardware)
OpenFlow Controller
[Diagram: the control path moves out of the switch into an OpenFlow Controller, which speaks the OpenFlow protocol over SSL to the data path (hardware) that remains in the switch]
The "Software Defined Network"
1. Open interface to packet forwarding
2. At least one Network OS, probably many; open- and closed-source
3. Well-defined open API for features
[Diagram: Features atop a Network OS, controlling many Packet Forwarding elements]
Network OS
Commercial:
- Several commercial Network OSes in development
- Commercial deployments expected late 2010
Research:
- The research community mostly uses NOX, open source at http://noxrepo.org
- Expect new research OSes late 2010
Software Defined Networks in Data Centers
Example: New Data Center
Cost:
- 200,000 servers with a fanout of 20 means 10,000 switches
- $5k vendor switch: $50M; $1k commodity switch: $10M
- Savings in 10 data centers: $400M
Control:
- More flexible control; quickly improve and innovate
- Enables "cloud networking"
Several large data centers will use SDN.
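The cost arithmetic on this slide works out as follows (all figures are the slide's own):

```python
servers = 200_000
fanout = 20                       # servers per switch
switches = servers // fanout      # 10,000 switches

vendor_cost = switches * 5_000    # $50M with $5k vendor switches
commodity_cost = switches * 1_000 # $10M with $1k commodity switches

# $40M saved per data center, so $400M across 10 data centers
savings_10_dcs = 10 * (vendor_cost - commodity_cost)
```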
Data Center Networks
Existing solutions:
- Tend to increase hardware complexity
- Unable to cope with virtualization and multi-tenancy
Software Defined Network:
- OpenFlow-enabled vSwitch: Open vSwitch, http://openvswitch.org
- Network optimized for the data center owner
- Several commercial products under development
Software Defined Networks on College Campuses
What we are doing at Stanford
- Defining the OpenFlow spec: see http://OpenFlow.org; open weekly meetings at Stanford
- Enabling researchers to innovate: add OpenFlow to commercial switches, APs, ...
- Deploy on college campuses; "slice" the network to allow many experiments
Virtualization or "Slicing" Layer
[Diagram: several Network Operating Systems (1-4), each running its own Feature, share the physical Packet Forwarding elements through an OpenFlow virtualization ("slicing") layer that keeps the slices isolated]
Some research examples
FlowVisor Creates Virtual Networks
Multiple, isolated slices in the same physical network.
[Diagram: FlowVisor sits between the OpenFlow switches and several controllers (OpenFlow Wireless Experiment, PlugNServe Load-balancer, OpenPipes Experiment), enforcing a per-slice policy over the OpenFlow protocol]
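FlowVisor's core idea can be sketched as partitioning "flowspace" among slices and handing each control event to the owning slice's controller. The slice names, field names and matching rules below are invented for illustration; they are not FlowVisor's actual policy syntax.

```python
# Each slice owns a region of flowspace, described here as field constraints
# (None means "any value"). Hypothetical slices echoing the demo on the slide.
slices = {
    "wireless-experiment": {"tcp_dst": None, "vlan": 10},  # all traffic on VLAN 10
    "load-balancer":       {"tcp_dst": 80,   "vlan": 20},  # HTTP on VLAN 20
}

def owning_slice(packet: dict):
    """Return the slice whose flowspace contains this packet, if any."""
    for name, space in slices.items():
        if all(v is None or packet.get(k) == v for k, v in space.items()):
            return name
    return None  # outside every slice: the event is not delivered
```

Because the regions are disjoint, each controller only ever sees (and can only program) traffic inside its own slice, which is what makes the slices isolated.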
Demo Infrastructure with Slicing
Application-specific Load-balancing
Goal: minimize HTTP response time over the campus network
Approach: route over the path that jointly minimizes <path latency, server latency>
[Diagram: a "Pick path & server" load-balancer feature on the Network OS steers flows from the Internet across OpenFlow switches]
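The joint choice above amounts to minimizing a sum over (server, path) pairs. A minimal sketch, with invented latency estimates (the real system measures these on the live network):

```python
# Estimated latencies in seconds; purely illustrative numbers.
server_latency = {"s1": 0.030, "s2": 0.010}
path_latency = {
    ("s1", "pathA"): 0.005, ("s1", "pathB"): 0.020,
    ("s2", "pathA"): 0.040, ("s2", "pathB"): 0.015,
}

def pick(servers, paths):
    """Return the (server, path) pair minimizing estimated total response time."""
    return min(
        ((s, p) for s in servers for p in paths),
        key=lambda sp: server_latency[sp[0]] + path_latency[sp],
    )
```

Note that the fastest server alone ("s2") or the fastest path alone ("pathA") need not give the fastest combination; optimizing the pair jointly is the point of the slide.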
Intercontinental VM Migration
Moved a VM from Stanford to Japan without changing its IP address. The VM hosted a video-game server with active network connections.
Converging Packet and Circuit Networks
Goal: a common control plane for "Layer 3" and "Layer 1" networks
Approach: add OpenFlow to all switches; use a common network OS (NOX)
[Diagram: Features on NOX controlling IP routers, a TDM switch and WDM switches via the OpenFlow protocol]
[Supercomputing 2009 Demo] [OFC 2010]
ElasticTree
Goal: reduce energy usage in data center networks
Approach: reroute traffic; shut off links and switches to reduce power
[Diagram: a DC Manager asks the Network OS to "pick paths"]
[NSDI 2010]
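The ElasticTree idea can be shown as a toy calculation: power only as much of the topology as current demand (plus headroom) requires. Link capacity and headroom values are invented for illustration; the real system solves an optimization over the full fat-tree topology.

```python
import math

LINK_CAPACITY_GBPS = 10  # assumed uplink capacity, for illustration

def links_needed(demand_gbps: float, headroom: float = 0.2) -> int:
    """Minimum number of powered-on uplinks to carry demand plus safety headroom."""
    return max(1, math.ceil(demand_gbps * (1 + headroom) / LINK_CAPACITY_GBPS))
```

At night, when demand drops, fewer links (and the switches behind them) need to stay powered; traffic is rerouted onto the remaining links and the rest are shut off.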
Executive Summary
- The network industry is starting to restructure
- The trend: "Software Defined Networks"
  - Separation of control from datapath
  - Faster evolution of the network
- It has started in large data centers
- It may spread to WAN, campus, enterprise, home and cellular networks
- GENI is putting SDN into the hands of researchers
Thank you