1
Software Defined Networks and OpenFlow
SDN CIO Summit
Nick McKeown & Guru Parulkar, Stanford University
In collaboration with Martin Casado and Scott Shenker, and with contributions by many others
2
Executive Summary
The network industry is starting to restructure.
The trend: "Software Defined Networks"
- Separation of control from the datapath
- Faster evolution of the network
- It has started in large data centers
- It may spread to WAN, campus, enterprise, home, and cellular networks
- GENI is putting SDN into the hands of researchers
3
What’s the problem?
4
Cellular Industry
- Recently made the transition to IP
- Billions of mobile users
- Need to securely extract payments and hold users accountable
- IP sucks at both, yet is hard to change
How can they fix IP to meet their needs?
5
Telco Operators
- Global IP traffic is growing 40-50% per year
- The end-customer's monthly bill remains unchanged
- Therefore, CAPEX and OPEX need to fall 40-50% per Gb/s per year
- But in practice, they fall by only ~20% per year
How can they stay in business? How can they differentiate their service?
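The squeeze compounds year over year. A toy calculation, using an assumed 45% midpoint for the slide's traffic-growth range and the ~20% actual annual cost decline, shows how total spend grows while revenue stays flat:

```python
# Toy model of the operators' squeeze (assumed figures: 45% annual
# traffic growth, ~20% annual decline in cost per Gb/s, flat revenue).
traffic_growth = 1.45     # traffic multiplies by this every year
unit_cost_decline = 0.80  # cost per Gb/s multiplies by this every year

spend = 1.0               # normalized network spend in year 0
for year in range(1, 6):
    spend *= traffic_growth * unit_cost_decline
    print(f"year {year}: relative spend = {spend:.2f}")
```

Under these assumptions, spend grows about 16% per year and roughly doubles in five years against flat revenue, which is the slide's point: unit costs must fall far faster than they currently do.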
6
Trend #1: (Logical) Centralization of Control
7
Already Happening
Enterprise WiFi:
- Set power and channel centrally
- Route flows centrally; cache decisions in APs
- CAPWAP, etc.
Telco backbone networks:
- Calculate routes centrally
- Cache routes in routers
8
Experiment: Stanford Campus
How hard is it to centrally control all flows? The 2006 numbers:
- 35,000 users
- 10,000 new flows/sec
- 137 network policies
- 2,000 switches
- 2,000 switch CPUs
9
How many $400 PCs would it take to centralize all routing and all 137 policies?
[Diagram: a bank of controllers managing Ethernet switches connecting Host A and Host B. See Ethane, SIGCOMM '07.]
10
Answer: less than one
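The back-of-the-envelope arithmetic behind this answer can be sketched as follows. The 10,000 flows/sec figure is the measured campus rate above; the per-PC flow-setup capacity is an assumed, illustrative number, not a measurement from the talk:

```python
# Sizing sketch: how much of one commodity PC does the campus need?
# 10,000 new flows/sec is the measured campus rate; a capacity of
# 30,000 flow setups/sec per PC is an assumed, illustrative figure.
new_flows_per_sec = 10_000
setups_per_pc_per_sec = 30_000

fraction_of_one_pc = new_flows_per_sec / setups_per_pc_per_sec
print(f"fraction of one $400 PC needed: {fraction_of_one_pc:.2f}")
```

Any plausible per-PC capacity above the campus flow-arrival rate gives the same qualitative answer: well under one machine.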
11
If you can centralize control, eventually you will, with replication for fault-tolerance and performance scaling.
12
How will the network be structured?
13
The Current Network
[Diagram: many features (routing, management, mobility management, access control, VPNs, ...) atop an operating system atop specialized packet forwarding hardware]
- Millions of lines of source code; 5,900 RFCs; a barrier to entry
- Billions of gates; bloated; power hungry
- Vertically integrated
Many complex functions are baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, ...
It looks like the mainframe industry in the 1980s.
14
The Restructured Network
[Diagram: features run on a common Network OS, which controls many elements of specialized packet forwarding hardware, each running only a thin operating system]
15
Trend #2: The Software-Defined Network
16
The "Software Defined Network"
1. An open interface to packet forwarding (OpenFlow)
2. At least one Network OS, probably many, open- and closed-source
3. A well-defined open API for features
[Diagram: features atop a Network OS, which controls many packet forwarding elements via OpenFlow]
17
OpenFlow Basics
A narrow, vendor-agnostic interface for controlling switches, routers, APs, and base stations.
18
Step 1: Separate Control from Datapath
[Diagram: a Network OS controlling several OpenFlow switches]
19
Step 2: Cache flow decisions in datapath
The Network OS installs its decisions into each switch's flow table as cached rules, e.g.:
- "If header = x, send to port 4"
- "If header = y, overwrite header with z, send to ports 5, 6"
- "If header = ?, send to me"
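A minimal sketch of this reactive caching model, with a hypothetical policy function standing in for the Network OS:

```python
# Sketch of Step 2: the switch consults its flow table; on a miss the
# packet goes to the controller, whose decision is cached as a rule.
flow_table = {}   # header -> action, held in the switch's datapath

def controller_decide(header):
    # Hypothetical centralized policy, standing in for the Network OS.
    return ("forward", 4) if header.startswith("10") else ("drop", None)

def datapath_process(header):
    if header not in flow_table:                        # table miss
        flow_table[header] = controller_decide(header)  # cache the decision
    return flow_table[header]       # later packets hit the cached rule

print(datapath_process("10ab"))  # first packet of the flow: table miss
print(datapath_process("10ab"))  # subsequent packets: flow-table hit
```

Only the first packet of a flow pays the round-trip to the controller; the rest are forwarded at hardware speed from the cached rule.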
20
Plumbing Primitives
Match arbitrary bits in headers:
- Match on any header, or on a user-defined header
- Allows any flow granularity
- e.g. Match: 1000x01xx...x, where x is a don't-care bit
Actions:
- Forward to port(s), drop, or send to the controller
- Overwrite the header with a mask; push or pop
- Forward at a specific bit-rate
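The wildcard matching above can be sketched as ternary (value, mask) pairs, the way TCAM-backed flow tables represent don't-care bits. The 8-bit pattern below is illustrative, not from the spec:

```python
# Sketch of ternary matching: 'x' in a pattern is a don't-care bit.
def compile_rule(pattern):
    """Turn a pattern like '1000x01x' into an integer (value, mask) pair."""
    value = int(pattern.replace("x", "0"), 2)
    mask = int("".join("0" if c == "x" else "1" for c in pattern), 2)
    return value, mask

def matches(header_bits, rule):
    value, mask = rule
    return (header_bits & mask) == value   # compare only the fixed bits

rule = compile_rule("1000x01x")            # illustrative 8-bit pattern
print(matches(0b10000010, rule))  # True: don't-care bits may differ
print(matches(0b10010010, rule))  # False: a fixed '0' bit is set to 1
```

Because matching only inspects the fixed bits, one rule can cover anything from a single microflow (no wildcards) to a broad aggregate (mostly wildcards), which is what "any flow granularity" means.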
21
Ethernet Switch/Router
22
Control Path (Software)
Data Path (Hardware)
23
An OpenFlow controller talks to the switch over the OpenFlow protocol (run over SSL). Inside the switch, the control path runs OpenFlow while the data path remains in hardware.
24
The "Software Defined Network"
1. An open interface to packet forwarding (OpenFlow)
2. At least one Network OS, probably many, open- and closed-source
3. A well-defined open API for features
[Diagram: features atop a Network OS, which controls many packet forwarding elements]
25
Network OS
Commercial:
- Several commercial Network OSes are in development
- Commercial deployments expected late 2010
Research:
- The research community mostly uses NOX, which is available as open source
- Expect new research OSes late 2010
26
Software Defined Networks in Data Centers
27
Example: A New Data Center
Cost:
- 200,000 servers with a fanout of 20 means 10,000 switches
- 10,000 switches at $5k per vendor switch = $50M; at $1k per commodity switch = $10M
- Savings across 10 data centers = $400M
Control:
- More flexible control; quickly improve and innovate
- Enables "cloud networking"
Several large data centers will use SDN.
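The cost arithmetic on this slide checks out as a quick sketch:

```python
# Quick check of the slide's data-center cost arithmetic.
servers, fanout = 200_000, 20
switches = servers // fanout                 # 10,000 switches
vendor_cost = switches * 5_000               # $5k vendor switch  -> $50M
commodity_cost = switches * 1_000            # $1k commodity switch -> $10M
savings_10_dcs = (vendor_cost - commodity_cost) * 10
print(f"savings across 10 data centers: ${savings_10_dcs // 10**6}M")
```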
28
Data Center Networks
Existing solutions:
- Tend to increase hardware complexity
- Unable to cope with virtualization and multi-tenancy
Software Defined Network:
- OpenFlow-enabled vSwitch: Open vSwitch
- Network optimized for the data center owner
- Several commercial products under development
29
Software Defined Networks on College Campuses
30
What We Are Doing at Stanford
- Defining the OpenFlow spec; open weekly meetings at Stanford
- Enabling researchers to innovate
- Adding OpenFlow to commercial switches, APs, ...
- Deploying on college campuses
- "Slicing" the network to allow many experiments
31
Virtualization or "Slicing" Layer
[Diagram: a slicing layer sits between the OpenFlow packet forwarding hardware and several Network Operating Systems, each controlling its own isolated "slice"]
32
Some research examples
33
FlowVisor Creates Virtual Networks
Multiple, isolated slices share the same physical network: for example, an OpenFlow wireless experiment, a PlugNServe load-balancer, and an OpenPipes experiment, each speaking the OpenFlow protocol to the switches through FlowVisor's per-slice policies.
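A toy sketch of the slicing idea: each slice owns a region of flowspace, and FlowVisor-style dispatch hands each control event to the slice whose region matches. The slice definitions and the fallback name here are hypothetical, not FlowVisor's actual policy language:

```python
# Toy flowspace slicing: each slice claims a region of flowspace via a
# predicate; control events are dispatched to the owning slice.
slices = {
    "wireless-experiment": lambda pkt: pkt["vlan"] == 10,
    "load-balancer":       lambda pkt: pkt["tcp_dst"] == 80,
}

def dispatch(pkt):
    for name, owns in slices.items():
        if owns(pkt):
            return name         # this slice's Network OS gets the event
    return "production-slice"   # fallback for unclaimed traffic

print(dispatch({"vlan": 10, "tcp_dst": 22}))  # wireless-experiment
print(dispatch({"vlan": 1, "tcp_dst": 80}))   # load-balancer
```

Keeping the slice regions disjoint is what makes the isolation claim hold: no packet can belong to two experiments at once.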
34
Demo Infrastructure with Slicing
35
Application-specific Load-balancing
Goal: Minimize HTTP response time over the campus network.
Approach: A "pick path & server" load-balancer runs on the Network OS and routes over the path that jointly minimizes (path latency, server latency).
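The joint choice can be sketched by enumerating candidate (path, server) pairs and taking the one with the lowest combined latency. All names and numbers below are hypothetical placeholders:

```python
# Sketch of the load-balancer's decision: jointly minimize
# path latency + server latency. All figures are hypothetical.
path_latency_ms = {"path-1": 12.0, "path-2": 5.0}
server_latency_ms = {"server-1": 30.0, "server-2": 8.0}

best_path, best_server = min(
    ((p, s) for p in path_latency_ms for s in server_latency_ms),
    key=lambda ps: path_latency_ms[ps[0]] + server_latency_ms[ps[1]],
)
print(best_path, best_server)   # the pair with the lowest combined latency
```

Optimizing the pair jointly matters: the fastest server reached over a slow path can lose to a slightly slower server on a fast path.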
36
Intercontinental VM Migration
Moved a VM from Stanford to Japan without changing its IP address. The VM hosted a video-game server with active network connections.
37
Converging Packet and Circuit Networks
Goal: A common control plane for "Layer 3" and "Layer 1" networks.
Approach: Add OpenFlow to all switches (IP routers, TDM and WDM switches) and use a common network OS (NOX). [Supercomputing 2009 demo; OFC 2010]
38
ElasticTree
Goal: Reduce energy usage in data center networks.
Approach: Reroute traffic, then shut off unneeded links and switches to reduce power; the Network OS and a DC manager "pick paths". [NSDI 2010]
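A toy version of the idea: size the active link set to current demand and power off the rest. Capacities and demand here are made-up numbers, not ElasticTree's actual optimizer:

```python
import math

# Toy ElasticTree-style calculation: keep only enough links powered
# on to carry current demand. All figures are hypothetical.
link_capacity_gbps = 10
current_demand_gbps = 23
total_links = 8

active_links = max(1, math.ceil(current_demand_gbps / link_capacity_gbps))
powered_off = total_links - active_links
print(f"{active_links} links active, {powered_off} powered off")
```

The real system solves this over a whole topology and must also reroute flows onto the surviving links, but the energy saving comes from the same observation: capacity provisioned for peak load is mostly idle off-peak.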
40
Executive Summary
The network industry is starting to restructure.
The trend: "Software Defined Networks"
- Separation of control from the datapath
- Faster evolution of the network
- It has started in large data centers
- It may spread to WAN, campus, enterprise, home, and cellular networks
- GENI is putting SDN into the hands of researchers
41
Thank you