1
OpenFlow in Service Provider Networks AT&T Tech Talks October 2010
Rob Sherwood Saurav Das Yiannis Yiakoumis
2
Talk Overview: Motivation; What is OpenFlow; Deployments; OpenFlow in the WAN; Combined Circuit/Packet Switching; Demo; Future Directions.
3
We have lost our way: routing, management, mobility management, access control, VPNs, … Millions of lines of source code and 5,400 RFCs form a huge barrier to entry. [Diagram: many apps on an operating system atop specialized packet forwarding hardware.] 500M gates and 10 GB of RAM: bloated and power hungry.
4
Many complex functions are baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, … An industry with a "mainframe mentality". [Diagram: a router's software control plane (iBGP, eBGP, OSPF-TE, RSVP-TE, IPSec, authentication/security/access control, L2/L3 VPN, VLAN, MPLS, NAT, IPv6, anycast, multicast, Mobile IP, firewall, multi-layer multi-region) sitting above the hardware datapath.]
5
Glacial process of innovation made worse by captive standards process
Idea → Standardize → Wait 10 years → Deployment. Driven by vendors; consumers largely locked out; glacial innovation.
6
New Generation Providers Already Buy into It
In a nutshell: driven by cost and control, and it started in the data centers. What new-generation providers have been doing within their datacenters: buy bare-metal switches/routers and write their own control/management applications on a common platform.
7
Change is happening in non-traditional markets
[Diagram: today each box of specialized packet forwarding hardware ships with its own operating system and apps; the trend is to pull the apps and operating system out of the boxes and onto a common Network Operating System.]
8
The “Software-defined Network”
1. Open interface to hardware (simple packet forwarding hardware). 2. At least one good network operating system: extensible, possibly open source. 3. Well-defined open API for apps. [Diagram: apps running on a Network Operating System that controls many boxes of simple packet forwarding hardware.]
9
Trend: virtualization, or "slicing". [Diagram: the computer industry stack (x86 hardware, a virtualization layer, Windows/Linux/Mac OS, apps) alongside the proposed network industry stack (simple hardware, OpenFlow, a virtualization/slicing layer, NOX or another network OS, apps).] A simple, common, stable hardware substrate below + programmability + a strong isolation model + competition above = faster innovation.
10
What is OpenFlow?
11
Short Story: OpenFlow is an API
OpenFlow lets you control how packets are forwarded and is implementable on COTS hardware. It makes deployed networks programmable, not just configurable, and makes innovation easier. Result: increased control through custom forwarding, and reduced cost because an open API increases competition.
12
Ethernet Switch/Router
13
Control Path (Software)
Data Path (Hardware)
14
In an OpenFlow switch, the control path moves to an external OpenFlow controller, which talks to a thin OpenFlow agent on the switch over the OpenFlow protocol (SSL/TCP); the data path stays in hardware.
15
OpenFlow Flow Table Abstraction. [Diagram: a controller PC speaks OpenFlow to the switch's firmware.] In the switch's software layer, a flow table holds entries of the form (MAC src, MAC dst, IP src, IP dst, TCP sport, TCP dport) → Action; for example, a mostly wildcarded entry can send all matching packets out port 1. The hardware layer forwards packets among the physical ports (ports 1-4 in the figure).
16
OpenFlow Basics Flow Table Entries
Each flow table entry has three parts: a Rule (the match), an Action, and Stats (packet and byte counters). Possible actions: forward the packet to one or more ports; encapsulate and forward it to the controller; drop it; send it to the switch's normal processing pipeline; or modify header fields. The rule matches on Switch Port, VLAN ID, MAC src, MAC dst, Eth type, IP src, IP dst, IP protocol, TCP sport and TCP dport, with a mask selecting which fields to match.
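The rule/action/stats structure maps naturally onto a small data type. Here is a minimal, library-free Python sketch (class, field names and the example MAC address are our own illustrations, not the OpenFlow wire format) of a wildcard-capable flow entry with its counters:

```python
from dataclasses import dataclass
from typing import Dict, List

# OpenFlow 1.0-style match fields (illustrative names); any field left out of a
# rule's match dict is treated as a wildcard.
MATCH_FIELDS = ("in_port", "vlan_id", "mac_src", "mac_dst", "eth_type",
                "ip_src", "ip_dst", "ip_proto", "tcp_sport", "tcp_dport")

@dataclass
class FlowEntry:
    match: Dict[str, object]   # rule: only the fields to match on
    actions: List[str]         # e.g. ["output:6"], ["controller"], ["drop"]
    packets: int = 0           # stats: packet counter
    bytes: int = 0             # stats: byte counter

    def matches(self, pkt: Dict[str, object]) -> bool:
        """A packet matches if every non-wildcarded field is equal."""
        return all(pkt.get(f) == v for f, v in self.match.items())

# Hypothetical example: forward all frames for one destination MAC out port 6.
rule = FlowEntry(match={"mac_dst": "00:1f:00:00:00:01"}, actions=["output:6"])
pkt = {"in_port": 3, "mac_dst": "00:1f:00:00:00:01", "tcp_dport": 80}
if rule.matches(pkt):
    rule.packets += 1
    rule.bytes += 1500
print(rule.actions, rule.packets, rule.bytes)   # ['output:6'] 1 1500
```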
17
Examples.
Switching: match MAC dst = 00:1f:.. with every other field wildcarded; action: forward to port 6.
Flow switching: match Switch Port = 3, MAC src = 00:20.., MAC dst = 00:1f.., Eth type = 0800, VLAN = vlan1, IP protocol = 4, TCP sport = 17264, TCP dport = 80, plus specific IP src/dst; action: forward to port 6.
Firewall: match TCP dport = 22 with every other field wildcarded; action: drop.
18
Examples (continued).
Routing: match on IP dst, with every other field wildcarded; action: forward to port 6.
VLAN switching: match MAC dst = 00:1f.. and VLAN ID = vlan1, with every other field wildcarded; action: forward to ports 6, 7 and 9.
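Read together, the two example slides amount to an ordered rule table. The sketch below (pure Python, no OpenFlow library; the concrete MAC address and first-match ordering are assumptions standing in for real rule priorities) shows how such a table classifies packets and how a table miss falls back to the controller:

```python
# Each rule is (match_dict, action); the first rule whose specified fields all
# match wins. Real switches use explicit priorities; list order stands in here.
RULES = [
    ({"tcp_dport": 22}, "drop"),                                   # firewall: block ssh
    ({"in_port": 3, "eth_type": 0x0800, "vlan_id": 1,
      "ip_proto": 4, "tcp_sport": 17264, "tcp_dport": 80},
     "output:6"),                                                  # flow switching
    ({"mac_dst": "00:1f:00:00:00:01", "vlan_id": 1},
     "output:6,7,9"),                                              # VLAN switching
]

def lookup(pkt: dict) -> str:
    for match, action in RULES:
        if all(pkt.get(field) == value for field, value in match.items()):
            return action
    return "send-to-controller"   # table miss: punt the packet to the controller

print(lookup({"tcp_dport": 22}))                                   # drop
print(lookup({"mac_dst": "00:1f:00:00:00:01", "vlan_id": 1}))      # output:6,7,9
print(lookup({"tcp_dport": 8080}))                                 # send-to-controller
```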
19
OpenFlow Usage Dedicated OpenFlow Network
[Diagram: a PC running a controller ("Aaron's code") speaks the OpenFlow protocol to several OpenFlow switches, each holding a flow table of Rule / Action / Statistics entries. OpenFlowSwitch.org]
20
Network Design Decisions
Forwarding logic, of course; centralized vs. distributed control; fine- vs. coarse-grained rules; reactive vs. proactive rule creation; and likely more: this is an open research area.
21
Centralized vs Distributed Control
[Diagram: centralized control, with one controller managing all OpenFlow switches, versus distributed control, with several controllers each managing a subset of the switches.]
22
Flow Routing vs. Aggregation Both models are possible with OpenFlow
Flow-based: every flow is individually set up by the controller; exact-match flow entries; the flow table contains one entry per flow; good for fine-grained control, e.g. campus networks. Aggregated: one flow entry covers a large group of flows; wildcard flow entries; the flow table contains one entry per category of flows; good for large numbers of flows, e.g. a backbone.
23
Reactive vs. Proactive Both models are possible with OpenFlow
Reactive: the first packet of a flow triggers the controller to insert flow entries; efficient use of the flow table; every flow incurs a small additional setup time; if the control connection is lost, the switch has limited utility. Proactive: the controller pre-populates the flow table in the switch; zero additional flow setup time; loss of the control connection does not disrupt traffic; essentially requires aggregated (wildcard) rules.
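A toy model (no real OpenFlow controller framework; the ToySwitch and packet_in names are invented for illustration) makes the trade-off concrete: reactive setup caches an exact-match entry after punting the first packet to the controller, while proactive setup pre-installs wildcard rules so no packet ever waits on the controller:

```python
class ToySwitch:
    def __init__(self):
        self.flow_table = []                            # list of (match, action)

    def install(self, match, action):
        self.flow_table.append((match, action))

    def forward(self, pkt, controller=None):
        for match, action in self.flow_table:
            if all(pkt.get(k) == v for k, v in match.items()):
                return action                           # hit: forwarded in hardware
        # Table miss: punt to the controller (reactive), or drop if unreachable.
        return controller.packet_in(self, pkt) if controller else "drop"

class ReactiveController:
    def packet_in(self, switch, pkt):
        # Install an exact-match entry for this flow, then tell the switch what to do.
        match = {k: pkt[k] for k in ("ip_src", "ip_dst", "tcp_dport")}
        switch.install(match, "output:1")
        return "output:1"

def proactive_setup(switch):
    # Pre-populated wildcard (aggregated) rules: zero per-flow setup latency,
    # and traffic keeps flowing even if the control connection is lost.
    switch.install({"tcp_dport": 80}, "output:1")
    switch.install({}, "output:2")                      # catch-all

reactive_sw, ctrl = ToySwitch(), ReactiveController()
print(reactive_sw.forward({"ip_src": "a", "ip_dst": "b", "tcp_dport": 80}, ctrl))
print(len(reactive_sw.flow_table))                      # 1: exact-match entry cached

proactive_sw = ToySwitch()
proactive_setup(proactive_sw)
print(proactive_sw.forward({"tcp_dport": 80}))          # output:1, no controller needed
```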
24
OpenFlow Application: Network Slicing
Divide the production network into logical slices: each slice/service controls its own packet forwarding; users pick which slice controls their traffic (opt-in); existing production services, e.g. spanning tree and OSPF/BGP, run in their own slice. Enforce strong isolation between slices: actions in one slice do not affect another. This allows a (logical) testbed to mirror the production network: real hardware, performance, topologies, scale and users. Prototype implementation: FlowVisor.
25
Add a Slicing Layer Between Planes
Each slice runs its own custom control plane process and generates its own rules. [Diagram: slice 1, 2 and 3 controllers sit above a slicing layer governed by slice policies; the slicing layer passes rules down to the data plane and exceptions back up over the control/data protocol.]
26
Network Slicing Architecture
A network slice is a collection of sliced switches/routers. The data plane is unmodified: packets are forwarded with no performance penalty, and slicing works with existing ASICs. A transparent slicing layer sits in between: each slice believes it owns the data path; the layer enforces isolation between slices, i.e. it rewrites or drops rules so they adhere to the slice policy, and it forwards exceptions to the correct slice(s).
27
Slicing Policies The policy specifies resource limits for each slice:
Link bandwidth; maximum number of forwarding rules; topology; fraction of switch/router CPU; and FlowSpace: which packets does the slice control?
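As a rough illustration (this is not FlowVisor's actual configuration format, and all names and numbers are invented), a slice policy can be captured in a small record like this:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SlicePolicy:
    name: str
    bandwidth_mbps: int                  # link bandwidth reserved for the slice
    max_flow_rules: int                  # forwarding-rule budget in the switches
    switches: List[str]                  # topology: which switches the slice may touch
    cpu_fraction: float                  # share of switch/router control CPU
    flowspace: List[Dict[str, object]]   # which packets the slice controls

http_slice = SlicePolicy(
    name="slice-1-http",
    bandwidth_mbps=100,
    max_flow_rules=1000,
    switches=["sw1", "sw2", "sw3"],
    cpu_fraction=0.25,
    flowspace=[{"ip_proto": 6, "tcp_dport": 80}],   # all TCP port-80 traffic
)
print(http_slice.name, http_slice.bandwidth_mbps, "Mb/s")
```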
28
FlowSpace: Maps Packets to Slices
"flowspace is a way of thinking about classes of packets" "each slice has forwarding control of a specific set of packets, as specified by packet header fields" "that is, all packets in a given flow are controlled by the same slice" "each flow is controlled by exactly one slice" (ignoring monitoring slices for the purpose of the talk) "in practice, flow spaces are described using ordered ACL-like rules"
29
Real User Traffic: Opt-In
Allow users to opt in to services in real time: users can delegate control of individual flows to slices, which adds new FlowSpace to each slice's policy. Example: "Slice 1 will handle my HTTP traffic", "Slice 2 will handle my VoIP traffic", "Slice 3 will handle everything else". This creates incentives for building high-quality services.
30
FlowVisor Implemented on OpenFlow
[Diagram: custom control planes (slice controllers) running on servers speak the OpenFlow protocol to the FlowVisor, which in turn speaks OpenFlow down to the firmware of each switch/router; through these stub control planes the slices reach unmodified data paths.]
31
FlowVisor Message Handling
[Diagram: controllers for Alice, Bob and Cathy sit above the FlowVisor. When a controller sends a rule, the FlowVisor performs a policy check ("is this rule allowed?") before passing it to the switch's OpenFlow firmware. When the data path raises an exception packet, the FlowVisor performs a policy check ("who controls this packet?") and forwards it to the right controller. Packets that match installed rules are forwarded in hardware at full line rate.]
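The two policy checks can be sketched as follows (a simplified model, not FlowVisor's code; the slice definitions, field names and the intersection rule are assumptions):

```python
# Each slice's flowspace is reduced here to a single header constraint.
SLICES = {
    "alice": {"tcp_dport": 80},   # Alice's slice controls web traffic
    "bob":   {"tcp_dport": 22},   # Bob's slice controls ssh traffic
}

def check_flow_mod(slice_name: str, match: dict) -> dict:
    """Downstream check ("is this rule allowed?"): rewrite the rule so it
    cannot escape the slice's flowspace, or reject it outright."""
    allowed = SLICES[slice_name]
    for field, value in allowed.items():
        if match.get(field) not in (None, value):
            raise PermissionError(f"{slice_name} may not control {match}")
    return {**match, **allowed}    # intersect: force the slice's constraints in

def route_packet_in(pkt: dict) -> str:
    """Upstream check ("who controls this packet?"): forward the exception
    only to the controller of the owning slice."""
    for slice_name, space in SLICES.items():
        if all(pkt.get(k) == v for k, v in space.items()):
            return slice_name
    return "production"            # no slice opted in: leave it to the production slice

print(check_flow_mod("alice", {"ip_src": "10.0.0.1"}))   # narrowed to tcp_dport 80
print(route_packet_in({"tcp_dport": 22}))                # bob
```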
32
OpenFlow Deployments
33
OpenFlow has been prototyped on….
Ethernet switches: HP, Cisco, NEC, Quanta, and more underway. IP routers: Cisco, Juniper, NEC. Switching chips: Broadcom, Marvell. Transport switches: Ciena, Fujitsu. WiFi APs and WiMAX base stations. Most (all?) hardware switch implementations are now based on Open vSwitch.
34
Deployment: Stanford. Our real, production network: 15 switches, 35 APs, 25+ users, 1+ year of use, including my personal and web traffic! The same physical network hosts Stanford demos: 7 different demos.
35
Demo Infrastructure with Slicing
36
Deployments: GENI
37
(Public) Industry Interest
Google has been a main proponent of the new OpenFlow 1.1 WAN features (ECMP, MPLS-label matching) and showed an MPLS LDP-OpenFlow-speaking router at NANOG50. NEC has announced commercial products, initially for datacenters, and is talking to providers. Ericsson presented "MPLS OpenFlow and the Split Router Architecture: A Research Approach" at MPLS 2010.
38
OpenFlow in the WAN
39
CAPEX: 30-40%; OPEX: 60-70% … and yet service providers own and operate two such networks: IP and transport.
40
Motivation IP & Transport Networks are separate
The two networks are managed and operated independently, resulting in duplication of functions and resources in multiple layers and in significant capex and opex burdens. This is well known. [Diagram: IP/MPLS routers overlaid on a GMPLS transport network of circuit switches.]
41
Motivation IP & Transport Networks do not interact IP links are static
IP links are static, and are supported by static circuits or lambdas in the transport network. [Diagram: IP/MPLS routers connected over a GMPLS transport network of circuit switches.]
42
What does it mean for the IP network?
IP backbone network design: router connections are hardwired by lambdas, 4x to 10x over-provisioned for peak-traffic protection (IP over DWDM). The big problem: ever more over-provisioned links and ever bigger routers. How is this scalable? (*April 2002)
43
Bigger Routers? How is this scalable??
Dependence on large backbone routers (Juniper TX8/T640, Cisco CRS-1): expensive and power hungry. How is this scalable?
44
Functionality Issues! Dependence on large Backbone Routers
Complex & Unreliable Network World 05/16/2007 Dependence on packet-switching Traffic-mix tipping heavily towards video Questionable if per-hop packet-by-packet processing is a good idea Dependence on over-provisioned links Over-provisioning masks packet switching simply not very good at providing bandwidth, delay, jitter and loss guarantees
45
How can Optics help? Optical Switches Dynamic Circuit Switching
Optical switches offer 10x more capacity per unit volume (Gb/s/m³), 10x less power consumption, 10x less cost per unit capacity (Gb/s), and five-nines availability. Dynamic circuit switching recovers faster from failures, provides guaranteed bandwidth and bandwidth-on-demand (good for video flows), and guarantees low-latency, jitter-free paths, helping meet SLAs and lowering the need for over-provisioned IP links.
46
Motivation IP & Transport Networks do not interact IP links are static
IP links are static, and are supported by static circuits or lambdas in the transport network. [Diagram: IP/MPLS routers connected over a GMPLS transport network of circuit switches.]
47
What does it mean for the Transport network?
Without interaction with a higher layer, there is really no need to support dynamic services, and thus no need for an automated control plane; so the transport network remains manually controlled via NMS/EMS, and circuits to support a service take days to provision. Without visibility into higher-layer services, the transport network reduces to a bandwidth seller. The Internet can help: it carries a wide variety of services with different requirements that could take advantage of dynamic circuit characteristics. (*April 2002)
48
What is needed: converged packet and circuit networks, managed and operated commonly, benefiting from both packet and circuit switches and from dynamic interaction between packet switching and dynamic circuit switching. This requires a common way to control and a common way to use both.
49
But convergence is hard, mainly because the two networks have very different architectures, which makes integrated operation hard; and previous attempts at convergence have assumed that the networks remain the same, making what goes across them bloated, complicated and ultimately unusable. We believe true convergence will come about from architectural change!
50
[Diagram: today's separate IP/MPLS networks and GMPLS transport network, brought under a unified control plane (UCP) and redrawn as a single flow network.]
51
pac.c research goal: packet and circuit flows commonly controlled and managed. A simple network of flow switches that switch at different granularities (packet, time-slot, lambda and fiber), under a simple, unified, automated control plane.
52
… a common way to control
Packet flows match on the usual header fields (switch port, MAC src/dst, Eth type, VLAN ID, IP src/dst, IP protocol, TCP sport/dport) and carry an action. Circuit flows exploit the cross-connect table in circuit switches: an input (port, lambda, starting time-slot, VCG, signal type) is mapped to an output (port, lambda, starting time-slot, VCG, signal type). The flow abstraction presents a unifying abstraction, blurring the distinction between the underlying packet and circuit layers and regarding both as flows in a flow-switched network.
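A rough sketch of that unifying abstraction (the class names, wavelength values and fields are illustrative only, not the extensions actually proposed for the OpenFlow protocol):

```python
from dataclasses import dataclass
from typing import Dict, List, Union

@dataclass
class PacketFlow:
    match: Dict[str, object]    # header fields: in_port, MACs, VLAN, IP, TCP ports, ...
    action: str                 # e.g. "output:6"

@dataclass
class CircuitFlow:
    in_port: int                # cross-connect input side
    in_lambda_nm: float         # wavelength (time-slot/VCG would apply to TDM signals)
    in_timeslot: int
    out_port: int               # cross-connect output side
    out_lambda_nm: float
    out_timeslot: int

Flow = Union[PacketFlow, CircuitFlow]   # both are just "flows" to the controller

def describe(flow: Flow) -> str:
    if isinstance(flow, PacketFlow):
        return f"packet flow {flow.match} -> {flow.action}"
    return (f"circuit cross-connect: port {flow.in_port}/{flow.in_lambda_nm} nm "
            f"-> port {flow.out_port}/{flow.out_lambda_nm} nm")

flows: List[Flow] = [
    PacketFlow(match={"tcp_dport": 80}, action="output:6"),
    CircuitFlow(1, 1550.12, 0, 9, 1552.52, 0),
]
for f in flows:
    print(describe(f))
```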
53
… a common way to use: a unified architecture. [Diagram, bottom to top: an underlying data plane of packet switches, circuit switches and hybrid packet-and-circuit switches; the OpenFlow protocol as the unifying abstraction; a virtualization (slicing) plane; a network operating system as the unified control plane; and networking applications on top, such as variable-bandwidth packet links, dynamic optical bypass, unified recovery, application-aware QoS and traffic engineering.]
54
Example Application: Congestion Control via Variable-Bandwidth Packet Links
55
OpenFlow Demo at SC09
56
Lab Demo with Wavelength Switches. [Diagram: an OpenFlow controller speaks the OpenFlow protocol to NetFPGA-based OpenFlow packet switches (NF1, NF2) and to a WSS-based OpenFlow circuit switch built from a 1x9 wavelength-selective switch (WSS), an AWG, GE-to-DWDM SFP converters and GE E-O/O-E interfaces, over 25 km of SMF (with a tap to an OSA); video clients stream from a video server across the setup.]
57
Lab Demo with Wavelength Switches
[Photo: the OpenFlow circuit switch, 25 km of SMF, the OpenFlow packet switch and the GE-optical mux/demux.]
58
OpenFlow Enabled Converged Packet and Circuit Switched Network
Stanford University and Ciena Corporation demonstrate a converged network in which OpenFlow controls both packet and circuit switches: flow granularity is defined dynamically to aggregate traffic moving towards the network core, and different types of aggregated packet flows receive differential treatment in the circuit network. VoIP is routed over a minimum-delay dynamic-circuit path; video gets a variable-bandwidth, jitter-free path bypassing intermediate packet switches; HTTP is carried best-effort over static circuits. Many more new capabilities become possible in a converged network.
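A sketch of the aggregation policy described above (the classification heuristics, port numbers and treatment labels are assumptions for illustration, not the demo's actual configuration):

```python
# Map aggregated packet-flow classes to the circuit treatment they receive.
CLASS_TO_CIRCUIT = {
    "voip":  {"path": "dynamic, minimum-delay circuit", "bandwidth": "guaranteed"},
    "video": {"path": "optical bypass", "bandwidth": "variable", "jitter": "none"},
    "http":  {"path": "static circuit", "bandwidth": "best-effort"},
}

def classify(pkt: dict) -> str:
    if pkt.get("udp_dport") == 5060 or pkt.get("dscp") == 46:   # SIP signalling / EF-marked
        return "voip"
    if pkt.get("tcp_dport") in (554, 1935):                     # RTSP / RTMP streaming
        return "video"
    return "http"                                               # everything else

pkt = {"tcp_dport": 554}
print(classify(pkt), "->", CLASS_TO_CIRCUIT[classify(pkt)])
```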
59
OpenFlow Enabled Converged Packet and Circuit Switched Network
[Diagram: a demo topology spanning San Francisco, Houston and New York, with a single controller speaking the OpenFlow protocol to all nodes. Aggregated packet flows are mapped onto circuits: web traffic in static, predefined circuits; video traffic in dynamic, jitter-free, variable-bandwidth circuits; VoIP traffic in dynamic, minimum-propagation-delay paths.]
60
Demo Video
61
Issues with GMPLS. GMPLS's original goal (2000) was a unified control plane (UCP) across packet and circuit networks. Today the idea is dead: packet vendors and ISPs are not interested, and transport network service providers view GMPLS as a signaling tool available to the management system for provisioning private lines, not something related to the Internet. After 10 years of development there has been next to zero significant deployment as a UCP.
62
The issues arise when GMPLS is considered as a unified architecture and control plane. Control-plane complexity escalates when unifying across packets and circuits, because GMPLS makes the basic assumption that the packet network remains the same (an IP/MPLS network with many years of legacy L2/L3 baggage) and that the transport network remains the same (multiple layers and multiple vendor domains). It relies on fragile distributed routing and signaling protocols with many extensions, increasing switch cost and complexity while decreasing robustness. It does not take into account the conservative nature of network operation: can IP networks really handle dynamic links, and do transport network service providers really want to give up control to an automated control plane? And it does not provide an easy path to control-plane virtualization.
63
Conclusions: current networks are complicated; OpenFlow is an API; interesting apps include network slicing; nation-wide academic trials are underway. OpenFlow has potential for service providers: custom control for traffic engineering, and combined packet/circuit-switched networks. Thank you!
65
Backup
66
Practical Considerations
It is well known that transport service providers dislike giving up manual control of their networks to an automated control plane, no matter how intelligent that control plane may be. How do we convince them? It is also well known that converged operation of packet and circuit networks is a good idea for those that own both types of networks, e.g. AT&T and Verizon. But what about those who own only packet networks, e.g. Google? They do not wish to buy circuit switches. We believe the answer to both lies in virtualization (or slicing).
67
Basic Idea: Unified Virtualization
[Diagram: client controllers speak the OpenFlow protocol to a FlowVisor, which speaks OpenFlow down to both circuit (C) and packet (P) switches.]
68
Deployment Scenario: Different SPs
[Diagram: client controllers for ISP 'A', ISP 'B' and a private-line client each speak the OpenFlow protocol to a FlowVisor that is under Transport Service Provider (TSP) control; the FlowVisor presents isolated client network slices over a single physical infrastructure of packet and circuit switches.]
69
Demo Topology. [Diagram: the TSP's virtualized network of hybrid packet/Ethernet/SONET-DWDM nodes, managed by the TSP's NMS/EMS and FlowVisor; ISP#1's and ISP#2's OpenFlow-enabled packet networks each hold a slice of the TSP's network, alongside a TSP private-line customer. Each ISP runs its own network OS and apps.]
70
Demo Methodology We will show:
The TSP can virtualize its network with the FlowVisor while maintaining operator control via NMS/EMS. The FlowVisor manages slices of the TSP's network for ISP customers, where a slice = bandwidth + control of part of the TSP's switches. NMS/EMS can still be used to manually provision circuits for private-line customers. Importantly, every customer (ISP#1, ISP#2, the private-line customer) is isolated from the other customers' slices. ISP#1 is free to do whatever it wishes within its slice, e.g. use an automated control plane (like OpenFlow) and bring up and tear down links as dynamically as it wants; ISP#2 is free to do the same within its slice. Neither can control anything outside its own slice, nor interfere with other slices. The TSP can still use NMS/EMS for the rest of its network.
71
ISP #1's Business Model: ISP#1 pays for a slice = { bandwidth + TSP switching resources }. Part of the bandwidth is for static links between its edge packet switches (as ISPs do today), and some of it is for redirecting bandwidth between the edge switches (unlike current practice). The sum of the static and redirected bandwidth is paid for up-front. The TSP switching resources in the slice are needed by the ISP to enable the redirect capability.
72
ISP#1's network. [Diagram: ISP#1's packet (virtual) topology of edge packet switches, with spare bandwidth in the slice and spare interfaces on the edge switches, shown above the actual topology across the TSP's packet/circuit nodes.]
73
ISP#1's network (continued). [Diagram: the same virtual and actual topologies; ISP#1 redirects bandwidth between the spare interfaces to dynamically create new packet links.]
74
ISP #1’s Business Model Rationale
Q: Why have spare interfaces on the edge switches? Why not use them all the time? A: Spare interfaces on the edge switches cost less than bandwidth in the core, so sharing expensive core bandwidth between cheaper edge ports is more cost-effective for the ISP, and it gives the ISP flexibility to use dynamic circuits to create new packet links where and when needed. In the simple network shown, the comparison is between 3 static links + 1 dynamic link (3 ports per edge switch, plus static and dynamic core bandwidth) versus 6 static links (4 ports per edge switch, plus static core bandwidth); as the number of edge switches increases, the gap widens.
75
ISP #2's Business Model: ISP#2 pays for a slice = { bandwidth + TSP switching resources }. Only the bandwidth for static links between its edge packet switches is paid for up-front; extra bandwidth is paid for on a pay-per-use basis. TSP switching resources are required to provision and tear down the extra bandwidth, and the extra bandwidth is not guaranteed.
76
ISP#2's network. [Diagram: ISP#2's packet (virtual) topology above the actual topology; only the static link bandwidth is paid for up-front, and ISP#2 uses variable-bandwidth packet links, as in our SC09 demo.]
77
ISP #2’s Business Model Rationale
Q: Why use variable-bandwidth packet links? In other words, why have more bandwidth at the edge (say 10G) while paying up-front for less bandwidth in the core (say 1G)? Again, it is for cost-efficiency. ISPs today would pay for the 10G in the core up-front and then run their links at 10% utilization. Instead, they could pay for, say, 2.5G or 5G in the core, and ramp up when they need to or scale back when they don't: pay per use.