OpenFlow in Service Provider Networks AT&T Tech Talks October 2010


OpenFlow in Service Provider Networks. AT&T Tech Talks, October 2010. Rob Sherwood, Saurav Das, Yiannis Yiakoumis

Talk Overview: Motivation; What is OpenFlow; Deployments; OpenFlow in the WAN; Combined Circuit/Packet Switching; Demo; Future Directions

We have lost our way. Routing, management, mobility management, access control, VPNs, …: millions of lines of source code and 5400 RFCs are a barrier to entry. Apps sit on an operating system over specialized packet forwarding hardware: 500M gates, 10 GBytes of RAM, bloated and power hungry.

Many complex functions are baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, … An industry with a “mainframe-mentality”. The router bundles software control and a hardware datapath, carrying L3 VPN, L2 VPN, VLAN, MPLS, anycast, NAT, IPv6, multicast, Mobile IP, iBGP/eBGP, IPsec, OSPF-TE, RSVP-TE, firewalling, authentication, security, access control, and multi-layer, multi-region functions.

Glacial process of innovation, made worse by a captive standards process: Idea → Standardize → Wait 10 years → Deployment. Driven by vendors; consumers largely locked out. Glacial innovation.

New Generation Providers Already Buy into It. In a nutshell: driven by cost and control, it started in the data centers. What new-generation providers have been doing within their datacenters: buy bare-metal switches/routers and write their own control/management applications on a common platform.

Change is happening in non-traditional markets. [Figure: applications and a Network Operating System layered across many devices, rather than one app and operating system per box of specialized packet forwarding hardware.]

The “Software-defined Network”: 1. Open interface to the hardware (simple packet forwarding hardware). 2. At least one good operating system: extensible, possibly open-source. 3. A well-defined open API for the apps running on top of the Network Operating System.

Trend: virtualization or “slicing”. Computer industry: Windows, Linux, and Mac OS run on a virtualization layer over the x86 computer. Network industry: apps and controllers run on a Network OS (e.g. NOX), over a virtualization or “slicing” layer, over OpenFlow hardware. A simple, common, stable hardware substrate below + programmability + a strong isolation model + competition above = faster innovation.

What is OpenFlow?

Short story: OpenFlow is an API. Control how packets are forwarded; implementable on COTS hardware; make deployed networks programmable, not just configurable; make innovation easier. Result: increased control (custom forwarding) and reduced cost (an open API → increased competition).

Ethernet Switch/Router

Control Path (Software) Data Path (Hardware)

An OpenFlow switch keeps the data path (hardware) but replaces the local control path with OpenFlow firmware; an external OpenFlow Controller drives it over the OpenFlow protocol (SSL/TCP).

OpenFlow flow table abstraction. Software layer: OpenFlow firmware on the switch talks to a controller PC. Hardware layer: a flow table whose entries match on fields such as MAC src/dst, IP src/dst, and TCP sport/dport and specify an action; for example, an entry matching IP dst 5.6.7.8 (all other fields wildcarded) with action “forward out port 1”, so traffic from 1.2.3.4 to 5.6.7.8 is switched in hardware.

OpenFlow basics: flow table entries. Each entry consists of a Rule, an Action, and Stats (packet and byte counters). The rule matches on header fields (switch port, VLAN ID, MAC src, MAC dst, Eth type, IP src, IP dst, IP protocol, TCP sport, TCP dport), plus a mask selecting which fields to match. Possible actions: forward the packet to port(s); encapsulate and forward it to the controller; drop the packet; send it to the normal processing pipeline; modify fields.
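
To make the Rule / Action / Stats structure concrete, here is a minimal illustrative sketch in Python; the FlowEntry class and its field names are invented for this writeup and are not part of any OpenFlow library.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Hypothetical model of one flow-table entry: Rule (match), Action, Stats.
# A field set to None is a wildcard, mirroring the "+ mask" idea on the slide.
@dataclass
class FlowEntry:
    match: Dict[str, Optional[str]]   # e.g. {"ip_dst": "5.6.7.8", "tcp_dport": None}
    actions: List[str]                # e.g. ["output:6"], ["controller"], ["drop"]
    packet_count: int = 0             # Stats: packets matched
    byte_count: int = 0               # Stats: bytes matched

    def matches(self, pkt: Dict[str, str]) -> bool:
        # A packet matches if every non-wildcard field agrees.
        return all(v is None or pkt.get(k) == v for k, v in self.match.items())

# Example: forward all traffic destined to 5.6.7.8 out of port 6.
entry = FlowEntry(match={"ip_dst": "5.6.7.8", "tcp_dport": None}, actions=["output:6"])
print(entry.matches({"ip_dst": "5.6.7.8", "tcp_dport": "80"}))   # True
```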

Examples. Switching: match MAC dst 00:1f:.. with all other fields wildcarded → forward out port 6. Flow switching: match switch port 3, MAC src 00:20.., MAC dst 00:1f.., Eth type 0800, VLAN ID vlan1, IP src 1.2.3.4, IP dst 5.6.7.8, IP protocol 4, TCP sport 17264, TCP dport 80 → forward out port 6. Firewall: match TCP dport 22 with all other fields wildcarded → drop.

Examples (continued). Routing: match IP dst 5.6.7.8 with all other fields wildcarded → forward out port 6. VLAN switching: match MAC dst 00:1f.. and VLAN ID vlan1 with all other fields wildcarded → forward out ports 6, 7, and 9.
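
All of these are just different fill-ins of the same match tuple, resolved in priority order. A small standalone sketch of such a lookup (hypothetical structures, not a switch API; the MAC address is made up):

```python
# Ordered flow table: first matching rule wins; omitted fields are wildcards.
FLOW_TABLE = [
    ({"tcp_dport": 22},                "drop"),       # firewall rule
    ({"ip_dst": "5.6.7.8"},            "output:6"),   # routing rule
    ({"eth_dst": "00:1f:aa:bb:cc:dd"}, "output:6"),   # switching rule
]

def lookup(pkt):
    for match, action in FLOW_TABLE:
        if all(pkt.get(field) == value for field, value in match.items()):
            return action
    return "controller"   # table miss: send to the controller

print(lookup({"ip_dst": "5.6.7.8", "tcp_dport": 80}))   # output:6
print(lookup({"ip_dst": "5.6.7.8", "tcp_dport": 22}))   # drop
print(lookup({"ip_dst": "9.9.9.9", "tcp_dport": 443}))  # controller
```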

OpenFlow usage: a dedicated OpenFlow network. A controller PC (running, say, Aaron’s code) speaks the OpenFlow protocol to a set of OpenFlow switches, each holding its own Rule / Action / Statistics flow table. (OpenFlowSwitch.org)

Network design decisions: forwarding logic (of course); centralized vs. distributed control; fine- vs. coarse-grained rules; reactive vs. proactive rule creation; and likely more: this is an open research area.

Centralized vs. distributed control. Centralized control: a single controller manages all of the OpenFlow switches. Distributed control: several controllers, each managing its own set of OpenFlow switches.

Flow routing vs. aggregation: both models are possible with OpenFlow. Flow-based: every flow is individually set up by the controller; exact-match flow entries; the flow table contains one entry per flow; good for fine-grained control, e.g. campus networks. Aggregated: one flow entry covers a large group of flows; wildcard flow entries; the flow table contains one entry per category of flows; good for large numbers of flows, e.g. backbones.

Reactive vs. proactive rule creation: both models are possible with OpenFlow. Reactive: the first packet of a flow triggers the controller to insert flow entries; efficient use of the flow table; every flow incurs a small additional flow-setup time; if the control connection is lost, the switch has limited utility. Proactive: the controller pre-populates the flow table in the switch; zero additional flow-setup time; loss of the control connection does not disrupt traffic; essentially requires aggregated (wildcard) rules.
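
A hedged sketch of the reactive model in plain Python (not the API of any particular controller framework; the Switch class and compute_output_port are stand-ins invented for illustration):

```python
# Reactive model: the first packet of each flow is punted to the controller,
# which installs an exact-match entry so subsequent packets stay in hardware.
class Switch:
    def __init__(self):
        self.flow_table = []
    def send_flow_mod(self, match, actions, idle_timeout):
        self.flow_table.append((match, actions, idle_timeout))
    def send_packet_out(self, pkt, out_port):
        print(f"packet out -> port {out_port}")

def compute_output_port(switch, pkt):
    return 6   # placeholder forwarding decision

def on_packet_in(switch, pkt):
    out_port = compute_output_port(switch, pkt)
    match = {"in_port": pkt["in_port"], "ip_src": pkt.get("ip_src"),
             "ip_dst": pkt.get("ip_dst"), "tcp_dport": pkt.get("tcp_dport")}
    # Exact-match entry for just this flow, expiring when the flow goes idle.
    switch.send_flow_mod(match=match, actions=[f"output:{out_port}"], idle_timeout=10)
    switch.send_packet_out(pkt, out_port)   # forward the packet that was punted

sw = Switch()
on_packet_in(sw, {"in_port": 1, "ip_src": "1.2.3.4", "ip_dst": "5.6.7.8", "tcp_dport": 80})
print(sw.flow_table)   # one exact-match entry installed reactively

# A proactive controller would instead push wildcarded entries at startup, so no
# packet ever waits on the controller and a lost control connection is harmless.
```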

OpenFlow application: network slicing. Divide the production network into logical slices: each slice/service controls its own packet forwarding; users pick which slice controls their traffic (opt-in); existing production services run in their own slice, e.g. spanning tree, OSPF/BGP. Enforce strong isolation between slices: actions in one slice do not affect another. This allows the (logical) testbed to mirror the production network: real hardware, performance, topologies, scale, and users. Prototype implementation: FlowVisor.

Add a slicing layer between the planes. Each slice runs its own custom control plane process (its own controller) and generates its own rules. The slicing layer sits between the slice controllers and the data plane, applies the slice policies to the control/data protocol, pushes rules down, and passes exceptions back up to the right slice.

Network slicing architecture. A network slice is a collection of sliced switches/routers. The data plane is unmodified: packets are forwarded with no performance penalty, and slicing works with existing ASICs. A transparent slicing layer: each slice believes it owns the data path; the layer enforces isolation between slices, i.e. it rewrites or drops rules so they adhere to the slice policy; and it forwards exceptions to the correct slice(s).

Slicing policies. The policy specifies resource limits for each slice: link bandwidth; maximum number of forwarding rules; topology; fraction of switch/router CPU. FlowSpace: which packets does the slice control?
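
Such a policy is easy to picture as a declarative structure; the sketch below is hypothetical (field names invented for this writeup), not FlowVisor's actual configuration format.

```python
# Hypothetical per-slice policy: resource limits plus the flowspace the slice controls.
slice_policies = {
    "alice": {
        "link_bandwidth_mbps": 100,            # cap on link bandwidth
        "max_flow_rules": 1000,                # cap on forwarding-table entries
        "topology": ["sw1", "sw2", "sw3"],     # switches visible to the slice
        "cpu_fraction": 0.25,                  # share of switch/router CPU
        "flowspace": [{"tcp_dport": 80}],      # which packets the slice controls
    },
    "bob": {
        "link_bandwidth_mbps": 50,
        "max_flow_rules": 200,
        "topology": ["sw1", "sw3"],
        "cpu_fraction": 0.10,
        "flowspace": [{"udp_dport": 5060}],
    },
}
print(slice_policies["alice"]["flowspace"])
```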

FlowSpace: maps packets to slices. Flowspace is a way of thinking about classes of packets: each slice has forwarding control of a specific set of packets, as specified by packet header fields. All packets in a given flow are controlled by the same slice, and each flow is controlled by exactly one slice (ignoring monitoring slices for the purposes of this talk). In practice, flowspaces are described using ordered, ACL-like rules.
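
Ordered, ACL-like flowspace rules can be sketched in a few lines of Python; the rule format and function below are illustrative only, not FlowVisor code, and they foreshadow the HTTP/VoIP opt-in example on the next slide.

```python
# Each rule maps a (possibly wildcarded) header pattern to the slice controlling it.
# Rules are evaluated in order; the first match wins, like an ACL.
flowspace = [
    ({"tcp_dport": 80},   "slice1"),   # "Slice 1 will handle my HTTP traffic"
    ({"udp_dport": 5060}, "slice2"),   # "Slice 2 will handle my VoIP traffic"
    ({},                  "slice3"),   # "Slice 3 will handle everything else"
]

def slice_for_packet(pkt):
    for pattern, slice_name in flowspace:
        if all(pkt.get(k) == v for k, v in pattern.items()):
            return slice_name
    return None

print(slice_for_packet({"ip_dst": "5.6.7.8", "tcp_dport": 80}))    # slice1
print(slice_for_packet({"ip_dst": "5.6.7.8", "udp_dport": 5060}))  # slice2
print(slice_for_packet({"ip_dst": "5.6.7.8"}))                     # slice3
```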

Real user traffic: opt-in. Allow users to opt in to services in real time: users can delegate control of individual flows to slices, adding new FlowSpace to each slice's policy. Example: “Slice 1 will handle my HTTP traffic”, “Slice 2 will handle my VoIP traffic”, “Slice 3 will handle everything else”. This creates incentives for building high-quality services.

FlowVisor is implemented on OpenFlow. It sits transparently between the slice controllers (custom control planes running on servers) and the OpenFlow firmware on each switch/router (a stub control plane above the data path), speaking the OpenFlow protocol on both sides.

FlowVisor message handling. When a slice controller (Alice, Bob, or Cathy) sends down a rule, FlowVisor performs a policy check: is this rule allowed? When the switch sends up an exception packet, FlowVisor performs a policy check: who controls this packet? Traffic that matches installed rules is forwarded by the data path at full line rate.
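
Those two policy checks can be sketched as follows; this is a deliberately simplified, hypothetical model (the real FlowVisor rewrites rules to fit the slice's flowspace rather than simply accepting or rejecting them):

```python
# FlowVisor as a proxy: police rules going down, route exceptions going up.
SLICE_FLOWSPACE = {
    "alice": {"tcp_dport": 80},    # Alice's slice controls HTTP traffic
    "bob":   {"udp_dport": 5060},  # Bob's slice controls VoIP traffic
}

def rule_allowed(slice_name, rule_match):
    # Downstream check: the rule must stay inside the slice's flowspace.
    allowed = SLICE_FLOWSPACE[slice_name]
    return all(rule_match.get(k) == v for k, v in allowed.items())

def controller_for_exception(pkt):
    # Upstream check: deliver the packet-in to the slice that controls the packet.
    for slice_name, pattern in SLICE_FLOWSPACE.items():
        if all(pkt.get(k) == v for k, v in pattern.items()):
            return slice_name
    return None

print(rule_allowed("alice", {"tcp_dport": 80, "ip_dst": "5.6.7.8"}))  # True
print(rule_allowed("alice", {"udp_dport": 5060}))                     # False
print(controller_for_exception({"udp_dport": 5060}))                  # bob
```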

OpenFlow Deployments

OpenFlow has been prototyped on… Ethernet switches (HP, Cisco, NEC, Quanta, with more underway); IP routers (Cisco, Juniper, NEC); switching chips (Broadcom, Marvell); transport switches (Ciena, Fujitsu); WiFi APs and WiMAX base stations. Most (all?) hardware switches are now based on Open vSwitch…

Deployment: Stanford. Our real, production network: 15 switches, 35 APs, 25+ users, 1+ year of use, carrying my personal email and web traffic! The same physical network hosts the Stanford demos: 7 different demos.

Demo Infrastructure with Slicing

Deployments: GENI

(Public) industry interest. Google has been a main proponent of the new OpenFlow 1.1 WAN features (ECMP, MPLS-label matching) and showed an MPLS LDP-OpenFlow speaking router at NANOG50. NEC has announced commercial products, initially for datacenters, and is talking to providers. Ericsson presented “MPLS Openflow and the Split Router Architecture: A Research Approach” at MPLS2010.

OpenFlow in the WAN

CAPEX: 30-40%, OPEX: 60-70% … and yet service providers own and operate two such networks: IP and transport.

Motivation: IP and transport networks are separate. They are managed and operated independently (IP/MPLS in the packet layer, GMPLS in the transport layer), resulting in duplication of functions and resources in multiple layers, and in significant capex and opex burdens … all well known.

Motivation: IP and transport networks do not interact. IP links are static, supported by static circuits or lambdas in the transport network.

What does it mean for the IP network? In IP backbone network design, router connections are hardwired by lambdas and 4X to 10X over-provisioned for peak traffic and protection. The big problem: ever more over-provisioned links and ever bigger routers. How is this scalable?

Bigger routers? Dependence on large backbone routers (Juniper TX8/T640, Cisco CRS-1): expensive and power hungry. How is this scalable?

Functionality issues! Dependence on large backbone routers: complex and unreliable (Network World, 05/16/2007). Dependence on packet switching: with the traffic mix tipping heavily towards video, it is questionable whether per-hop, packet-by-packet processing is a good idea. Dependence on over-provisioned links: over-provisioning masks the fact that packet switching is simply not very good at providing bandwidth, delay, jitter, and loss guarantees.

How can optics help? Optical switches: 10X more capacity per unit volume (Gb/s/m3), 10X less power consumption, 10X less cost per unit capacity (Gb/s), five-nines availability. Dynamic circuit switching: recovers faster from failures; guaranteed bandwidth and bandwidth-on-demand, good for video flows; guaranteed low-latency, jitter-free paths; helps meet SLAs and lowers the need for over-provisioned IP links.

Motivation (recap): IP and transport networks do not interact. IP links are static, supported by static circuits or lambdas in the transport network.

What does it mean for the transport network? Without interaction with a higher layer there is really no need to support dynamic services, and thus no need for an automated control plane; so the transport network remains manually controlled via NMS/EMS, and circuits to support a service take days to provision. Without visibility into higher-layer services, the transport network reduces to a bandwidth seller. The Internet can help: it carries a wide variety of services with different requirements that could take advantage of dynamic-circuit characteristics.

What is needed: converged packet and circuit networks, managed and operated commonly, benefiting from both packet and circuit switches and from dynamic interaction between packet switching and dynamic circuit switching. This requires a common way to control and a common way to use both.

But … convergence is hard, mainly because the two networks have very different architectures, which makes integrated operation hard, and because previous attempts at convergence have assumed that the networks stay as they are, making what goes across them bloated, complicated, and ultimately unusable. We believe true convergence will come about from architectural change!

[Figure: today's separate IP/MPLS networks and GMPLS transport evolve into a single flow network under a unified control plane (UCP).]

pac.c research goal: packet and circuit flows commonly controlled and managed. A simple flow network of flow switches that switch at different granularities (packet, time-slot, lambda, and fiber), driven by a simple, unified, automated control plane.

… a common way to control. Packet flows are already described by the OpenFlow match fields (switch port, MAC src/dst, Eth type, VLAN ID, IP src/dst, IP protocol, TCP sport/dport) plus an action. Circuit flows exploit the cross-connect table in circuit switches: an input (port, lambda, starting time-slot, VCG, signal type) mapped to an output (port, lambda, starting time-slot, VCG, signal type). The flow abstraction presents a unifying abstraction, blurring the distinction between the underlying packet and circuit and regarding both as flows in a flow-switched network.
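
An illustrative sketch of what such a unified flow description could look like; all field names (and the sample signal type) are invented for this writeup and are not the pac.c or OpenFlow circuit-extension wire format.

```python
# One "flow" abstraction over two kinds of entries: a packet match/action and a
# circuit cross-connect. Purely illustrative field names and values.
packet_flow = {
    "match": {"ip_dst": "5.6.7.8", "tcp_dport": 80},
    "action": "output:6",
}

circuit_flow = {
    # cross-connect: input side -> output side
    "in":  {"port": 1, "lambda_nm": 1553.3, "start_timeslot": 4, "vcg": 1, "signal": "STS-1"},
    "out": {"port": 9, "lambda_nm": 1554.1, "start_timeslot": 4, "vcg": 1, "signal": "STS-1"},
}

# A converged controller can treat both as flows in one flow-switched network,
# e.g. mapping an aggregated packet flow onto a dynamically created circuit.
converged_flow = {"packet": packet_flow, "circuit": circuit_flow}
print(converged_flow["circuit"]["out"])
```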

… a common way to use: a unified architecture. Networking applications (variable-bandwidth packet links, dynamic optical bypass, unified recovery, application-aware QoS, traffic engineering) run on a unified control plane: a Network Operating System over a virtualization (slicing) plane, with the OpenFlow protocol as the unifying abstraction over the underlying data-plane switching (packet switches, circuit switches, and combined packet and circuit switches).

Example application: congestion control via variable-bandwidth packet links.

OpenFlow Demo at SC09

Lab demo with wavelength switches. [Figure: an OpenFlow controller speaks the OpenFlow protocol to two NetFPGA-based OpenFlow packet switches (NF1, NF2) and to a WSS-based OpenFlow circuit switch built around a 1x9 Wavelength Selective Switch (WSS); GE-to-DWDM SFP converters (λ1 = 1553.3 nm, λ2 = 1554.1 nm), GE E-O/O-E conversion, 25 km of SMF, an AWG, and an OSA tap connect the video clients (192.168.3.10, 192.168.3.12) to the video server (192.168.3.15).]

Lab demo with wavelength switches. [Photo: the OpenFlow circuit switch, the 25 km SMF spool, the OpenFlow packet switch, and the GE-optical mux/demux.]

OpenFlow-enabled converged packet and circuit switched network (Stanford University and Ciena Corporation). Demonstrate a converged network where OpenFlow is used to control both packet and circuit switches; dynamically define flow granularity to aggregate traffic moving towards the network core; and provide differential treatment to different types of aggregated packet flows in the circuit network: VoIP is routed over a minimum-delay dynamic-circuit path; video gets a variable-bandwidth, jitter-free path bypassing intermediate packet switches; HTTP is best-effort over static circuits. Many more new capabilities become possible in a converged network.
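
A hedged sketch of how the demo's aggregation policy might be written down as data (class names, ports, and thresholds are invented for this writeup, not the demo's actual configuration):

```python
# Map aggregated packet-flow classes to their treatment in the circuit network.
TREATMENT = {
    "voip":  {"circuit": "dynamic", "objective": "min_propagation_delay"},
    "video": {"circuit": "dynamic", "objective": "jitter_free",
              "bandwidth": "variable", "bypass_packet_switches": True},
    "http":  {"circuit": "static",  "objective": "best_effort"},
}

def classify(pkt):
    # Rough classification by transport port, just for illustration.
    if pkt.get("udp_dport") in (5060, 5004):     # SIP / RTP
        return "voip"
    if pkt.get("tcp_dport") in (554, 1935):      # RTSP / RTMP
        return "video"
    return "http"

pkt = {"ip_dst": "192.168.3.15", "tcp_dport": 554}
cls = classify(pkt)
print(cls, TREATMENT[cls])   # video gets a dynamic, jitter-free, variable-bandwidth circuit
```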

OpenFlow-enabled converged packet and circuit switched network. [Figure: an emulated San Francisco / Houston / New York network under a single OpenFlow controller, with aggregated packet flows: web traffic in static, predefined circuits; video traffic in dynamic, jitter-free, variable-bandwidth circuits; VoIP traffic on dynamic, minimum-propagation-delay paths.]

Demo Video

Issues with GMPLS. GMPLS's original goal (2000): a unified control plane (UCP) across packet and circuit. Today the idea is dead: packet vendors and ISPs are not interested, and transport network SPs view it as a signaling tool available to the management system for provisioning private lines (not related to the Internet). After 10 years of development, there is next-to-zero significant deployment as a UCP.

Issues with GMPLS, when considered as a unified architecture and control plane. Control-plane complexity escalates when unifying across packets and circuits, because GMPLS makes the basic assumption that the packet network remains the same (an IP/MPLS network with many years of legacy L2/L3 baggage) and that the transport network remains the same (multiple layers and multiple vendor domains). It uses fragile distributed routing and signaling protocols with many extensions, increasing switch cost and complexity while decreasing robustness. It does not take into account the conservative nature of network operation: can IP networks really handle dynamic links, and do transport network service providers really want to give up control to an automated control plane? And it does not provide an easy path to control-plane virtualization.

Conclusions. Current networks are complicated. OpenFlow is an API; interesting apps include network slicing; nation-wide academic trials are underway. OpenFlow has potential for service providers: custom control for traffic engineering, and combined packet/circuit switched networks. Thank you!

Backup

Practical considerations. It is well known that transport service providers dislike giving up manual control of their networks to an automated control plane, no matter how intelligent that control plane may be. How do we convince them? It is also well known that converged operation of packet and circuit networks is a good idea for those that own both types of networks, e.g. AT&T and Verizon. But what about those who own only packet networks, e.g. Google, and do not wish to buy circuit switches? We believe the answer to both lies in virtualization (or slicing).

Basic idea: unified virtualization. [Figure: a client controller speaks the OpenFlow protocol to a FLOWVISOR, which in turn speaks the OpenFlow protocol to the underlying packet and circuit switches.]

Deployment scenario: different SPs. Client controllers for ISP ‘A’, a private-line customer, and ISP ‘B’ each speak the OpenFlow protocol to a FLOWVISOR under Transport Service Provider (TSP) control; the FlowVisor presents isolated client network slices over a single physical infrastructure of packet and circuit switches.

Demo topology. [Figure: the Transport Service Provider's (TSP) virtualized network of packet/Ethernet/SONET switches, controlled through the TSP's FlowVisor and NMS/EMS; ISP# 1 and ISP# 2 each run their own NetOS and apps over an OpenFlow-enabled network that includes a slice of the TSP's network, alongside the TSP's private-line customer.]

Demo methodology. We will show that: the TSP can virtualize its network with the FlowVisor while maintaining operator control via NMS/EMS; the FlowVisor will manage slices of the TSP's network for ISP customers, where { slice = bandwidth + control of part of the TSP's switches }; and NMS/EMS can be used to manually provision circuits for private-line customers. Importantly, every customer (ISP# 1, ISP# 2, private line) is isolated from the other customers' slices. ISP# 1 is free to do whatever it wishes within its slice, e.g. use an automated control plane (like OpenFlow) and bring up and tear down links as dynamically as it wants; ISP# 2 is free to do the same within its slice. Neither can control anything outside its slice, nor interfere with other slices. The TSP can still use NMS/EMS for the rest of its network.

ISP #1's business model. ISP# 1 pays for a slice = { bandwidth + TSP switching resources }. Part of the bandwidth is for static links between its edge packet switches (like ISPs do today), and some of it is for redirecting bandwidth between the edge switches (unlike current practice). The sum of both static bandwidth and redirected bandwidth is paid for up-front. The TSP switching resources in the slice are needed by the ISP to enable the redirect capability.

ISP# 1's network. [Figure: ISP# 1's packet (virtual) topology across its edge packet switches, with spare interfaces and spare bandwidth in the slice, shown above the actual topology through the TSP's packet/Ethernet/SONET switches.]

ISP# 1's network. [Figure: the same virtual and actual topologies after ISP# 1 redirects bandwidth between the spare interfaces to dynamically create new packet links!]

ISP #1's business model rationale. Q. Why have spare interfaces on the edge switches? Why not use them all the time? A. Spare interfaces on the edge switches cost less than bandwidth in the core: sharing expensive core bandwidth between cheaper edge ports is more cost-effective for the ISP, and it gives the ISP flexibility in using dynamic circuits to create new packet links where needed, when needed. The comparison (in the simple network shown) is between 3 static links + 1 dynamic link = 3 ports per edge switch + static and dynamic core bandwidth, versus 6 static links = 4 ports per edge switch + static core bandwidth; as the number of edge switches increases, the gap increases.
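
To make the trade-off concrete, here is a tiny worked comparison with made-up unit costs (the prices and the four-switch count are assumptions for illustration only; only the structure of the comparison comes from the slide):

```python
# Hypothetical cost comparison: spare edge ports + shared dynamic core bandwidth
# versus a full set of static links. All prices are invented.
PORT_COST       = 1.0    # one edge-switch port
STATIC_BW_COST  = 10.0   # statically reserved core bandwidth, per link
DYNAMIC_BW_COST = 4.0    # shared core bandwidth that can be redirected

EDGE_SWITCHES = 4

# Option A (slide): 3 static links + 1 dynamic link, 3 ports per edge switch.
option_a = EDGE_SWITCHES * 3 * PORT_COST + 3 * STATIC_BW_COST + 1 * DYNAMIC_BW_COST

# Option B (slide): 6 static links, 4 ports per edge switch.
option_b = EDGE_SWITCHES * 4 * PORT_COST + 6 * STATIC_BW_COST

print(f"spare interfaces + dynamic circuits: {option_a}")   # 46.0
print(f"all static links:                    {option_b}")   # 76.0
# With these made-up numbers the dynamic-circuit option is cheaper, and the gap
# widens as the number of edge switches (and hence static links) grows.
```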

ISP #2's business model. ISP# 2 pays for a slice = { bandwidth + TSP switching resources }. Only the bandwidth for static links between its edge packet switches is paid for up-front; extra bandwidth is paid for on a pay-per-use basis and is not guaranteed. The TSP switching resources are required to provision and tear down the extra bandwidth.

ISP# 2's network. [Figure: ISP# 2's packet (virtual) topology over the actual topology; only the static link bandwidth is paid for up-front, and ISP# 2 uses variable-bandwidth packet links (our SC09 demo)!]

ISP #2's business model rationale. Q. Why use variable-bandwidth packet links? In other words, why have more bandwidth at the edge (say 10G) and pay for less bandwidth in the core up-front (say 1G)? A. Again, it is for cost-efficiency reasons. ISPs today would pay for the 10G in the core up-front and then run their links at 10% utilization. Instead, they could pay for, say, 2.5G or 5G in the core, and ramp up when they need to or scale back when they don't: pay per use.