pac.c Packet & Circuit Convergence with OpenFlow. Saurav Das, Guru Parulkar, & Nick McKeown, Stanford University. Ciena India, April 2nd.

The Internet has many problems, with plenty of evidence and documentation. The Internet's "root cause problem": it is closed to innovation.

We have lost our way. Millions of lines of source code and 5,400+ RFCs form a barrier to entry; 500M gates and 10 GB of RAM make routers bloated and power hungry. [Slide diagram: specialized packet-forwarding hardware running an operating system and applications for routing, management, mobility management, access control, VPNs, …]

Many complex functions are baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, … An industry with a "mainframe mentality". [Slide diagram: a router's hardware datapath under software control, crammed with features: authentication, security, access control, MPLS, NAT, IPv6, anycast, multicast, Mobile IP, L2/L3 VPNs, VLANs, OSPF-TE, RSVP-TE, firewall, multi-layer/multi-region, iBGP/eBGP, IPsec, …]

Glacial innovation: Idea, then Standardize, then wait 10 years for Deployment. A glacial process of innovation, made worse by a captive standards process that is driven by vendors, with consumers largely locked out.

Change is happening in non-traditional markets. [Slide diagram: many boxes of specialized packet-forwarding hardware, each with its own operating system and apps, being replaced by a single network operating system with apps running on top.]

The "Software-Defined Network": 1. An open interface to simple packet-forwarding hardware. 2. At least one good network operating system, extensible and possibly open source. 3. A well-defined open API for applications on top.

Trend: the computer industry vs. the network industry. The computer industry built on a simple, common, stable hardware substrate (x86) with operating systems (Windows, Linux, Mac OS) and a virtualization layer above it. The network industry can follow the same path: OpenFlow as the substrate, a network OS (e.g. NOX) above it, a virtualization or "slicing" layer between them, and controllers and apps on top. Simple, common, stable hardware substrate below + programmability + a strong isolation model + competition above = faster innovation.

The Flow Abstraction: exploit the flow table already present in switches, routers, and chipsets. Each flow entry (Flow 1 … Flow N, plus a default) consists of a Rule, exact-match or wildcard (e.g. port, VLAN ID, L2, L3, L4 fields); an Action (e.g. unicast, mcast, map-to-queue, drop); and Statistics (packet and byte counts, expiration time/count).
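The flow-table entry described above can be sketched as a small data structure. The following is a minimal illustration in Python, not the OpenFlow 1.0 wire format; field and action names are chosen for readability rather than taken from the spec.

```python
# Minimal sketch of a flow-table entry, assuming nothing beyond the slide above:
# a Rule (wildcard-able match fields), an Action list, and per-flow Statistics.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Rule:
    """Match fields; None means wildcard (match anything)."""
    in_port: Optional[int] = None
    eth_src: Optional[str] = None
    eth_dst: Optional[str] = None
    eth_type: Optional[int] = None
    vlan_id: Optional[int] = None
    ip_src: Optional[str] = None
    ip_dst: Optional[str] = None
    ip_proto: Optional[int] = None
    l4_sport: Optional[int] = None
    l4_dport: Optional[int] = None

@dataclass
class FlowEntry:
    rule: Rule
    actions: List[str]        # e.g. ["output:6"], ["drop"], ["enqueue:2"]
    packet_count: int = 0     # statistics: packets matched so far
    byte_count: int = 0       # statistics: bytes matched so far
    idle_timeout: int = 60    # expiration time (seconds of inactivity)
```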

OpenFlow Switching: an OpenFlow switch holds a flow table (in hardware) and a secure channel (SSL, in software) to the controller. The OpenFlow protocol carries flow-entry add/delete messages, encapsulated packets, and controller discovery. A flow is any combination of the fields described in the Rule.
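A hedged sketch of the switch-side behaviour implied by this slide, reusing the Rule/FlowEntry classes from the sketch above: match incoming packets against the flow table, apply the actions and update statistics on a hit, and send the packet to the controller over the secure channel on a miss. The packet is represented as a plain dict and `secure_channel.send_packet_in` is a hypothetical stand-in for the real channel.

```python
def matches(rule, pkt):
    # A field matches if it is wildcarded (None) or equal to the packet's value.
    return all(v is None or pkt.get(k) == v for k, v in vars(rule).items())

def handle_packet(flow_table, pkt, secure_channel):
    for entry in flow_table:
        if matches(entry.rule, pkt):
            entry.packet_count += 1
            entry.byte_count += pkt.get("len", 0)
            return entry.actions               # e.g. forward out of port 6
    # Table miss: encapsulate the packet and hand it to the controller,
    # which may respond by adding a new flow entry.
    secure_channel.send_packet_in(pkt)
    return []
```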

Flow example: routing. [Slide diagram: a controller uses the OpenFlow protocol to install Rule/Action/Statistics entries in the flow tables of switches along a path.] A flow is the fundamental unit of manipulation within a switch.

OpenFlow is backward compatible. Ethernet switching: match only the destination MAC (e.g. 00:1f:..), wildcard everything else, action = forward to port 6. Application firewall: match only TCP destination port 22, action = drop. IP routing: match only the destination IP address, action = forward to port 6.
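Using the same sketch classes, the three backward-compatibility examples above become three wildcarded entries. The MAC value is the one shown on the slide; the IP destination is a hypothetical placeholder, since the slide's value is not legible.

```python
ethernet_switching = FlowEntry(
    rule=Rule(eth_dst="00:1f:..."),   # match destination MAC only, all else wildcarded
    actions=["output:6"])

application_firewall = FlowEntry(
    rule=Rule(l4_dport=22),           # match TCP destination port 22, all else wildcarded
    actions=["drop"])

ip_routing = FlowEntry(
    rule=Rule(ip_dst="10.0.0.1"),     # hypothetical destination; match IP dst only
    actions=["output:6"])
```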

OpenFlow allows layers to be combined. VLAN + application: match VLAN ID 1 plus TCP destination port 80, action = forward to ports 6 and 7. Flow switching: match all header fields exactly (switch port, MAC src/dst, Ethernet type, VLAN ID, IP src/dst, IP protocol, TCP sport/dport), action = forward to a single port. Port + Ethernet + IP: match on the ingress port plus selected MAC and IP fields, wildcarding the rest.

A Clean Slate Approach. Goal: put an open platform in the hands of researchers and students to test new ideas at scale. Approach: 1. Define the OpenFlow feature. 2. Work with vendors to add OpenFlow to their switches. 3. Deploy on college campus networks. 4. Create experimental open-source software so researchers can build on each other's work.

OpenFlow hardware: Cisco Catalyst 6k, NEC IP8800, HP ProCurve 5400, Juniper MX-series, WiMAX (NEC), WiFi APs, Quanta LB4G, Ciena CoreDirector, and Arista 7100 series (Fall 2009).

OpenFlow deployments: research and production deployments on commercial hardware (Juniper, HP, Cisco, NEC, Quanta, …). Stanford deployments: wired in the CS Gates building, EE CIS building, and EE Packard building (soon); WiFi with 100 OpenFlow APs across the School of Engineering; a WiMAX OpenFlow service in the School of Engineering. Other deployments: Internet2, JGN2plus (Japan), and research groups with their own switches.

Nationwide OpenFlow trials: UW, Stanford, Univ. of Wisconsin, Indiana Univ., Rutgers, Princeton, Clemson, and Georgia Tech, interconnected over Internet2 and NLR. Production deployments before the end of 2010.

Motivation: IP and transport networks (a carrier's view). [Slide diagram: an IP/MPLS packet network and a GMPLS-controlled transport network drawn side by side.] Today these are separate networks, managed and operated independently, resulting in duplication of functions and resources across multiple layers and a significant capex and opex burden, as is well known.

Motivation (continued): convergence is hard, mainly because the two networks have very different architectures, which makes integrated operation difficult. Previous attempts at convergence have assumed that the networks stay as they are, making whatever runs across them bloated, complicated, and ultimately unusable. We believe true convergence will come from architectural change.

Flow network. [Slide diagram: the separate IP/MPLS and GMPLS clouds redrawn as a single flow network under a unified control plane (UCP).]

Research goal (pac.c): packet and circuit flows commonly controlled and managed. A simple network of flow switches that switch at different granularities (packet, time-slot, lambda, and fiber) under a simple, unified, automated control plane.

OpenFlow & circuit switches: exploit the cross-connect table in circuit switches, just as OpenFlow exploits the flow table in packet switches. Packet flows match on switch port, MAC src/dst, Ethernet type, VLAN ID, IP src/dst, IP protocol, and TCP sport/dport. Circuit flows are described by in port, in lambda, out port, out lambda, starting time-slot, signal type, and VCG (virtual concatenation group). The flow abstraction thus presents a unifying abstraction, blurring the distinction between the underlying packet and circuit and regarding both as flows in a flow-switched network.
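The circuit-flow fields listed above can be sketched in the same way as the packet-flow entry. This is an illustration of the abstraction only, not the actual message format of the OpenFlow circuit-switching extensions; the example values at the bottom are made up.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CircuitFlow:
    """A cross-connect: bandwidth switched between an input and an output port."""
    in_port: int
    out_port: int
    in_lambda: Optional[str] = None    # input wavelength, if a WDM interface
    out_lambda: Optional[str] = None   # output wavelength
    in_tslot: Optional[int] = None     # starting time-slot, if a TDM interface
    out_tslot: Optional[int] = None
    signal_type: str = "VC4"           # e.g. VC4, STS-192
    vcg: Optional[int] = None          # virtual concatenation group carrying packet traffic

# e.g. cross-connect one VC-4 of bandwidth from port 1, slot 1 to port 2, slot 4,
# as part of VCG 3 (the port and slot numbers are illustrative):
xc = CircuitFlow(in_port=1, in_tslot=1, out_port=2, out_tslot=4,
                 signal_type="VC4", vcg=3)
```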

pac.c example. [Slide diagram: a hybrid switch with GE and TDM ports, a packet switch fabric, a TDM circuit switch fabric, and OpenFlow software controlling both. Packet flows identified by fields such as IP + VLAN 2 or TCP port 80 + VLAN 7 on the GE ports are mapped into virtual concatenation groups (VCG3, VCG5); the TDM cross-connect table then maps those VCGs onto VC-4 time-slots of the outgoing SONET (STS-192) interfaces.]

Unified architecture: networking applications run on top of a network operating system (the unified control plane), which speaks the OpenFlow protocol to the underlying data plane (packet switches, circuit switches, and combined packet and circuit switches), with the flow as the unifying abstraction.

Example network services: static "VLANs"; new routing protocols (unicast, multicast, multipath, load balancing); network access control; mobile VM management; mobility and handoff management; energy management; packet processing in the controller; IPvX; network measurement and visualization; …

Converged packets and dynamic circuits open up new capabilities: congestion control, QoS, network recovery, traffic engineering, power management, VPNs, discovery, and routing.

Example application: congestion control via variable-bandwidth packet links.

OpenFlow demo at SC09. We demonstrated "variable bandwidth packet links" at SuperComputing 2009 in a joint demo with Ciena Corp. The Ciena CoreDirector has both packet (Ethernet) and circuit (SONET TDM) switching fabrics and interfaces, with native OpenFlow support for both switching technologies. The network OS controls both switching fabrics, and a network application establishes packet and circuit flows and modifies circuit bandwidth in response to packet-flow needs.
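The application logic behind "variable bandwidth packet links" can be sketched roughly as the control loop below. The network-OS calls (get_link_utilization, resize_vcg), the link attributes, and the thresholds are all hypothetical; the point is only to show the loop: watch packet-flow statistics, and grow or shrink the circuit (VCG) carrying that packet link accordingly.

```python
HIGH_WATER, LOW_WATER = 0.8, 0.3     # assumed utilization thresholds

def adjust_link_bandwidth(net_os, link):
    """Grow or shrink the circuit behind a packet link based on its utilization."""
    util = net_os.get_link_utilization(link)      # derived from packet-flow statistics
    if util > HIGH_WATER:
        net_os.resize_vcg(link.vcg, delta=+1)     # add one VC-4 member to the VCG
    elif util < LOW_WATER and link.vcg_members > 1:
        net_os.resize_vcg(link.vcg, delta=-1)     # release one VC-4 member
```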

OpenFlow Demo at SC09

OpenFlow testbed. [Slide diagram: video clients and a video server connected through NetFPGA-based OpenFlow packet switches (NF1, NF2) with GE O-E/E-O and GE-to-DWDM SFP converters, 25 km of SMF, an AWG, an OSA tap, and a WSS-based OpenFlow circuit switch built from a 1x9 wavelength-selective switch, all under an OpenFlow controller speaking the OpenFlow protocol.]

Lab demo with wavelength switches. [Slide photo: the OpenFlow circuit switch, the OpenFlow packet switch, the GE-optical mux/demux, and the 25 km SMF spool.]

pac.c next step: a larger demonstration of the capabilities enabled by converged networks.

Demo Goals

Demo topology. [Slide diagram: several Ethernet packet switches at the edge and three hybrid packet/TDM (Ethernet + SONET) switches in the core, all controlled by one network operating system running an application.]

Demo methodology. [Slide diagram: the same converged packet/circuit topology as above, used in the following steps.]

Step 1: aggregation into fixed circuits. Best-effort traffic (http, smtp, ftp, etc.) is aggregated at the edge and mapped into static circuits. [Same topology diagram.]

Step 2: aggregation into dynamic circuits. A streaming-video flow is initially multiplexed into the static circuits; streaming-video traffic then increases. [Same topology diagram.]

Step 2 (continued): the increase leads to the video flows being aggregated and packed into a dynamically created circuit that bypasses the intermediate packet switch. [Same topology diagram.]

Step 2 (continued): an even greater increase in video traffic results in a dynamic increase of the circuit's bandwidth. [Same topology diagram.]

Step 3: fine-grained control. VoIP flows are aggregated over a dynamic low-bandwidth circuit chosen for minimum propagation delay. [Same topology diagram.]

Step 3 (continued): as video traffic decreases, the dynamic circuit is removed. [Same topology diagram.]

Step 4: network recovery. Packet flows recover via rerouting; circuit flows recover via (1) a previously allocated backup circuit (protection) or (2) a dynamically created circuit (restoration). [Same topology diagram.]
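A rough sketch of the recovery logic in this step, under hypothetical network-OS calls: packet flows are re-routed by the controller, while a circuit flow either fails over to a pre-provisioned backup (protection) or has a replacement circuit created on demand (restoration).

```python
def on_link_down(net_os, failed_link):
    """Handle a link-failure notification from the data plane."""
    for flow in net_os.flows_using(failed_link):
        if flow.is_packet:
            net_os.reroute(flow)                          # install entries along a new path
        elif flow.backup_circuit is not None:
            net_os.activate(flow.backup_circuit)          # protection: pre-allocated backup
        else:
            net_os.create_circuit(flow.src, flow.dst)     # restoration: signal a new circuit
```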

Demo References

pac.c business models

Demo motivation. It is well known that transport service providers dislike giving up manual control of their networks to an automated control plane, no matter how intelligent that control plane may be. How do we convince them? It is also well known that converged operation of packet and circuit networks is a good idea for those who own both types of networks (e.g. AT&T, Verizon). But what about those who own only packet networks (e.g. Google)? They do not wish to buy circuit switches. How do we convince them? We believe the answer to both lies in virtualization (or slicing).

Demo Goals

Basic idea: unified virtualization. [Slide diagram: a FlowVisor sits between multiple client controllers above and the physical packet and circuit switches below, speaking the OpenFlow protocol on both sides.]

Deployment scenario: different service providers. [Slide diagram: a single physical infrastructure of packet and circuit switches, under transport service provider (TSP) control, is sliced by the FlowVisor into isolated client network slices: one for ISP 'A''s client controller, one for ISP 'B''s client controller, and one for a private-line client controller.]

Demo topology. [Slide diagram: the transport service provider's (TSP) virtualized network of hybrid packet/TDM switches; ISP#1's OpenFlow-enabled network, with a slice of the TSP's network and its own NetOS and app; ISP#2's OpenFlow-enabled network, with another slice of the TSP's network and its own NetOS and app; and the TSP's private-line customer.]

Demo methodology. We will show:
1. The TSP can virtualize its network with the FlowVisor while maintaining operator control via the NMS/EMS.
   a) The FlowVisor manages slices of the TSP's network for ISP customers, where { slice = bandwidth + control of part of the TSP's switches }.
   b) The NMS/EMS can still be used to manually provision circuits for private-line customers.
2. Importantly, every customer (ISP#1, ISP#2, private line) is isolated from the other customers' slices:
   a) ISP#1 is free to do whatever it wishes within its slice, e.g. use an automated control plane (like OpenFlow) and bring up and tear down links as dynamically as it wants.
   b) ISP#2 is free to do the same within its slice.
   c) Neither can control anything outside its slice, nor interfere with other slices.
   d) The TSP can still use the NMS/EMS for the rest of its network.
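Following the definition above (slice = bandwidth + control of part of the TSP's switches), what a slice might contain can be sketched as a small policy record. This mirrors the spirit of FlowVisor's flowspace slicing; the field names, addresses, and values are illustrative assumptions, not FlowVisor's actual configuration syntax.

```python
# Hypothetical slice description for ISP#1; ISP#2 and the private-line customer
# would get similar records with their own flowspace and bandwidth.
isp1_slice = {
    "controller": "tcp:isp1-controller.example.net:6633",      # where ISP#1's NetOS listens
    "switches":   ["tsp-node-1", "tsp-node-2", "tsp-node-3"],  # TSP switches in the slice
    "flowspace":  [{"in_port": 5, "vlan_id": 10}],             # packet flows ISP#1 may control
    "bandwidth":  {"static_vc4": 3, "redirectable_vc4": 1},    # capacity paid for up-front
}

# FlowVisor-style policing (conceptually): any OpenFlow message from a client
# controller that falls outside its slice (other switches, other flowspace,
# more bandwidth) is rejected, which is what keeps slices isolated.
```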

ISP#1's business model. ISP#1 pays for a slice = { bandwidth + TSP switching resources }: 1. Part of the bandwidth is for static links between its edge packet switches (as ISPs do today). 2. Some of it is for redirecting bandwidth between the edge switches (unlike current practice). 3. Both the static and the redirected bandwidth are paid for up-front. 4. The TSP switching resources in the slice are what the ISP needs to enable the redirect capability.

ISP#1's network. [Slide diagrams: the packet (virtual) topology and the actual topology.] Notice the spare interfaces on the edge switches and the spare bandwidth in the slice.

ISP#1's network (continued). [Same diagrams.] ISP#1 redirects bandwidth between the spare interfaces to dynamically create new packet links.

ISP#1's business model: rationale. Q: Why have spare interfaces on the edge switches? Why not use them all the time? A: Spare interfaces on the edge switches cost less than bandwidth in the core. 1. Sharing expensive core bandwidth between cheaper edge ports is more cost-effective for the ISP. 2. It gives the ISP the flexibility to use dynamic circuits to create new packet links where and when they are needed. 3. In the simple network shown, the comparison is: (a) 3 static links + 1 dynamic link = 3 ports per edge switch + static and dynamic core bandwidth, versus (b) 6 static links = 4 ports per edge switch + static core bandwidth; (c) as the number of edge switches increases, the gap widens.

ISP#2's business model. ISP#2 pays for a slice = { bandwidth + TSP switching resources }: 1. Only the bandwidth for static links between its edge packet switches is paid for up-front. 2. Extra bandwidth is paid for on a pay-per-use basis. 3. TSP switching resources are required to provision and tear down the extra bandwidth. 4. The extra bandwidth is not guaranteed.

ISP#2's network. [Slide diagrams: the packet (virtual) topology and the actual topology.] Only the static link bandwidth is paid for up-front; ISP#2 uses variable-bandwidth packet links (our SC09 demo).

ISP#2's business model: rationale. Q: Why use variable-bandwidth packet links? In other words, why have more bandwidth at the edge (say 10G) but pay up-front for less bandwidth in the core (say 1G)? A: Again, cost-efficiency. 1. ISPs today would pay for the full 10G in the core up-front and then run their links at 10% utilization. 2. Instead, they could pay for, say, 2.5G or 5G in the core, ramp up when they need to, and scale back when they don't: pay per use.
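A back-of-the-envelope version of this argument, using purely hypothetical prices in arbitrary units, just to show why pay-per-use can win at low utilization even if usage-based pricing carries a per-gigabit premium.

```python
FLAT_PRICE_PER_GBPS  = 100.0    # assumed up-front price per Gb/s of core bandwidth
USAGE_PRICE_PER_GBPS = 150.0    # assumed pay-per-use price (a 50% premium)

up_front_cost = 10 * FLAT_PRICE_PER_GBPS            # buy the full 10G ahead of time
average_demand_gbps = 10 * 0.10                     # the link actually runs at ~10% utilization
pay_per_use_cost = average_demand_gbps * USAGE_PRICE_PER_GBPS

print(up_front_cost, pay_per_use_cost)              # 1000.0 vs 150.0 in these made-up units
```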

Demonstrating isolation. [Slide diagram: the actual topology, with the private-line customer and ISP#2's NetOS.] The TSP provisions a private line that uses up all the spare bandwidth on a link. The switches inform ISP#2's controller that the non-guaranteed extra bandwidth is no longer available on this link (it may be available elsewhere). If ISP#2 still tries to vary bandwidth on this link, the FlowVisor blocks the attempt.

Demo references: FlowVisor technical report (openflow-tr … flowvisor.pdf); use of spare interfaces (for ISP#1): OFC 2002 paper; variable bandwidth packet links (for ISP#2): the SC09 demo (… demo-at-sc09/).

Summary. OpenFlow is a large clean-slate program with many motivations and goals; convergence of packet and circuit networks is one such goal. OpenFlow simplifies and unifies across layers and technologies, spanning packet and circuit infrastructures as well as electronics and photonics, and it enables new capabilities in converged networks with real circuits or virtual circuits.