
1 pac.c Packet & Circuit Convergence with OpenFlow. Saurav Das, Guru Parulkar, & Nick McKeown, Stanford University. http://www.openflowswitch.org/wk/index.php/PAC.C. Ciena India, April 2nd 2010.

2 The Internet has many problems, with plenty of evidence and documentation. The Internet's "root cause" problem: it is closed for innovation.

3 We have lost our way. Today's router: specialized packet forwarding hardware running an operating system and applications (routing, management, mobility management, access control, VPNs, ...). Millions of lines of source code, 5400 RFCs (a barrier to entry), 500M gates, 10 Gbytes of RAM: bloated and power hungry.

4 Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, ... The router pairs software control with a hardware datapath, and piles on authentication, security, access control, MPLS, NAT, IPv6, anycast, multicast, Mobile IP, L3 VPN, L2 VPN, VLAN, OSPF-TE, RSVP-TE, iBGP, eBGP, IPSec, firewalls, and multi-layer/multi-region operation. An industry with a "mainframe mentality".

5 Glacial innovation: Idea → Standardize → Wait 10 years → Deployment. A glacial process of innovation, made worse by a captive standards process: driven by vendors, with consumers largely locked out.

6 Change is happening in non-traditional markets. [Figure: many boxes, each with its own operating system and app on specialized packet forwarding hardware, converging to a single network operating system hosting apps across simple forwarding hardware.]

7 The "Software-Defined Network": 1. an open interface to simple packet forwarding hardware; 2. at least one good network operating system, extensible and possibly open-source; 3. a well-defined open API for apps.

8 Trend: Computer Industry vs. Network Industry. In computing, a simple common hardware substrate (x86) plus a virtualization layer supports many competing operating systems (Windows, Linux, Mac OS) and their apps. The network analog: OpenFlow hardware plus a virtualization ("slicing") layer supports multiple network operating systems (e.g. NOX) and controllers. Simple common stable hardware substrate below + programmability + strong isolation model + competition above = faster innovation.

9 The Flow Abstraction: exploit the flow table already in switches, routers, and chipsets. Each entry (Flow 1 ... Flow N) pairs a Rule (exact & wildcard match on e.g. port, VLAN ID, L2, L3, L4, ...) with an Action (e.g. unicast, mcast, map-to-queue, drop) and Statistics (packet & byte counts, expiration time/count); the table ends with a default action.
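A minimal sketch of such a flow-table entry in code may help fix the idea (field names are illustrative, not the actual OpenFlow wire structures):

```python
# Hypothetical sketch of a flow-table entry: Rule + Action + Statistics.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rule:
    # None means "wildcard": the field matches anything.
    in_port: Optional[int] = None
    mac_src: Optional[str] = None
    mac_dst: Optional[str] = None
    eth_type: Optional[int] = None
    vlan_id: Optional[str] = None
    ip_src: Optional[str] = None
    ip_dst: Optional[str] = None
    ip_proto: Optional[int] = None
    tcp_sport: Optional[int] = None
    tcp_dport: Optional[int] = None

@dataclass
class FlowEntry:
    rule: Rule
    actions: List[str]      # e.g. ["output:6"], ["drop"], ["enqueue:2"]
    packet_count: int = 0   # statistics: packets & bytes counted per flow
    byte_count: int = 0
    idle_timeout: int = 60  # expiration time/count
```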

10 OpenFlow Switching: an OpenFlow switch holds the flow table in hardware and a secure channel in software, speaking the OpenFlow protocol over SSL to a controller. The protocol carries flow-entry add/delete commands, encapsulated packets, and controller discovery. A flow is any combination of the fields described in the Rule.

11 Flow Example: a routing application on the controller installs Rule/Action/Statistics entries, via the OpenFlow protocol, in each switch along a path. A flow is the fundamental unit of manipulation within a switch.

12 OpenFlow is backward compatible. The match fields are {switch port, MAC src, MAC dst, Eth type, VLAN ID, IP src, IP dst, IP prot, TCP sport, TCP dport}; * means wildcard.
Ethernet switching: MAC dst = 00:1f:.., all else * → forward to port6.
Application firewall: TCP dport = 22, all else * → drop.
IP routing: IP dst = 5.6.7.8, all else * → forward to port6.
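Reusing the hypothetical Rule/FlowEntry sketch from slide 9, the three behaviors reduce to three entries; everything left as None stays wildcarded:

```python
# Ethernet switching: match only destination MAC, forward out port 6.
eth_switching = FlowEntry(Rule(mac_dst="00:1f:.."), actions=["output:6"])

# Application firewall: match only TCP destination port 22, drop.
firewall = FlowEntry(Rule(tcp_dport=22), actions=["drop"])

# IP routing: match only destination IP, forward out port 6.
ip_routing = FlowEntry(Rule(ip_dst="5.6.7.8"), actions=["output:6"])
```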

13 OpenFlow allows layers to be combined.
VLAN + App: VLAN ID = vlan1, TCP dport = 80, all else * → forward to port6 and port7.
Flow switching: switch port = port3, MAC src = 00:1f.., MAC dst = 00:2e.., Eth type = 0800, VLAN ID = vlan1, IP src = 1.2.3.4, IP dst = 5.6.7.8, IP prot = 4, TCP sport = 17264, TCP dport = 80 → forward to port6.
Port + Ethernet + IP: switch port = port3, MAC dst = 00:2e.., Eth type = 0800, IP dst = 5.6.7.8, IP prot = 4, all else * → forward to port 10.

14 A Clean-Slate Approach. Goal: put an open platform in the hands of researchers and students to test new ideas at scale. Approach: 1. define the OpenFlow feature; 2. work with vendors to add OpenFlow to their switches; 3. deploy on college campus networks; 4. create experimental open-source software so researchers can build on each other's work.

15 OpenFlow Hardware: Cisco Catalyst 6k, NEC IP8800, HP ProCurve 5400, Juniper MX-series, WiMAX (NEC), WiFi, Quanta LB4G, Ciena CoreDirector (Fall 2009), Arista 7100 series (Fall 2009).

16 OpenFlow Deployments: research and production deployments on commercial hardware (Juniper, HP, Cisco, NEC, Quanta, ...). Stanford deployments: wired in the CS Gates building, EE CIS building, and EE Packard building (soon); WiFi with 100 OpenFlow APs across the School of Engineering (SoE); WiMAX OpenFlow service in SoE. Other deployments: Internet2; JGN2plus, Japan; 10-15 research groups have switches.

17 Nationwide OpenFlow Trials: UW, Stanford, Univ. Wisconsin, Indiana Univ., Rutgers, Princeton, Clemson, and Georgia Tech, interconnected over Internet2 and NLR. Production deployments before end of 2010.

18 Motivation: IP & transport networks (carrier's view). [Figure: an IP/MPLS network and a GMPLS-controlled transport network drawn side by side.] These are separate networks, managed and operated independently, resulting in duplication of functions and resources in multiple layers and significant capex and opex burdens ... well known.

19 Motivation ... Convergence is hard, mainly because the two networks have very different architectures, which makes integrated operation hard; and previous attempts at convergence have assumed that the networks remain the same, making what goes across them bloated, complicated, and ultimately unusable. We believe true convergence will come about from architectural change!

20 [Figure: the separate IP/MPLS and GMPLS networks, joined by a unified control plane (UCP), collapse into a single Flow Network.]

21 Research goal (pac.c): packet and circuit flows commonly controlled & managed. A simple network of flow switches that switch at different granularities (packet, time-slot, lambda & fiber), under a simple, unified, automated control plane.

22 OpenFlow & Circuit Switches: exploit the cross-connect table in circuit switches. Packet flows match on {switch port, MAC src, MAC dst, Eth type, VLAN ID, IP src, IP dst, IP prot, TCP sport, TCP dport}; circuit flows map {in port, in lambda, starting time-slot, signal type, VCG} to {out port, out lambda, starting time-slot, signal type, VCG}. The flow abstraction presents a unifying abstraction, blurring the distinction between the underlying packet and circuit and regarding both as flows in a flow-switched network.
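A circuit flow can be sketched the same way as the packet FlowEntry above, as a cross-connect entry mapping an input time-slot or wavelength to an output one (hypothetical field names, mirroring the slide's columns):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CircuitFlowEntry:
    # Input side of the cross-connect.
    in_port: int
    in_lambda: Optional[float]        # wavelength in nm, if WDM
    in_start_timeslot: Optional[int]  # starting time-slot, if TDM
    # Output side.
    out_port: int
    out_lambda: Optional[float]
    out_start_timeslot: Optional[int]
    signal_type: str = "VC4"          # e.g. SONET/SDH signal type
    vcg: Optional[int] = None         # virtual concatenation group
```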

23 pac.c example: [Figure: a hybrid node pairs a packet switch fabric (GE ports in) with a TDM circuit switch fabric (SONET ports out), both under OpenFlow software with remote access (RAS). Packet rules (e.g. IP 11.12.0.0 → push VLAN2, port P1; VLAN 1025 → push VLAN2, port P2; IP 11.13.0.0 + TCP 80 → push VLAN7, port P2) steer traffic into virtual concatenation groups (VCG3, VCG5), which the circuit fabric cross-connects onto time-slots (e.g. VCG3 → P1 VC4 #1; VCG5 → P2 VC4 #4, P1 VC4 #10, P3 STS192 #1).]

24 Unified Architecture: networking applications run on a network operating system (the unified control plane), which speaks the OpenFlow protocol (the unifying abstraction) to the underlying data plane: packet switches, circuit switches, and combined packet & circuit switches.

25 Example Network Services: static "VLANs"; new routing protocols (unicast, multicast, multipath, load-balancing); network access control; mobile VM management; mobility and handoff management; energy management; packet processor (in controller); IPvX; network measurement and visualization; ...

26 Converged packets & dynamic circuits open up new capabilities: congestion control, QoS, network recovery, traffic engineering, power management, VPNs, discovery, routing.

27 Example application: congestion control via variable-bandwidth packet links.

28 OpenFlow demo at SC09: we demonstrated 'Variable Bandwidth Packet Links' at SuperComputing 2009, in a joint demo with Ciena Corp. The Ciena CoreDirector has both packet (Ethernet) and circuit (SONET TDM) switching fabrics and interfaces, with native support of OpenFlow for both switching technologies. A network OS controls both switching fabrics, and a network application establishes packet & circuit flows and modifies circuit bandwidth in response to packet flow needs. http://www.openflowswitch.org/wp/2009/11/openflow-demo-at-sc09/
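The control loop of such an application can be sketched in a few lines: watch the packet-flow statistics and grow or shrink the circuit carrying those flows. This is a sketch against a hypothetical controller API; poll_flow_stats(), add_timeslot(), and remove_timeslot() are assumed helpers, not real NOX/OpenFlow calls:

```python
# Hypothetical control loop for 'Variable Bandwidth Packet Links'.
HIGH_WATER = 0.8  # grow the circuit above 80% utilization
LOW_WATER = 0.3   # shrink it below 30% utilization

def adapt_circuit_bandwidth(controller, circuit):
    """Resize a TDM circuit to track the packet flows it carries."""
    stats = controller.poll_flow_stats(circuit.packet_flows)
    utilization = stats.rate_bps / circuit.capacity_bps
    if utilization > HIGH_WATER:
        # Add one more VC4 time-slot to the VCG carrying these flows.
        controller.add_timeslot(circuit.vcg)
    elif utilization < LOW_WATER and circuit.num_timeslots > 1:
        controller.remove_timeslot(circuit.vcg)
```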

29 OpenFlow Demo at SC09

30 [Figure: OpenFlow testbed. A video server (192.168.3.10) and video clients (192.168.3.12, 192.168.3.15) connect through NetFPGA-based OpenFlow packet switches (NF1, NF2) with GE-to-DWDM SFP converters (GE E-O / O-E, λ1 = 1553.3 nm, λ2 = 1554.1 nm), a 1x9 Wavelength Selective Switch (WSS)-based OpenFlow circuit switch with AWG mux/demux and a tap to an OSA, 25 km SMF spans, and an OpenFlow controller speaking the OpenFlow protocol.]

31 Lab demo with wavelength switches: [Figure: OpenFlow circuit switch, 25 km SMF, OpenFlow packet switch, GE-optical mux/demux.]

32 pac.c next step: a larger demonstration of capabilities enabled by converged networks.

33 Demo Goals

34 Demo topology: [Figure: a network operating system and app control a mesh of packet switches (PKT, Ethernet interfaces) and hybrid packet/circuit switches (Ethernet + SONET interfaces, TDM fabric).]

35 Demo methodology: [Figure: the same topology as slide 34, used for the steps that follow.]

36 Step 1: Aggregation into fixed circuits. Best-effort traffic (http, smtp, ftp, etc.) is aggregated into static circuits. [Figure: demo topology with the static circuits highlighted.]

37 Step 2: Aggregation into dynamic circuits. A streaming video flow is initially muxed into the static circuits; streaming video traffic then increases. [Figure: demo topology.]

38 Step 2 (continued): the increase leads to video flows being aggregated and packed into a dynamically created circuit that bypasses the intermediate packet switch. [Figure: demo topology.]

39 Step 2 (continued): an even greater increase in video traffic results in a dynamic increase of circuit bandwidth. [Figure: demo topology.]

40 Step 3: Fine-grained control. VoIP flows are aggregated over a dynamic low-bandwidth circuit with minimum propagation delay. [Figure: demo topology.]

41 Step 3 (continued): decreasing video traffic results in removal of the dynamic circuit. [Figure: demo topology.]

42 Step 4: Network recovery. Packet flows recover via rerouting; circuit flows recover via (1) a previously allocated backup circuit (protection) or (2) a dynamically created circuit (restoration). [Figure: demo topology.]
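In controller terms, the two circuit-recovery options differ only in when the backup path is computed. A sketch, assuming hypothetical helpers (switch_traffic, create_circuit, reroute_packet_flows are not real API calls):

```python
# Hypothetical failure handler: protection switches to a pre-allocated
# backup circuit; restoration computes and signals a new one on demand.
def on_circuit_failure(controller, circuit):
    if circuit.backup is not None:      # 1. protection
        controller.switch_traffic(circuit, circuit.backup)
    else:                               # 2. restoration
        backup = controller.create_circuit(circuit.src, circuit.dst,
                                           circuit.bandwidth)
        controller.switch_traffic(circuit, backup)
    # Pure packet flows over the failed link are simply rerouted.
    controller.reroute_packet_flows(circuit.packet_flows)
```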

43 Demo References

44 pac.c business models

45 Demo motivation: It is well known that Transport Service Providers dislike giving up manual control of their networks to an automated control plane, no matter how intelligent that control plane may be. How to convince them? It is also well known that converged operation of packet & circuit networks is a good idea for those that own both types of networks (e.g. AT&T, Verizon). But what about those who own only packet networks (e.g. Google)? They do not wish to buy circuit switches. How to convince them? We believe the answer to both lies in virtualization (or slicing).

46 Demo Goals

47 Basic idea: unified virtualization. [Figure: a FlowVisor sits between client controllers (C) and the physical packet (P) and circuit (CK) switches, speaking the OpenFlow protocol both northbound and southbound.]

48 Deployment scenario: different service providers. [Figure: a single physical infrastructure of packet & circuit switches, under Transport Service Provider (TSP) control, is virtualized by the FlowVisor into isolated client network slices: an ISP 'A' client controller, a private line client controller, and an ISP 'B' client controller.]

49 Demo topology: the Transport Service Provider's (TSP) virtualized network; Internet Service Provider ISP#1's OpenFlow-enabled network with a slice of the TSP's network; ISP#2's OpenFlow-enabled network with another slice of the TSP's network; and the TSP's private line customer. [Figure: each ISP's NetOS and app controls its own packet switches plus its slice of the TSP's hybrid packet/circuit switches.]

50 Demo methodology. We will show (a sketch of such a slice definition follows this list):
1. The TSP can virtualize its network with the FlowVisor while maintaining operator control via NMS/EMS:
a) the FlowVisor manages slices of the TSP's network for ISP customers, where slice = bandwidth + control of part of the TSP's switches;
b) NMS/EMS can be used to manually provision circuits for private line customers.
2. Importantly, every customer (ISP#1, ISP#2, private line) is isolated from the other customers' slices:
a) ISP#1 is free to do whatever it wishes within its slice, e.g. use an automated control plane (like OpenFlow) and bring up and tear down links as dynamically as it wants;
b) ISP#2 is free to do the same within its slice;
c) neither can control anything outside its slice, nor interfere with other slices;
d) the TSP can still use NMS/EMS for the rest of its network.
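A slice definition of the kind the FlowVisor enforces might look like the following sketch; the layout, switch names, and controller addresses are made up for illustration, and the real FlowVisor configuration syntax differs:

```python
# Illustrative slice table: slice = bandwidth + control of part of the
# TSP's switches. Not the real FlowVisor config format.
slices = {
    "ISP1": {
        "controller": "tcp:isp1.example.com:6633",  # hypothetical address
        "switch_ports": {"hybrid1": [1, 2], "hybrid2": [3]},
        "static_bw_mbps": 1000,    # paid up-front (slide 51)
        "redirect_bw_mbps": 1000,  # also paid up-front (slide 51)
    },
    "ISP2": {
        "controller": "tcp:isp2.example.com:6633",
        "switch_ports": {"hybrid2": [4], "hybrid3": [1]},
        "static_bw_mbps": 1000,    # paid up-front (slide 55)
        "extra_bw": "pay-per-use, not guaranteed",
    },
}
```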

51 ISP #1’s Business Model ISP# 1 pays for a slice = { bandwidth + TSP switching resources } 1.Part of the bandwidth is for static links between its edge packet switches (like ISPs do today) 2.and some of it is for redirecting bandwidth between the edge switches (unlike current practice) 3.The sum of both static bandwidth and redirected bandwidth is paid for up-front. 4.The TSP switching resources in the slice are needed by the ISP to enable the redirect capability.

52 ISP# 1’s network PKT ETHETH ETHETH ETHETH ETHETH PKTPKT ETHETH ETHETH SONETSONET SONETSONET TDMTDM ETHETH ETHETH ETHETH ETHETH PKTPKT ETHETH ETHETH SONETSONET SONETSONET TDMTDM ETHETH ETHETH ETHETH ETHETH PKTPKT ETHETH ETHETH SONETSONET SONETSONET TDMTDM ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH Packet (virtual) topology Actual topology Notice the spare interfaces..and spare bandwidth in the slice

53 ISP# 1’s network PKT ETHETH ETHETH ETHETH ETHETH PKTPKT ETHETH ETHETH SONETSONET SONETSONET TDMTDM ETHETH ETHETH ETHETH ETHETH PKTPKT ETHETH ETHETH SONETSONET SONETSONET TDMTDM ETHETH ETHETH ETHETH ETHETH PKTPKT ETHETH ETHETH SONETSONET SONETSONET TDMTDM ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH Packet (virtual) topology Actual topology ISP# 1 redirects bw between the spare interfaces to dynamically create new links!!

54 ISP #1’s Business Model Rationale Q. Why have spare interfaces on the edge switches? Why not use them all the time? A. Spare interfaces on the edge switches cost less than bandwidth in the core 1.sharing expensive core bandwidth between cheaper edge ports is more cost-effective for the ISP 2.gives the ISP flexibility in using dynamic circuits to create new packet links where needed, when needed 3.The comparison is between (in the simple network shown) a)3 static links + 1 dynamic link = 3 ports/edge switch + static & dynamic core bandwidth b)vs. 6 static links = 4 ports/edge switch + static core bandwidth c)as the number of edge switches increase, the gap increases

55 ISP #2’s Business Model ISP# 2 pays for a slice = { bandwidth + TSP switching resources } 1.Only the bandwidth for static links between its edge packet switches is paid for up-front. 2.Extra bandwidth is paid for on a pay-per-use basis 3.TSP switching resources are required to provision/tear- down extra bandwidth 4.Extra bandwidth is not guaranteed

56 ISP# 2’s network Packet (virtual) topology Actual topology PKT ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH PKTPKT ETHETH ETHETH SONETSONET SONETSONET TDMTDM ETHETH ETHETH ETHETH ETHETH PKTPKT ETHETH ETHETH SONETSONET SONETSONET TDMTDM PKTPKT ETHETH ETHETH SONETSONET SONETSONET TDMTDM ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ETHETH ISP# 2 uses variable bandwidth packet links ( our SC09 demo )!! Only static link bw paid for up-front

57 ISP #2’s Business Model Rationale Q. Why use variable bandwidth packet links? In other words why have more bandwidth at the edge (say 10G) and pay for less bandwidth in the core up-front (say 1G) A.Again it is for cost-efficiency reasons. 1.ISP’s today would pay for the 10G in the core up-front and then run their links at 10% utilization. 2.Instead they could pay for say 2.5G or 5G in the core, and ramp up when they need to or scale back when they don’t – pay per use.

58 Demonstrating isolation: the TSP provisions a private line that uses up all the spare bandwidth on one link. The switches inform ISP#2's controller that the non-guaranteed extra bandwidth is no longer available on that link (it may be available elsewhere). ISP#2 can still vary bandwidth on its other links, but the FlowVisor blocks ISP#2's attempts on the fully-used link. [Figure: actual topology with the private line customer and ISP#2's NetOS.]

59 Demo references: FlowVisor technical report: http://openflowswitch.org/downloads/technicalreports/openflow-tr-2009-1-flowvisor.pdf; use of spare interfaces (for ISP#1): OFC 2002 paper; variable bandwidth packet links (for ISP#2): http://www.openflowswitch.org/wp/2009/11/openflow-demo-at-sc09/

60 Summary: OpenFlow is a large clean-slate program with many motivations and goals; convergence of packet & circuit networks is one such goal. OpenFlow simplifies and unifies across layers and technologies (packet and circuit infrastructures, electronics and photonics) and enables new capabilities in converged networks, with real circuits or virtual circuits.

