ONOS Open Network Operating System

Presentation transcript:

ONOS: Open Network Operating System. An experimental open-source distributed SDN OS. Introduction: experimental, open, distributed. Pankaj Berde, Umesh Krishnaswamy, Jonathan Hart, Masayoshi Kobayashi, Pavlin Radoslavov, Pingping Lin, Sean Corcoran, Tim Lindberg, Rachel Sverdlov, Suibin Zhang, William Snow, Guru Parulkar

Agenda: overview of ONRC (Open Networking Research Center); ONOS architecture; scale-out and high availability; the network graph as northbound abstraction; demo; consistency models; development and test environment; performance; next steps.

Leadership. Nick McKeown (KP, Mayfield, Sequoia Professor, Stanford); Larry Peterson (Bob Kahn Professor, Princeton; Chief Architect, ON.Lab); Scott Shenker (Professor, UC Berkeley; Chief Scientist, ICSI). Their common accomplishments and recognitions: members of the National Academy of Engineering, ACM SIGCOMM Award winners, Fellows of the IEEE and ACM, and successful entrepreneurs with real impact on the practice of networking and cloud. Let me begin at the top: we are really fortunate to have a leadership of three very accomplished people, who provide the vision and direction that define who we are and what we do. You may also wonder why they came together for this venture.

Stanford/Berkeley SDN Activities with Partners (timeline, 2007-2011). Demo milestones included VM migration (best demo), a trans-Pacific "Baby GENI", nationwide GENI, and "The OpenFlow Show" (IT World), shown across SIGCOMM 2008, GEC3, SIGCOMM 2009, GEC6, GEC9, and Interop 2011. Deployments: over 68 countries (Europe, Japan, China, Korea, Brazil, etc.); the US R&E community via GENI (8 universities plus Internet2 and NLR) and many other campuses; Stanford's production network in the McKeown group and the CIS/EE building (~45 switches/APs, ~25 users). Artifacts over the period: OpenFlow spec v0.8.9, v1.0, v1.1; reference switches (NetFPGA, software); network OSes (NOX, SNAC, Beacon); virtualization (FlowVisor, FlowVisor in Java); tools (test suite, oftrace, Mininet, measurement tools, GENI software suite, Expedient/Opt-in Manager/FOAM); development platform starting from Ethane plus Broadcom.

Scaling of SDN Innovation: build a strong intellectual foundation; bring open source SDN tools and platforms to the community; standardize OpenFlow and promote SDN (~100 members from all parts of the industry); bring the best SDN content and facilitate high-quality dialogue (three successive sold-out events, with participation from across the ecosystem); and the SDN Academy, bringing the best SDN training to companies to accelerate SDN development and adoption.

ONRC Organizational Structure. Berkeley: Scott Shenker, Sylvia Ratnasamy. Stanford: Nick McKeown, Guru Parulkar, Sachin Katti. Open Networking Lab (ON.Lab): Exec Director Guru Parulkar; VP Eng Bill Snow; Chief Architect Larry Peterson; 16-19 engineers and tech leads (including the PlanetLab team); tools and platforms for the SDN community; OpenCloud demonstration of XaaS and SDN; PhD/postdoc research. To achieve our mission we came up with a three-part structure: research at Berkeley, research at Stanford, and an independent nonprofit, ON.Lab. The center formalizes our Berkeley-Stanford collaboration and gives us more reasons to get together with our students and promote interaction. Our collaborator Sachin Katti, a professor of EE and CS, has been working on OpenRadio and SDN-based mobile wireless networks. ON.Lab is unique to our center: we created it as an independent lab to develop, support, and deploy open source SDN tools and platforms for the benefit of the community, and to create SDN deployments and demonstrations for R&E networks.

Mission Bring innovation and openness to internet and cloud infrastructure with open source tools and platforms

Tools & Platforms (stack diagram): apps (including SDN-IP peering) over open interfaces; the network OS (ONOS); the network hypervisor (FlowVisor, OpenVirteX); open interfaces down to the forwarding layer; plus third-party components and tooling such as NetSight, TestON with debugging support, and Mininet Cluster Edition.

Open Network OS (ONOS): architecture; scale-out and high availability; the network graph as northbound abstraction; demo; consistency models; development and test environment; performance; next steps.

ONOS: Executive Summary. A distributed network OS: network graph northbound abstraction, horizontally scalable, highly available, built from open source components. Status, ONOS version 0.1: Flow API, shortest-path computation, and a sample application; build & QA (Jenkins, sanity tests, perf/scale tests, CHO); deployment in progress at REANNZ (SDN-IP peering). Next: share our experience with the community; address the non-goals discussed earlier and evaluate more use cases and applications on this experimental open distributed NOS; explore performance and reactive computation frameworks; expand the graph abstraction to more types of network state; control functions for intra-domain and inter-domain routing; example use cases such as traffic engineering and dynamic virtual networks on demand. Want to join us in this experiment? Look out for more information on our website.

ONOS – Architecture Overview

Open Network OS Focus (started in summer 2012). The goal: a global network view in a scale-out, fault-tolerant network OS over OpenFlow packet forwarding elements (including programmable base stations), with applications such as routing, traffic engineering, and mobility on top. Two questions have to do with the logical centralization of the control plane and the design of the network OS. First: will the network OS become a performance bottleneck, or can it be scaled horizontally as more horsepower is needed? Second: will the network OS become a single point of failure, or can it and the control plane be made fault tolerant? The third question concerns the northbound API: what is the best abstraction the network OS can offer application writers to enable reusable and pluggable network control and management applications? ONOS attempts to address exactly these issues.

Prior Work. ONIX: a distributed control platform for large-scale networks, focused on reliability, scalability, and generality; a scale-out NOS aimed at network virtualization in data centers, providing state distribution primitives, a global network view, and the ONIX API; closed source. Other work: Helios (NEC), MidoNet (Midokura), HyperFlow, Maestro, Kandoo, and the NOX, POX, Beacon, Floodlight, and Trema controllers. ONIX did attempt to solve these issues, and there are a few more efforts, but to enable more research in this area the community needs an open source distributed SDN OS.

ONOS High-Level Architecture. ONOS is built on two distributed data constructs. (1) The network graph: the global network view, holding network state represented as a graph; it is eventually consistent and is implemented with the Titan graph DB over Cassandra as an in-memory DHT. (2) The distributed registry: global cluster-management state stored in ZooKeeper with strong (transactional) consistency; it records which instance controls each switch object and thus has write permission to update the network graph; in general, it stores resource ownership. Multiple ONOS instances (instances 1, 2, 3, each an OpenFlow controller built on Floodlight drivers) control different parts of the network and, by cooperating through these two constructs, realize a single global network view.

Scale-out & HA

ONOS Scale-Out. Each part of the network is solely controlled by a single ONOS instance, and that same instance is solely responsible for maintaining that partition's state in the network graph (we also refer to this as control isolation). This enables a simple scale-out design: as the network grows beyond the available control capacity, one can add another instance responsible for a new part of the network, and as that part is written into the network graph, applications still get a single global network view. Control capacity can grow with network size or application need.

ONOS Control Plane Failover. Initially the distributed registry records: master for switch A = ONOS 1; candidates = ONOS 2, ONOS 3. When instance 1 fails, the registry detects that it is down and releases the mastership of switch A (master = NONE); the remaining candidates then join a mastership election within the registry. Say instance 2 wins: the registry marks ONOS 2 as master for switch A, instance 2's channel to the switch becomes active, and the other channels become passive. The registry's strong consistency and coordination enable quick switch failover when a control-plane failure occurs.
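The transcript shows no registry code; as a rough sketch of how per-switch mastership election can be built on ZooKeeper, here is a hypothetical use of Apache Curator's LeaderLatch recipe (the class, the /mastership path, and the method names are illustrative assumptions, not ONOS's actual registry API):

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

/** Hypothetical sketch: one mastership election per switch dpid. */
public class SwitchMastership {
    private final CuratorFramework zk;

    public SwitchMastership(String zkConnect) {
        zk = CuratorFrameworkFactory.newClient(
                zkConnect, new ExponentialBackoffRetry(1000, 3));
        zk.start();
    }

    /**
     * Joins the election for one switch. ZooKeeper keeps an ephemeral
     * sequential node per candidate under /mastership/<dpid>; the lowest
     * node is the master. If the master's session dies, its node vanishes
     * and the next candidate takes over automatically.
     */
    public LeaderLatch contendFor(String dpid, String instanceId) throws Exception {
        LeaderLatch latch = new LeaderLatch(zk, "/mastership/" + dpid, instanceId);
        latch.start();
        return latch;
    }
}
```

An instance would treat latch.hasLeadership() as its permission to activate the OpenFlow channel to that switch and to write the switch's state into the network graph; because the latch is backed by an ephemeral ZooKeeper node, a crashed instance's mastership is released automatically, which matches the failover sequence above.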

Network Graph

ONOS Network Graph Abstraction (diagram: vertices with ids 1, 2, 3 labeled A, C, B, joined by labeled edges with ids 101-106). The network graph is organized as a graph database: network objects are vertices, connected by edges that express the relations between them. ONOS uses Titan as the graph DB with Cassandra, an eventually consistent in-memory DHT, as its backend.

Network Graph. Network state is naturally represented as a graph: basic network objects such as switches, ports, and devices (hosts) are vertices, while links and attachment points are edges ("on", "link", "host"). Applications traverse and write to this graph to program the data plane; the following example shows how.
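As a minimal sketch of this model, using the Blueprints API that Titan implemented at the time but with an in-memory TinkerGraph as a stand-in (the property names are illustrative, not ONOS's actual schema):

```java
import com.tinkerpop.blueprints.Graph;
import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.impls.tg.TinkerGraph;

/** Illustrative only: a tiny switch/port/device graph in memory. */
public class GraphModelSketch {
    public static void main(String[] args) {
        Graph g = new TinkerGraph();  // stand-in for Titan-on-Cassandra

        Vertex sw = g.addVertex(null);
        sw.setProperty("type", "switch");
        sw.setProperty("dpid", "00:00:00:00:00:00:00:01");

        Vertex port = g.addVertex(null);
        port.setProperty("type", "port");
        port.setProperty("number", 1);

        Vertex host = g.addVertex(null);
        host.setProperty("type", "device");
        host.setProperty("mac", "aa:bb:cc:dd:ee:ff");

        // Edges mirror the slide's labels: a switch has ports ("on"),
        // and a device attaches to a port ("host").
        g.addEdge(null, sw, port, "on");
        g.addEdge(null, port, host, "host");
    }
}
```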

Example: Path Computation App on the Network Graph. The graph holds switch, port, and device vertices joined by "on", "link", and "host" edges, plus flow-path and flow-entry vertices with inport/outport references. The application computes a path by traversing the links from source to destination, then writes a flow entry for each hop, forming a flow path. The ONOS core translates these flow entries into flow-table rules and pushes them onto the topology. This keeps the path computation app simple and stateless: it does not need to worry about topology maintenance.
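A minimal sketch of the traversal step, assuming the switch adjacency has already been read off the network graph (a plain-Java BFS for the fewest-hops path; all names are illustrative, and ONOS's actual computation may differ):

```java
import java.util.*;

/** Illustrative fewest-hops path over a switch adjacency map. */
public class PathComputation {
    // dpid -> neighboring dpids, as an app might read them off the graph
    private final Map<String, List<String>> adj;

    public PathComputation(Map<String, List<String>> adj) {
        this.adj = adj;
    }

    /** BFS from src to dst; returns the switch sequence, or an empty list. */
    public List<String> shortestPath(String src, String dst) {
        Map<String, String> parent = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        parent.put(src, src);
        queue.add(src);
        while (!queue.isEmpty()) {
            String sw = queue.poll();
            if (sw.equals(dst)) {
                List<String> path = new ArrayList<>();
                for (String s = dst; !s.equals(src); s = parent.get(s)) {
                    path.add(s);  // walk parent pointers back to src
                }
                path.add(src);
                Collections.reverse(path);
                return path;
            }
            for (String next : adj.getOrDefault(sw, Collections.emptyList())) {
                if (!parent.containsKey(next)) {
                    parent.put(next, sw);
                    queue.add(next);
                }
            }
        }
        return Collections.emptyList();
    }
}
```

Each hop of the returned path would then become one flow entry (match on the input port, output toward the next switch) written into the graph for the ONOS core to translate into flow-table rules.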

Example: a simpler abstraction on the network graph? The network graph simplifies applications, but can it also accelerate innovation of simpler control-plane abstractions? One example is a logical crossbar: an app or service on top of ONOS that exposes virtual network objects (edge ports on a single logical switch) while hiding the physical switches, ports, links, and hosts, maintaining the mapping from the simpler view to the complex one. Hierarchies of such abstractions can hide complexity further, making applications even simpler and enabling new abstractions. We feel the network graph will unlock such innovations.

Network Graph Representation. Example vertex sizes: a flow-path vertex with 10 properties, a switch vertex with 3 properties, flow-entry vertices with 11 properties; properties are key/value pairs such as dpid or state. Each vertex is represented as a Cassandra row, keyed by the vertex id, with row indices for fast vertex-centric queries. Each edge is represented as a Cassandra column on that row: the column encodes the label id plus direction, the edge id, the adjacent vertex id, and signature properties, with the remaining properties in the column value.

Network Graph and Switches. Each ONOS instance runs a switch manager that speaks OpenFlow to its switches. This is how ONOS builds the network graph: when switches connect and register with an instance, the switches and their ports are added to the graph; when switches disconnect, they are marked inactive in the graph.

Network Graph and Link Discovery. Each instance's link discovery module sends LLDP packets out of the switches connected to it and records discovered links in the network graph. Because discovery goes through the shared graph, links whose source and destination ports are controlled by different ONOS instances can also be discovered.
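A simplified sketch of the learning step, with the OpenFlow plumbing omitted (types and names are illustrative): an LLDP probe sent out of one switch port and received as a packet-in on another identifies one unidirectional link.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative link learning from LLDP packet-ins. */
public class LinkDiscovery {
    // Discovered links, keyed "dpid:port" -> "dpid:port".
    private final Map<String, String> links = new HashMap<>();

    /**
     * Called on a packet-in carrying one of our LLDP probes: the probe's
     * payload names the switch/port it was sent from, and the packet-in
     * metadata names the switch/port it arrived on. Together they define
     * one unidirectional link.
     */
    public void onLldpPacketIn(String srcDpid, int srcPort,
                               String dstDpid, int dstPort) {
        links.put(srcDpid + ":" + srcPort, dstDpid + ":" + dstPort);
        // In ONOS this write would go to the shared network graph, so a
        // link whose two ends are controlled by different instances is
        // still visible to all of them.
    }
}
```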

Devices and the Network Graph. Each instance runs a device manager alongside its switch manager and link discovery. Packet-ins from hosts are used to learn devices and their attachment points, and the network graph is updated with this information.

Path Computation with the Network Graph. Flow paths are provisioned in ONOS. The source dpid of a flow determines which instance computes its path (see the sketch below). Computed paths and their flow entries are also stored in the network graph, with flow entries related to the switches they program.
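The slide does not say how a source dpid maps to an instance; one plausible sketch is a deterministic hash over the instance list (illustrative only, not necessarily ONOS's scheme; choosing the instance that masters the source switch is another natural option):

```java
import java.util.List;

/** Illustrative: pick the instance that computes a flow's path. */
public class FlowPartitioner {
    private final List<String> instances;  // e.g. ["onos-1", "onos-2", "onos-3"]

    public FlowPartitioner(List<String> instances) {
        this.instances = instances;
    }

    /** The same source dpid always maps to the same instance. */
    public String ownerOf(String srcDpid) {
        return instances.get(Math.floorMod(srcDpid.hashCode(), instances.size()));
    }
}
```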

Network Graph and Flow Manager. Each instance's flow manager programs the switches connected to it, deriving flowmods from the flow state in the network graph. When a link fails, path computation recomputes a new path and the flow manager pushes the new flow entries.
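A sketch of the resulting reconciliation behavior (illustrative; the real flowmod encoding through an OpenFlow library is omitted):

```java
import java.util.Map;
import java.util.Set;

/** Illustrative: push network-graph flow state to locally mastered switches. */
public class FlowManager {
    /** Stand-in for the OpenFlow connection to a switch. */
    public interface OpenFlowChannel {
        void sendFlowMod(String dpid, String flowEntry);
    }

    private final Set<String> mySwitches;  // dpids this instance masters
    private final OpenFlowChannel channel;

    public FlowManager(Set<String> mySwitches, OpenFlowChannel channel) {
        this.mySwitches = mySwitches;
        this.channel = channel;
    }

    /**
     * desired maps dpid to a serialized flow entry, as read from the
     * network graph after a path (re)computation. Only entries for
     * switches this instance controls are pushed down.
     */
    public void sync(Map<String, String> desired) {
        for (Map.Entry<String, String> e : desired.entrySet()) {
            if (mySwitches.contains(e.getKey())) {
                channel.sendFlowMod(e.getKey(), e.getValue());
            }
        }
    }
}
```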

ONOS High-Level Architecture (recap): instances 1-3 share the eventually consistent network graph (Titan graph DB over the Cassandra in-memory DHT) and the strongly consistent distributed registry (ZooKeeper), each instance running an OpenFlow controller built on Floodlight drivers.

DEMO

Consistency Deep Dive

Consistency Definition. Strong consistency: upon an update to the network state by one instance, all subsequent reads by any instance return the last updated value. Strong consistency adds complexity and latency to distributed data management. Eventual consistency is a slight relaxation, allowing readers to lag behind for a short period. Consistency is a spectrum rather than a binary property: no practical system sits at either extreme, because every update involves some propagation cost or delay. The eventual consistency used here is much closer to the strong end than many other eventual-consistency definitions.

Strong Consistency using the Registry. Master election uses transactional consistency. Timeline: initially every instance reads master for switch A = NONE. When the master is elected for switch A (after a delay for locking and consensus), ONOS 1 is recorded as master, and once the mastership has consistently propagated, all subsequent reads of switch A's master on any instance return ONOS 1. This guarantees control isolation.

Why Strong Consistency is Needed for Master Election. With weaker consistency, a master election completed on instance 1 might not be visible on the other instances, which can lead to multiple masters for one switch; multiple masters would break our semantics of control isolation. Master election therefore needs strong locking semantics, and locks and latches are distributed primitives that require strong consistency.

Eventual Consistency in the Network Graph. A concrete example: before a switch connects, it is marked INACTIVE in the network graph on all instances. As soon as switch A connects to ONOS 1, it is marked ACTIVE immediately on instance 1; for a fraction of a second, instances 2 and 3 may still show it as INACTIVE until the DHT converges (the delay of eventual consensus). From that point on, the switch's state is consistent across all instances.

Cost of Eventual Consistency. The short delay means switch A's state is not yet ACTIVE on some ONOS instances in the previous example. What does this mean? The switch is not instantly available for path computation on those instances: applications on one instance will compute flows through switch A while other instances compute flows without it. Eventual consistency becomes more visible during control-plane network congestion. Similarly, when flow entries are written to the network graph they become available on all instances eventually, and each instance picks up its flow entries and pushes them onto its switches.
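Cassandra exposes this trade-off directly through per-operation consistency levels. ONOS 0.1 reached Cassandra through Titan rather than directly, but as a hedged illustration with the DataStax Java driver (the keyspace and table are hypothetical), a write acknowledged by a single replica is fast yet briefly invisible to readers of other replicas:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class ConsistencyDemo {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("onos");  // hypothetical keyspace

        // Fast, eventually consistent write: one replica acknowledges.
        SimpleStatement write = new SimpleStatement(
                "UPDATE switches SET state = 'ACTIVE' WHERE dpid = ?",
                "00:00:00:00:00:00:00:01");
        write.setConsistencyLevel(ConsistencyLevel.ONE);
        session.execute(write);

        // A read at ONE may briefly still return INACTIVE on a replica
        // the write has not reached yet.
        SimpleStatement read = new SimpleStatement(
                "SELECT state FROM switches WHERE dpid = ?",
                "00:00:00:00:00:00:00:01");
        read.setConsistencyLevel(ConsistencyLevel.ONE);
        System.out.println(session.execute(read).one().getString("state"));

        cluster.close();
    }
}
```

Pairing QUORUM writes with QUORUM reads would close that window, at the cost of exactly the extra latency named in the consistency definition above.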

Why is Eventual Consistency Good Enough for Network State? Physical network state changes asynchronously, so it is not transactional and does not need strong consistency; enforcing strong consistency across the data and control planes is too hard, and under real control-plane congestion it is more likely to make the network view inaccurate by delaying updates. Control applications already know they are working on slightly stale state and how to deal with it: in today's distributed control planes, each router makes its own decisions based on old information from other parts of the network, and it works fine. Network state is most often consumed by current-generation control applications with eventual consistency assumed.

Consistency Learnings. One consistency model does not fit all state. The consequences of propagation delays need to be well understood. More research is needed on handling different kinds of state under different consistency models.

Development & test environment

ONOS Development & Test Cycle. Source code on GitHub. Agile process: 3-4 week sprints. Mostly Java, plus many utility scripts. CI: Maven, Jenkins, JUnit, coverage, TestON. Vagrant-based development VM. A daily 4-hour Continuous Hours of Operation (CHO) test runs as part of the build, with several CHO cycles simulating rapid churn in the network and failures of ONOS instances.

ONOS Development Environment. A single installation script creates a cluster of VirtualBox VMs.

Test Lab Topology

ON.Lab ONOS Test Implementation. The ON.Lab team has implemented the following automated tests: ONOS unit tests (70% coverage); ONOS system tests for functionality, scale, performance, and resiliency (85% coverage); and white-box network graph performance measurements. All tests are executed nightly in the Jenkins continuous integration environment.

Performance

Key performance metrics in a network OS. Network scale (number of switches and ports) drives delay and throughput for: link, switch, and switch-port failure handling; packet-ins (requests to set up reactive flows); reading and searching the network graph; network graph traversals; and setup of proactive flows. Application scale (number of operations and applications) drives: the number of network events propagated to applications (delay and throughput); the number of operations on the network graph (delay and throughput); parallelism/threading for applications (parallelism on the network graph); and parallel path computation performance.

Performance: Hard Problems. Off-the-shelf open source components do not perform at the level required, and the ultra-low-latency requirements here are unique. Distributed and parallel programming techniques must be applied to scale control applications, and reactive control applications need an event-driven framework that scales.

ONOS: Summary. A distributed network OS, version 0.1: network graph northbound abstraction; horizontally scalable; highly available; built from open source components. Status: Flow API, shortest-path computation, and a sample application; build & QA (Jenkins, sanity tests, perf/scale tests, CHO); deployment in progress at REANNZ (SDN-IP peering). Next: share our experience with the community; address the non-goals discussed earlier and evaluate more use cases and applications on this experimental open distributed NOS; explore performance and reactive computation frameworks; expand the graph abstraction to more types of network state; control functions for intra-domain and inter-domain routing; example use cases such as traffic engineering and dynamic virtual networks on demand. Want to join us in this experiment? Look out for more information on our website.

www.onlab.us