Software defined networking: Experimental research on QoS

Presentation transcript:

UNIVERSITY OF MONTENEGRO, FACULTY OF ELECTRICAL ENGINEERING
Software defined networking: Experimental research on QoS
Slavica Tomovic
RIPE SEE6 Regional Meeting, Budva, 13.06.2017.

Motivation: QoS problems
Internet today: "best-effort" service model
- Different types of traffic are treated in the same way
- There are no performance guarantees
Trend: constant growth in demand for multimedia applications
- Video streaming, VoIP, video conferencing, online gaming, IPTV...
- Different QoS requirements: bandwidth, delay, jitter, packet loss...
"Best-effort" is not good enough!

General outlook on SDN
The key concepts:
- Separation of the control plane from the data plane
- Centralized control plane (network operating system)
- Programmable devices
- "Open" API between control and data plane (e.g. OpenFlow); northbound API towards applications
SDN benefits:
- Flexible control
- Traffic engineering
- Simplified management
- Global view of the network state
(Figure: traditional devices, each with its own operating system and specialized hardware, versus the SDN model with applications on top of a network operating system controlling the hardware through an open API.)

QoS controller design
Modules:
- Resource monitoring
- Routing
- Resource reservation
- Admission control
Flow rules consist of match fields, counters and actions.
Match fields: In Port; Src MAC, Dst MAC, Eth Type, VLAN Id (L2); IP ToS, IP Prot., IP Src, IP Dst (L3); TCP Src Port, TCP Dst Port (L4)
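As an illustration only (not the authors' controller code), such a flow rule can be written down as a plain Python structure; the field names mirror the OpenFlow 1.0 match fields listed above, and all concrete values are made-up examples.

# Illustrative flow entry: OpenFlow 1.0 style match fields, counters and actions.
# All concrete values below are hypothetical examples.
flow_entry = {
    "match": {
        "in_port": 1,
        "dl_src": "00:00:00:00:00:02", "dl_dst": "00:00:00:00:00:05",   # L2
        "dl_type": 0x0800, "dl_vlan": 10,
        "nw_tos": 46, "nw_proto": 17,                                   # L3
        "nw_src": "8.8.8.2", "nw_dst": "8.8.8.5",
        "tp_src": 5004, "tp_dst": 5004,                                 # L4
    },
    "counters": {"packets": 0, "bytes": 0, "duration_sec": 0},
    "actions": ["enqueue:1:1"],   # e.g. send to queue 1 on output port 1
}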

QoS controller design
Resource monitoring:
- Collects information about the current state of the network
- Topology discovery and statistics gathering
Resource reservation:
- Reserves bandwidth for priority traffic flows by configuring output queues of OpenFlow switches (HTB scheduling algorithm)
- For each QoS flow, minimum and maximum rates are configured according to its requirements
- A separate buffer is created for best-effort traffic
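A minimal sketch of how such a reservation could be made on an Open vSwitch based switch, using the standard ovs-vsctl QoS/queue records with the linux-htb type. This is not the OpenMont controller's own code; the port name and rates are hypothetical.

import subprocess

def reserve_bandwidth(port, link_capacity_bps, qos_flows):
    """Configure HTB queues on one OVS port: queue 0 is the shared
    best-effort buffer, queues 1..N carry admitted priority flows with
    their minimum/maximum rates. qos_flows: list of (min_bps, max_bps)."""
    queue_map = ["0=@q0"]
    queue_cmds = ["--", "--id=@q0", "create", "queue",
                  "other-config:max-rate=%d" % link_capacity_bps]
    for i, (min_rate, max_rate) in enumerate(qos_flows, start=1):
        queue_map.append("%d=@q%d" % (i, i))
        queue_cmds += ["--", "--id=@q%d" % i, "create", "queue",
                       "other-config:min-rate=%d" % min_rate,
                       "other-config:max-rate=%d" % max_rate]
    cmd = ["ovs-vsctl", "set", "port", port, "qos=@newqos",
           "--", "--id=@newqos", "create", "qos", "type=linux-htb",
           "other-config:max-rate=%d" % link_capacity_bps,
           "queues:" + ",".join(queue_map)] + queue_cmds
    subprocess.check_call(cmd)

# Example: 100 Mbit/s link, one priority flow guaranteed 50 Mbit/s
reserve_bandwidth("s1-eth2", 100_000_000, [(50_000_000, 100_000_000)])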

QoS controller design
Route computation:
- QoS algorithm: bandwidth-delay constrained routing
- Best-effort algorithm: routing metric based on the estimated level of link load
- A rerouting algorithm is run when the link load reaches a certain threshold value (congestion threshold: 80% of link capacity)
- Goal: bring the link usage below the threshold level with as few reroutings as possible
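One plausible reading of this slide, as a sketch rather than the actual OpenMont algorithms: prune links without enough residual bandwidth, pick the minimum-delay path for QoS flows, and use a load-dependent cost for best-effort traffic. The edge attributes (capacity, load, delay) and the penalty function are assumptions.

import networkx as nx

def qos_route(G, src, dst, bw_demand, delay_bound):
    """Bandwidth-delay constrained routing sketch: keep only links with
    enough residual bandwidth, take the minimum-delay path, and admit
    the flow only if the end-to-end delay bound holds."""
    feasible = nx.Graph([(u, v, d) for u, v, d in G.edges(data=True)
                         if d["capacity"] - d["load"] >= bw_demand])
    try:
        path = nx.shortest_path(feasible, src, dst, weight="delay")
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return None                      # admission control rejects the flow
    delay = sum(feasible[u][v]["delay"] for u, v in zip(path, path[1:]))
    return path if delay <= delay_bound else None

def best_effort_route(G, src, dst, threshold=0.8):
    """Best-effort routing with a cost that grows once the estimated
    link utilization crosses the 80% congestion threshold."""
    def cost(u, v, d):
        util = d["load"] / d["capacity"]
        return 1.0 if util < threshold else 1.0 + 10.0 * (util - threshold)
    return nx.shortest_path(G, src, dst, weight=cost)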

Example: OpenMont controller in operation
List of priority flows: source 8.8.8.2, destination 8.8.8.5, demand 50 Mbit/s (link capacity 100 Mbit/s)
Controller instructions to the switches:
- Priority flow: delete the best-effort entries from the flow tables, create new buffers on the route switch 1 – switch 2 and send the priority packets to them
- Not a priority flow: send the packets to the buffers with id 0, over the route switch 1 – switch 2
- Not a priority flow and the shortest route is congested: go over a longer route
Resulting flow tables of switch 1 and switch 2: flow 8.8.8.1 – 8.8.8.7 -> go to buffer 0 of the output port; flow 8.8.8.2 – 8.8.8.5 -> go to buffer 1 of the output port
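On an Open vSwitch data plane, entries like those in this example could be installed with the standard OpenFlow 1.0 enqueue action. The sketch below uses ovs-ofctl for brevity rather than the controller's own FLOW_MOD messages; bridge names, port numbers and priorities are hypothetical.

import subprocess

def install_enqueue_flow(bridge, nw_src, nw_dst, out_port, queue_id, priority):
    """Install a flow entry that forwards matching IP traffic into a
    specific output queue via the OpenFlow 1.0 enqueue action."""
    flow = "priority=%d,ip,nw_src=%s,nw_dst=%s,actions=enqueue:%d:%d" % (
        priority, nw_src, nw_dst, out_port, queue_id)
    subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow])

# Best-effort flow 8.8.8.1 -> 8.8.8.7: queue (buffer) 0 on port 2
install_enqueue_flow("switch1", "8.8.8.1", "8.8.8.7", out_port=2, queue_id=0, priority=100)
# Priority flow 8.8.8.2 -> 8.8.8.5: queue (buffer) 1 with the 50 Mbit/s reservation, port 1
install_enqueue_flow("switch1", "8.8.8.2", "8.8.8.5", out_port=1, queue_id=1, priority=200)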

Experimental results
- IntServ QoS experiment
- Best-effort experiment
- SDN QoS model

Testbed
OpenFlow switches:
- HP ProCurve 6600
- Pica8
- Open vSwitch (OVS) software switches
- NetFPGA boards
Like other Linux-based machines with OVS installed, the Pica8 switch can be logically partitioned into multiple independent OpenFlow switches with dedicated physical interfaces. For our purposes, the Pica8 is virtualized into twelve virtual switches (colored in orange in Fig. 1). These virtualized switches are managed in the same way as OVS switches, but with the possibility to store the flow entries in the hardware flow tables. The HP ProCurve 6600 switches cannot be virtualized in the same way as the Pica8: while white-box switches allow creation of multiple virtual switches and assignment of physical ports to them, creating multiple virtual OpenFlow switches from an HP ProCurve 6600 requires a separate VLAN (Virtual Local Area Network) to be configured for each virtual switch. The OpenFlow virtual switches are independent and have their own configuration and connection towards the OpenFlow controller. In our testbed, the HP ProCurve is logically separated into four OpenFlow virtual switches, colored in green in Fig. 1.
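The partitioning described above boils down to creating several OVS bridges, each owning a disjoint set of physical ports and its own controller connection. A minimal sketch, with hypothetical bridge names, port names and controller address:

import subprocess

CONTROLLER = "tcp:10.0.0.1:6633"   # hypothetical controller address

def make_virtual_switch(name, physical_ports):
    """Carve one virtual OpenFlow switch out of an OVS-based white-box
    switch: create a bridge, attach its dedicated physical ports and
    point it at the OpenFlow controller."""
    subprocess.check_call(["ovs-vsctl", "add-br", name])
    for port in physical_ports:
        subprocess.check_call(["ovs-vsctl", "add-port", name, port])
    subprocess.check_call(["ovs-vsctl", "set-controller", name, CONTROLLER])

# e.g. two of the twelve virtual switches, each with its own interfaces
make_virtual_switch("vs1", ["te-1/1/1", "te-1/1/2"])
make_virtual_switch("vs2", ["te-1/1/3", "te-1/1/4"])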

Controller benchmarking
The most popular open-source SDN controllers: ONOS, Floodlight, POX
Latency tests and throughput tests with Cbench, a controller benchmarking tool.
Cbench emulates OpenFlow switches which communicate with the controller. As input arguments it takes the number of switches that should be emulated, the number of hosts per switch and the controller's address. It supports two working modes: latency and throughput mode. In latency mode, it sends PACKET_IN messages to the controller one at a time. These messages are used by the OpenFlow protocol to inform the controller that a packet received by a switch does not match any entry in the flow table. In response, the controller generates a FLOW_MOD message, which installs a new entry in the flow table. If the controller cannot make a routing decision (e.g. because the packet destination is unknown), it will simply instruct the switch to drop or flood the packet via a PACKET_OUT message, depending on the control application used. In latency mode, Cbench measures the time taken to handle a single packet: the next PACKET_IN message is generated only after the corresponding response is received from the controller. In throughput mode, the emulated switches send as many PACKET_IN messages as possible to the controller, making sure that the controller always has messages to process [6]. The results of both throughput and latency tests are expressed as the number of received responses per second. In order to examine how multi-threading impacts controller performance, we conducted multiple experiments in which the controllers were run with different numbers of threads (using the taskset Linux command).
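A sketch of how such a run might be driven from Python. The Cbench flags follow its usual command-line options (-s switches, -M hosts per switch, -l loops, -m milliseconds per loop, -t for throughput mode), while the concrete values and the taskset example are assumptions, not the exact settings used in these experiments.

import subprocess

def run_cbench(controller_ip, port=6633, switches=16, hosts_per_switch=1000,
               loops=10, throughput=False):
    """Run one Cbench test against an OpenFlow controller and return its
    textual report (responses per second). Without -t, Cbench runs in
    latency mode; with -t, in throughput mode."""
    cmd = ["cbench", "-c", controller_ip, "-p", str(port),
           "-s", str(switches), "-M", str(hosts_per_switch),
           "-l", str(loops), "-m", "10000"]          # 10 s per loop
    if throughput:
        cmd.append("-t")
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# The controller itself would be pinned to a given number of cores first, e.g.:
#   taskset -c 0-3 java -jar floodlight.jar
print(run_cbench("127.0.0.1", throughput=True))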

Evaluation results (charts): Cbench latency test and Cbench throughput test

Conclusions
- The distributed control plane hinders the development of efficient QoS mechanisms
- The SDN architecture enables implementation of efficient network monitoring mechanisms
- A QoS-aware design of the SDN controller promises better QoS for both priority and best-effort traffic flows
- Scalability of the SDN controller is an important issue

Thank you! Questions?