OpenFlow App Testing Chao SHI, Stephen Duraski

Motivation
Networks are still complex!
o Distributed mechanisms
o Complex protocols
o Large state
o Asynchronous events, random orderings, race conditions

Motivation
OpenFlow does not reduce this complexity, it just encapsulates it.
o The distributed, asynchronous nature of networking is inherent
o The correctness of the controller program is not verified
o It is hard to test the applications you run on an OpenFlow controller

Race Condition

Background

Finite State Machine
A model of hardware or computer programs
o State: variable value assignments, flags, the code block currently executing, etc.
o Transition
o Input (event)
o Output (action)
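As a minimal illustration of this view (the class and transition names below are made up for the example, not taken from the paper or from NICE):

```python
# Illustrative-only finite state machine: states, inputs (events),
# transitions, and outputs (actions).

class FSM:
    def __init__(self, initial_state):
        self.state = initial_state            # e.g. variable assignments, flags
        self.transitions = {}                 # (state, event) -> (next_state, action)

    def add_transition(self, state, event, next_state, action):
        self.transitions[(state, event)] = (next_state, action)

    def step(self, event):
        next_state, action = self.transitions[(self.state, event)]
        self.state = next_state
        return action                         # the output produced by this transition

# Toy example: a port that can be brought up or down
fsm = FSM("down")
fsm.add_transition("down", "link_up", "up", "start_forwarding")
fsm.add_transition("up", "link_down", "down", "stop_forwarding")
print(fsm.step("link_up"))                    # -> start_forwarding
```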

State Diagram

Model Checking for Distributed Systems
System:
o Components & channels
State:
o Component state
o System state = component states + channel contents
Transition:
o Enabled transitions (system level)
o System execution (a sequence of transitions)

Model Checking
What to look for:
o Deadlocks and incompatible states that cause crashes
o Applies to both hardware and software
Method:
o State-space enumeration (depth-first search using a stack in the paper)
o Does not scale
Less suitable for software:
o Huge space of possible user inputs
o Verification of general software is undecidable, so it cannot be fully algorithmic
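A minimal sketch of the stack-based depth-first enumeration described above; `initial_state`, `enabled_transitions`, `apply`, and `check_invariants` are assumed to be supplied by the system model (all names are illustrative):

```python
# Illustrative depth-first state-space search using an explicit stack.

def dfs_model_check(initial_state, enabled_transitions, apply, check_invariants):
    visited = set()                            # states must be hashable
    stack = [initial_state]
    while stack:
        state = stack.pop()
        if state in visited:
            continue
        visited.add(state)
        check_invariants(state)                # e.g. flag deadlocks or bad states
        for t in enabled_transitions(state):
            stack.append(apply(state, t))      # push each successor state
    return visited                             # all reachable states were explored
```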

Symbolic Execution
o Tracks symbolic variables rather than actual values
o Symbolic variables are associated with the inputs
o Program variables are associated with expressions over the symbolic variables, and the expressions are updated as the code executes
o Execution forks at every branch
o Still not scalable

Symbolic Execution
Equivalence classes:
o All inputs in the same class follow the same code path (see the sketch below)
Remaining problem:
o Code-path coverage does not imply system-path coverage
Combining state modeling and symbolic execution works well:
o State modeling exhausts the system states
o Symbolic execution reduces the number of transitions
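A toy, self-contained illustration of input equivalence classes: inputs that drive a handler down the same sequence of branch outcomes form one class, and one representative per class is enough to cover that code path. A real symbolic engine would derive the classes with a constraint solver; here a small input domain is brute-forced instead, and the handler and values are made up:

```python
# Toy equivalence-class discovery: the branch trace identifies the code path.

def handler(x, trace):
    c1 = x < 10
    trace.append(c1)                           # record each branch outcome
    if c1:
        return "flood"
    c2 = x == 42
    trace.append(c2)
    if c2:
        return "install_rule"
    return "drop"

def equivalence_classes(domain=range(256)):
    classes = {}                               # branch trace -> representative input
    for value in domain:
        trace = []
        handler(value, trace)
        classes.setdefault(tuple(trace), value)
    return classes

print(equivalence_classes())
# {(True,): 0, (False, False): 10, (False, True): 42}
```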

Design

Modeling in OpenFlow
State: controller + switches + hosts
Transition input:
o Event (Packet_In, Flow_Removed, Port_Status)
o Message content (OFMessage), which we do not want to enumerate directly
Output:
o The actual code path taken by the event handler (executed atomically)
o Enumerating message contents is avoided by the extension of symbolic execution

OFSwitch and Host Simplification
A switch is modeled as channels + a flow table, with two simple transitions (sketched below):
o process_pkt and process_of
o process_pkt: processing multiple packets collapses into a single state transition
o Equivalent flow tables are treated as the same state
Hosts are categorized:
o Server: receive / send_reply
o Client: send / receive
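A minimal sketch of this simplified switch model, with the flow table plus input channels as state and only the two transitions named above; the rule and packet formats are invented for the example:

```python
# Illustrative simplified OpenFlow switch: state = flow table + channels,
# and only two transitions, process_pkt and process_of.
from collections import deque

class OFSwitch:
    def __init__(self):
        self.flow_table = {}        # match -> action; equivalent tables = same state
        self.pkt_in = deque()       # channel of data packets from hosts
        self.of_in = deque()        # channel of OpenFlow messages from the controller

    def process_of(self):
        """Apply one controller message (here: install a rule)."""
        match, action = self.of_in.popleft()
        self.flow_table[match] = action

    def process_pkt(self):
        """Process every queued packet in a single state transition."""
        outputs = []
        while self.pkt_in:
            pkt = self.pkt_in.popleft()
            # Unmatched packets would go to the controller as Packet_In events
            outputs.append(self.flow_table.get(pkt.get("dst"), "send_to_controller"))
        return outputs
```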

Symbolic Execution
o Find the equivalence classes of the input packet
o Pick one packet from each class as a relevant packet
o The symbolic packet is composed of symbolic variables
o The controller state is kept concrete
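A small sketch of how one relevant packet could be concretized from an equivalence class of constraints on the symbolic packet fields, while the controller state stays concrete; the constraint format and default values are purely illustrative:

```python
# Illustrative: pick one concrete "relevant packet" per equivalence class.
# A real engine would use a constraint solver; here each constraint simply
# pins a header field to a value.

DEFAULTS = {"eth_src": "00:00:00:00:00:01", "eth_dst": "00:00:00:00:00:02",
            "eth_type": 0x0800, "in_port": 1}

def relevant_packet(pinned_fields):
    """pinned_fields: field -> value required by this equivalence class."""
    pkt = dict(DEFAULTS)            # unconstrained fields keep default values
    pkt.update(pinned_fields)       # constrained fields are fixed by the class
    return pkt

# e.g. a class requiring an ARP packet arriving on port 2:
print(relevant_packet({"eth_type": 0x0806, "in_port": 2}))
```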

Combination

o Initialization: enable a discover_packets transition for every switch
o Running a discover_packets_transition symbolically executes the controller handler against the current concrete controller state and enables new send transitions for the relevant packets it finds
o A switch that is part of a discover_packets_transition already has its next transition assigned; the other switches keep listening for packets
o Whenever the state of the controller is changed, discover_packets is enabled again so that new relevant packets can be discovered (see the sketch below)
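A high-level sketch of the combined loop: the state-space search drives transitions, and whenever a transition changes the controller state, symbolic execution is re-run against the new concrete controller state to enable send transitions for the newly discovered relevant packets. The functions and state attributes below (`apply`, `enabled_transitions`, `symbolic_execute`, `controller_state`, `enable_send`) are assumed interfaces for the sake of the sketch:

```python
# Illustrative combination of state-space search and symbolic execution.

def combined_search(initial_state, enabled_transitions, apply, symbolic_execute):
    seen = set()
    stack = [initial_state]
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        for t in enabled_transitions(state):
            nxt = apply(state, t)
            if nxt.controller_state != state.controller_state:
                # Controller state changed: discover new relevant packets and
                # enable the corresponding send transitions for the hosts.
                for pkt in symbolic_execute(nxt.controller_state):
                    nxt = nxt.enable_send(pkt)
            stack.append(nxt)
    return seen
```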

OpenFlow-Specific Search Strategies
o PKT-SEQ: relevant packet sequences
o NO-DELAY: instantaneous rule updates
o UNUSUAL: uncommon delays and reorderings
o FLOW-IR: flow independence reduction
PKT-SEQ is complementary to the other strategies in that it only reduces the number of send transitions rather than the possible kinds of event orderings.

Specifying Application Correctness
Customizable correctness properties: testing correctness involves verifying that "something bad never happens" (safety) and that "something good eventually happens" (liveness). To check these properties, NICE allows correctness properties to (i) access the system state, (ii) register callbacks, and (iii) maintain local state.
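A sketch of what such a property could look like under this interface: it keeps local state, receives callbacks on transitions, and reads the system state to flag a violation. The class name and callback signatures are hypothetical, not NICE's actual library API:

```python
# Hypothetical correctness-property sketch (not NICE's real API).

class NoForgottenPacketsSketch:
    def __init__(self):
        self.pending = set()        # local state: packets buffered at switches

    def on_packet_in(self, switch_id, packet_id):
        # Callback: a switch sent a packet to the controller.
        self.pending.add((switch_id, packet_id))

    def on_packet_out(self, switch_id, packet_id):
        # Callback: the controller told the switch what to do with the packet.
        self.pending.discard((switch_id, packet_id))

    def check(self, system_state):
        # Safety check at the end of an execution: no packet was forgotten.
        assert not self.pending, f"forgotten packets: {self.pending}"
```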

Specifying Application Correctness NICE provides a library of correctness properties applicable to a wide variety of OpenFlow applications. The library includes NoForwardingLoops, NoBlackHoles, DirectPaths, StrictDirectPaths, and NoForgottenPackets.

Implementation
NICE consists of three parts:
o A model checker: NICE remembers the sequence of transitions that created a state and restores the state by replaying that sequence; state matching is done by storing and comparing hashes of the explored states (sketched below)
o A concolic execution engine: executes the Python code with concrete instead of symbolic inputs
o A collection of models, including the simplified switch and several end hosts
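A sketch of the replay-plus-hashing idea behind the model checker: a state is identified by the transition sequence that produced it, restored by replaying that sequence from the initial state, and matched against previously explored states by comparing hashes. `apply` is an assumed deterministic transition function, and the hashing scheme is illustrative:

```python
# Illustrative replay and hash-based state matching.
import hashlib
import pickle

def restore(initial_state, transition_sequence, apply):
    """Rebuild a state by replaying the transitions that created it."""
    state = initial_state
    for t in transition_sequence:
        state = apply(state, t)
    return state

def state_hash(state):
    """Store only a digest of each explored state, not the full state."""
    return hashlib.sha1(pickle.dumps(state)).hexdigest()

seen_hashes = set()

def is_new(state):
    digest = state_hash(state)
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True
```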

Performance
The experiments are run on the topology of Figure 1 of the paper. Host A sends a "layer-2 ping" packet to host B, which replies with a packet to A. The controller runs the MAC-learning switch program of Figure 3. The number of transitions, the number of unique states, and the execution time are reported as the number of concurrent pings (each a pair of packets) is increased.

Performance

Comparison to State-of-the-Art Model Checkers: SPIN and JPF
o SPIN performs a full search more efficiently, but state-space explosion occurs and SPIN runs out of memory with 7 pings
o SPIN's partial-order reduction only decreases the growth rate by 18%
o JPF is already slower than NICE, and even after extra effort is put into improving the Java model used in JPF, it is still substantially slower

NICE and Real Applications
NICE detected 3 bugs in a MAC-learning switch (PySwitch):
o Host unreachable after moving
o Delayed direct path
o Excess flooding

NICE and Real Applications
NICE found several bugs in a web server load balancer (which dynamically reconfigures data center traffic based on load):
o Next TCP packet always dropped after reconfiguration
o Some TCP packets dropped after reconfiguration
o ARP packets forgotten during address resolution
o Duplicate SYN packets during transitions

NICE and Real Applications
NICE also found bugs in an energy-efficient traffic engineering application (which selectively powers down links and redirects traffic during light loads):
o The first packet of a new flow is dropped
o The first few packets of a new flow can be dropped
o Only on-demand routes are used under high loads
o Packets can be dropped when the load reduces

Overhead of running NICE

Coverage vs. Overhead Trade-Offs
The current approach has some limitations:
o Concrete execution on the switch
o Concrete global controller variables
o Infinite execution trees in symbolic execution

Related Work
NICE uses ideas from model checking and symbolic execution, and is the first to apply those techniques to testing OpenFlow networks. Other work, such as Frenetic, OFRewind, and FlowChecker, can perform limited testing of OpenFlow networks.