Enabling Innovation Inside the Network

Presentation transcript:

Enabling Innovation Inside the Network
Jennifer Rexford, Princeton University
http://www.cs.princeton.edu/~jrex

During the past few years, the computer networking field has moved toward greater programmability inside the network. This is a tremendous opportunity to get network software right and put the field on a stronger foundation. In this talk, I want to tell you about this trend and discuss some of our work on raising the level of abstraction for programming the network.

The Internet: A Remarkable Story
- Tremendous success: from research experiment to global infrastructure
- Brilliance of under-specifying
  - Network: best-effort packet delivery
  - Hosts: arbitrary applications
- Enables innovation in applications: Web, P2P, VoIP, social networks, virtual worlds
- But change is easy only at the edge…

Inside the ‘Net: A Different Story…
- Closed equipment
  - Software bundled with hardware
  - Vendor-specific interfaces
- Over-specified: slow protocol standardization
- Few people can innovate
  - Equipment vendors write the code
  - Long delays to introduce new features
- Impacts performance, security, reliability, cost…

How Hard are Networks to Manage?
- Operating a network is expensive: more than half the cost of a network
- Yet, operator error causes most outages
- Buggy software in the equipment
  - Routers with 20+ million lines of code
  - Cascading failures, vulnerabilities, etc.
- The network is "in the way"
  - Especially a problem in data centers… and home networks

Creating a Foundation for Networking
- A domain, not a discipline
  - Alphabet soup of protocols
  - Header formats, bit twiddling
  - Preoccupation with artifacts
- From practice to principles
  - Intellectual foundation for networking
  - Identify the key abstractions… and support them efficiently
- To build networks worthy of society's trust

Rethinking the “Division of Labor”

Traditional Computer Networks
- Data plane: packet streaming
- Forward, filter, buffer, mark, rate-limit, and measure packets

Traditional Computer Networks
- Control plane: distributed algorithms
- Track topology changes, compute routes, install forwarding rules

Traditional Computer Networks
- Management plane: human time scale
- Collect measurements and configure the equipment

Shortest-Path Routing
- Management: set the link weights
- Control: compute shortest paths
- Data: forward packets to next hop
(Figure: example topology with link weights 1, 1, 1, 1, 3)
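The control-plane step above, computing shortest paths from the configured link weights, can be sketched with Dijkstra's algorithm. This is a minimal illustration (the graph representation and function names are my own, not from the talk); it also records the first hop of each path, which is what the data plane needs:

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm over configured link weights.
    graph: node -> {neighbor: weight}. Returns shortest-path distances
    and, for the data plane, the first hop toward each destination."""
    dist = {source: 0}
    next_hop = {}
    pq = [(0, source, source)]  # (distance, node, first hop from source)
    while pq:
        d, node, hop = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        next_hop[node] = hop
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr, nbr if node == source else hop))
    return dist, next_hop
```

Raising a link weight (as in traffic engineering on the next slides) simply changes `graph` and re-runs this computation, which is what makes "inverting" it so indirect.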

Inverting the Control Plane
- Traffic engineering: change the link weights… to induce the paths… that alleviate congestion
(Figure: the same topology with one link weight raised from 1 to 5)

Avoiding Transient Anomalies
- Distributed protocol
  - Temporary disagreement among the nodes… leaves packets stuck in loops
  - Even though the change was planned!
(Figure: the link weight change from 1 to 5 causes a transient forwarding loop)

Death to the Control Plane!
- Simpler management: no need to "invert" control-plane operations
- Faster pace of innovation: less dependence on vendors and standards
- Easier interoperability: compatibility only in "wire" protocols
- Simpler, cheaper equipment: minimal software

Software Defined Networking (SDN)
- Logically-centralized control: smart, slow
- API to the data plane (e.g., OpenFlow)
- Switches: dumb, fast

OpenFlow Networks http://www.openflow.org

Data-Plane: Simple Packet Handling
Simple packet-handling rules:
- Pattern: match packet header bits
- Actions: drop, forward, modify, send to controller
- Priority: disambiguate overlapping patterns
- Counters: #bytes and #packets

Example rules:
1. src=1.2.*.*, dest=3.4.5.*  →  drop
2. src=*.*.*.*, dest=3.4.*.*  →  forward(2)
3. src=10.1.2.3, dest=*.*.*.*  →  send to controller
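The match-action semantics above can be sketched in a few lines. This is an illustrative model, not real switch code: rules are (priority, patterns, action) triples, the highest-priority match wins, and a packet that matches no rule goes to the controller (the usual table-miss behavior). The helper names are my own.

```python
def field_matches(pattern, value):
    """Wildcard match on a dotted address: '3.4.*.*' matches '3.4.7.1'."""
    return all(p == "*" or p == v
               for p, v in zip(pattern.split("."), value.split(".")))

def lookup(rules, packet):
    """rules: list of (priority, {field: pattern}, action).
    Highest-priority matching rule wins; a table miss goes to the controller."""
    best = None
    for priority, patterns, action in rules:
        if all(field_matches(pat, packet[f]) for f, pat in patterns.items()):
            if best is None or priority > best[0]:
                best = (priority, action)
    return best[1] if best else "send to controller"

# The three example rules from the slide:
rules = [
    (2, {"src": "1.2.*.*", "dest": "3.4.5.*"}, "drop"),
    (1, {"dest": "3.4.*.*"}, "forward(2)"),
    (0, {"src": "10.1.2.3"}, "send to controller"),
]
```

Note how priority disambiguates the overlap: a packet to 3.4.5.1 from 1.2.x.x matches both of the first two rules, and the higher-priority drop wins.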

Controller: Programmability
- Applications (App #1, App #2, App #3) run on a Network OS
- Events from switches: topology changes, traffic statistics, arriving packets
- Commands to switches: (un)install rules, query statistics, send packets

Dynamic Access Control
- Inspect the first packet of each connection
- Consult the access control policy
- Install rules to block or route traffic
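The three steps above can be sketched as a reactive controller. This is an illustrative toy, not a real controller API: the switch sends the first packet of each connection to the controller, which consults the policy and installs a per-flow rule so later packets never leave the data plane.

```python
class AccessController:
    """Sketch of reactive access control in the two-tiered model."""

    def __init__(self, blocked_sources):
        self.blocked = set(blocked_sources)   # the access-control policy
        self.flow_table = {}                  # rules "installed" on the switch

    def packet_in(self, src, dst):
        """Called on the first packet of a connection (a table miss)."""
        action = "drop" if src in self.blocked else "forward"
        self.flow_table[(src, dst)] = action  # install rule for the flow
        return action
```

The same packet-in pattern underlies the mobility/migration example that follows: the controller reacts to an event and rewrites rules.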

Seamless Mobility/Migration
- See the host sending traffic at its new location
- Modify rules to reroute the traffic

E.g.: Server Load Balancing
- Pre-install the load-balancing policy
- Split traffic based on source IP (e.g., src=0* to one replica, src=1* to another)
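Generating the proactive split is simple when the replica count is a power of two: carve the source-IP space by the leading bits, one wildcard rule per replica. A sketch (function name and the power-of-two simplification are mine):

```python
def load_balancing_rules(replicas):
    """Split the source-IP space across replicas with wildcard rules on
    the leading bits, e.g. src=0* and src=1* for two replicas.
    Assumes len(replicas) is a power of two (a simplification)."""
    bits = (len(replicas) - 1).bit_length()
    return [(format(i, "b").zfill(bits) + "*", replica)
            for i, replica in enumerate(replicas)]
```

Because the rules are installed in advance, no packet ever waits on the controller; uneven traffic across prefixes is the price of this simple scheme.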

Example Applications
- Dynamic access control
- Seamless mobility/migration
- Server load balancing
- Using multiple wireless access points
- Energy-efficient networking
- Adaptive traffic monitoring
- Denial-of-Service attack detection
- Network virtualization
See http://www.openflow.org/videos/

OpenFlow in the Wild
- Open Networking Foundation
  - Creating Software Defined Networking standards
  - Google, Facebook, Microsoft, Yahoo, Verizon, Deutsche Telekom, and many other companies
- Commercial OpenFlow switches: HP, NEC, Quanta, Dell, IBM, Juniper, …
- Network operating systems: NOX, Beacon, Floodlight, Nettle, ONIX, POX, Frenetic
- Network deployments
  - Eight campuses, and two research backbone networks
  - Commercial deployments (e.g., Google backbone)

Algorithmic Challenges in Software Defined Networking

Two-Tiered Computational Model
- Smart, slow controller; dumb, fast switches
- What problems can we solve efficiently in this two-tiered model?

Example: Hierarchical Heavy Hitters
- Measure large traffic aggregates
  - Understand network usage
  - Detect unusual activity
- Fixed aggregation level?
  - Top 10 source IP addresses?
  - Top 10 groups of 256 IP addresses?
- Better: identify the top IP prefixes, of any size, contributing a fraction T of the traffic

Heavy Hitters (HH)
- All IP prefixes contributing ≥ T of link capacity
- Example: a prefix is an HH if it sends more than T = 10% of a link capacity of 100
(Figure: a binary trie over 4-bit addresses with leaf counts 11, 1, 5, 2, 9, 3, 5, 4; internal counts 000* = 12, 001* = 7, 010* = 12, 011* = 9, 00** = 19, 01** = 21, 0*** = 40, **** = 40. Every prefix with count ≥ 10 is an HH.)
- But we want to aggregate at a level only when it is useful…

Hierarchical Heavy Hitters (HHH)
- All IP prefixes contributing ≥ T of capacity… excluding descendants that are themselves HHHs
(Figure: the same trie; the HHHs are 0000 with count 11, 010* with count 12, and 0*** with residual count 17.)
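With full counts in hand, the HHH definition can be computed bottom-up over the trie: a prefix is an HHH if its count, after subtracting traffic already attributed to descendant HHHs, still meets the threshold. A sketch over 4-bit toy addresses (bitstrings stand in for IP prefixes; the function name and representation are my own), using the counts from the example trie:

```python
def hierarchical_heavy_hitters(leaf_counts, threshold):
    """leaf_counts: {bitstring address: count}. Returns {prefix: residual
    count} for every prefix whose count, excluding traffic attributed to
    descendant HHHs, is >= threshold. Short prefixes are padded with '*'."""
    width = len(next(iter(leaf_counts)))
    hhh = {}
    level = dict(leaf_counts)       # residual counts at the current depth
    for _ in range(width + 1):      # walk from the leaves up to the root
        parents = {}
        for prefix, count in level.items():
            if count >= threshold:
                hhh[prefix.ljust(width, "*")] = count  # an HHH absorbs its traffic
                count = 0                              # nothing passes upward
            if prefix:
                parents[prefix[:-1]] = parents.get(prefix[:-1], 0) + count
        level = parents
    return hhh

# Leaf counts from the example trie (the 1*** subtree carries no traffic):
counts = {"0000": 11, "0001": 1, "0010": 5, "0011": 2,
          "0100": 9, "0101": 3, "0110": 5, "0111": 4}
```

With threshold 10 this yields exactly the three HHHs in the figure: 0000 (11), 010* (12), and 0*** (17, i.e. 40 minus the 11 + 12 already attributed below it).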

Computational Model
- Previous work: custom hardware
  - Streaming algorithms using sketches
  - Fast and accurate, but not commodity hardware
- Our model: simplified OpenFlow
  - Slow, smart controller: reads traffic counters, adapts the rules installed in the switch
  - Fast, dumb switch: ternary match on packet headers, traffic counters
- What algorithm should the controller run?

Monitoring HHHs
If we already know all the HHHs, we simply install one rule per HHH and read off the counts:

Priority  Prefix  Count
1         0000    11
2         010*    12
3         0***    17

But sometimes there may be a new HHH…

Detecting New HHHs
- Monitor the children of each HHH, using at most 2/T rules
- When a child's count exceeds the threshold, we know there is a new HHH
  - Not necessarily the child itself, but some prefix counted by the child
  - How do we find which one?
(Figure: the same trie; one child's count has crossed the threshold.)

Iteratively Adjust Wildcard Rules
- Expand: if a rule's count > T, install rules for its children
- Collapse: if a rule's count < T, remove the rule

Priority  Prefix  Count
1         0***    80
2         ****    80

(Figure: 0*** carries 80; its children 00** and 01** carry 77 and 3.)
Expanding two children at a time still uses at most 2/T rules for every HHH we detect.

Iteratively Adjust Wildcard Rules
- Expand: if a rule's count > T, install rules for its children
- Collapse: if a rule's count < T, remove the rule

Priority  Prefix  Count
1         00**    77
2         01**    3
3         ****    80

(Figure: 00** carries 77, so it is expanded next; 01** carries only 3 and is collapsed.)

Iteratively Adjust Wildcard Rules
- Expand: if a rule's count > T, install rules for its children
- Collapse: if a rule's count < T, remove the rule

Priority  Prefix  Count
1         000*    70
2         001*    7
3         ****    80

(Figure: 000* carries 70 and 001* carries 7; the collapsed 01** subtree is folded back into ****.)
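The expand/collapse walkthrough above amounts to a simple per-interval adjustment of the installed prefix rules. A sketch over bitstring prefixes, with '' standing for the root ****; for brevity it ignores the maximum prefix width and uses ≥ for the expand test:

```python
def adjust_rules(rules, counts, threshold):
    """One measurement interval of the controller loop.
    rules: set of installed bitstring prefixes ('' is the root ****).
    counts: {prefix: traffic counted against that rule this interval}."""
    new_rules = set()
    for prefix in rules:
        if counts.get(prefix, 0) >= threshold:
            # expand: an HHH is hiding below, so monitor both children
            new_rules.add(prefix + "0")
            new_rules.add(prefix + "1")
        else:
            # collapse: fold the rule back into its parent
            new_rules.add(prefix[:-1])
    return new_rules
```

Each interval the controller reads the counters, calls something like this, and reinstalls the rule set, so the monitored prefixes converge toward the HHH frontier over a few intervals.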

Using Leftover Rules
- Monitoring the children of all HHHs
  - Threshold fraction T of traffic (e.g., T = 0.1)
  - At most 2/T rules (e.g., 20 rules)
- Often need much less
  - Some HHHs are very large
  - So, may have fewer than 1/T HHHs
- Using the extra rules
  - Monitor nodes close to the threshold
  - Most likely to become new HHHs soon!

Experimental Results
- CAIDA packet traces
  - Trace of 400,000 IP packets/second
  - Measuring HHHs for T = 0.05 and T = 0.10
  - Time intervals of 1 to 60 seconds
- Accuracy
  - Detect and measure ~9 out of 10 HHHs
  - Large traffic aggregates are stable
- Speed
  - Takes a few intervals to find new HHHs
  - Meanwhile, report coarse-grained results

Stepping Back
- Other monitoring problems
  - Multi-dimensional HHH
  - Detecting large changes
  - Denial-of-Service attack detection
- Changing the computational model
  - Efficient, generic support for traffic monitoring
  - Good for many problems, rather than perfect for one
- Other kinds of problems
  - Flexible server load balancing
  - Latency-equalized routing
  - …

Making SDN Work Better
- Distributed controllers
  - Improve scalability, reliability, performance
  - Need: efficient distributed algorithms
- Distributed policies
  - Spread rules over multiple switches
  - Need: policy transformation algorithms
- Verifying network properties
  - Test whether the rules satisfy invariants
  - Need: efficient verification algorithms
- Many more interesting problems!

Conclusion
- SDN is exciting
  - Enables innovation
  - Simplifies management
  - Rethinks networking
- SDN is happening
  - Practice: useful APIs and good industry traction
  - Principles: the start of higher-level abstractions
- Great research opportunity
  - Practical impact on future networks
  - Placing networking on a strong foundation