Enabling Innovation Inside the Network


1 Enabling Innovation Inside the Network
Jennifer Rexford, Princeton University. During the past few years, the computer networking field has moved toward greater programmability inside the network. This is a tremendous opportunity to get network software right and put the field on a stronger foundation. In this talk, I want to tell you about this trend and discuss some of our work on raising the level of abstraction for programming the network.

2 The Internet: A Remarkable Story
Tremendous success: from research experiment to global infrastructure. The brilliance of under-specifying: the network provides best-effort packet delivery, while hosts run arbitrary applications. This enables innovation in applications: Web, P2P, VoIP, social networks, virtual worlds. But change is easy only at the edge…

3 Inside the ‘Net: A Different Story…
Closed equipment: software bundled with hardware, with vendor-specific interfaces. Over-specified: slow protocol standardization. Few people can innovate: equipment vendors write the code, so there are long delays to introduce new features. This impacts performance, security, reliability, cost…

4 How Hard are Networks to Manage?
Operating a network is expensive: more than half the cost of a network. Yet operator error causes most outages. Buggy software in the equipment: routers run 20+ million lines of code, leading to cascading failures, vulnerabilities, etc. The network is “in the way”: especially a problem in data centers… and in home networks.

5 Creating Foundation for Networking
A domain, not a discipline: an alphabet soup of protocols, header formats and bit twiddling, a preoccupation with artifacts. From practice to principles: an intellectual foundation for networking that identifies the key abstractions… and supports them efficiently, to build networks worthy of society's trust.

6 Rethinking the “Division of Labor”

7 Traditional Computer Networks
Data plane: packet streaming. Forward, filter, buffer, mark, rate-limit, and measure packets.

8 Traditional Computer Networks
Control plane: distributed algorithms. Track topology changes, compute routes, install forwarding rules.

9 Traditional Computer Networks
Management plane: human time scale. Collect measurements and configure the equipment.

10 Shortest-Path Routing
Management: set the link weights. Control: compute shortest paths. Data: forward packets to the next hop. (Figure: example topology with unit-weight links and one weight-3 link.)
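The control-plane step in this division of labor, computing shortest paths from management-set link weights, is classically Dijkstra's algorithm. A minimal sketch (the four-node topology and weights are illustrative assumptions, not the slide's exact figure):

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra: distance and next-hop from `source` to every node."""
    dist = {source: 0}
    next_hop = {}
    pq = [(0, source, None)]          # (distance, node, first hop on path)
    while pq:
        d, node, hop = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue                  # stale queue entry
        if hop is not None:
            next_hop[node] = hop
        for neigh, w in graph[node].items():
            nd = d + w
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(pq, (nd, neigh, neigh if node == source else hop))
    return dist, next_hop

# Illustrative topology: unit-weight links plus one weight-3 link.
graph = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1},
    "C": {"A": 1, "D": 3},
    "D": {"B": 1, "C": 3},
}
dist, nh = shortest_paths(graph, "A")
```

The data plane then needs only the `next_hop` table; the distances stay in the control plane.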

11 Shortest-Path Routing
Management: set the link weights. Control: compute shortest paths. Data: forward packets to the next hop. (Figure: the same topology, with the resulting forwarding paths highlighted.)

12 Inverting the Control Plane
Traffic engineering: change the link weights… to induce the paths… that alleviate congestion. (Figure: the same topology, with one link weight raised from 1 to 5.)

13 Avoiding Transient Anomalies
Distributed protocol: temporary disagreement among the nodes… leaves packets stuck in loops, even though the change was planned! (Figure: the weight change from 1 to 5 in progress.)

14 Death to the Control Plane!
Simpler management: no need to “invert” control-plane operations. Faster pace of innovation: less dependence on vendors and standards. Easier interoperability: compatibility only in “wire” protocols. Simpler, cheaper equipment: minimal software.

15 Software Defined Networking (SDN)
Logically-centralized control: a smart, slow controller talks through an API to the data plane (e.g., OpenFlow), built from dumb, fast switches.

16 OpenFlow Networks

17 Data-Plane: Simple Packet Handling
Simple packet-handling rules. Pattern: match packet header bits. Actions: drop, forward, modify, send to controller. Priority: disambiguate overlapping patterns. Counters: #bytes and #packets. For example:
1. src=1.2.*.*, dest=3.4.5.* → drop
2. src=*.*.*.*, dest=3.4.*.* → forward(2)
3. src=…, dest=*.*.*.* → send to controller
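These match/priority/counter semantics can be sketched in a few lines. This uses dotted-quad wildcards as in the slide's examples; real OpenFlow hardware uses binary ternary matches, so treat this as a model, not the protocol:

```python
def matches(pattern, value):
    """Wildcard match on dotted quads: '*' octets match anything."""
    return all(p == "*" or p == v
               for p, v in zip(pattern.split("."), value.split(".")))

def classify(rules, pkt):
    """Apply the highest-priority matching rule and bump its counters."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if matches(rule["src"], pkt["src"]) and matches(rule["dst"], pkt["dst"]):
            rule["packets"] += 1          # per-rule counters
            rule["bytes"] += pkt["len"]
            return rule["action"]
    return "send to controller"           # table miss

rules = [
    {"priority": 1, "src": "1.2.*.*", "dst": "3.4.5.*", "action": "drop",
     "packets": 0, "bytes": 0},
    {"priority": 2, "src": "*.*.*.*", "dst": "3.4.*.*", "action": "forward(2)",
     "packets": 0, "bytes": 0},
]
```

Here a lower priority number wins, mirroring the slide's rule ordering; the counters are what the controller later reads back.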

18 Controller: Programmability
App #1, App #2, and App #3 run on top of a Network OS. Events from switches: topology changes, traffic statistics, arriving packets. Commands to switches: (un)install rules, query statistics, send packets.
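The event/command split above can be sketched as a toy dispatcher. Every name here (the class, its methods, the event strings) is a hypothetical illustration, not the API of any real network OS such as NOX or POX:

```python
class NetworkOS:
    """Toy controller platform: apps subscribe to events, issue commands."""
    def __init__(self):
        self.handlers = {}            # event name -> list of app callbacks
        self.installed = []           # rules "sent" down to switches

    def on(self, event, handler):     # apps register interest in events
        self.handlers.setdefault(event, []).append(handler)

    def raise_event(self, event, data):   # events arriving from switches
        for handler in self.handlers.get(event, []):
            handler(data)

    def install_rule(self, rule):     # command issued to switches
        self.installed.append(rule)

nos = NetworkOS()
# An "app": on each packet_in event, install a forwarding rule for the source.
nos.on("packet_in", lambda pkt: nos.install_rule(
    {"match": pkt["src"], "action": "forward"}))
nos.raise_event("packet_in", {"src": "10.0.0.1"})
```

The point of the pattern is the indirection: apps never talk to switches directly, so the platform can multiplex many apps over one rule table.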

19 Dynamic Access Control
Inspect the first packet of each connection, consult the access control policy, and install rules to block or route the traffic.
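A hypothetical controller callback for this pattern; the policy table, addresses, and rule format are assumptions for illustration:

```python
# Hypothetical policy table and rule format, for illustration only.
POLICY = {"10.0.0.1": "allow", "10.0.0.2": "block"}

installed = []   # rules pushed down to the switch

def packet_in(pkt):
    """Handle the first packet of a connection (no matching rule yet)."""
    action = "forward" if POLICY.get(pkt["src"]) == "allow" else "drop"
    # Install a connection-specific rule so later packets of this
    # connection stay in the switch's fast path, never reaching us.
    installed.append({"match": {"src": pkt["src"], "dst": pkt["dst"]},
                      "action": action})
    return action
```

Only the first packet pays the controller round-trip; everything afterward is handled at line rate by the installed rule.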

20 Seamless Mobility/Migration
See a host sending traffic at a new location, and modify rules to reroute the traffic.

21 E.g.: Server Load Balancing
Pre-install a load-balancing policy and split traffic based on source IP address: src=0* goes to one replica, src=1* to the other.
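The src=0*/src=1* split keys on the most-significant bit of the source address, so it can be pre-installed as two wildcard rules with no per-connection controller work. A small sketch (the replica names are assumptions):

```python
def first_bit(ip):
    """Most-significant bit of the first octet of a dotted-quad address."""
    return (int(ip.split(".")[0]) >> 7) & 1

def pick_replica(src_ip):
    # Pre-installed policy: src=0* -> replica A, src=1* -> replica B.
    return "replica-A" if first_bit(src_ip) == 0 else "replica-B"
```

Longer prefixes (src=00*, src=01*, …) give finer-grained, weighted splits with the same trick.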

22 Example Applications
Dynamic access control, seamless mobility/migration, server load balancing, using multiple wireless access points, energy-efficient networking, adaptive traffic monitoring, denial-of-service attack detection, network virtualization. See http://www.openflow.org/videos/

23 OpenFlow in the Wild
Open Networking Foundation: creating Software Defined Networking standards, with Google, Facebook, Microsoft, Yahoo, Verizon, Deutsche Telekom, and many other companies. Commercial OpenFlow switches: HP, NEC, Quanta, Dell, IBM, Juniper, … Network operating systems: NOX, Beacon, Floodlight, Nettle, ONIX, POX, Frenetic. Network deployments: eight campuses, two research backbone networks, and commercial deployments (e.g., the Google backbone).

24 Algorithmic Challenges in Software Defined Networking

25 Two-Tiered Computational Model
What problems can we solve efficiently in this two-tiered model of a smart, slow controller and dumb, fast switches?

26 Example: Hierarchical Heavy Hitters
Measure large traffic aggregates to understand network usage and detect unusual activity. A fixed aggregation level is too rigid: the top 10 source IP addresses? The top 10 groups of 256 IP addresses? Better: identify the top IP prefixes, of any size, contributing a fraction T of the traffic.

27 Heavy Hitters (HH)
All IP prefixes contributing >= T of link capacity; an HH sends more than T = 10% of the link capacity. But we want to aggregate at a level only when useful… (Figure: a binary trie over 4-bit source prefixes with traffic counts. Leaves 0000…0111 carry 11, 1, 5, 2, 9, 3, 5, 4, so 000* = 12, 001* = 7, 010* = 12, 011* = 9; 00** = 19, 01** = 21; 0*** = 40; **** = 100.)

28 Hierarchical Heavy Hitters (HHH)
All IP prefixes contributing >= T of capacity… excluding descendants. (Figure: the same trie; once descendant HHHs are excluded, the HHHs in the left subtree are 0000, 010*, and 0***.)
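Under this definition, HHHs can be computed bottom-up over the trie: a prefix is an HHH if its traffic, minus the traffic already claimed by descendant HHHs, meets the threshold. A sketch using the figure's leaf counts under 0*** and a threshold of 10 (10% of a capacity of 100):

```python
def hhh(prefix, leaf_counts, threshold):
    """Return (HHH set, traffic under `prefix` not claimed by a descendant HHH)."""
    if "*" not in prefix:                      # leaf address
        c = leaf_counts.get(prefix, 0)
        return ({prefix}, 0) if c >= threshold else (set(), c)
    i = prefix.index("*")
    found, residual = set(), 0
    for b in "01":                             # recurse into both children
        f, r = hhh(prefix[:i] + b + prefix[i + 1:], leaf_counts, threshold)
        found |= f
        residual += r
    if residual >= threshold:                  # this prefix is itself an HHH
        return found | {prefix}, 0
    return found, residual

# Leaf counts from the figure (4-bit addresses under 0***).
leaves = {"0000": 11, "0001": 1, "0010": 5, "0011": 2,
          "0100": 9, "0101": 3, "0110": 5, "0111": 4}
found, _ = hhh("0***", leaves, 10)
```

Note how 000* (count 12) is not an HHH: its heavy leaf 0000 claims 11 of those 12, leaving a residual of only 1; but 010* (count 12, no heavy leaf) is.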

29 Computational Model
Previous work: custom hardware running streaming algorithms with sketches; fast and accurate, but not commodity hardware. Our model: simplified OpenFlow. A slow, smart controller reads traffic counters and adapts the rules installed in the switch; a fast, dumb switch performs ternary matching on packet headers and maintains traffic counters. What algorithm should the controller run?

30 Monitoring HHHs
Suppose we know all the HHHs; then we simply read off the counts. But sometimes there may be a new HHH…
Priority 1: prefix 0000, count 11. Priority 2: prefix 010*, count 12. Priority 3: prefix 0***, count 17.

31 Detecting New HHHs
Monitor the children of each HHH, using at most 2/T rules. When a child's counter exceeds the threshold, we know there is a new HHH: not necessarily the child itself, but some prefix counted by the child. How do we know which one? (Figure: the same trie, with one leaf's count increasing until a monitored child crosses the threshold.)

32 Iteratively Adjust Wildcard Rules
Expand: if the count > T, install rules for the children. Collapse: if the count < T, remove the rule. We expand only when we detect an HHH, and expanding two children at a time still uses at most 2/T rules for every HHH we detect.
Priority 1: prefix 0***, count 80. Priority 2: prefix ****. (Figure: trie with 0*** = 80, 00** = 77, 01** = 3, 000* = 70, 001* = 7.)

33 Iteratively Adjust Wildcard Rules
Expand: if the count > T, install rules for the children. Collapse: if the count < T, remove the rule.
Priority 1: prefix 00**, count 77. Priority 2: prefix 01**, count 3. Priority 3: prefix ****, count 80. (The controller has expanded 0*** into its children 00** and 01**.)

34 Iteratively Adjust Wildcard Rules
Expand: if the count > T, install rules for the children. Collapse: if the count < T, remove the rule.
Priority 1: prefix 000*, count 70. Priority 2: prefix 001*, count 7. Priority 3: prefix ****, count 80. (The rule for 01** fell below the threshold and was collapsed, while 00** was expanded into 000* and 001*.)
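The expand/collapse loop in these slides can be sketched as repeated adjustment of the set of monitored prefixes. The traffic counts below are assumed example values consistent with the figures, and the threshold test (>= rather than strict >) and drilling all the way to the heavy leaf are simplifications of the real algorithm:

```python
def matches(prefix, leaf):
    """A rule prefix matches a leaf address when its non-* bits agree."""
    return all(p in ("*", c) for p, c in zip(prefix, leaf))

def count(prefix, traffic):
    """Counter value the switch would report for this prefix rule."""
    return sum(v for leaf, v in traffic.items() if matches(prefix, leaf))

def children(prefix):
    i = prefix.index("*")
    return {prefix[:i] + b + prefix[i + 1:] for b in "01"}

def adjust(frontier, traffic, threshold):
    """One controller round: expand heavy wildcard rules, collapse light ones."""
    out = set()
    for rule in frontier:
        c = count(rule, traffic)
        if c >= threshold and "*" in rule:
            out |= children(rule)        # expand: monitor both children
        elif c >= threshold:
            out.add(rule)                # fully specified heavy hitter: keep
        # else: collapse (drop the rule; its traffic falls to the catch-all)
    return out

# Assumed example counts, consistent with the slides' figures.
traffic = {"0000": 70, "0010": 5, "0011": 2, "0101": 3}
frontier = {"0***", "1***"}
for _ in range(4):
    frontier = adjust(frontier, traffic, 10)
```

Each round refines the rules by one trie level, which is why the slides note that new HHHs take a few measurement intervals to pin down.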

35 Using Leftover Rules
Monitoring the children of all HHHs with threshold fraction T of the traffic (e.g., T = 0.1) takes at most 2/T rules (e.g., 20 rules), but often far fewer are needed: some HHHs are very large, so there may be fewer than 1/T HHHs. Using the extra rules: monitor the nodes close to the threshold, which are the most likely to become new HHHs soon!

36 Experimental Results
CAIDA packet traces: a trace of 400,000 IP packets/second, measuring HHHs for T = 0.05 and T = 0.10 over time intervals of 1 to 60 seconds. Accuracy: detect and measure about 9 out of 10 HHHs; large traffic aggregates are stable. Speed: it takes a few intervals to find new HHHs; meanwhile, report coarse-grained results.

37 Stepping Back
Other monitoring problems: multi-dimensional HHH, detecting large changes, denial-of-service attack detection. Changing the computational model: efficient, generic support for traffic monitoring, good for many problems rather than perfect for one. Other kinds of problems: flexible server load balancing, latency-equalized routing.

38 Making SDN Work Better
Distributed controllers: improve scalability, reliability, and performance; need efficient distributed algorithms. Distributed policies: spread rules over multiple switches; need policy transformation algorithms. Verifying network properties: test whether the rules satisfy invariants; need efficient verification algorithms. Many more interesting problems!

39 Conclusion
SDN is exciting: it enables innovation, simplifies management, and rethinks networking. SDN is happening: in practice, useful APIs and good industry traction; in principles, the start of higher-level abstractions. It is a great research opportunity: practical impact on future networks, and placing networking on a strong foundation.

