Presentation on theme: "Projects Related to Coronet Jennifer Rexford Princeton University"— Presentation transcript:

1 Projects Related to Coronet
Jennifer Rexford, Princeton University
http://www.cs.princeton.edu/~jrex

2 Outline
SEATTLE
  – Scalable Ethernet architecture
Router grafting (joint work with Kobus)
  – Seamless re-homing of links to BGP neighbors
  – Applications of grafting for traffic engineering
Static multipath routing (Martin's AT&T project)
  – Joint traffic engineering and fault tolerance

3 SEATTLE
Scalable Ethernet Architecture for Large Enterprises
(joint work with Changhoon Kim and Matt Caesar)
http://www.cs.princeton.edu/~jrex/papers/seattle08.pdf

4 Goal: Network as One Big LAN
Shortest-path routing on flat addresses
  – Shortest paths: scalability and performance
  – MAC addresses: self-configuration and mobility
Scalability without hierarchical addressing
  – Limit dissemination and storage of host info
  – Sending packets on slightly longer paths
[Figure: a network of switches (S) with attached hosts (H).]

5 SEATTLE Design Decisions

Objective                   | Approach                                        | Solution
1. Avoiding flooding        | Never broadcast unicast traffic                 | Network-layer one-hop DHT
2. Restraining broadcasting | Bootstrap hosts via unicast                     | Network-layer one-hop DHT
3. Reducing routing state   | Populate host info only when and where needed   | Traffic-driven resolution with caching
4. Shortest-path forwarding | Allow switches to learn topology                | L2 link-state routing maintaining only switch-level topology

* Meanwhile, avoid modifying end hosts

6 Network-Layer One-hop DHT
Maintains <key, value> pairs with a hash function F
  – Consistent hash mapping a key to a switch (sketched below)
  – F is defined over the set of live switches
One-hop DHT
  – Link-state routing ensures switches know each other
Benefits
  – Fast and efficient reaction to changes
  – Reliability and capacity naturally grow with the size of the network
[Figure: the one-hop DHT ring over the switch identifier space.]
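
As an illustration of the consistent-hash function F just described, here is a minimal Python sketch (not from the slides; the class name OneHopDHT and the switch labels are assumptions): every switch hashes onto an identifier ring, and a key is resolved by the first live switch clockwise from its hash.

    import bisect
    import hashlib

    def _hash(value: str) -> int:
        # Map any string onto a fixed-size identifier ring.
        return int(hashlib.sha1(value.encode()).hexdigest(), 16)

    class OneHopDHT:
        """Toy consistent-hash ring: F(key) -> live switch."""

        def __init__(self, switches):
            self.update_live_switches(switches)

        def update_live_switches(self, switches):
            # Link-state routing tells every switch the full set of live
            # switches, so each switch can evaluate F locally ("one hop").
            self._ring = sorted((_hash(s), s) for s in switches)

        def resolver_for(self, key: str) -> str:
            # First switch clockwise from the key's position on the ring.
            points = [p for p, _ in self._ring]
            i = bisect.bisect(points, _hash(key)) % len(self._ring)
            return self._ring[i][1]

    dht = OneHopDHT(["A", "B", "C", "D", "E"])
    print(dht.resolver_for("mac:00:11:22:33:44:55"))  # resolver switch for this key

Because every switch knows the full live switch set from link-state routing, the resolver can be reached directly, without multi-hop DHT routing.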

7 Location Resolution
[Figure: location-resolution walkthrough with switches A–E and hosts x and y. The owner switch A publishes <MAC_x, A> at the resolver B = F(MAC_x). When host y (attached to switch D) sends traffic to x, D hashes F(MAC_x) = B and tunnels the frames to B, which tunnels them on to A and notifies the user's switch of x's location; subsequent traffic is forwarded directly from D to A. Legend: switches, end hosts, control messages, data traffic.]
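
Continuing the toy OneHopDHT above, the following hedged sketch shows the publish/resolve/notify pattern from the walkthrough. In the real design the store and cache are distributed across switches; here they sit in one object for brevity, and all names are illustrative.

    class LocationService:
        """Toy SEATTLE-style location resolution on top of the one-hop DHT."""

        def __init__(self, dht):
            self.dht = dht
            self.store = {}   # resolver-side entries: (resolver, MAC) -> owner switch
            self.cache = {}   # (ingress switch, MAC) -> owner switch, learned via Notify

        def publish(self, mac, owner_switch):
            # Owner switch publishes <MAC, location> at the resolver F(MAC).
            resolver = self.dht.resolver_for(mac)
            self.store[(resolver, mac)] = owner_switch

        def forward(self, ingress_switch, mac):
            # Cached location: forward directly along the shortest path.
            if (ingress_switch, mac) in self.cache:
                return ("direct", self.cache[(ingress_switch, mac)])
            # Otherwise tunnel via the resolver, which relays the frame to the
            # owner switch and notifies the ingress switch of x's location.
            resolver = self.dht.resolver_for(mac)
            owner = self.store[(resolver, mac)]
            self.cache[(ingress_switch, mac)] = owner   # the "Notify" step
            return ("via-resolver", resolver, owner)

    svc = LocationService(dht)
    svc.publish("mac:x", owner_switch="A")
    print(svc.forward("D", "mac:x"))   # first frame is relayed via the resolver
    print(svc.forward("D", "mac:x"))   # later frames go directly to switch A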

8 Address Resolution
Traffic following ARP takes a shortest path, without separate location resolution.
[Figure: ARP walkthrough. Host y broadcasts an ARP request for IP_x; y's switch converts it into a unicast look-up to the resolver B = F(IP_x), which stores x's information and returns a unicast reply.]
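
For completeness, a similarly hedged sketch of address resolution, again reusing the toy DHT from above: the key is the IP address, and (under the assumption suggested by this slide) the reply carries both the MAC address and the location, so data traffic after ARP can follow a shortest path immediately.

    class ArpService:
        """Toy address resolution: IP -> (MAC, location) via the one-hop DHT."""

        def __init__(self, dht):
            self.dht = dht
            self.entries = {}  # resolver-side: (resolver, IP) -> (MAC, owner switch)

        def publish(self, ip, mac, owner_switch):
            resolver = self.dht.resolver_for(ip)
            self.entries[(resolver, ip)] = (mac, owner_switch)

        def resolve(self, ip):
            # The ingress switch turns the broadcast ARP request into a unicast
            # look-up at F(IP); the reply carries both MAC and location.
            resolver = self.dht.resolver_for(ip)
            return self.entries[(resolver, ip)]

    arp = ArpService(dht)
    arp.publish("10.0.0.7", "mac:x", owner_switch="A")
    print(arp.resolve("10.0.0.7"))  # ('mac:x', 'A')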

9 Handling Network and Host Dynamics
Network events
  – Switch failure/recovery → change in <key, value> for DHT neighbor (fortunately, switch failures are not common)
  – Link failure/recovery → link-state routing finds new shortest paths
Host events
  – Changes in host location, MAC address, or IP address
  – Must update stale host-information entries
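
A short, hypothetical continuation of the DHT sketch illustrates the switch-failure case: with consistent hashing, only the keys whose resolver was the failed switch move to a different live switch, so only those entries need to be re-published.

    # Keys and switch names are illustrative, reusing the OneHopDHT above.
    keys = ["mac:x", "mac:y", "10.0.0.7"]
    before = {k: dht.resolver_for(k) for k in keys}

    dht.update_live_switches(["A", "C", "D", "E"])   # switch B fails
    after = {k: dht.resolver_for(k) for k in keys}

    moved = {k for k in keys if before[k] != after[k]}
    print(moved)  # only keys previously resolved by B need re-publishing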

10 Handling Host Information Changes
Dealing with host mobility
MAC- or IP-address changes can be handled similarly
[Figure: host x moves from its old location to a new one; the resolver and the switch of a host talking with x must learn the new location.]

11 Packet-Level Simulations
Large-scale packet-level simulation
  – Event-driven simulation of the control plane
  – Synthetic traffic based on LBNL traces
  – Campus, data center, and ISP topologies
Main results
  – Much less routing state than Ethernet
  – Only slightly more stretch than IP routing
  – Low overhead for handling host mobility

12 Prototype Implementation
[Figure: prototype architecture. A SeattleSwitch built from user/kernel Click with a Ring Manager and a Host Info Manager; an XORP OSPF daemon exchanges link-state advertisements and installs the network map and routing table through a Click interface; host-info registration and notification messages and data frames pass through the switch.]
Throughput: 800 Mbps for 512 B packets, or 1400 Mbps for 896 B packets

13 Conclusions on SEATTLE
SEATTLE
  – Self-configuring, scalable, efficient
Enabling design decisions
  – One-hop DHT with link-state routing
  – Reactive location resolution and caching
  – Shortest-path forwarding
Relevance to Coronet
  – Backbone as one big virtual LAN
  – Using Ethernet addressing

14 Router Grafting
Joint work with Eric Keller, Kobus van der Merwe, and Michael Schapira
http://www.cs.princeton.edu/~jrex/papers/nsdi10.pdf
http://www.cs.princeton.edu/~jrex/papers/temigration.pdf

15 Today: Change is Disruptive
Planned change
  – Maintenance on a link, card, or router
  – Re-homing customer to enable new features
  – Traffic engineering by changing the traffic matrix
Several minutes of disruption
  – Remove link and reconfigure old router
  – Connect link to the new router
  – Establish BGP session and exchange routes
[Figure: a customer link being moved between provider routers.]

16 Router Grafting: Seamless Migration
IP: signal new path in underlying transport network
TCP: transfer TCP state, and keep IP address
BGP: copy BGP state, repeat decision process
[Figure: the session state is sent from the old router to the new one, and the link is moved.]
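
The three layers map naturally onto an orchestration sequence. The sketch below is a hedged, high-level illustration of that sequence only; the Router class, the method names, and the reroute_link callback are invented for the example and are not the Quagga/SockMi implementation.

    class Router:
        """Minimal stand-in for a BGP speaker (illustrative, not Quagga)."""
        def __init__(self, name):
            self.name = name
            self.tcp_sessions = {}   # neighbor IP -> opaque migrated TCP state
            self.rib_in = {}         # neighbor IP -> routes learned from that neighbor

        def run_decision_process(self):
            # Placeholder: re-run best-path selection over self.rib_in.
            print(f"{self.name}: rerunning BGP decision process")

    def graft_session(old, new, neighbor_ip, reroute_link):
        # 1. IP: signal a new path in the underlying transport network so the
        #    neighbor's link now terminates at the new router.
        reroute_link(neighbor_ip, new)
        # 2. TCP: transfer the live socket state; the session endpoint keeps
        #    its IP address, so the remote neighbor notices nothing.
        new.tcp_sessions[neighbor_ip] = old.tcp_sessions.pop(neighbor_ip)
        # 3. BGP: copy the routes learned from this neighbor and repeat the
        #    decision process on the new router.
        new.rib_in[neighbor_ip] = old.rib_in.pop(neighbor_ip)
        new.run_decision_process()

    old_router, new_router = Router("old"), Router("new")
    old_router.tcp_sessions["192.0.2.1"] = b"exported-socket-state"
    old_router.rib_in["192.0.2.1"] = {"203.0.113.0/24": "via 192.0.2.1"}
    graft_session(old_router, new_router, "192.0.2.1",
                  reroute_link=lambda ip, r: print(f"transport: link to {ip} moved to {r.name}"))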

17 Prototype Implementation
Added grafting into Quagga
  – Import/export routes, new 'inactive' state
  – Routing data and decision process well separated
Graft daemon to control the process
SockMi for TCP migration
[Figure: testbed. A graftable router (modified Quagga with a graft daemon and SockMi.ko on Linux kernel 2.6.19.7), an unmodified Quagga router (Linux kernel 2.6.19.7), and a Click node (click.ko on Linux kernel 2.6.19.7-click) emulating link migration.]

18 Grafting for Traffic Engineering
Rather than tweaking the routing protocols…
* Rehome customer to change traffic matrix

19 Traffic Engineering Evaluation
Internet2 topology and traffic data
Developed algorithms to determine which links to graft
Network can handle more traffic (at the same level of congestion)

20 Conclusions
Grafting for seamless change
  – Make maintenance and upgrades seamless
  – Enable new management applications (e.g., TE)
Implementing grafting
  – Modest modifications to the router
  – Leveraging programmable transport networks
Relevance to Coronet
  – Flexible edge-router connectivity
  – Without disrupting neighboring ISPs

21 Joint Failure Recovery and Traffic Engineering
Joint work with Martin Suchara, Dahai Xu, Bob Doverspike, and David Johnson
http://www.cs.princeton.edu/~jrex/papers/stamult10.pdf

22 Simple Network Architecture
Precomputed multipath routing
  – Offline computation based on underlying topology
  – Multiple paths between each pair of routers
Path-level failure detection
  – Edge router only learns which path(s) have failed
  – E.g., using end-to-end probes, like BFD
  – No need for network-wide flooding
Local adaptation to path failures
  – Ingress router rebalances load over remaining paths
  – Based on pre-installed weights (see the sketch below)
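
A minimal sketch of the local-adaptation step referenced above, assuming the simplest policy of renormalizing the pre-installed weights over the paths that are still up (the next slides refine this with per-failure-state ratios); the path names and weights are illustrative.

    def rebalance(weights, failed):
        """Renormalize pre-installed path weights over the surviving paths.

        weights: dict path-id -> configured splitting ratio (sums to 1.0)
        failed:  set of path-ids reported down by end-to-end probing
        """
        alive = {p: w for p, w in weights.items() if p not in failed}
        total = sum(alive.values())
        if total == 0:
            raise RuntimeError("all paths to this egress have failed")
        return {p: w / total for p, w in alive.items()}

    # Ingress router with three precomputed paths to some egress:
    configured = {"p1": 0.5, "p2": 0.25, "p3": 0.25}
    print(rebalance(configured, failed={"p2"}))   # {'p1': 0.666..., 'p3': 0.333...}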

23 Architecture
[Figure: offline phase. Inputs: topology design, list of shared risks, and traffic demands; the output is a set of fixed paths from s to t with splitting ratios (e.g., 0.25 and 0.5).]

24 Architecture
[Figure: online phase. After a link cut, path probing reveals the failed path; the fixed paths from s to t are unchanged, but the splitting ratios are adjusted (e.g., to 0.5 and 0).]

25 State-Dependent Splitting
Custom splitting ratios
  – Weights for each combination of path failures (at most 2^#paths entries)

Configuration (example with paths p1, p2, p3):
Failure | Splitting ratios
-       | 0.4, 0.4, 0.2
p2      | 0.6, 0, 0.4
…       | …

[Figure: the ingress router splitting traffic over paths p1, p2, p3 according to the configured ratios.]
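
Concretely, the configuration can be viewed as a table keyed by the set of failed paths. The sketch below is illustrative only (values taken from the example table above, reusing the rebalance helper from the earlier sketch as a fallback) and shows the look-up an ingress router would perform.

    # Hedged sketch of state-dependent splitting: the ingress looks up the
    # pre-installed ratios for the observed combination of failed paths.
    splitting_table = {
        frozenset():       {"p1": 0.4, "p2": 0.4, "p3": 0.2},   # no failures
        frozenset({"p2"}): {"p1": 0.6, "p2": 0.0, "p3": 0.4},   # p2 down
        # ... at most 2^#paths entries in the worst case
    }

    def split_ratios(failed):
        key = frozenset(failed)
        if key in splitting_table:
            return splitting_table[key]
        # Fallback for combinations that were not precomputed: renormalize the
        # no-failure ratios over the surviving paths (rebalance, defined above).
        return rebalance(splitting_table[frozenset()], failed)

    print(split_ratios({"p2"}))          # pre-installed entry
    print(split_ratios({"p1", "p2"}))    # fallback renormalization: {'p3': 1.0}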

26 Optimizing Paths and Weights
Optimization algorithms
  – Computing multiple paths per pair of routers
  – Computing splitting ratios for each failure scenario
Performance evaluation
  – On AT&T topology, traffic, and shared-risk data
  – Performance competitive with the optimal solution
  – Using around 4-8 paths per pair of routers
Benefits
  – Joint failure recovery and traffic engineering
  – Very simple network elements (nearly zero code)
  – Part of gradual move away from dynamic layer 3
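
To indicate what such an optimization is judged against, here is a small sketch that evaluates the maximum link utilization for one scenario given fixed paths, splitting ratios, and a traffic demand. The topology, capacities, and demand are made up for illustration; this is the objective being evaluated, not the paper's algorithm.

    # Toy two-path topology: s -> a -> t and s -> b -> t, all links 10 units.
    capacity = {("s", "a"): 10.0, ("a", "t"): 10.0, ("s", "b"): 10.0, ("b", "t"): 10.0}
    paths = {"p1": [("s", "a"), ("a", "t")], "p2": [("s", "b"), ("b", "t")]}

    def max_link_utilization(demand, ratios):
        # Spread the demand over its paths and report the busiest link.
        load = {link: 0.0 for link in capacity}
        for path_id, fraction in ratios.items():
            for link in paths[path_id]:
                load[link] += demand * fraction
        return max(load[l] / capacity[l] for l in capacity)

    # No failure: split 60/40; after p2 fails: everything on p1.
    print(max_link_utilization(8.0, {"p1": 0.6, "p2": 0.4}))  # 0.48
    print(max_link_utilization(8.0, {"p1": 1.0}))             # 0.8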

