1 Modeling and Emulation of Internet Paths. Pramod Sanaga, Jonathon Duerig, Robert Ricci, Jay Lepreau. University of Utah
2 Distributed Systems How will you evaluate your distributed system? –DHT –P2P –Content Distribution
3 Network Emulation
4 Why Emulation? Test distributed systems: +Repeatable +Real PCs, applications, protocols +Controlled: dedicated nodes, network parameters. Building block: link emulation
5 Goal: Path Emulation [Figure: hosts connected through an emulated Internet]
6 Link by Link Emulation [Figure: a multi-hop path emulated hop by hop, with links of 10 Mbps, 50 Mbps, 100 Mbps, and 50 Mbps]
7 End to End Emulation [Figure: the same path emulated end to end, characterized by its RTT and available bandwidth (ABW)]
8 Contributions Principles for path emulation –Pick appropriate queue sizes –Separate capacity from ABW –Model reactivity as a function of flows –Model shared bottlenecks
9 Obvious Solution [Diagram: the application's traffic in each direction passes through a link emulator with a shaper, delay, and queue, configured from the measured RTT and ABW]
10 Obvious Solution (Good News) Near-symmetric path (2.3 Mbps forward, 2.2 Mbps reverse, measured with iperf): bandwidth accuracy is good –8.0% error on forward path –7.2% error on reverse path
11 Obvious Solution (Bad News) [Figure: RTT (ms) over time (s), real path vs. obvious emulation] Latency is an order of magnitude higher than on the real path
12 Obvious Solution (More Problems) Asymmetric path (6.4 Mbps forward, 2.6 Mbps reverse, measured with iperf) –50.6% error on forward path! Throughput is much smaller than on the real path –8.5% error on reverse path
13 What's Happening? TCP increases its congestion window until it sees a loss. There are no losses until the queue fills up, so delay grows until the queue is full. The large delays were queueing delays
14 What's Happening [Diagram: link emulator configured with a 6.4 Mbps, 12 ms, 50-packet queue forward and a 2.6 Mbps, 12 ms, 50-packet queue reverse]
15 Delays [Diagram: with full queues, the emulator adds 97 ms of queueing delay forward and 239 ms reverse] How do we make this more rigorous?
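Those two numbers fall out of the queue parameters directly: a full queue drains at the link rate, so its worst-case queueing delay is the queued bytes divided by the bandwidth. A quick check in Python, assuming 1500-byte packets:

```python
# Worst-case queueing delay of a full 50-packet queue (1500-byte packets assumed).
for direction, bw_bps in [("forward", 6.4e6), ("reverse", 2.6e6)]:
    delay_ms = 50 * 1500 * 8 / bw_bps * 1000
    print(f"{direction}: {delay_ms:.0f} ms")
# forward: 94 ms, reverse: 231 ms -- close to the observed 97 ms and 239 ms
```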
16 Maximum Tolerable Queueing Delay Max tolerable queueing delay = max window size / target bandwidth - other RTT. Queue size must be bounded above by this delay
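A minimal sketch of this bound, assuming a 64 KB maximum window (no window scaling), the 6.4 Mbps forward target, and a 24 ms base RTT; the window and RTT values are illustrative assumptions, not necessarily the experiment's exact parameters:

```python
def max_queueing_delay_s(max_window_bytes, target_bw_bps, other_rtt_s):
    # TCP throughput is capped at window / RTT, so to sustain the target
    # bandwidth, queueing may add at most window/bandwidth - other RTT.
    return max_window_bytes * 8 / target_bw_bps - other_rtt_s

print(max_queueing_delay_s(64 * 1024, 6.4e6, 0.024))  # ~0.058 s
```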
17 Upper Bound (Queue Size) Total queue size <= max tolerable queueing delay × capacity. The upper limit is proportional to capacity
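Converting the delay budget into bytes is one multiplication; a sketch continuing the illustrative numbers above (the slide's own 13 KB figure presumably reflects a tighter budget in the actual experiment):

```python
def queue_upper_bound_bytes(max_delay_s, capacity_bps):
    # The queue drains at the link capacity, so bounding queueing delay
    # bounds the queue: bytes <= delay * capacity / 8.
    return max_delay_s * capacity_bps / 8

# With capacity pinned to the 6.4 Mbps target (the "obvious" emulator),
# even a ~58 ms budget allows only ~46 KB of queue.
print(queue_upper_bound_bytes(0.058, 6.4e6))  # ~46,400 bytes
```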
18 Upper Bound Forward direction: 13 KB (about 9 packets). Reverse direction: 5 KB (about 3 packets). These queue sizes are too small –They drop even small bursts!
19 Lower Bound (Queue Size) Total queue size >= total window size: queues smaller than this drop packets on bursts
20 Can't Fulfill Both Bounds Upper limit is 13 KB; lower limit is 65 KB (1 TCP connection, no window scaling). No viable queue size
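The conflict reduces to a one-line feasibility check; a sketch using the slide's numbers (64 KB total window, 13 KB upper bound):

```python
def viable_queue_range(total_window_bytes, upper_bound_bytes):
    # Lower bound: absorb a full-window burst without drops.
    # Upper bound: respect the maximum tolerable queueing delay.
    lower = total_window_bytes
    return (lower, upper_bound_bytes) if lower <= upper_bound_bytes else None

print(viable_queue_range(64 * 1024, 13 * 1024))  # None -> no viable queue size
```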
21 Capacity vs. ABW Capacity is the rate at which everyone's packets drain from the queue; ABW is the rate at which MY packets drain from the queue. A link emulator replicates capacity. Setting capacity == ABW shrinks the queue-size upper bound, causing the conflict above
22 Capacity and Queue Size [Figure: viable queue sizes (KB) vs. capacity (Mbps); the region between the lower bound and the capacity-proportional upper bound opens up as capacity grows]
23 Our Solution Set the queue size based on the constraints, set the shaper to a high bandwidth, and introduce CBR cross-traffic. [Diagram: path emulator with a shaper, delay, and queue per direction; a CBR source and sink inject constant-rate cross-traffic through each queue]
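The arithmetic behind the fix is simple: shape at the (high) capacity and let constant bit rate cross-traffic soak up everything above the target ABW. A sketch; the 100 Mbps capacity is an assumed example value:

```python
def cbr_rate_bps(capacity_bps, target_abw_bps):
    # Foreground traffic sees capacity minus cross-traffic, so injecting
    # CBR at (capacity - target ABW) leaves exactly the target ABW.
    assert target_abw_bps <= capacity_bps
    return capacity_bps - target_abw_bps

print(cbr_rate_bps(100e6, 6.4e6) / 1e6)  # 93.6 Mbps of CBR cross-traffic
```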
24 CBR Traffic TCP cross-traffic backs off; CBR does not. Background traffic must not back off –If it did, the user would see a larger ABW than they set
25 Reactivity Reactive CBR traffic?!?!?! Approximate aggregate ABW as a function of the number of foreground flows, then change the CBR rate based on the flow count
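Reactivity then replaces the single constant rate with a lookup keyed on the current foreground flow count. A sketch; the table ABW_BY_FLOWS is hypothetical data standing in for end-to-end measurements of aggregate ABW:

```python
# Hypothetical measurements: aggregate ABW vs. number of foreground flows
# (more foreground flows push reactive background traffic aside).
ABW_BY_FLOWS = {1: 6.4e6, 2: 8.1e6, 4: 9.5e6}  # flows -> aggregate ABW (bps)

def reactive_cbr_rate_bps(capacity_bps, n_flows):
    # Use the nearest measured flow count at or below the current count.
    measured = max((k for k in ABW_BY_FLOWS if k <= n_flows), default=1)
    return capacity_bps - ABW_BY_FLOWS[measured]

print(reactive_cbr_rate_bps(100e6, 3) / 1e6)  # 91.9 Mbps (uses the 2-flow entry)
```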
26 Does it work?
27 Testing Bandwidth Asymmetric path (6.4 Mbps forward, 2.6 Mbps reverse, measured with iperf):

Direction   Obvious Error   Our Error
Forward     50.6 %          4.1 %
Reverse     8.5 %           5.0 %
28 More Bandwidth Tests (Link = obvious link emulation error, Path = our path emulator error)

Forward     Reverse     Link Error   Path Error
2.3 Mbps    2.2 Mbps    8.0 %        2.1 %
4.1 Mbps    2.8 Mbps    31.7 %       5.8 %
6.4 Mbps    2.6 Mbps    50.6 %       4.1 %
25.9 Mbps   17.2 Mbps   20.4 %       10.2 %
8.0 Mbps                22.0 %       6.3 %
12.0 Mbps               21.5 %       6.5 %
10.0 Mbps   3.0 Mbps    66.5 %       8.5 %
29 Testing Delay [Figure: RTT (ms) over time (s), real path vs. our path emulator] The obvious solution was an order of magnitude higher
30 BitTorrent Setup Measured path conditions among 13 PlanetLab hosts. 12 BitTorrent clients, 1 seed. Isolate the effects of capacity and queue size changes
31 BitTorrent [Figure: per-node download duration (s) under the obvious solution vs. our solution]
32 Related Work Link Emulation –Emulab –Dummynet –ModelNet –NIST Net
33 Related Work Queue Sizes –Appenzeller et al. (SIGCOMM 2004): with a large number of flows, buffer requirements are small; with a small number of flows, queue size should be the bandwidth-delay product –We build on this work to determine our lower bound –We focus on emulating a given bandwidth rather than maximizing performance
34 Related Work Characterize traffic through a particular link –Harpoon (IMC 2004) –Swing (SIGCOMM 2006) –Tmix (CCR 2006) We use only end-to-end measurements and characterize reactivity as a function of flows
35 Conclusion New path emulator –Emulates end-to-end conditions Four principles combine for accuracy –Pick appropriate queue sizes –Separate capacity from ABW –Model reactivity as a function of flows –Model shared bottlenecks
36 Questions? Available now at www.emulab.net Email: duerig@cs.utah.edu
37 Backup Slides
38 Does capacity matter?
39 Scale
40 Shared Bottlenecks are Hard
41 Stationarity