Not All Microseconds are Equal: Fine-Grained Per-Flow Measurements with Reference Latency Interpolation
Myungjin Lee†, Nick Duffield‡, Ramana Rao Kompella†
†Purdue University, ‡AT&T Labs–Research
Low-latency applications
Several new types of applications require extremely low end-to-end latency:
Algorithmic trading applications in financial data center networks
High-performance computing applications in data center networks
Storage applications
Low-latency cut-through switches: Arista 7100 series, Woven EFX 1000 series
Need for high-fidelity measurements
[Figure: end-to-end path through a ToR switch, an edge router, and core routers; which router along the path causes the (roughly 1 ms) latency problem?]
At every router, high-fidelity measurements are critical to localize root causes
Once the root cause is localized, operators can fix it by rerouting traffic, upgrading links, or performing detailed diagnosis
Measurement within a router is necessary
Measurement solutions today
SNMP and NetFlow: no latency measurements
Active probes: typically end-to-end, so they do not localize the root cause
Expensive high-fidelity measurement boxes: Corvil boxes (£90,000), used by the London Stock Exchange; cannot be placed ubiquitously
Lossy Difference Aggregator (LDA) [Kompella, SIGCOMM '09]: provides average latency and variance at high fidelity within a switch; a good start, but may not be sufficient to diagnose flow-specific problems
Motivation for per-flow measurements
Key observation: significant differences in average latency across flows at a router
[Figure: per-packet delay over time at a switch queue during a measurement period; some flows experience large delays, others small delays, relative to the overall average latency]
Outline of the rest of the talk
Measurement model
Alternative approaches
Intuition behind our approach: delay locality
Our architecture: Reference Latency Interpolation (RLI)
Evaluation
Measurement model
Assumption: time synchronization between router interfaces
Constraint: regular packets cannot be modified to carry timestamps; doing so would require intrusive changes to the router forwarding path and consume extra bandwidth (up to 10% of capacity)
[Figure: a router with ingress interface I and egress interface E; latency is measured between I and E]
Naïve approach
For each flow key: store a timestamp for every packet at I and E; after a flow stops sending, I sends its packet timestamps to E; E computes individual packet delays and aggregates average latency, variance, etc. per flow (see the sketch below)
Problem: high communication cost; at 10 Gbps, a few million packets per second
Sampling reduces communication, but also reduces accuracy
[Figure: worked example in which the egress subtracts per-packet ingress timestamps from egress timestamps and averages them per flow, e.g. avg. delay = 22/2 = 11 for one flow and 32/2 = 16 for another]
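A minimal sketch of this naïve scheme in Python, assuming synchronized clocks and lossless, in-order delivery of the timestamp lists; the function name, data layout, and example numbers are illustrative, not taken from the paper.

```python
# Naive per-flow latency: the ingress records a timestamp per packet, ships
# the list to the egress when the flow stops, and the egress subtracts to
# get per-packet delays and per-flow mean/variance.
def naive_per_flow_latency(ingress_ts, egress_ts):
    """ingress_ts / egress_ts: dict mapping flow_key -> list of per-packet
    timestamps in matching packet order, taken on synchronized clocks."""
    stats = {}
    for flow, in_ts in ingress_ts.items():
        delays = [e - i for i, e in zip(in_ts, egress_ts[flow])]
        mean = sum(delays) / len(delays)
        var = sum((d - mean) ** 2 for d in delays) / len(delays)
        stats[flow] = (mean, var)
    return stats

# Two flows with two packets each; timestamps are illustrative microseconds.
print(naive_per_flow_latency(
    {"f1": [10, 15], "f2": [13, 18]},
    {"f1": [20, 27], "f2": [23, 38]}))
# -> {'f1': (11.0, 1.0), 'f2': (15.0, 25.0)}
```

The communication problem is visible in the data layout: every packet's timestamp must cross from ingress to egress, which at 10 Gbps means millions of values per second.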
A (naïve) extension of LDA
Maintain LDAs with many counters, one set per flow of interest
Problem: (potentially) high communication cost, proportional to the number of flows
[Figure: ingress I and egress E each keep per-flow LDA state (a packet count and a sum of timestamps) and coordinate to compute per-flow latency]
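The following sketch shows the per-flow counter idea in its simplest form: each side keeps only a packet count and a timestamp sum per flow, and the egress combines them at the end of an interval. It deliberately ignores LDA's hashing into banks and its packet-loss handling, so it is illustrative rather than a faithful LDA implementation.

```python
from collections import defaultdict

class PerFlowLDA:
    """One (packet count, timestamp sum) pair per flow, kept at one side."""
    def __init__(self):
        self.count = defaultdict(int)
        self.ts_sum = defaultdict(float)

    def record(self, flow, timestamp):
        self.count[flow] += 1
        self.ts_sum[flow] += timestamp

def average_latencies(ingress, egress):
    # Valid only if every counted packet is seen by both sides (no loss);
    # real LDA exists precisely to tolerate loss, which is omitted here.
    return {f: (egress.ts_sum[f] - ingress.ts_sum[f]) / n
            for f, n in ingress.count.items() if n and egress.count[f] == n}
```

Even in this stripped-down form, the coordination message grows with the number of flows, which is the communication-cost problem the slide points out.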
Key observation: Delay locality
True mean delay = (D1 + D2 + D3) / 3
Localized mean delay = (WD1 + WD2 + WD3) / 3, where WDi is the delay localized to a window of size W around packet i
How close is the localized mean delay to the true mean delay as the window size varies? (A small sketch of this comparison follows.)
[Figure: delay vs. time, showing per-packet delays D1, D2, D3 and their windowed counterparts WD1, WD2, WD3]
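Under my reading of the slide, the check can be sketched as follows: replace each packet's delay by the mean delay of packets arriving within a window W around it, average those localized delays, and compare against the true mean using root-mean-squared relative error (RMSRE). The exact windowing convention is an assumption.

```python
def localized_mean_delay(arrivals, delays, window):
    """Mean over packets of the average delay seen within +/- window/2 of
    each packet's arrival time (the packet itself included)."""
    local = []
    for t in arrivals:
        near = [d for a, d in zip(arrivals, delays) if abs(a - t) <= window / 2]
        local.append(sum(near) / len(near))
    return sum(local) / len(local)

def rmsre(true_values, estimates):
    """Root-mean-squared relative error between true and estimated values."""
    ratios = [(e - t) / t for t, e in zip(true_values, estimates) if t > 0]
    return (sum(r * r for r in ratios) / len(ratios)) ** 0.5
```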
Key observation: Delay locality
[Figure: local mean delay per key vs. true mean delay per key (ms), for window sizes of 0.1 ms (RMSRE = 0.054), 10 ms (RMSRE = 0.16), 1 s (RMSRE = 1.72), and the global mean]
Data sets from a real router and a synthetic queueing model
Exploiting delay locality
Reference packets are injected regularly at the ingress I
Special packets carrying an ingress timestamp
They provide reference delay samples, used to approximate the latencies of regular packets
[Figure: delay vs. time, with reference packets carrying ingress timestamps interleaved among regular packets]
RLI architecture
Component 1: Reference packet generator, which injects reference packets regularly at the ingress
Component 2: Latency estimator, which estimates packet latencies and updates per-flow statistics directly at the egress, with no extra state maintained at the ingress (reducing storage and communication overheads)
[Figure: the reference packet generator sits at ingress I and the latency estimator at egress E; regular packets travel between reference packets L and R, which carry ingress timestamps]
Component 1: Reference packet generator
Question: when should a reference packet be injected?
Idea 1: 1-in-n, inject one reference packet every n packets; problem: low accuracy under low utilization
Idea 2: 1-in-τ, inject one reference packet every τ seconds; problem: bad when short-term delay variance is high
Our approach: dynamic injection based on utilization (high utilization → low injection rate, low utilization → high injection rate); a sketch of such a policy follows
The adaptive scheme works better than fixed-rate schemes
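A hedged sketch of a utilization-adaptive injection decision: it follows the slide's rule that the injection rate falls as utilization rises, but the specific control law, parameter values, and the way utilization is measured are my assumptions, not the paper's.

```python
import random

def should_inject(utilization, base_prob=0.01, min_prob=0.0005):
    """Decide, per arriving regular packet, whether to inject a reference
    packet next. utilization: current link utilization in [0, 1], measured
    elsewhere (e.g., over a short sliding window of byte counts)."""
    # Injection probability shrinks linearly as utilization grows,
    # but never drops below a small floor so samples keep arriving.
    prob = max(base_prob * (1.0 - utilization), min_prob)
    return random.random() < prob
```

The floor (min_prob) reflects the need for at least occasional reference samples even on a busy link; the exact shape of the decrease is a design choice, not something the slide specifies.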
Component 2: Latency estimator
Question 1: how to estimate packet latencies using reference packets?
Solution: several estimators are possible
Use only the delay of the left reference packet (RLI-L)
Use linear interpolation between the left and right reference packets (RLI)
Other non-linear estimators are possible (e.g., shrinkage)
[Figure: delay vs. time; the arrival times and delays of reference packets L and R are known, only the arrival time of a regular packet is known, and its delay is read off the linear interpolation line between L and R, with some error relative to its true delay]
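A minimal sketch of the interpolation step, assuming the (arrival time, delay) of the left and right reference packets are available at the egress; variable names are mine, not the paper's.

```python
def rli_estimate(t, t_l, d_l, t_r, d_r):
    """Estimate the delay of a regular packet arriving at time t between the
    left reference packet (arrival t_l, delay d_l) and the right reference
    packet (arrival t_r, delay d_r) by reading off the line joining them."""
    if t_r == t_l:                 # degenerate case: references coincide
        return d_l
    slope = (d_r - d_l) / (t_r - t_l)
    return d_l + slope * (t - t_l)

def rli_l_estimate(d_l):
    """RLI-L simply reuses the left reference packet's delay."""
    return d_l

# Example: left reference at t=0 with delay 100 us, right at t=10 with
# delay 200 us; a packet arriving at t=4 is estimated at 140 us.
assert rli_estimate(4, 0, 100, 10, 200) == 140
```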
Component 2: Latency estimator
Question 2: how to compute per-flow latency statistics?
Solution: maintain 3 counters per flow at the egress side (sketched below)
C1: number of packets
C2: sum of packet delays
C3: sum of squares of packet delays (for estimating variance)
When a flow is exported, avg. latency = C2 / C1
To minimize state, any flow selection strategy can be used to maintain counters for only a subset of flows
[Figure: packets wait in an interpolation buffer until the right reference packet R arrives; their delays are then estimated and, subject to flow selection, the counters C1, C2, C3 of the corresponding flows are updated]
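A sketch of that per-flow state, with the flow-selection strategy abstracted into a predicate; the class structure and names are illustrative rather than the paper's data structures.

```python
from collections import defaultdict

class PerFlowStats:
    def __init__(self, select=lambda flow: True):
        self.select = select                         # any flow selection strategy
        self.c = defaultdict(lambda: [0, 0.0, 0.0])  # flow -> [C1, C2, C3]

    def update(self, flow, est_delay):
        """Fold one estimated packet delay into the flow's three counters."""
        if not self.select(flow):
            return
        c = self.c[flow]
        c[0] += 1                  # C1: number of packets
        c[1] += est_delay          # C2: sum of packet delays
        c[2] += est_delay ** 2     # C3: sum of squared packet delays

    def export(self, flow):
        """At export time, turn the counters into mean and variance."""
        c1, c2, c3 = self.c.pop(flow)
        mean = c2 / c1
        variance = c3 / c1 - mean ** 2
        return mean, variance
```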
Experimental environment
Data sets: no public data center traces with timestamps are available, so we use real router traces with synthetic workloads (WISC) and real backbone traces with synthetic queueing (CHIC and SANJ)
Simulation tool: based on YAF, an open-source NetFlow software; it supports the reference packet injection mechanism and simulates a queueing model with a RED active queue management policy
Experiments with different link utilizations
Accuracy of RLI under high link utilization
[Figure: CDF of relative error]
The median relative error is 10-12%
Comparison with other solutions
[Figure: average relative error vs. utilization; packet sampling rate = 0.1%]
1-2 orders of magnitude difference
Overhead of RLI
Bandwidth overhead is low: less than 0.2% of link capacity
Impact on packet loss is small: the packet loss difference with and without RLI is at most 0.001% at around 80% utilization
Summary
A scalable architecture to obtain high-fidelity per-flow latency measurements between router interfaces
Achieves a median relative error of 10-12%
Shows 1-2 orders of magnitude lower relative error compared to existing solutions
Measurements are obtained directly at the egress side
Future work: per-packet diagnosis
Thank you! Questions?
Backup
Comparison with other solutions
[Figure: CDF of relative error]
Bandwidth overhead
[Figure: bandwidth consumption vs. utilization]
Interference with regular traffic
[Figure: cumulative fraction vs. per-flow delay interference (seconds)]
Impact on packet losses
[Figure: loss rate difference vs. utilization]