Not All Microseconds are Equal: Fine-Grained Per-Flow Measurements with Reference Latency Interpolation Myungjin Lee†, Nick Duffield‡, Ramana Rao Kompella†


Not All Microseconds are Equal: Fine-Grained Per-Flow Measurements with Reference Latency Interpolation Myungjin Lee†, Nick Duffield‡, Ramana Rao Kompella† †Purdue University, ‡AT&T Labs–Research

Low-latency applications 2  Several new types of applications require extremely low end-to-end latency  Algorithmic trading applications in financial data center networks  High performance computing applications in data center networks  Storage applications  Low latency cut-through switches  Arista 7100 series  Woven EFX 1000 series

Need for high-fidelity measurements 3  At every router, high-fidelity measurements are critical to localize root causes  Once the root cause is localized, operators can fix it by rerouting traffic, upgrading links, or performing detailed diagnosis [Figure: packets traverse ToR switches, edge routers, and core routers, with per-hop delays labeled 1 ms; "Which router causes the problem?"] Measurement within a router is necessary

Measurement solutions today 4  SNMP and NetFlow  No latency measurements  Active probes  Typically end-to-end, do not localize the root cause  Expensive high-fidelity measurement boxes  Corvil boxes (£90,000): used by the London Stock Exchange  Cannot place these boxes ubiquitously  Lossy Difference Aggregator (LDA) [Kompella, SIGCOMM'09]  Provides average latency and variance at high fidelity within a switch  Provides a good start but may not be sufficient to diagnose flow-specific problems

Motivation for per-flow measurements 5  Key observation: Average latencies differ significantly across flows at a router [Figure: delay over time at a switch queue during a measurement period; some flows see large delays and others small delays relative to the average latency]

Outline of the rest of the talk 6  Measurement model  Alternative approaches  Intuition behind our approach: Delay locality  Our architecture: Reference Latency Interpolation (RLI)  Evaluation

Measurement model 7  Assumption: Time synchronization between router interfaces  Constraint: Cannot modify regular packets to carry timestamps  Intrusive changes to the router forwarding path  Extra bandwidth consumption of up to 10% of link capacity [Figure: a router with ingress interface I and egress interface E]

Naïve approach 8  For each flow key,  Store timestamps for each packet at I and E  After a flow stops sending, I sends the packet timestamps to E  E computes individual packet delays  E aggregates average latency, variance, etc. for each flow  Problem: High communication costs  At 10 Gbps, a few million packets per second  Sampling reduces communication, but also reduces accuracy [Figure: worked example with per-packet ingress and egress timestamps; avg. delay = 22/2 = 11 for one flow and 32/2 = 16 for another]
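To make the naïve approach concrete, here is a minimal Python sketch (the function and flow names are illustrative, not from the talk) of how the egress could compute per-flow average delay from matched ingress/egress timestamps; the numbers reproduce the slide's 22/2 = 11 and 32/2 = 16 example:

```python
def average_delays(ingress_ts, egress_ts):
    """ingress_ts / egress_ts: dict mapping flow key -> list of per-packet
    timestamps, matched by position. Returns per-flow average delay."""
    avg = {}
    for key, ins in ingress_ts.items():
        outs = egress_ts[key]
        delays = [e - i for i, e in zip(ins, outs)]  # per-packet delay
        avg[key] = sum(delays) / len(delays)
    return avg

# Two flows, two packets each: delays (11, 11) and (16, 16)
ingress = {"flowA": [0, 10], "flowB": [5, 7]}
egress = {"flowA": [11, 21], "flowB": [21, 23]}
print(average_delays(ingress, egress))  # {'flowA': 11.0, 'flowB': 16.0}
```

The cost the slide highlights is visible here: the ingress must ship one timestamp per packet to the egress, which at 10 Gbps means millions of values per second.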

A (naïve) extension of LDA 9  Maintain LDAs with many counters for flows of interest  Problem: (Potentially) high communication costs  Proportional to the number of flows [Figure: ingress I and egress E each maintain an LDA of packet counts and sums of timestamps; coordination between them yields per-flow latency]

Key observation: Delay locality 10  True mean delay = (D1 + D2 + D3) / 3  Localized mean delay = (WD1 + WD2 + WD3) / 3, where WDi is the mean delay over the time window containing packet i  How close is the localized mean delay to the true mean delay as the window size varies? [Figure: packet delays D1, D2, D3 over time, each approximated by its window mean WD1, WD2, WD3]
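The delay-locality comparison can be sketched as follows; this is an illustrative Python mock-up (function name and data are assumptions), in which each packet's delay is replaced by the mean delay of all packets sharing its time window, and per-key means are compared:

```python
from collections import defaultdict

def per_key_means(records, window):
    """records: (arrival_time, flow_key, true_delay) tuples.
    Returns (true mean delay per key, localized mean delay per key)."""
    # 1. mean delay of each time window, pooled over all flows
    win = defaultdict(list)
    for t, _, d in records:
        win[int(t // window)].append(d)
    win_mean = {w: sum(v) / len(v) for w, v in win.items()}
    # 2. per-key true mean vs. localized mean (window mean per packet)
    true_m, local_m = defaultdict(list), defaultdict(list)
    for t, k, d in records:
        true_m[k].append(d)
        local_m[k].append(win_mean[int(t // window)])
    avg = lambda xs: sum(xs) / len(xs)
    return ({k: avg(v) for k, v in true_m.items()},
            {k: avg(v) for k, v in local_m.items()})

# Two flows whose packets interleave in time: with a small window,
# localized per-key means stay close to the true per-key means.
records = [(0.0, "A", 1.0), (0.1, "B", 1.2), (1.0, "A", 3.0), (1.1, "B", 3.4)]
true_mean, local_mean = per_key_means(records, window=0.5)
```

Shrinking the window drives the localized means toward the true means, which is exactly the trend the RMSRE plot on the next slide quantifies.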

Key observation: Delay locality 11  Data sets from real router and synthetic queueing model [Figure: localized vs. true mean delay per flow key (both in ms) for window sizes from 0.1 ms to 1 s; RMSRE grows with window size, reaching 0.16 and 1.72 for the larger windows, and a single global mean performs far worse]

Exploiting delay locality 12  Reference packets are injected regularly at the ingress I  Special packets carrying an ingress timestamp  Provide some reference delay samples  Used to approximate the latencies of regular packets [Figure: reference packets carrying ingress timestamps interleaved with regular traffic on a delay-vs.-time plot]

RLI architecture 13  Component 1: Reference Packet Generator  Injects reference packets regularly  Component 2: Latency Estimator  Estimates packet latencies and updates per-flow statistics  Estimates directly at the egress with no extra state maintained at the ingress side (reduces storage and communication overheads) [Figure: reference packets with ingress timestamps travel from the Reference Packet Generator at ingress I to the Latency Estimator at egress E]

Component 1: Reference packet generator 14  Question: When to inject a reference packet?  Idea 1: 1-in-n: Inject one reference packet every n packets  Problem: low accuracy under low utilization  Idea 2: 1-in-τ: Inject one reference packet every τ seconds  Problem: performs badly when short-term delay variance is high  Our approach: Dynamic injection based on utilization  High utilization  low injection rate  Low utilization  high injection rate  Adaptive scheme works better than fixed-rate schemes
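One possible shape for the adaptive injector, as a hedged Python sketch: the talk states only the qualitative rule (high utilization, low injection rate), so the inverse-utilization formula, function name, and parameters below are assumptions for illustration, not the paper's actual scheme:

```python
def injection_interval(utilization, base_interval=0.001, min_util=0.05):
    """Return the time (seconds) until the next reference packet:
    longer gaps (lower injection rate) when the link is busy, shorter
    gaps (higher rate) when it is lightly loaded."""
    u = max(min(utilization, 0.99), min_util)  # clamp to avoid extremes
    # at u = 0.5 the interval equals base_interval; it grows with u
    return base_interval * u / (1.0 - u)

# A busy link spaces reference packets out; an idle link probes often.
busy_gap = injection_interval(0.9)
idle_gap = injection_interval(0.1)
```

The point of the adaptation is that reference packets add load exactly when the link can least afford it, so the injection rate should back off as utilization rises.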

Component 2: Latency estimator 15  Question 1: How to estimate latencies using reference packets  Solution: Different estimators possible  Use only the delay of the left reference packet (RLI-L)  Use linear interpolation of left and right reference packets (RLI)  Other non-linear estimators possible (e.g., shrinkage) [Figure: for a regular packet only the arrival time is known; for the left (L) and right (R) reference packets both arrival time and delay are known. The regular packet's delay is read off the linear interpolation line between L and R, with some error relative to the true delay]
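The two estimators fit in a few lines of Python (a minimal sketch; the function name and the fallback behavior when no right reference has arrived are assumptions): interpolating between the left and right reference packets gives the RLI estimate, while using the left delay alone gives RLI-L:

```python
def interpolate_delay(t, left, right=None):
    """Estimate a regular packet's delay at arrival time t.
    left/right: (arrival_time, delay) of the surrounding reference
    packets. With no right reference, fall back to RLI-L."""
    tl, dl = left
    if right is None:
        return dl  # RLI-L: left reference delay only
    tr, dr = right
    if tr == tl:
        return dl
    # RLI: point on the line through (tl, dl) and (tr, dr) at time t
    return dl + (dr - dl) * (t - tl) / (tr - tl)

# References at t=0 (delay 2.0) and t=10 (delay 4.0): a regular packet
# arriving at t=5 is assigned the midpoint delay.
print(interpolate_delay(5, (0, 2.0), (10, 4.0)))  # 3.0
print(interpolate_delay(5, (0, 2.0)))             # 2.0 (RLI-L)
```

Note that the full RLI estimate can only be finalized once the right reference packet arrives, which is why the next slide buffers packets in an interpolation buffer.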

Component 2: Latency estimator 16  Question 2: How to compute per-flow latency statistics  Solution: Maintain 3 counters per flow at the egress side  C1: Number of packets  C2: Sum of packet delays  C3: Sum of squares of packet delays (for estimating variance)  To minimize state, can use any flow selection strategy to maintain counters for only a subset of flows [Figure: when the right reference packet arrives, the estimated delay and squared delay of each packet in the interpolation buffer update the counters C1, C2, C3 of its (selected) flow key; when a flow is exported, avg. latency = C2 / C1]
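The three-counter update is straightforward; here is a minimal Python sketch (class and method names are illustrative) of how the egress could maintain C1, C2, C3 per flow and derive the mean and variance at export time:

```python
class FlowStats:
    """Per-flow counters kept at the egress side."""
    def __init__(self):
        self.c1 = 0    # C1: number of packets
        self.c2 = 0.0  # C2: sum of packet delays
        self.c3 = 0.0  # C3: sum of squares of packet delays

    def update(self, delay):
        """Fold in one (estimated) packet delay."""
        self.c1 += 1
        self.c2 += delay
        self.c3 += delay * delay

    def mean(self):
        return self.c2 / self.c1  # avg. latency = C2 / C1

    def variance(self):
        m = self.mean()
        return self.c3 / self.c1 - m * m  # E[d^2] - (E[d])^2

stats = FlowStats()
for d in (2.0, 4.0):
    stats.update(d)
print(stats.mean(), stats.variance())  # 3.0 1.0
```

Because each counter is updated by simple addition, the egress never needs to store individual delays, which is what keeps the per-flow state to three words regardless of flow length.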

Experimental environment 17  Data sets  No public data center traces with timestamps  Real router traces with synthetic workloads: WISC  Real backbone traces with synthetic queueing: CHIC and SANJ  Simulation tool: Open source NetFlow software – YAF  Supports reference packet injection mechanism  Simulates a queueing model with RED active queue management policy  Experiments with different link utilizations

Accuracy of RLI under high link utilization 18 [Figure: CDF of relative error] Median relative error is 10-12%

Comparison with other solutions 19 [Figure: average relative error vs. utilization, packet sampling rate = 0.1%; RLI's error is 1-2 orders of magnitude lower than the alternatives]

Overhead of RLI 20  Bandwidth overhead is low  less than 0.2% of link capacity  Impact to packet loss is small  Packet loss difference with and without RLI is at most 0.001% at around 80% utilization

Summary 21  A scalable architecture to obtain high-fidelity per-flow latency measurements between router interfaces  Achieves a median relative error of 10-12%  Shows 1-2 orders of magnitude lower relative error compared to existing solutions  Measurements are obtained directly at the egress side  Future work: Per-packet diagnosis

Thank you! Questions? 22

Backup 23

Comparison with other solutions 24 [Figure: CDF of relative error]

Bandwidth overhead 25 [Figure: bandwidth consumption vs. utilization]

Interference with regular traffic 26 [Figure: cumulative fraction of per-flow delay interference (seconds)]

Impact to packet losses 27 [Figure: loss rate difference vs. utilization]