Performance Debugging for Distributed Systems of Black Boxes
Eri Prasetyo

Sources:
- Tutorial at Quality Week, 2002, Cem Kaner, Florida Institute of Technology
- Performance Debugging for Distributed Systems of Black Boxes, Yinghua Wu, Haiyong Xie

Background

Dimensions of Test Case Quality

Models

Paradigms

Motivation
- Complex distributed systems
  – Built from black-box components
  – Heavy communication traffic
  – Bottlenecks at some specific nodes
- These systems may have performance problems
  – High or erratic latency
  – Caused by complex system interactions
- Isolating performance bottlenecks is hard
  – We cannot always examine or modify system components
- We need tools to infer where the bottlenecks are
  – Choose which black boxes to open

Example multi-tier system
(Diagram: clients, web servers, application servers, database servers, and an authentication server; the labeled delay is 100 ms.)

Goals
- Isolate performance bottlenecks
  – Find high-impact causal path patterns
    – Causal path: a series of nodes that sent/received messages, where each message is caused by receipt of the previous message; some causal paths occur many times
    – High-impact: occurs frequently and contributes significantly to overall latency
  – Identify high-latency nodes on high-impact patterns, i.e., nodes that add significant latency to these patterns
- So what do we need? A trace of messages is enough (scoring sketch below).
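To make "high-impact" concrete, here is a minimal, hypothetical Python sketch (not part of the paper) that scores already-reconstructed causal path patterns by how much total latency they account for; the node names and latencies are made up.

# Hypothetical sketch (not from the paper): rank already-reconstructed causal
# path patterns by "impact" = total latency contributed across the whole trace.
from collections import defaultdict

def rank_paths(observed_paths):
    """observed_paths: iterable of (path, latency_s), path = tuple of node names."""
    count = defaultdict(int)
    total = defaultdict(float)
    for path, latency in observed_paths:
        count[path] += 1
        total[path] += latency
    return sorted(((total[p], count[p], p) for p in count), reverse=True)

if __name__ == "__main__":
    observed = [(("client", "web", "app", "db"), 0.120),
                (("client", "web", "app", "db"), 0.130),
                (("client", "web", "auth"), 0.010)]
    for impact, n, path in rank_paths(observed):
        print(f"{impact:.3f}s across {n} occurrences: {' -> '.join(path)}")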

The Black Box
- A complex distributed system built from "black boxes", with performance bottlenecks (diagram)
- Desired properties of the approach
  – Zero-knowledge, zero-instrumentation, zero-perturbation
  – Scalability
  – Accuracy
  – Efficiency (time and space)

Overview of Approach
- Obtain traces of messages between components
  – Ethernet packets, middleware messages, etc.
  – Collect traces as non-invasively as possible
  – Require very little information per message: [timestamp, source, destination, call/return, call-id]
- Analyze traces using our algorithms
  – Nesting: faster, more accurate, limited to RPC-style systems
  – Convolution: works for all message-based systems
- Visualize results and highlight high-impact paths
(A sketch of the trace record follows.)
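For concreteness, a minimal sketch of what such a trace record might look like; the fields follow the tuple above, but the class and parser are hypothetical, not part of the paper's tooling.

# Hypothetical trace record matching [timestamp, source, destination, call/return, call-id].
from dataclasses import dataclass

@dataclass(frozen=True)
class TraceEntry:
    timestamp: float      # seconds since trace start
    source: str           # sending node, e.g. "web1"
    destination: str      # receiving node, e.g. "app2"
    kind: str             # "CALL" or "RETURN"
    call_id: str          # matches a RETURN to its CALL

def parse_line(line: str) -> TraceEntry:
    # Assumes a simple whitespace-separated format, e.g.
    # "12.0031 web1 app2 CALL 7f3a"
    t, src, dst, kind, cid = line.split()
    return TraceEntry(float(t), src, dst, kind, cid)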

Recap: causal path
(Same multi-tier diagram as before: clients, web servers, application servers, database servers, authentication server; 100 ms.)

Challenges
- Traces contain interleaved messages from many causal paths
  – How do we identify causal paths? Trace causality by timestamp.
- We want only the statistically significant causal paths
  – How do we tell which ones are significant? That is easy: they appear repeatedly.

The nesting algorithm
- Depends on RPC (remote procedure call)-style communication
- Infers causality from "nesting" relationships between message timestamps
  – Suppose A calls B, and B calls C before returning to A
  – Then the B → C call is "nested" in the A → B call
- Uses statistical correlation
(Timeline diagram: calls and returns among nodes A, B, and C.)

Nesting: an example
- Causal path over nodes A, B, C, D
- Consider this system of 4 nodes (timeline diagram of calls and returns among A, B, C, and D)
- We are looking for the internal delay at each node

Steps of the nesting algorithm
1. Pair call and return messages
   – (A → B, B → A), (B → D, D → B), (B → C, C → B)
2. Find and score all nesting relationships
   – B → C nested in A → B
   – B → D also nested in A → B
3. Pick the best parent for each call
   – Here the choice is unambiguous
4. Reconstruct call paths
   – A → B → [C; D]
O(m) run time, where m is the number of messages.
(Timeline diagram: calls and returns among nodes A, B, C, and D.)

Pseudo-code for the nesting algorithm
- Detect call pairs and find all possible nestings of one call pair in another
- Pick the most likely candidate for the causing call of each call pair
- Derive call paths from the causal relationships

procedure FindCallPairs
  for each trace entry (t1, CALL/RET, sender A, receiver B, callid id)
    case CALL:
      store (t1, CALL, A, B, id) in Topencalls
    case RETURN:
      find matching entry (t2, CALL, B, A, id) in Topencalls
      if match is found then
        remove entry from Topencalls
        update entry with return message timestamp t2
        add entry to Tcallpairs
        entry.parents := {all call pairs (t3, CALL, X, A, id2) in Topencalls with t3 < t2}

procedure ScoreNestings
  for each child (B, C, t2, t3) in Tcallpairs
    for each parent (A, B, t1, t4) in child.parents
      scoreboard[A, B, C, t2 - t1] += 1 / |child.parents|

procedure FindNestedPairs
  for each child (B, C, t2, t3) in Tcallpairs
    maxscore := 0
    for each p = (A, B, t1, t4) in child.parents
      score[p] := scoreboard[A, B, C, t2 - t1] * penalty
      if score[p] > maxscore then
        maxscore := score[p]
        parent := p
    parent.children := parent.children ∪ {child}

procedure FindCallPaths
  initialize hash table Tpaths
  for each callpair (A, B, t1, t2)
    if callpair.parents = null then
      root := CreatePathNode(callpair, t1)
      if root is in Tpaths then update its latencies
      else add root to Tpaths

function CreatePathNode(callpair (A, B, t1, t4), tp)
  node := new node with name B
  node.latency := t4 - t1
  node.call_delay := t1 - tp
  for each child in callpair.children
    node.edges := node.edges ∪ {CreatePathNode(child, t1)}
  return node
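To make the steps above concrete, here is a minimal, runnable Python sketch of the same idea. It is a simplification, not the paper's implementation: it keeps a scoreboard of observed delays but omits the penalty term and the path-aggregation step, and all names and the trace layout are illustrative (matching the earlier record sketch).

# Simplified sketch of the nesting idea: pair CALL/RETURN messages, then choose
# each call pair's parent using aggregate delay statistics (a coarse version of
# ScoreNestings/FindNestedPairs above).
from collections import defaultdict

def find_call_pairs(entries):
    """entries: (t, kind, src, dst, call_id) tuples, kind in {"CALL", "RETURN"}."""
    open_calls, pairs = {}, []
    for t, kind, src, dst, cid in sorted(entries):
        if kind == "CALL":
            open_calls[(src, dst, cid)] = t
        else:  # RETURN: src is the callee, dst the original caller
            t_call = open_calls.pop((dst, src, cid), None)  # the CALL went dst -> src
            if t_call is not None:
                pairs.append({"caller": dst, "callee": src,
                              "t_call": t_call, "t_return": t, "children": []})
    return pairs

def nest_pairs(pairs, delay_bin=0.001):
    """Attach each call pair to its most likely parent; return the root pairs."""
    def candidates(child):
        return [p for p in pairs
                if p["callee"] == child["caller"]
                and p["t_call"] < child["t_call"]
                and child["t_return"] < p["t_return"]]

    def key(parent, child):
        delay = child["t_call"] - parent["t_call"]
        return (parent["caller"], child["caller"], child["callee"],
                round(delay / delay_bin))

    cand = {id(c): candidates(c) for c in pairs}
    scoreboard = defaultdict(float)
    for child in pairs:                      # score all possible nestings
        for parent in cand[id(child)]:
            scoreboard[key(parent, child)] += 1.0 / len(cand[id(child)])
    for child in pairs:                      # pick the best-scoring parent
        if cand[id(child)]:
            best = max(cand[id(child)], key=lambda p: scoreboard[key(p, child)])
            best["children"].append(child)
    return [p for p in pairs if not cand[id(p)]]

if __name__ == "__main__":
    trace = [(0.00, "CALL", "A", "B", "1"), (0.01, "CALL", "B", "C", "2"),
             (0.03, "RETURN", "C", "B", "2"), (0.05, "RETURN", "B", "A", "1")]
    for root in nest_pairs(find_call_pairs(trace)):
        print(root["caller"], "->", root["callee"], "children:",
              [(c["caller"], c["callee"]) for c in root["children"]])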

Inferring nesting
- Local information is not enough, so use aggregate information
  – Histograms keep track of the possible latencies
  – The medium-length delay will be selected
- Assign nestings using heuristic methods
(Timeline diagram: an example of parallel calls among nodes A, B, and C, with times t1–t4.)

The convolution algorithm
- Build a "time signal" of messages for each communicating node pair
  – e.g., A sent messages to B at times 1, 2, 5, 6, 7
  – S1(t) = the A → B message signal over time

The convolution algorithm
- Look for time-shifted similarities between the signals
  – Compute the convolution X(t) = S2(t) ⊗ S1(t), where S1(t) is the A → B signal and S2(t) is the B → C signal
  – Use Fast Fourier Transforms to compute it
- Peaks in X(t) suggest causality between A → B and B → C
- The time shift of a peak indicates the delay
(An FFT-based sketch follows.)
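A minimal sketch of this FFT trick, assuming NumPy is available; the function names and parameters are illustrative, not the paper's code. It bins each edge's message timestamps into a discrete signal and cross-correlates the two signals to recover the typical delay.

# Hypothetical sketch: bin each edge's message timestamps into a discrete
# "time signal", then cross-correlate the two signals with FFTs. A peak in the
# result suggests A->B messages cause B->C messages; its position is the delay.
import numpy as np

def to_signal(timestamps, duration, step):
    """Count messages per time bin of width `step` seconds."""
    signal = np.zeros(int(np.ceil(duration / step)) + 1)
    for t in timestamps:
        signal[int(round(t / step))] += 1.0
    return signal

def correlate_fft(s1, s2):
    """Cross-correlate s2 against s1 via FFTs; argmax = delay in time steps."""
    n = len(s1) + len(s2)
    return np.fft.irfft(np.fft.rfft(s2, n) * np.conj(np.fft.rfft(s1, n)), n)

if __name__ == "__main__":
    step, duration = 0.01, 10.0           # 10 ms bins over a 10 s trace
    a_to_b = [1.0, 2.0, 5.0, 6.0, 7.0]    # send times from the slide's example
    b_to_c = [t + 0.2 for t in a_to_b]    # each A->B call triggers B->C ~200 ms later
    x = correlate_fft(to_signal(a_to_b, duration, step),
                      to_signal(b_to_c, duration, step))
    print("estimated delay:", round(np.argmax(x) * step, 3), "seconds")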

Convolution details
- Time complexity: O(em + eV log V)
  – m = number of messages
  – e = number of output edges
  – V = number of time steps in the trace
- Need to choose the time-step size
  – Must be shorter than the delays of interest
  – Too coarse: poor accuracy
  – Too fine: long running time
- Robust to noise in the trace

Algorithm comparison
- Nesting
  – Looks at individual paths and then aggregates
  – Finds rare paths
  – Requires call/return-style communication
  – Fast enough for real-time analysis
- Convolution
  – Applicable to a broader class of systems
  – Slower: more work with less information
  – May need to try different time steps to get good results
  – Reasonable for off-line analysis

Summary
- Communication style: nesting: RPC only; convolution: RPC or free-form messages
- Rare events: nesting: yes, but hard; convolution: no
- Level of trace detail: nesting: needs the call/return tag in addition to (timestamp, sender, receiver); convolution: (timestamp, sender, receiver) only
- Time and space complexity: nesting: linear space, linear time; convolution: linear space, polynomial time
- Visualization: nesting: RPC call and return are combined, so the result is more compact; convolution: less compact

Maketrace
- A synthetic trace generator, needed for testing
  – Validate output for known input
  – Check corner cases
- Uses a set of causal path templates
  – All call and return messages, with latencies
  – Delays are x ± y seconds, drawn from a Gaussian (normal) distribution
- A recipe combines the paths
  – Parallelism, start/stop times for each path
  – Duration of the trace
(A small generator sketch follows.)
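A minimal, hypothetical generator sketch in the same spirit (not the authors' Maketrace tool): it expands one nested call-path template into CALL/RETURN records with Gaussian per-hop delays, using the trace record layout from earlier. The template shape and node names are made up.

# Hypothetical sketch of a synthetic-trace generator: expand a nested call-path
# template into (t, kind, src, dst, call_id) records with Gaussian delays.
import itertools
import random

_ids = itertools.count()

def gen_path(template, start, root_caller="client"):
    """template: nested (node, mean_delay_s, stddev_s, [child templates]) tuples."""
    records = []

    def visit(caller, tmpl, t_call):
        callee, mean, std, kids = tmpl
        cid = str(next(_ids))
        records.append((t_call, "CALL", caller, callee, cid))
        t = t_call
        for kid in kids:
            # each child call starts after a Gaussian "think time" at this node
            t = visit(callee, kid, t + abs(random.gauss(mean, std)))
        t_return = t + abs(random.gauss(mean, std))
        records.append((t_return, "RETURN", callee, caller, cid))
        return t_return

    visit(root_caller, template, start)
    return sorted(records)

if __name__ == "__main__":
    random.seed(0)
    template = ("web", 0.010, 0.002,
                [("app", 0.020, 0.005,
                  [("db", 0.030, 0.005, [])])])
    for record in gen_path(template, start=0.0):
        print(record)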

Desired results for one trace
- Causal paths
  – How often each occurs
  – How much time is spent on each
- Nodes
  – Host/component name
  – Time spent in the node and in all of the nodes it calls
- Edges
  – Time the parent waits before calling the child

Measuring Added Delay
- A 200 ms delay was added in WS2
- The nesting algorithm detects the added delay, and so does the convolution algorithm

Results: PetStore
- A sample EJB application on J2EE middleware for Java
  – Instrumentation from Stanford's Pinpoint project
- A 50 ms delay was added in mylist.jsp

Results: running time
- The paper's table reports trace length (messages), trace duration (sec), memory (MB), and CPU time (sec)
- Nesting algorithm: measured on the multi-tier (short), multi-tier, multi-tier (long; 2,026,658 messages), and PetStore (234,036 messages) traces
- Convolution algorithm (20 ms time step): measured on the PetStore trace
- More details and results in the paper

Accuracy vs. parallelism
- Increased parallelism degrades accuracy slightly
- Parallelism is the number of paths active at the same time

Other results for the nesting algorithm
- Clock skew
  – Little effect on accuracy with skew ≤ the delays of interest
- Drop rate
  – Little effect on accuracy with drop rates ≤ 5%
- Delay variance
  – Robust to ≤ 30% variance
- Noise in the trace
  – Only matters if the same nodes send the noise
  – Little effect on accuracy with ≤ 15% noise

Visualization GUI
- Goal: highlight dominant paths
- Paths are sorted
  – By frequency
  – By total time
- Red highlights
  – High-cost nodes
- Timeline
  – Nested calls
  – Dominant subcalls
- Time plots
  – Node time
  – Call delay

Related work
- Systems that trace end-to-end causality via modified middleware (e.g., modified JVM or J2EE layers)
  – Magpie (Microsoft Research), aimed at performance debugging
  – Pinpoint (Stanford/Berkeley), aimed at locating faults
  – Products such as AppAssure, PerformaSure, OptiBench
- Systems that make inferences from traces
  – Intrusion detection (Zhang & Paxson, LBL) uses traces plus statistics to find compromised systems

Future work
- Automate trace gathering and conversion
- Sliding-window versions of the algorithms
  – Find phased behavior
  – Reduce memory usage of the nesting algorithm
  – Improve speed of the convolution algorithm
- Validate usefulness on more complicated systems

Conclusions
- We are looking for bottlenecks in black-box systems
- Finding causal paths is enough to find bottlenecks
- Algorithms to find paths in traces really work
  – We find correct latency distributions
  – Two very different algorithms get similar results
  – Passively collected traces have sufficient information