Reporting Metrics: Different Points of View (revisions and discussion) Al Morton, Gomathi Ramachandran, Ganga Maguluri December 3, 2007 draft-morton-ippm-reporting-metrics-03


Reporting Metrics: Different Points of View (revisions and discussion)
Al Morton, Gomathi Ramachandran, Ganga Maguluri
December 3, 2007
draft-morton-ippm-reporting-metrics-03
"Our plans miscarry because they have no aim. When a man does not know what harbor he is making for, no wind is the right wind." (Seneca)

Page 2: Different Points of View (POV): 2 key ones
When designing IP measurements and reporting results, you MUST know the Audience to be relevant.
Key question: "How will the results be used?"
[Figure: measurement path from SRC to DST, with one question per point of view]
Network Characterization ("How can I _______ my network?"):
 Monitoring (QA)
 Trouble-shooting
 Modeling
 SLA (or verification)
Application Performance Estimation ("What happened to my stream?"):
 Metrics facilitate the process
 Transfer-dependent aspects
 Modify App Design

Page 3: New Section: Long-Term Reporting
Section 6 now:
 Summarizes results for each metric: loss, delay, delay variation.
 Discusses Long-Term Reporting.
Measurement Intervals need not be the same length as "Long" Reporting Intervals (days, weeks, months).
Long Measurements come with some risks:
 Temporary power failure: lose results to date.
 Timing signal outage invalidating some measurements.
 Maintenance on the measurement system, or its connectivity.
Relatively short Measurement Intervals can help to:
 match user session length
 allow dual use of measurements in monitoring activities

Page 4: Approaches to Measurement Aggregation
Store all the singletons of the Measurement Intervals:
 Evaluate all singletons in the Reporting Interval.
Methods like those envisioned in "Framework for Metric Composition", draft-ietf-ippm-framework-compagg-05, for Temporal Aggregation:
 Produce an estimate of the metric for the Reporting Interval using a deterministic process to combine the metrics from the measurement intervals.
Use a numerical objective for the metric, and compare the results of each measurement interval:
 Every measurement interval where the results meet the objective contributes to the fraction of time with performance as specified.
 Present the results as "metric A was less than or equal to objective X during Y% of the time."
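As a rough illustration of the third approach only (not part of the draft), the sketch below computes the "Y% of the time" figure from per-interval results; the interval values, objective, and function name are hypothetical and assume equal-length measurement intervals with no gaps.

```python
def compliance_fraction(interval_results, objective):
    """Fraction of measurement intervals whose result meets the objective.

    interval_results: one metric value per measurement interval (e.g., a loss
    ratio or a delay percentile); objective: numerical objective X, where an
    interval complies when its result is <= X.
    """
    if not interval_results:
        raise ValueError("no measurement intervals to aggregate")
    compliant = sum(1 for r in interval_results if r <= objective)
    return compliant / len(interval_results)

# Example: one-way delay 99th percentile per 10-minute interval, objective 50 ms
percentiles_ms = [42.0, 47.5, 55.2, 44.1, 49.9, 61.0]
y = 100.0 * compliance_fraction(percentiles_ms, objective=50.0)
print(f"Metric A was <= 50 ms during {y:.1f}% of the time")
```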

Page 5: Version 04 Resolved Steve Konish's Comments
Section 4 separates the discussion of delay for each point of view. Section 3 (Loss) would benefit from the same organization: synopsis at beginning and end.
 New sub-sections help with this.
"What do you mean when saying the processing 'forks' in this context: section and again in section 4.3?"
 Think "forks in the road": no eating utensils in the text now.
 (4.3) "... where subsequent processing depends on whether the packet arrives or times out, ..."
Other editorial comments:
 They all help, thanks!

Page 6: Summary of Recommendations so far
Set a LONG Loss threshold:
 Distinguish between Long Finite Delay and Loss.
 Avoid truncated distributions.
Delay of Lost Packets is UNDEFINED:
 Maintain orthogonality: avoid double-counting defects.
 Use conditional distributions and compute statistics.
Report BOTH Loss and Delay.
Report BOTH the Sample Mean and Median:
 Comparison of the Mean and Median is informative.
 Means may be combined over time and space (when applicable).
 Means come with a weighting function for each sample (if needed) and the sample size; Loss simply reduces the sample size.
 Means are more robust to a single wonky measurement when the sample size is large.
Move the Industry Away from "Average Jitter":
 Use the 99.9%-ile minus the minimum PDV.
 Portray this as a Delay Variation "Pseudo-Range".
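A minimal sketch of two of these recommendations (conditional delay statistics computed over received packets only, and the Delay Variation "pseudo-range" as a high percentile minus the minimum); the sample data, the nearest-rank percentile convention, and the function names are illustrative assumptions, not part of the draft.

```python
import math
import statistics

def percentile(values, p):
    """Nearest-rank percentile (one simple convention among several)."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100.0 * len(ordered)))
    return ordered[rank - 1]

def delay_stats(one_way_delays_ms):
    """Conditional delay statistics: lost packets (delay undefined, given as
    None) only reduce the sample size; they never enter the distribution."""
    received = [d for d in one_way_delays_ms if d is not None]
    if not received:
        raise ValueError("no packets received in this interval")
    return {
        "sample_size": len(received),
        "loss_ratio": 1.0 - len(received) / len(one_way_delays_ms),
        "mean_ms": statistics.mean(received),
        "median_ms": statistics.median(received),
        # Delay Variation "pseudo-range": a high percentile minus the minimum
        # delay, instead of an "average jitter" figure.
        "pdv_pseudo_range_ms": percentile(received, 99.9) - min(received),
    }

# Example: six singletons from one measurement interval, one lost packet (None)
print(delay_stats([20.1, 20.4, None, 21.0, 20.2, 35.7]))
```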

Page 7: What's Next?
Homework for IETF-70:
 Did you read either draft?
Proposal:
 Make this draft the basis for non-short-term reporting.
 Complement to the current (short-term) draft, without the restrictions brought on by producing a result every 10 seconds.