Enabling Prediction of Performance


Enabling Prediction of Performance
September 16, 2004
Roger Skidmore, Wireless Valley Communications, Inc.

Abstract (doc.: IEEE 802.11-04/1009)
- Introduction
- What is Performance "Prediction"?
- Different Categories of Prediction
- How Does TGT Fit?
- Conclusion and Suggested Focus
Roger Skidmore, Pratik Mehta (Dell), et al.

Introduction
- One of the goals of TGT is to enable prediction of performance
- Effectively, anyone who needs to predict 802.11 performance is in TGT's audience
- Problem: how do we enable prediction of performance?
  - Prediction means different things to different people
  - What does it mean to predict performance?
- The purpose of this presentation is to take a first step toward how TGT can assist those who need to make predictions

What is Performance "Prediction"?
- Prediction means different things to different people
  - Is "prediction" a form of analysis, simulation, or something else entirely?
  - It depends on the user's level of comfort and confidence
- Most engineers are comfortable with measurements because they are "real", "repeatable", and "mean something", whereas predictions are viewed as dealing with "uncertainty"
  - This view is valid only to a limited degree
- It is important to consider that people make all kinds of predictions every day
  - For example, you cannot deploy an 802.11 network without making myriad assumptions and "predictions": a decision must be made about what to buy, where to put it, and how to configure it
  - Note that measurements taken during the design phase may not reflect the "real" situation once the network is live, and may not be very "repeatable" depending on the environment

What is Performance "Prediction"? (cont.)
- Performance prediction is a method of analysis that combines assumptions with accepted facts, using algorithms and/or logic, to reach a conclusion about the expected behavior of a device or group of devices under study
- The benefit is that (potentially) untenable problems are simplified and conclusions are reached more quickly that (hopefully) fall within an acceptable margin of error
- Note that there is another layer of complexity and diversity in the types of predictions one can perform
  - For example, what is a device? Am I planning a network, or building a better access point?

Different Categories of Predictions
- Performance predictions can be categorized many ways based on what is being predicted:
  - By layer (e.g., MAC performance, PHY performance)
  - Site-specific vs. non-site-specific
  - Physical network/device vs. logical network/device
  - Combinations of the above
- The person carrying out the prediction may have multiple goals, and these may even conflict
  - For example, minimize the number of APs but maximize coverage
- Cataloging all the types of performance predictions currently in use is outside the scope of this presentation; instead, look for commonalities across major categories

Commonalities Across Predictions
- Combine assumptions and accepted facts with an algorithm and/or logic to reach a conclusion
- For the most part: garbage in => garbage out
- Remember that the goal is to reach a conclusion within an acceptable margin of error
[Diagram: Assumptions + Accepted Facts -> Algorithm and/or Logic -> Conclusion]
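The common structure above can be sketched in code. This is a minimal illustration, not anything from the presentation: the inputs, the toy link-margin "algorithm", and all numeric values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PredictionInputs:
    assumptions: dict     # things we guess, e.g. path loss along a link
    accepted_facts: dict  # things we measured or were told, e.g. TX power

def predict_link_margin(inputs: PredictionInputs) -> float:
    """Toy algorithm: combine Assumptions and Accepted Facts to reach a
    Conclusion (link margin in dB). Names and values are illustrative only."""
    tx_power = inputs.accepted_facts["tx_power_dbm"]
    sensitivity = inputs.accepted_facts["rx_sensitivity_dbm"]
    path_loss = inputs.assumptions["path_loss_db"]
    return tx_power - path_loss - sensitivity

inputs = PredictionInputs(
    assumptions={"path_loss_db": 80.0},
    accepted_facts={"tx_power_dbm": 15.0, "rx_sensitivity_dbm": -85.0},
)
print(predict_link_margin(inputs))  # 15 - 80 - (-85) = 20.0 dB of margin
```

If the assumed path loss is wrong, the conclusion is wrong by the same amount: garbage in, garbage out.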

How Does TGT Help?
- TGT can help improve performance predictions in several ways:
  - Providing an increased number of Accepted Facts (e.g., how does a device respond when placed under certain conditions?)
  - Providing an increased number of valid Assumptions (e.g., is a device capable of supporting a certain traffic load?)
  - Providing increased confidence in the validity of Assumptions (e.g., how has a device performed under similar test conditions?)
  - Providing conclusions that can be used to empirically tune the Algorithm and/or refine the Logic (i.e., calibrate the prediction)
    - E.g., Device A produced this output under a certain set of conditions/inputs; reproduce those conditions/inputs in the performance prediction and tune the algorithm so that the predictive analysis matches the measured result as closely as possible
- The end result is a more accurate prediction of performance
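The calibration idea can be sketched as fitting a prediction model's free parameter against measured results. This is a generic illustration, assuming a one-parameter log-distance path-loss model; the model choice, measurement values, and reference loss are all hypothetical, not something the presentation specifies.

```python
import math

def predicted_loss_db(d_m: float, n: float, pl0: float = 40.0) -> float:
    # Log-distance model: PL(d) = PL(d0) + 10*n*log10(d/d0), with d0 = 1 m.
    # pl0 and n are illustrative values only.
    return pl0 + 10.0 * n * math.log10(d_m)

# Hypothetical measured results: (distance in m, measured path loss in dB)
measurements = [(5.0, 62.0), (10.0, 71.0), (20.0, 80.0)]

def calibrate(measurements, candidates):
    """Pick the path-loss exponent n that minimizes squared error between
    the algorithm's predictions and the measured results."""
    def sse(n):
        return sum((predicted_loss_db(d, n) - loss) ** 2
                   for d, loss in measurements)
    return min(candidates, key=sse)

# Grid-search n over 2.0 .. 4.0 in steps of 0.1
n_best = calibrate(measurements, [x / 10 for x in range(20, 41)])
print(n_best)  # 3.1 for the data above
```

Once calibrated, the same tuned exponent is reused to predict performance at locations that were never measured, which is the whole point of the exercise.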

Conclusion and Suggested Focus
- Most performance predictions today are done in software
- If TGT specified standard test conditions, methodologies, and a standard reporting format that could be easily parsed in software, much could be automated
- A specification for conducting and reporting tests across a range of variable inputs would allow for more precise device characterization
- The level of the test (e.g., what layer? what type of device?) determines which subset of performance predictions can utilize the data
  - For example, a battery life test may not contain data directly beneficial to an analysis of an access point's coverage area
- It will be impossible to address all possible tests, so identify key tests with the most direct benefit across all of TGT's audience
  - Even standardized generic tests will still be beneficial to performance predictions
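To make the automation point concrete, here is a sketch of a prediction tool ingesting a machine-readable test report. The schema, device name, field names, and values below are entirely hypothetical; no such TGT format existed at the time of the presentation.

```python
import json

# Hypothetical standardized test report in a machine-readable format.
report_text = """
{
  "device": "AP-1234",
  "test": "receiver_sensitivity",
  "layer": "PHY",
  "conditions": {"channel": 6, "temperature_c": 25},
  "results": [{"rate_mbps": 54, "sensitivity_dbm": -68},
              {"rate_mbps": 6,  "sensitivity_dbm": -88}]
}
"""

report = json.loads(report_text)

def sensitivity_for_rate(report: dict, rate_mbps: int):
    """Look up a measured sensitivity to feed into a prediction
    as an Accepted Fact; returns None if the report lacks the rate."""
    for row in report["results"]:
        if row["rate_mbps"] == rate_mbps:
            return row["sensitivity_dbm"]
    return None

print(sensitivity_for_rate(report, 54))  # -68
```

Because the report is structured rather than free-form, a planning tool could pull Accepted Facts from many vendors' reports without any manual transcription.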