Environment and Metrics: Laboratory vs. Real World

Doc.: IEEE 802.11-05/1582r0
January 2005

Environment and Metrics: Laboratory vs. Real World
Date: 2005-01-04
Author: Dr. Michael D. Foegelle, ETS-Lindgren

Notice: This document has been prepared to assist IEEE 802.11. It is offered as a basis for discussion and is not binding on the contributing individual(s) or organization(s). The material in this document is subject to change in form and content after further study. The contributor(s) reserve(s) the right to add, amend or withdraw material contained herein.

Release: The contributor grants a free, irrevocable license to the IEEE to incorporate material contained in this contribution, and any modifications thereof, in the creation of an IEEE Standards publication; to copyright in the IEEE's name any IEEE Standards publication even though it may include portions of this contribution; and at the IEEE's sole discretion to permit others to reproduce in whole or in part the resulting IEEE Standards publication. The contributor also acknowledges and accepts that this contribution may be made public by IEEE 802.11.

Patent Policy and Procedures: The contributor is familiar with the IEEE 802 Patent Policy and Procedures <http://ieee802.org/guides/bylaws/sb-bylaws.pdf>, including the statement "IEEE standards may include the known use of patent(s), including patent applications, provided the IEEE receives assurance from the patent holder or applicant with respect to patents essential for compliance with both mandatory and optional portions of the standard." Early disclosure to the Working Group of patent information that might be relevant to the standard is essential to reduce the possibility for delays in the development process and increase the likelihood that the draft publication will be approved for publication. Please notify the Chair <stuart.kerry@philips.com> as early as possible, in written or electronic form, if patented technology (or technology under patent application) might be incorporated into a draft standard being developed within the IEEE 802.11 Working Group. If you have questions, contact the IEEE Patent Committee Administrator at <patcom@ieee.org>.

Abstract
There remains a level of confusion and disagreement as to the goals of TGT. This presentation attempts to define some basic concepts more clearly in an effort to illustrate the need for certain approaches. The goal is then to determine if and how we can satisfy the needs of the various groups involved in TGT given this information. From there, a better understanding of the document requirements (framework) can be reached.

Overview
- Environment vs. Metrics
- Environments
- Metrics
- Measurement Framework: Environment

Environment vs. Metrics
There tends to be some confusion when discussing usage cases and the impact of the real world on those cases. It is important to show that the environment can be considered completely independent of the application. Environmental effects are inputs to the low-level metrics, which eventually impact application-level performance. Methodology for measuring application-level metrics, or corresponding methods of predicting application-level performance from lower-level metrics, is far removed from the inputs to those low-level metrics.

Environments

Environments
Real-world environments represent the actual user experience, in that particular environment.
- Any sufficiently rigorous test method should be able to reproduce the same results in that same environment. If the tests are designed properly, these results should represent the user experience in that environment.
- A given real-world environment is likely to demonstrate effects similar to those of another real-world environment.
- Qualitative measurements of the relative performance between two or more devices are possible. The relative differences between devices should be similar across different real-world environments, given appropriate methodology.
- Absolute performance metrics of an individual device are not practical in a real-world environment: you cannot compare DUT1 in RWE1 to DUT2 in RWE2.

Environments Case Study: Real-World Environment Testing
Assume two different real-world environments. Three APs are tested by placing each one in the same location and comparing performance across a number of clients placed at various positions (varying distance and relationship to the AP). Similar clients are used in both cases.

Environments Case Study: Real-World Environment Testing, Comparison #1
Identical clients are placed the same distance from the AP. Environment #1 yields a throughput of 5 Mb/s. What is the throughput in Environment #2?

Environments Case Study: Real-World Environment Testing, Comparison #2
AP2 is substituted for AP1 in the exact same position. Environment #1 now yields a throughput of 2 Mb/s. What is the throughput in Environment #2? Is AP1 better than AP2? Does Environment #2 prove this too?

Environments Case Study: Real-World Environment Testing, Comparison #3
A client is moved away from each AP in a straight line, in 5 m steps, until the throughput drops from its maximum. In Environment #1, AP1's throughput drops at 25 m and AP2's at 35 m. At what distance does throughput drop in Environment #2? Is AP1 better than AP2? Does Environment #2 prove this too?

Environments Case Study: Real-World Environment Testing
- Absolute comparisons between environments are obviously impractical. The meaning of distance in one environment can be considerably different from that in another, due to reflections, etc.
- Relative comparisons between DUTs are not guaranteed to be equivalent between different real-world environments.
- Without specific controls, it is possible to confuse issues related to the test setup in a specific test environment with DUT performance issues (case in point: the methodology described above).
- Standardization of methodology would have to address these setup issues in an effort to compensate for the non-standardized environment. This is likely to entail specialized testing of the environment to determine the levels of contributing factors before each test.
- A statistical approach would be necessary to remove fixed errors. For example, the location and orientation of the antenna may be more critical than the location and orientation of the base; thus, multiple locations and orientations of the DUT and antenna are necessary to determine the relevant factors (see the sketch below).
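
To make the last point concrete, here is a minimal Python sketch (not part of the original presentation, with purely illustrative numbers) of the statistical treatment implied: repeat the measurement over several DUT positions and antenna orientations, then use the mean and spread to judge whether placement effects dominate the result.

```python
import statistics

# Hypothetical throughput samples (Mb/s) for one DUT, repeated over
# several DUT positions and antenna orientations within the same
# real-world environment. Values are illustrative only.
samples = {
    ("pos1", "0deg"): 5.1, ("pos1", "90deg"): 3.8,
    ("pos2", "0deg"): 4.6, ("pos2", "90deg"): 5.4,
    ("pos3", "0deg"): 2.9, ("pos3", "90deg"): 4.2,
}

rates = list(samples.values())
mean = statistics.mean(rates)
spread = statistics.stdev(rates)

print(f"mean throughput: {mean:.2f} Mb/s")
print(f"spread across positions/orientations: {spread:.2f} Mb/s")
# A large spread relative to the mean signals that position- and
# orientation-dependent effects (multipath, near-field coupling)
# dominate, and that any single fixed placement would mislead.
```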

Environments
Controlled laboratory environments provide ways of simulating real-world behavior in a repeatable, comparable, and traceable manner.
- The laboratory environment includes both the DUT test environment and the test equipment used to simulate real-world effects.
- The DUT test environment can be kept conceptually simple so that it is easily replicated, allowing tests to be duplicated by anyone.
- Traceability of laboratory test equipment is a common practice and allows results from different labs to be compared with a known confidence (uncertainty) level; a sketch of such a comparison follows. It is not necessary to test two devices in the same lab in order to compare results.
- Traceability is to a standardized calibration, test method, and/or test system design.
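
As an illustration of what "compared with a known confidence (uncertainty) level" means in practice, here is a small Python sketch, not from the slides, of the usual consistency check between two labs' results: the two values agree if their difference falls within the combined expanded uncertainty.

```python
import math

def results_agree(x1, u1, x2, u2, k=2.0):
    """Check whether two lab results agree within their combined
    expanded uncertainty (the usual E_n-style criterion).

    x1, x2: measured values (e.g., throughput in Mb/s)
    u1, u2: standard uncertainties of each lab's measurement
    k: coverage factor (k=2 gives roughly 95% confidence)
    """
    expanded = k * math.sqrt(u1**2 + u2**2)
    return abs(x1 - x2) <= expanded

# Hypothetical example: two labs measure the same DUT.
print(results_agree(5.0, 0.2, 5.3, 0.25))  # True: consistent results
print(results_agree(5.0, 0.2, 6.5, 0.25))  # False: the labs disagree
```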

Environments
Environmental effects can be separated into two basic categories:
- Purely physical: separation distance/path loss, multipath fading, adjacent-channel interference, noise.
- Systemic (MAC): network loading, hidden nodes.
Real-world environments combine all of these simultaneously at varying levels; lab environments can simulate each one individually.
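
For the purely physical category, the first two effects are routinely simulated with standard channel models. A brief Python sketch under textbook assumptions (not anything specified in the presentation): log-distance path loss plus a Rayleigh multipath fade.

```python
import math
import random

def path_loss_db(d_m, f_mhz=2440.0, n=3.0, d0_m=1.0):
    """Log-distance path loss model: free-space loss at reference
    distance d0 plus 10*n*log10(d/d0), with path-loss exponent n
    (~2 in free space, ~3-4 indoors)."""
    fspl_d0 = 20 * math.log10(d0_m) + 20 * math.log10(f_mhz) - 27.55
    return fspl_d0 + 10 * n * math.log10(d_m / d0_m)

def rayleigh_fade_db():
    """One Rayleigh-fading sample in dB, modeling non-line-of-sight
    multipath as a unit-mean-power complex Gaussian channel gain."""
    re, im = random.gauss(0, 1), random.gauss(0, 1)
    return 10 * math.log10((re**2 + im**2) / 2)

# Received power at 25 m for a 20 dBm transmitter, with and
# without a multipath fade on top of the mean path loss.
tx_dbm = 20.0
pl = path_loss_db(25.0)
print(f"mean path loss: {pl:.1f} dB -> {tx_dbm - pl:.1f} dBm")
print(f"with fading:    {tx_dbm - pl + rayleigh_fade_db():.1f} dBm")
```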

Environments
We will expand on this topic further after the next section, on Metrics.

Metrics
There seems to be an inherent assumption that real-world testing implies real-world (application-level) metrics, while laboratory testing implies low-level (sub-metric) testing. While this seems logical, it doesn't have to be so: there is no reason application-level metrics can't be measured in a laboratory environment.

Metrics
Application-level metrics entail directly measuring QoS performance exactly as the user encounters it. User experience is both qualitative and quantitative. Qualitative measurements are often:
- Subjective: "Sounds good to me..."
- Expensive: user panels.
- Difficult to repeat.
- Hard to compare.
Quantitative measurements are more useful, but may be difficult to perform at the application level for some usage cases. They imply a measurement system inherent in the application, or a custom application built for the purpose of measurement; the latter is no longer truly a real-world case (a sketch of such a measurement application follows).
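
A minimal Python sketch of what such a "custom application for the purpose of measurement" might look like. The host address is a placeholder, and a listening sink (e.g., an iperf-style server) is assumed on the far side of the wireless link; goodput is measured above the socket API, so client-side driver and stack effects are included in the result.

```python
import socket
import time

def measure_throughput(host="192.0.2.10", port=5001, duration=10.0):
    """Send bulk data for `duration` seconds and report the
    application-level goodput in Mb/s, as a user would see it."""
    sock = socket.create_connection((host, port))
    payload = b"\x00" * 65536
    sent = 0
    start = time.monotonic()
    while time.monotonic() - start < duration:
        sock.sendall(payload)
        sent += len(payload)
    elapsed = time.monotonic() - start
    sock.close()
    return sent * 8 / elapsed / 1e6

if __name__ == "__main__":
    print(f"{measure_throughput():.2f} Mb/s")
```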

Metrics
All applications (usage cases) share the effects of basic sub-metrics such as throughput, delay, jitter, etc. The exact relationship of these sub-metrics to the usage-case metric(s) is currently undefined (a sketch of one standard sub-metric calculation follows).
There is a difference between application-level testing and testing with a particular usage-case application (e.g., VoIP, streaming video, etc.):
- Application-level testing involves testing a given metric at the Application Layer of the ISO model. This ensures that effects due to things like drivers are accounted for.
- Application testing (usage-case testing) involves testing a given application or usage case to determine its performance, thus combining the effects of all of the basic metrics into one measurement. By definition, this occurs at the Application Layer.
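
Of the sub-metrics named above, jitter is the least obvious to compute; one common definition is the RFC 3550 interarrival-jitter estimator. A short Python sketch with made-up timestamps (illustrative only, not from the presentation):

```python
def rfc3550_jitter(timestamps):
    """timestamps: list of (sent, received) pairs in seconds for
    successive packets. Returns the smoothed interarrival jitter
    per RFC 3550: J += (|D| - J) / 16, where D is the change in
    one-way transit time between consecutive packets."""
    jitter = 0.0
    prev_transit = None
    for sent, received in timestamps:
        transit = received - sent
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

# Illustrative packets sent every 20 ms with varying network delay:
pkts = [(0.000, 0.050), (0.020, 0.072), (0.040, 0.091), (0.060, 0.115)]
print(f"jitter: {rfc3550_jitter(pkts) * 1000:.2f} ms")
```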

Metrics
Currently, client testing requires application-level (Application Layer) testing anyway: without a test API, some sort of test-traffic client application must be run to create or record the traffic used for testing. A test API may allow testing at a lower level, but this would provide additional diagnostic capability without necessarily eliminating the need to perform application-level tests.
Tests of an AP may or may not involve higher layers, depending on the metric or sub-metric being tested:
- Forwarding rate through an AP from client to client does not involve the application layer of the AP.
- Server-side metrics at higher network layers may affect end-user performance, but are these part of the AP's performance metric? Certainly not part of its wireless performance! This is a question of the scope of TGT.

Measurement Framework: Metrics
It is apparent that application-level testing is necessary, at least on the client side, if we are to ensure that we have tested all components of a device. Application (usage-case) testing is a slightly greyer area:
- Ideally, any application-specific testing would be standardized. This requires specialized applications for measuring things like video frame rate, audio reproduction quality, etc.
- Making the link between sub-metrics and the associated performance of a given usage case should be relatively straightforward; there are already applications for measuring throughput, etc., at the client application level.
- There is an opportunity for some R&D in this area.

Measurement Framework: Environment
The usefulness of real-world test environments is less obvious; there are significant limitations to the usefulness of data generated in this manner.
- In light of the preceding metrics discussion, much of the value of real-world testing lies in application-level metrics, not in real-world environments.
- Most metrics can be measured in a low-cost conducted laboratory environment, giving less credence to concerns over the cost of testing.
- The fact that "anyone can do it" is not sufficient to justify incorporating real-world testing into TGT.
However, real-world testing does have a place.

Measurement Framework: Environment (two figure-only slides; the diagrams were not captured in the transcript)

Measurement Framework: Environment
Real-world environment tests are useful for:
- Initial R&D work: correlation with laboratory tests to develop appropriate models, and validation of those models. Such R&D work is an input to TGT, so should the associated test methodology really be an output?
- Verification of predicted results: allows the user to confirm that an installation is performing as expected, and allows adjustment of model inputs after testing a small sub-installation before completing the entire installation. Methodology for this purpose is a legitimate output of TGT.
There is a level of political pressure for real-world product testing; it is not clear whether this is an education issue, a cost issue, or simply because it is what some have implemented to date.

Measurement Framework: Environment
Real-world environment tests need a level of standardization to obtain useful results (refer to the previous case study). A statistical approach is critical: move the DUT(s) roughly half a wavelength in each direction and repeat the test to determine multipath and/or near-field contributions (see the sketch below).
Real-world environment tests ARE NOT SUBSTITUTES for controlled-environment tests! The real long-term value of TGT will be to produce standardized tests, performed in controlled environments, that can simulate all critical real-world interactions and produce results suitable for comparing products and predicting real-world performance without the need for real-world tests.
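
For reference, half a wavelength is only a few centimeters at Wi-Fi frequencies, so the repositioning steps are small. A quick computation (basic physics, not from the slides):

```python
# Half-wavelength step size for the two 802.11 bands of interest.
C = 299_792_458.0  # speed of light, m/s

for band_ghz in (2.44, 5.25):
    half_wl_cm = C / (band_ghz * 1e9) / 2 * 100
    print(f"{band_ghz} GHz: half wavelength = {half_wl_cm:.1f} cm")
# 2.44 GHz -> ~6.1 cm; 5.25 GHz -> ~2.9 cm. Moving the DUT in steps
# of this size decorrelates the standing-wave (multipath) pattern.
```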

Conclusion
- It is important to separate test metrics from test environments.
- Both application-level and low-level metrics are useful, and both can be measured in the same test environment.
- A laboratory test environment can simulate the necessary real-world effects and provide the traceability needed for product comparison and prediction.
- Real-world environment tests are useful for R&D validation and for verification of predicted results, but a level of standardization is required to ensure the validity of the results.
- Real-world environment tests are NOT suitable for product-comparison testing or for predicting performance in any other environment.
- Real-world environment tests ARE NOT SUBSTITUTES for controlled-environment tests!