Doc.: IEEE 802.11-06/0333r0, Submission, March 2006. Dr. Michael D. Foegelle, ETS-Lindgren. Introduction to Measurement Uncertainty.


Slide 1: Introduction to Measurement Uncertainty

Notice: This document has been prepared to assist IEEE 802.11. It is offered as a basis for discussion and is not binding on the contributing individual(s) or organization(s). The material in this document is subject to change in form and content after further study. The contributor(s) reserve(s) the right to add, amend or withdraw material contained herein.

Release: The contributor grants a free, irrevocable license to the IEEE to incorporate material contained in this contribution, and any modifications thereof, in the creation of an IEEE Standards publication; to copyright in the IEEE's name any IEEE Standards publication even though it may include portions of this contribution; and at the IEEE's sole discretion to permit others to reproduce in whole or in part the resulting IEEE Standards publication. The contributor also acknowledges and accepts that this contribution may be made public by IEEE 802.11.

Patent Policy and Procedures: The contributor is familiar with the IEEE 802 Patent Policy and Procedures, including the statement "IEEE standards may include the known use of patent(s), including patent applications, provided the IEEE receives assurance from the patent holder or applicant with respect to patents essential for compliance with both mandatory and optional portions of the standard." Early disclosure to the Working Group of patent information that might be relevant to the standard is essential to reduce the possibility for delays in the development process and increase the likelihood that the draft publication will be approved for publication. Please notify the Chair as early as possible, in written or electronic form, if patented technology (or technology under patent application) might be incorporated into a draft standard being developed within the IEEE 802.11 Working Group. If you have questions, contact the IEEE Patent Committee Administrator.

Slide 2: Abstract

This presentation introduces the common industry concept of Measurement Uncertainty to represent the quality of a measurement. Other common terms such as accuracy, precision, error, repeatability, and reliability are defined and their relationship to measurement uncertainty is shown. Basic directions on calculating uncertainty and an example are included.

Slide 3: Overview

Definitions
Measurement Uncertainty
–Type A Evaluations
–Type B Evaluations
–Putting It All Together – RSS
–Reporting Uncertainty
–Special Cases
Example Uncertainty Budget
Summary
References

Slide 4: Definitions

Error – the deviation of a measured result from the correct or accepted value of the quantity being measured. There are two basic types of errors: random and systematic.

Slide 5: Definitions

Random Errors – cause the measured result to deviate randomly from the correct value. The distribution of multiple measurements with only random error contributions will be centered around the correct value. Some examples:
–Noise (random noise)
–Careless measurements
–Low-resolution instruments
–Dropped digits

Slide 6: Definitions

Systematic Errors – cause the measured result to deviate by a fixed amount in one direction from the correct value. The distribution of multiple measurements with systematic error contributions will be centered some fixed value away from the correct value. Some examples:
–Mis-calibrated instrument
–Unaccounted cable loss

Slide 7: Definitions

Measurements typically contain some combination of random and systematic errors. Precision is an indication of the level of random error. Accuracy is an indication of the level of systematic error. Accuracy and precision are typically qualitative terms.

Slide 8: Definitions

Measurement Uncertainty combines these concepts into a single quantitative value representing the total expected deviation of a measurement from the actual value being measured.
–Includes a statistical confidence in the resulting uncertainty.
–Contains contributions from all components of the measurement system, requiring an understanding of the expected statistical distribution of these contributions.
–By definition, measurement uncertainty does not typically contain contributions due to the variability of the DUT. The "correct" value of a measurement is the value generated by the DUT at the time it is tested. Variability of the DUT cannot be pre-determined. Still, the uncertainty of a particular measurement result will include this variability.

Slide 9: Definitions

Repeatability refers to the ability to perform the same measurement on the same DUT under the same test conditions and get the same result over time. By repeating the test setup between measurements of a stable DUT, a statistical determination of System Repeatability can be made. This is simply the level of random error (precision) of the entire system, including the contribution of the test operator, setup, etc.

Slide 10: Definitions

Reproducibility typically refers to the stability of the DUT and the ability to reproduce the same measurement result over time using a system with a high level of repeatability. More generally, it refers to achieving the same measurement result under varied conditions:
–Different test equipment
–Different DUT
–Different operator
–Different location/test lab

Slide 11: Definitions

Reliability refers to producing the same result in statistical trials. This would typically refer to the stability of the DUT, and has connotations of operational reliability of the DUT.

Correction – a value added algebraically to the uncorrected result of a measurement to compensate for systematic error.

Correction Factor – a numerical factor by which the uncorrected result of a measurement is multiplied to compensate for systematic error.

Resolution – indicates the numerical uncertainty of the test equipment readout. Actual uncertainty may be larger.

Slide 12: Measurement Uncertainty

A measurement uncertainty represents a statistical level encompassing the remaining unknown error in a measurement. If the actual value of an error is known, then it is not part of the measurement uncertainty; rather, it should be used to correct the measurement result. The methods for determining a measurement uncertainty are divided into two generic classes:
–Type A evaluation produces a statistically determined uncertainty based on a normal distribution.
–Type B evaluation represents uncertainties determined by any other means.

Slide 13: Type A Evaluations

Uncertainties are determined through Type A evaluation by performing repeated measurements and determining the statistical distribution of the results. This approach works primarily for random contributions.
–Repeated measurements with systematic deviations from a known correct value give an error value that should be corrected for. However, when evaluating the resulting measurement, the effects of many systematic uncertainties combine with random uncertainties in such a way that their effect can be determined statistically.
–E.g., a systematic offset in temperature can cause an increase in the random thermal noise in the measurement result.

Slide 14: Type A Evaluations

Type A evaluation is based on the experimental standard deviation of repeated measurements, which for n measurements with results q_k and average value q̄ is approximated by:

s(q_k) = sqrt[ (1/(n−1)) · Σ_{k=1..n} (q_k − q̄)² ]

The standard uncertainty contribution u_i of a single measurement q_k is given by:

u_i = s(q_k)

If n measurements are averaged together, this becomes:

u_i = s(q̄) = s(q_k) / √n
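The Type A formulas above can be sketched directly in code; the sample readings below are hypothetical, purely to illustrate the calculation:

```python
import math

def type_a_uncertainty(samples, averaged=False):
    """Type A evaluation: experimental standard deviation s(q_k) of
    repeated measurements. If the reported result is the average of
    the n samples, the standard uncertainty is s(q_k)/sqrt(n)."""
    n = len(samples)
    q_bar = sum(samples) / n
    s = math.sqrt(sum((q - q_bar) ** 2 for q in samples) / (n - 1))
    return s / math.sqrt(n) if averaged else s

# Hypothetical repeated power readings (dBm)
readings = [-40.1, -39.8, -40.3, -40.0, -39.9]
u_single = type_a_uncertainty(readings)               # u_i of one reading
u_mean = type_a_uncertainty(readings, averaged=True)  # u_i of the average
```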

Slide 15: Type B Evaluations

For cases where Type A evaluation is unavailable or impractical, and to cover contributions not included in the Type A analysis, a Type B analysis is used:
–Determine the potential contributions to the total measurement uncertainty.
–Determine the uncertainty value for each contribution, from a Type A evaluation, a manufacturer's datasheet, or an estimated limit value. Note: the contribution must be in terms of the variation in the measured quantity, not the influence quantity.
–For each contribution, choose the expected statistical distribution and determine its standard uncertainty.
–Combine the resulting u_i values and calculate the expanded uncertainty.

Slide 16: Type B Evaluations

There are a number of common distributions for uncertainty contributions.

Normal distribution:

u_i = U_i / k

where U_i is the expanded uncertainty of the contribution and k is the coverage factor (k = 2 for 95% confidence). Examples:
–Results of Type A evaluations
–Expanded uncertainties of components

Slide 17: Type B Evaluations

Rectangular distribution – the measurement result has an equal probability of being anywhere within the range −a_i to +a_i:

u_i = a_i / √3

Examples:
–Equipment manufacturer ± accuracy values (not from a standard uncertainty budget)
–Equipment resolution limits
–Any term where only a maximal range of error is known

Slide 18: Type B Evaluations

U-shaped distribution – the measurement result has a higher likelihood of being some value above or below the median than being at the median:

u_i = a_i / √2

Examples:
–Mismatch (VSWR)
–Distribution of a sine wave
–5% resistors (culling)

Slide 19: Type B Evaluations

Triangular distribution – a non-normal distribution with linear fall-off from maximum to zero:

u_i = a_i / √6

Examples:
–An alternative to the rectangular or normal distribution when the distribution is known to peak at the center and has a known maximum expected value.
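The distributions above all reduce to dividing a half-width (or expanded uncertainty) by a standard divisor; a minimal sketch, assuming k = 2 for the normal case and a hypothetical ±0.5 dB accuracy spec:

```python
import math

# Divisors for converting a contribution's half-width a_i (or expanded
# uncertainty U_i) to a standard uncertainty u_i, per distribution.
DIVISORS = {
    "normal": 2.0,               # U_i / k, coverage factor k = 2 (95%)
    "rectangular": math.sqrt(3),
    "u-shaped": math.sqrt(2),
    "triangular": math.sqrt(6),
}

def standard_uncertainty(half_width, distribution):
    """Convert a contribution's half-width to its standard uncertainty."""
    return half_width / DIVISORS[distribution]

# Hypothetical: a ±0.5 dB instrument accuracy spec treated as rectangular
u = standard_uncertainty(0.5, "rectangular")  # ~0.289 dB
```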

Slide 20: Type B Evaluations – Another Look

Slide 21: Putting It All Together – RSS

Once standard uncertainties have been determined for all components, including any Type A analysis, they are combined into a total standard uncertainty (the combined standard uncertainty, u_c) for the resultant measurement quantity using the root-sum-of-squares method:

u_c = sqrt( Σ_{i=1..N} u_i² )

where N is the number of standard uncertainty components in the Type B analysis. The combined standard uncertainty is assumed to have a normal distribution.

Slide 22: Reporting Uncertainty

The standard uncertainty is the common term used for calculations. It represents a ±1σ span (~68%) of a normal distribution. Typically, measurement uncertainties are expressed as an Expanded Uncertainty, U = k·u_c, where k is the coverage factor. A coverage factor of k = 2 is typically used, representing a 95% confidence that the measured value is within the specified measurement uncertainty. Reporting of expanded uncertainties must include both the uncertainty value and either the coverage factor or the confidence interval in order to assure proper use.
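The RSS combination and the k = 2 expansion can be sketched together; the component values here are hypothetical:

```python
import math

def combined_standard_uncertainty(u_components):
    """Root-sum-of-squares combination of standard uncertainties u_i."""
    return math.sqrt(sum(u ** 2 for u in u_components))

def expanded_uncertainty(u_c, k=2):
    """Expanded uncertainty U = k * u_c (k = 2 for ~95% confidence)."""
    return k * u_c

# Hypothetical standard uncertainties, in dB
u_i = [0.29, 0.10, 0.15, 0.21]
u_c = combined_standard_uncertainty(u_i)
U = expanded_uncertainty(u_c)  # report with the coverage factor: U (k = 2)
```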

Slide 23: Special Cases

For Type A analyses with only a small number of samples, the standard coverage factor is insufficient to ensure that the expanded uncertainty covers the expected confidence interval. A variable coverage factor k_p, which depends on the number of samples N, must be used instead.

RSS math works for values in dB! However, the distribution of a linear value may change when converted to dB.
–Uncertainties are typically determined in measurement output units.
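The variable coverage factor k_p for a 95% confidence interval comes from the Student's t distribution with ν = n − 1 degrees of freedom; a sketch using an excerpt of the standard two-sided 95% t table (the fallback to k = 2 for untabulated ν is a simplification for illustration):

```python
# Two-sided 95% Student's t values, indexed by degrees of freedom
# (excerpt of the standard table; nu = n - 1 for n repeat measurements).
T_95 = {1: 12.71, 2: 4.30, 3: 3.18, 4: 2.78, 5: 2.57,
        9: 2.26, 19: 2.09}

def coverage_factor_95(n):
    """k_p for a Type A evaluation with n samples; falls back to the
    large-sample normal approximation k = 2 when nu is beyond the
    table excerpt."""
    nu = n - 1
    return T_95.get(nu, 2.0)

# With only 4 measurements, k_p = 3.18 rather than 2:
k_p = coverage_factor_95(4)
```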

Slide 24: Special Cases

Not all distributions are symmetrical!
–Asymmetrical uncertainties (+X/−Y) can be developed by treating asymmetric inputs separately.
–Alternatively, the random portion of the uncertainty can be separated from the systematic portion and a systematic error correction applied to the measurement, converting the asymmetric uncertainty to a symmetric one:

error correction = (X − Y)/2, U = (X + Y)/2
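One way to carry out this conversion is to shift the result by the midpoint of the interval [−Y, +X] and keep its half-width as the symmetric uncertainty; the example values are hypothetical:

```python
def symmetrize(plus_x, minus_y):
    """Convert an asymmetric uncertainty interval [-Y, +X] into a
    systematic error correction (the interval midpoint) plus a
    symmetric uncertainty (the interval half-width)."""
    correction = (plus_x - minus_y) / 2.0  # midpoint of [-Y, +X]
    u_sym = (plus_x + minus_y) / 2.0       # half-width of [-Y, +X]
    return correction, u_sym

# Hypothetical: a +0.8 / -0.4 dB asymmetric uncertainty
correction, U = symmetrize(0.8, 0.4)  # shift by 0.2 dB, report ±0.6 dB
```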

Slide 25: Example Uncertainty Budget
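The budget table on this slide did not survive the transcript. A hypothetical budget assembled from the divisors and RSS method on the preceding slides (all component names and values invented for illustration) might look like:

```python
import math

# Hypothetical budget: (component, half-width or U_i in dB, divisor)
BUDGET = [
    ("Signal generator level accuracy", 0.50, math.sqrt(3)),  # rectangular
    ("Cable loss calibration",          0.20, 2.0),           # normal, k=2
    ("Mismatch (VSWR)",                 0.30, math.sqrt(2)),  # U-shaped
    ("Receiver repeatability (Type A)", 0.15, 1.0),           # already a u_i
]

# Combine each standard uncertainty u_i = a_i / divisor by RSS,
# then expand with k = 2 for ~95% confidence.
u_c = math.sqrt(sum((a / div) ** 2 for _, a, div in BUDGET))
U = 2 * u_c

for name, a, div in BUDGET:
    print(f"{name:35s} u_i = {a / div:.3f} dB")
print(f"Combined standard uncertainty u_c = {u_c:.3f} dB")
print(f"Expanded uncertainty U (k=2)      = {U:.3f} dB")
```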

Slide 26: Summary

This presentation gives common definitions for various terms that have been used and misused in the TGT draft. The concept of measurement uncertainty has been introduced as the industry-standard replacement for terms such as accuracy, precision, repeatability, etc. Basic information has been given for a general knowledge of the concepts and components of measurement uncertainty. This document is not intended as a reference! Please refer to the published documents referenced here.

Slide 27: References

1. NIST Technical Note, "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results", Barry N. Taylor and Chris E. Kuyatt.
2. NIS-81, "The Treatment of Uncertainty in EMC Measurements", NAMAS.
3. ISO/IEC 17025, "General requirements for the competence of testing and calibration laboratories".