
1 LECTURE 5
1. Michelson (ether: 1887, 1907), Einstein (1905).
2. Position measurement: z0. What type of measurement is it?
3. White noise is a zero-mean uncorrelated noise.
4. Wiener-Khinchin theorem: Einstein (1914), Khinchin, Kolmogorov (1934), Wiener (1949).
HOMEWORK: [10] L. Cohen, "The history of noise: on the 100th anniversary of its birth," IEEE Signal Processing Magazine, vol. 22, 2005.

2 Example: White noise
Example: white noise x(t) with a Gaussian or uniform amplitude distribution.
[Figure: a sample realization x(t) versus time t, and the corresponding amplitude pdf f(x).]

3 Contents
4. Measurement errors
4.1. Systematic errors
4.2. Random errors
4.2.1. Uncertainty and inaccuracy
4.2.2. Crest factor
4.3. Error sensitivity analysis
4.3.1. Systematic errors
4.3.2. Random errors

4 4. MEASUREMENT ERRORS
Practically all measurements of continuous quantities involve errors. Understanding the nature and sources of these errors helps in reducing their impact. In earlier times it was thought that errors in measurement could be eliminated by improvements in technique and equipment; most scientists now accept that this is not the case. The types of errors include systematic errors and random errors.

5 4.1. Systematic errors
Systematic errors are deterministic; they may be predicted and hence eventually removed from the data. Their source is the lack of absolute exactness of the measurement model. Systematic errors may be traced by a careful examination of the measurement path: from the measurement object, via the measurement system, to the observer. Another way to reveal a systematic error is to repeat the measurement with a different measurement method. NB: Systematic errors may change with time, so it is important that sufficient reference data be collected to allow the systematic errors to be quantified. Reference: [1]

6 Example: Measurement of the voltage source value
A voltage source $V_S$ with internal resistance $R_S$ (e.g., a temperature sensor) is connected to a measurement system with input resistance $R_{IN}$. The system actually measures

$$V_{IN} = k \cdot V_S, \qquad k = \frac{R_{IN}}{R_{IN} + R_S},$$

so $V_{IN} \ne V_S$: the loading of the source by the finite input resistance causes a systematic error.
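A minimal sketch of how this loading error can be quantified; the component values below are hypothetical, chosen only for illustration:

```python
# Loading error of a voltage source read out by a finite input resistance.
# All component values are assumed for illustration.
R_S = 1e3       # source (sensor) output resistance, ohm
R_IN = 1e6      # measurement-system input resistance, ohm
V_S = 5.0       # true source voltage, V

k = R_IN / (R_IN + R_S)   # divider factor, k < 1
V_IN = k * V_S            # voltage actually measured

print(f"k = {k:.6f}, V_IN = {V_IN:.6f} V")
print(f"systematic error: {V_IN - V_S:+.6f} V ({(k - 1) * 100:.4f} %)")
```

Because $k$ is deterministic, the error can be corrected once $R_S$ and $R_{IN}$ are known: $V_S = V_{IN}/k$.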

7 4.2. Random errors. 4.2.1. Uncertainty and inaccuracy
Random errors vary unpredictably for every successive measurement of the same physical quantity, made with the same equipment under the same conditions. We cannot correct random errors, since we have no insight into their cause and since they result in random (non-predictable) variations in the measurement result. When dealing with random errors we can only speak of the probability of an error of a given magnitude. Reference: [1]

8 4.2.1. Uncertainty and inaccuracy (continued)
NB: Random errors are described in probabilistic terms, while systematic errors are described in deterministic terms. Unfortunately, this deterministic character makes it more difficult to detect systematic errors. Reference: [1]

9 Example: Random and systematic errors
[Figure: pdf f(x) of the measurement results, centered on the mean measurement result, which is offset from the true value by the systematic error. The ±3σ interval around the mean (0.14% of readings exceed it on each side) defines the maximum random error, i.e. the uncertainty; systematic error plus uncertainty gives the inaccuracy. A companion time plot shows the zero-to-peak and rms amplitudes of the random error.]

10 Resolution
The resolution, RES, is defined as the smallest change $\Delta x$ of the measured signal $x$ that exceeds the sensitivity threshold, ST. Usually:

$$\mathrm{RES} \equiv \Delta x_{\min} \ge \mathrm{ST} = \sigma .$$

The resolution can also be defined as the ratio of $x_{\max}$ (or the full-scale value of $x$, FS) to $\Delta x_{\min}$:

$$\mathrm{RES} \equiv \frac{x_{\max}}{\Delta x_{\min}} = \frac{\mathrm{FS}}{\mathrm{ST}} .$$

For example, if $x_{\max} = 10$ V and $\Delta x_{\min} = 150\ \mu\mathrm{V}$, then $\mathrm{RES} \approx 2^{16}$, which corresponds to a resolution of 16 bit. Reference: [1]
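A one-line check of the numerical example (a sketch; `math.log2` gives the equivalent number of bits):

```python
import math

x_max = 10.0      # full-scale value, V
dx_min = 150e-6   # smallest detectable change, V (150 uV)

RES = x_max / dx_min    # dimensionless resolution, ~2^16
print(f"RES = {RES:.0f} ~ 2^{math.log2(RES):.2f}, i.e. about 16 bit")
```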

11 Inaccuracy, accuracy, and precision
Inaccuracy is defined as $\mathrm{IN} \equiv A + B$, where $A$ is the maximum random error (uncertainty of type A*) and $B$ is the systematic error (uncertainty of type B*).
[Figure: pdf f(x) centered on the mean measurement result $\bar{X}$; $B = \bar{X} - X_{true}$ is the systematic error, $A = 3\sigma$ the maximum random error, and $\mathrm{IN} = A + B$ the inaccuracy.]
* International Committee for Weights and Measures (CIPM), 1986.

12 Relative inaccuracy, accuracy, and precision
The relative inaccuracy can be defined as

$$\delta \equiv \frac{\mathrm{IN}}{X_{true}} \cdot 100\% ,$$

the accuracy can be defined as (the ability of a measurement to match the actual value of the quantity being measured):

$$\mathrm{ACC} \equiv 100\% - \delta ,$$

and the precision can be defined as (the ability of a measurement to be consistently reproduced):

$$P \equiv \left(1 - \frac{\mathrm{RES}}{X}\right) \cdot 100\% .$$

[Figure: two normalized pdfs compared against the true value $X_{true}$: one more accurate but less precise, the other more accurate and more precise.]
Reference: [4]
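A minimal sketch of these three definitions; all numbers below are hypothetical:

```python
# Hypothetical measurement: A = maximum random error, B = systematic error.
X_true = 10.0       # true value, V
X = 10.0            # (mean) measured value, V
A, B = 0.03, 0.02   # V
RES = 0.001         # resolution, V

IN = A + B                       # inaccuracy, V
delta = IN / X_true * 100.0      # relative inaccuracy, %
ACC = 100.0 - delta              # accuracy, %
P = (1.0 - RES / X) * 100.0      # precision, %

print(f"IN = {IN:.3f} V, delta = {delta:.2f}%, ACC = {ACC:.2f}%, P = {P:.2f}%")
```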

13 4.2.2. Crest factor
One can define the 'maximum possible error' covering 100% of the measurements only for systematic errors. For random errors, the maximum random error (error interval) is a function of the 'probability of excess deviations'. The upper (most pessimistic) limit of the error interval, for any shape of the probability density function, is given by the Chebyshev-Bienaymé inequality:

$$P\left[\,\left|x - \bar{x}\right| \ge k\sigma\,\right] \le \frac{1}{k^2} ,$$

where $k$ is the so-called crest* factor ($k > 0$). This inequality states that the probability of deviations exceeding $k\sigma$ is not greater than one over the square of the crest factor.
*Crest stands here for 'peak'. Reference: [1]
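The inequality is easy to verify by simulation; a sketch for Gaussian noise (any pdf would satisfy the bound):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1_000_000)   # zero-mean noise, sigma = 1

for k in (1, 2, 3, 4, 5):
    p = np.mean(np.abs(x - x.mean()) >= k * x.std())
    print(f"k = {k}: P[|x - mean| >= k*sigma] = {p:.2e}  <=  1/k^2 = {1/k**2:.2e}")
```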

14 Proof of the Chebyshev-Bienaymé inequality
Proof:

$$P\left[\,|x-\bar{x}| \ge k\sigma\,\right] = \int_{-\infty}^{\bar{x}-k\sigma} f(x)\,dx + \int_{\bar{x}+k\sigma}^{\infty} f(x)\,dx .$$

On both integration regions $|x-\bar{x}| \ge k\sigma$, hence $(x-\bar{x})^2 / (k^2\sigma^2) \ge 1$, so that

$$P\left[\,|x-\bar{x}| \ge k\sigma\,\right] \le \frac{1}{k^2\sigma^2}\left[\int_{-\infty}^{\bar{x}-k\sigma} (x-\bar{x})^2 f(x)\,dx + \int_{\bar{x}+k\sigma}^{\infty} (x-\bar{x})^2 f(x)\,dx\right] \le \frac{1}{k^2\sigma^2}\int_{-\infty}^{\infty} (x-\bar{x})^2 f(x)\,dx = \frac{\sigma^2}{k^2\sigma^2} = \frac{1}{k^2} .$$

15 Probability of excess deviations
[Figure: probability of excess deviations (log scale, from 10^0 down to 10^-6) versus crest factor k = 1...5, comparing the normal pdf with the Chebyshev (most pessimistic, any-pdf) limit 1/k².]
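The normal-pdf curve in this plot can be reproduced analytically: for a Gaussian, $P[|x-\bar{x}| \ge k\sigma] = \mathrm{erfc}(k/\sqrt{2})$. A sketch:

```python
import math

print(" k   normal pdf   Chebyshev 1/k^2")
for k in (1, 2, 3, 4, 5):
    p_norm = math.erfc(k / math.sqrt(2.0))   # Gaussian excess probability
    print(f" {k}   {p_norm:.2e}     {1.0 / k**2:.2e}")
```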

16 4.3. Error sensitivity analysis
The analysis of the sensitivity of a final result to errors in its arguments is called error sensitivity analysis or error propagation analysis. We will discuss this analysis first for systematic errors and then for random errors.

4.3.1. Systematic errors
Let us define the absolute error as the difference between the measured value, $a$, and the true value, $a_0$, of a physical quantity (argument):

$$\Delta a \equiv a - a_0 ,$$

Reference: [1]

17 Relative error
and the relative error as:

$$\delta a \equiv \frac{a - a_0}{a_0} = \frac{\Delta a}{a_0} .$$

If the final result, $m$, of a series of measurements is given by $m = f(a, b, c, \ldots)$, where $a, b, c, \ldots$ are independent, individually measured physical quantities, then the absolute error of $m$ is:

$$\Delta m = f(a, b, c, \ldots) - f(a_0, b_0, c_0, \ldots) .$$

Reference: [1]

18 Taylor expansion of the absolute error
The Taylor expansion of the first term of $\Delta m = f(a,b,c,\ldots) - f(a_0,b_0,c_0,\ldots)$ yields

$$\Delta m \approx \left.\frac{\partial f}{\partial a}\right|_{(a_0,b_0,c_0,\ldots)} \Delta a + \left.\frac{\partial f}{\partial b}\right|_{(a_0,b_0,c_0,\ldots)} \Delta b + \cdots ,$$

in which all higher-order terms have been neglected. This is permitted provided that the absolute errors of the arguments are small and the curvature of $f(a,b,c,\ldots)$ at the point $(a_0,b_0,c_0,\ldots)$ is small. Reference: [1]

19 Maximum possible error
One never knows the actual values of $\Delta a, \Delta b, \Delta c, \ldots$. Usually the individual measurements are given as $a \pm \Delta a_{\max}$, $b \pm \Delta b_{\max}$, ..., in which $\Delta a_{\max}$, $\Delta b_{\max}$ are the maximum possible errors. In this case

$$\Delta m_{\max} \approx \left|\left.\frac{\partial f}{\partial a}\right|_{(a_0,b_0,c_0,\ldots)}\right| \Delta a_{\max} + \left|\left.\frac{\partial f}{\partial b}\right|_{(a_0,b_0,c_0,\ldots)}\right| \Delta b_{\max} + \cdots .$$

Reference: [1]

20 Sensitivity factors
Defining the sensitivity factors

$$S_a^m \equiv \left.\frac{\partial f}{\partial a}\right|_{(a_0,b_0,c_0,\ldots)} , \quad \ldots ,$$

this becomes:

$$\Delta m_{\max} \approx \left|S_a^m\right| \Delta a_{\max} + \left|S_b^m\right| \Delta b_{\max} + \cdots .$$

Reference: [1]

21 Maximum relative error
This expression can be rewritten to obtain the maximum relative error:

$$\delta m_{\max} \equiv \frac{\Delta m_{\max}}{m_0} \approx \left|\frac{\partial f}{\partial a}\,\frac{a_0}{f_0}\right| \frac{\Delta a_{\max}}{a_0} + \left|\frac{\partial f}{\partial b}\,\frac{b_0}{f_0}\right| \frac{\Delta b_{\max}}{b_0} + \cdots ,$$

or

$$\delta m_{\max} \approx \left|\frac{\partial f / f_0}{\partial a / a_0}\right| \frac{\Delta a_{\max}}{a_0} + \left|\frac{\partial f / f_0}{\partial b / b_0}\right| \frac{\Delta b_{\max}}{b_0} + \cdots .$$

Reference: [1]

22 Relative sensitivity factors
Defining the relative sensitivity factors (often expressed in percent)

$$s_a^m \equiv \frac{\partial f}{\partial a}\,\frac{a_0}{f_0} = \frac{\partial f / f_0}{\partial a / a_0} ,$$

this becomes:

$$\delta m_{\max} \approx \left|s_a^m\right| \delta a_{\max} + \left|s_b^m\right| \delta b_{\max} + \cdots .$$

Reference: [1]

23 Rules that simplify the error sensitivity analysis
Illustration: rules for relative sensitivity factors (here $k$ is a constant and $n$ a power):

1. $s_a^{m_1 m_2} = s_a^{m_1} + s_a^{m_2}$
2. $s_{a^k}^{m} = k^{-1}\, s_a^{m}$
3. $s_a^{m^n} = n\, s_a^{m}$
4. $s_a^{k/m} = -\,s_a^{m}$
5. $s_a^{k\,m} = s_a^{m}$
6. $s_a^{m} = s_b^{m}\, s_a^{b}$ (chain rule, for $m = f_1(b)$, $b = f_2(a)$)

The sensitivity factors make it easy to determine how systematic errors propagate through sums, products, ratios, etc., of measurement results. A numerical check of some of these rules is sketched after this list.
Homework: 1. Prove the above rules. 2. Do the rules hold for absolute sensitivity factors?
Reference: [1]
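These rules, as reconstructed above, can be checked numerically with finite differences; a sketch (test functions and values are arbitrary):

```python
import numpy as np

def rel_sens(f, a0, h=1e-6):
    """Relative sensitivity s_a^m = (df/da) * a0 / f(a0), via central difference."""
    dfda = (f(a0 * (1 + h)) - f(a0 * (1 - h))) / (2 * a0 * h)
    return dfda * a0 / f(a0)

a0, k, n = 2.0, 3.0, 4.0
print(rel_sens(lambda a: a**2, a0))          # s_a^m for m = a^2: ~2
print(rel_sens(lambda a: k * a**2, a0))      # rule 5: constant factor drops out, ~2
print(rel_sens(lambda a: (a**2)**n, a0))     # rule 3: ~n * 2 = 8
print(rel_sens(lambda a: k / a**2, a0))      # rule 4: ~-2
```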

24 Derivation of rule 2
Illustration: derivation of rule 2, using the chain rule (rule 6) with $b = a^k$:

$$s_a^{m} = s_{a^k}^{m}\, s_a^{a^k} , \qquad s_a^{a^k} = \frac{\partial a^k}{\partial a}\,\frac{a_0}{a_0^k} = k\,a_0^{k-1}\,\frac{a_0}{a_0^k} = k ,$$

so that $s_a^{m} = k\, s_{a^k}^{m}$, i.e. $s_{a^k}^{m} = k^{-1}\, s_a^{m}$.

Recall the definitions: $S_a^m \equiv \dfrac{\partial f}{\partial a}$ and $s_a^m \equiv \dfrac{\partial f}{\partial a}\,\dfrac{a_0}{f_0} \cdot 100\%$. Reference: [1]

25 Examples: propagation of a 1% error through elementary functions
Illustration (in each case $\delta a = 1\%$):

- $m = k\,a$: $\Delta m_{\max} = k\,\Delta a$, so $\delta m_{\max} = \delta a = 1\%$
- $m = a^n$: $\Delta m_{\max} = n\,a_0^{\,n-1}\,\Delta a$, so $\delta m_{\max} = n\,\delta a = n\%$
- $m = a^2$: $\Delta m_{\max} = 2\,a_0\,\Delta a$, so $\delta m_{\max} = 2\,\delta a = 2\%$
- $m = \sqrt{a}$: $\Delta m_{\max} = \Delta a / (2\sqrt{a_0})$, so $\delta m_{\max} = 0.5\,\delta a = 0.5\%$
- $m = \ln a$: $\Delta m_{\max} = \Delta a / a_0$, so $\delta m_{\max} = \delta a / \ln(a_0)$

Recall: $S_a^m \equiv \dfrac{\partial f}{\partial a}$, $s_a^m \equiv \dfrac{\partial f}{\partial a}\,\dfrac{a_0}{f_0} \cdot 100\%$.

26 Examples: propagation through sums, products, ratios
Illustration (continued):

- $m = a + b$: $\Delta m_{\max} = \Delta a + \Delta b$, $\delta m_{\max} = \frac{a_0}{a_0+b_0}\,\delta a + \frac{b_0}{a_0+b_0}\,\delta b$
- $m = a\,b$: $\Delta m_{\max} = b_0\,\Delta a + a_0\,\Delta b$, $\delta m_{\max} = \delta a + \delta b$
- $m = a/b$: $\Delta m_{\max} = \frac{1}{b_0}\,\Delta a + \frac{a_0}{b_0^2}\,\Delta b$, $\delta m_{\max} = \delta a + \delta b$ (while $s_a^{a/a} = 0$)
- $m = a - k$ with $k \to a_0$: $\Delta m_{\max} = \Delta a$, but $\delta m_{\max} = \frac{a_0}{a_0-k}\,\delta a \to \infty$
- $m = f_1(b)$, $b = f_2(a)$: $S_a^m = S_b^m\,S_a^b$ and $s_a^m = s_b^m\,s_a^b$

27 4.3.2. Random errors (examples)
Example 1: An unbiased estimator for the variance σ²
Example 2: RMS and s for a zero-mean random process
Example 3: Power spectral density
Example 4: Stationary processes and ergodic processes
Example 5: White noise
Example 6: Bandlimited white noise
Example 7: Nonstationary white noise

28 Example 1: An unbiased estimator for the variance σ²
For an independent identically distributed (iid) sequence of random variables $X_1, \ldots, X_n$:

sample mean: $M_n = \dfrac{1}{n}\sum_{i=1}^{n} X_i$,
mean: $\mu \equiv E[M_n]$,
sample variance: $s_n^2 = \dfrac{1}{n-1}\sum_{i=1}^{n} (X_i - M_n)^2$,
variance: $\sigma^2 \equiv E\!\left[\dfrac{1}{n}\sum_{i=1}^{n} (X_i - \mu)^2\right]$.

An iid process is a stationary process.

29 Proof that the sample variance is unbiased
Let us prove that the sample variance $s_n^2$ is an unbiased estimator of the variance $\sigma^2$:

$$\sigma^2 \equiv E\!\left[\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2\right] = E\!\left[\frac{1}{n}\sum_{i=1}^{n}(X_i - M_n + M_n - \mu)^2\right]$$
$$= E\!\left[\frac{1}{n}\sum_{i=1}^{n}(X_i - M_n)^2\right] + \underbrace{\frac{2}{n}\,E\!\left[\sum_{i=1}^{n}(X_i - M_n)(M_n - \mu)\right]}_{=\,0} + E\!\left[(M_n - \mu)^2\right] .$$

The cross term vanishes because $\sum_i (X_i - M_n) = 0$. For the last term,

$$E\!\left[(M_n-\mu)^2\right] = \frac{1}{n^2}\sum_{j=1}^{n}\sum_{k=1}^{n} E\!\left[(X_j-\mu)(X_k-\mu)\right] = \frac{\sigma^2}{n} ,$$

since for $j = k$ each term equals $\sigma^2$ and for $j \ne k$ the covariance is zero. Hence

$$E\!\left[\frac{1}{n}\sum_{i=1}^{n}(X_i - M_n)^2\right] = \frac{n-1}{n}\,\sigma^2 \quad\Longrightarrow\quad E\!\left[s_n^2\right] = E\!\left[\frac{1}{n-1}\sum_{i=1}^{n}(X_i - M_n)^2\right] = \sigma^2 .$$
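A quick simulation (a sketch) confirms this: with `ddof=1` (the $n-1$ divisor) the estimator is unbiased, while with `ddof=0` it is biased low by the factor $(n-1)/n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 10, 100_000
X = rng.normal(0.0, 2.0, size=(trials, n))   # iid samples, true sigma^2 = 4

print(f"E[s_n^2], ddof=1: {X.var(axis=1, ddof=1).mean():.3f}   (expect 4.000)")
print(f"E[.],     ddof=0: {X.var(axis=1, ddof=0).mean():.3f}   (expect (n-1)/n * 4 = 3.600)")
```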

30 Example 2: RMS² and s² for a zero-mean random process
$$\mathrm{RMS}^2 \equiv \frac{1}{n}\sum_{i=1}^{n} X_i^2 = \frac{1}{n}\sum_{i=1}^{n} (X_i - 0)^2 \approx s_n^2 \approx \sigma^2 \quad \text{for } n \gg 1 .$$

31 Example 3: Energy and power spectra of signals
For finite-energy (square-integrable) signals, the energy spectral density (ESD) is defined as $|S(f)|^2$ [V²/Hz², A²/Hz²]. The signal energy (squared norm) of a real signal $s(t)$ is, by Parseval's theorem,

$$E_s = \int_{-\infty}^{\infty} s^2(t)\,dt = \int_{-\infty}^{\infty} |S(f)|^2\,df \quad [\mathrm{V^2\,s},\ \mathrm{A^2\,s}] ,$$

and the autocorrelation function (ACF) is $R_s(\tau) = \int_{-\infty}^{\infty} s(t)\,s(t-\tau)\,dt$.

For infinite-energy signals the power spectral density (PSD), $P(f)$ [V²/Hz, A²/Hz], is defined instead. The average signal power is

$$W_s = \lim_{T\to\infty} \frac{1}{T}\int_{T} s^2(t)\,dt = R_s(0) = \int_{-\infty}^{\infty} P(f)\,df \quad [\mathrm{V^2},\ \mathrm{A^2}] ,$$

where the ACF is now $R_s(\tau) = \lim_{T\to\infty} \dfrac{1}{T}\displaystyle\int_{T} s(t)\,s(t-\tau)\,dt$.

32 The Einstein-Wiener-Khinchin theorem
For a continuous-time wide-sense stationary* (WSS) random process $X(t)$, the PSD is given by the Einstein-Wiener-Khinchin theorem:

$$P_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\, e^{-j 2\pi f \tau}\, d\tau , \qquad R_X(\tau) = \int_{-\infty}^{\infty} P_X(f)\, e^{\,j 2\pi f \tau}\, df .$$

For a zero-mean random process:

$$R_X(0) = E\!\left[X^2(t)\right] = \sigma^2 = \mathrm{RMS}^2 = \int_{-\infty}^{\infty} P_X(f)\, df .$$

* The PSD of a signal exists if and only if the signal is a WSS random process. If the signal is not stationary, then the autocorrelation function must be a function of two variables, $t$ and $\tau$, so no PSD exists, but a time-varying PSD can be estimated.
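A sketch of the zero-mean relation above, estimating the PSD with Welch's method and checking that its integral recovers the variance (the sampling frequency and noise level are assumptions):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs = 1000.0                                    # sampling frequency, Hz (assumed)
x = rng.normal(0.0, 1.5, size=200_000)         # zero-mean WSS noise, sigma^2 = 2.25

f, Pxx = signal.welch(x, fs=fs, nperseg=4096)  # one-sided PSD estimate, V^2/Hz
print(f"integral of PSD: {np.trapz(Pxx, f):.3f}   vs   sigma^2 = {x.var():.3f}")
```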

33 Example 4: Stationary and ergodic processes
[Figure: three ensembles of realizations. (a) WSS process: $m_X(t) = m$, $\sigma_X(t) = \sigma$, $R_X(t_1, t_2) = R_X(\tau)$. (b) Not a WSS process: $R_X(t_1, t_2) = R_X(T, \tau)$ depends on the time $T$ as well. (c) A WSS process that is not ergodic*: the statistics of a single realization $x_i(t)$ differ from the ensemble statistics.]
* It can be shown that a WSS random process with $R_X(\tau) \to 0$ as $\tau \to \infty$ is an ergodic process. Reference: Ventzel, Probability Theory.

34 Time averages of an ergodic process
For an ergodic random process, ensemble averages can be replaced by time averages over a single realization $x_i(t)$:

$$m_X = \lim_{T\to\infty} \frac{1}{T}\int_{T} x_i(t)\,dt , \qquad \sigma_X^2 = \lim_{T\to\infty} \frac{1}{T}\int_{T} \left[x_i(t) - m\right]^2 dt , \qquad R_X(\tau) = \lim_{T\to\infty} \frac{1}{T}\int_{T} x_i(t)\,x_i(t-\tau)\,dt - m^2 .$$

35 Example 5: White noise as a zero-mean uncorrelated stationary process
[Figure: a realization $X(t)$; its ACF $R_X(\tau)$, a single spike at $\tau = 0$; and its PSD, flat over the whole band up to the sampling frequency $1/\Delta t$, computed both as Fourier[$R_X(\tau)$]$^{0.5}$ and directly as Fourier[$X(t)$].]

36 Example 6: Bandlimited white noise as a weakly correlated stationary process
[Figure: a realization $X(t)$ and the same noise after a low-pass filter (LPF); the normalized ACF $R_X(\tau)$ of the filtered noise decays over a nonzero correlation time; the PSD is flat only up to the cutoff frequency $f_c$ and rolls off beyond it.]

37 Example 7: Nonstationary white noise
[Figure: the realization $X'(t) = X(t) \cdot t$, whose amplitude grows with time; its ACF $R_{X'}(T, \tau)$ now depends on the time $T$; the spectral estimates Fourier[$R_{X'}(T,\tau)$]$^{0.5}$ and Fourier[$X(t)$] no longer agree, since no true PSD exists for a nonstationary process.]

38 Homework
Repeat the simulations of the three previous slides.
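A possible starting point for the homework (a sketch; the sampling rate, filter order, and cutoff frequency are assumptions, not taken from the slides):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs, n = 1000.0, 50_000
t = np.arange(n) / fs
x = rng.normal(0.0, 1.0, size=n)          # Example 5: white noise

b, a = signal.butter(4, 100.0, fs=fs)     # assumed 4th-order LPF, fc = 100 Hz
x_bl = signal.lfilter(b, a, x)            # Example 6: bandlimited white noise

x_ns = x * t                              # Example 7: nonstationary noise X'(t) = X(t)*t

for name, y in (("white", x), ("bandlimited", x_bl), ("nonstationary", x_ns)):
    # Sample ACF and Welch PSD; for the nonstationary case these time averages
    # are only crude summaries, since no true PSD exists.
    acf = signal.correlate(y, y, mode="full", method="fft")[n - 1:] / n
    f, Pxx = signal.welch(y, fs=fs, nperseg=4096)
    print(f"{name:13s}: R(0) = {acf[0]:10.2f}, integral of PSD = {np.trapz(Pxx, f):10.2f}")
```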

39 4.3.2. Random errors
If the final result, $m$, of a series of measurements is given by $m = f(a, b, c, \ldots)$, where $a, b, c, \ldots$ are independent, individually measured physical quantities, then the error of $m$ is:

$$dm \approx \left.\frac{\partial f}{\partial a}\right|_{(a,b,c,\ldots)} da + \left.\frac{\partial f}{\partial b}\right|_{(a,b,c,\ldots)} db + \left.\frac{\partial f}{\partial c}\right|_{(a,b,c,\ldots)} dc + \cdots .$$

Again, we have neglected the higher-order terms of the Taylor expansion. Reference: [1]

40 2 sm2  (dm)2  da + db + dc + … 2 2 = (da)2 + (db)2 + …+ da db + … 2
4. MEASUREMENT ERRORS Error propagation Random errors If we define dm  m - m, then the measurement variance f f f 2 sm2  (dm)2  da db dc + … a b c f 2 f 2 f f = (da) (db)2 + … da db + … a b a b squares cross products = 0 f 2 f 2 = (da) (db)2 + … . a b (a,b,c,…) (a,b,c,…) Reference: [1]

41 Gauss' error propagation rule
Defining $\overline{(da)^2} \equiv \sigma_a^2, \ldots$, the expression for $\sigma_m^2$ can be written as (Gauss' error propagation rule):

$$\sigma_m^2 = \left(\frac{\partial f}{\partial a}\right)^2 \sigma_a^2 + \left(\frac{\partial f}{\partial b}\right)^2 \sigma_b^2 + \left(\frac{\partial f}{\partial c}\right)^2 \sigma_c^2 + \cdots , \qquad m = f(a,b,c,\ldots) ,$$

with the derivatives evaluated at $(a, b, c, \ldots)$. NB: In the above derivation, the shape of the pdf of the individual measurements $a, b, c, \ldots$ does not matter. Reference: [1]
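Gauss' rule is straightforward to verify by Monte Carlo; a sketch for the (assumed) example $m = a\,b$:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1_000_000

a0, b0, sa, sb = 5.0, 2.0, 0.1, 0.05       # assumed values and standard deviations
a = rng.normal(a0, sa, N)                  # independent arguments
b = rng.normal(b0, sb, N)

m = a * b                                  # m = f(a, b) = a*b
s_gauss = np.sqrt((b0 * sa)**2 + (a0 * sb)**2)   # df/da = b0, df/db = a0

print(f"Monte Carlo sigma_m = {m.std():.4f}   vs   Gauss rule = {s_gauss:.4f}")
```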

42 Example A: Averaging
Let us apply Gauss' error propagation rule to averaging, where $\sigma_{a_i} = \sigma_a$ for all $i$:

$$m = \frac{1}{N}\sum_{i=1}^{N} a_i : \qquad \sigma_m^2 = \sum_{i=1}^{N} \left(\frac{\partial (a_i/N)}{\partial a_i}\right)^2 \sigma_a^2 = N\,\frac{1}{N^2}\,\sigma_a^2 = \frac{1}{N}\,\sigma_a^2 ,$$

or, for the standard deviation of the end result,

$$\sigma_m = \frac{1}{\sqrt{N}}\,\sigma_a .$$

Due to averaging, the measurement uncertainty decreases with the square root of the number of measurements.

43 Example B: Integration
Let us apply Gauss' error propagation rule to integration (summation), where $\sigma_{a_i} = \sigma_a$ for all $i$:

$$m = \sum_{i=1}^{N} a_i : \qquad \sigma_m^2 = N\,\sigma_a^2 ,$$

or, for the standard deviation of the end result,

$$\sigma_m = \sqrt{N}\,\sigma_a .$$

Due to integration, the measurement uncertainty increases with the square root of the number of measurements.
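Both scalings can be seen in one short simulation (a sketch, with $\sigma_a = 1$):

```python
import numpy as np

rng = np.random.default_rng(5)
N, trials = 10, 200_000
a = rng.normal(0.0, 1.0, size=(trials, N))   # sigma_a = 1

print(f"averaging:   sigma_m = {a.mean(axis=1).std():.3f}   (theory 1/sqrt(N) = {1/np.sqrt(N):.3f})")
print(f"integration: sigma_m = {a.sum(axis=1).std():.3f}   (theory   sqrt(N) = {np.sqrt(N):.3f})")
```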

44 Averaging (10) and integration
4. MEASUREMENT ERRORS Error propagation Random errors Illustration: Noise averaging and integration Input Output Gaussian white noise Averaging (10) and integration What about SNR? Averaging Integration Averaging 1  N sm = snoise Integration sm = Nsnoise

45 Next lecture

