
Slide 1: Negative Power Law Noise, Reality vs Myth
Victor S. Reinhardt, Raytheon Space and Airborne Systems, El Segundo, CA, USA
Precise Time and Time Interval Systems and Applications Meeting, 2009

Slide 2: Negative Power Law (Neg-p) Noise, Reality vs Myth
[Neg-p noise: PSD L_p(f) ∝ f^p for p < 0]
- Not questioning the reality that Δ-variances, like the Allan and Hadamard variances, are convergent measures of neg-p noise.
- But will show it is a myth that neg-p divergences in other variances, like the standard and N-sample variances, are mathematical defects in those variances that should be "fixed" by replacing them with Δ-variances without further action.
- Will show each type of variance is a statistical answer to a different error question, and that variance divergences are real indicators of physical problems that can't be glossed over by swapping variances.
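The contrast claimed on this slide, that a Δ-variance such as the Allan variance converges for neg-p noise while the standard variance diverges, can be sketched numerically. A minimal NumPy illustration (my own, not from the talk), modeling f^-2 noise as a discrete random walk with unit sample interval:

```python
import numpy as np

def allan_variance(x, m):
    """Overlapping Allan variance of time-error data x at lag m samples
    (unit sample interval): AVAR = <(x[k+2m] - 2 x[k+m] + x[k])^2> / (2 m^2)."""
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # second difference Delta(tau)^2 x
    return np.mean(d ** 2) / (2.0 * m ** 2)

rng = np.random.default_rng(42)
walk = np.cumsum(rng.standard_normal(16000))   # f^-2 noise: random walk in x

short, full = walk[:1000], walk
# The standard deviation keeps growing with record length (divergent measure)...
std_short, std_full = np.std(short), np.std(full)
# ...while the Allan deviation at fixed tau stays essentially put (convergent).
adev_short = np.sqrt(allan_variance(short, 10))
adev_full = np.sqrt(allan_variance(full, 10))
print(std_short, std_full, adev_short, adev_full)
```

For unit-variance steps the expected Allan deviation here is sqrt(1/m) ≈ 0.32 regardless of record length, while the sample standard deviation of the walk grows without bound as the record lengthens.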

Slide 3: Negative Power Law Noise, Reality vs Myth (continued)
- Will also show it is a myth that one can properly separate polynomial deterministic behavior and (non-highpass-filtered) neg-p noise using least squares and Kalman filters, except under certain conditions.
- Will show the reality is that such neg-p noise is infinitely correlated and non-ergodic (ensemble averages ≠ time averages), and that this makes neg-p noise act more like systematic error than conventional noise in statistical estimation.

Slide 4: Myth 1: Can "Fix" Variance Divergences Just by Swapping Variances
[Figure: simple statistical estimation model. Over a data collection interval T, N samples of measured data x(t_n) = x_c(t_n) + x_r(t_n) are collected, where x_c(t) is the true or deterministic behavior and x_r(t) is the noise.]
- Generate x_a,M(t), an M-parameter estimate of x_c(t), using a technique like a least squares or Kalman filter.
- Note x(t) here is any data variable, not necessarily the time error.

Slide 5: Basic Error Measures and the Questions They Address
- Data precision x_j,M(t_n): data variation from the fit?
- Accuracy x_w,M(t_n): error of the fit from the true behavior?
- Mth-order Δ-measures Δ(τ)^M x(t): stability over τ? Here Δ(τ)x(t) = x(t+τ) − x(t) and Δ(τ)²x(t) = Δ(τ)x(t+τ) − Δ(τ)x(t).
- Can form variances from these error measures:
  - Point variance: σ_ε(t)² = E[x_ε(t)²] (Kalman), where E[..] is the ensemble average; assumes E[x_ε(t)] = 0.
  - Average variance: σ_ε² = a weighted average of σ_ε(t)² over T (LSQF).

Slide 6: Interpreting Δ-Measures as Mth-Order Stability Measures
- Definition of Mth-order stability: the extrapolation or interpolation error x_j,M(t_m) at an (M+1)th point, after removing a perfect M-parameter fit over only M of those points.
- Can show that when x_a,M(t) is an (M−1)th-order polynomial and the points are separated by τ, x_j,M(t_m) = ±Δ(τ)^M x(t_0).
- Thus Δ-variances are statistical measures of such stability.
[Figure: extrapolation error, where x_a,M(t) passes through t_0 … t_{M−1} but not t_M; interpolation error, where x_a,M(t) passes through t_0 … t_M but not t_m.]
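The identity on this slide, that the extrapolation error of a perfect (M−1)th-order polynomial fit through M equally spaced points equals the Mth-order difference, is easy to verify numerically. A small check (my own sketch, using NumPy's polynomial fit and `np.diff` for the Mth difference):

```python
import numpy as np

# Check: the error made by extrapolating a perfect (M-1)th-order polynomial
# fit through M equally spaced points x(t_0)...x(t_{M-1}) to the next point
# t_M equals the M-th order difference Delta(tau)^M x(t_0) (tau = 1 sample).
rng = np.random.default_rng(0)
M = 3
t = np.arange(M + 1, dtype=float)
x = rng.standard_normal(M + 1)            # arbitrary data at t = 0, 1, ..., M

coeffs = np.polyfit(t[:M], x[:M], M - 1)  # M-parameter polynomial through first M points
extrap_error = x[M] - np.polyval(coeffs, t[M])

mth_difference = np.diff(x, n=M)[0]       # Delta(tau)^M x(t_0)
print(extrap_error, mth_difference)
```

For M = 3 both quantities equal x_3 − 3x_2 + 3x_1 − x_0, matching the binomial-weighted Mth difference.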

Slide 7: A Derived Error Measure: The Fit Precision (Error Bars)
- Fit precision: a statistical estimate of accuracy based on the measured data-precision variance and correction factors from a specific theoretical noise model: σ_wj,M(t)² = κ_d(t)·σ_j,M(t)² and σ_wj,M² = κ_d·σ_j,M².
- For uncorrelated noise and an unweighted linear least squares fit (LSQF), κ_d = M/(N − M). This is not true for correlated noise.
[Figure: x_w,M(t) and x_j,M(t) versus t, with ±2σ_wj,M(t) (or ±2σ_wj,M) error bars.]
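The M/(N−M) correction factor for uncorrelated noise can be checked by Monte Carlo. A sketch (my own, not from the talk), fitting an M-parameter polynomial to pure white noise so that the true behavior is zero and the fit itself is the accuracy error:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, trials = 50, 3, 4000
t = np.linspace(0.0, 1.0, N)
V = np.vander(t, M)                  # design matrix, M-parameter polynomial
H = V @ np.linalg.pinv(V)            # hat matrix: data -> fitted values

acc = res = 0.0
for _ in range(trials):
    x = rng.standard_normal(N)       # white noise, true behavior x_c = 0
    xa = H @ x                       # fitted polynomial
    acc += np.mean(xa ** 2)          # accuracy variance: fit error from truth
    res += np.mean((x - xa) ** 2)    # data precision: residual variance

ratio = acc / res                    # should approach M / (N - M)
print(ratio, M / (N - M))
```

This reflects the standard least-squares bookkeeping: the fit absorbs M degrees of freedom of the noise (expected fit-error variance Mσ²/N per point), leaving (N−M)σ²/N per point in the residuals, so the ratio is M/(N−M).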

Slide 8: The Neg-p Convergence Properties of Variances for Polynomial x_a,M(t)
- For Δ-variances, the variance kernel K(f) ∝ f^(2M) at small f. The same is true for the Mth-order data precision.
- Both converge for 2M ≥ −p.
- But for accuracy, K(f) is a lowpass filter, so the accuracy won't converge unless the system response |H_s(f)|² provides sufficient highpass filtering.
- Thus the temptation to "fix" a neg-p variance divergence by swapping variances.
[Figure: K(f) for data precision (dB) versus log(fT): K(f) ∝ f^2 for M=1, f^4 for M=2, f^6 for M=3, f^8 for M=4, f^10 for M=5.]
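A brief sketch (my reconstruction, not spelled out on the slide) of where the 2M ≥ −p condition comes from. Writing the variance as a spectral integral over the kernel,

```latex
\sigma^2 \;=\; \int_0^{\infty} L_p(f)\,K(f)\,df ,
\qquad L_p(f) \propto f^{\,p}, \qquad K(f) \propto f^{\,2M}\ \ (\text{small } f),
```

the integrand behaves as f^(p+2M) near f = 0, which is integrable at the lower limit if and only if p + 2M > −1; for the integer exponents that occur here this is the stated condition 2M ≥ −p.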

Slide 9: This Is No "Fix" Because Each Variance Addresses a Different Error Question
- Arbitrarily swapping variances just misleadingly changes the question.
- It does not remove the divergence problem for the original question.
[Figure: the four measures over t: accuracy σ_w,M(t) (error of fit from true behavior?), data precision σ_j,M(t) (data variation from fit?), fit precision 2σ_wj,M(t) (estimate of accuracy from data and model?), and stability σ_Δ,M(t) (extrapolation/interpolation error over τ?).]

Slide 10: Then What Does a Neg-p Variance Divergence Mean?
[Figure: ensemble members of f^-2 noise in x(t), with the noise started at t_0 before the data collection interval T; the accuracy bound 2σ_w,1 grows while the precision bound 2σ_j,1 stays bounded.]
- The accuracy variance → ∞ as t_0 → ∞, while the precision remains misleadingly finite.
- Thus an accuracy-variance infinity is a real indicator of a severe modeling problem. To truly fix it, one must modify the system design or the error question being asked.
- Note for f^-3 noise, σ_j,1 → ∞ but σ_j,2 remains finite.
- 1/f noise is a marginal case: σ_w,M² ∝ ln(f_h·t_0), so the 1/f contribution can be small even for t_0 = the age of the universe.
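The divergence pattern on this slide can be reproduced in simulation. A sketch (my own, with assumed parameters), using f^-2 noise started t_0 samples before an observation window and a 2-parameter linear fit; the accuracy variance grows with t_0 while the residual (data-precision) variance stays finite:

```python
import numpy as np

rng = np.random.default_rng(7)
N, trials = 200, 400                 # window length and Monte Carlo trials
t = np.arange(N, dtype=float)
V = np.vander(t, 2)                  # 2-parameter (linear) fit
H = V @ np.linalg.pinv(V)            # hat matrix: data -> fitted line

def fit_stats(t0):
    """Mean accuracy and residual variance for noise started t0 samples early."""
    acc = res = 0.0
    for _ in range(trials):
        w = rng.standard_normal(t0 + N)
        x = np.cumsum(w)[t0:]        # f^-2 noise; window starts t0 steps in
        xa = H @ x                   # fitted line (true behavior is zero)
        acc += np.mean(xa ** 2)      # accuracy: fit error from truth
        res += np.mean((x - xa) ** 2)  # precision: residuals about the fit
    return acc / trials, res / trials

acc_near, res_near = fit_stats(100)
acc_far, res_far = fit_stats(10000)
print(acc_near, acc_far, res_near, res_far)
```

The fitted line absorbs the walk's accumulated offset, which has variance proportional to t_0, so the accuracy variance grows without bound as t_0 → ∞, while the residuals depend only on the window length N.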

Slide 11: Myth 2: Can Use Least Squares & Kalman Filters to Properly Separate True Polynomial Behavior from Data Containing (Non-HP-Filtered) Neg-p Noise
- Will show that such noise acts like systematic error in statistical estimation, which generates anomalous fit results, except under certain conditions.
- Will show this occurs because (non-HP-filtered) neg-p noise is infinitely correlated and non-ergodic.

Slide 12: NS & WSS Pictures of a Random Process x_p(t)
- NS (nonstationary) picture: x_p(t) = 0 for t < 0; x_p(t) starts at a finite time. Its autocorrelation can have R_p → ∞ as t_g → ∞, where t_g is the time from the start of x_p, τ is the time difference, and E is the ensemble average. The Wigner-Ville function, a (complex) Fourier transform over τ, is a t_g-dependent PSD.
- WSS (wide-sense stationary) picture: x_p(t) ≠ 0 for all t; x_p(t) is active for all t and must have a bounded steady-state PSD.
- Two important theorems connect the NS and WSS pictures.

Slide 13: The Properties of (Non-Highpass-Filtered) Neg-p Noise
- R_p(t_g, τ) is finite for finite t_g, but R_p(t_g, τ) → ∞ as t_g → ∞ for all τ (unbounded as t_g → ∞).
- The WSS R_p(τ) is infinite for all τ, yet L_p(f) can be defined without R_p(τ).
- Its correlation time τ_c is infinite.
- It is inherently non-ergodic. Theorem (Parzen): an NS random process is ergodic if and only if R_p(t_g, τ) < ∞ as t_g → ∞ and τ_c < ∞, so that the ensemble average E[..] equals the time average over T as T → ∞.
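The non-ergodicity claimed here is visible in simulation: for an ergodic-like process the spread of per-member time averages shrinks as T grows, while for f^-2 noise it grows. A sketch (my own, not from the talk), comparing white noise with random-walk members:

```python
import numpy as np

rng = np.random.default_rng(3)
members, T = 400, 1000

white = rng.standard_normal((members, 10 * T))
walk = np.cumsum(white, axis=1)          # each row: one f^-2 ensemble member

def spread_of_time_means(x, T):
    """Variance, across ensemble members, of the per-member time average."""
    return np.var(np.mean(x[:, :T], axis=1))

white_short = spread_of_time_means(white, T)
white_long = spread_of_time_means(white, 10 * T)   # shrinks ~ 1/T (ergodic-like)
walk_short = spread_of_time_means(walk, T)
walk_long = spread_of_time_means(walk, 10 * T)     # grows ~ T (non-ergodic)
print(white_short, white_long, walk_short, walk_long)
```

So lengthening T never makes a single random-walk member's time average converge to the ensemble average: time averages and ensemble averages disagree no matter how long one observes.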

Slide 14: Ergodicity and Practical Fitting Behavior
- Fitting theory is based on ensemble averages E; practical fits rely on time averages over T.
- So the noise must be ergodic-like over T (time average ≈ E) for a practical fit to work as theoretically predicted.
- Even if the noise is strictly ergodic (time average = E as T → ∞), it may not be ergodic-like for the T in question.
- Most theory assumes the time average equals E for any T as N → ∞; this is called local ergodicity for T → 0.
- For correlated noise, T >> τ_c is also required for ergodic-like behavior (intermediate ergodicity).

Slide 15: Practical Examples of Neg-p Non-Ergodicity Fitting Effects
- A single neg-p ensemble member has polynomial-like behavior over T, so x_p(t) is systematic with a polynomial-like x_c(t). Even an augmented Kalman filter can only separate the non-systematic parts of x_p(t) and x_c(t).
- Fitting methodologies can only separate linearly independent variables.
- Thus one cannot truly separate (non-HP-filtered) neg-p noise from polynomial-like deterministic behavior, and noise whitening cannot be used in such cases.
[Figures: LSQF with f^0 noise; standard Kalman filter with f^0, f^-2, and f^-3 noise; augmented Kalman filter with f^-2 noise. Curves show x_a,M ± 2σ_wj,M, the data x, and the true behavior x_c.]

Slide 16: Non-Ergodic-Like Behavior in Correlated WSS Processes
- T/τ_c = N_i ≈ the number of independent samples; must have N_i >> 1 for a fit to be meaningful.
- For non-HP-filtered neg-p noise, τ_c = ∞, so no T will produce ergodic-like fit behavior.
[Figure: Kalman filter output, measured data, fit precision, and true behavior over T, for T/τ_c = 2, 20, 200, and 2000.]
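The N_i = T/τ_c idea can be illustrated with a correlated WSS process whose τ_c is finite. A sketch (my own, with an AR(1) stand-in for correlated noise, coefficient phi and τ_c on the order of 1/(1−phi)): the variance of the sample mean is inflated by roughly (1+phi)/(1−phi) over the white-noise prediction σ_x²/T, i.e. only about N_i of the T samples count as independent:

```python
import numpy as np

rng = np.random.default_rng(11)
phi, T, trials = 0.9, 2000, 800
sigma_x2 = 1.0 / (1.0 - phi ** 2)        # stationary variance for unit drive

w = rng.standard_normal((trials, T))
x = np.empty_like(w)
x[:, 0] = np.sqrt(sigma_x2) * rng.standard_normal(trials)  # start in steady state
for n in range(1, T):
    x[:, n] = phi * x[:, n - 1] + w[:, n]   # AR(1): correlated WSS noise

var_of_mean = np.var(x.mean(axis=1))
# Inflation over the independent-sample prediction sigma_x^2 / T:
inflation = var_of_mean / (sigma_x2 / T)    # ~ (1 + phi)/(1 - phi) = 19 here
print(inflation)
```

As τ_c grows the inflation factor grows without bound, which is the slide's point: with τ_c = ∞ (non-HP-filtered neg-p noise), no finite T yields N_i >> 1.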

Slide 17: Conclusions
- Arbitrarily swapping variances does not really fix a neg-p divergence problem; such divergences indicate real problems that must be physically addressed.
- Non-HP-filtered neg-p noise acts more like systematic error than conventional noise when fitting to (full-order) polynomials.
- The surest way to reduce such problems is to develop frequency standards with lower neg-p noise. Good news for frequency standards developers.
- See http://www.ttcla.org/vsreinhardt/ for preprints and other related material.

