1
Lecture 18: Advanced Topics in Spectral Analysis. Topics: Parseval's Theorem, multi-taper spectral analysis, auto- and cross-correlation, phase spectra
2
Parseval's Theorem: the total power in the time-series is proportional to the total power in its spectrum.
Discrete version: $\sum_i T_i^2 \propto \sum_i |C_i|^2$
Integral version: $\int T(t)^2 \, dt \propto \int |C(\omega)|^2 \, d\omega$
3
Write the inverse DFT as $Gm = d$, where the data $d_k = T_k$ are the time-series, the unknowns $m_k = C_k$ are the Fourier coefficients, and $G$ is $N^{-1}$ times the matrix of complex exponentials. Recall that $G^H G = N^{-1} I$. Note that
$d^H d = (Gm)^H (Gm) = m^H (G^H G) m = N^{-1} m^H m$
or $\sum_i T_i^2 = N^{-1} \sum_i |C_i|^2$ ..... Parseval's Theorem.
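With the same convention as MatLab's fft (an unnormalized forward transform), the factor of $N^{-1}$ can be checked numerically. A minimal sketch; the variable names and the random test series are illustrative assumptions, not part of the lecture:
N = 128;
T = randn(N,1);               % any length-N time-series
C = fft(T);                   % its Fourier coefficients (MatLab convention)
lhs = sum(T.^2);              % total power in the time-series
rhs = sum(abs(C).^2)/N;       % N^-1 times the total power in the spectrum
% lhs and rhs agree to machine precision: Parseval's Theorem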
4
Multi-taper spectral analysis
5
A criticism of the Hamming Taper is that you’re throwing away ‘hard-earned’ data at the ends of the interval, because the taper is near-zero there … Multi-taper spectral analysis offers a way to ‘save’ that data
6
Let's try to design a good taper. A good taper has a Fourier transform with a lot of energy in an interval near ω = 0, compared to the intervals far from ω = 0.
7
So try the formal maximization problem: find taper coefficients $T_i$, $i = 0, \ldots, N-1$, with Fourier coefficients $C(\omega)$, that maximize the ratio of the energy near ω = 0, say $-W < \omega < W$ (the green interval), to the total energy (green plus red intervals):
$R = \int_{-W}^{W} |C(\omega)|^2 \, d\omega \Big/ \int_{-\omega_{ny}}^{\omega_{ny}} |C(\omega)|^2 \, d\omega$
Here 2W is the width of the near-ω = 0 (green) interval.
8
$R = \int_{-W}^{W} |C(\omega)|^2 \, d\omega \Big/ \int_{-\omega_{ny}}^{\omega_{ny}} |C(\omega)|^2 \, d\omega$
Note that the denominator is proportional to $\sum_{i=0}^{N-1} T_i^2$ by Parseval's Theorem. So we can rewrite the problem: maximize $R = \int_{-W}^{W} |C(\omega)|^2 \, d\omega$ subject to the constraint that the power in the time-series is a constant, say unity: $\sum_{i=0}^{N-1} T_i^2 = 1$.
9
$R = \int_{-W}^{W} |C(\omega)|^2 \, d\omega = \int_{-W}^{W} C^*(\omega) \, C(\omega) \, d\omega$
But $C(\omega) = \sum_{n=-N/2}^{N/2} T_n \exp(-i \omega n \Delta t)$ and $C^*(\omega) = \sum_{m=-N/2}^{N/2} T_m \exp(+i \omega m \Delta t)$, so
$R = \sum_n \sum_m T_n T_m \int_{-W}^{W} \exp\{ i \omega (m-n) \Delta t \} \, d\omega = \sum_n \sum_m T_n T_m M_{nm}$
with $M_{nm} = \int_{-W}^{W} \exp\{ i \omega (m-n) \Delta t \} \, d\omega$.
10
We can do the integral analytically:
$M_{nm} = \int_{-W}^{W} \exp\{ i \omega (m-n) \Delta t \} \, d\omega = 2 \int_{0}^{W} \cos\{ \omega (m-n) \Delta t \} \, d\omega = \frac{2 \sin\{ (m-n) \Delta t \, W \}}{(m-n) \Delta t} = 2W \, \frac{\sin\{ (m-n) \Delta t \, W \}}{(m-n) \Delta t \, W} = 2W \, \mathrm{sinc}\{ (m-n) \Delta t \, W \}$
Note that M is a symmetric matrix.
11
Does anyone remember how to do a constrained maximization? Maximize $R = \sum_n \sum_m T_n T_m M_{nm}$ subject to the constraint $C = \sum_n T_n^2 - 1 = 0$.
12
Method of Lagrange Multipliers: maximizing R subject to the constraint C = 0 is equivalent to maximizing $\Phi = R - \lambda C$ with no constraint, where λ is a new unknown, the Lagrange multiplier.
13
Maximize $\Phi = \sum_n \sum_m T_n T_m M_{nm} - \lambda ( \sum_n T_n^2 - 1 )$ with respect to $T_q$:
$d\Phi/dT_q = 0 = \sum_n \sum_m \frac{dT_n}{dT_q} T_m M_{nm} + \sum_n \sum_m T_n \frac{dT_m}{dT_q} M_{nm} - 2\lambda \sum_n T_n \frac{dT_n}{dT_q}$
$= \sum_n \sum_m \delta_{nq} T_m M_{nm} + \sum_n \sum_m T_n \delta_{mq} M_{nm} - 2\lambda \sum_n T_n \delta_{nq} = 2 \sum_m T_m M_{qm} - 2\lambda T_q$
(using the symmetry of M in the last step)
14
Maximizing $\Phi = \sum_n \sum_m T_n T_m M_{nm} - \lambda ( \sum_n T_n^2 - 1 )$ with respect to $T_q$ yields this equation for $T_q$:
$\sum_m M_{qm} T_m - \lambda T_q = 0$
Does anyone recognize it?
15
$M T = \lambda T$ ... the algebraic eigenvalue problem. With a symmetric N×N matrix, M has N solutions, say $T^{(n)}$, each with its own eigenvalue, say $\lambda^{(n)}$; a pair of distinct solutions are mutually orthogonal if their eigenvalues are different.
16
Interpretation of the eigenvalues: start with $M T = \lambda T$, premultiply by $T^T$ to get $T^T M T = \lambda T^T T$, and rearrange:
$\lambda = T^T M T / T^T T = R$ = the amount of power near ω = 0,
the value of the quantity that we sought to maximize.
17
Tapering strategy: pick a W; find N tapers by solving the eigenvalue problem $M T = \lambda T$; sort the tapers, smallest eigenvalue first; choose the last few, since they have the best R values; compute the spectra of the time-series multiplied by each of these tapers; average the results. It turns out that the number of dominant eigenvalues is about $N W / \omega_{ny}$, which is called the Shannon number.
18
Example based on a 128-point time-series with $W/\omega_{ny} = 1/(N/2)$; tapers computed by MatLab's [V,D] = eig( M ) function (in practice you would not do this, but rather use a custom multi-taper code that computes the tapers for you in a more efficient way).
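A minimal sketch of that computation, assuming a sampling interval dt = 1 and a normalization of M in which the eigenvalues give the power fraction R directly; the variable names and the normalization are my assumptions, not the lecture's script:
N = 128;
dt = 1;
w_ny = pi/dt;                         % Nyquist angular frequency
W = w_ny/(N/2);                       % half-width of the near-omega=0 interval
[n, m] = ndgrid(0:N-1, 0:N-1);
M = 2*W*ones(N);                      % diagonal elements: the integral equals 2W
off = (m ~= n);
M(off) = 2*sin((m(off)-n(off))*dt*W) ./ ((m(off)-n(off))*dt);
M = M*dt/(2*pi);                      % divide by the total-power factor so eigenvalues lie in [0,1]
[V, D] = eig(M);                      % columns of V are the candidate tapers
[lambda, k] = sort(diag(D));          % eigenvalues, smallest to biggest
tapers = V(:, k(end-3:end));          % keep the last few (largest eigenvalues)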
19
[Figure: the eigenvalues λ, sorted smallest to biggest. The labeled values are 0.0035, 0.0430, 0.2746, 0.7218, 0.9594, 0.9976, 0.9999; the largest ones (near 1) are the ones to use.]
20
[Figure: the tapers T1 through T6, plotted against time t. The first is similar to the Hanning taper; the higher-order tapers give more emphasis to the ends of the time-series.]
21
[Figure: a cosine time-series x(t), and the tapered versions T1·x through T6·x, plotted against time t.]
22
[Figure: the spectra of the tapers T1 through T6, plotted against frequency ω.]
23
[Figure: Results. The spectra of T1·x through T6·x plotted against frequency ω, the sum of the first 4, and the spectrum of x itself.]
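A hedged sketch of the final compute-and-average step of the strategy, assuming x is the length-N time-series (a column vector) and tapers holds the selected tapers from the sketch above; both names are assumptions:
X = fft(tapers .* x);          % FFT of each tapered copy of x (fft acts column-wise; x expands across columns)
Smt = mean(abs(X).^2, 2);      % multi-taper estimate: the average of the single-taper power spectra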
24
Autocorrelation: $A(\tau) = \int_{-\infty}^{+\infty} T(t) \, T(t-\tau) \, dt$, where τ is called the lag.
[Figure: T(t) and a copy offset by τ; multiply and add to get A(τ).]
25
Autocorrelation: $A(\tau) = \int_{-\infty}^{+\infty} T(t) \, T(t-\tau) \, dt$, where τ is called the lag.
$A(0) = \int_{-\infty}^{+\infty} T(t)^2 \, dt$ = the power in the time-series.
$A(\tau) = A(-\tau)$: a symmetric function.
$A(\tau) = T(\tau) * T(-\tau)$: the signal convolved with the backwards-in-time signal.
26
[Figure: an example time-series T(t) and its autocorrelation A(τ).]
27
[Figure: another example. T(t) is a 2-point running average of a random number sequence; its autocorrelation A(τ) is shown. The decay of the tails is due to the finite length of the time-series.]
28
MatLab Code
t = dt*[0:N-1]';          % time axis
x = whatever;             % your length-N time-series goes here
y = xcorr(x);             % autocorrelation at lags -(N-1) through N-1
tau = dt*[1-N:N-1]';      % corresponding lag axis
29
Fourier Autocorrelation Theorem: the FT of the autocorrelation is the power spectrum of T(t):
$A(\omega) = |T(\omega)|^2$
30
$A(\omega) = \int_{-\infty}^{+\infty} \left[ \int_{-\infty}^{+\infty} T(t) \, T(t-\tau) \, dt \right] \exp(-i\omega\tau) \, d\tau = \int_{-\infty}^{+\infty} T(t) \int_{-\infty}^{+\infty} T(t-\tau) \exp(-i\omega\tau) \, d\tau \, dt$
Substitute $t' = t - \tau$, so $\tau = t - t'$ and $d\tau = -dt'$ (the swapped limits cancel the minus sign):
$= \int_{-\infty}^{+\infty} T(t) \int_{-\infty}^{+\infty} T(t') \exp\{-i\omega(t-t')\} \, dt' \, dt = \int_{-\infty}^{+\infty} T(t) \exp(-i\omega t) \, dt \; \int_{-\infty}^{+\infty} T(t') \exp(+i\omega t') \, dt' = T(\omega) \, T(\omega)^* = |T(\omega)|^2$
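A quick numerical check of the theorem with the discrete versions of these quantities; comparing xcorr against the inverse FFT of the power spectrum of a zero-padded series is my own sketch, and the variable names are assumptions:
N = 64;
x = randn(N,1);
a1 = xcorr(x);                    % autocorrelation at lags -(N-1) through N-1
S = abs(fft(x, 2*N-1)).^2;        % power spectrum of the zero-padded series
a2 = fftshift(ifft(S));           % inverse FT of the spectrum, re-centered on lag zero
% max(abs(a1 - real(a2))) is at the machine-precision level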
31
A stationary time-series has a well-defined autocorrelation out to arbitrarily large lag. A length-N section cut from the time-series has an autocorrelation that is non-zero only out to lag N; furthermore, the more you lag it, the more you 'lose the ends'.
32
At lag τ = 0 the section [T0 T1 T2 T3 T4 T5 T6 T7 T8] is multiplied sample-by-sample by an aligned copy of itself and the products are summed: 9 terms contribute. At lag τ = 4 the copy is offset by four samples, so the multiply-and-add picks up only 5 terms ... not as good an approximation ...
33
So another way of understanding the problem of estimating the spectrum of a stationary time-series: the autocorrelation is poorly estimated at large lags ...
34
Crosscorrelation. The auto-correlation is $A(\tau) = \int_{-\infty}^{+\infty} T(t) \, T(t-\tau) \, dt$; the cross-correlation, $X(\tau) = \int_{-\infty}^{+\infty} T_1(t) \, T_2(t-\tau) \, dt$, is the generalization of the autocorrelation to two different time-series. In MatLab, xcorr(x) is the autocorrelation and xcorr(x1,x2) is the crosscorrelation.
35
Filter design and autocorrelation
36
Recall this formulation of the convolution equation y = f * x:
$\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} = \begin{bmatrix} x_0 & 0 & 0 & \cdots & 0 \\ x_1 & x_0 & 0 & \cdots & 0 \\ x_2 & x_1 & x_0 & \cdots & 0 \\ \vdots & & & \ddots & \\ x_N & \cdots & x_2 & x_1 & x_0 \end{bmatrix} \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_N \end{bmatrix}$
In matrix form this is the equation $y = G f$; the least-squares estimate of f involves the two matrices $G^T G$ and $G^T y$.
37
But $G^T G$ is just the columns of G dotted with each other, and $G^T y$ is just the columns of G dotted with y. By inspection, $[G^T G]_{i,j}$ is the autocorrelation of x at lag i-j, and $[G^T y]_i$ is the crosscorrelation of x and y at lag i.
38
So the normal equations $[G^T G] f = G^T y$ read
$\begin{bmatrix} A(0) & A(1) & A(2) & \cdots \\ A(1) & A(0) & A(1) & \cdots \\ A(2) & A(1) & A(0) & \cdots \\ \vdots & & & \ddots \\ A(N) & A(N-1) & A(N-2) & \cdots \end{bmatrix} \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_N \end{bmatrix} = \begin{bmatrix} X(0) \\ X(1) \\ X(2) \\ \vdots \\ X(N) \end{bmatrix}$
So least-squares filter design relies very heavily on knowledge of the autocorrelation function of a time-series. To find a filter of length N, you need to know the first N-1 lags of the autocorrelation function.
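A minimal MatLab sketch of this design step, approximating $G^T G$ and $G^T y$ by the correlations above (end effects ignored); the names x, y, and the filter length L are assumptions for illustration:
L = 32;                          % filter length
axx = xcorr(x, L-1);             % autocorrelation of x at lags -(L-1) through L-1
axy = xcorr(y, x, L-1);          % crosscorrelation of y with x
A = toeplitz(axx(L:2*L-1));      % [G'G]: Toeplitz matrix built from A(0)..A(L-1)
b = axy(L:2*L-1);                % [G'y]: X(0)..X(L-1)
f = A \ b;                       % least-squares filter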
39
Now back to spectral analysis. Suppose that we want to compute the spectrum of a stationary time-series x(t). We think we know only the first N-1 autocorrelation coefficients well. Strategy: compute a filter of length N that tells you something about the spectrum of x(t).
40
The Autoregressive (AR) Spectral Method. Assume that a stationary time-series x(t) is created by a filter S(t) acting on uncorrelated random noise n(t): $x(t) = S(t) * n(t)$. In the Fourier domain, $x(\omega) = S(\omega) \, n(\omega)$ and $|x(\omega)|^2 = |S(\omega)|^2 \, |n(\omega)|^2$.
41
But noise has a 'white' spectrum, that is, one that is fluctuating but on-average constant. So $|x(\omega)|^2 \propto |S(\omega)|^2$. The AR method chooses S(t) so that its inverse, $S^{inv}(t)$, is a short filter, f.
42
$x(t) = S(t) * n(t) = f^{inv}(t) * n(t)$, or $x(t) * f(t) = n(t)$. The least-squares equations are $\sum_j A_{ij} f_j = X_i = 0$, with $A_{ij}$ the autocorrelation of x at lag (i-j) and $X_i$ the crosscorrelation of x and n at lag i. But the cross-correlation of anything with random noise is zero.
43
In order to be able to solve this equation, you need to impose some constraint on f, say $f_0 = 1$. So the least-squares equation is $\sum_j A_{ij} f_j = 0$ with the constraint $f_0 = 1$. Once you get f, the spectrum of x(t) is computed as $|x(\omega)|^2 = |f(\omega)|^{-2}$.
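A hedged sketch of this recipe, assuming x is the time-series and p is the chosen filter length; these names, and the use of MatLab's 'biased' autocorrelation estimate, are assumptions for illustration:
p = 64;                              % filter length
a = xcorr(x, p, 'biased');           % autocorrelation estimates at lags -p through p
A = toeplitz(a(p+1:2*p));            % matrix of A(0)..A(p-1)
f = [1; A \ (-a(p+2:2*p+1))];        % solve sum_j A(i-j) f_j = 0 with f0 = 1
Nfft = 1024;
Sar = 1 ./ abs(fft(f, Nfft)).^2;     % AR spectrum estimate, |f(w)|^-2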
44
Example: the LGA temperature time-series. [Figure: the LGA time-series, a Hanning taper, and the tapered time-series.]
45
[Figure: the AR spectrum with filter length 256, compared with the standard spectrum.]
46
Phase spectra. If we write the Fourier transform as $C(\omega) = |C(\omega)| \exp\{ -i \phi(\omega) \}$, the quantity $\phi(\omega)$ is called the phase spectrum: $\phi(\omega) = -\tan^{-1}( C_{imag} / C_{real} )$.
47
Shifting a spike. We've seen something like this before ... if we have a time-series p(t) and want to shift it in time by $t_0$, then we multiply its FT by $\exp(-i\omega t_0)$.
48
[Figure: p(t), and ifft(exp(-iωt0)·fft(p(t))) with t0 = 50Δt, plotted against time t.]
49
Shifting a spike. Suppose we take p(t) to be a spike at the origin. The FT of a spike at the origin is just unity, $|C(\omega)| = 1$. So the FT of a shifted spike is $\exp(-i\omega t_0)$, and a shifted spike has a phase spectrum of $\phi(\omega) = \omega t_0$ ... this is called a phase ramp.
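A minimal sketch of the shift performed this way in MatLab, assuming an even N, dt = 1, and a shift of 50 samples (the names and values are illustrative):
N = 128; dt = 1;
p = zeros(N,1); p(1) = 1;                        % spike at the origin
t0 = 50*dt;                                      % desired time shift
w = (2*pi/(N*dt)) * [0:N/2, -N/2+1:-1]';         % DFT angular frequencies
pshift = real(ifft(exp(-1i*w*t0) .* fft(p)));    % spike shifted to t = t0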
50
But suppose we wanted to shift each frequency by a different amount? We might imagine that at position x = 0 the time-series is a spike, but that at a position x > 0 each frequency has been shifted by $t_0(\omega) = x / v(\omega)$, where $v(\omega)$ is a frequency-dependent velocity.
51
[Figure: example. The velocity $v(\omega) = 1 + \exp(-|\omega|/c)$ with $c = 2\omega_{ny}$, and the resulting phase spectra φ(ω) at x = 100 and x = 200.]
52
[Figure: the time-series p(t, x=0), p(t, x=100), and p(t, x=200); the initial pulse becomes a dispersed wave train.]
53
Attempt to recover the phase spectrum from the time-series: the expected phase is $\phi(\omega) = \omega x / v(\omega)$ with x = 100, but the estimate suffers from the phase wrapping problem. [Figure: φ = atan2(imag(r), real(r)) with r = fft(p100)./fft(p0).]
54
Recovering the phase spectrum is tricky. The problem is that the arc tangent cycles between -π and +π, whereas the phase φ(ω) can increase beyond the -π to +π range. Even the best phase-unwrapping algorithms are unreliable.
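For completeness, a small sketch of that phase estimate using MatLab's built-in unwrap; the series names p0 and p100 follow the figure above and are otherwise assumptions:
r = fft(p100) ./ fft(p0);                % spectral ratio between the x = 100 and x = 0 records
phi_wrapped = atan2(imag(r), real(r));   % wrapped phase, confined to (-pi, pi]
phi = unwrap(phi_wrapped);               % attempt to unwrap; can still fail for noisy or rapidly varying phase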