Estimation of the spectral density function

The spectral density function, f(λ) The spectral density function, f(λ), is a symmetric function defined on the interval [−π, π] satisfying f(λ) ≥ 0 and σ(0) = ∫_{−π}^{π} f(λ) dλ. The spectral density function can be calculated from the autocovariance function, f(λ) = (1/2π) Σ_{h=−∞}^{∞} σ(h) e^{−iλh}, and vice versa: σ(h) = ∫_{−π}^{π} e^{iλh} f(λ) dλ.
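The inversion pair between f(λ) and σ(h) can be checked numerically. The sketch below (in Python, though the slides later use R; the AR(1) example, the truncation point H, and all variable names are illustrative assumptions) recovers the AR(1) spectral density from its autocovariances:

```python
import numpy as np

# Illustrative check (not from the slides): for an AR(1) process
# x_t = phi*x_{t-1} + u_t with Var(u_t) = sigma2, the autocovariance is
# sigma(h) = sigma2 * phi**|h| / (1 - phi**2), and summing
# sigma(h)*exp(-i*lam*h)/(2*pi) over lags recovers the known spectral density.
phi, sigma2 = 0.6, 1.0
H = 200                                  # truncation point for the lag sum
h = np.arange(-H, H + 1)
acvf = sigma2 * phi ** np.abs(h) / (1 - phi ** 2)

lam = 1.0                                # any frequency in [-pi, pi]
f_from_acvf = np.real(np.sum(acvf * np.exp(-1j * lam * h))) / (2 * np.pi)

# Closed-form AR(1) spectral density, for comparison:
f_closed = sigma2 / (2 * np.pi * (1 - 2 * phi * np.cos(lam) + phi ** 2))
assert abs(f_from_acvf - f_closed) < 1e-8
```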

Some complex number results: Use Euler's formula, e^{iλ} = cos λ + i sin λ, and the geometric sum Σ_{t=1}^{T} e^{iλt} = e^{iλ}(1 − e^{iλT})/(1 − e^{iλ}) for e^{iλ} ≠ 1.

Expectations of Linear and Quadratic forms of a weakly stationary Time Series

Expectations, Variances and Covariances of Linear forms

Theorem Let {x_t : t ∈ T} be a weakly stationary time series with mean μ and autocovariance function σ(h). Then E[Σ_{t=1}^{T} a_t x_t] = μ Σ_{t=1}^{T} a_t and Cov(Σ_{t=1}^{T} a_t x_t, Σ_{t=1}^{T} b_t x_t) = Σ_{r=−(T−1)}^{T−1} σ(r) Σ_{t∈S_r} a_t b_{t+r}, where S_r = {1, 2, ..., T−r} if r ≥ 0, S_r = {1−r, 2−r, ..., T} if r ≤ 0.

Proof  

Also since   Q.E.D.

Theorem Let {x_t : t ∈ T} be a weakly stationary time series. Then

Expectations, Variances and Covariances of Linear Forms: Summary

Theorem Let {x_t : t ∈ T} be a weakly stationary time series with mean μ and autocovariance function σ(h). Then E[Σ_{t=1}^{T} a_t x_t] = μ Σ_{t=1}^{T} a_t and Cov(Σ_{t=1}^{T} a_t x_t, Σ_{t=1}^{T} b_t x_t) = Σ_{r=−(T−1)}^{T−1} σ(r) Σ_{t∈S_r} a_t b_{t+r}, where S_r = {1, 2, ..., T−r} if r ≥ 0, S_r = {1−r, 2−r, ..., T} if r ≤ 0.

Theorem Let {x_t : t ∈ T} be a weakly stationary time series. Let L_1 = Σ_{t=1}^{T} a_t x_t and L_2 = Σ_{t=1}^{T} b_t x_t. Then E[L_1] = μ Σ_{t=1}^{T} a_t, E[L_2] = μ Σ_{t=1}^{T} b_t, and Cov(L_1, L_2) = Σ_{r=−(T−1)}^{T−1} σ(r) Σ_{t∈S_r} a_t b_{t+r}, where S_r = {1, 2, ..., T−r} if r ≥ 0, S_r = {1−r, 2−r, ..., T} if r ≤ 0.

Expectations, Variances and Covariances of Quadratic Forms

Theorem Let {x_t : t ∈ T} be a weakly stationary time series. Then

and

and S_r = {1, 2, ..., T−r} if r ≥ 0, S_r = {1−r, 2−r, ..., T} if r ≤ 0, and κ(h, r, s) = the fourth-order cumulant = E[(x_t − μ)(x_{t+h} − μ)(x_{t+r} − μ)(x_{t+s} − μ)] − [σ(h)σ(r−s) + σ(r)σ(h−s) + σ(s)σ(h−r)]. Note that κ(h, r, s) = 0 if {x_t : t ∈ T} is normal.

Theorem Let {x_t : t ∈ T} be a weakly stationary time series. Then

where and

Examples The sample mean

Thus and

Also

and where

Thus Compare with

Basic property of the Fejér kernel: if g(•) is a continuous function, then (1/2π) ∫_{−π}^{π} F_T(λ) g(λ) dλ → g(0) as T → ∞. Thus

The sample autocovariance function The sample autocovariance function is defined by: C_x(h) = (1/T) Σ_{t=1}^{T−|h|} (x_t − x̄)(x_{t+|h|} − x̄), or, if μ is known, C_x(h) = (1/T) Σ_{t=1}^{T−|h|} (x_t − μ)(x_{t+|h|} − μ).
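As a minimal illustration of this definition (a Python sketch; the function name and the toy series are hypothetical), note that the divisor is 1/T at every lag, so C_x(0) is the sample variance with divisor T:

```python
import numpy as np

# Minimal sketch of the sample autocovariance function defined above,
# C_x(h) = (1/T) * sum_{t=1}^{T-|h|} (x_t - xbar)(x_{t+|h|} - xbar).
# The 1/T divisor (rather than 1/(T-h)) matches the definition in the text.
def sample_acvf(x, h):
    x = np.asarray(x, dtype=float)
    T, h = len(x), abs(h)
    xbar = x.mean()
    return np.sum((x[:T - h] - xbar) * (x[h:] - xbar)) / T

x = np.array([2.0, 4.0, 6.0, 4.0])
assert abs(sample_acvf(x, 0) - np.var(x)) < 1e-12   # C_x(0) = sample variance
```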

Theorem Assume μ is known and the time series is normal. Then: E[C_x(h)] = σ(h),

and

Proof Assume μ is known and the time series is normal. Then: and

and

where

since

hence

Thus

and Finally

Where

Thus

Expectations, Variances and Covariances of Linear Forms: Summary

Theorem Let {x_t : t ∈ T} be a weakly stationary time series with mean μ and autocovariance function σ(h). Then E[Σ_{t=1}^{T} a_t x_t] = μ Σ_{t=1}^{T} a_t and Cov(Σ_{t=1}^{T} a_t x_t, Σ_{t=1}^{T} b_t x_t) = Σ_{r=−(T−1)}^{T−1} σ(r) Σ_{t∈S_r} a_t b_{t+r}, where S_r = {1, 2, ..., T−r} if r ≥ 0, S_r = {1−r, 2−r, ..., T} if r ≤ 0.

Theorem Let {x_t : t ∈ T} be a weakly stationary time series. Let L_1 = Σ_{t=1}^{T} a_t x_t and L_2 = Σ_{t=1}^{T} b_t x_t. Then Cov(L_1, L_2) = Σ_{r=−(T−1)}^{T−1} σ(r) Σ_{t∈S_r} a_t b_{t+r}.

Expectations, Variances and Covariances of Quadratic Forms

Theorem Let {x_t : t ∈ T} be a weakly stationary time series. Then

and

and S_r = {1, 2, ..., T−r} if r ≥ 0, S_r = {1−r, 2−r, ..., T} if r ≤ 0, and κ(h, r, s) = the fourth-order cumulant = E[(x_t − μ)(x_{t+h} − μ)(x_{t+r} − μ)(x_{t+s} − μ)] − [σ(h)σ(r−s) + σ(r)σ(h−s) + σ(s)σ(h−r)]. Note that κ(h, r, s) = 0 if {x_t : t ∈ T} is normal.

Theorem Let {x_t : t ∈ T} be a weakly stationary time series. Then

Estimation of the spectral density function

The Discrete Fourier Transform

Let x_1, x_2, x_3, ..., x_T denote T observations on a univariate time series with zero mean. (If the series has non-zero mean, one uses x_t − x̄ in place of x_t.) Also assume that T = 2m + 1 is odd. Then

where λ_k = 2πk/T and k = 0, 1, 2, ..., m.

The Discrete Fourier transform: X_k, k = 0, 1, 2, ..., m.

Note:

Since

Thus

Summary: The Discrete Fourier transform X_k, k = 0, 1, 2, ..., m.
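A sketch of these transforms in Python. The 1/√T normalization below is an assumption, since the transcript omits the defining formula; other conventions differ only by a constant factor:

```python
import numpy as np

# Sketch of the discrete Fourier transform at the Fourier frequencies
# lam_k = 2*pi*k/T, k = 0, 1, ..., m, for T = 2m + 1 observations.
# The 1/sqrt(T) normalization is an illustrative assumption.
rng = np.random.default_rng(0)
T = 21                                    # T = 2m + 1 with m = 10
m = (T - 1) // 2
x = rng.normal(size=T)
x = x - x.mean()                          # use x_t - xbar if the mean is unknown

t = np.arange(1, T + 1)
lam = 2 * np.pi * np.arange(m + 1) / T    # lam_k = 2*pi*k/T
X = np.array([np.sum(x * np.exp(-1j * lk * t)) for lk in lam]) / np.sqrt(T)

# X_0 is proportional to the sample mean, which is zero after centering:
assert abs(X[0]) < 1e-10
```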

Theorem E[X_k] = 0, with λ_k = 2π(k/T) and λ_h = 2π(h/T),

where

Proof Note Thus

Thus where

Thus Also

with θ = 2π(k/T) + λ and φ = 2π(h/T) + λ

Thus and

Defn: The Periodogram I(λ_k), k = 0, 1, 2, ..., m, with λ_k = 2πk/T.
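Since the transcript omits the defining formula, the sketch below assumes the common normalization I(λ_k) = (1/2πT) |Σ_t x_t e^{−iλ_k t}|²; other texts differ by a constant factor. A Parseval-type identity then ties the ordinates back to the sample variance:

```python
import numpy as np

# Sketch of the periodogram at the Fourier frequencies lam_k = 2*pi*k/T,
# under the assumed normalization
#   I(lam_k) = (1/(2*pi*T)) * |sum_t x_t * exp(-i*lam_k*t)|^2,
# which makes the periodogram a raw (inconsistent) estimate of f(lam).
def periodogram(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # remove the sample mean first
    T = len(x)
    m = (T - 1) // 2                      # T = 2m + 1 assumed odd
    t = np.arange(1, T + 1)
    lam = 2 * np.pi * np.arange(1, m + 1) / T
    I = np.array([np.abs(np.sum(x * np.exp(-1j * lk * t))) ** 2
                  for lk in lam]) / (2 * np.pi * T)
    return lam, I

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 2.0, 1.0])   # T = 7 toy observations
lam, I = periodogram(x)
T = len(x)
# Parseval-type check: the ordinates jointly recover the sample variance.
assert abs((4 * np.pi / T) * I.sum() - np.var(x)) < 1e-10
```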

Periodogram for the sunspot data

note:

Theorem

In addition: if λ_k ≠ 0, then 2I(λ_k)/f(λ_k) is asymptotically chi-square with 2 d.f.; if λ_k ≠ λ_h, then I(λ_k) and I(λ_h) are asymptotically independent.

Proof Note Let

Recall the basic property of the Fejér kernel: if g(•) is a continuous function, then (1/2π) ∫_{−π}^{π} F_T(λ) g(λ) dλ → g(0) as T → ∞. The remainder of the proof is similar.

Consistent Estimation of the Spectral Density function f(l)

Smoothed Periodogram Estimators

Defn: The Periodogram: k = 0,1,2, ..., m

Properties: if λ_k ≠ 0, 2I(λ_k)/f(λ_k) is asymptotically chi-square with 2 d.f.; if λ_k ≠ λ_h, I(λ_k) and I(λ_h) are asymptotically independent.

Spectral density Estimator

Properties: asymptotically unbiased if λ_k ≠ 0; the variance of I(λ_k) does not shrink as T grows. The second property states that I(λ_k) is not a consistent estimator of f(λ).

Periodogram Spectral density Estimator Properties: asymptotically unbiased if λ_k ≠ 0; the variance does not decrease with T. The second property states that I(λ_k) is not a consistent estimator of f(λ).

Examples of using R

Example 1 – Sunspot data

Open the Data
> sunData <- read.table("C:/Users/bill/Desktop/Sunspot.txt", header=TRUE)
Set the vector y to the data "no"
> y <- sunData[,"no"]
Draw the raw periodogram. Two commands achieve this:
> spectrum(y, method="pgram")
or
> spec.pgram(y, taper=0, log="yes")

> spectrum(y,method="pgram") yields

> spec.pgram(y, taper=0, log="no") yields

> spec.pgram(y, taper=0, log=“yes") yields

Drawing the smoothed periodogram using a Daniell window This is achieved using the command > spec.pgram(y, spans=9, taper=0, log="yes")

If one does not want the log scale on the y-axis, then use the command > spec.pgram(y, spans=9, taper=0, log="no")

If one want to use the Daniel window on two passes. Use the command. > spec.pgram(y, spans= c(9,9) , taper=0, log=“yes")

Periodogram Spectral density Estimator Properties: asymptotically unbiased if λ_k ≠ 0; the variance does not decrease with T. The second property states that I(λ_k) is not a consistent estimator of f(λ).

Smoothed Estimators of the spectral density

The Daniell Estimator

Properties 1. 2. 3.

Now let T  ∞, d  ∞ such that d/T  0 Now let T  ∞, d  ∞ such that d/T  0. Then we obtain asymptotically unbiased and consistent estimators, that is

Choosing the Daniell option in SPSS

(Smoothed periodogram output for k = 5 and k = 9.)

Other smoothed estimators

More generally consider the Smoothed Periodogram where and

Theorem (Asymptotic behaviour of Smoothed Periodogram Estimators) Let where {u_t} are independent random variables with mean 0 and variance σ², with Let d_T be an increasing sequence such that and

Then and Proof (see Fuller, page 292).

Weighted Covariance Estimators Note where

Proof

The Weighted Covariance Estimator f̂(λ) = (1/2π) Σ_{|h| ≤ m} w_m(h) C_x(h) e^{−iλh}, where {w_m(h) : h = 0, ±1, ±2, ...} is a sequence of weights such that: i) 0 ≤ w_m(h) ≤ w_m(0) = 1; ii) w_m(−h) = w_m(h); iii) w_m(h) = 0 for |h| > m.
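A sketch of this estimator with Bartlett weights w_m(h) = 1 − |h|/m as the illustrative choice (Python; the white-noise test series and the value of m are arbitrary assumptions). Because both the sample autocovariances and the Bartlett weights form positive semidefinite sequences, the resulting estimate is nonnegative at every frequency:

```python
import numpy as np

# Sketch of the weighted covariance (lag-window) estimator
#   f_hat(lam) = (1/(2*pi)) * sum_{|h| <= m} w_m(h) C_x(h) exp(-i*lam*h),
# with Bartlett weights w_m(h) = 1 - |h|/m as an illustrative choice.
def acvf(x, h):
    x = np.asarray(x, dtype=float)
    T, h = len(x), abs(h)
    xbar = x.mean()
    return np.sum((x[:T - h] - xbar) * (x[h:] - xbar)) / T

def lag_window_estimate(x, lam, m):
    hs = np.arange(-(m - 1), m)            # w_m(h) = 0 for |h| >= m
    w = 1.0 - np.abs(hs) / m               # Bartlett weights
    c = np.array([acvf(x, h) for h in hs])
    return np.real(np.sum(w * c * np.exp(-1j * lam * hs))) / (2 * np.pi)

rng = np.random.default_rng(1)
x = rng.normal(size=200)                   # arbitrary white-noise test series
f_hat = lag_window_estimate(x, lam=1.0, m=10)
# Product of two positive semidefinite sequences is positive semidefinite,
# so the Bartlett-weighted estimate cannot go negative:
assert np.isfinite(f_hat) and f_hat >= -1e-12
```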

The Spectral Window for this estimator is defined by: W_m(λ) = (1/2π) Σ_{|h| ≤ m} w_m(h) e^{−iλh}. Properties: i) W_m(λ) = W_m(−λ); ii) ∫_{−π}^{π} W_m(λ) dλ = w_m(0) = 1.

Note: using a Riemann-sum approximation, the weighted covariance estimator equals the Smoothed Periodogram Estimator.

Asymptotic behaviour for large T 1. 2. 3.

Examples, w_m(h) = w(h/m): 1. Bartlett: w(x) = 1 − |x| for |x| ≤ 1, and 0 otherwise.

2. Parzen: w(x) = 1 − 6x² + 6|x|³ for |x| ≤ 1/2, and w(x) = 2(1 − |x|)³ for 1/2 < |x| ≤ 1. 3. Blackman-Tukey: w(x) = 1 − 2a + 2a cos(πx), with a = 0.23 (Hamming), a = 0.25 (Hanning).
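The three window shapes can be written directly (a Python sketch; the Parzen formula is filled in from its standard definition, since the transcript drops it):

```python
import numpy as np

# Sketches of the three lag-window shapes w(x), x = h/m, listed above.
def bartlett(x):
    return max(0.0, 1.0 - abs(x))

def parzen(x):
    # Standard Parzen form (stated from the usual definition):
    x = abs(x)
    if x <= 0.5:
        return 1.0 - 6.0 * x**2 + 6.0 * x**3
    if x <= 1.0:
        return 2.0 * (1.0 - x) ** 3
    return 0.0

def blackman_tukey(x, a=0.25):             # a = 0.23 Hamming, 0.25 Hanning
    return 1.0 - 2.0 * a + 2.0 * a * np.cos(np.pi * x) if abs(x) <= 1 else 0.0

# All three equal 1 at x = 0 and (with a = 0.25) vanish at |x| = 1:
for w in (bartlett, parzen, blackman_tukey):
    assert abs(w(0.0) - 1.0) < 1e-12 and abs(w(1.0)) < 1e-12
```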

(Spectral windows compared: Daniell, Tukey, Parzen, Bartlett.)

Approximate Distribution and Consistency 1. 2. 3.

Note: If W_m(λ) is concentrated in a "peak" about λ = 0 and f(λ) is nearly constant over its width, then 1. and 2.

Confidence Limits in Spectral Density Estimation

Satterthwaite's Approximation: f̂(λ) ≈ c χ²(r), where c and r are chosen so that: 1. E[c χ²(r)] = cr = E[f̂(λ)]; 2. Var[c χ²(r)] = 2c²r = Var[f̂(λ)].

Thus r = 2(E[f̂(λ)])² / Var[f̂(λ)] = the equivalent degrees of freedom (EDF).

Now and Thus and

Confidence Limits for the Spectral Density function f(λ): Let χ²_{α/2}(r) and χ²_{1−α/2}(r) denote the upper and lower critical values of the Chi-square distribution with r d.f., i.e. P[χ²(r) > χ²_{α/2}(r)] = α/2 and P[χ²(r) < χ²_{1−α/2}(r)] = α/2. Then a (1 − α)100% confidence interval for f(λ) is: [r f̂(λ)/χ²_{α/2}(r), r f̂(λ)/χ²_{1−α/2}(r)].
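A numerical sketch of this interval (Python; the values r = 10, α = 0.05, and the estimate f̂ = 2.0 are hypothetical, with the chi-square critical values taken from a standard table):

```python
# Sketch of the chi-square interval: if r * f_hat / f(lam) is approximately
# chi-square with r d.f., a (1 - alpha)100% interval for f(lam) is
#   [ r*f_hat/chi2_upper , r*f_hat/chi2_lower ].
# The numbers below are illustrative; the critical values are the standard
# chi-square table entries for 10 d.f. and alpha = 0.05.
r = 10
chi2_upper = 20.483           # P[chi2(10) > 20.483] = 0.025
chi2_lower = 3.247            # P[chi2(10) < 3.247] = 0.025
f_hat = 2.0                   # hypothetical spectral estimate at some lam

ci = (r * f_hat / chi2_upper, r * f_hat / chi2_lower)
assert ci[0] < f_hat < ci[1]  # the interval brackets the point estimate
```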

Estimation of the spectral density function Summary

The spectral density function, f(λ) The spectral density function, f(λ), is a symmetric function defined on the interval [−π, π] satisfying f(λ) ≥ 0 and σ(0) = ∫_{−π}^{π} f(λ) dλ.

Periodogram Spectral density Estimator Properties: asymptotically unbiased if λ_k ≠ 0; the variance does not decrease with T. The second property states that I(λ_k) is not a consistent estimator of f(λ).

Smoothed Estimators of the spectral density

Smoothed Periodogram Estimators and The Daniell Estimator

The Weighted Covariance Estimator f̂(λ) = (1/2π) Σ_{|h| ≤ m} w_m(h) C_x(h) e^{−iλh}, where {w_m(h) : h = 0, ±1, ±2, ...} is a sequence of weights such that: i) 0 ≤ w_m(h) ≤ w_m(0) = 1; ii) w_m(−h) = w_m(h); iii) w_m(h) = 0 for |h| > m.

Choices for w_m(h) = w(h/m): 1. Bartlett: w(x) = 1 − |x| for |x| ≤ 1. 2. Parzen: w(x) = 1 − 6x² + 6|x|³ for |x| ≤ 1/2, and w(x) = 2(1 − |x|)³ for 1/2 < |x| ≤ 1. 3. Blackman-Tukey: w(x) = 1 − 2a + 2a cos(πx), with a = 0.23 (Hamming), a = 0.25 (Hanning).

The Spectral Window for this estimator is defined by: W_m(λ) = (1/2π) Σ_{|h| ≤ m} w_m(h) e^{−iλh}. Properties: i) W_m(λ) = W_m(−λ); ii) ∫_{−π}^{π} W_m(λ) dλ = w_m(0) = 1.

Note: using a Riemann-sum approximation, the weighted covariance estimator equals the Smoothed Periodogram Estimator.

Approximate Distribution and Consistency 1. 2. 3.

Note: If W_m(λ) is concentrated in a "peak" about λ = 0 and f(λ) is nearly constant over its width, then 1. and 2.

Confidence Limits for the Spectral Density function f(λ): Let χ²_{α/2}(r) and χ²_{1−α/2}(r) denote the upper and lower critical values of the Chi-square distribution with r d.f., i.e. P[χ²(r) > χ²_{α/2}(r)] = α/2 and P[χ²(r) < χ²_{1−α/2}(r)] = α/2. Then a (1 − α)100% confidence interval for f(λ) is: [r f̂(λ)/χ²_{α/2}(r), r f̂(λ)/χ²_{1−α/2}(r)].

Now and Thus and

Confidence Limits for the Spectral Density function f(λ): Then a (1 − α)100% confidence interval for f(λ) is: [r f̂(λ)/χ²_{α/2}(r), r f̂(λ)/χ²_{1−α/2}(r)].
