Two Nonlinear Models for Time Series
David A. Dickey, North Carolina State University
(joint with S. Hwang, Bank of Korea)

[Slide: map of the river gauge sites at Goldsboro (upstream) and Kinston (downstream)]

Model 1: Transfer Function

Upstream: G_t = log(Goldsboro flow); G_t* = deviation from its mean.
Downstream: K_t = log(Kinston flow).

Model: K_t = β0 + β1·λ(G*_{t-2})·G*_{t-1} + β2·(1 − λ(G*_{t-2}))·G*_{t-2} + Z_t

λ(G*) = exp(δ + γ·G*) / (1 + exp(δ + γ·G*))   (logistic, so 0 < λ(G*) < 1)

Noise: Z_t = α1·Z_{t-1} + α2·Z_{t-2} + α3·Z_{t-3} + α4·Z_{t-4} + e_t

A straightforward nonlinear regression problem!
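The mean part of this transfer function is easy to sketch in code. Below is a minimal illustration in Python (standing in for the SAS workflow actually used); all parameter values are hypothetical placeholders, and the AR(4) noise Z_t is omitted:

```python
import numpy as np

def lam(g_star, delta, gamma):
    """Logistic weight in (0, 1): exp(delta + gamma*G*) / (1 + exp(delta + gamma*G*))."""
    z = np.exp(delta + gamma * g_star)
    return z / (1.0 + z)

def kinston_mean(G_star, beta0, beta1, beta2, delta, gamma):
    """Mean of K_t: a mix of upstream flow at lags 1 and 2, with the mixing
    weight decided by the lag-2 upstream level (AR(4) noise omitted)."""
    G1 = G_star[1:-1]           # G*_{t-1}
    G2 = G_star[:-2]            # G*_{t-2}
    w = lam(G2, delta, gamma)   # 0 < w < 1
    return beta0 + beta1 * w * G1 + beta2 * (1.0 - w) * G2

# hypothetical upstream deviations and parameter values, for illustration only
G = np.array([0.5, -0.2, 0.8, 1.1, -0.4, 0.0])
K_hat = kinston_mean(G, beta0=1.0, beta1=0.6, beta2=0.4, delta=0.0, gamma=2.0)
```

In a full fit, the βs, δ, γ, and the AR coefficients would be estimated jointly by nonlinear least squares.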

ANOVA table: SSE = 2.25 with 385 df; corrected total SS = 366.
Parameter estimates with approximate 95% confidence intervals were reported for β0, β1, β2, δ, γ, and α1-α4 (the numeric values did not survive transcription).

[Figure: the dynamic weight shifting the model between the lag-1 and lag-2 upstream flows, with the dynamic constant]

AR(1):
Case 1: |ρ| < 1, "stationary"; normal limit distribution for the ρ estimator.
Case 2: ρ = 1, "random walk" / "unit root process"; limit distributions are nonstandard.
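The contrast between the two cases shows up in a quick simulation (an illustrative sketch; sample sizes and replication counts are arbitrary):

```python
import numpy as np

def simulate_ar1(rho, n, rng):
    """y_t = rho * y_{t-1} + e_t with standard normal shocks, started at 0."""
    y = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y

def ols_rho(y):
    """No-intercept least squares regression of y_t on y_{t-1}."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

rng = np.random.default_rng(0)
stationary = [ols_rho(simulate_ar1(0.8, 500, rng)) for _ in range(200)]
unit_root = [ols_rho(simulate_ar1(1.0, 500, rng)) for _ in range(200)]
# stationary case: estimates cluster roughly symmetrically near 0.8 (normal limit);
# unit-root case: estimates pile up just below 1 with a skewed, nonstandard distribution
```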

Model 2: ρ = f(Y_{t-1}). Can we span −1 < ρ < 1, or 0 < ρ < 1?

- Logistic: exp(λ·Y_{t-1}) / (exp(λ·Y_{t-1}) + 1), spanning (0, 1)
- Hyperbolic tangent: 2·(logistic) − 1 = (exp(λ·Y_{t-1}) − 1) / (exp(λ·Y_{t-1}) + 1), spanning (−1, 1)
- Related to "smooth transition" models (Tong, 1990)
- A harder problem than the transfer function. Why?
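The two candidate link functions are related by a simple identity: 2·logistic(x) − 1 equals (e^x − 1)/(e^x + 1), which is tanh(x/2). A quick numeric check:

```python
import numpy as np

def logistic(x):
    return np.exp(x) / (np.exp(x) + 1.0)

def htan(x):
    """(e^x - 1)/(e^x + 1): the logistic rescaled from (0, 1) to (-1, 1)."""
    return (np.exp(x) - 1.0) / (np.exp(x) + 1.0)

x = np.linspace(-8.0, 8.0, 1001)
assert np.allclose(htan(x), 2.0 * logistic(x) - 1.0)    # 2*logistic - 1
assert np.allclose(htan(x), np.tanh(x / 2.0))           # same thing as tanh(x/2)
assert logistic(x).min() > 0 and logistic(x).max() < 1  # stays in (0, 1)
assert htan(x).min() > -1 and htan(x).max() < 1         # stays in (-1, 1)
```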

Can make progress under H0: λ = 0 [so ρ(Y_{t-1}) = ρ0, a constant].
Estimates: use a Taylor series expansion. Let F_n be the derivative matrix for the hyperbolic tangent model; the estimates are then asymptotically N(0, G⁻¹·σ²), where G is the limit of F_n′F_n/n.
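This is standard nonlinear least squares machinery. A toy sketch (a one-parameter exponential mean standing in for the hyperbolic tangent model, with hypothetical values) of forming the derivative matrix and the linearized covariance σ²·(F′F)⁻¹:

```python
import numpy as np

def f(x, theta):
    """Toy nonlinear mean function (stand-in for the tanh-type model)."""
    return np.exp(theta * x)

def deriv_matrix(x, theta, eps=1e-6):
    """n x 1 matrix of central-difference derivatives of f with respect to theta."""
    return ((f(x, theta + eps) - f(x, theta - eps)) / (2.0 * eps))[:, None]

x = np.linspace(0.0, 1.0, 50)
theta_hat, sigma2 = 0.7, 0.04          # hypothetical estimate and error variance
F = deriv_matrix(x, theta_hat)
cov = sigma2 * np.linalg.inv(F.T @ F)  # Taylor-linearized covariance approximation
se = float(np.sqrt(cov[0, 0]))         # approximate standard error of theta_hat
```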

Example 1: (a, b) = (0, 1)

Example 2: (a, b) = (3, 3)

600 observations, (a, b) = (0, 1), fit with SAS PROC NLIN. (SAS is a registered trademark of SAS Institute, Cary, NC.)
Estimates and approximate 95% confidence limits were reported for the parameters A, B, and MU; the fit converged in 6 iterations. The (a, b) = (3, 3) case did not even converge.

Tong: the "skeleton" of the process, i.e., the recursion without the e's: y_t = ρ(y_{t-1})·y_{t-1}.
For (a, b) = (0, 1): |y·ρ(y)| < |y|, and y·ρ(y) is an even function of y.
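A sketch of the skeleton recursion, assuming the two-parameter form ρ(y) = (e^{a+by} − 1)/(e^{a+by} + 1) suggested by the fitted parameters A and B:

```python
import numpy as np

def rho(y, a=0.0, b=1.0):
    """State-dependent AR coefficient, hyperbolic tangent form (assumed parameterization)."""
    z = np.exp(a + b * y)
    return (z - 1.0) / (z + 1.0)

def skeleton(y0, nsteps, a=0.0, b=1.0):
    """Noise-free recursion y_t = rho(y_{t-1}) * y_{t-1}."""
    path = [float(y0)]
    for _ in range(nsteps):
        path.append(rho(path[-1], a, b) * path[-1])
    return np.array(path)

# for a=0, b=1: rho(y) = tanh(y/2), so y*rho(y) = y*tanh(y/2) is even in y
# and |y*rho(y)| < |y|, i.e. the skeleton shrinks toward 0
up = skeleton(5.0, 20)
down = skeleton(-5.0, 20)
```

Starting at +5 or −5 gives the same path after one step, which is the evenness of y·ρ(y) in action.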

The usual path to normality: stationarity + ergodicity.
One of Tong's conditions for ergodicity: there exist K > 0 and 0 < δ < 1 such that, from any starting value y_0, the skeleton satisfies |y_t| <= K·δ^t·|y_0|.
[Figure: skeleton ratios y_t/y_0 for (a, b) = (0, 1), one curve per starting value]

Hyperbolic tangent case (ρ increasing). Suppose such (K, δ) exist; then for some T, K·δ^T < 1. Let M = K·δ^T < 1 and B = M^{1/T}; note M < B < 1.
Pick y_{-1} with ρ(y_{-1}) > B, and set y_0 = y_{-1}·B^{-T} > y_{-1}.
Note 1: y > y_{-1} implies ρ(y) > B (monotonicity), so ρ(y_0) > B.
Note 2: B^t·y_0 = y_{-1}·B^{t-T} > y_{-1} for t <= T, so ρ(B^t·y_0) > B for t <= T.
Thus y_1 = ρ(y_0)·y_0 > B·y_0 > y_{-1}; y_2 = ρ(y_1)·y_1 > B²·y_0 > y_{-1}; and so on. But then y_T > B^T·y_0 = M·y_0 = K·δ^T·y_0, contradicting the bound!

[Figure: skeleton ratios y_t/y_0 for (a, b) = (2.0, 0.5) and for (a, b) = (−1.0, −0.8)]
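A numeric illustration of why the geometric bound fails in the increasing case (assuming ρ(y) = (e^{a+by} − 1)/(e^{a+by} + 1)): with (a, b) = (2.0, 0.5), ρ(y) is nearly 1 for large y, so a skeleton started far out barely moves, and no single pair (K, δ) can cover every starting value:

```python
import numpy as np

def rho(y, a, b):
    # (e^{a+by} - 1)/(e^{a+by} + 1), computed as tanh((a+by)/2) to avoid overflow
    return np.tanh((a + b * y) / 2.0)

def skeleton_ratio(y0, nsteps, a, b):
    """y_T / y_0 for the noise-free recursion y_t = rho(y_{t-1}) * y_{t-1}."""
    y = float(y0)
    for _ in range(nsteps):
        y = rho(y, a, b) * y
    return y / y0

# the farther out we start, the closer rho stays to 1 and the slower the decay
r10 = skeleton_ratio(10.0, 50, a=2.0, b=0.5)
r1000 = skeleton_ratio(1000.0, 50, a=2.0, b=0.5)
# r1000 is essentially 1 after 50 steps, so no fixed K and delta < 1 can give
# |y_t| <= K * delta^t * |y_0| uniformly over starting values
```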

Where is the "good" (a, b) region?

Symmetries: generate the hyperbolic tangent model with symmetric e_t, then flip the sign of the slope and of the errors. Since ρ with slope −b satisfies [exp(a − b·(−Y_{t-1})) − 1]/[exp(a − b·(−Y_{t-1})) + 1] = ρ(Y_{t-1}), we get −Y_t = ρ(Y_{t-1})·(−Y_{t-1}) − e_t. Now −e_t and e_t have the same distribution, so (a, b) in the "good" region implies its sign-flipped counterpart is also in the "good" region, and the distribution of the flipped estimate is the mirror image of the original (the remaining estimates are unchanged). For the ρ(|Y_{t-1}|) version, the symmetry pairs (a, b) with (−a, −b).
("Good" = proper confidence-interval coverage and a good convergence rate.)
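The mirror-image claim can be checked numerically. A sketch, assuming the two-parameter form ρ(y) = (e^{a+by} − 1)/(e^{a+by} + 1): flipping the sign of the slope b and of every shock exactly negates the simulated path.

```python
import numpy as np

def rho(y, a, b):
    # (e^{a+by} - 1)/(e^{a+by} + 1), written as tanh((a+by)/2) to avoid overflow
    return np.tanh((a + b * y) / 2.0)

def simulate(a, b, e):
    """Y_t = rho(Y_{t-1}) * Y_{t-1} + e_t, started at 0."""
    y = 0.0
    out = np.empty(len(e))
    for t, et in enumerate(e):
        y = rho(y, a, b) * y + et
        out[t] = y
    return out

rng = np.random.default_rng(1)
e = rng.standard_normal(300)
y = simulate(2.0, 0.5, e)
z = simulate(2.0, -0.5, -e)   # flipped slope sign, negated shocks
assert np.allclose(z, -y)     # the path is exactly mirrored
```

Since −e_t has the same distribution as e_t, the sampling distribution of the slope estimate under (a, −b) is the mirror image of the one under (a, b).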

Scaling: Y_t/σ = ρ(Y_{t-1})·(Y_{t-1}/σ) + e_t/σ, where ρ(Y_{t-1}) = [exp(bσ·(Y_{t-1}/σ)) − 1]/[exp(bσ·(Y_{t-1}/σ)) + 1]. So we can take e ~ (0, 1), with the slope rescaled to bσ.
Hwang's simulations on (−4, 4) x (−4, 4) suggest (for the hyperbolic tangent model) a "good" region of approximately 0 < a < 3 and −4 < b < 3 − 7a/3, along with the symmetric counterparts.

Example 1: N.C. weekly soybean prices (Prof. Nick Piggott), AR(1) using the hyperbolic tangent model.
The ANOVA table (Model, Error, Uncorrected Total: DF, sum of squares, mean square) and estimates with approximate 95% CIs for the parameters a, mu, and b were reported (numeric values did not survive transcription).

Example 2: Kinston log(flow), model 1: sinusoid & AR(2).

AR(2): the lag-2 coefficient is (minus) the product of the characteristic roots. Replace it with a bounded function: −1 < ρ(Y_{t-2}) < 1.
Fitted model, with Y_t = log(flow) after removing fitted sine and cosine seasonal terms (only the cosine coefficient, −.25, survives in the transcript):
Y_t = 1.53·Y_{t-1} + ρ(Y_{t-2})·Y_{t-2} + e_t
Estimates with approximate 95% CIs were reported for the parameters A, B, Mu, S1b, C1b, and D (numeric values did not survive transcription). Plug-in forecasts follow.
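The roots claim, spelled out: writing an AR(2) as Y_t = φ1·Y_{t-1} + φ2·Y_{t-2} + e_t, the characteristic roots m solve m² − φ1·m − φ2 = 0, so their product is −φ2 and their sum is φ1; bounding the lag-2 coefficient in (−1, 1) therefore bounds the product of the roots. A quick check (φ2 here is a hypothetical value; only φ1 = 1.53 comes from the fit):

```python
import numpy as np

phi1, phi2 = 1.53, -0.56            # phi2 is hypothetical, for illustration
m = np.roots([1.0, -phi1, -phi2])   # roots of m^2 - phi1*m - phi2 = 0

assert np.isclose(m[0] * m[1], -phi2)  # product of roots = -(lag-2 coefficient)
assert np.isclose(m[0] + m[1], phi1)   # sum of roots = lag-1 coefficient
assert np.all(np.abs(m) < 1)           # both roots inside the unit circle: stationary
```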

Characteristic roots: m² = 1.53·m + ρ(Y_{t-2}).
[Figure: forecasts and 95% intervals from 15,000 simulated futures; legend compares the plug-in forecast with the simulation forecast]
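The "simulated futures" idea can be sketched as follows (illustrative Python in place of the actual SAS run; the starting values and all parameters below are hypothetical stand-ins, not the fitted ones):

```python
import numpy as np

def rho(y, a, b):
    # state-dependent lag-2 coefficient, hyperbolic tangent form (assumed)
    return np.tanh((a + b * y) / 2.0)

def simulated_futures(y_tm1, y_t, phi1, a, b, sigma, horizon, nrep, rng):
    """Draw nrep future paths of Y_{T+h} = phi1*Y_{T+h-1} + rho(Y_{T+h-2})*Y_{T+h-2} + e,
    then return the mean path and a pointwise 95% interval."""
    paths = np.empty((nrep, horizon))
    for r in range(nrep):
        y2, y1 = y_tm1, y_t
        for h in range(horizon):
            y_new = phi1 * y1 + rho(y2, a, b) * y2 + sigma * rng.standard_normal()
            y2, y1 = y1, y_new
            paths[r, h] = y_new
    lo, hi = np.percentile(paths, [2.5, 97.5], axis=0)
    return paths.mean(axis=0), lo, hi

rng = np.random.default_rng(2)
mean_fc, lo, hi = simulated_futures(0.1, 0.2, phi1=0.5, a=0.0, b=1.0,
                                    sigma=0.1, horizon=8, nrep=2000, rng=rng)
```

The plug-in forecast instead iterates the fitted equation with e set to 0; comparing it with the simulation mean shows the bias the nonlinearity induces.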

Thanks! Questions?