Stochastic Volatility Models: Bayesian Framework


Haolan Cai

Introduction Idea: model returns through their volatility. Important: the model must capture the persistence of volatility (i.e. volatility clustering) along with other characteristics of returns. Use: a class of Hidden Markov Models (HMMs) known as Stochastic Volatility Models (SV models).

Basic Model y_t = exp(x_t / 2) ε_t, ε_t ~ N(0, 1), with x_t = μ + φ(x_{t-1} − μ) + η_t, η_t ~ N(0, v). Here θ = (φ, v) is the parameter space for the autoregressive process of order 1 (i.e. linear) driving the latent log-volatility x_t, and φ is the persistence of the model.
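A minimal simulation sketch of this model in Python (illustrative, not code from the talk; it assumes v denotes the variance of the AR(1) innovations, and reuses the parameter values reported later in the Results slides):

```python
import numpy as np

def simulate_sv(n, mu, phi, v, seed=0):
    """Simulate x_t = mu + phi*(x_{t-1} - mu) + eta_t, eta_t ~ N(0, v),
    and y_t = exp(x_t / 2) * eps_t, eps_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mu + rng.normal(0.0, np.sqrt(v / (1.0 - phi**2)))  # stationary start
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0.0, np.sqrt(v))
    y = np.exp(x / 2.0) * rng.normal(0.0, 1.0, size=n)
    return x, y

x, y = simulate_sv(1000, mu=0.0037, phi=0.956, v=0.4150)
```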

Transformation of Model The previous model is non-linear, which creates complications. Applying the transformation z_t = log(y_t^2), we get the nice linear form z_t = x_t + w_t, where w_t = log(ε_t^2) is an error term following a log chi-squared (log χ²_1) distribution.
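In code the transformation is one line; the tiny offset inside the log is a standard practical guard against exactly-zero returns and is an assumption of this sketch, not something stated on the slides:

```python
import numpy as np

def log_squared(y, offset=1e-8):
    """z_t = log(y_t^2), which linearizes the SV observation equation.
    The offset avoids log(0) when a return is exactly zero."""
    return np.log(np.square(y) + offset)
```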

The Problem Child The error term w_t = log(ε_t^2) does not have a closed form from which it is easy to sample. However, it can be accurately approximated by a discrete mixture of J normals; in this case the optimal J is equal to 7 (Kim, Shephard and Chib, 1998).
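For reference, a sketch of that approximation using the weights, means, and variances tabulated in Kim, Shephard and Chib (1998); by their convention the tabulated means are shifted by −1.2704, the mean of log χ²_1:

```python
import numpy as np

# 7-component normal mixture approximating the log chi^2_1 density,
# with weights/means/variances as tabulated in Kim, Shephard and Chib (1998).
KSC_PROB = np.array([0.00730, 0.10556, 0.00002, 0.04395,
                     0.34001, 0.24566, 0.25750])
KSC_MEAN = np.array([-10.12999, -3.97281, -8.56686, 2.77786,
                     0.61942, 1.79518, -1.08819]) - 1.2704
KSC_VAR = np.array([5.79596, 2.61369, 5.17950, 0.16735,
                    0.64009, 0.34023, 1.26261])

def sample_log_chi2(size, rng=np.random.default_rng()):
    """Draw from the mixture approximation to w_t = log(eps_t^2)."""
    k = rng.choice(len(KSC_PROB), size=size, p=KSC_PROB)
    return rng.normal(KSC_MEAN[k], np.sqrt(KSC_VAR[k]))
```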

Bayesian Framework With the mixture approximation, all the parameters have tractable distributions from which they can be sampled using a Gibbs sampling algorithm. We use semi-informative priors (see Initial Conditions) with hyperparameters loosely developed from the data; this imposes some, but little, structure on the sampling. The algorithm was run for 500 iterations with a burn-in period of 50.
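As one concrete example of such a tractable distribution: given the sampled states, the innovation variance v has an inverse-gamma full conditional under a conjugate inverse-gamma prior. A sketch follows; the hyperparameters a0 and b0 are illustrative placeholders, not the values used in the talk:

```python
import numpy as np

def sample_v(x, mu, phi, a0=2.5, b0=0.25, rng=np.random.default_rng()):
    """Draw v from its full conditional IG(a0 + (n-1)/2, b0 + SSR/2),
    given the latent states x and an IG(a0, b0) prior."""
    resid = x[1:] - mu - phi * (x[:-1] - mu)   # AR(1) residuals
    shape = a0 + 0.5 * resid.size
    scale = b0 + 0.5 * np.sum(resid**2)
    return scale / rng.gamma(shape)  # scale / Gamma(shape, 1) is an IG(shape, scale) draw
```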

The Problem Child (again) To sample the latent states, we draw from the mixture-of-normals representation using a Forward Filtering, Backward Sampling (FFBS) algorithm: a Kalman filter is applied forward from t = 0 to t = n, and the states (x_n, x_{n-1}, …, x_0) are then simulated in backward order. The reason for this more involved block-sampling scheme is the high autoregressive dependence in this type of data: φ is close to 1, so updating the states one at a time would mix poorly.
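A minimal scalar FFBS sketch for the linearized model, assuming the mixture component at each time point has already been drawn, so obs_mean[t] and obs_var[t] are the mean and variance of the selected normal component (e.g., from the KSC table above); this is an illustration, not the talk's actual code:

```python
import numpy as np

def ffbs(z, obs_mean, obs_var, mu, phi, v, rng=np.random.default_rng()):
    """Sample x_{0:n} | z_{0:n} for:
    z_t = x_t + obs_mean[t] + w_t,  w_t ~ N(0, obs_var[t])
    x_t = mu + phi*(x_{t-1} - mu) + eta_t,  eta_t ~ N(0, v)."""
    n = len(z)
    m, P = np.empty(n), np.empty(n)  # filtered means / variances
    a, R = np.empty(n), np.empty(n)  # one-step predicted means / variances

    # Forward pass: Kalman filter, started at the stationary distribution.
    a[0], R[0] = mu, v / (1.0 - phi**2)
    for t in range(n):
        if t > 0:
            a[t] = mu + phi * (m[t - 1] - mu)
            R[t] = phi**2 * P[t - 1] + v
        K = R[t] / (R[t] + obs_var[t])                 # Kalman gain
        m[t] = a[t] + K * (z[t] - obs_mean[t] - a[t])
        P[t] = (1.0 - K) * R[t]

    # Backward pass: sample x_n, then x_{n-1}, ..., x_0.
    x = np.empty(n)
    x[-1] = rng.normal(m[-1], np.sqrt(P[-1]))
    for t in range(n - 2, -1, -1):
        B = phi * P[t] / R[t + 1]
        x[t] = rng.normal(m[t] + B * (x[t + 1] - a[t + 1]),
                          np.sqrt(P[t] - B * phi * P[t]))
    return x
```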

Initial Conditions For the mixture of normals, 7 components are chosen to fit the log chi-squared distribution. For the other parameters, initial values were chosen to cover the parameter space sufficiently, so as to be semi-informative but not restrictive. For example, the prior parameters for μ are g and G, where g is the mean and G the standard deviation; here they are chosen to be 0 and 9, respectively.
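In code, the slide's prior for μ amounts to the following (drawing an initial value from the prior is one natural choice, assumed here):

```python
import numpy as np

g, G = 0.0, 9.0               # prior mean and standard deviation for mu
rng = np.random.default_rng()
mu_init = rng.normal(g, G)    # initialize mu with a draw from its N(g, G^2) prior
```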

Data 1-minute prices for General Electric (GE) and Intel Corporation (INTC). GE: April 9, 2007, 9:35 am to January 24, 2008, 3:59 pm. Daily returns were used for the SV model.
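A hypothetical sketch of turning the 1-minute prices into daily returns (the file and column names are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical layout: one row per minute, with a timestamp and a price column.
prices = pd.read_csv("ge_1min.csv", parse_dates=["timestamp"], index_col="timestamp")
daily_close = prices["price"].resample("1D").last().dropna()  # last price each day
daily_ret = np.log(daily_close).diff().dropna()               # daily log returns
```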

Checking Autocorrelation Structure
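One way to reproduce this check (continuing with daily_ret from the sketch above): raw returns should show little serial correlation, while squared returns exhibit the persistence that motivates an SV model.

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(daily_ret, ax=axes[0], title="ACF of daily returns")
plot_acf(daily_ret**2, ax=axes[1], title="ACF of squared daily returns")
plt.tight_layout()
plt.show()
```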

Results φ is steady around 0.956

Results: μ = 0.0037

Results: v = 0.4150

Further Analysis Try building in autoregressive structure of higher order. Allow J, the number of normals used to fit the error term, to vary. What kind of predictive value does this model produce for stock returns? Does using higher-frequency data improve predictive performance and/or fit?