Scenario Generation for the Asset Allocation Problem. Diana Roman, Gautam Mitra. EURO XXII, Prague, July 9, 2007.

The asset allocation problem
- An amount of money to invest; N stocks with known current prices S_1^0, …, S_N^0.
- Decision to take: how much to invest in each asset.
- Goal: a profit as high as possible after a certain time T.
- The stock prices (returns) at time T are not known: they are random variables (stochastic processes).
- x_i = fraction of wealth invested in asset i → a portfolio (x_1, …, x_N).
- R_i = the return of asset i at time T.
- The portfolio return at time T: R_x = x_1 R_1 + … + x_N R_N (also a random variable!).
- How to choose between portfolios? A modelling issue!

Mean-risk models for portfolio selection
- Mean-risk models: maximise expected value, minimise risk.
- Risk measure: Conditional Value-at-Risk (CVaR) = the expected value of losses in a prespecified proportion of worst cases.
- The optimisation problem:
  Min CVaR_α(R_x) over x_1, …, x_N, s.t. E(R_x) ≥ d, …   (1)
  (equivalently: Max (E(R_x), −CVaR_α(R_x)) over x_1, …, x_N)
- Confidence level α = 0.01 → consider the worst 1% of cases; for example, with 1000 equiprobable scenarios, CVaR_0.01 is the average loss over the 10 worst scenarios.

Scenario Generation
The (continuous) distribution of stock returns is approximated by a discrete multivariate distribution with a limited number of outcomes, so that (1) can be solved numerically: scenario generation.
→ a scenario set (single-period case) or a scenario tree (multi-period case).

Scenario Generation
S scenarios:
- p_i = the probability of scenario i occurring;
- r_ij = the return of asset j under scenario i.
The (continuous) distribution of (R_1, …, R_N) is replaced with a discrete one.

The mean-CVaR model
Scenarios → an LP (Rockafellar and Uryasev 2000):
  Min v + (1/α) Σ_i p_i z_i over x, v, z
  subject to: z_i ≥ −Σ_j r_ij x_j − v and z_i ≥ 0 for every scenario i; Σ_i p_i Σ_j r_ij x_j ≥ d; Σ_j x_j = 1, x_j ≥ 0,
where r_ij are the scenarios for the assets' returns.
We only solve an approximation of the original problem; the quality of the solution obtained is directly linked to the quality of the scenario generator ("garbage in, garbage out").
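As an illustration, a minimal sketch of this Rockafellar-Uryasev LP using scipy.optimize.linprog; the function name mean_cvar_lp and the long-only and budget constraints are my assumptions, not taken from the slides.

```python
import numpy as np
from scipy.optimize import linprog

def mean_cvar_lp(r, p, alpha=0.01, d=0.0):
    """Minimise CVaR_alpha of the portfolio loss -R_x subject to E(R_x) >= d.
    r: (S, N) scenario returns r_ij; p: (S,) scenario probabilities."""
    S, N = r.shape
    # decision vector: [x_1..x_N, v, z_1..z_S]; objective: v + (1/alpha) * sum_i p_i z_i
    c = np.concatenate([np.zeros(N), [1.0], p / alpha])
    # CVaR constraints: z_i >= -sum_j r_ij x_j - v   <=>   -r_i.x - v - z_i <= 0
    A_ub = np.hstack([-r, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    # expected-return floor: -sum_i p_i sum_j r_ij x_j <= -d
    A_ub = np.vstack([A_ub, np.concatenate([-(p @ r), [0.0], np.zeros(S)])])
    b_ub = np.append(b_ub, -d)
    # budget constraint: sum_j x_j = 1
    A_eq = np.concatenate([np.ones(N), [0.0], np.zeros(S)]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * N + [(None, None)] + [(0, None)] * S   # long-only, v free, z >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:N], res.fun                                     # optimal weights, CVaR estimate
```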

The quality of scenario generators
The goal of scenario models is to get a good approximation of the "true" optimal value and of the "true" optimal solutions of the original problem (NOT necessarily a good approximation of the distributions involved, NOT good point predictions).
This is difficult to test; there are several conditions required of a SG.

The quality of scenario generators
In-sample stability: different runs of a scenario generator should give about the same results. If we generate several scenario sets (or scenario trees) with the same number of scenarios and solve the approximation LP with these discretisations, we should get about the same optimal value (not necessarily the same optimal solutions: the objective function in an SP can be "flat", i.e. different solutions can give similar objective values).

The quality of scenario generators
Out-of-sample stability:
- Generate scenario sets of the same size.
- Solve the optimisation problem on each → different optimal solutions.
- Evaluate these solutions on the "true" distribution → "true" objective values.
- The true objective values should be similar.
In practice: use a very large scenario set generated with an exogenous SG method as the "true" distribution.
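A minimal sketch of this out-of-sample test, assuming hypothetical helpers generate_scenarios and solve_mean_cvar (e.g. the LP above) and a large "true" scenario set (true_r, true_p):

```python
import numpy as np

def cvar(losses, p, alpha):
    """Discrete CVaR: probability-weighted average loss over the worst alpha-tail."""
    order = np.argsort(losses)[::-1]              # largest losses first
    cum, tail = 0.0, 0.0
    for i in order:
        w = min(p[i], alpha - cum)                # probability mass taken from this scenario
        tail += w * losses[i]
        cum += w
        if cum >= alpha:
            break
    return tail / alpha

def out_of_sample_stability(generate_scenarios, solve_mean_cvar, true_r, true_p,
                            runs=30, size=1000, alpha=0.01):
    """Generate `runs` scenario sets of equal size, solve the mean-CVaR LP on each,
    and evaluate every resulting portfolio on the large 'true' scenario set."""
    true_objectives = []
    for _ in range(runs):
        r, p = generate_scenarios(size)           # one discretisation of the asset returns
        x, _ = solve_mean_cvar(r, p)              # optimal weights under this discretisation
        losses = -(true_r @ x)                    # portfolio losses under the 'true' distribution
        true_objectives.append(cvar(losses, true_p, alpha))
    return np.mean(true_objectives), np.std(true_objectives)   # small spread = stable
```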

The quality of scenario generators
- Out-of-sample stability is the important one.
- There is no (simple) relation between in-sample and out-of-sample stability.

Hidden Markov Models
- Applied in various fields, e.g. speech recognition.
- Still experimental for financial scenario generation.
- Motivation: financial time series are not stationary; unexpected jumps, changing behaviour.

Hidden Markov Models
- Real-world processes produce observable outputs: a sequence of historical prices, returns, …
- A set of N distinct states: S_1, …, S_N.
- The system changes state at equally spaced discrete times: t = 1, 2, …
- Each state produces outputs according to its "output distribution" (different states → different parameters).
- The "true" state of the system at a certain time point is "hidden": we only observe the output.

Hidden Markov Models
Assumptions:
- First-order Markov chain: at any time point, the system's state depends only on the previous state and not on the whole history: P(q_t = S_i | q_{t-1} = S_j, q_{t-2} = S_k, …) = P(q_t = S_i | q_{t-1} = S_j), with q_t = the system's state at time t.
- Time independence: a_ij = the probability of changing from state i to state j is the same at any time t.
- Output independence: the output generated at time t depends solely on the system's state at time t (not on the previous outputs).

Hidden Markov Models
The output distributions: mixtures of normal distributions. Mixtures of normal density functions can approximate any continuous density function arbitrarily well, given enough components.
With M mixture components, the output density of state j is b_j(O) = Σ_{m=1}^{M} c_jm N(O; μ_jm, Σ_jm), where N(·; μ_jm, Σ_jm) is the normal density function with mean vector μ_jm and covariance matrix Σ_jm.

Hidden Markov Models
[Diagram: a three-state HMM (N = 3) with transition probabilities a_ij between the states; each state i has an M-component mixture output distribution with coefficients c_i1, …, c_iM, mean vectors μ_i1, …, μ_iM and covariance matrices Σ_i1, …, Σ_iM.]

Hidden Markov Models
The parameters of an HMM:
- Number of states N
- Number of mixture components M
- Initial probabilities of the states: π_1, …, π_N
- Transition probabilities: A = (a_ij), i, j = 1…N
- For each state i, the parameters of the output distribution:
  - Mixture coefficients c_i1, …, c_iM
  - Mean vectors μ_i1, …, μ_iM
  - Covariance matrices Σ_i1, …, Σ_iM.
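For concreteness, a small container for these parameters; a sketch in which the name HMMParams and the array shapes are my own choices, with D = the number of assets:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HMMParams:
    """lambda = (pi, A, C, mu, Sigma) for an N-state, M-component Gaussian-mixture HMM."""
    pi: np.ndarray      # (N,)          initial state probabilities
    A: np.ndarray       # (N, N)        transition probabilities a_ij
    c: np.ndarray       # (N, M)        mixture coefficients c_im
    mu: np.ndarray      # (N, M, D)     mixture mean vectors
    Sigma: np.ndarray   # (N, M, D, D)  mixture covariance matrices
```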

Training Hidden Markov Models
Historical data O = (O_1, …, O_T) = (r_tj), t = 1…T, j = 1…N (the history of asset returns), is used to "train" the HMM. Meaning: find the parameters λ = (π, A, C, μ, Σ) such that P(O|λ) is maximised.
This cannot be solved analytically and there is no best way to find λ. Iterative procedures (e.g. EM, Baum-Welch) can be used to find a local maximum. The parameters N and M are supposed to be known!

Training HMMs
- Start with some initial parameters λ_0; compute P(O|λ_0).
- Re-estimate the parameters → λ_1; compute P(O|λ_1) ≥ P(O|λ_0).
- Obtain a sequence λ_0, λ_1, λ_2, … with P(O|λ_i) ≥ P(O|λ_{i-1}); the sequence (P(O|λ_i))_i converges towards a local maximum.
- Limited knowledge about the convergence speed; a sharp increase is observed in the first few iterations, then relatively little improvement.
- Practically: stop when P(O|λ_i) − P(O|λ_{i-1}) is small enough; use the final λ_i for the generation of scenarios.
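A sketch of this stopping rule, assuming hypothetical helpers e_step (a forward-backward pass returning sufficient statistics and log P(O|λ)) and reestimate (the Baum-Welch update):

```python
import numpy as np

def train_hmm(obs, params, tol=1e-6, max_iter=200):
    """Iterate Baum-Welch re-estimation until the likelihood gain is small enough."""
    log_lik = -np.inf
    for _ in range(max_iter):
        stats, new_log_lik = e_step(obs, params)   # forward-backward pass (hypothetical helper)
        params = reestimate(obs, stats)            # Baum-Welch update (hypothetical helper)
        if new_log_lik - log_lik < tol:            # sharp gains early, little improvement later
            break
        log_lik = new_log_lik
    return params
```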

Training HMMs: initial parameters
How to choose λ_0?
- Not important for π_i and a_ij (they could be 1/N or random).
- Very important for C, μ and Σ, but there is no "best" way to estimate them.
- k-means clustering algorithm: separate the historical data into M clusters; the starting parameters are based on the mean vectors and covariance matrices of the clusters.
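A minimal sketch of this initialisation using scikit-learn's KMeans; the function name and the choice to weight each component by its relative cluster size are assumptions of this sketch:

```python
import numpy as np
from sklearn.cluster import KMeans

def initial_mixture_params(returns, M):
    """Cluster the historical return vectors into M groups and use each cluster's
    relative size, mean vector and covariance matrix as starting mixture parameters."""
    km = KMeans(n_clusters=M, n_init=10).fit(returns)    # returns: (T, D) history
    c, mu, Sigma = [], [], []
    for m in range(M):
        cluster = returns[km.labels_ == m]
        c.append(len(cluster) / len(returns))            # mixture coefficient
        mu.append(cluster.mean(axis=0))                  # mean vector
        Sigma.append(np.cov(cluster, rowvar=False))      # covariance matrix
    return np.array(c), np.array(mu), np.array(Sigma)
```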

Training HMMs: parameter re-estimation
Use the Baum-Welch algorithm (EM). Additional quantities need to be calculated:
- Forward probabilities α_t(i) = P(O_1, …, O_t, q_t = S_i | λ), for time t and state i;
- Backward probabilities β_t(i) = P(O_{t+1}, …, O_T | q_t = S_i, λ), for time t and state i.
Both are calculated recursively over time; the calculations involve the multivariate normal density.
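A sketch of these recursions, using the HMMParams container above and per-step scaling to keep the recursive quantities from under- or overflowing (the implementation issue noted later):

```python
import numpy as np
from scipy.stats import multivariate_normal

def emission_probs(obs, params):
    """b_i(O_t): Gaussian-mixture output density of each state for each observation."""
    T, N, M = len(obs), len(params.pi), params.c.shape[1]
    B = np.zeros((T, N))
    for i in range(N):
        for m in range(M):
            B[:, i] += params.c[i, m] * multivariate_normal.pdf(
                obs, mean=params.mu[i, m], cov=params.Sigma[i, m])
    return B

def forward_backward(obs, params):
    """Scaled forward/backward recursions; also returns log P(O | lambda)."""
    B = emission_probs(obs, params)
    T, N = B.shape
    alpha, beta, scale = np.zeros((T, N)), np.zeros((T, N)), np.zeros(T)
    alpha[0] = params.pi * B[0]
    scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
    for t in range(1, T):                                  # forward pass
        alpha[t] = (alpha[t - 1] @ params.A) * B[t]
        scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):                         # backward pass
        beta[t] = (params.A @ (B[t + 1] * beta[t + 1])) / scale[t + 1]
    log_lik = np.log(scale).sum()                          # log P(O | lambda)
    return alpha, beta, log_lik
```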

Training HMMs: parameter re-estimation
Additional quantities: the probability of the historical observations being generated by the current model, P(O|λ) = Σ_{i=1}^{N} α_T(i) (equivalently, Σ_i α_t(i) β_t(i) for any t).

Training HMMs: parameter re-estimation
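For reference, the standard Baum-Welch updates for a Gaussian-mixture HMM (following Rabiner's tutorial; the notation matches the quantities defined above) take the form:

```latex
\gamma_t(i,m) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_j \alpha_t(j)\,\beta_t(j)}
 \cdot \frac{c_{im}\,\mathcal{N}(O_t;\,\mu_{im},\Sigma_{im})}
            {\sum_{m'} c_{im'}\,\mathcal{N}(O_t;\,\mu_{im'},\Sigma_{im'})},
\qquad
\xi_t(i,j) = \frac{\alpha_t(i)\,a_{ij}\,b_j(O_{t+1})\,\beta_{t+1}(j)}{P(O\mid\lambda)}

\bar{\pi}_i = \sum_m \gamma_1(i,m), \qquad
\bar{a}_{ij} = \frac{\sum_{t=1}^{T-1}\xi_t(i,j)}{\sum_{t=1}^{T-1}\sum_m \gamma_t(i,m)}, \qquad
\bar{c}_{im} = \frac{\sum_{t=1}^{T}\gamma_t(i,m)}{\sum_{t=1}^{T}\sum_{m'}\gamma_t(i,m')}

\bar{\mu}_{im} = \frac{\sum_t \gamma_t(i,m)\,O_t}{\sum_t \gamma_t(i,m)}, \qquad
\bar{\Sigma}_{im} = \frac{\sum_t \gamma_t(i,m)\,(O_t-\bar{\mu}_{im})(O_t-\bar{\mu}_{im})^{\top}}{\sum_t \gamma_t(i,m)}
```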

HMM: estimation of the current state
What is the state of the system at the current time? It is found via the Viterbi algorithm: given an observation sequence O = (O_1, …, O_T) and a model λ, find an "optimal" state sequence Q = (q_1, …, q_T), i.e. one that best "explains" the observations: it maximises P(Q|O, λ).
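A log-space sketch of the Viterbi recursion, reusing emission_probs from the forward-backward sketch above; here only the last state of the decoded path is actually needed:

```python
import numpy as np

def viterbi(obs, params):
    """Most likely state sequence Q = (q_1, ..., q_T) given O and lambda (log-space)."""
    B = emission_probs(obs, params)                  # state output densities, shape (T, N)
    T, N = B.shape
    delta = np.log(params.pi) + np.log(B[0])         # best log score ending in each state
    psi = np.zeros((T, N), dtype=int)                # back-pointers
    for t in range(1, T):
        trans = delta[:, None] + np.log(params.A)    # score of each (previous, next) state pair
        psi[t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) + np.log(B[t])
    path = [int(delta.argmax())]                     # estimated state at the current time T
    for t in range(T - 1, 0, -1):                    # backtrack
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```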

HMM for scenario generation
A scenario: a path of returns for times T+1, …, T+TP.
- From the historical data (t = 1…T): estimation of λ.
- Estimate the current state of the system at time T; say q_T = S_i.
- For each of the times T+1, …, T+TP: transit to a next state S_j according to the transition probabilities a_ij, then generate a return according to the output distribution of state j.
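A sketch of this sampling loop: the next state is drawn from the transition row of the current state, then a return vector from that state's mixture; names and shapes follow the HMMParams sketch above.

```python
import numpy as np

def generate_scenario(params, current_state, horizon, rng=None):
    """One scenario: a path of return vectors for times T+1, ..., T+TP (= horizon)."""
    rng = rng or np.random.default_rng()
    state, path = current_state, []
    for _ in range(horizon):
        state = rng.choice(len(params.pi), p=params.A[state])    # transit according to a_ij
        m = rng.choice(params.c.shape[1], p=params.c[state])     # pick a mixture component
        path.append(rng.multivariate_normal(params.mu[state, m], params.Sigma[state, m]))
    return np.array(path)                                        # (horizon, D) scenario returns
```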

HMM: implementation issues
- Number of states? Still very much an unsolved problem.
- The observation distributions for each state?
- The initial estimates of the model's parameters.
- Computational issues: lots! For a large number of assets, large covariance matrices (at every step of re-estimation: determinants, inverses); the quantities calculated recursively get smaller and smaller, or the opposite, larger and larger (numerical under- and overflow).

Computational results
Historical dataset: 5 stocks from the FTSE; monthly returns, Jan 1993 to Dec 2003.
- Generate scenario returns for 1 month ahead.
- 500, 700, 1000, 2000, 3000 scenarios.

Computational results
For each scenario size:
- Run 30 times → generate 30 different discretisations of the assets' returns (R_1, …, R_N).
- Solve the mean-CVaR model with these discretisations → 30 solutions x^1, …, x^30.
- The solutions become similar as the scenario size increases (e.g. x_2 = x_3 = 0, x_5 ≥ 50%).
- Evaluate these solutions on the "true" distribution?

Computational results
Out-of-sample stability:
- The "true" distribution: a large scenario set generated with geometric Brownian motion.
- Each of the 30 solutions was evaluated on this distribution → 30 "true" objective values (= portfolio CVaRs).

Computational results
Geometric Brownian motion (GBM):
- The standard in finance for modelling stock prices; stock prices are approximated by continuous-time stochastic processes (accepted by practitioners…).
- dS_t = μ S_t dt + σ S_t dW_t, so S_T = S_0 exp((μ − σ²/2)T + σ W_T), where S_0 is the current price, μ the expected rate of return, σ the standard deviation of the rate of return, and {W_t} a Wiener process, the "noise" in the asset's price.
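A sketch of sampling the benchmark "true" distribution from this model, producing one-period scenario returns; treating the assets as independent is an assumption of the sketch, not something stated on the slide:

```python
import numpy as np

def gbm_return_scenarios(mu, sigma, T, n_scenarios, rng=None):
    """Scenario returns over [0, T] from S_T = S_0 * exp((mu - sigma^2/2) T + sigma W_T)."""
    rng = rng or np.random.default_rng()
    mu, sigma = np.asarray(mu), np.asarray(sigma)                    # one entry per asset
    W_T = rng.standard_normal((n_scenarios, len(mu))) * np.sqrt(T)   # W_T ~ N(0, T)
    return np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T) - 1.0     # S_T / S_0 - 1
```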

Computational results
Statistics (mean, standard deviation, range, minimum, maximum) for the series of "true" objective values, computed for the 500-, 700-, 1000-, 2000- and 3000-scenario runs:
- The quality of the solutions improves with larger scenario sets (as expected!).
- Reasonably small spread; quite similar objective values.

Conclusions and final remarks
- For the mean-CVaR model we need an SG that can capture extreme price movements.
- Stability is a necessary condition for a "good" SG.
- HMM is a discrete-time model; still experimental for financial SG; motivated by the non-stationarity of financial time series.
- Two stochastic processes: one of them describes the "state of the system".
- Implementation problems, especially when the number of assets is large; a good initial estimate of the HMM parameters is essential; the number of states is supposed to be known in advance.
- Good results regarding out-of-sample stability.
- The "true" distribution when testing out-of-sample stability was generated with GBM, the standard in finance.