1
Advanced Risk Management I, Lecture 7: Non-normal returns and historical simulation
2
Non-normality of returns The assumption of normality of returns is typically not borne out by the data. The evidence points to –Asymmetry –Leptokurtosis Other casual evidence of non-normality –People make a living on it, so it must exist –If the distribution of returns were normal, the crash of 1987 would have a probability of 10^–160, almost zero…
3
Why non-normal? Leverage… One possible reason for non-normality, particularly for equity and corporate bonds, is leverage. Take the equity of a firm whose asset value is V and whose debt is B. Limited liability implies that at maturity Equity = max(V(T) – B, 0) Notice that if at some time t the call option (equity) is at the money, the return is not normal.
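A minimal sketch, not from the slides, of the point above: if equity is a call on firm value V with strike B, then normal log-shocks to V produce skewed equity returns. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, skew, kurtosis

def bs_call(V, B, sigma, tau, r=0.0):
    """Black-Scholes value of equity viewed as a call on firm assets."""
    d1 = (np.log(V / B) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return V * norm.cdf(d1) - B * np.exp(-r * tau) * norm.cdf(d2)

rng = np.random.default_rng(0)
V0, B, sigma, tau, dt = 100.0, 100.0, 0.25, 1.0, 1.0 / 252   # at-the-money firm (assumed)

# One-day normal log-shocks to firm value...
V1 = V0 * np.exp(sigma * np.sqrt(dt) * rng.standard_normal(100_000))
# ...mapped through the option payoff into one-day equity returns
eq_ret = bs_call(V1, B, sigma, tau - dt) / bs_call(V0, B, sigma, tau) - 1

print("skewness:", skew(eq_ret))            # positive: the payoff is convex in V
print("excess kurtosis:", kurtosis(eq_ret))
```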
4
Why not normal? Volatility Saying that a distribution is not normal may amount to saying that volatility is not constant. Non-normality may mean that the variance either –does not exist –is a stochastic variable
5
Dynamic volatility The most common approach to non-normality is to assume that volatility changes over time. The most famous example is the class of GARCH models h_t = ω + α shock²_{t-1} + β h_{t-1}
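A minimal sketch of the GARCH(1,1) recursion written on this slide, simulating returns with the conditional variance h_t; the parameter values are assumed for illustration.

```python
import numpy as np

omega, alpha, beta = 1e-6, 0.08, 0.90        # assumed; alpha + beta < 1
rng = np.random.default_rng(0)

T = 1000
h = np.empty(T)                              # conditional variance h_t
r = np.empty(T)                              # returns
h[0] = omega / (1 - alpha - beta)            # start at the unconditional variance
r[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()
```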
6
ARCH/GARCH extensions In standard ARCH/GARCH models the conditional distribution H(.) is assumed to be normal. In more advanced applications one may assume that H is not normal either: for example, a Student-t or a GED (generalised error distribution). Alternatively, one can assume a non-parametric conditional distribution (semi-parametric GARCH).
7
Volatility asymmetry A flaw of the GARCH model is that the response of volatility to an exogenous shock is the same no matter what the sign of the shock. Possible solutions consist in –distinguishing the sign in the dynamic equation of volatility: Threshold-GARCH (TGARCH) h_t = ω + α shock²_{t-1} + γ D shock²_{t-1} + β h_{t-1} with D = 1 if the shock is positive and zero otherwise; –modelling the log of volatility (EGARCH) log(h_t) = ω + g(shock_{t-1} / √h_{t-1}) + β log(h_{t-1}) with g(x) = θ x + γ(|x| - E(|x|)).
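A minimal sketch of the two asymmetric updates on this slide. The parameter values are illustrative assumptions, and the sign convention for D follows the slide (D = 1 when the shock is positive).

```python
import numpy as np

def tgarch_update(h_prev, shock, omega=1e-6, alpha=0.03, gamma=0.08, beta=0.90):
    """Threshold-GARCH: the squared shock gets an extra weight gamma when D = 1."""
    D = 1.0 if shock > 0 else 0.0
    return omega + alpha * shock**2 + gamma * D * shock**2 + beta * h_prev

def egarch_update(logh_prev, shock, omega=-0.1, theta=-0.05, gamma=0.15, beta=0.98):
    """EGARCH: log-variance reacts to the standardised shock x = shock / sigma."""
    x = shock / np.exp(0.5 * logh_prev)
    g = theta * x + gamma * (abs(x) - np.sqrt(2 / np.pi))  # E|x| for a standard normal
    return omega + g + beta * logh_prev
```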
8
High frequency data For some markets high-frequency data are available (transaction, or tick-by-tick, data). –Pros: the price dynamics can be analysed on very small time intervals –Cons: data may be noisy because of the microstructure of financial markets “Realised variance”: using intra-day statistics to measure variance, instead of the daily variation.
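A minimal sketch of realised variance as the sum of squared intra-day log-returns, one number per trading day; the data layout (a price series indexed by intra-day timestamps) is an assumption.

```python
import numpy as np
import pandas as pd

def realised_variance(intraday_prices: pd.Series) -> pd.Series:
    """intraday_prices: prices indexed by timestamp (e.g. 5-minute marks); returns daily RV."""
    log_ret = np.log(intraday_prices).diff().dropna()
    return (log_ret ** 2).groupby(log_ret.index.date).sum()
```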
9
Subordinated stochastic processes Consider the sequence of log-variations of prices in a given time interval. The cumulated return R = r_1 + r_2 + … + r_i + … + r_N is a variable that depends on two stochastic processes: a) the log-returns r_i; b) the number of transactions N. R is a subordinated stochastic process and N is the subordinator. Clark (1973) shows that R is a fat-tailed process. Volatility increases when the number of transactions increases, and it is therefore correlated with volumes.
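A minimal sketch of the idea: cumulate a random number N of normal trade-by-trade returns and the daily return comes out fat-tailed. The gamma-mixed Poisson subordinator and the parameter values are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
days = 20_000
lam = rng.gamma(shape=2.0, scale=25.0, size=days)   # clustered trading intensity (assumed)
N = rng.poisson(lam)                                # transactions per day: the subordinator
R = np.array([rng.standard_normal(n).sum() * 0.001 for n in N])  # cumulated daily return

print("excess kurtosis of R:", kurtosis(R))         # > 0: fatter tails than a normal
```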
10
Stochastic clock The fact that the number of transactions induces non-normality of returns suggests the possibility of using a variable that, by changing the pace of time, restores normality. This variable is called a stochastic clock. The technique of time change is nowadays one of the most widely used tools in mathematical finance.
11
Lévy processes Prices are not only recorded at discrete time intervals; price changes themselves are discrete, moving by ticks of finite size. For this reason, a possible way of representing prices is by pure jump processes. Stochastic processes mixing diffusion and jumps are known as Lévy processes. Examples of Lévy processes: Variance-Gamma models, CGMY (Carr-Geman-Madan-Yor) models.
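A minimal sketch, with assumed parameters, of a Variance-Gamma increment built by running a Brownian motion on a gamma "stochastic clock", tying this slide back to the time-change idea of the previous one.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, nu, dt = -0.1, 0.2, 0.3, 1.0 / 252    # assumed VG parameters and time step

def vg_increment(size):
    """X = theta * G + sigma * W(G), with G a gamma time step of mean dt and variance nu*dt."""
    G = rng.gamma(shape=dt / nu, scale=nu, size=size)   # the gamma clock
    return theta * G + sigma * np.sqrt(G) * rng.standard_normal(size)

log_returns = vg_increment(10_000)
```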
12
Copula functions A function z = C(u,v) is a copula iff –z, u and v are in [0,1] –C(0,v) = C(u,0) = 0, C(1,v) = v, C(u,1) = u –C(u_2, v_2) – C(u_1, v_2) – C(u_2, v_1) + C(u_1, v_1) ≥ 0 for all values u_2 > u_1 and v_2 > v_1 Sklar's theorem: every joint distribution can be written as a copula function taking the marginal distributions as arguments, and conversely any copula function taking probabilities as arguments gives a joint distribution.
13
Copula functions: examples Two risks A and B with joint probability H(A,B) and marginal probabilities H_a(A) and H_b(B): H(A,B) = C(H_a, H_b), with C a copula function. Cases: 1) C_ind(H_a, H_b) = H_a H_b: independent risks 2) C_max(H_a, H_b) = min(H_a, H_b): perfect positive dependence 3) C_min(H_a, H_b) = max(H_a + H_b – 1, 0): perfect negative dependence Imperfect dependence (Fréchet bounds): max(H_a + H_b – 1, 0) ≤ C(H_a, H_b) ≤ min(H_a, H_b)
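A minimal sketch of the three copulas on this slide, with a numerical check of the Fréchet bounds on a grid of marginal probabilities.

```python
import numpy as np

def c_ind(u, v):  return u * v                      # independence
def c_max(u, v):  return np.minimum(u, v)           # perfect positive dependence
def c_min(u, v):  return np.maximum(u + v - 1, 0)   # perfect negative dependence

u, v = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
assert np.all(c_min(u, v) <= c_ind(u, v)) and np.all(c_ind(u, v) <= c_max(u, v))
```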
14
Risk measurement with fat tails Addressing non-normality of returns calls for the solution of three problems –Choice of a data-compression technique –Choice of the information source –Choice of the model to be used in place of the normal distribution.
15
Data compression First option: re-evaluation of the current portfolio on historical data and estimation and simulation of the distribution of losses. Second option: estimation of the sensitivities of the assets and the portfolio to the most relevant risk factors. Third option: traditional statistical techniques (principal components and factor models); see the sketch below.
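A minimal sketch of the third option: compressing a matrix of risk-factor returns (T observations by K factors, an assumed layout) onto its leading principal components.

```python
import numpy as np

def principal_components(returns: np.ndarray, n_components: int = 3):
    """Return loadings and scores of the leading principal components of a T x K return matrix."""
    X = returns - returns.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)            # eigenvalues in ascending order
    loadings = eigvec[:, ::-1][:, :n_components]    # largest eigenvalues first
    scores = X @ loadings
    return loadings, scores
```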
16
Distribution of the returns First option: choosing a new model, or a class of new models, of distributions. Second option: simulating the distribution using historical data. Third option: determining extreme scenarios for the distribution.
17
Classical historical simulation Re-valuation of the portfolio on historical data –Every set of historical data represents a possible market scenario P&L computation under each scenario Sorting scenarios by size of loss –Empirical P&L distribution Quantile computation on the empirical distribution –E.g. with 100 observations the worst loss represents the 1% VaR
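A minimal sketch of the procedure on this slide: revalue the current portfolio on each historical scenario, then read the empirical quantile of the P&L. The positions and the fat-tailed stand-in data below are assumed placeholders.

```python
import numpy as np

def historical_var(pnl_scenarios: np.ndarray, alpha: float = 0.01) -> float:
    """VaR as minus the alpha-quantile of the empirical P&L (reported as a positive loss)."""
    return -np.quantile(pnl_scenarios, alpha)

rng = np.random.default_rng(0)
hist_returns = rng.standard_t(df=4, size=(500, 3)) * 0.01   # 500 scenarios, 3 assets (assumed)
weights = np.array([0.5, 0.3, 0.2])                         # current portfolio weights (assumed)
pnl = hist_returns @ weights * 1_000_000                    # P&L per scenario on a 1m portfolio
print("1% VaR:", historical_var(pnl, 0.01))
```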
18
[Figure: histogram of FIAT returns]
19
Classical historical simulation Problems –Data may fail to be i.i.d. –In particular, the distribution of future returns may vary with market conditions –High and low volatility periods may cluster (volatility clustering) Effects –Under- or over-estimation of VaR.
20
Volatility clustering
21
Filtered historical simulation Barone-Adesi and Giannopoulos proposed a modification of the algorithm based on a filtering of the data. Filtered historical simulation –Re-valuation of the portfolio on historical data –Estimation of a GARCH model on this series –Use of the estimates to filter the data –Use of bootstrap techniques to simulate the evolution of returns and volatility
22
Filtered historical simulation: algorithm Step 1. Re-valuation of the portfolio on historical data, and P&L computation. Step 2. Specification and estimation of a GARCH model, e.g. the GARCH(1,1) specification above.
23
Data filtering Step 3. Compute and save the time series of residuals ε_t, for t = 0, 1, …, T. Step 4. Compute and save the time series of volatilities σ_t, for t = 1, …, T + 1. Step 5. Compute the time series of filtered innovations z_t = ε_t / σ_t for t = 1, …, T.
24
Bootstrap algorithm Step 6. Extract n filtered residuals from the time series z_t: z(1), z(2), …, z(n) –n represents the unwinding period. Step 7. Set the simulated return for time T + 1 equal to R_{T+1} = z(1) σ_{T+1}. Step 8. Compute the volatility σ_{T+2}.
25
Step 9. Repeat steps 7 and 8, computing R_{T+i} = z(i) σ_{T+i} for i = 2, …, n – 1. Step 10. Compute R_{T+n} = z(n) σ_{T+n} and save the cumulated return R_{T+1} + R_{T+2} + … + R_{T+i} + … + R_{T+n} …first iteration
26
…repeat NITER times Step 11. Repeat steps 6 to 10 for a number NITER (e.g. 1000) of iterations. Step 12. Sort the scenarios by loss size. Step 13. Compute the empirical quantile.
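A minimal sketch of steps 6 to 13, assuming the GARCH(1,1) parameters and the filtered innovations z_t from the previous steps are already available; variable names and the seed are illustrative.

```python
import numpy as np

def fhs_var(z, sigma_T1, omega, alpha, beta, n=10, niter=1000, quantile=0.01, seed=0):
    """Filtered historical simulation of the n-day cumulated return; returns the VaR."""
    rng = np.random.default_rng(seed)
    cum_returns = np.empty(niter)
    for it in range(niter):
        zs = rng.choice(z, size=n, replace=True)   # step 6: bootstrap n filtered residuals
        sigma, total = sigma_T1, 0.0
        for i in range(n):
            r = zs[i] * sigma                      # steps 7 and 9: simulated return
            total += r
            sigma = np.sqrt(omega + alpha * r**2 + beta * sigma**2)  # step 8: next volatility
        cum_returns[it] = total                    # step 10: cumulated n-day return
    return -np.quantile(cum_returns, quantile)     # steps 12-13: sort and read the quantile
```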
27
Applications This methodology was applied to margin determination at the London Clearing House. In a companion paper, Barone-Adesi, Engle and Mancini apply the same methodology to pricing options.