1 Local Probabilistic Sensitivity Measure
By M.J. Kallen, March 16th, 2001
2 Presentation outline
- Definition of the LPSM
- Problems with calculating the LPSM
- Possible solution: Isaco's method
- Results
- Conclusions
3 LPSM Definition
The local sensitivity measure below was proposed by R.M. Cooke and J. van Noortwijk. For a linear model this measure agrees with the FORM method; therefore it can be used to capture the local sensitivity of a non-linear model to the variables X_i.
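A hedged sketch of the measure, assuming the usual form as the derivative of the conditional expectation scaled by the standard deviations (this exact scaling is an assumption):

\[ \mathrm{LPSM}_i = \frac{\sigma_Z}{\sigma_{X_i}} \left. \frac{\partial}{\partial z} E(X_i \mid Z = z) \right|_{z = z_0} \]

For a linear model Z = \sum_j a_j X_j with independent normal X_j the derivative equals a_i \sigma_{X_i}^2 / \sigma_Z^2, so the scaled measure reduces to a_i \sigma_{X_i} / \sigma_Z, which is the FORM importance factor \alpha_i. The quantity that actually has to be computed is therefore the derivative \partial E(X_i \mid Z = z)/\partial z at z = z_0.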
4 Problem with calculating the LPSM
The derivative of the conditional expectation can be determined analytically only for a few simple models. Using a Monte Carlo simulation introduces many problems, resulting in a significant error.
5 Using Monte Carlo
Algorithm:
1. Save a large number of samples.
2. Compute E(X | Z = z_0 + ε) and E(X | Z = z_0 − ε), take the difference, and divide by 2ε.
For good results ε needs to be small, but then the number of samples used in step 2 is small and a large error is introduced after dividing by 2ε.
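A minimal sketch of this estimator, assuming the conditional expectations are approximated by averaging X over the samples whose Z lies within a small bandwidth h of the target point (the slide does not specify how the conditioning was implemented):

```python
import numpy as np

def mc_derivative(x, z, z0, eps, h):
    """Central-difference estimate of d/dz E(X | Z = z) at z = z0.
    E(X | Z = t) is approximated by the mean of X over samples with |Z - t| < h."""
    def cond_mean(t):
        mask = np.abs(z - t) < h
        return x[mask].mean() if mask.any() else float("nan")
    return (cond_mean(z0 + eps) - cond_mean(z0 - eps)) / (2.0 * eps)

# Example: X, Y ~ N(0, 1), Z = X + Y, for which the exact derivative is 0.5.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = rng.standard_normal(100_000)
z = x + y
print(mc_derivative(x, z, z0=1.0, eps=0.05, h=0.05))
```

Shrinking eps (and the bandwidth h) reduces the bias but leaves few samples in each conditioning window, which is exactly the trade-off described above.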
6 Alternative: Isaco's method
An alternative way of calculating the derivative of the conditional expectation was proposed by Isaco Meilijson. The idea is to expand E(X|Z) around z_0 using the Taylor expansion:
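A plausible reconstruction of that expansion, writing g(z) = E(X \mid Z = z) and keeping terms up to second order (an assumption):

\[ E(X \mid Z) = g(Z) \approx g(z_0) + (Z - z_0)\, g'(z_0) + \tfrac{1}{2} (Z - z_0)^2\, g''(z_0) \]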
7 Isaco's method (cont.)
We can then calculate the covariance:
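Presumably the covariance meant here is Cov(X, Z) = Cov(E(X | Z), Z); substituting the expansion above gives, as a sketch,

\[ \mathrm{Cov}(X, Z) \approx g'(z_0)\, \mathrm{Var}(Z) + \tfrac{1}{2}\, g''(z_0)\, \mathrm{Cov}\big((Z - z_0)^2, Z\big) \]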
8 Isaco's method (cont.)
The main idea in this algorithm is to now take a 'local distribution' Z* such that the second-order term is equal to zero. By doing this we get:
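A sketch of the resulting formula: if Z* is centred at z_0, the second-order term is proportional to its third central moment E_{Z*}(Z - z_0)^3 (an assumption consistent with the constraints on the later slides), and once it vanishes

\[ \left. \frac{\partial}{\partial z} E(X \mid Z = z) \right|_{z = z_0} = g'(z_0) \approx \frac{\mathrm{Cov}_{Z^*}(X, Z)}{\mathrm{Var}(Z^*)} \]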
9 Choosing Z*
We want to take Z* such that the constraint above holds. Z* should be as close as possible to Z, therefore we want to minimize the relative information of Z* with respect to Z. This results in an entropy optimization problem.
10 Relative information
Definition: the relative information of Q with respect to P is given by the expression below.
“The distribution with minimum information with respect to a given distribution under given constraints is the smoothest distribution which has a density similar to the given distribution.”
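In its standard form (with densities q and p, or discrete weights q_i and p_i) this reads:

\[ I(Q \mid P) = \int q(x) \ln \frac{q(x)}{p(x)}\, dx \qquad \text{or} \qquad I(Q \mid P) = \sum_i q_i \ln \frac{q_i}{p_i} \]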
11 Entropy optimization (EO)
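A sketch of how the entropy optimization problem plausibly looks when Z is represented by Monte Carlo samples z_1, ..., z_N with uniform weights p_i = 1/N; the exact constraint set is an assumption (normalization, mean z_0, and a vanishing third central moment, consistent with the previous slides):

\[ \min_{q_1, \dots, q_N} \; \sum_{i=1}^N q_i \ln \frac{q_i}{p_i} \quad \text{subject to} \quad \sum_i q_i = 1, \quad \sum_i q_i z_i = z_0, \quad \sum_i q_i (z_i - z_0)^3 = 0, \quad q_i \ge 0 \]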
12 Solving the EO problem
There are a number of ways to implement this entropy optimization problem. We have tried the following:
1. Newton's method
2. The MOSEK toolbox for MATLAB
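Neither of those two implementations is reproduced here; as a generic, swapped-in illustration of the same reweighting idea, the sketch below uses scipy (the function names, constraint set and solver choice are assumptions, not the implementation used in the talk):

```python
import numpy as np
from scipy.optimize import minimize

def local_weights(z, z0):
    """Reweight samples of Z so that the 'local distribution' Z* has mean z0
    and zero third central moment, while staying as close as possible to the
    uniform weights in the relative-information sense (assumed constraint set)."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    p = np.full(n, 1.0 / n)                      # original (uniform) sample weights

    def rel_info(q):                             # sum_i q_i ln(q_i / p_i)
        q = np.clip(q, 1e-12, None)
        return float(np.sum(q * np.log(q / p)))

    constraints = [
        {"type": "eq", "fun": lambda q: q.sum() - 1.0},        # normalization
        {"type": "eq", "fun": lambda q: q @ z - z0},           # reweighted mean equals z0
        {"type": "eq", "fun": lambda q: q @ (z - z0) ** 3},    # zero third central moment
    ]
    result = minimize(rel_info, p, method="SLSQP",
                      bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return result.x

def derivative_estimate(x, z, q):
    """Cov_{Z*}(X, Z) / Var_{Z*}(Z) computed with the weights q."""
    mx, mz = q @ x, q @ z
    return (q @ ((x - mx) * (z - mz))) / (q @ ((z - mz) ** 2))

# Usage: q = local_weights(z, z0); estimate = derivative_estimate(x, z, q)
```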
13 Newton's method
There are a number of reasons not to use Newton's method for solving the EO problem:
- The implementation of Newton's method requires a lot of work.
- Since a system of equations has to be solved, a matrix has to be inverted, and this introduces large errors in many cases.
14 MOSEK
A much easier way of solving the EO problem is to use the MOSEK toolbox for MATLAB, created by Erling Andersen:
- The toolbox has a special function for entropy optimization problems, so the variables and constraints are easily set up.
- No long calculations are needed, and constraints can be changed in a few seconds.
15 Some results

Model                    | z0  | Correct answer | Isaco's method (10000 samples)
X,Y ~ N(0,1), Z = X+Y    | 1   | 0.5            | 0.5079
X,Y ~ N(0,1), Z = X+Y    | 2   | 0.5            | 0.4886
X,Y ~ N(0,1), Z = 2X+Y   | 0   | 0.4            | 0.3998
X,Y ~ N(0,1), Z = 2X+Y   | 2   | 0.4            | 0.4064
X,Y ~ U(0,1), Z = 2X+Y   | 0.5 | 0.25           | 0.2513
X,Y ~ U(0,1), Z = 2X+Y   | 1   | 0.25           | 0.3723
X,Y ~ U(0,1), Z = 2X+Y   | 1.5 | 0.4            | 0.4021
16 Even worse results…

Model                      | z0   | Correct answer | Isaco's method (10000 samples)
X,Y ~ U(0,1), Z = -ln(X)/Y | 0.5  | -0.34712       | -0.43205
                           | 0.75 | -0.28221       | -0.35226
                           | 1    | -0.22862       | -0.29663
                           | 2    | -0.10417       | -0.16034
                           | 3    | -0.05432       | -0.09732
                           | 4    | -0.03179       | -0.06767
17 Attempts to fix Isaco
We have tried many things to get better results. These attempts mostly consisted of:
- Adding and/or changing constraints.
- Using only the samples from a small interval around z_0.
A few different approaches to this problem have been tried, but they all seem to give similar results.
18 Conclusions
- Until now the results cannot be trusted, therefore I recommend not using this method.
- We need to gain insight into what is going wrong and why the method behaves this way.
- Maybe Isaco Meilijson has an idea!