Local Probabilistic Sensitivity Measure
By M.J. Kallen, March 16th, 2001


Slide 2: Presentation outline
- Definition of the LPSM
- Problems with calculating the LPSM
- Possible solution: Isaco's method
- Results
- Conclusions

Slide 3: LPSM definition
The following local sensitivity measure was proposed by R.M. Cooke and J. van Noortwijk. For a linear model this measure agrees with the FORM method; therefore it can also be used to capture the local sensitivity of a non-linear model Z to the input variables X_i.
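The formula on the slide was not captured in the transcript. A common form of the Cooke-van Noortwijk measure, reconstructed here from the surrounding text (the normalization on the original slide may differ), is

\[
S_i \;=\; \frac{\sigma_{Z}}{\sigma_{X_i}} \left.\frac{\partial\, \mathrm{E}(X_i \mid Z=z)}{\partial z}\right|_{z=z_0}.
\]

For the linear model Z = \sum_j a_j X_j with independent normal X_j we have \mathrm{E}(X_i \mid Z=z) = \mu_i + (a_i \sigma_{X_i}^2/\sigma_Z^2)(z-\mu_Z), so S_i = a_i \sigma_{X_i}/\sigma_Z, which is exactly the FORM importance factor.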

Slide 4: Problem with calculating the LPSM
The derivative of the conditional expectation can be determined analytically for only a few simple models. Estimating it with a Monte Carlo simulation introduces several problems, resulting in a significant error.

Slide 5: Using Monte Carlo
Algorithm:
1. Save a large number of samples (x, z).
2. Compute E(X | Z = z0 + δ) − E(X | Z = z0 − δ) and divide by 2δ.
For good results δ needs to be small, but then the number of samples used in step 2 is small and a large error is introduced after dividing by 2δ.
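As an illustration, here is a minimal Python sketch of this finite-difference estimator, assuming the test model Z = X + Y with X, Y ~ N(0,1) from the results slide. The conditioning on Z = z0 ± δ is approximated by averaging over samples whose z-value falls in a small window around each point (the window width is an assumption, not part of the slide):

```python
import numpy as np

def lpsm_finite_difference(x, z, z0, delta, window=0.01):
    """Monte Carlo estimate of dE(X|Z=z)/dz at z0 via a central difference.

    E(X | Z = z0 +/- delta) is approximated by averaging x over the samples
    whose z-value falls within +/- window of each conditioning point.
    """
    def cond_mean(point):
        mask = np.abs(z - point) < window
        if not mask.any():
            raise ValueError("no samples near z = %g, enlarge the window" % point)
        return x[mask].mean()

    return (cond_mean(z0 + delta) - cond_mean(z0 - delta)) / (2.0 * delta)

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
z = x + y  # E(X|Z=z) = z/2, so the exact derivative is 0.5

# A small delta keeps the bias low but leaves few samples in each window,
# which is exactly the variance problem described on the slide.
print(lpsm_finite_difference(x, z, z0=0.0, delta=0.05))
```

Even with a million samples the estimate is visibly noisy, which illustrates the trade-off: shrinking δ (and the window) reduces bias but blows up the variance of the difference quotient.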

Slide 6: Alternative: Isaco's method
An alternative way of calculating the derivative of the conditional expectation at z0 was proposed by Isaco Meilijson. The idea is to expand E(X|Z) around z0 using a Taylor expansion.
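The expansion itself was not captured in the transcript; written out to second order it presumably reads

\[
\mathrm{E}(X \mid Z) \;\approx\; \mathrm{E}(X \mid Z=z_0) \;+\; (Z-z_0)\left.\frac{\partial\, \mathrm{E}(X \mid Z=z)}{\partial z}\right|_{z_0} \;+\; \frac{(Z-z_0)^2}{2}\left.\frac{\partial^2 \mathrm{E}(X \mid Z=z)}{\partial z^2}\right|_{z_0} \;+\; \dots
\]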

Slide 7: Isaco's method (cont.)
We can then calculate the covariance:
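The covariance formula on this slide is missing from the transcript. Substituting the Taylor expansion above and using Cov(X, Z) = Cov(E(X|Z), Z), it should take roughly the form

\[
\mathrm{Cov}(X, Z) \;\approx\; \left.\frac{\partial\, \mathrm{E}(X \mid Z=z)}{\partial z}\right|_{z_0} \mathrm{Var}(Z) \;+\; \frac{1}{2}\left.\frac{\partial^2\, \mathrm{E}(X \mid Z=z)}{\partial z^2}\right|_{z_0} \mathrm{Cov}\!\left((Z-z_0)^2,\, Z\right) \;+\; \dots
\]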

Slide 8: Isaco's method (cont.)
The main idea of the algorithm is to now replace Z by a 'local distribution' Z* for which the second-order term, Cov((Z* − z0)², Z*), is equal to zero. By doing this we get:
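The resulting expression is also missing from the transcript; truncating the expansion under that condition gives, up to higher-order terms,

\[
\left.\frac{\partial\, \mathrm{E}(X \mid Z=z)}{\partial z}\right|_{z_0} \;\approx\; \frac{\mathrm{Cov}(X, Z^*)}{\mathrm{Var}(Z^*)}.
\]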

Slide 9: Choosing Z*
We want to take Z* such that the second-order term above vanishes, while Z* stays as close as possible to Z; therefore we minimize the relative information of Z* with respect to Z. This results in an entropy optimization problem.

Slide 10: Relative information
Definition: the relative information of Q with respect to P is given by the formula below. "The distribution with minimum information with respect to a given distribution under given constraints is the smoothest distribution which has a density similar to the given distribution."
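The defining formula did not survive the transcript; relative information is the standard Kullback-Leibler divergence, for densities q and p:

\[
I(Q \mid P) \;=\; \int q(x)\,\ln\frac{q(x)}{p(x)}\,dx,
\qquad\text{or}\qquad
I(Q \mid P) \;=\; \sum_i q_i \ln\frac{q_i}{p_i}
\]

in the discrete (sample-based) case.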

Slide 11: Entropy optimization (EO)
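This slide contained only the optimization problem itself, which is missing from the transcript. One natural formulation (an assumption; the original slide may have used different constraints) puts weights q_i on the n Monte Carlo samples z_i, so that P is the empirical distribution with p_i = 1/n:

\[
\min_{q}\ \sum_{i=1}^{n} q_i \ln\frac{q_i}{1/n}
\quad\text{subject to}\quad
\sum_i q_i = 1,\qquad
\sum_i q_i\,(z_i - z_0) = 0,\qquad
\sum_i q_i\,(z_i - z_0)^3 = 0,\qquad
q_i \ge 0.
\]

Centering Z* at z0 makes Cov((Z* − z0)², Z*) equal to the third central moment of Z*, so the last constraint enforces the condition from slide 8 while keeping all constraints linear in q.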

Slide 12: Solving the EO problem
There are a number of ways to implement this entropy optimization problem. We have tried the following:
1. Newton's method
2. the MOSEK toolbox for MATLAB

Slide 13: Newton's method
There are a number of reasons not to use Newton's method for solving the EO problem: the implementation requires a lot of work, and since a linear system has to be solved at every iteration, a matrix has to be inverted, which introduces large errors in many cases.

Slide 14: MOSEK
A much easier way of solving the EO problem is to use MOSEK, created by Erling Andersen. The MOSEK toolbox has a special function for entropy optimization problems, so the variables and constraints are easily set up. No long calculations are needed, and constraints can be changed in a few seconds.
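To make the setup concrete, here is a small Python sketch of the same entropy optimization using SciPy's generic solver in place of the MOSEK/MATLAB toolbox used in the talk. The constraint set follows the hedged formulation given above, so this is illustrative rather than a reproduction of the original code:

```python
import numpy as np
from scipy.optimize import minimize

def local_distribution_weights(z, z0):
    """Reweight the samples z into a 'local distribution' Z*: minimize the
    relative information sum(q_i * ln(n * q_i)) with respect to the uniform
    weights 1/n, subject to the (assumed) moment constraints
    E(Z* - z0) = 0 and E((Z* - z0)^3) = 0."""
    n = len(z)
    d = z - z0

    def rel_info(q):
        return np.sum(q * np.log(np.maximum(q, 1e-300) * n))

    constraints = [
        {"type": "eq", "fun": lambda q: q.sum() - 1.0},  # probabilities sum to 1
        {"type": "eq", "fun": lambda q: q @ d},          # first moment about z0
        {"type": "eq", "fun": lambda q: q @ d**3},       # third moment about z0
    ]
    res = minimize(rel_info, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return res.x

def lpsm_isaco(x, z, z0):
    """Estimate dE(X|Z=z)/dz at z0 as Cov(X, Z*) / Var(Z*)."""
    q = local_distribution_weights(z, z0)
    mx, mz = q @ x, q @ z
    return (q @ ((x - mx) * (z - mz))) / (q @ (z - mz) ** 2)

# A small sample keeps the generic SLSQP solver fast; a dedicated entropy
# optimizer such as MOSEK's handles much larger problems.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
z = 2 * x + rng.standard_normal(500)  # exact derivative: 2/5 = 0.4
print(lpsm_isaco(x, z, z0=0.0))
```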

Slide 15: Some results
The slide compared, per test model and conditioning point z0, the correct answer with the estimate from Isaco's method (10,000 samples); the numeric entries were only on the slide image. The models tested:
- X, Y ~ N(0,1), Z = X + Y
- X, Y ~ N(0,1), Z = 2X + Y
- X, Y ~ U(0,1), Z = 2X + Y
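For the two normal models the "correct answer" column can be recomputed exactly, since for jointly normal (X, Z) the conditional expectation is linear in z:

\[
\frac{\partial\, \mathrm{E}(X \mid Z=z)}{\partial z} = \frac{\mathrm{Cov}(X, Z)}{\mathrm{Var}(Z)},
\]

which gives 1/2 for Z = X + Y and 2/5 for Z = 2X + Y, independent of z0.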

Slide 16: Even worse results…
The same comparison (model, z0, correct answer, Isaco's method with 10,000 samples; numeric entries again only on the slide image) for the non-linear model:
- X, Y ~ U(0,1), Z = -ln(X)/Y

Slide 17: Attempts to fix Isaco's method
We have tried many things to get better results. These attempts mostly consisted of adding and/or changing constraints, and of using only the samples from a small interval around z0. A few different approaches to this problem have been tried, but they all seem to give similar results.

Slide 18: Conclusions
So far the results cannot be trusted; I therefore recommend against using this method for now. We need to gain insight into what is going wrong and why the method behaves this way. Maybe Isaco Meilijson has an idea!