Presentation transcript:

- 1 - Summary of P-box
Probability bounds analysis (PBA)
PBA can be implemented by nested (double-loop) Monte Carlo simulation.
–Generate a CDF for each instance of the epistemic uncertainty.
–The p-box is given by the bounds (envelope) of these probability distributions.
–As a result, a probability is an interval-valued quantity instead of a single value. A minimal sketch of the double loop is shown below.
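As an illustration only (the distribution, parameter range, and sample sizes are assumed for this sketch and are not from the lecture), a double-loop Monte Carlo that builds a p-box could look like the following MATLAB fragment: the outer loop samples the epistemic parameter, the inner loop samples the aleatory variability, and the envelope of the resulting CDFs gives the p-box bounds.

% Sketch: nested Monte Carlo p-box (assumed example: Y ~ N(mu,1) with an
% epistemically uncertain mean mu in [2,4]; all numbers are illustrative only)
n_epi = 50;                        % outer loop: epistemic instances
n_ale = 1000;                      % inner loop: aleatory samples per instance
yy = linspace(-2, 8, 200);         % grid on which each CDF is evaluated
F = zeros(n_epi, length(yy));
for i = 1:n_epi
    mu = 2 + 2*rand;                              % one epistemic realization
    y = normrnd(mu, 1, 1, n_ale);                 % aleatory sampling
    F(i,:) = sum(bsxfun(@le, y', yy), 1)/n_ale;   % empirical CDF of this instance
end
Flo = min(F, [], 1); Fhi = max(F, [], 1);         % p-box = envelope of the CDFs
plot(yy, Flo, 'b', yy, Fhi, 'r'); ylim([0 1])     % lower and upper CDF bounds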

- 2 - Validation metric in case of aleatory uncertainty
Continuous model response and discrete experimental observations
–Cumulative distribution function (CDF) from the continuous PDF of the model
–Empirical distribution function (EDF) from a finite number of data
Example
–Single data point: x = 1.5
–3 data points: x =
–100 data points: x = normrnd(3.5,1.2,1,100)
MATLAB code to build and plot the EDF of the 100 samples:
dx=0.2; xx=0:dx:8; nx=length(xx);            % grid on which the EDF is evaluated
n=100; xp=normrnd(3.5,1.2,1,n); xp=sort(xp); %xp=[ ]
for i=1:nx
    yy(i)=sum(heaviside(xx(i)-xp))/n;        % fraction of samples below xx(i)
end
stairs(xx,yy,'LineWidth',2); ylim([0 1])     % plot the EDF as a step function

- 3 - How to compare the two, and what is the metric?
–Compare visually the shapes of the two distributions.
–Compare values of moments such as the mean and variance.
–Compare the maximum difference of the CDFs, the Kolmogorov-Smirnov distance: d_KS = max_x |F(x) - S_n(x)|.
–In Oberkampf and Roy, the area metric is suggested: the area between the model distribution F and the data distribution S_n, d = integral of |F(x) - S_n(x)| dx, as the measure of mismatch between the model and the experiment.
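A minimal MATLAB sketch of both metrics, assuming (only for illustration) that the model response is N(3.5, 1.2) and the data are the 100 samples of slide 2; the exact area metric would integrate the step-wise difference exactly, but a fine grid with trapz approximates it closely.

% Sketch: K-S distance and area metric between model CDF F and data EDF Sn
% (model N(3.5,1.2) and the n=100 samples are assumed, as on slide 2)
n  = 100;
xp = sort(normrnd(3.5, 1.2, 1, n));      % stand-in for experimental data
xg = linspace(0, 8, 801);                % fine evaluation grid
F  = normcdf(xg, 3.5, 1.2);              % model CDF
Sn = zeros(size(xg));
for i = 1:length(xg)
    Sn(i) = sum(xp <= xg(i))/n;          % empirical distribution function
end
dKS   = max(abs(F - Sn))                 % Kolmogorov-Smirnov distance
dArea = trapz(xg, abs(F - Sn))           % area metric (approximated on the grid)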

- 4 - Validation metric is better than comparing moments
The figures illustrate this:
–Even if the means agree, we can still have a large area.
–Even if both the mean and the standard deviation agree, we can still have a large area metric.
–Only when the two distributions agree closely at every point is the area small.

- 5 - Area metric is better than the K-S metric
–In the first figure below, according to the K-S metric, the right case is better than the left. But from an engineering viewpoint, the left one is better.
–In the second figure below, both cases give the same K-S metric of unity; the area metric does not.

- 6 - Area metric applies even when both distributions are discrete
Sometimes we also have only a few simulation outputs from the model due to the computational cost, so the model distribution itself is an EDF. A small sketch of this case is given below.
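As an illustration only (the sample values are made up), the area metric between two EDFs can be computed the same way, by evaluating both step functions on a common fine grid.

% Sketch: area metric when both model and experiment give only a few samples
ym = [4.1 4.8 5.3 5.9 6.4];                           % a few model outputs (made up)
ye = [5.0 5.6 6.1];                                   % a few observations (made up)
xg = linspace(min([ym ye])-1, max([ym ye])+1, 2000);  % fine common grid
Fm = arrayfun(@(x) mean(ym <= x), xg);                % model EDF
Fe = arrayfun(@(x) mean(ye <= x), xg);                % experimental EDF
dArea = trapz(xg, abs(Fm - Fe))                       % area between the two step functions
stairs(xg, Fm); hold on; stairs(xg, Fe); ylim([0 1])  % visual comparison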

- 7 - Vicente Romero's metric and critique
–The area metric does not distinguish between the left and middle cases in the figure.
–Romero prefers a metric in real (response) space (see the notes page for the source).

- 8 - P-box in case of epistemic uncertainty
–Case (a): p-box for interval uncertainty, i.e., no aleatory, only pure epistemic uncertainty. Then the SRQ is also an interval. In terms of probability, CDF = 0 below the left endpoint and CDF = 1 above the right endpoint.
–Case (c): degenerate p-box, i.e., a precise CDF is obtained when there is only aleatory uncertainty. This is of course the typical traditional probability distribution.
–Case (b): p-box for a mixture of aleatory and epistemic uncertainty. As a result, we get a bounded CDF; e.g., at SRQ = 34, the CDF is in the range [0.2, 0.6]. Compare with case (c), where the CDF is a single value. A small sketch of reading off such an interval from p-box bounds follows this slide.
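Continuing the nested Monte Carlo sketch after slide 1 (so the grid yy and the bounds Flo, Fhi are assumed from that illustrative example, and the threshold below is made up), the probability that the SRQ falls below a given value is read off as an interval.

% Sketch: interval-valued probability from p-box bounds (Flo, Fhi, yy come
% from the illustrative nested-MC example; the threshold is made up)
srq0 = 3.4;                          % example SRQ threshold
p_lo = interp1(yy, Flo, srq0);       % lower bound on P(SRQ <= srq0)
p_hi = interp1(yy, Fhi, srq0);       % upper bound on P(SRQ <= srq0)
fprintf('P(SRQ <= %.1f) is in [%.2f, %.2f]\n', srq0, p_lo, p_hi)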

- 9 - Area metric d under epistemic uncertainty
–Consider only the case of a single observation.
–For easier understanding, think of the extreme case of purely aleatory uncertainty, in which the lower and upper bounds are identical; the metric then reduces to the ordinary area metric. A sketch of one common way to compute d against a p-box is given below.
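As a sketch only, one common generalization measures the area by which the data EDF falls outside the p-box bounds (when the bounds coincide this reduces to the ordinary area metric); the bounds Flo, Fhi and grid yy are again assumed from the illustrative nested-MC example, and the single observation is made up.

% Sketch: area metric between a p-box [Flo, Fhi] and a single observation,
% measuring only the mismatch outside the bounds (assumed generalization)
x_obs = 4.2;                           % single experimental observation (made up)
Sn = double(yy >= x_obs);              % EDF of the single data point (step at x_obs)
over  = max(Sn - Fhi, 0);              % part of the EDF above the upper bound
under = max(Flo - Sn, 0);              % part of the EDF below the lower bound
d = trapz(yy, over + under)            % area of the EDF lying outside the p-box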

Homework
Consider the model used in the previous lecture that simulates a response y(x) as a function of the input x. At x = 3.0, we have experimentally measured the performance y three times to get 5.7, 5.84,
1. Determine the confidence interval of the unknown mean of the difference between the model and the experimental data. Plot the sample mean and its confidence interval. Based on the result, is the model valid?
2. Now the input x is found to be a random variable, following a normal distribution with mean 3 and standard deviation 0.5. As a result, the performance y is no longer deterministic but is given by a distribution. Use crude Monte Carlo simulation to obtain samples of y and plot the histogram. From the samples, calculate the mean and standard deviation of the output y. What is the difference between the mean of y and the deterministic output y?
3. Plot the empirical cumulative distribution function from the experimental data as in slide 2 of this lecture.
4. Plot the two CDFs, one being the CDF of the model and the other the empirical CDF, in one figure.
5. Calculate the validation metric defined by the Kolmogorov-Smirnov test, which is the maximum difference of the two CDFs.
6. Calculate the validation metric defined by the area between the two CDFs, as given on page 9.

Homework (continued)
7. Draw the Romero real-space validation metric based on the confidence intervals.
8. Now the model is changed to the following equation, which includes a parameter u instead of the fixed value 2, and the value of u is known to lie in the interval (1, 3). Assuming u is uniformly distributed, draw the p-box of the CDF by employing a double-loop Monte Carlo process.
9. Plot the empirical cumulative distribution function on top of the p-box.
10. Calculate the validation metric defined by the area between the two CDFs.
The terminology used in this lecture is very important!