
1 A Comparison of Information Management using Imprecise Probabilities and Precise Bayesian Updating of Reliability Estimates
Jason Matthew Aughenbaugh, Ph.D. (jason@arlut.utexas.edu), Applied Research Laboratories, University of Texas at Austin
Jeffrey W. Herrmann, Ph.D. (jwh2@umd.edu), Department of Mechanical Engineering and Institute for Systems Research, University of Maryland
Third International Workshop on Reliable Engineering Computing, NSF Workshop on Imprecise Probability in Engineering Analysis & Design, Savannah, Georgia, February 20–22, 2008

2 Motivation
We need to estimate the reliability of a system whose components have uncertain reliability. Which components should we test to reduce our uncertainty about the system reliability?

3 Introduction
Statistical modeling and updating approach: existing information (Is it relevant? Is it accurate?) yields a prior characterization; new experiments yield data; updating the prior with the data yields a posterior characterization.

4 Statistical Approaches
We compare the following approaches:
 (Precise) Bayesian
 Robust Bayesian: sensitivity analysis of the prior
 Imprecise probabilities: the actual "true" probability is imprecise; the imprecise beta model
These approaches have different philosophical motivations, but equivalent mathematics for this problem.
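All three approaches build on beta priors for the component failure probabilities (see the scenarios below). As a minimal sketch of the precise Bayesian case, the beta-binomial conjugate update can be written as follows; the Beta(2, 8) prior and the 2-failures-in-12-tests outcome are illustrative assumptions, not values from the talk.

```python
from scipy import stats

def update_beta(alpha, beta, failures, tests):
    """Conjugate Bayesian update of a Beta prior on a failure probability.

    Observing `failures` failures in `tests` independent trials turns
    Beta(alpha, beta) into Beta(alpha + failures, beta + tests - failures).
    """
    return alpha + failures, beta + (tests - failures)

# Hypothetical prior with mean failure probability 0.2.
a, b = update_beta(2, 8, failures=2, tests=12)
posterior = stats.beta(a, b)
print(a, b, posterior.mean(), posterior.var())
```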

5 Is precise probability sufficient?
Problem: "equiprobable" can mean we know nothing, or it can mean we know the outcomes are equally likely. Why does it matter?
 Engineer A states that input values 1 and 2 have equal probabilities.
 Engineer B is designing a component that is very sensitive to this input.
 Should Engineer B proceed with a costly but versatile design, or study the problem further?
Case 1: Engineer A had no idea, so stated equal probabilities. Additional study is good.
Case 2: Engineer A performed substantial analysis. Additional study is wasteful.

6 Moving beyond precise probability
We start with well-established principles and mathematics and conclude that precise probability is insufficient. Should we abandon probability completely, or relax its conditions and extend its applicability? Think of it as sensitivity analysis: how much do deviations from a precise prior matter?

7 Robust Bayes and the Imprecise Beta Model
Instead of one prior, consider many (a set).
[Figure: a set of prior cumulative distribution functions over the failure probability θ]
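A minimal sketch of such a set of priors, using a Walley-style imprecise beta model in which each prior is Beta(s·t, s·(1−t)) with a fixed prior strength s and a prior mean t swept over an interval; all parameter values here are illustrative assumptions, not taken from the talk.

```python
import numpy as np
from scipy import stats

# Illustrative imprecise beta model: strength s, prior mean t in [t_lo, t_hi].
s, t_lo, t_hi = 2.0, 0.1, 0.4
failures, tests = 2, 12

# Each prior in the set updates to a posterior; the posterior mean is
# (s*t + failures) / (s + tests), so the extremes of t bound it.
print((s * t_lo + failures) / (s + tests),
      (s * t_hi + failures) / (s + tests))

# Envelope of the posterior CDFs over a grid of failure probabilities.
theta = np.linspace(0.0, 1.0, 101)
cdfs = [stats.beta(s * t + failures, s * (1 - t) + tests - failures).cdf(theta)
        for t in np.linspace(t_lo, t_hi, 25)]
lower_cdf, upper_cdf = np.min(cdfs, axis=0), np.max(cdfs, axis=0)
```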

8 Problem Description
A simple parallel-series system, with some existing information about each component. Assume we can test 12 more components.
 How should these tests be allocated?
 A single test plan can have different outcomes.
We compare different scenarios of existing information.
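Later slides evaluate test plans written as triples [n_A, n_B, n_C]. As a small sketch, the candidate allocations of the 12 tests can be enumerated as follows (the three-component labeling follows the problem description; the code itself is illustrative).

```python
from itertools import product

# All ways to allocate 12 tests across components A, B, and C.
plans = [(n_a, n_b, 12 - n_a - n_b)
         for n_a, n_b in product(range(13), repeat=2)
         if n_a + n_b <= 12]
print(len(plans))  # 91 candidate test plans, e.g. (12, 0, 0), (4, 4, 4)
```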

9 Multiple Outcomes of an Experiment
Precise probability:
 Consider one outcome: test A 12 times, 2 fail. We get one new posterior with precise parameters.
 Consider all possible outcomes of testing A. We get a new posterior for each possible outcome: a set of parameters.
Imprecise probability:
 One outcome gives one SET of posteriors.
 Multiple outcomes give a SET of SETS of posteriors.
How do we measure uncertainty? How do we make comparisons and decisions?
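A sketch of the "all possible outcomes" view in the precise case: each possible failure count gives its own posterior, and the prior predictive (beta-binomial) distribution says how likely each outcome is. The Beta(2, 8) prior is the same illustrative assumption as above.

```python
from scipy import stats

a, b, n = 2.0, 8.0, 12  # illustrative prior for component A; 12 tests

for k in range(n + 1):  # k = number of observed failures
    p_k = stats.betabinom(n, a, b).pmf(k)   # prior predictive probability of k
    post = stats.beta(a + k, b + n - k)     # posterior for this outcome
    print(f"{k:2d} failures: P={p_k:.3f}, "
          f"posterior mean={post.mean():.3f}, var={post.var():.4f}")
```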

10 Metrics of Uncertainty: Precise Distributions
Variance-based sensitivity analysis SV_i (Sobol, 1993; Chan et al., 2000):
 SV_i = Var(E[Y | X_i]) / Var(Y), the variance of the conditional expectation divided by the total variance.
 It focuses on the status quo, the next (local) piece of information.
 Testing a component with a large sensitivity index should reduce the variance of the system reliability estimate.
We also track mean and variance observations and the posterior variance.
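A minimal Monte Carlo sketch of the sensitivity index SV_i for a system reliability estimate. The topology (B and C in parallel, that pair in series with A) and the beta priors are assumptions for illustration; the transcript does not state them.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_failure(pa, pb, pc):
    # Assumed topology: B and C in parallel, that pair in series with A.
    return 1.0 - (1.0 - pa) * (1.0 - pb * pc)

# Illustrative beta priors on the component failure probabilities.
priors = {"A": (2, 8), "B": (1, 4), "C": (3, 7)}
N = 20_000
samples = {name: rng.beta(a, b, size=N) for name, (a, b) in priors.items()}
y = system_failure(samples["A"], samples["B"], samples["C"])

# SV_i = Var(E[Y | X_i]) / Var(Y), estimated by binning on each input.
for name, x in samples.items():
    edges = np.quantile(x, np.linspace(0, 1, 51))  # 50 equal-mass bins
    idx = np.digitize(x, edges[1:-1])
    cond_means = np.array([y[idx == j].mean() for j in range(50)])
    weights = np.array([(idx == j).mean() for j in range(50)])
    sv = np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()
    print(name, round(float(sv), 3))
```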

11 Metrics of Uncertainty: Imprecise Distributions
Imprecise variance-based sensitivity analysis (Hall, 2006):
 Does not consider future outcomes; it is a local metric.
We also track the dispersion of the mean and variance: the imprecision in the mean and the imprecision in the variance.
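A sketch of "imprecision in the mean and variance" for a single component, using the same illustrative imprecise beta model as above: sweep the set of distributions and take the width of the intervals they trace out.

```python
import numpy as np
from scipy import stats

s, t_lo, t_hi = 2.0, 0.1, 0.4  # same illustrative imprecise beta model
dists = [stats.beta(s * t, s * (1 - t)) for t in np.linspace(t_lo, t_hi, 50)]

means = [d.mean() for d in dists]
variances = [d.var() for d in dists]

# Imprecision = width of the interval swept out by the set of priors.
print("mean interval:", min(means), "to", max(means))
print("variance interval:", min(variances), "to", max(variances))
```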

12 Scenarios with Precise Distributions
Components have beta distributions as the prior distributions of failure probability.
Scenario 1: system failure probability has mean = 0.2201 and variance = 0.0203.
Scenario 2: system failure probability has mean = 0.1691 and variance = 0.0116.
[Table: prior parameters for components A, B, C under Scenario 1 and Scenario 2]

13 Scenario 1 Results
[Figures: variance-based sensitivity analysis and posterior variance by test plan, with the best worst-case and best best-case plans marked]

14 Scenario 1 Results (continued)
[Figure: annotated results plot]

15 Scenario 2 Results
[Figures: variance-based sensitivity analysis and posterior variance by test plan, with the best worst-case and best best-case plans marked]

16 Scenario 2 Results (continued)
[Figure: annotated results plot]

17 Scenario 3: Imprecise Distributions
Component failure probabilities are modeled using imprecise beta distributions. Since the failure probability of B is poorly known, we allow for a range. Scenario 3 is comparable to precise Scenario 1.
System failure probability is an imprecise distribution:
 Mean: 0.2201 to 0.4640
 Variance: 0.0136 to 0.0332
[Table: imprecise variance-based sensitivity analysis for components A, B, C]

18 Posterior Variance Analysis
[Figure: posterior variance by test plan; the highlighted plans have the smallest variances and the smallest imprecision in the variances]

19 Results for Scenario 3
Each test plan [n_A, n_B, n_C] allocates 12 tests across components A, B, and C.
Sample results: [12, 0, 0], [0, 12, 0], [6, 6, 0]
Convex hulls of results:
 [12, 0, 0], [0, 12, 0], [6, 6, 0]
 [0, 0, 12], [6, 0, 6], [0, 6, 6]
 [0, 12, 0], [4, 4, 4], [6, 6, 0], [0, 6, 6]

20 Scenario 4: Imprecise Distributions
Component failure probabilities are modeled using imprecise beta distributions. Compared to Scenario 3, the failure probability of C is reduced, which makes this scenario comparable to precise Scenario 2.
System failure probability is also an imprecise distribution:
 Mean: 0.1691 to 0.2880
 Variance: 0.0100 to 0.0173
[Table: imprecise variance-based sensitivity analysis for components A, B, C]

21 Results for Scenario 4
Convex hulls of results:
 [12, 0, 0], [0, 12, 0], [6, 6, 0]
 [0, 0, 12], [6, 0, 6], [0, 6, 6]
 [12, 0, 0], [4, 4, 4], [0, 6, 6]

22 Discussion / Future Work
Multiple sources of uncertainty:
 Existing knowledge
 Results of future tests
How do we prioritize different aspects?
 Variance or imprecision reduction?
 Best case, worst case, or average case of results?
 Incorporate economic/utility metrics?
Other imprecision/total-uncertainty measures?
 "Breadth" of p-boxes (Ferson and Tucker, 2006)
 Aggregate uncertainty, among others (Klir and Smith, 2001)

23 Summary
We have shown how to use different statistical approaches for evaluating experimental test plans, using direct uncertainty metrics:
 Variance-based sensitivity analysis (precise and imprecise)
 Posterior variance
 Dispersion of the mean and variance
 Imprecision in the mean and variance

24 Thank you for your attention. Questions? Comments? Discussion?
This work was supported in part by Applied Research Laboratories at UT-Austin internal IR&D grant 07-09.

25 SV_i
$$SV_i = \frac{\operatorname{Var}\left(E[Y \mid X_i]\right)}{\operatorname{Var}(Y)}$$
(the variance of the conditional expectation divided by the total variance, as defined on slide 10)

26 Formulae
The mathematical model for the reliability of the system shown in Figure 1 follows.
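The slide's specific equations do not appear in this transcript. As a hedged sketch only, under the same assumed topology as the earlier code (B and C in parallel, that pair in series with A, with independent component failure probabilities p_A, p_B, p_C), a model of this form would be:

```latex
% Assumed topology: (B parallel C) in series with A; independence assumed.
p_{\mathrm{sys}} = 1 - (1 - p_A)\,\bigl(1 - p_B \, p_C\bigr)
```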

