Presentation on theme: "Using expert knowledge elicitation to quantify uncertainty" — Presentation transcript:

1 Using expert knowledge elicitation to quantify uncertainty
Abigail Colson, Management Science UTOPIAE Training School 23 Nov 2017

2 Expert Judgment short course, NASA 17,18 June 2009
Roger Cooke’s “Unholy trinity”: UNCERTAINTY, AMBIGUITY, INDECISION

3 UNCERTAINTY How much shale gas will the US produce in 2025? AMBIGUITY Is John (1.88 m) “tall”? INDECISION Evacuate?

4 UNCERTAINTY What is? AMBIGUITY What is meant? INDECISION What’s best?

5 UNCERTAINTY Experts’ job AMBIGUITY Analysts’ job INDECISION Problem owners’/decision makers’ job

6 What is expert judgement?
The use of structured or unstructured inputs from one or more individuals who have specialist knowledge of a particular domain. Where is it used? Problem structuring Model bounding Model structuring Model quantification


8 Why use expert judgement?
Some questions don’t have a clear answer in existing data and models because of issues around…
Emerging/changing technology
New setting
Shifting environment
Interest in understanding how things might change, but no data on alternatives
Poor quality data

9 Why use expert judgement?
Some questions don’t have a clear answer in existing data and models … What is the impact of breastfeeding on cognitive development? What proportion of pathogens will be resistant to antibiotics in 2026? How much will ice sheets contribute to sea level rise by 2100?

10 Uncertainty is expressed through subjective probability
Approaches to probability
Combinatorial (eg Laplace): based on the symmetries of a problem.
Frequency-based (eg von Mises): defined as the relative number of occurrences of the event of interest within a class of infinite sequences.
Subjective (eg Savage): an individual assessment measured through expressions of rational preference.

11 Uncertainty is expressed through subjective probability
Comparison of approaches: “The probability of getting heads when you toss a coin is 0.5.”
Combinatorial: the coin is completely symmetrical, so heads and tails are equally likely.
Frequency-based: the long-term proportion of coin tosses giving heads converges to 0.5.
Subjective: a bet where I lose 1p if heads and win 1p if tails is as attractive as one where I lose 1p if tails and win 1p if heads.
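The frequency-based reading can be illustrated with a quick simulation (a sketch only; von Mises’ actual definition concerns infinite sequences, and the seed and sample size below are arbitrary choices):

```python
import random

# Simulate many fair coin tosses and check that the relative frequency
# of heads settles near 0.5, as the frequency-based view expects.
random.seed(42)
tosses = [random.random() < 0.5 for _ in range(100_000)]
freq = sum(tosses) / len(tosses)
print(round(freq, 2))  # close to 0.5
```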

12 Uncertainty is expressed through subjective probability (Ramsey 1926, Borel, de Finetti 1937, von Neumann & Morgenstern 1944, Savage 1954) Consider a statement with clear truth conditions. You may be uncertain about whether this is true or not, and your subjective uncertainty (degree of belief) can be measured, in principle, through your preferences between “lottery comparisons.”

13 Subjective probability
Consider two events: C: Colts win the next Super Bowl. P: Patriots win the next Super Bowl. Two lottery tickets: L(C): worth $10,000 if C, worth $100 otherwise. L(P): worth $10,000 if P, worth $100 otherwise. You may choose ONE. “My degree of belief in P ≥ my degree of belief in C” is operationalized as: I choose L(P) in the above choice situation.
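A binary choice like this only reveals an ordering of beliefs. To pin down a number, the event bet can be compared against reference lotteries with known chances, searching for the indifference point. A minimal sketch, with a simulated subject whose true belief (0.62) is purely illustrative, not from the slides:

```python
# Bisection search for the reference chance p at which a subject is
# indifferent between "prize if event E" and "prize with known chance p";
# that p is the subject's subjective P(E).

def prefers_event_bet(p_reference, true_belief=0.62):
    """True while the subject still prefers betting on E over the reference lottery."""
    return true_belief > p_reference

lo, hi = 0.0, 1.0
for _ in range(30):          # narrow down to the indifference point
    mid = (lo + hi) / 2
    if prefers_event_bet(mid):
        lo = mid
    else:
        hi = mid
belief = (lo + hi) / 2
print(round(belief, 2))      # recovers 0.62
```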

14 Limits to subjective probability
Consider two lotteries: Receive $1,000,000 if you clean your attic next weekend, otherwise $0. Receive $1,000,000 if the Patriots win the Super Bowl, otherwise $0. If you prefer the first, then P(clean attic) > P(Patriots win Super Bowl).

15 Limits to subjective probability
Consider two new lotteries: Receive $1 if you clean your attic next weekend, otherwise $0. Receive $1 if the Patriots win the Super Bowl, otherwise $0. If you prefer the second, then P(Patriots win Super Bowl) > P(clean attic), contradicting the previous slide. The prize itself changes whether you would clean the attic, so preferences over acts you control do not reveal pure belief.

16 Advantages/disadvantages to subjective probability
Cons: It doesn’t work perfectly for everything (eg uncertainty about my own actions). It is based on a theory of rational preference, but not everyone has rational preferences. It is subjective and can differ between rational actors.
Pros: It can be operationalized. It is linked to a rational theory of decision making. The notion of uncertainty we get through rational preferences satisfies Kolmogorov’s axioms of probability.

17 What is meant by structured expert judgement?
EJ gives data about systems which may be just as applicable as, or more applicable than, traditional “statistical data.” For traditional data, procedures are used to ensure a “clean” collection process: consistency of interpretation, data gathering under the same conditions, etc. Structured EJ aims to do the same with EJ data.

18 What is meant by structured expert judgement?
In SEJ, the decision maker is seeking to adopt uncertainties provided by experts. But each expert will probably think differently; SEJ is needed when the science isn’t settled. We may need to combine different assessments in a reasonable way. Some experts will be better at assessing uncertainty than others, and this is testable. So we seek a “rational consensus” about the uncertainty (as opposed to a census or political consensus).
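Combining assessments “in a reasonable way” is often done mathematically. A minimal sketch of the simplest scheme, a linear opinion pool (the expert probabilities and weights below are illustrative, not from the talk):

```python
# Linear opinion pool: the combined probability of an event is a
# weighted average of the experts' individual probabilities.

def linear_pool(probs, weights):
    """Combine expert probabilities P(event) using normalized weights."""
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

expert_probs = [0.2, 0.5, 0.8]                      # three experts' P(event)
equal = linear_pool(expert_probs, [1, 1, 1])        # equal weighting
performance = linear_pool(expert_probs, [4, 1, 1])  # performance-based weights
print(round(equal, 3), round(performance, 3))       # 0.5 0.35
```

Equal weighting treats every expert alike; performance-based weighting (as in Cooke’s Classical Model) shifts the pooled distribution toward experts who score well on seed questions.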

19 Features of structured expert judgement
Identification
Neutrality
Role separation
Transparency
Equality
Accountability
Eliminating ambiguity
Rationales
Training
Empirical control

20 A sample of expert judgement approaches
Ask the nearest expert
Listen to a pundit (the loudest expert)
Assemble a committee of experts
Delphi
Nominal Group Technique
Stanford Research Institute Process
Cooke’s Classical Model
Sheffield Elicitation Framework (SHELF)
Prediction markets
Superforecasters (IARPA competition)

21 Degree of understanding
Axis (low to high) for positioning the Cooke model:
Lack of relevant data or models with explanatory value
Competing models with explanatory value
Models with explanatory value and some relevant empirical data
Excellent explanatory models and relevant empirical data, giving good predictive power in relevant contexts

22 Time available for application
Axis (low to high) for positioning the Cooke model:
Hours
Days
Months
Years

23 Legitimation burden
Axis (low to high) for positioning the Cooke model:
Internal expertise, small numbers of experts with an interest in the outcome, and no external validation
Consensus driven, but with experts who have no interest in the outcome
External validation and quality process, but a small number of experts
External validation and evidence of quality of the process and validators

24 Challenges with multiple experts: How to aggregate judgements?
Is aggregation necessary? Behavioural vs. mathematical aggregation.
Example mathematical aggregation techniques:
Equal weights: just pushes the problem back to choosing the experts.
Self-weighting: are experts really objective about themselves?
Weighting experts according to preference: is the analyst able to do this?
Performance-based weights from a “proper scoring rule”: the maximum score is gained (on average) by giving your own probabilities.
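The “proper scoring rule” property can be checked directly. A minimal sketch with the quadratic (Brier) rule: if your true belief is p, no report q other than p gives a lower expected penalty (the belief 0.7 and the candidate reports are illustrative values):

```python
# Brier (quadratic) scoring: penalty is (q - outcome)^2 where the
# outcome is 1 if the event happens and 0 otherwise.

def expected_brier(p, q):
    """Expected Brier penalty for reporting q when the event truly has probability p."""
    return p * (q - 1) ** 2 + (1 - p) * q ** 2

p = 0.7                                   # true belief
reports = [0.5, 0.6, 0.7, 0.8, 0.9]       # candidate reported probabilities
best = min(reports, key=lambda q: expected_brier(p, q))
print(best)  # 0.7 -- honest reporting minimizes the expected penalty
```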

25 Challenges with multiple experts: How to handle group interaction?
Expert interaction positives Ensure everyone understands the questions Can eliminate incorrect assumptions Experts can discuss potential mechanisms, base rates, comparative classes etc, highlighting aspects that should be considered Experts can share information Expert interaction negatives Development of “groupthink” - Focus on one or two mechanisms, or comparative classes Non-expertise based influences (eg ability to articulate, dominant personality, peer esteem, job level)

26 What makes a good expert? Very high information, very poor statistical accuracy
[Chart: the expert’s 90% confidence intervals plotted against the true values]
Statistical accuracy: how likely is this (or worse) agreement with the data, if the 50% and 90% statements are correct? P-value = 6.39E-7.
Informativeness: 2.282.

27 What makes a good expert? Low information, good statistical accuracy
[Chart: the expert’s 90% confidence intervals plotted against the true values]
Statistical accuracy: how likely is this (or worse) agreement with the data, if the 50% and 90% statements are correct? P-value = 0.57.
Informativeness: 0.53.

28 What makes a good expert? High information, decent statistical accuracy
[Chart: the expert’s 90% confidence intervals plotted against the true values]
Statistical accuracy: how likely is this (or worse) agreement with the data, if the 50% and 90% statements are correct? P-value = 0.12.
Informativeness: 1.77.
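The statistical-accuracy p-values on these slides come from a calibration test in the style of Cooke’s Classical Model. A minimal sketch of the statistic behind it, with illustrative counts (in the model, the p-value is then read off a chi-squared tail with 3 degrees of freedom):

```python
import math

# An expert gives 5%, 50%, and 95% quantiles for N seed questions, so
# realizations should land in the 4 inter-quantile bins with probabilities
# (0.05, 0.45, 0.45, 0.05). Compare expected and observed bin frequencies.
p = [0.05, 0.45, 0.45, 0.05]   # theoretical bin probabilities
counts = [1, 5, 3, 1]          # where 10 realizations actually fell (illustrative)
n = sum(counts)
s = [c / n for c in counts]    # empirical bin frequencies

# Likelihood-ratio statistic: 2*N times the relative entropy I(s; p).
# The smaller this is, the better calibrated the expert.
stat = 2 * n * sum(si * math.log(si / pi) for si, pi in zip(s, p) if si > 0)
print(round(stat, 3))  # 1.393
```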

29 Is EJ falsifiable? Example from a cyber security expert study
Hypothesis: the expert’s 90% probability statements are accurate.
The hypothesis is falsified when 8 out of 10 realizations fall outside the 90% bounds.
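How decisively do 8 misses out of 10 falsify accurate 90% bounds? If the bounds were accurate, each realization would fall outside them with probability 0.1, and the binomial upper tail gives the chance of a result this extreme. A quick check:

```python
import math

# P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least k of n
# realizations outside bounds that each miss with probability p.
def binom_tail(n, k, p):
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

pval = binom_tail(10, 8, 0.1)
print(pval)  # about 3.7e-7: accurate 90% bounds are decisively rejected
```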

30 Cyber Security; Performance-based Decision Maker

31 Projecting the future trajectory of antimicrobial resistance in Europe
The challenge: Resistance is tough to model as it depends on a range of uncertain, highly dependent factors. The solution: Experts can make projections while considering expected policy changes, technological developments, and other environmental shifts that statistical forecasts miss.

32 Estimating the impact of gains in IQ on lifetime earnings in India
The challenge: Studies on IQ and earnings are plagued by confounding and measurement challenges, producing conflicting results. Most of the existing studies and data are from the U.S. or Europe. The solution: Experts have knowledge about relationships that are tough to isolate in observational data and how the different setting could impact outcomes. They considered a hypothetical, randomized controlled trial to assess the potential effect of IQ gains.

33 Estimating the effectiveness of nitrogen removal stormwater management practices
The challenge: Existing data and models do not capture the variability in performance of best management practices in the range of environments in which they’re used. The solution: Experts created simple models and adjusted the results to account for varying factors in a set of hypothetical storm events.

34 Other application areas
Nuclear energy, aviation, civil engineering
Risks from volcanoes and other geohazards
Cost of invasive species in the U.S. Great Lakes
Contribution of ice sheets to future sea level rise
GDP growth in Mexico (as an input to greenhouse gas projections)
many more…

35 Research frontiers in expert judgement
How to improve sharing knowledge between experts? How to improve processes and procedures for aggregating expert judgements? How to elicit uncertainty about dependence? How to better train experts and improve their performance as uncertainty assessors?

36 In conclusion… Expert judgement is needed to supplement existing data and models… …but it shouldn’t be the last word. Uncertainty is represented by subjective probability. Need to focus on clear operational definitions and reduce ambiguity. Aggregating judgements is a challenge, but consensus distributions can be made more credible by testing expert capability.

37 Thanks! expertsinuncertainty.net


