Forecasting and Decision Making Under Uncertainty
Thomas R. Stewart, Ph.D.
Center for Policy Research, Rockefeller College of Public Affairs and Policy, University at Albany, State University of New York
Warning Decision Making II Workshop

Outline
– Uncertainty
– Decision making and judgment
– Inevitable error
– Problem 1: Choosing the warn/no warn cutoff
– Problem 2: Reducing error by improving forecast accuracy

Uncertainty
Uncertainty occurs when, given current knowledge, there are multiple possible states of nature.

Probability is a measure of uncertainty
– Relative frequency
– Subjective probability (Bayesian)

Uncertainty
1 - States (events) and the probabilities of those events are known
– Coin toss
– Dice toss
– Precipitation forecasting (approximately)

Uncertainty
2 - States (events) are known, but probabilities are unknown
– Elections
– Stock market
– Forecasting severe weather

Uncertainty
3 - States (events) and probabilities are unknown
– Y2K
– Global climate change
The differences among the types of uncertainty are a matter of degree.

Picturing uncertainty
There are many ways to depict uncertainty, for example:
– Continuous events: scatterplot
– Discrete events: decision table

Scatterplot: Correlation = .50

Scatterplot: Correlation = .20

Scatterplot: Correlation = .80

Scatterplot: Correlation = 1.00 (the perfect forecast)

Decision table: Data for an imperfect categorical forecast over 100 days (uncertainty)
Base rate = 20/100 = .20

Decision table terminology: Data for an imperfect categorical forecast over 100 days (uncertainty)
Base rate = 20/100 = .20
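To make the decision-table quantities concrete, here is a minimal Python sketch. Only the 20/100 base rate is taken from the slide; the split of the 100 days into the four cells is hypothetical, chosen purely for illustration.

```python
# Illustrative 2x2 decision table for a categorical forecast over 100 days.
hits = 14               # warning issued, storm occurred (true positive)
misses = 6              # no warning, storm occurred (false negative)
false_alarms = 12       # warning issued, no storm (false positive)
correct_negatives = 68  # no warning, no storm (true negative)

n = hits + misses + false_alarms + correct_negatives   # 100 days
base_rate = (hits + misses) / n                        # 0.20, as on the slide
hit_rate = hits / (hits + misses)                      # P(warning | storm)
false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
print(f"base rate = {base_rate:.2f}, hit rate = {hit_rate:.2f}, "
      f"false alarm rate = {false_alarm_rate:.2f}")
```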

Uncertainty, Judgment, Decision, Error
Taylor-Russell diagram
– Decision cutoff
– Criterion cutoff (linked to base rate)
– Correlation (uncertainty)
– Errors: false positives (false alarms) and false negatives (misses)

Taylor-Russell diagram
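A rough numerical sketch of what the Taylor-Russell diagram summarizes, assuming a bivariate normal forecast/event relationship; the correlation, cutoffs, and sample size below are illustrative choices, not values from the slides.

```python
import random

random.seed(1)
r = 0.50          # forecast-event correlation (uncertainty)
criterion = 0.84  # criterion cutoff on the event; gives a base rate near .20
decision = 0.84   # warn when the forecast exceeds this decision cutoff

counts = {"TP": 0, "FP": 0, "FN": 0, "TN": 0}
for _ in range(100_000):
    f = random.gauss(0, 1)                              # forecast
    e = r * f + (1 - r**2) ** 0.5 * random.gauss(0, 1)  # event, correlated with f
    warn, storm = f > decision, e > criterion
    key = ("T" if warn == storm else "F") + ("P" if warn else "N")
    counts[key] += 1
print(counts)
```

Moving the decision cutoff shifts days between the false positive and false negative cells, which is exactly the tradeoff the next slide shows.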

Tradeoff between false positives and false negatives

Uncertainty, Judgment, Decision, Error
Another view: ROC analysis
– Decision cutoff
– False positive proportion
– True positive proportion
– A_z measures forecast quality

ROC curve
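An ROC curve can be traced with the same kind of simulation: sweep the decision cutoff and record the false positive and true positive proportions at each setting. All settings below (correlation, cutoffs, sample size) are illustrative assumptions.

```python
import random

random.seed(2)
r, criterion = 0.50, 0.84
data = []
for _ in range(50_000):
    f = random.gauss(0, 1)
    e = r * f + (1 - r**2) ** 0.5 * random.gauss(0, 1)
    data.append((f, e > criterion))            # (forecast, storm occurred?)

for cutoff in [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]:
    tp = sum(storm and f > cutoff for f, storm in data)
    fn = sum(storm and f <= cutoff for f, storm in data)
    fp = sum(not storm and f > cutoff for f, storm in data)
    tn = sum(not storm and f <= cutoff for f, storm in data)
    print(f"cutoff={cutoff:+.1f}  FP prop={fp/(fp+tn):.2f}  TP prop={tp/(tp+fn):.2f}")
```

Plotting the true positive proportion against the false positive proportion over all cutoffs gives the ROC curve; the area under a binormal fit to it is the A_z quality measure.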

Problem 1: Optimal decision cutoff
Given that it is not possible to eliminate both false positives and false negatives, what decision cutoff gives the best compromise?
– Depends on values
– Depends on uncertainty
– Depends on base rate
Decision analysis is one optimization method.

Decision tree

Expected value
Expected Value = P(O1)V(O1) + P(O2)V(O2) + P(O3)V(O3) + P(O4)V(O4)
where P(Oi) is the probability of outcome i and V(Oi) is the value of outcome i.
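A minimal sketch of this rule in Python. The outcome probabilities and 0-100 values are hypothetical placeholders, standing in for the decision-table probabilities and the rated values discussed on the following slides.

```python
# Expected value = sum of P(O_i) * V(O_i) over the four warning outcomes.
p = {"true positive": 0.14, "false positive": 0.12,
     "false negative": 0.06, "true negative": 0.68}   # probabilities sum to 1
v = {"true positive": 70, "false positive": 40,
     "false negative": 0, "true negative": 100}       # values on a 0-100 scale
expected_value = sum(p[o] * v[o] for o in p)
print(f"Expected value = {expected_value:.1f}")
```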

Expected value
– One of many possible decision-making rules
– Used here for illustration because it is the basis for decision analysis
– Intended to illustrate principles

Where do the values come from?

Descriptions of outcomes
True positive (hit: a warning is issued and the storm occurs as predicted)
– Damage occurs, but people have a chance to prepare. Some property and lives are saved, but probably not all.
False positive (false alarm: a warning is issued but no storm occurs)
– No damage or lives lost, but people are concerned and prepare unnecessarily, incurring psychological and economic costs. Furthermore, they may not respond to the next warning.

Descriptions of outcomes (cont.)
False negative (miss: no warning is issued, but the storm occurs)
– People do not have time to prepare, and property and lives are lost. NWS is blamed.
True negative (correct rejection: no warning is issued and no storm occurs)
– No damage or lives lost. No unnecessary concern about the storm.

Values depend on your perspective
– Forecaster
– Emergency manager
– Public official
– Property owner
– Business owner
– Many others...

Measuring values
Which is the best outcome?
– True positive?
– False positive?
– False negative?
– True negative?
Give the best outcome a value of 100.

Measuring values
Which is the worst outcome?
– True positive?
– False positive?
– False negative?
– True negative?
Give the worst outcome a value of 0.

Measuring values
Rate the remaining two outcomes relative to the worst (0) and the best (100).

Measuring values
Values reflect different perspectives
[Table: ratings of the four outcomes (true positive, false positive, false negative, true negative) by perspective]

Expected value
Expected Value = P(O1)V(O1) + P(O2)V(O2) + P(O3)V(O3) + P(O4)V(O4)
where P(Oi) is the probability of outcome i and V(Oi) is the value of outcome i.

Expected value depends on the decision cutoff

Expected value depends on the value perspective
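To see how both the cutoff and the value perspective matter, the simulation sketch from above can be extended: score every candidate cutoff under two value tables and keep the best. Both value tables and all simulation settings are invented for illustration.

```python
import random

random.seed(3)
r, criterion = 0.50, 0.84
data = []
for _ in range(50_000):
    f = random.gauss(0, 1)
    e = r * f + (1 - r**2) ** 0.5 * random.gauss(0, 1)
    data.append((f, e > criterion))

# Hypothetical value perspectives: one penalizes misses, one false alarms.
values = {
    "miss-averse":        {"TP": 70, "FP": 40, "FN": 0,  "TN": 100},
    "false-alarm-averse": {"TP": 70, "FP": 10, "FN": 30, "TN": 100},
}
for who, v in values.items():
    def total_value(cut, v=v):
        return sum(v[("T" if (f > cut) == storm else "F") + ("P" if f > cut else "N")]
                   for f, storm in data)
    best = max((c / 10 for c in range(-20, 21)), key=total_value)
    print(f"{who}: best decision cutoff ≈ {best:+.1f}")
```

The miss-averse perspective favors a lower cutoff (warn more often); the false-alarm-averse perspective favors a higher one.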

Whose values?
Forecasting weather is a technical problem. Issuing a warning to the public is a social act. Each warning has an implicit set of values. Should those values be made explicit and subject to public scrutiny?

Problem 2: Improving forecast accuracy
– Examine the components of forecast skill. This requires a detailed analysis of the forecasting task.
– Address those components that are problematic, but be aware that solving one problem may create others.
– Problems are addressed by changing the forecast environment and by training. Training alone has little effect.

Problem 2: Improving forecast accuracy
Metatheoretical issue: correspondence vs. coherence

Coherence research
Coherence research measures the quality of judgment against the standards of logic, mathematics, and probability theory. Coherence theory argues that decisions under uncertainty should be coherent with respect to the principles of probability theory.

Correspondence research
Correspondence research measures the quality of judgment against the standard of empirical accuracy. Correspondence theory argues that decisions under uncertainty should produce as few errors as possible, within the limits imposed by irreducible uncertainty.

Coherence and correspondence theories of competence
– Coherence theory of competence: uncertainty → irrationality → error
– Correspondence theory of competence: uncertainty → inaccuracy → error
What is the relation between coherence and correspondence?

Fundamental tenet of coherence research
"Probabilistic thinking is important if people are to understand and cope successfully with real-world uncertainty."

Fundamental tenet of correspondence research
"Human competence in making judgments and decisions under uncertainty is impressive. Sometimes performance is not. Why? Because sometimes task conditions degrade the accuracy of judgment."
Hammond, K. R. (1996). Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. New York: Oxford University Press (p. 282).

Brunswik's lens model
[Diagram: the event (Y_e) and the forecast (Y_s) are connected through a set of cues (X)]

Expanded lens model
[Diagram with labels: event, true descriptors, cues, subjective cues, forecast]

Components of skill and the lens model
[Diagram labels: environmental predictability; fidelity of the information system; match between environment and judge; reliability of information acquisition; reliability of information processing; event, true descriptors, cues, subjective cues, forecast]

Decomposition of skill score
Skill score = r_fx^2 − [r_fx − (s_f/s_x)]^2 − [(m_f − m_x)/s_x]^2
where r_fx is the correlation between forecasts and observations, s_f and s_x are their standard deviations, and m_f and m_x are their means. The first term reflects association (potential skill), the second conditional (regression) bias, and the third unconditional (base rate) bias.
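A sketch of the decomposition computed from data, following the mean-square-error skill score of Murphy (1988) that the Stewart and Lusk (1994) components build on; the forecast/observation pairs below are invented for illustration.

```python
from statistics import mean, pstdev

forecasts = [0.1, 0.4, 0.8, 0.3, 0.9, 0.2, 0.6, 0.7]   # hypothetical forecasts
observed  = [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]   # hypothetical outcomes

f_bar, x_bar = mean(forecasts), mean(observed)
s_f, s_x = pstdev(forecasts), pstdev(observed)
r = (sum((f - f_bar) * (x - x_bar) for f, x in zip(forecasts, observed))
     / (len(forecasts) * s_f * s_x))

association = r ** 2                               # potential skill
conditional_bias = (r - s_f / s_x) ** 2            # regression bias
unconditional_bias = ((f_bar - x_bar) / s_x) ** 2  # base rate bias
skill = association - conditional_bias - unconditional_bias
print(f"SS = {association:.3f} - {conditional_bias:.3f} "
      f"- {unconditional_bias:.3f} = {skill:.3f}")
```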

Components of skill: 1. Environmental predictability
Environmental predictability is conditional on current knowledge and information. It can be improved through research that results in improved information and improved understanding of environmental processes.
Environmental predictability determines an upper bound on forecast performance and therefore indicates how much improvement is possible through attention to the other components.
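One way to make this concrete: estimate environmental predictability as the multiple correlation R_e between the available cues and the event, for example with a least-squares fit. The cue weights and data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
cues = rng.normal(size=(n, 3))                 # three hypothetical cues
event = cues @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), cues])        # add an intercept column
coef, *_ = np.linalg.lstsq(X, event, rcond=None)
R_e = np.corrcoef(X @ coef, event)[0, 1]       # environmental predictability
print(f"R_e ≈ {R_e:.2f}")  # an approximate upper bound for forecasts from these cues
```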

Environmental predictability limits the accuracy of forecasts

Components of skill: 2. Fidelity of the information system
Forecasting skill may be degraded if the information system that brings data to the forecaster does not accurately represent actual conditions, i.e., if the cues do not accurately measure the true descriptors.
Fidelity of the information system refers to the quality, not the quantity, of information about the cues that are currently being used. Fidelity is improved by developing better measures, e.g., through improved instrumentation or increased density in space or time.

Components of skill: 3. Match between environment and forecaster
The match between the forecaster's model and the environmental model is an estimate of the potential skill that the forecaster's current strategy could achieve if the environment were perfectly predictable (given the cues) and the forecasts were unbiased and perfectly reliable.
This component might be called "knowledge." It is addressed by forecaster training and experience. If the forecaster learns to rely on the most relevant information and ignore irrelevant information, this component will generally be good.

Components of skill: Reliability
Reliability is high if identical conditions produce identical forecasts. Humans are rarely perfectly reliable. There are two sources of unreliability:
– Reliability of information acquisition
– Reliability of information processing
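A common way to quantify judgmental (un)reliability is a test-retest design: present the same cases twice and correlate the two sets of judgments. A minimal sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
policy = rng.normal(size=200)                     # forecaster's systematic judgment
pass1 = policy + rng.normal(scale=0.4, size=200)  # first pass, with random inconsistency
pass2 = policy + rng.normal(scale=0.4, size=200)  # same cases, second pass
print(f"test-retest reliability ≈ {np.corrcoef(pass1, pass2)[0, 1]:.2f}")
```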

Components of skill: Reliability
Reliability decreases as the amount of information increases.
[Figure: theoretical relation between amount of information and accuracy of forecasts]

Components of skill: Reliability
Reliability decreases as environmental predictability decreases.

Components of skill: 4. Reliability of information acquisition
Reliability of information acquisition is the extent to which the forecaster can reliably interpret the objective cues. It is improved by organizing and presenting information in a form that clearly emphasizes relevant information.

Components of skill: 5. Reliability of information processing
Reliability of information processing decreases with increasing information and with increasing environmental uncertainty.
Methods for improving reliability of information processing (illustrated in the sketch below):
– Limit the amount of information used in judgmental forecasting; use a small number of very important cues.
– Use mechanical methods to process information (e.g., MOS).
– Combine several forecasts (consensus).
– Require justification of forecasts.
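A sketch of why combining forecasts (consensus) helps: averaging several forecasts whose random errors are independent cancels part of the inconsistency, so the combined forecast tracks the event better. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
event = rng.normal(size=2000)
# Five forecasters share the same valid signal but add independent noise.
forecasts = [0.6 * event + rng.normal(scale=0.8, size=2000) for _ in range(5)]

single = np.corrcoef(forecasts[0], event)[0, 1]
consensus = np.corrcoef(np.mean(forecasts, axis=0), event)[0, 1]
print(f"single forecaster r = {single:.2f}; consensus of 5 r = {consensus:.2f}")
```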

Theoretical relation between amount of information and accuracy of forecasts

The relation between information and accuracy depends on environmental uncertainty
[Figure legend: theoretical limit of accuracy vs. actual accuracy]

Components of skill: 6 and 7. Bias, conditional (regression bias) and unconditional (base rate bias)
Together, the two bias terms measure forecast "calibration" (sometimes called "reliability" in meteorology).
Reducing bias:
– Experience
– Statistical training
– Feedback about the nature of biases in forecasts
– Search for discrepant information
– Statistical correction for bias (see the sketch below)
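A sketch of the last item, statistical correction for bias: regress the observed values on the forecasts and use the fitted line to recalibrate, which removes the unconditional (mean) and conditional (slope) biases in-sample. The biased forecast below is simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
truth = rng.normal(loc=10.0, scale=3.0, size=300)
forecast = 0.6 * truth + 6.0 + rng.normal(scale=2.0, size=300)  # biased forecast

slope, intercept = np.polyfit(forecast, truth, 1)  # fit truth ≈ a*forecast + b
corrected = slope * forecast + intercept
for name, f in [("raw", forecast), ("corrected", corrected)]:
    err = f - truth
    print(f"{name}: mean error = {err.mean():+.2f}, "
          f"RMSE = {np.sqrt((err ** 2).mean()):.2f}")
```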

Calibration (a.k.a. reliability) of forecasts depends on the task
[Figures: calibration data for precipitation forecasts (Murphy and Winkler, 1974); Heideman (1989)]
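Calibration can be tabulated directly: bin the probability forecasts and compare each bin's mean forecast with the observed relative frequency (a reliability table). The forecast/outcome pairs below are simulated and deliberately miscalibrated.

```python
import numpy as np

rng = np.random.default_rng(4)
p_forecast = rng.uniform(size=5000)
outcome = rng.uniform(size=5000) < p_forecast ** 1.3  # events rarer than forecast

for lo in np.arange(0.0, 1.0, 0.2):
    in_bin = (p_forecast >= lo) & (p_forecast < lo + 0.2)
    print(f"forecast {lo:.1f}-{lo + 0.2:.1f}: "
          f"mean forecast = {p_forecast[in_bin].mean():.2f}, "
          f"observed frequency = {outcome[in_bin].mean():.2f}")
```

A perfectly calibrated forecaster's table would show the two columns matching in every bin.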

Reading about judgmental forecasting
Components of skill:
– Stewart, T. R., & Lusk, C. M. (1994). Seven components of judgmental forecasting skill: Implications for research and the improvement of forecasts. Journal of Forecasting, 13.
Principles of Forecasting Project:
– Armstrong, J. S. (Ed.). Principles of Forecasting: A Handbook for Researchers and Practitioners. Norwell, MA: Kluwer Academic Publishers (scheduled for publication in 1999).
– Stewart, T. R. Improving Reliability of Judgmental Forecasts.

Conclusion
Problem 1: Choosing the warn/no warn cutoff
– Value tradeoffs are unavoidable.
– Warnings are based on values that should be critically examined.
Problem 2: Improving forecast accuracy
– Understanding and improving forecasts requires understanding the task and the forecasting environment.
– Decomposing skill can aid in identifying the factors that limit forecasting accuracy.