Graham Loomes, University of Warwick. Undergraduate at Essex, 1967-70.
Modelling Decision Making: Combining Economics, Psychology and Neuroscience

Presentation transcript:

Graham Loomes, University of Warwick. Undergraduate at Essex.
Modelling Decision Making: Combining Economics, Psychology and Neuroscience

Thanks to many co-authors over the years – Bob Sugden and Mike Jones-Lee in particular. For today’s talk, thanks to Dani Navarro-Martinez, Andrea Isoni and David Butler. Thanks to the ESRC for a Professorial Fellowship; more recently, the ESRC Network for Integrated Behavioural Science; and the Leverhulme Trust ‘Value’ Programme.

A number of things attracted me to Essex:
- Progressive attitudes
- Impressive people
- Common first year involving Econ, Gov, Soc, Stats
- No Psych, unfortunately (and still none at u/g level... ?)

Positive economics: emphasis on evidence.
- Downside: de-emphasised internal processes – what went on in the head was (then) unobservable ‘black box’ activity
- Upside: favoured empirical testing
For decision making under risk, the (S)EU model ruled in economics: as if people assign subjective ‘utility’ to payoffs, weight by probabilities and decide according to expectation. But the evidence contradicted the theory in certain ‘phenomenal’ or ‘paradoxical’ respects.

But actually ‘positive’ economists didn’t do much testing (one or two notable exceptions). Most testing was done by psychologists and statisticians (and the odd engineer). And clear gaps between economists’ models and observed behaviour were apparent from the early days (and stubbornly persist):

Models: deterministic; parsimonious/restricted; procedurally invariant
Behaviour: probabilistic; multi-faceted; sensitive to framing/procedure

What if we were starting from what we know now? Summarise some key facts.
Choices are systematically probabilistic over some range:
Option A: X, 1 vs Option B: £40, 0.8; 0, 0.2

Response times (RTs) are related to these probabilities, as are judgments of difficulty / confidence.
Option A: X, 1 vs Option B: £40, 0.8; 0, 0.2

I offer you a choice between
Lottery A: 90% chance of £15; 10% chance of 0
Lottery B: 35% chance of £50; 65% chance of 0
on the understanding that the one you pick will be played out and you will get paid (or not) accordingly.
Which one do you pick? How did you reach that decision?
Decision making involves brain activity that looks like the sampling and accumulation of evidence until an action is triggered. How might that apply to risky choice?

Lottery A: 90% chance of £15; 10% chance of 0
Lottery B: 35% chance of £50; 65% chance of 0
A fairly general model (with some eyetracking support) entails numerous (often repeated) binary comparisons:
- The positive payoff comparison is evidence for B
- The chance of 0 is evidence for A
It is not just a matter of the direction of the argument but also its force – involving judgments sampled from memory and/or perception, which may vary in strength from sample to sample.

The process of sampling and accumulating evidence has often been represented as follows:
- The vertical axis represents force / valence: up favours A, down favours B
- A choice is made when the accumulated evidence reaches a threshold
- The more evenly balanced the evidence, the more liable it is to vacillation, the longer the RT and the greater the judged difficulty
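A minimal sketch of this accumulation process can be written in a few lines of Python. All parameters here (unit step sizes, the threshold, the sampling probabilities) are illustrative assumptions, not values from the talk:

```python
import random

def accumulate(p_a=0.5, threshold=5.0, rng=None, max_steps=100_000):
    """Random-walk sketch of evidence accumulation (illustrative parameters).

    Each sample favours A (step +1) with probability p_a, otherwise B
    (step -1). A choice is triggered when the running total reaches
    +threshold (choose A) or -threshold (choose B); the number of
    samples taken is a crude proxy for response time (RT).
    """
    rng = rng or random
    total, steps = 0.0, 0
    while abs(total) < threshold and steps < max_steps:
        total += 1.0 if rng.random() < p_a else -1.0
        steps += 1
    return ("A" if total > 0 else "B"), steps

rng = random.Random(0)
# A lopsided (easy) choice resolves faster than an evenly balanced one,
# matching the RT / judged-difficulty pattern described above.
easy_rts = [accumulate(p_a=0.9, rng=rng)[1] for _ in range(200)]
hard_rts = [accumulate(p_a=0.5, rng=rng)[1] for _ in range(200)]
```

Evenly balanced evidence (p_a near 0.5) produces long, vacillating walks; lopsided evidence hits the threshold quickly.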

Natural variability even when same action triggered

With some sequences possibly leading to a different choice

The same choice made independently on 10 occasions: A chosen 7 times, B chosen 3 times.
Intrinsic variability, not error – simulations.
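This intrinsic variability can be reproduced with a small simulation. The Gaussian-increment accumulator below, and its drift and threshold values, are hypothetical stand-ins:

```python
import random

def one_choice(rng, drift=0.1, threshold=3.0):
    """One decision from a Gaussian-increment accumulator
    (drift and threshold values are hypothetical)."""
    total = 0.0
    while abs(total) < threshold:
        total += rng.gauss(drift, 1.0)
    return "A" if total > 0 else "B"

rng = random.Random(0)
# The 'same' decision, taken independently many times, splits between
# A and B: intrinsic variability, not an add-on error term.
choices = [one_choice(rng) for _ in range(200)]
share_a = choices.count("A") / len(choices)
```

With identical inputs on every occasion, the sampling noise alone produces a mix of A and B choices.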

Often such models are depicted in terms of a fixed threshold. An alternative is to suppose that choice is triggered when we feel ‘confident enough’ about the imbalance of evidence. This involves a trade-off between the level of confidence we feel we want and the amount of time spent deliberating (and the opportunity costs entailed – mind/time/attention is a scarce resource – Simon).

So modelling individual decision making as a process requires us to specify:
- What he/she samples
- How the evidence is weighed and accumulated
- What the stopping/trigger rule is

Boundedly Rational Expected Utility Theory – BREUT
Aim: to illustrate the idea by taking the industry standard model and embedding it in a deliberative process.
(Other models/assumptions are available... e.g. Busemeyer & Townsend’s 1993 Decision Field Theory – the pathbreaking application to preferential choice.)

What he/she samples
The sampling frame is the underlying acquired set of various memories/impressions/perceptions of relative subjective values of payoffs, represented by a set of vNM utility functions (say, a distribution of coefficients of RRA).
A draw entails picking a u(.) at random and applying it to the pair of options under consideration.
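The sampling step can be sketched as follows, assuming (purely for illustration) CRRA utility functions with normally distributed coefficients of relative risk aversion, truncated below 1 so that u(0) is well defined:

```python
import random

def crra_u(x, r):
    """CRRA utility for x >= 0 with r < 1, so that u(0) = 0."""
    return x ** (1.0 - r) / (1.0 - r)

def sample_draw(lottery_a, lottery_b, rng, r_mean=0.3, r_sd=0.3):
    """One 'draw': pick a vNM utility function at random (here a CRRA
    coefficient from a hypothetical normal distribution, truncated to
    stay below 1) and apply it to both options."""
    r = max(-1.0, min(0.95, rng.gauss(r_mean, r_sd)))
    eu_a = sum(p * crra_u(x, r) for x, p in lottery_a)
    eu_b = sum(p * crra_u(x, r) for x, p in lottery_b)
    return "A" if eu_a > eu_b else "B"

rng = random.Random(0)
a = [(15.0, 0.9), (0.0, 0.1)]    # Lottery A from the slides
b = [(50.0, 0.35), (0.0, 0.65)]  # Lottery B
draws = [sample_draw(a, b, rng) for _ in range(100)]
```

Each draw is a fully coherent vNM agent; the variability comes entirely from which u(.) happens to be sampled.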

How the evidence is weighed and accumulated
A sampled u(.) corresponds with a preference for A or B – the direction on the vertical axis. But what about the strength of the evidence? It is proxied by the CE (certainty equivalent) difference: + for A, – for B. As sampling progresses, the mean and variance are updated.
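The CE-difference evidence and the running update can be sketched as below. Welford's online algorithm is one standard way to maintain the mean and variance draw by draw; the CRRA form and the distribution of r are again illustrative assumptions:

```python
import random

def crra_u(x, r):
    """CRRA utility for x >= 0 with r < 1."""
    return x ** (1.0 - r) / (1.0 - r)

def ce(lottery, r):
    """Certainty equivalent: invert the CRRA utility at the lottery's
    expected utility."""
    eu = sum(p * crra_u(x, r) for x, p in lottery)
    return ((1.0 - r) * eu) ** (1.0 / (1.0 - r))

def running_stats(diffs):
    """Welford's online algorithm: running mean and sample variance,
    updated one observation at a time as sampling progresses."""
    n, mean, m2 = 0, 0.0, 0.0
    for d in diffs:
        n += 1
        delta = d - mean
        mean += delta / n
        m2 += delta * (d - mean)
    return mean, (m2 / (n - 1) if n > 1 else 0.0)

rng = random.Random(0)
a = [(30.0, 1.0)]               # sure 30
b = [(40.0, 0.8), (0.0, 0.2)]   # 80% chance of 40
diffs = []
for _ in range(50):
    r = max(-1.0, min(0.95, rng.gauss(0.3, 0.3)))
    diffs.append(ce(a, r) - ce(b, r))  # + is evidence for A, - for B
mean_d, var_d = running_stats(diffs)
```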

What the stopping/trigger rule is
When the options are first presented, the null hypothesis is that neither is preferred: that there is zero imbalance of evidence either way. This is maintained until rejected with sufficient confidence. An individual may be characterised as having an initial desired level of confidence which he/she lowers as time passes, in order to make this decision and get on to the next decision / rest of life.
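Putting the three ingredients together, here is a sketch of a stopping rule with a confidence requirement that declines over time. All functional forms and parameter values (the CRRA core, the r distribution, z_start, z_decay) are hypothetical:

```python
import math
import random

def crra_u(x, r):
    return x ** (1.0 - r) / (1.0 - r)

def ce(lottery, r):
    eu = sum(p * crra_u(x, r) for x, p in lottery)
    return ((1.0 - r) * eu) ** (1.0 / (1.0 - r))

def breut_choice(lottery_a, lottery_b, rng, r_mean=0.3, r_sd=0.3,
                 z_start=3.0, z_decay=0.02, max_draws=1000):
    """Sketch of the stopping rule: start from the null of zero
    imbalance; reject it (and choose) once the mean evidence is more
    than z_required standard errors from zero, where z_required (the
    desired confidence level) declines as deliberation drags on."""
    total, total_sq, mean = 0.0, 0.0, 0.0
    for n in range(1, max_draws + 1):
        r = max(-1.0, min(0.95, rng.gauss(r_mean, r_sd)))
        d = ce(lottery_a, r) - ce(lottery_b, r)
        total += d
        total_sq += d * d
        mean = total / n
        if n >= 2:
            var = max((total_sq - n * mean * mean) / (n - 1), 1e-12)
            se = math.sqrt(var / n)
            z_required = z_start * math.exp(-z_decay * n)
            if abs(mean) / se > z_required:
                break
    return ("A" if mean > 0 else "B"), n

rng = random.Random(0)
choice, n_draws = breut_choice([(30.0, 1.0)], [(40.0, 0.8), (0.0, 0.2)], rng)
```

The number of draws taken, n_draws, doubles as the model's response-time prediction: harder choices need more draws before the null is rejected.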

Some Results/Implications
1. Observed choices do not necessarily reveal the structure of underlying preferences
EU is not the only possible ‘core’ – one can embed other assumptions – but BREUT shows that underlying preferences can ALL be vNM and yet modal choices in ‘Common Ratio Effect’ pairs violate independence:
- £30, 1 is preferred to £40, 0.8 in more than 50% of choices
- Yet £40, 0.2 is the modal choice over £30, 0.25
This pattern has done more than any other to discredit independence – yet it COULD be compatible with a core EU. Challenge to RP.
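A simplified simulation of the Common Ratio Effect pairs is sketched below. It estimates choice probabilities under a fixed-draw-count stand-in for the full stopping rule; whether the modal-choice reversal actually emerges depends on the assumed distribution of risk-aversion coefficients, which here is purely hypothetical:

```python
import random

def crra_u(x, r):
    return x ** (1.0 - r) / (1.0 - r)

def ce(lottery, r):
    eu = sum(p * crra_u(x, r) for x, p in lottery)
    return ((1.0 - r) * eu) ** (1.0 / (1.0 - r))

def p_choose_first(lot1, lot2, rng, n_draws=20, n_decisions=400):
    """Estimate how often lot1 is chosen when each decision sums the
    CE differences over a fixed number of draws (a simplified,
    fixed-sample stand-in for the full stopping rule)."""
    wins = 0
    for _ in range(n_decisions):
        total = 0.0
        for _ in range(n_draws):
            r = max(-1.0, min(0.95, rng.gauss(0.3, 0.3)))
            total += ce(lot1, r) - ce(lot2, r)
        wins += total > 0
    return wins / n_decisions

rng = random.Random(0)
# Common Ratio Effect pairs: the scaled-down pair divides both
# winning probabilities by 4, so per-draw EU preferences are identical
# across pairs -- any divergence in choice probabilities comes from
# the differing magnitudes of the CE differences.
p_full = p_choose_first([(30.0, 1.0)], [(40.0, 0.8), (0.0, 0.2)], rng)
p_scaled = p_choose_first([(30.0, 0.25), (0.0, 0.75)],
                          [(40.0, 0.2), (0.0, 0.8)], rng)
```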

2. Can’t just stick a noise term on each option
Variability of the kind discussed here is intrinsic, so a simple ‘add-on’ error term cannot capture it adequately.
Take two lotteries B and C, each 50% likely to be chosen when paired with sure A: BREUT allows different choice frequencies versus other sure sums, contrary to the Luce formulation.

It might seem that all we need is to allow εC to have a higher variance than εB. But when the As are lotteries with a bigger payoff range than B and C...

The two curves flip positions. But that would entail εC having a lower variance than εB. So the independent add-on noise model is ruled out.

3. Context/frame/procedure effects are endemic
If sampling and accumulation are key, anything which influences the process may affect the outcome:
- Equivalence tasks compared with choice tasks: how is the ‘response mode’ influential? Do we ‘anchor and adjust’?
- Reference/endowment effects – WTP vs WTA: does endowment change the initial null?
- Range-frequency effects in multiple choice lists: do these edit/overwrite our sampling frames (as in DbS)?

3. Context/frame/procedure effects are endemic (cont.)
Lab experiments may show these effects most sharply – but all of them may have ‘real world’ counterparts. People may be most susceptible in contexts where they are least familiar/experienced – but these are important non-market areas (e.g. health, safety, environment) where survey elicitation informs policy. Since ALL production of responses involves SOME process, can we separate ‘true preference’ from ‘procedural bias’?

Concluding Remarks
Parsimonious deterministic models played their role in the days when we knew little about brain processes and when limited computing power made analytical results desirable. But we now have dozens of such models, each accounting for only a subset of behaviour, and with considerable overlap/redundancy.
Crucially, they neglect the reality of probabilistic responses. This cannot be ‘fixed’ by some arbitrary add-on noise (which in any case provides no explanation for the RT/difficulty/confidence data).
The ‘positive’ future lies in multiple-influence, probabilistic, process-based models harnessing computing power and simulation methods to integrate insights from psychology and neuroscience with the social sciences.