1
Graham Loomes, University of Warwick
Undergraduate at Essex 1967-70
Modelling Decision Making: Combining Economics, Psychology and Neuroscience
Thanks to many co-authors over the years – Bob Sugden and Mike Jones-Lee in particular
For today's talk, thanks to Dani Navarro-Martinez, Andrea Isoni and David Butler
Thanks to ESRC for a Professorial Fellowship; more recently, the ESRC Network for Integrated Behavioural Science; and the Leverhulme Trust 'Value' Programme
2
Graham Loomes, University of Warwick
Undergraduate at Essex 1967-70
Modelling Decision Making: Combining Economics, Psychology and Neuroscience
A number of things attracted me to Essex:
Progressive attitudes
Impressive people
Common first year involving Econ, Gov, Soc, Stats
No Psych, unfortunately (and still none at u/g level... ?)
3
Positive economics: emphasis on evidence
Downside: de-emphasised internal processes – what went on in the head was (then) unobservable 'black box' activity
Upside: favoured empirical testing
For decision making under risk, the (S)EU model ruled in economics:
As if people assign subjective 'utility' to payoffs, weight by probabilities and decide according to the expectation
But the evidence contradicted the theory in certain 'phenomenal' or 'paradoxical' respects
4
But actually 'positive' economists didn't do much testing (one or two notable exceptions)
Most testing was done by psychologists and statisticians (and the odd engineer)
And clear gaps between economists' models and observed behaviour were apparent from the early days (and stubbornly persist):
Models: deterministic; parsimonious/restricted; procedurally invariant
Behaviour: probabilistic; multi-faceted; sensitive to framing/procedure
5
What if we were starting from what we know now? Summarise some key facts.
Choices are systematically probabilistic over some range, e.g. when choosing between
Option B: £40, 0.8; 0, 0.2
Option A: X, 1
6
Response times (RTs) are related to these probabilities, as are judgments of difficulty / confidence
Option B: £40, 0.8; 0, 0.2
Option A: X, 1
7
I offer you a choice between
Lottery A: 90% chance of £15; 10% chance of 0
Lottery B: 35% chance of £50; 65% chance of 0
on the understanding that the one you pick will be played out and you will get paid (or not) accordingly.
Which one do you pick? How did you reach that decision?
Decision making involves brain activity that looks like the sampling and accumulation of evidence until an action is triggered.
How might that apply to risky choice?
8
Lottery A: 90% chance of £15; 10% chance of 0
Lottery B: 35% chance of £50; 65% chance of 0
A fairly general model (with some eyetracking support) entails numerous (often repeated) binary comparisons:
The comparison of positive payoffs (£50 vs £15) is evidence for B
The comparison of the chances of getting 0 (10% vs 65%) is evidence for A
It is not just a matter of the direction of the argument but also its force – involving judgments sampled from memory and/or perception which may vary in strength from sample to sample
9
The process of sampling and accumulating evidence has often been represented as follows:
Vertical axis represents force / valence: up favours A, down favours B
Choice made when accumulated evidence reaches a threshold
More evenly balanced: liable to more vacillation, longer RT and greater judged difficulty
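To make the picture concrete, here is a minimal Python sketch of such an accumulator, offered as an illustration rather than as the specific model behind these slides: increments of evidence are drawn at random and summed until a fixed threshold is crossed. The drift, noise and threshold values are purely assumed.

```python
import random

def accumulate(drift=0.04, noise=1.0, threshold=10.0, max_steps=10_000, rng=None):
    """Random-walk accumulation of evidence towards one of two thresholds.

    Positive accumulated evidence favours A, negative favours B.
    Returns the choice ('A' or 'B') and the number of samples taken (a proxy for RT).
    """
    rng = rng or random.Random()
    evidence = 0.0
    for step in range(1, max_steps + 1):
        # each sample has a direction and a force that varies from sample to sample
        evidence += rng.gauss(drift, noise)
        if abs(evidence) >= threshold:
            return ("A" if evidence > 0 else "B"), step
    return ("A" if evidence >= 0 else "B"), max_steps

if __name__ == "__main__":
    rng = random.Random(1)
    runs = [accumulate(rng=rng) for _ in range(10)]
    choices = [choice for choice, _ in runs]
    times = [steps for _, steps in runs]
    # same options, same parameters: choices and response times still vary from run to run
    print(choices.count("A"), "choices of A,", choices.count("B"), "of B")
    print("response times (steps):", times)
```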
10
Natural variability even when same action triggered
11
With some sequences possibly leading to a different choice
12
The same choice presented independently on 10 occasions: A chosen 7 times, B chosen 3 times.
Intrinsic variability, not error – simulations
13
Often such models are depicted in terms of a fixed threshold.
An alternative is to suppose that choice is triggered when we feel 'confident enough' about the imbalance of evidence.
This involves trading off the level of confidence we feel we want against the amount of time spent deliberating (and the opportunity costs entailed – mind/time/attention is a scarce resource – Simon)
14
So modelling individual decision making as a process requires us to specify:
What he/she samples
How the evidence is weighed and accumulated
What the stopping/trigger rule is
15
Boundedly Rational Expected Utility Theory – BREUT
Aim: to illustrate the idea by taking the industry-standard model and embedding it in a deliberative process
(Other models/assumptions are available... e.g. Busemeyer & Townsend's 1993 Decision Field Theory – the pathbreaking application to preferential choice)
16
What he/she samples
The sampling frame is the underlying acquired set of various memories/impressions/perceptions of relative subjective values of payoffs, represented by a set of vNM utility functions (say, a distribution of coefficients of relative risk aversion, RRA)
A draw entails picking a u(.) at random and applying it to the pair of options under consideration
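As an illustration of what a single 'draw' might look like in code, the sketch below assumes (purely for convenience, not as the talk's calibration) a normal distribution of RRA coefficients and a CRRA-style utility shifted by 1 so that zero payoffs are handled. A draw picks a coefficient, builds the implied vNM u(.), and reports which option it favours.

```python
import math
import random

def crra_utility(x, r):
    """Illustrative CRRA-style utility, shifted by 1 so that zero payoffs are handled."""
    if abs(r - 1.0) < 1e-9:
        return math.log(x + 1.0)
    return ((x + 1.0) ** (1.0 - r) - 1.0) / (1.0 - r)

def expected_utility(lottery, r):
    """A lottery is a list of (payoff, probability) pairs."""
    return sum(p * crra_utility(x, r) for x, p in lottery)

def one_draw(option_a, option_b, mu=0.5, sigma=0.4, rng=None):
    """One 'draw': pick a relative-risk-aversion coefficient at random, apply the
    implied vNM utility function to both options, and report which it favours."""
    rng = rng or random.Random()
    r = rng.gauss(mu, sigma)          # assumed distribution of RRA coefficients
    return ("A" if expected_utility(option_a, r) > expected_utility(option_b, r) else "B"), r

if __name__ == "__main__":
    A = [(15, 0.90), (0, 0.10)]       # Lottery A from the earlier slide
    B = [(50, 0.35), (0, 0.65)]       # Lottery B from the earlier slide
    rng = random.Random(7)
    print([one_draw(A, B, rng=rng)[0] for _ in range(10)])
```

Repeated draws will typically disagree with one another, which is the raw material for the accumulation step described next.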
17
How the evidence is weighed and accumulated
A sampled u(.) corresponds with a preference for A or B – the direction on the vertical axis
But what about the strength of the evidence? It is proxied by the certainty-equivalent (CE) difference: + for A, – for B
As sampling progresses, the mean and variance of the accumulated evidence are updated
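A sketch of how the 'force' of a draw and the running statistics might be computed: the CE difference under the sampled u(.) gives the signed strength of that piece of evidence, and a Welford-style update keeps the mean and variance of the evidence accumulated so far. The utility family and the distribution of coefficients are again illustrative assumptions.

```python
import math
import random

def u(x, r):
    """Illustrative CRRA-style utility, shifted by 1 so that zero payoffs are handled."""
    if abs(r - 1.0) < 1e-9:
        return math.log(x + 1.0)
    return ((x + 1.0) ** (1.0 - r) - 1.0) / (1.0 - r)

def u_inverse(v, r):
    """Invert u, turning an expected utility back into a money amount."""
    if abs(r - 1.0) < 1e-9:
        return math.exp(v) - 1.0
    return (v * (1.0 - r) + 1.0) ** (1.0 / (1.0 - r)) - 1.0

def ce_difference(option_a, option_b, r):
    """Force of one draw: CE(A) - CE(B) under the sampled u(.).
    Positive values are evidence for A, negative values are evidence for B."""
    def ce(lottery):
        return u_inverse(sum(p * u(x, r) for x, p in lottery), r)
    return ce(option_a) - ce(option_b)

class RunningStats:
    """Welford-style incremental mean and variance of the sampled CE differences."""
    def __init__(self):
        self.n, self.mean, self._m2 = 0, 0.0, 0.0
    def update(self, value):
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (value - self.mean)
    @property
    def variance(self):
        return self._m2 / (self.n - 1) if self.n > 1 else float("inf")

if __name__ == "__main__":
    A = [(15, 0.90), (0, 0.10)]
    B = [(50, 0.35), (0, 0.65)]
    rng, stats = random.Random(3), RunningStats()
    for _ in range(20):
        stats.update(ce_difference(A, B, rng.gauss(0.5, 0.4)))   # assumed RRA draws
    print(stats.n, round(stats.mean, 2), round(stats.variance, 2))
```

The running mean plays the role of the accumulated imbalance of evidence; its standard error is what a stopping rule of the kind described next can work with.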
18
What the stopping/trigger rule is
When the options are first presented, the null hypothesis is that neither is preferred: that there is zero imbalance of evidence either way
This is maintained until rejected with sufficient confidence
An individual may be characterised as having an initial desired level of confidence which he/she lowers as time passes, in order to make this decision and get on to the next decision / the rest of life
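One way such a stopping rule could be written is sketched below: the accumulated evidence is repeatedly tested against the null of zero imbalance, and the number of standard errors required before the null is rejected is relaxed as deliberation drags on. The evidence sampler is passed in as a function (for instance, a CE-difference draw like the one sketched earlier), and the decay schedule and critical values are assumptions made only for illustration.

```python
import math
import random

def decide(sample_evidence, initial_z=2.58, floor_z=0.5, decay=0.98,
           min_samples=3, max_samples=2000, rng=None):
    """Sequential stopping rule for a binary choice.

    sample_evidence(rng) returns one CE-difference draw (+ favours A, - favours B).
    Start from the null of zero imbalance; stop when the running mean exceeds the
    currently required number of standard errors, a requirement that is lowered
    (decayed) as time passes.  Returns (choice, number_of_samples).
    """
    rng = rng or random.Random()
    n, mean, m2 = 0, 0.0, 0.0
    required_z = initial_z
    while n < max_samples:
        x = sample_evidence(rng)
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        if n >= min_samples:
            std_err = math.sqrt(m2 / (n - 1)) / math.sqrt(n)
            if abs(mean) >= required_z * max(std_err, 1e-12):
                return ("A" if mean > 0 else "B"), n
        required_z = max(floor_z, required_z * decay)   # impatience: demand less confidence over time
    return ("A" if mean >= 0 else "B"), n               # forced response if time runs out

if __name__ == "__main__":
    # Illustrative evidence stream: CE differences drawn from a mildly A-favouring distribution.
    def sampler(rng):
        return rng.gauss(0.3, 2.0)
    rng = random.Random(11)
    print([decide(sampler, rng=rng) for _ in range(10)])   # (choice, samples taken) per decision
```

Holding the sampler fixed, decisions where the mean evidence sits close to zero take more samples and can go either way, which is how probabilistic choice and the RT/difficulty/confidence patterns arise in this kind of model.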
19
Some Results/Implications
1. Observed choices do not necessarily reveal the structure of underlying preferences
EU is not the only possible 'core' – other assumptions can be embedded – but BREUT shows that underlying preferences can ALL be vNM and yet modal choices in 'Common Ratio Effect' pairs violate independence:
£30, 1 is preferred to £40, 0.8 in more than 50% of choices
Yet £40, 0.2 is the modal choice over £30, 0.25
This pattern has done more than any other to discredit independence – yet it COULD be compatible with core EU
A challenge to revealed preference (RP)
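The claim that an all-vNM core can still generate modal choices violating independence can be explored by simulation. The sketch below is a deliberate simplification of the deliberative process (a fixed number of draws per decision and choice by the sign of the average CE difference, with an unshifted CRRA family restricted to coefficients below 1 and an assumed normal distribution of those coefficients); whether and how strongly the common ratio pattern shows up depends on these assumed parameters rather than on anything calibrated in the talk.

```python
import random

def certainty_equivalent(lottery, r):
    """CE under CRRA u(x) = x**(1 - r) / (1 - r), keeping r < 1 so zero payoffs are allowed."""
    eu = sum(p * x ** (1.0 - r) / (1.0 - r) for x, p in lottery)
    return ((1.0 - r) * eu) ** (1.0 / (1.0 - r))

def simulated_choice(safe, risky, draws, mu, sigma, rng):
    """Simplified deliberation: average the CE differences over a fixed number of
    sampled utility functions and choose by the sign of that average."""
    total = 0.0
    for _ in range(draws):
        r = min(rng.gauss(mu, sigma), 0.95)   # keep the coefficient below 1 for this utility family
        total += certainty_equivalent(safe, r) - certainty_equivalent(risky, r)
    return "safe" if total > 0 else "risky"

if __name__ == "__main__":
    pairs = {
        "scaled up (£30,1 vs £40,0.8)":      ([(30, 1.0)],             [(40, 0.8), (0, 0.2)]),
        "scaled down (£30,0.25 vs £40,0.2)": ([(30, 0.25), (0, 0.75)], [(40, 0.2), (0, 0.8)]),
    }
    rng = random.Random(42)
    for label, (safe, risky) in pairs.items():
        picks = [simulated_choice(safe, risky, draws=20, mu=0.3, sigma=0.4, rng=rng)
                 for _ in range(500)]
        print(label, "- safe chosen in", picks.count("safe"), "of 500 simulated decisions")
```

Every sampled utility function here satisfies independence, so any divergence between the two printed frequencies is generated by the noisy deliberative aggregation, not by non-EU preferences.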
20
2. Can't just stick a noise term on each option
Variability of the kind discussed here is intrinsic, so a simple 'add-on' error term cannot capture it adequately
Take two lotteries B and C, each 50% likely to be chosen when paired with sure amount A
BREUT allows them to have different choice frequencies versus other sure sums – contrary to the Luce formulation
[Figure: choice-frequency curves for B and C against a range of sure sums]
21
It might seem that all we need is to allow C to have higher variance than B.
But when the As are lotteries with a bigger payoff range than B and C...
[Figure: choice-frequency curves for B and C against the lottery comparators]
22
The two curves flip positions.
But that would entail C having a lower variance than B.
So the independent add-on noise model is ruled out.
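One way to spell the argument out, under an assumed Fechner-style specification in which each option's value carries its own independent noise (this functional form is an illustration, not taken from the talk):

```latex
% Assumed Fechner-style model: independent additive noise on each option's value
\[
P(\text{choose } X \text{ over comparator } Y)
  = \Phi\!\left(\frac{V(X)-V(Y)}{\sqrt{\sigma_X^{2}+\sigma_Y^{2}}}\right),
\qquad
\text{so the steepness of the curve for } X \text{ is governed by }
\frac{1}{\sqrt{\sigma_X^{2}+\sigma_Y^{2}}}.
\]
```

Against sure sums the comparator contributes almost no noise, so a flatter curve for C requires C's noise variance to exceed B's; but against any common wide-range lottery comparator the combined noise terms preserve that same ordering, so under independent add-on noise the relative positions of the two curves could never flip. An observed flip would require C's variance to be lower than B's, which is why that model is ruled out.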
23
3. Context/frame/procedure effects are endemic
If sampling and accumulation are key, anything which influences the process may affect the outcome:
Equivalence tasks compared with choice tasks: how is the 'response mode' influential? Do we 'anchor and adjust'?
Reference/endowment effects – WTP vs WTA: does endowment change the initial null?
Range-frequency effects in multiple choice lists: do these edit/overwrite our sampling frames (as in Decision by Sampling, DbS)?
24
3. Context/frame/procedure effects are endemic
Lab experiments may show effects most sharply – but all these effects may have 'real world' counterparts
People may be most susceptible in contexts where they are least familiar/experienced – but these are important non-market areas (e.g. health, safety, environment) where survey elicitation informs policy
Since ALL production of responses involves SOME process, can we separate 'true preference' from 'procedural bias'?
25
Concluding Remarks
Parsimonious deterministic models played their role in the days when we knew little about brain processes and when limited computing power made analytical results desirable
But we now have dozens of such models, each only accounting for a subset of behaviour and with considerable overlap/redundancy
Crucially, they neglect the reality of probabilistic responses. This cannot be 'fixed' by some arbitrary add-on noise (which in any case provides no explanation for the RT/difficulty/confidence data)
The 'positive' future lies in multiple-influence probabilistic process-based models harnessing computing power and simulation methods to integrate insights from psychology and neuroscience with the social sciences
26
Graham Loomes, University of Warwick
Undergraduate at Essex 1967-70
Modelling Decision Making: Combining Economics, Psychology and Neuroscience