The Physics of Decision-Making: Cognitive Control as the Optimization of Behavior Gary Aston-Jones ∞ Rafal Bogacz * † ª Eric Brown † Jonathan D. Cohen.


The Physics of Decision-Making: Cognitive Control as the Optimization of Behavior Gary Aston-Jones ∞ Rafal Bogacz * † ª Eric Brown † Jonathan D. Cohen * ª Mark Gilzenrat * Philip Holmes † Patrick Simen ª *Department of Psychology ªCenter for the Study of Brain, Mind and Behavior † Program in Applied and Computational Mathematics Princeton University ∞ Department of Psychiatry, University of Pennsylvania Silvio Conte Center for Neuroscience Research, NIMH

Cognitive Control Definition: The ability to flexibly guide decision making and behavior in accord with internally represented goals or intentions

Time Scales of Decision Making & Control
Single decisions (100s of milliseconds to seconds): "Should I swing the bat?" Real-time dynamics of information integration and decision making.
Adaptive regulation of decision making (seconds to minutes): "Was that last swing too fast? Should I wait longer this time?" Adaptive adjustment of decision-making parameters; learning.
Long-term decisions, prospective control (hours to years): "Should I get some coaching help with my batting, or just retire?" Strategies.

Control: Adaptive Regulation of Behavior
[diagram: control allocation selects/biases performance; outcomes are monitored and evaluated, and control is adjusted accordingly]

Control: Adaptive Regulation of Behavior
[diagram: prefrontal areas allocate control (select/bias); posterior frontal, parietal, and temporal areas generate performance; outcomes are monitored and evaluated, and control is adjusted]

Control: Adaptive Regulation of Behavior
[diagram: PFC biases the pathway from stimulus to response]
Studied in a variety of tasks:
– Selective attention tasks
– Response inhibition tasks
– Working memory tasks
– Task switching
Computational models have specified some of the mechanisms involved:
– Prefrontal cortex and attentional control (Cohen et al., 1990; Cohen & Servan-Schreiber, 1992)
– Dopamine / basal ganglia and updating of task representations (Braver & Cohen, 2000; Frank, Loughry & O’Reilly, 2004)
– Task switching (Gilbert & Shallice, 2002; Yeung & Monsell, 2003)

Control: Adaptive Regulation of Behavior
[diagram: prefrontal areas allocate control; posterior frontal, parietal, and temporal areas generate performance; orbitofrontal cortex and anterior cingulate monitor and evaluate outcomes, and control is adjusted]

Monitoring and Adjustments of Control
Studied in a variety of tasks:
– Simple reaction time tasks
– Two alternative forced choice decision tasks
– Attention tasks
– Learning tasks
Computational models have specified some of the mechanisms involved:
– Reinforcement learning (Montague et al., 1996)
– Conflict monitoring (Botvinick et al., 2001)
– Mismatch detection (Holroyd et al., 2002)
[diagram: anterior cingulate monitors response conflict and VTA supports reinforcement learning / gating; both modulate control in PFC]

Shortcomings
Lack of a single coherent framework for understanding control:
– Multiple models, each addressing a different task: how do their mechanisms relate to and interact with one another?
– Parameterization problems
– What are the fundamental principles of operation?
– What is the relationship of the neural implementation to behavior?
We need a more precise, formal definition of control…

Control as Optimization
Refined definition of control:
– adjustment of processing parameters to optimize task performance
– assumes processing mechanisms are capable of (near) optimal function
Precedent: this approach has been used productively in a variety of fields:
– Economics: utility maximization (the standard economic model, homo economicus)
– Behavioral ecology: evolution of social behavior (e.g., Nowak, Boyd)
– Psychology: rational analysis (John Anderson)
– Neuroscience: perceptual, motor, and learning systems (e.g., Barlow, Bialek, Gallistel)
However, there is a scarcity of work that uses this approach to address cognitive control and to bridge between behavior and its neural mechanisms.

Outline
Simple behavioral task:
– Two alternative forced choice (2AFC) decision task
Current "state of play":
– Behavioral findings
– Neurobiological findings
– Neural network models
Formal analysis:
– Drift diffusion model (DDM) of decision making
– Control as optimization of the DDM
– Predictions and behavioral findings

Two Alternative Forced Choice Task
Are the dots moving left or right?
See dots moving left (<): press the left button.
See dots moving right (>): press the right button.
Measure reaction time (RT) and accuracy.

Behavioral Findings
Characteristically skewed reaction time distribution [figure: probability vs. reaction time].
Speed / accuracy tradeoff: faster responding is less accurate; more accurate responding is slower.

Neural Findings
Area MT (temporal cortex): motion-sensitive visual cortex.
Areas LIP (parietal cortex) and SEF (supplementary eye field): control of eye movements.

Simple Neural Network Model of the Two Alternative Decision Task (Usher & McClelland, 2001)
Processing:
– Flow of activity from stimulus inputs to a pair of decision units
– Each decision unit accumulates / integrates its input
– Decision units compete (mutual inhibition)
Decision:
– Occurs when the activity of one decision unit exceeds a specified threshold
[diagram: stimulus information feeds two mutually inhibiting integrator units]
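The accumulate-and-compete dynamics above can be sketched in a few lines. This is an illustrative simulation in the spirit of the leaky competing accumulator, not the fitted Usher & McClelland model; every parameter value below is made up for the example:

```python
import random

def lca_trial(i1=1.2, i2=1.0, leak=0.2, inhibition=0.3, noise=0.3,
              threshold=1.0, dt=0.01, max_steps=5000, rng=None):
    """One trial of a two-unit leaky competing accumulator.

    Each unit integrates its input, leaks, and inhibits the other; the
    trial ends when either unit's activity crosses the threshold.
    All parameter values are illustrative, not fitted."""
    rng = rng or random.Random()
    x1 = x2 = 0.0
    for step in range(max_steps):
        dx1 = (i1 - leak * x1 - inhibition * x2) * dt + noise * rng.gauss(0.0, dt ** 0.5)
        dx2 = (i2 - leak * x2 - inhibition * x1) * dt + noise * rng.gauss(0.0, dt ** 0.5)
        x1, x2 = max(x1 + dx1, 0.0), max(x2 + dx2, 0.0)  # activities stay non-negative
        if x1 >= threshold or x2 >= threshold:
            return (1 if x1 >= x2 else 2), (step + 1) * dt
    return None, max_steps * dt  # no decision before the deadline

rng = random.Random(0)
results = [lca_trial(rng=rng) for _ in range(500)]
choices = [choice for choice, _ in results if choice is not None]
p_unit1 = choices.count(1) / len(choices)  # unit 1 receives the stronger input
```

Because unit 1 receives the stronger input (1.2 vs. 1.0), it should win on the majority of trials, while noise produces occasional errors and trial-to-trial variation in decision time.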

Some Problems with Neural Network Models
Large parameter space, difficult to parameterize:
– How to set stimulus strength, connection weights, thresholds, etc.?
Complex dynamics: hard to characterize and compare.
Theoretical degeneracy: proliferation of models…

Simplification & Analysis
Step 1: Construct a geometric representation of the model's behavior.
[phase-plane figure: activity of R1 vs. activity of R2, with a threshold for each unit and attractors (stable fixed points; the one favoring R1 is correct)]

Simplification & Analysis
Step 2: Examine the dynamics.
[phase-plane figure: trajectories of (R1, R2) activity relative to the two thresholds]

Simplification & Analysis
Step 3: Note that there are two components of the trajectory: decision (the difference in activity) and co-activation (the sum of activity).
[phase-plane figure labeling the decision and co-activation directions]

…and that they have different dynamics: the decision component (difference) is slower; the co-activation component (sum) is faster.

Simplification & Analysis
Step 4: Dimensional reduction. Focus on only one dimension (Brown & Holmes, Stochastics and Dynamics, 2001):
– assume that most of the "action" is along the decision line
– therefore, the decision process can be approximated by a one-dimensional process (the difference in activity)
[phase-plane figure: slow decision direction, fast co-activation direction]

Simplification & Analysis
Step 5: Linearization. Focus on the linear range of the activation function (Cohen et al., Psychological Review, 1990):
– assume that units in the "focus of attention" operate on the linear part of their activation function (i.e., the most sensitive part of their dynamic range)
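The claim that the co-activation (sum) direction is fast while the decision (difference) direction is slow can be checked in a noise-free linear sketch. All constants below are made up for the illustration, chosen so that decay plus inhibition (k + w) is large and decay minus inhibition (k - w) is small:

```python
# Noise-free two-unit linear model: decay k, mutual inhibition w.
# The sum of activities relaxes at rate k + w (fast), while the
# difference evolves at rate k - w (slow).
k, w = 3.0, 2.9          # illustrative values: k + w = 5.9, k - w = 0.1
i1, i2 = 1.1, 1.0        # slightly unequal inputs drive the difference
dt, steps = 0.001, 2000  # simulate 2 seconds of model time
x1 = x2 = 0.0
sums, diffs = [], []
for _ in range(steps):
    nx1 = x1 + (i1 - k * x1 - w * x2) * dt
    nx2 = x2 + (i2 - k * x2 - w * x1) * dt
    x1, x2 = nx1, nx2
    sums.append(x1 + x2)
    diffs.append(x1 - x2)

# Over the second half of the run the sum has long since settled,
# while the difference is still drifting.
halfway = steps // 2
sum_change_late = abs(sums[-1] - sums[halfway])
diff_change_late = abs(diffs[-1] - diffs[halfway])
```

The separation of time scales is what licenses collapsing the two-dimensional system onto the one-dimensional decision line.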

Simplification
Drift Diffusion Model (DDM):
dx = A dt + c dW, where A = drift rate and c = noise.
P(x, t) = N(At, c√t); the process ends when x exceeds ±z (the thresholds for Response 1 and Response 2).
This process is described by the Fokker-Planck equation for the evolution of a Gaussian probability distribution toward a pair of boundaries, the mathematical description of the diffusion of an ideal gas.
[figure: one-dimensional drift-diffusion between thresholds -z and +z, with the probability distribution shown at early, middle, and late times]

Simplification
Drift Diffusion Model (DDM): dx = A dt + c dW, with drift rate A and noise c; the process ends when x exceeds ±z.
Can analytically solve for the error rate and decision time:
Error Rate (ER) = 1 / (1 + e^(2Az/c²))
Decision Time (DT) = (z/A) · tanh(Az/c²)
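The two expressions above can be checked against a direct simulation of the diffusion. Below is a rough Euler-Maruyama sketch; the parameter values A = c = z = 1 are made up for the example, and the tolerances are loose to absorb discretization and sampling error:

```python
import math
import random

def ddm_analytic(A, c, z):
    """Closed-form error rate and mean decision time for the pure DDM
    (drift A, noise c, symmetric thresholds at +/-z, unbiased start)."""
    er = 1.0 / (1.0 + math.exp(2 * A * z / c ** 2))
    dt = (z / A) * math.tanh(A * z / c ** 2)
    return er, dt

def ddm_trial(A, c, z, dt=0.001, rng=None):
    """Euler-Maruyama simulation of one trial of dx = A dt + c dW."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < z:
        x += A * dt + c * rng.gauss(0.0, math.sqrt(dt))
        t += dt
    return x >= z, t  # True if the correct (+z) boundary was hit

A, c, z = 1.0, 1.0, 1.0
er_pred, dt_pred = ddm_analytic(A, c, z)
rng = random.Random(1)
trials = [ddm_trial(A, c, z, rng=rng) for _ in range(2000)]
er_sim = sum(1 for correct, _ in trials if not correct) / len(trials)
dt_sim = sum(t for _, t in trials) / len(trials)
```

With these values the analytic predictions are ER = 1/(1 + e²) ≈ 0.119 and DT = tanh(1) ≈ 0.762, and the simulated estimates should land nearby.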

Drift Diffusion Model
[figure: evidence accumulates with drift A between thresholds +z and -z; reaction time distributions shown at each threshold]
Accurately describes reaction time distributions and error rates in simple decision making tasks (Ratcliff, 1978, 1999).

Drift Diffusion Model
Accurately describes reaction time distributions and error rates in simple decision making tasks (Ratcliff, 1978, 1999).
Accurately describes the dynamics of firing among stimulus- and response-selective neurons in simple decision making tasks (Schall, 1994; Gold & Shadlen, 2002).

Theoretical Traction
Formal reduction of neural network models (Bogacz et al., 2006).
Optimal decision making process:
– Formally equivalent to the sequential probability ratio test (SPRT), used by Turing to crack the German Enigma code in WWII
– Fastest to reach a decision for a given threshold and error rate, and most accurate for a given decision time (Wald, 1948; Turing [Good, 1979]; Rouder, 1996)
– Guarantees an arbitrarily low error rate as the threshold is increased (Bogacz et al., 2006)
However, this presents an optimization problem of its own:
– How to set the parameters (e.g., threshold and starting point)?
– Here is where control comes in…
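The SPRT itself is easy to sketch for a Bernoulli example. The hypotheses, error targets, and Wald's approximate bounds ln((1−β)/α) and ln(β/(1−α)) below are illustrative choices, not anything from the slides:

```python
import math
import random

def sprt(samples, p_h1=0.6, p_h0=0.4, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for Bernoulli data.

    H1: P(success) = p_h1; H0: P(success) = p_h0. Accumulate the
    log-likelihood ratio until it crosses an (approximate) bound."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0
    llr, n = 0.0, 0
    for n, obs in enumerate(samples, start=1):
        llr += math.log((p_h1 if obs else 1 - p_h1) /
                        (p_h0 if obs else 1 - p_h0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return None, n  # undecided within the sample budget

rng = random.Random(2)
# Run many tests on data actually generated from H1 (p = 0.6).
runs = [sprt(rng.random() < 0.6 for _ in range(10000)) for _ in range(200)]
accuracy = sum(1 for decision, _ in runs if decision == "H1") / len(runs)
mean_n = sum(n for _, n in runs) / len(runs)
```

The sequential test typically terminates after a few dozen observations here, far fewer than a comparable fixed-sample test would need, which is the sense in which the SPRT (and hence the DDM) is fastest for a given error rate.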

DDM and Control
The DDM specifies psychologically relevant control parameters:
– Starting point: expectations (priors)
– Drift rate: signal strength / attention
– Threshold: speed-accuracy tradeoff
Empirical question:
– Do people in fact adjust these parameters to optimize performance?
– We can analyze the DDM to determine the optimal parameters under various experimental conditions, and use this to generate testable predictions. For example, what is the optimal threshold, and do people use it?
– First, however, we must assume an “objective (utility) function,” that is, the function that control seeks to optimize.

Reward Rate Optimization
Reward rate: RR = (1 − Error Rate) / (Reaction Time + Delay)
Re-express RT and ER in terms of the DDM parameters (drift A, threshold z, noise c):
1/RR = z/A + Delay + (Delay − z/A) · e^(−2Az/c²)
Solve for the threshold that maximizes RR:
e^(2Az/c²) − 1 = (2A²/c²) · (Delay − z/A)
This predicts changes in the speed-accuracy tradeoff (threshold) as a function of the task parameters (delay, drift, and noise) (Bogacz et al., 2006).
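A quick numerical check of the threshold result: grid-search the reward rate over z and confirm that the maximizer satisfies the first-order condition. The parameter values A = c = 1 and Delay = 2 are made up for the example:

```python
import math

def reward_rate(z, A=1.0, c=1.0, delay=2.0):
    """RR = (1 - ER) / (DT + delay) for the pure DDM."""
    er = 1.0 / (1.0 + math.exp(2 * A * z / c ** 2))
    dt = (z / A) * math.tanh(A * z / c ** 2)
    return (1.0 - er) / (dt + delay)

A, c, delay = 1.0, 1.0, 2.0

# Grid search for the reward-rate-maximizing threshold.
zs = [i * 0.001 for i in range(1, 5000)]
z_opt = max(zs, key=lambda z: reward_rate(z, A, c, delay))

# First-order condition for the optimum:
# e^(2Az/c^2) - 1 = (2A^2/c^2) * (delay - z/A)
lhs = math.exp(2 * A * z_opt / c ** 2) - 1.0
rhs = (2 * A ** 2 / c ** 2) * (delay - z_opt / A)
```

Both the grid search and the transcendental condition should agree on the same optimal threshold (roughly z ≈ 0.65 for these values), illustrating that the threshold is neither zero (fast guessing) nor arbitrarily high (slow but accurate).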

Predictions: Effects of Delay
Reward rate (and therefore the optimal threshold) varies with total delay:
RR = (1 − Error Rate) / (DT + D_total), where total delay D_total = ITI + (ER × D_pen)
⇒ The optimal threshold is the same for the following two conditions (D_total = 2 sec):
– ITI = 0.5 sec, D_pen = 1.5 sec
– ITI = 2 sec, no D_pen

Effects of Delay: Empirical Results
[figure: reaction times, error rates, and estimated thresholds for the two delay conditions]

Predictions: Effects of Stimulus Frequency
If one stimulus is more frequent than the other, it is optimal to move the starting point, not the threshold (assuming constant drift, i.e., constant signal-to-noise ratio).
[figure: starting point shifts toward threshold B as Stimulus B's frequency rises from 50% to 75%]

Effects of Stimulus Frequency
[figure: as Stimulus B's frequency rises from 50% to 75% to 90%, the optimal starting point moves toward threshold B]
For sufficiently extreme frequencies:
– the optimal starting point exceeds the optimal threshold
– the model predicts a switch from integration to stereotyped responding
– the stimulus frequency at which this occurs varies with delay and drift
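One way to make the starting-point claim concrete: in the pure DDM the state x can be read as scaled log-posterior odds, so a prior P(stimulus A) = p is absorbed by shifting the start to (c²/2A)·ln(p/(1−p)) (following the treatment in Bogacz et al., 2006). The threshold value below is made up; the point is that for sufficiently extreme p the starting point passes the threshold:

```python
import math

def optimal_start(p, A=1.0, c=1.0):
    """Starting point encoding a prior P(stimulus A) = p on the DDM's
    log-odds scale: x0 = (c^2 / 2A) * ln(p / (1 - p)).
    Illustrative; follows the biased-prior treatment in Bogacz et al. (2006)."""
    return (c ** 2 / (2.0 * A)) * math.log(p / (1.0 - p))

z = 0.5  # illustrative threshold, not a fitted value
starts = {p: optimal_start(p) for p in (0.5, 0.75, 0.9)}
```

With z = 0.5, the shift already exceeds the threshold at p = 0.75 (0.5·ln 3 ≈ 0.55): the model would respond before integrating any evidence, which is exactly the predicted switch to stereotyped responding.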

Effects of Stimulus Frequency: Empirical Data
[figure]

Optimal Performance Curve
The fact that there is a single optimal threshold for a given set of task parameters means that the DDM equations
– Error Rate (ER) = 1 / (1 + e^(2Az/c²))
– Decision Time (DT) = (z/A) · tanh(Az/c²)
– Reward Rate: 1/RR = z/A + Delay + (Delay − z/A) · e^(−2Az/c²)
can be solved for DT as a function of ER.
In other words, there is a single, optimal speed-accuracy curve that should quantitatively define performance…
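Numerically, the curve can be traced by computing the optimal threshold at several drift values and checking that every resulting (ER, DT/D_total) pair lands on one curve. The closed form in `opc` below is the optimal performance curve DT/D_total = [1/(1 − 2·ER) + 1/(ER·ln((1−ER)/ER))]⁻¹ from Bogacz et al. (2006); the drift values and delay are made up for the example:

```python
import math

def er_dt(z, A, c=1.0):
    """Error rate and mean decision time of the pure DDM."""
    er = 1.0 / (1.0 + math.exp(2 * A * z / c ** 2))
    dt = (z / A) * math.tanh(A * z / c ** 2)
    return er, dt

def opc(er):
    """Optimal performance curve: DT / D_total as a function of ER."""
    return 1.0 / (1.0 / (1.0 - 2.0 * er)
                  + 1.0 / (er * math.log((1.0 - er) / er)))

delay = 2.0
points = []
for A in (0.5, 1.0, 2.0):  # different drifts, i.e. different difficulties
    zs = [i * 0.0005 for i in range(1, 10000)]
    # Threshold that maximizes reward rate at this drift.
    z_opt = max(zs, key=lambda z: (1.0 - er_dt(z, A)[0]) / (er_dt(z, A)[1] + delay))
    er, dt = er_dt(z_opt, A)
    points.append((er, dt / delay))
```

Each difficulty level yields a different (ER, DT) pair, but once DT is normalized by the total delay they all fall on the single curve, which is what makes the OPC a parameter-free prediction.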

Optimal Performance Curve: Theoretical Prediction
[figure: DT (mean normalized) vs. ER]

Optimal Performance Curve: Empirical Data
[figure: DT (mean normalized) vs. ER, with data ordered by increasing accuracy weight]

Summary
The Drift Diffusion Model:
– explains human RT distributions and accuracy in simple decision tasks
– explains the dynamics of neural firing in simple decision tasks
– is formally equivalent to neural network models of simple decision tasks
– describes the parameters of optimal performance (maximizing reward rate)
– predicts influences of task parameters on the speed-accuracy tradeoff that approximate those observed empirically
It defines, in formal and principled terms, the mechanisms underlying decision making and its interaction with evaluation in simple two alternative forced choice tasks.
It defines, in formal and principled terms, the variables that are subject to regulation by control mechanisms to optimize outcomes.

Current Directions
Probe the neural mechanisms of control (fMRI/EEG):
– Information integration process (posterior and frontal mechanisms)
– Outcome monitoring (OFC? ACC?)
– Threshold adjustment and starting point biases (BG? Supplementary motor areas?)
– Drift control / attention (PFC?)
Extend to more interesting behavioral domains:
– Competition / selection tasks with time-varying drift (attentional control)
– Multi-choice decisions
Contact with other normative approaches:
– Bayesian theory: optimal computation
– Information theoretic approaches: distribution-free analyses

Acknowledgements
Investigators: Rafal Bogacz (Bristol University), Eric Brown (NYU), Patrick Simen (Princeton), Phil Holmes (Princeton), Jeff Moehlis (UC Santa Barbara), Tyler McMillen (Princeton), Phil Eckhoff (Princeton), Angela Yu (Princeton)
Neuroscience of Cognitive Control Laboratory, Department of Psychology; Program in Applied and Computational Mathematics; Center for the Study of Brain, Mind and Behavior; Princeton University
Funding Support: NIMH, NSF, Center for the Study of Brain, Mind and Behavior