C. Shawn Green, Alexandre Pouget, Daphne Bavelier  Current Biology 


Improved Probabilistic Inference as a General Learning Mechanism with Action Video Games  C. Shawn Green, Alexandre Pouget, Daphne Bavelier  Current Biology  Volume 20, Issue 17, Pages 1573-1579 (September 2010) DOI: 10.1016/j.cub.2010.07.040 Copyright © 2010 Elsevier Ltd

Figure 1 Visual Motion Direction Discrimination (A) Task. Subjects viewed a dynamic random dot motion display and were asked to indicate the net direction of motion (left or right; here, the correct answer would be right). On every trial, some proportion of the dots moved coherently (top, 50% coherence; middle, 25% coherence; bottom, 0% coherence) either to the left or to the right, while the remaining dots were replotted randomly. By parametrically varying the number of coherently moving dots from very few to many, full psychometric and chronometric curves could be obtained. (B) Behavior. Although VGPs and NVGPs demonstrated equivalent accuracy (p = .65, p-eta2 = .01) (top), VGPs responded substantially faster than NVGPs (F(1,21) = 18.9, p < .001, p-eta2 = .47) (bottom). Importantly, VGP status interacted with coherence because of a greater decrease in RTs in VGPs at low than high coherence (F(6, 126) = 3.5, p < .001, p-eta2 = .15). In this and all other psychometric and chronometric curve figures, error bars correspond to between-subject standard error. (C) Drift diffusion model (DDM). The accumulation of the noisy sensory evidence is simulated by the diffusion of a particle upward or downward until a decision bound is reached. DDM models generate psychometric and chronometric curves that are constrained by three main variables [14]: (1) the rate at which information is accumulated over time, (2) the height of the decision bound at which the system stops accumulating evidence and a decision is made, and (3) the nondecision time, an additive amount of time that is common to all tasks and reflects nondecision processes such as motor planning and execution. To quantitatively assess the individual contribution of integration rate, decision bound, and nondecision time, RT and accuracy data were simultaneously fit to each subject's data with the proportional-rate diffusion model as in Palmer and colleagues [14]. 
The fits were good and equivalent in the two groups (r²(VGP) = .93, r²(NVGP) = .90, p = .36). The rate of integration was greater in the VGP than the NVGP group (t(21) = 2.6, p = .02, Cohen's d = 1.13, top), whereas the opposite result was observed for the decision bound (t(21) = 3.6, p = .002, Cohen's d = 1.57, middle). No difference was observed between the groups in nondecision time (p > .7, Cohen's d = 0.14, bottom), eliminating an additive, postdecision process as a possible source of group differences. Data from individual subjects were fit separately and error bars correspond to between-subject standard error.
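Under a proportional-rate diffusion model with symmetric bounds and unit noise, accuracy and mean RT have standard closed forms, which makes the rate/bound trade-off described above easy to explore numerically. A minimal sketch follows; the parameter values (k, A, t0) and the "VGP-like"/"NVGP-like" labels are illustrative assumptions, not the paper's fitted values.

```python
import math

def ddm_predict(coherence, k, A, t0):
    """Proportional-rate diffusion model: drift = k * coherence,
    symmetric bounds at +/-A (unit noise), nondecision time t0.
    Returns (P(correct), mean RT in seconds)."""
    mu = k * coherence
    if mu == 0.0:
        # Limits as drift -> 0: chance accuracy, decision time A^2.
        return 0.5, A * A + t0
    p_correct = 1.0 / (1.0 + math.exp(-2.0 * A * mu))
    mean_decision_time = (A / mu) * math.tanh(A * mu)
    return p_correct, mean_decision_time + t0

# Hypothetical observers: "VGP-like" has a higher rate k and a lower
# bound A than "NVGP-like", mirroring the direction of the group fits.
for label, k, A in [("NVGP-like", 10.0, 1.0), ("VGP-like", 15.0, 0.8)]:
    p, rt = ddm_predict(0.25, k, A, t0=0.35)
    print(f"{label}: P(correct)={p:.3f}, mean RT={rt:.3f}s")
```

Note how raising the rate while lowering the bound reproduces the qualitative group pattern: markedly shorter RTs with essentially unchanged accuracy.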

Figure 2 Neural Model—Motion Direction Discrimination (A) Neural Network Architecture. The network consists of two interconnected layers of neurons with Gaussian tuning curves. In MT, the sensory layer, the tuning curves are for direction of motion, whereas in LIP, the integration layer, the tuning curves are for saccade direction, as a proxy in our case for a left/right decision. Note that we do not mean to say that LIP is the only area involved in this process—the label is used mostly for convenience (the same is true for the MT label in the input layer). The layers differ in their connectivity and dynamics. The MT neurons send feed-forward connections to the LIP neurons. Each LIP neuron receives a Gaussian pattern of weights centered on the MT neuron with the same preferred direction. The LIP neurons also have lateral connections implementing short-range excitation and long-range inhibition, as well as a long time constant (1 s) allowing them to integrate their input. Each panel shows a representative pattern of activity, in terms of spike count 200 ms into a trial, for the sensory layer (MT, bottom) and the integration layer (LIP, top). (B) Neural Model Fit. As with the DDM, the fits were good and equivalent for the two groups. The neural model captures the change in performance from NVGP to VGP with a 55% increase in the conductance of the feed-forward connections between the sensory (MT) and the integration (LIP) layers and, in contrast to the DDM, nearly no change in the bound height (a 1% decrease, which is within the resolution of our numerical maximization procedure; see Supplemental Experimental Procedures, part A). The conductance controls the amount of information transmitted from the sensory layer to the integration layer per unit time, i.e., the strength of the feed-forward connections. It is important to note that this effect is not analogous to a simple change in sensitivity in DDMs.
Although increasing the conductance does increase the amount of information transmitted from the sensory to the integration layer (which intuitively should increase accuracy), it also induces large fluctuations in the membrane potential of the neurons. These fluctuations lead the network to reach the bound faster, thus lowering the percentage of correct responses. These two effects cancel one another over a wide range of parameters, allowing a single change in feed-forward strength to alter reaction time while leaving the psychometric curve nearly unchanged. Model fit corresponds to best fit to the mean data rather than the mean of individual fits.
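The cancellation described above can be illustrated with a toy Monte Carlo: two accumulators count Poisson spikes, each spike weighted by a feed-forward gain, and the first to hit a fixed bound determines choice and decision time. Scaling the gain scales both signal and noise, so decision times shrink while accuracy changes only modestly. This is a simplified stand-in for the full two-layer network (no tuning curves, lateral connections, or membrane dynamics), and all rates and the bound value are assumptions chosen for illustration.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's algorithm; adequate for the small per-step rates used here.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulate(gain, n_trials=1500, seed=0):
    """Race between two spike-count accumulators: the 'preferred'
    accumulator receives a higher Poisson rate than the 'null' one,
    each spike is weighted by `gain`, and the first accumulator to
    reach the bound sets the choice and the decision time."""
    rng = random.Random(seed)
    r_pref, r_null, bound, dt = 40.0, 30.0, 60.0, 0.01  # illustrative
    n_correct, total_rt = 0, 0.0
    for _ in range(n_trials):
        a = b = t = 0.0
        while a < bound and b < bound:
            a += gain * poisson(rng, r_pref * dt)
            b += gain * poisson(rng, r_null * dt)
            t += dt
        if a >= bound:
            n_correct += 1
        total_rt += t
    return n_correct / n_trials, total_rt / n_trials

# Raising the gain speeds bound crossing; accuracy moves far less.
for gain in (1.0, 1.5):
    acc, rt = simulate(gain)
    print(f"gain={gain}: accuracy={acc:.2f}, mean decision time={rt:.2f}s")
```

In this toy version the cancellation is only approximate (accuracy drifts down slightly as the gain rises), but the qualitative signature matches the model fit: a large speedup with a nearly flat psychometric curve.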

Figure 3 Auditory Tone Location Discrimination (A) Task. A pure tone embedded in a white noise mask was presented in one ear, while white noise alone was presented in the other (both ears being normalized to the same mean amplitude). The subjects' task was to indicate with a button press the ear in which the tone was present as quickly and accurately as possible. In a manner consistent with adjusting the coherence level of the motion stimulus, the ratio of the amplitude of the target tone to the white noise mask was manipulated in order to test performance across the range of possible accuracy levels and reaction times (high amplitude, top; low amplitude, bottom). (B) Behavior. As in Experiment 1, VGPs and NVGPs demonstrated equivalent accuracy (p = .32, p-eta2 = .05) (top), VGPs responded substantially faster than the NVGPs (F(1,21) = 20.6, p < .001, p-eta2 = .50) (bottom), and the RT difference between groups was greater at lower signal-to-noise ratios (SNR) (F(7,147) = 5.2, p < .001, p-eta2 = .2). (C) Drift diffusion model. The rate of integration was greater in the VGP than the NVGP group (t(21) = 3.8, p = .001, Cohen's d = 1.66) (top), while the opposite result was observed for the decision bound (t(21) = 2.6, p = .02, Cohen's d = 1.13) (middle). No difference was observed between the groups in nondecision time (p > .05, Cohen's d = .82) (bottom). Data from individual subjects were fit separately and error bars correspond to between-subject standard error.

Figure 4 Critical Duration Experiment Results Accuracy of the VGPs and NVGPs for two levels of visual coherence (A) and two levels of auditory SNRs (B) as a function of presentation duration (see Supplemental Information). Individual subject data were modeled as a simple exponential rise to an asymptote (VGP = thinner lines; NVGP = thicker lines) using %Correct(t) = λ(1 − e^(−β(t − δ))) + 50%, where lambda (λ) is the level of asymptotic performance, beta (β) is the rate at which accuracy grows as a function of time, and delta (δ) is the intercept, or the time at which accuracy rises above chance levels [35]. In both the DDM and the neural model, faster accumulation of information predicts a greater rate (β) value in the VGPs. This prediction was confirmed (visual motion: VGP, 8.1 ± 1.1; NVGP, 4.8 ± 1.1; F(1,21) = 4.6, p = .045, p-eta2 = .19; auditory tone: VGP, 24.5 ± 3.9; NVGP, 12.9 ± 3.8; F(1,21) = 4.5, p = .044, p-eta2 = .18), demonstrating again a greater rate of sensory integration in VGPs.
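The exponential-rise model in the legend is straightforward to evaluate directly. In the sketch below, the β values are the reported group means for the visual motion task, while λ and δ are illustrative placeholders (the paper does not report them here), so the printed curves show only the qualitative effect of a larger β.

```python
import math

def percent_correct(t, lam, beta, delta):
    """Exponential rise to asymptote:
    %Correct(t) = lam * (1 - exp(-beta * (t - delta))) + 50,
    held at chance (50%) for durations before the intercept delta."""
    if t <= delta:
        return 50.0
    return lam * (1.0 - math.exp(-beta * (t - delta))) + 50.0

# beta: reported group means (visual motion task); lam and delta are
# assumed placeholder values for illustration only.
for label, beta in [("VGP", 8.1), ("NVGP", 4.8)]:
    curve = [percent_correct(t, lam=45.0, beta=beta, delta=0.05)
             for t in (0.1, 0.2, 0.4)]
    print(label, [round(v, 1) for v in curve])
```

With a common asymptote and intercept, the larger VGP β makes accuracy approach the asymptote at shorter presentation durations, which is the signature of faster sensory integration.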