Information Processing by Neuronal Populations, Chapter 5. Measuring distributed properties of neural representations beyond the decoding of local variables: implications for cognition


Information Processing by Neuronal Populations
Chapter 5. Measuring distributed properties of neural representations beyond the decoding of local variables: implications for cognition
Adam Johnson, Jadin C. Jackson, and A. David Redish
Summary by B.-H. Kim, Biointelligence Lab, School of Computer Sci. & Eng., Seoul National University

Outline
- Introduction: representation / encoding (tuning curves) / decoding (reconstruction)
- Non-local reconstruction (memory and cognition)
- Self-consistency (coherency): comparing actual and expected activity patterns
- Validation by simulations
- Self-consistency in a Bayesian framework
- Multiple models in hippocampus
- Conclusions

Introduction
- Neural representations are distributed: neural activity (s) encodes a variable (x), whether a sensory description, motor planning for behavior, or a cognitive process in between.
- Modern recording technologies allow simultaneous recording of large neural ensembles (> 100 cells simultaneously) from awake, behaving animals, making immediate reconstruction of behavioral variables possible.
- Beyond immediate reconstruction, distributed representations can also carry non-local values for cognitive processes.
- Measuring the self-consistency of neural representations within ensembles: check the extent to which the firing pattern matches expectations; dynamic changes in self-consistency are indicative of cognitive processing.

Ensemble-based reconstruction
[Figure: schematic linking the world, sensory processes, memory processes, decision making, the neural representation, and action/behavior.]
- x: the encoded information, e.g. current sensory input, preceding experience, or planned future behaviors; s: the neural activity (spikes).
- If information is consistently represented across a population of neurons, then it should be possible to infer the variable x by examining the neural activity across the population.
- Encoding model: a hypothesized relationship between x and s, expressed e.g. as tuning curves, mutual information I(x;s), or a linear filter kernel.
- Decoding (reconstruction) depends on the encoding model; probabilistic decoding typically relies on a conditional independence assumption across cells, and non-probabilistic methods also exist.
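To make the decoding step concrete, here is a minimal one-step Bayesian reconstruction sketch in Python/NumPy. The 1-D circular variable, Gaussian tuning curves, and conditionally independent Poisson spiking are illustrative assumptions, not the chapter's specific choices; under independence, the log-likelihood is simply a sum over cells.

```python
import numpy as np

# Illustrative setup: a 1-D circular variable x, Gaussian-shaped tuning
# curves, and Poisson spiking that is conditionally independent across cells.
n_cells, n_bins = 60, 100
x_grid = np.linspace(0, 2 * np.pi, n_bins, endpoint=False)
centers = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)

def tuning(x, c, peak=20.0, width=0.5):
    """Firing rate (Hz) of a cell with preferred value c at stimulus x."""
    d = np.angle(np.exp(1j * (x - c)))              # circular distance
    return peak * np.exp(-d**2 / (2 * width**2))

T = tuning(x_grid[None, :], centers[:, None])       # (n_cells, n_bins)

def decode(spike_counts, dt=0.25):
    """Posterior over x from one window of spike counts, flat prior.
    Uses log p(s|x) = sum_k [n_k log(f_k(x) dt) - f_k(x) dt] + const."""
    log_like = spike_counts @ np.log(T * dt + 1e-12) - (T * dt).sum(axis=0)
    post = np.exp(log_like - log_like.max())
    return post / post.sum()

# Usage: simulate spikes at a true value and reconstruct it.
rng = np.random.default_rng(0)
x_true = np.pi
counts = rng.poisson(tuning(x_true, centers) * 0.25)
posterior = decode(counts)
print("true:", x_true, "decoded:", x_grid[np.argmax(posterior)])
```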

Hippocampus in the brain
- The hippocampus plays major roles in short-term memory and spatial navigation.

Non-local reconstruction (memory and cognition): example of a rat experiment
- Non-local reconstruction is a sign of memory and cognition.
- When the internal representation reflects primary inputs (information on location), e.g. while rats perform active behavior on a track, the reconstruction tracks the animal's location (slow dynamics in which reconstruction tracks behavior).
- Different information processing modes: when the internal representation deviates from the external world, cognition potentially plays a role, connecting the observable world to the rat's invisible goals or motivations.
  - Sleeping or pausing at feeder sites: activity reflects recently experienced memories rather than the current location.
  - Reconstruction during non-attentive waking states: representations of non-local information (e.g. the fast dynamics of replay).

Authors' contribution
- Defining self-consistency (or coherency), in order to differentiate between models
- Implications of multiple generative models for understanding these multiple information processing modes

Self-consistency: motivation
- Possible pitfall of ensemble-based reconstruction: the risky assumption that the brain rigidly adheres to representing the present behavioral status, so that reconstruction errors are viewed as "noise in the system", ignoring the cognitive questions of memory and recall.
- Questions:
  - What is recall or confusion?
  - How does the brain represent competing values in ambiguous situations?
  - How do units within a network function together to form a coherent representation?

Self-consistency: an example (tuning curves)
- In a coherent or self-consistent representation, the firing of all neurons in the network conforms to some pattern expected from observations during normal encoding.
- [Figure panels, plotted against the behavioral variable:] A: unimodal tuning curve; B: coherent network firing pattern; C: bimodal representation, an ambiguous state of the network; D: confused or incoherent state of the network.

Measures for self-consistency
- Building on studies by Redish, Averbeck, and Georgopoulos, Jackson and Redish (2003) define the expected 'activity packet', reflecting actual 'reconstruction errors' in the 'self-consistency' of the representation.
- The measures compare actual and expected activity patterns over the ensemble (k: available cells in the ensemble; T_k: tuning curve of cell k; F_k: firing rate of cell k).
- The most sensitive of the compared measures is recommended for real use.
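The formulas on this slide did not survive extraction, so the following sketch illustrates one plausible comparison of the kind described, not necessarily the exact measure of Jackson and Redish (2003): compute each cell's expected firing from its tuning curve at the decoded value, then score how well the actual firing rates match that expected pattern.

```python
import numpy as np

def self_consistency(F, T, x_hat_idx):
    """Illustrative coherency score in [0, 1].
    F: (n_cells,) actual firing rates in the current window
    T: (n_cells, n_bins) tuning curves (as in the decoding sketch above)
    x_hat_idx: grid index of the decoded value x_hat
    Compares the actual firing vector against the expected 'activity packet'
    T[:, x_hat_idx] via a normalized dot product; 1 means the ensemble fires
    exactly as expected given its own decoded representation."""
    expected = T[:, x_hat_idx]
    denom = np.linalg.norm(F) * np.linalg.norm(expected)
    return float(F @ expected / denom) if denom > 0 else 0.0
```

Sweeping such a score over time separates windows where the ensemble is coherent (even if non-local) from windows where it is merely noisy, which is exactly the distinction the chapter is after.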

Validation by simulations: setting
- An attractor network is used for the simulations: a standard local-excitatory/global-inhibitory network.
- Structure: symmetric local excitatory connections between neurons with similar preferred directions, plus global inhibition, with periodic boundary conditions.
- The network can be thought of as a circular ring of neurons with a stable attractor state.
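A minimal rate-based version of such a ring attractor is sketched below; the connection width, gain, and time constant are illustrative values, not the chapter's simulation parameters.

```python
import numpy as np

# Minimal rate-based ring attractor: symmetric local excitation between
# cells with similar preferred directions, uniform global inhibition,
# periodic boundary conditions.
n = 120
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
d = np.angle(np.exp(1j * (theta[None, :] - theta[:, None])))   # circular distance
W = 2.0 * np.exp(-d**2 / (2 * 0.3**2)) - 0.5                   # excitation - inhibition

def simulate(steps=500, dt=0.1, tau=1.0, gain=10.0, seed=1):
    rng = np.random.default_rng(seed)
    r = 0.1 * rng.random(n)                        # random initial activity
    for _ in range(steps):
        u = gain * (W @ r) / n                     # recurrent input
        r += dt / tau * (-r + np.tanh(np.maximum(u, 0.0)))
    return r

r = simulate()
# After settling, activity forms a single stable bump on the ring; such a
# network can then be probed with the coherency measures above (random
# firing vs. a stable bump, rotation vs. jump, one bump vs. two).
print("bump center (rad):", theta[np.argmax(r)])
```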

Validation by simulations, issue 1: random network firing vs. a stable activity mode

Validation by simulations, issue 2: rotation vs. jump of the activity bump

Validation by simulations, issue 3: ambiguous vs. single-valued representations

Self-consistency in a Bayesian framework
- In a Bayesian framework, the self-consistency measure becomes the probability of a given neural activity set under a generative model:
  p(s | M) = ∫ p(s | x, M) p(x | M) dx
  where s is the observed neural activity, x is the decoded neural representation, and M is the generative model used.
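Under the same illustrative Poisson/tuning-curve assumptions as in the decoding sketch, this quantity can be approximated on a grid, with the generative model M supplying the prior over x:

```python
import numpy as np

def log_coherency(spike_counts, T, prior, dt=0.25):
    """Grid approximation of log p(s | M) = log sum_x p(s | x, M) p(x | M).
    spike_counts: (n_cells,) observed counts in one window
    T: (n_cells, n_bins) tuning curves (the encoding model)
    prior: (n_bins,) p(x | M), supplied by the generative model M
    The -log(n_k!) terms, constant in x, are omitted; they cancel when
    comparing different models M on the same window of data."""
    log_like = spike_counts @ np.log(T * dt + 1e-12) - (T * dt).sum(axis=0)
    m = log_like.max()
    return m + np.log(np.exp(log_like - m) @ prior)    # log-sum-exp
```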

Multiple models in hippocampus
- Spatial representations within the hippocampus: generally, the neural activity of place cells and the decoded spatial representation predict the animal's position very well. However, place cell activity can remain well organized even when the decoded representation does not match the animal's position.
- Properties of the hippocampus of interest: multiple brain states, and multiple spatiotemporal dynamics even during awake behaviors.
- These dynamics can be captured with a predictive filter, alternating a prediction step and a correction step (see the sketch below).
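A generic grid-based predictive (Bayes) filter of the kind the slide names; the transition kernel K is where the different generative models differ:

```python
import numpy as np

def predict(posterior, K):
    """Prediction step: spread the previous posterior with kernel K, where
    K[i, j] = p(x_t = j | x_{t-1} = i). Each generative model M has its own
    K (e.g. slow spread for tracking behavior, fast spread for replay)."""
    return posterior @ K

def correct(prior, spike_counts, T, dt=0.25):
    """Correction step: reweight the predicted prior by the likelihood of
    the newly observed spike counts (Poisson model, as in the sketches above)."""
    log_like = spike_counts @ np.log(T * dt + 1e-12) - (T * dt).sum(axis=0)
    post = prior * np.exp(log_like - log_like.max())
    return post / post.sum()
```

Alternating the two steps over successive windows yields p(x_t | s_1..t, M) for each candidate model M.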

Multiple generative models in the hippocampus
- Four generative models are used, differing in how fast the probability distribution is spread from one time step to the next.
- The models are compared by the percentage of samples in which each model was found to be the most consistent.
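Putting the pieces together, the comparison can be sketched end to end: run one predictive filter per model and tally, window by window, which model assigns the highest coherency p(s | M). Everything below (cell counts, tuning shapes, the four spread rates, the simulated trajectory) is placeholder material for illustration, not the chapter's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_bins, dt = 60, 100, 0.25
x = np.linspace(0, 2 * np.pi, n_bins, endpoint=False)
c = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)
dist = np.angle(np.exp(1j * (x[None, :] - c[:, None])))
T = 20.0 * np.exp(-dist**2 / (2 * 0.5**2))           # tuning curves, as above

def kernel(sigma_bins):
    """Circular Gaussian transition kernel; sigma sets the spread rate."""
    i = np.arange(n_bins)
    dd = np.minimum(np.abs(i[None, :] - i[:, None]),
                    n_bins - np.abs(i[None, :] - i[:, None]))
    K = np.exp(-dd**2 / (2 * sigma_bins**2))
    return K / K.sum(axis=1, keepdims=True)

models = [kernel(s) for s in (0.5, 2.0, 5.0, 15.0)]  # four spread rates

wins = np.zeros(len(models), dtype=int)
posts = [np.full(n_bins, 1.0 / n_bins) for _ in models]
pos = 0
for _ in range(200):
    pos = (pos + 1) % n_bins                         # animal moves 1 bin/step
    counts = rng.poisson(T[:, pos] * dt)
    log_like = counts @ np.log(T * dt + 1e-12) - (T * dt).sum(axis=0)
    m = log_like.max()
    scores = []
    for k, K in enumerate(models):
        prior = posts[k] @ K                         # prediction step
        scores.append(m + np.log(np.exp(log_like - m) @ prior))  # log p(s|M)
        post = prior * np.exp(log_like - m)          # correction step
        posts[k] = post / post.sum()
    wins[np.argmax(scores)] += 1
print("fraction of windows each model is most consistent:", wins / wins.sum())
```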

Summary: the title revisited
- "Measuring distributed properties of neural representations": self-consistency (coherency) measures over the neural ensemble.
- "Beyond the decoding of local variables": beyond immediate reconstruction.
- "Implications for cognition": dynamic changes in self-consistency are indicative of cognitive processing.