Symbolic Reasoning in Spiking Neurons: A Model of the Cortex/Basal Ganglia/Thalamus Loop
Terrence C. Stewart, Xuan Choo, Chris Eliasmith
Centre for Theoretical Neuroscience, University of Waterloo

Goal
● Create a neural cognitive architecture
  – Biologically realistic: spiking neurons, anatomical constraints, neural parameters, etc.
  – Supports high-level cognition: symbol manipulation, cognitive control, etc.
● Advantages
  – Connects cognitive theory to neural data
  – The neural implementation imposes constraints on the theory

Required Components
● Representation
  – Distributed representation of high-dimensional vectors
● Transformation
  – Manipulate and combine representations
● Memory
  – Store representations over time
● Control
  – Apply the right operations at the right time

Representation
● Assumption: cognition uses high-dimensional vectors for representation
  – e.g. [2, 4, -3, 7, 0, 2, ...]
  – The top level of many hierarchical object-recognition models
  – Compressed information
  – Different vectors for different concepts: DOG, CAT, SQUARE, TRIANGLE, RED, BLUE, SENTENCE, etc.

Representation
● How can a group of neurons represent vectors?
● Visual and motor cortex (e.g. Georgopoulos et al., 1986)
  – Representing spatial location (2D)
  – Distributed representation
  – Each neuron has a “preferred” direction: the one direction it fires most strongly for
  – Preferred directions are uniformly distributed [run]

Representation
● Neural representation
  – Leaky integrate-and-fire (LIF) neurons
  – Input current: J_i(x) = α_i (e_i · x) + J_i^bias, where e_i is neuron i's preferred direction vector
● How good is the representation?
  – Vector → spikes
  – Can the input be recovered from the output?
  – Post-synaptic current → vector (a minimal encoding sketch follows)
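Below is a minimal sketch of this encoding step in NumPy, assuming rate-mode LIF neurons and randomly chosen preferred directions; all variable names and parameter values are illustrative stand-ins, not taken from the model. Later sketches in this transcript reuse these variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dims = 50, 2

# Preferred direction vectors e_i: unit vectors, uniformly distributed
e = rng.normal(size=(n_neurons, dims))
e /= np.linalg.norm(e, axis=1, keepdims=True)

alpha = rng.uniform(0.5, 2.0, n_neurons)    # per-neuron gains (illustrative)
J_bias = rng.uniform(1.0, 2.0, n_neurons)   # per-neuron bias currents

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    """Steady-state LIF firing rate for input current J (threshold at J = 1)."""
    rates = np.zeros_like(J)
    above = J > 1
    rates[above] = 1.0 / (tau_ref + tau_rc * np.log(J[above] / (J[above] - 1)))
    return rates

x = np.array([0.7, -0.3])         # the represented 2-D vector
J = alpha * (e @ x) + J_bias      # J_i(x) = alpha_i (e_i . x) + J_i^bias
rates = lif_rate(J)               # each neuron fires most when x points along e_i
```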

Representation
● Linear decoding
  – Weighted sum of neural outputs: x̂ = Σ_i a_i(x) d_i
  – Need decoding weights d_i that give the optimal (least-squares) estimate of the input
  – Extends to higher dimensions
● Basis of the Neural Engineering Framework (Eliasmith & Anderson, 2003)
  – Error decreases as the number of neurons increases
  – Distributed representation
  – Robust to noise and neuron loss [run] [run]
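Continuing the sketch above, the decoders can be found by regularized least squares over sampled inputs; the regularization constant below is an arbitrary illustrative choice:

```python
# Solve for decoders d such that x_hat = sum_i a_i(x) d_i is a least-squares
# estimate of x over a set of sampled evaluation points.
eval_points = rng.uniform(-1, 1, size=(500, dims))
A = lif_rate(alpha * (eval_points @ e.T) + J_bias)    # activity matrix a_i(x)
lam = 0.1 * A.max() ** 2                              # illustrative regularizer
d = np.linalg.solve(A.T @ A + lam * np.eye(n_neurons), A.T @ eval_points)

x_hat = lif_rate(alpha * (e @ x) + J_bias) @ d        # decoded estimate of x
```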

Transformation
● Using the representations
  – Transfer information from one neural group to another
● Simplest case
  – Form synaptic connections between group A and group B such that B represents whatever A represents
  – A communication channel
  – Neural Engineering Framework: optimal synaptic connection weights ω_ij = α_j (e_j · d_i), combining A's decoders with B's encoders [run] [run]
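A sketch of the resulting communication channel, again reusing the variables above; group B's encoders, gains, and biases are fresh illustrative values:

```python
# Group B: its own encoders, gains, and biases
e_B = rng.normal(size=(n_neurons, dims))
e_B /= np.linalg.norm(e_B, axis=1, keepdims=True)
alpha_B = rng.uniform(0.5, 2.0, n_neurons)
J_bias_B = rng.uniform(1.0, 2.0, n_neurons)

# omega[i, j] = alpha_j (e_j . d_i): A's decoders meet B's encoders
omega = d @ (alpha_B[:, None] * e_B).T     # shape (n_A, n_B)

a_A = lif_rate(alpha * (e @ x) + J_bias)   # A's activity while representing x
J_B = a_A @ omega + J_bias_B               # input current to B
# decoding B's rates with B's own decoders would now recover ~x
```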

Transformation
● Linear transformation
  – Group A represents x; we want group B to represent Mx
  – e.g. [x y z] → [2x, 3x-2y+z]
● Non-linear transforms f(x)
  – Some functions are represented more accurately than others
  – What functions do we need?
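In weight terms, the only change from the communication channel is that A's decoders are passed through M first. A sketch with a stand-in decoder matrix for a hypothetical 3-D group:

```python
M = np.array([[2,  0, 0],
              [3, -2, 1]])               # maps [x, y, z] -> [2x, 3x - 2y + z]
d_A3 = rng.normal(size=(n_neurons, 3))   # stand-in decoders for a 3-D group A
d_M = d_A3 @ M.T                         # transformed decoders, one 2-D row each
# weights into a 2-D group B: omega = d_M @ (alpha_B[:, None] * e_B).T, as before
```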

Transformation
● Binding operation
  – We have vectors for RED, BLUE, TRIANGLE, CIRCLE
  – How do we represent “a red triangle and a blue circle”?
  – Solution via Vector Symbolic Architectures (Gayler, 2003)
● Create a new vector for RED ⊛ TRIANGLE
  – Should be highly dissimilar from the other vectors
  – Many suitable binding functions; here, circular convolution (Plate, 2003)
  – Inverse operation: (RED ⊛ TRIANGLE) ⊛ RED′ ≈ TRIANGLE, where RED′ is a rearrangement of RED (a linear operation) [run] [run]
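The binding operation itself is ordinary circular convolution, which the model computes in neurons; here is a plain-vector sketch, with random unit vectors standing in for the vocabulary and an illustrative dimensionality:

```python
D = 64                                   # vector dimensionality (illustrative)

def vec():
    v = rng.normal(size=D)
    return v / np.linalg.norm(v)

RED, BLUE, TRIANGLE, CIRCLE = vec(), vec(), vec(), vec()

def bind(a, b):
    """Circular convolution, computed via FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def inv(a):
    """Approximate inverse: the involution [a_0, a_{D-1}, ..., a_1]."""
    return np.concatenate(([a[0]], a[1:][::-1]))

scene = bind(RED, TRIANGLE) + bind(BLUE, CIRCLE)   # "a red triangle and a blue circle"
guess = bind(scene, inv(RED))                      # (RED * TRIANGLE) * RED' ~ TRIANGLE
assert guess @ TRIANGLE > guess @ CIRCLE           # closest match is TRIANGLE
```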

Memory
● Need to store representations over time
  – Short-term working memory
  – Neurons need to maintain their firing over time
  – Need recurrent connections: a communication channel from the group back to itself
● Given no input, it maintains the current value (with some random drift)
● Given input, it adds the input to the current value (an integrator) [run] [run]
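A discretized sketch of why the recurrence acts as an integrator: with a first-order synapse of time constant tau, feeding back f(x) = x and scaling the input by tau implements dx/dt = u (time constants and step size are illustrative; the drift mentioned above comes from neural noise, which this idealized version omits):

```python
tau, dt = 0.1, 0.001                  # synaptic time constant, time step

def memory_step(x_mem, u):
    """One step of the recurrent memory: x <- x + (dt/tau)(feedback + input - x)."""
    feedback, inp = x_mem, tau * u    # recurrent f(x) = x; input weight = tau
    return x_mem + (dt / tau) * (feedback + inp - x_mem)   # reduces to x + dt*u

x_mem = np.zeros(D)
for _ in range(100):                  # drive with 'scene' for 100 steps...
    x_mem = memory_step(x_mem, scene)
for _ in range(100):                  # ...then hold with no input
    x_mem = memory_step(x_mem, np.zeros(D))
```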

Memory
● Decay time
  – Depends on the number of neurons and the time constant of the neurotransmitter
  – Matches spike data from PFC during a memory task (Romo et al., 1999; Singh & Eliasmith, 2006)
● Storage capacity
  – Scales exponentially with the number of dimensions
  – ~50,000 neurons can represent 8 pairs of terms from a vocabulary of 60,000 items (see Stewart, Tang, & Eliasmith, 2010)

Control
● Need to do different things at different times
  – Set the inputs to the memory
  – Set what to extract from memory
● Action selection
  – Choose one action out of many: the basal ganglia
  – Select the action with the greatest utility
  – Start from a non-spiking model (Gurney, Prescott, & Redgrave, 2001)
  – Convert to a spiking model via the Neural Engineering Framework (Stewart, Choo, & Eliasmith, 2010b) [run] [run]
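Stripped of its spiking implementation, the selection logic reduces to a winner-take-all over utilities; this toy stand-in for the Gurney/Prescott/Redgrave circuit is reused in the sequence sketch later:

```python
def select_action(cortex_state, triggers):
    """triggers: one row per action. Returns the index of the selected action."""
    utilities = triggers @ cortex_state   # similarity to each action's trigger vector
    best = int(np.argmax(utilities))
    # In the model, BG output inhibits every thalamic channel except this one.
    return best
```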

Control
● Cortex
  – Stores and transforms vectors
● Basal ganglia
  – Compares the cortex state to the optimum states for all actions
  – Its output inhibits all actions in the thalamus except the best (closest-matching) action
● Thalamus
  – Executes the chosen action
  – Sends vectors to cortex; controls cortex transformations
[Diagram: the Cortex → Basal Ganglia → Thalamus → Cortex loop]

Question Answering
● Two actions
  – If STATEMENT is in the visual area, send it to working memory
  – If QUESTION is in the visual area, send it to the working-memory extraction area
● Stimulus
  – Present statements to the visual area: STATEMENT + RED ⊛ TRIANGLE, then STATEMENT + BLUE ⊛ CIRCLE
  – Present questions to the visual area: QUESTION + RED, then QUESTION + CIRCLE [run] [run]
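In plain-vector terms (reusing the binding sketch above, with the control and routing steps collapsed away), question answering is unbinding followed by a clean-up comparison against the vocabulary:

```python
memory = bind(RED, TRIANGLE) + bind(BLUE, CIRCLE)      # stored statements

def answer(query):
    """Unbind the query from memory; report the best vocabulary match."""
    guess = bind(memory, inv(query))
    vocab = {"RED": RED, "BLUE": BLUE, "TRIANGLE": TRIANGLE, "CIRCLE": CIRCLE}
    return max(vocab, key=lambda k: guess @ vocab[k])

print(answer(RED))      # -> "TRIANGLE"
print(answer(CIRCLE))   # -> "BLUE"
```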

Results
● Successful implementation of controlled symbolic reasoning in spiking neurons
  – Symbols are vectors; vectors can be bound and unbound
  – A neural implementation of VSAs
● Scales to a human-sized vocabulary
● Timing differs for the two types of actions
  – Simple actions: 34-44 ms; communication channels: 59-73 ms (see Stewart, Choo, & Eliasmith, 2010b)
  – No free parameters (all fixed from neuroscience data)

Conclusions
● Can perform symbol manipulation with spiking neurons
  – Distinguishes “red triangle and blue circle” from “red circle and blue triangle”
  – Implements IF-THEN action rules (productions): IF the cortex state is similar to X, THEN send particular values to particular areas of cortex and/or activate particular communication channels
  – Conforms to neural anatomy and neuron properties (firing rates, neurotransmitter time constants)
  – Produces neural activity predictions
  – Suggests a change to the standard 50 ms cognitive cycle time

Sequential Action
● Simple example
  – If working memory contains A, set working memory to B
  – If working memory contains B, set working memory to C
  – If working memory contains C, set working memory to D
  – ...
● Implementation (sketched in the code below)
  – Connect cortex to basal ganglia so that the action utilities are u = M_in x, where M_in = [A B C ...] has one row per rule
  – Connect thalamus to cortex using M_out = [B C D ...], so the selected action writes the next vector [run] [run]
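The same loop in plain matrix form, reusing vec() and select_action() from the earlier sketches (the underscored names just avoid clashing with the dimensionality D; the spiking model implements the identical mapping):

```python
A_, B_, C_, D_ = vec(), vec(), vec(), vec()
M_in = np.stack([A_, B_, C_])     # cortex -> BG: utilities = M_in @ x
M_out = np.stack([B_, C_, D_])    # thalamus -> cortex: the next state to write

x_state = A_
for _ in range(3):
    act = select_action(x_state, M_in)   # BG picks the best-matching rule
    x_state = M_out[act]                 # thalamus writes the next vector
# x_state is now ~D_
```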

Information Routing
● Add a visual area to cortex
● Add one action to basal ganglia
  – If LETTER is in the visual area, transfer the contents of the visual area to cortex
  – Add a communication channel from the visual area to working memory
  – Connect the action in the thalamus to the inhibition on this communication channel
● Start with nothing (no action chosen)
  – Present LETTER + C to the visual area
  – C is transferred to working memory, and the sequence continues [run] [run]
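A toy sketch of the gated routing, with the same caveats as before: the channel transfers the visual contents (with the LETTER tag stripped off) only when the LETTER action is selected. The threshold and the name C_item are illustrative stand-ins:

```python
LETTER, C_item = vec(), vec()            # C_item stands in for the letter C

def route(visual, mem, threshold=0.5):
    """If LETTER is present in the visual area, open the visual->memory channel."""
    gate = float(visual @ LETTER > threshold)   # 1.0 = channel disinhibited
    content = visual - LETTER                   # strip the LETTER tag (approximate)
    return mem + gate * content

visual_input = LETTER + C_item
mem = route(visual_input, np.zeros(D))          # memory now contains ~C_item
```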