Building a Scalable Neural Processing Subsystem
Joy Bose
Supervisors: Steve Furber (Amulet Group) and Jonathan Shapiro (AI Group)
Amulet Group Meeting, 30 January 2003
Outline of the talk
- Introduction
- Memories: human and computer
- Kanerva's theory and the n-of-m modification
- Storing sequences: feedback and desensitisation
- Software memory model implementation
- Conclusion and future work
Objective
- A scalable neural chip
- A robust dynamic neural memory
Motivation
- Take inspiration from the brain
- Commercial applications
- Scalable chips
The human memory
- Associative: learns and recalls
- Robust
- The pattern is the address
- Capable of storing sequences
- Information is stored in the connections between neurons
- Forgets gradually
The computer memory
- Is like a lookup table: data is stored at a particular address
- Hash functions can map a large address space onto a smaller memory
- Overwrites, but never forgets
- The address space grows exponentially with the number of dimensions (address bits)
[Diagram: a conventional computer memory. An n-bit address drives an address decoder that selects 1 out of 2^n word lines in the data memory; n-bit input data is stored in writing mode and n-bit output data is retrieved in recall mode.]
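As a minimal illustration of the diagram above (a Python sketch, with a toy 4-bit word width of my own choosing), the address decoder turns an n-bit address into the selection of exactly one of 2^n word lines:

```python
n = 4
memory = [0] * (2 ** n)  # one word per word line, 2**n lines in total

def decode(address_bits):
    """One-hot address decoding: exactly one line out of 2**n is selected."""
    line = 0
    for bit in address_bits:
        line = (line << 1) | bit
    return line

def write(address_bits, word):   # writing mode
    memory[decode(address_bits)] = word

def read(address_bits):          # recall mode
    return memory[decode(address_bits)]

write([1, 0, 1, 1], 0b1001)
print(read([1, 0, 1, 1]))  # 9: an exact-match lookup, unlike the human memory
```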
Kanerva's theory
- Seeks to combine the good features of both
- Is mainly for the sparse case, when the number of meaningful addresses is much smaller than 2^n, where n is the number of dimensions
- Is a binary computer memory with characteristics of human memory
- Can also be viewed as a two-layer neural network
Working of Kanerva's memory model
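The following is a minimal Python sketch of how Kanerva's sparse distributed memory works. The parameters (1,000 hard locations, 256-bit words, activation radius 111) are illustrative assumptions of mine, not values from the talk. A write adds the data into up/down counters at every hard location near the write address; a read sums and thresholds the counters at the locations near the read address:

```python
import numpy as np

class SDM:
    """Sketch of Kanerva's sparse distributed memory."""
    def __init__(self, n_locations=1000, n_bits=256, radius=111):
        self.rng = np.random.default_rng(42)
        # Fixed random hard-location addresses (the address decoder).
        self.addresses = self.rng.integers(0, 2, (n_locations, n_bits))
        # Up/down counters holding superimposed data (the data memory).
        self.counters = np.zeros((n_locations, n_bits), dtype=int)
        self.radius = radius

    def _active(self, address):
        # A location fires if its address lies within Hamming
        # distance `radius` of the presented address.
        return (self.addresses != address).sum(axis=1) <= self.radius

    def write(self, address, data):
        # Increment counters for 1-bits, decrement for 0-bits,
        # at every active location (writing mode).
        self.counters[self._active(address)] += 2 * np.asarray(data) - 1

    def read(self, address):
        # Sum counters over active locations, threshold at zero (recall mode).
        sums = self.counters[self._active(address)].sum(axis=0)
        return (sums > 0).astype(int)

mem = SDM()
addr = mem.rng.integers(0, 2, 256)
data = mem.rng.integers(0, 2, 256)
mem.write(addr, data)
print((mem.read(addr) == data).all())  # True: the pattern is recovered
```

Because many locations respond to each address, the data is distributed across them, which is what gives the memory its robustness to noisy addresses.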
The n-of-m modification
- The aims are increased capacity, error checking, and feasibility of hardware implementation
- Instead of binary addresses it uses n-of-m codes (exactly n active lines out of m)
- No negative inputs or weights
- Works in two modes: learning and recall (see the sketch after the diagram below)
[Diagram: the N-of-M Kanerva memory model. An i-of-A input address feeds the address decoder (a-of-A code), which activates w-of-W word lines in the data memory holding d-of-D data; the memory operates in learning mode and recall mode.]
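Here is a minimal sketch of the n-of-m memory above, under assumed sizes (A = 256, i = 11; W = 4096, w = 23; D = 256, d = 11; decoder fan-in a = 11) that are my own illustrative choices. Each word line listens to a fixed random a-of-A subset of the address lines; a winner-take-all step keeps only the n strongest lines, so every signal stays a valid n-of-m code and no negative weights are needed:

```python
import numpy as np

rng = np.random.default_rng(0)
A, i = 256, 11      # i-of-A input address code
W, w = 4096, 23     # w-of-W word-line code
D, d = 256, 11      # d-of-D data code
a = 11              # a-of-A decoder fan-in

# Address decoder: each word line has a fixed random a-of-A input pattern.
decoder = np.zeros((W, A), dtype=int)
for row in decoder:
    row[rng.choice(A, size=a, replace=False)] = 1

data_memory = np.zeros((W, D), dtype=int)  # binary weights, no negatives

def n_of_m(x, n):
    """Winner-take-all: keep the n strongest lines as a binary n-of-m code."""
    out = np.zeros_like(x)
    out[np.argsort(x)[-n:]] = 1
    return out

def decode(address):
    # Word lines compete; the w with the greatest overlap fire (w-of-W).
    return n_of_m(decoder @ address, w)

def learn(address, data):
    # Learning mode: set weights between active word and data lines.
    word = decode(address)
    data_memory[np.outer(word, data) > 0] = 1

def recall(address):
    # Recall mode: sum active word lines' weights, take the top d (d-of-D).
    return n_of_m(decode(address) @ data_memory, d)

address = np.zeros(A, dtype=int); address[rng.choice(A, i, replace=False)] = 1
data    = np.zeros(D, dtype=int); data[rng.choice(D, d, replace=False)] = 1
learn(address, data)
print((recall(address) == data).all())  # True: the stored d-of-D pattern returns
```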
Remembering sequences
- Order matters: earlier symbols are more important (sketched below)
- Added: feedback
- Issue: symbol interference
- Two time constants
- Forward (shunt) inhibition gives order sensitivity
- Backward inhibition gives self-resetting
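A minimal sketch of how a desensitisation (shunt) factor makes a neuron order-sensitive: each successive input spike is attenuated, so earlier spikes contribute more. The factor of 0.9 and the weights are assumptions for illustration only.

```python
def rank_order_activation(spike_order, weights, desensitisation=0.9):
    """Accumulate weighted inputs in firing order, attenuating each
    successive spike by the desensitisation factor (shunt inhibition)."""
    activation, sensitivity = 0.0, 1.0
    for idx in spike_order:
        activation += sensitivity * weights[idx]
        sensitivity *= desensitisation  # later spikes count for less
    return activation

w = [1.0, 0.5, 0.25]
print(rank_order_activation([0, 1, 2], w))  # 1.6525: preferred order
print(rank_order_activation([2, 1, 0], w))  # 1.51: same spikes, other order
```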
A finite state machine
- A finite number of states, with transitions between them (e.g. the cycle a → b → c → d → a)
- Stores sequences (a minimal sketch follows)
- Interference between different sequences
- To experiment with ‘online’ learning and ‘offline’ learning
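A minimal sketch of a finite state machine storing the example cycle as transitions; the dictionary representation is mine, for illustration:

```python
transitions = {}

def learn_sequence(seq):
    """Store each consecutive pair (state, next state) as a transition."""
    for state, nxt in zip(seq, seq[1:]):
        transitions[state] = nxt

def replay(start, steps):
    """Replay a stored sequence by following transitions from `start`."""
    state, out = start, [start]
    for _ in range(steps):
        state = transitions[state]
        out.append(state)
    return out

learn_sequence(['a', 'b', 'c', 'd', 'a'])
print(replay('a', 4))  # ['a', 'b', 'c', 'd', 'a']

# Interference: if a second sequence also passes through 'b', learning it
# overwrites the 'b' -> 'c' transition, the symbol-interference issue above.
```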
[Diagram: the network model for the FSM, built from the address decoder (ADDEC) and the data memory, with the address and data paths connected.]
The Neural Network (for FSM)
Some implementation issues
- Pendulum model: to impose time-ordering
- Event queue (see the sketch below)
- Desensitisation factor
- Spiking neuron model
Pendulum Model and event queue
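A minimal sketch of the event queue, assuming a heap-ordered queue of (time, event) pairs; the class and method names are hypothetical. Popping events strictly in time order is one way to impose the kind of time-ordering the pendulum model is there to enforce:

```python
import heapq

class EventQueue:
    """Delivers scheduled events in time order."""
    def __init__(self):
        self._heap = []

    def schedule(self, time, event):
        heapq.heappush(self._heap, (time, event))

    def pop_next(self):
        # The earliest-time event always comes off first.
        return heapq.heappop(self._heap)

q = EventQueue()
q.schedule(2.0, 'neuron 7 fires')
q.schedule(0.5, 'input spike arrives')
print(q.pop_next())  # (0.5, 'input spike arrives')
```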
The Neuron: Brain Cell
Neuron: conventional vs. spiking model
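To make the contrast concrete, here is a minimal sketch of the two models; the constants (time constant, threshold, weights) are illustrative assumptions, not values from the talk. A conventional unit maps a weighted input sum through a nonlinearity with no notion of time, while a spiking (leaky integrate-and-fire) unit integrates input current over time and fires on crossing a threshold:

```python
import math

def conventional_neuron(inputs, weights):
    """Rate model: sigmoid of the weighted input sum; time plays no role."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def lif_neuron(input_current, dt=1.0, tau=20.0, threshold=1.0):
    """Leaky integrate-and-fire: the membrane potential leaks with time
    constant tau, and the neuron spikes, then resets, at threshold."""
    v, spike_times = 0.0, []
    for step, i in enumerate(input_current):
        v += dt * (-v / tau + i)       # leak plus injected current
        if v >= threshold:
            spike_times.append(step * dt)
            v = 0.0                    # reset after the spike
    return spike_times

print(conventional_neuron([1, 0, 1], [0.5, 0.2, 0.4]))  # one static value
print(lif_neuron([0.1] * 50))  # spike times under constant drive: [13.0, 27.0, 41.0]
```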
Conclusion and future work
- Hypothesis: it is possible to build a dynamic neural memory as a finite state machine capable of storing sequences
- Objective: to build a robust neural memory and a modular neural chip
- Future work: to finalise the various parameters of the neural model and to experiment with various models
Further information
- Fire Project home page:
- E-mail:
- Homepage:
Thanks!