1
Building a scalable neural processing subsystem
Joy Bose
Supervisors: Steve Furber (Amulet Group), Jonathan Shapiro (AI Group)
Amulet Group Meeting, 30 January 2003
2
Outline of the talk
Introduction
Memories: human and computer
Kanerva's theory and the n-of-m modification
Storing sequences: feedback and desensitisation
Software memory model implementation
Conclusion and future work
3
Objective A scalable neural chip A robust dynamic neural memory
4
Motivation Take inspiration from the brain Commercial Applications Scalable chips
5
The human memory
Associative: learns and recalls
Robust
The pattern itself is the address
Capable of storing sequences
Information is stored in the connections between neurons
Forgets gradually
6
The computer memory
Is like a lookup table: data is stored at a particular address
Hash function
Can be overwritten, but never forgets
The address space is exponential in the number of address bits (dimensions)
7
Conventional computer memory (diagram): an n-bit address enters the address decoder, which activates 1 out of 2^n word lines in the data memory; n-bit input data is stored in writing mode and n-bit output data is retrieved in recall mode.
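Since only the labels of that diagram survive in this transcript, a toy software model may help make the slide concrete; the class and names below are my own illustration, not part of the original talk.

```python
# Toy model of a conventional memory: the address decoder activates exactly
# 1 out of 2**n word lines, selecting a single row of the data array.

class ConventionalMemory:
    def __init__(self, n_bits, word_width):
        self.rows = [[0] * word_width for _ in range(2 ** n_bits)]

    def write(self, address, data):       # writing mode
        self.rows[address] = list(data)   # 1-of-2**n selection

    def read(self, address):              # recall mode
        return self.rows[address]

mem = ConventionalMemory(n_bits=4, word_width=8)
mem.write(0b1010, [1, 0, 1, 1, 0, 0, 1, 0])
print(mem.read(0b1010))   # data comes back only from that exact address
```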
8
Kanerva's theory
Seeks to combine the good features of both
Is mainly for the sparse case, where the number of meaningful addresses is much smaller than 2^n (n = number of dimensions)
Is a binary computer memory with characteristics of human memory
Can also be viewed as a two-layer neural network
9
Working: Kanerva’s Memory model
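The original diagram is not reproduced in this transcript. As a stand-in, here is a minimal sketch of a standard Kanerva sparse distributed memory (a generic textbook version, not necessarily the exact variant presented in the talk): random "hard locations", activation of all locations within a Hamming radius of the probe address, counter updates on writing, and summing plus thresholding on recall.

```python
import numpy as np

# Minimal sparse distributed memory sketch (illustrative only).
rng = np.random.default_rng(0)
n, M, r = 256, 1000, 115          # dimensions, hard locations, Hamming radius

hard_addrs = rng.integers(0, 2, size=(M, n))   # fixed random location addresses
counters = np.zeros((M, n), dtype=int)         # data counters per location

def active(addr):
    # Every hard location within Hamming distance r of the probe is activated.
    return np.count_nonzero(hard_addrs != addr, axis=1) <= r

def write(addr, data):
    counters[active(addr)] += 2 * data - 1     # +1 for a 1 bit, -1 for a 0 bit

def read(addr):
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=n)
write(pattern, pattern)                        # store autoassociatively
noisy = pattern.copy()
noisy[:20] ^= 1                                # corrupt 20 address bits
print(np.count_nonzero(read(noisy) != pattern))  # bits still wrong after recall
```

The point of the sketch is the "human-like" behaviour listed earlier: the pattern itself acts as the address, and recall can still succeed from a corrupted probe.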
10
The n-of-m modification
The aims are increased capacity, error checking, and feasibility of hardware implementation
Uses n-of-m codes instead of binary addresses
No negative inputs or weights
Works in two modes: learning and recall
11
N-of-M Kanerva memory model (diagram): an i-of-A address enters the address decoder (a-of-A), which activates w-of-W word lines driving the data memory, where d-of-D data is stored or recalled; the memory operates in a learning mode and a recall mode.
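A rough sketch of one plausible reading of this n-of-m scheme follows (my interpretation for illustration, not the authors' exact model): every code has exactly n of its m lines active, the address decoder fires the w-of-W word lines with the largest overlap with the i-of-A address, and the data memory uses binary Hebbian weights with no negative values, recalling a d-of-D code by taking the d most strongly driven data lines.

```python
import numpy as np

# Sketch of an n-of-m Kanerva-style memory (illustrative interpretation).
rng = np.random.default_rng(1)
A, i = 64, 8        # address lines, active address lines (i-of-A)
W, w = 256, 16      # word lines, active word lines (w-of-W)
D, d = 64, 8        # data lines, active data lines (d-of-D)

decoder_w = rng.integers(0, 2, size=(W, A))   # fixed random decoder weights
data_w = np.zeros((D, W), dtype=int)          # learned binary data weights

def top_k(activation, k):
    code = np.zeros(len(activation), dtype=int)
    code[np.argsort(activation)[-k:]] = 1     # only the k most active lines fire
    return code

def word_lines(address):
    return top_k(decoder_w @ address, w)      # w-of-W word lines

def learn(address, data):                     # learning mode
    data_w[np.outer(data, word_lines(address)) > 0] = 1   # Hebbian, no negatives

def recall(address):                          # recall mode
    return top_k(data_w @ word_lines(address), d)

addr = top_k(rng.random(A), i)                # random i-of-A address
data = top_k(rng.random(D), d)                # random d-of-D data
learn(addr, data)
print(np.array_equal(recall(addr), data))     # True if recall is exact
```

Because a valid code must have exactly n of its m lines active, an output with the wrong number of active lines is immediately detectable, which is the error-checking property mentioned above.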
12
Remembering sequences
Order matters: earlier symbols are more important
Added: feedback
Issue: symbol interference
Two time constants
Forward (shunt) inhibition: order sensitivity
Backward inhibition: self-resetting
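One simple way to see how "earlier is more important" can be enforced is with a desensitisation factor (listed under implementation issues later): each successive input spike is weighted by a shrinking sensitivity, so the same symbols arriving in a different order produce a different activation. The sketch below is my own illustration of that idea, not the talk's actual inhibition circuitry.

```python
# Order-sensitive activation via a desensitisation factor (illustrative).
def order_sensitive_activation(spike_order, weights, desensitisation=0.5):
    sensitivity, total = 1.0, 0.0
    for line in spike_order:              # input lines in firing order
        total += sensitivity * weights[line]
        sensitivity *= desensitisation    # later spikes count for less
    return total

w = {"a": 1.0, "b": 1.0, "c": 0.2}
print(order_sensitive_activation(["a", "b", "c"], w))   # 1.55
print(order_sensitive_activation(["c", "b", "a"], w))   # 0.95
```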
13
A finite state machine
A finite number of states, with transitions between them (e.g. the state sequence a, b, c, d, a)
Stores sequences
Interference between different sequences
'Online' learning and 'offline' learning to be explored experimentally
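For contrast with the neural version, a conventional finite state machine that stores the example sequence can be written as a plain transition table; the tiny sketch below (names are mine) also shows where interference between sequences would appear.

```python
# Non-neural FSM for comparison: the sequence a -> b -> c -> d -> a is stored
# as a transition table, so presenting a state recalls its successor.
transitions = {}

def learn_sequence(symbols):
    for current, nxt in zip(symbols, symbols[1:]):
        transitions[current] = nxt        # a second sequence reusing a state
                                          # would overwrite (interfere) here

def replay(start, steps):
    state, out = start, [start]
    for _ in range(steps):
        state = transitions[state]
        out.append(state)
    return out

learn_sequence(["a", "b", "c", "d", "a"])
print(replay("a", 4))                     # ['a', 'b', 'c', 'd', 'a']
```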
14
The network model (for the FSM) (diagram): the address feeds the address decoder (ADDEC), which drives the data memory to produce the data.
15
The Neural Network (for FSM)
16
Some implementation issues
Pendulum model: to impose time-ordering
Event queue
Desensitisation factor
Spiking neuron model
17
Pendulum Model and event queue
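The pendulum and event-queue diagram is not reproduced here. The sketch below only illustrates the general idea of event-driven spiking simulation with a priority queue of timestamped spike events; the neuron model, constants, and connectivity are my own simplification, not the talk's implementation.

```python
import heapq

# Event-driven simulation: spikes are timestamped events in a priority queue,
# processed strictly in time order, so work is done only when something fires.
THRESHOLD = 2.0
DELAY = 1.0                                        # synaptic delay
weights = {0: [(1, 2.2)], 1: [(2, 2.1)], 2: []}    # neuron -> [(target, weight)]
potential = {0: 0.0, 1: 0.0, 2: 0.0}

events = [(0.0, 0, 2.5)]                           # (time, target neuron, input weight)
heapq.heapify(events)

while events:
    t, neuron, wgt = heapq.heappop(events)
    potential[neuron] += wgt
    if potential[neuron] >= THRESHOLD:
        print(f"t={t:.1f}: neuron {neuron} fires")
        potential[neuron] = 0.0                    # reset after firing
        for target, wt in weights[neuron]:
            heapq.heappush(events, (t + DELAY, target, wt))
```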
18
The Neuron: Brain Cell
19
Neuron: Conventional Vs. Spiking Model
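The comparison slide survives here only as a title, so a generic textbook contrast is sketched below: a conventional unit computes a single weighted sum passed through a nonlinearity, whereas a spiking (leaky integrate-and-fire) unit integrates its input over time and emits discrete spikes. This is not the talk's specific neuron model.

```python
import numpy as np

def conventional_neuron(x, w):
    # One number out: sigmoid of a weighted sum.
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def lif_neuron(input_current, dt=1.0, tau=20.0, threshold=1.0):
    # Spike times out: leaky integration of input, fire and reset at threshold.
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)
        if v >= threshold:
            spikes.append(t * dt)
            v = 0.0
    return spikes

x = np.array([0.5, 1.0, -0.3])
w = np.array([0.8, 0.4, 1.0])
print(conventional_neuron(x, w))      # single activation value
print(lif_neuron([0.08] * 100))       # list of spike times
```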
20
Conclusion and future work
Hypothesis: it is possible to build a dynamic neural memory, as a finite state machine, capable of storing sequences
Objective: to build a robust neural memory and a modular neural chip
Future work: to finalise the parameters of the neural model and to experiment with different models
21
Further information
Fire Project home page: www.cs.man.ac.uk/amulet/projects/fire
E-mail: joy.bose@cs.man.ac.uk
Homepage: www.cs.man.ac.uk/~bosej
Thanks!