Building a scalable neural processing subsystem
Joy Bose
Supervisors: Steve Furber (Amulet Group), Jonathan Shapiro (AI Group)
Amulet Group Meeting, 30 January 2003

Outline of the talk
- Introduction
- Memories: human and computer
- Kanerva's theory and the n-of-m modification
- Storing sequences: feedback and desensitization
- Software memory model implementation
- Conclusion and future work

Objective
- A scalable neural chip
- A robust dynamic neural memory

Motivation
- Take inspiration from the brain
- Commercial applications
- Scalable chips

The human memory
- Associative: learns and recalls
- Robust
- The pattern is the address
- Capable of storing sequences
- Information is stored in the connections between neurons
- Forgets gradually

The computer memory
- Is like a lookup table: data is stored at a particular address
- Uses a hash function
- Overwrites, but never forgets
- The address space grows exponentially with the number of dimensions (address bits)

Conventional computer memory [diagram: an n-bit address drives an address decoder that selects 1 out of 2^n word lines in the data memory; n-bit input data is stored in writing mode and n-bit output data is read back in recall mode]
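For contrast with the neural models that follow, the conventional memory above can be sketched in a few lines: the address decoder selects exactly one of 2^n locations, and a write simply overwrites it. A minimal sketch (the class name and sizes are illustrative, not from the talk):

```python
class ConventionalMemory:
    """Lookup-table memory: an n-bit address selects 1 of 2**n words."""

    def __init__(self, address_bits):
        self.words = [0] * (2 ** address_bits)  # one word per decoded line

    def write(self, address, data):
        self.words[address] = data   # overwrites; never forgets, never degrades

    def read(self, address):
        return self.words[address]   # exact address required: no partial match

ram = ConventionalMemory(address_bits=8)
ram.write(0x2A, 0xDEAD)
assert ram.read(0x2A) == 0xDEAD
```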

Kanerva's theory
- Seeks to combine the good features of both
- Is mainly for the sparse case, where the number of meaningful addresses is much less than 2^(number of dimensions)
- Is a binary computer memory with the characteristics of human memory
- Can also be viewed as a two-layer neural network

Working: Kanerva's memory model [diagram slide]
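The diagram itself is lost in the transcript, but the working of the model can be sketched. A fixed set of random "hard locations" plays the role of the address decoder: every location within a Hamming radius of the presented address is activated, a write adds the data (as +1/-1) into counters at all active locations, and a read sums and thresholds those counters. A minimal sketch, with illustrative sizes and radius:

```python
import numpy as np

rng = np.random.default_rng(0)

M, N, D, RADIUS = 1000, 256, 256, 111  # hard locations, address bits, data bits, Hamming radius
hard_addresses = rng.integers(0, 2, size=(M, N))  # the fixed, random address decoder
counters = np.zeros((M, D), dtype=int)            # the data memory

def active(address):
    """Decode: activate every hard location within RADIUS of the address."""
    return np.count_nonzero(hard_addresses != address, axis=1) <= RADIUS

def write(address, data):
    """Add the data (as +1/-1) into the counters of all active locations."""
    counters[active(address)] += 2 * data - 1

def read(address):
    """Sum the counters of the active locations and threshold at zero."""
    return (counters[active(address)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=D)
addr = rng.integers(0, 2, size=N)
write(addr, pattern)
print((read(addr) == pattern).all())  # an uncorrupted address recalls the stored pattern
```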

The n-of-m modification
- Aims: increased capacity, error checking, and feasibility of hardware implementation
- Uses n-of-m codes instead of binary addresses
- No negative inputs or weights
- Works in two modes: learning and recall
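An n-of-m code represents a symbol by exactly n active lines out of m, so a corrupted pattern is detectable simply by counting active lines. A minimal sketch (sizes and function names illustrative):

```python
from itertools import combinations
from math import comb

def is_valid(code, n):
    """An n-of-m code is valid iff exactly n lines are active."""
    return sum(code) == n

def encode(symbol_index, n, m):
    """Map a symbol index onto the symbol_index-th n-of-m codeword."""
    for idx, active_lines in enumerate(combinations(range(m), n)):
        if idx == symbol_index:
            code = [0] * m
            for line in active_lines:
                code[line] = 1
            return code
    raise ValueError("symbol index out of range")

print(comb(256, 11))          # e.g. 11-of-256 gives ~5.1e18 distinct codewords
code = encode(5, n=3, m=8)    # a small 3-of-8 example
assert is_valid(code, n=3)
```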

N-of-M Kanerva memory model [diagram: an i-of-A address feeds an a-of-A address decoder, which fires w-of-W word lines into the data memory to produce d-of-D data; operates in learning mode and recall mode]
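Learning and recall under those codings can be sketched as follows: the decoder fires the w word lines whose fixed a-of-A input sets best match the i-of-A address; learning strengthens connections from those word lines to the active data lines; recall sums the weights on those word lines and keeps the d most strongly driven outputs, restoring a clean d-of-D code. All sizes and the random decoder below are illustrative assumptions, not the talk's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

A, W, D = 64, 256, 64        # address lines, word lines, data lines (illustrative)
a, i, w, d = 8, 11, 20, 11   # n-of-m parameters (illustrative)

# Address decoder: each word line listens to a fixed random a-of-A subset.
decoder = np.zeros((W, A), dtype=int)
for row in decoder:
    row[rng.choice(A, size=a, replace=False)] = 1

weights = np.zeros((W, D), dtype=int)  # data memory: non-negative weights only

def word_lines(address):
    """Fire the w word lines whose input sets best overlap the i-of-A address."""
    return np.argsort(decoder @ address)[-w:]

def learn(address, data):
    """Strengthen connections from the fired word lines to the active data lines."""
    weights[np.ix_(word_lines(address), np.flatnonzero(data))] = 1

def recall(address):
    """Sum weights on the fired word lines; keep the d strongest outputs."""
    drive = weights[word_lines(address)].sum(axis=0)
    out = np.zeros(D, dtype=int)
    out[np.argsort(drive)[-d:]] = 1   # restore a clean d-of-D code
    return out

addr = np.zeros(A, dtype=int); addr[rng.choice(A, size=i, replace=False)] = 1
data = np.zeros(D, dtype=int); data[rng.choice(D, size=d, replace=False)] = 1
learn(addr, data)
assert (recall(addr) == data).all()
```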

Remembering sequences
- Order matters: earlier symbols are more significant
- Added: feedback
- Issue: symbol interference
- Two time constants
- Forward (shunt) inhibition: order sensitivity
- Backward inhibition: self-resetting

A finite state machine
- Finite number of states
- Transitions between states: a → b → c → d → a
- Stores sequences
- Interference between different sequences
- To experiment with: 'online' learning and 'offline' learning
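Behaviourally, the target is a learned finite state machine: present the current symbol and recall its stored successor. A minimal symbolic sketch of that behaviour, ignoring the neural substrate (names illustrative); note how a later sequence sharing a symbol would overwrite an earlier transition, which is exactly the symbol-interference issue above:

```python
class SequenceMachine:
    """Learns symbol-to-symbol transitions; replays a stored sequence."""

    def __init__(self):
        self.next_symbol = {}   # state -> learned successor

    def learn(self, sequence):
        for current, nxt in zip(sequence, sequence[1:]):
            self.next_symbol[current] = nxt   # a later write interferes with an earlier one

    def run(self, start, steps):
        state, out = start, [start]
        for _ in range(steps):
            state = self.next_symbol[state]
            out.append(state)
        return out

fsm = SequenceMachine()
fsm.learn("abcda")             # stores a -> b -> c -> d -> a
print(fsm.run("a", steps=5))   # ['a', 'b', 'c', 'd', 'a', 'b']
```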

The network model (for the FSM) [diagram: ADDRESS → ADDEC (address decoder) → DATA MEMORY → DATA]

The neural network (for the FSM) [diagram slide]

Some implementation issues
- Pendulum model: to impose time-ordering
- Event queue
- Desensitization factor
- Spiking neuron model
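Of these, the event queue is the most mechanical: spikes are timestamped events held in a priority queue and delivered in time order, so neurons do work only when a spike arrives. A minimal sketch using Python's heapq (class and field names illustrative):

```python
import heapq

class EventQueue:
    """Delivers spike events in timestamp order (discrete event simulation)."""

    def __init__(self):
        self._heap = []
        self._seq = 0   # tie-breaker so simultaneous spikes pop deterministically

    def schedule(self, time, neuron_id):
        heapq.heappush(self._heap, (time, self._seq, neuron_id))
        self._seq += 1

    def pop(self):
        time, _, neuron_id = heapq.heappop(self._heap)
        return time, neuron_id

q = EventQueue()
q.schedule(3.0, "n7")
q.schedule(1.5, "n2")
print(q.pop())   # (1.5, 'n2') -- the earliest spike is delivered first
```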

Pendulum model and event queue [diagram slide]

The neuron: a brain cell [diagram slide]

Neuron: conventional vs. spiking model [diagram slide]
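The contrast can be made concrete: a conventional (rate-coded) model maps inputs through a weighted sum and a squashing function with no notion of time, while a spiking model integrates incoming spikes, leaks between them, and fires only when its potential crosses threshold. A simple leaky integrate-and-fire neuron stands in here for whichever spiking model the talk used; parameters are illustrative:

```python
import math

def conventional_neuron(inputs, weights):
    """Rate-coded model: output is a squashed weighted sum, no notion of time."""
    return 1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(weights, inputs))))

class SpikingNeuron:
    """Leaky integrate-and-fire: state decays between spikes, fires on threshold."""

    def __init__(self, threshold=1.0, tau=20.0):
        self.v, self.threshold, self.tau, self.last_t = 0.0, threshold, tau, 0.0

    def receive(self, t, weight):
        self.v *= math.exp(-(t - self.last_t) / self.tau)  # leak since the last spike
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0      # reset after firing
            return True       # emits an output spike
        return False

print(conventional_neuron([1.0, 0.5], [0.8, -0.3]))
n = SpikingNeuron()
print([n.receive(t, 0.4) for t in (1.0, 2.0, 3.0)])  # fires on the third spike
```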

Conclusion and future work
- Hypothesis: it is possible to build a dynamic neural memory as a finite state machine capable of storing sequences
- Objective: to build a robust neural memory and a modular neural chip
- Future work: to finalise the various parameters of the neural model and to experiment with various models

Further information
- Fire Project home page:
- E-mail:
- Homepage:
Thanks!!