
Matrix Representation of Spiking Neural P Systems with Delay Kamala Krithivasan and Ajeesh Ramanujan Department of Computer Science and Engineering Indian Institute of Technology Madras

2 Overview
● Spiking Neural P systems
● Working of Spiking Neural P systems
● Matrix representation of Spiking Neural P systems with delay
● Computation using matrices
● Conclusion

3 Spiking Neural P (SN P) Systems
● A Spiking Neural P system is a computational model inspired by the biological nervous system
● Distributed and parallel computing model
● Variant of Membrane Systems (P Systems)
● Uses one type of object, called a spike (a)
● Computationally complete
Ionescu, M., Păun, Gh., Yokomori, T.: Spiking Neural P Systems, Fundamenta Informaticae 71 (2006).

4 Definition of SN P Systems
Π = (O, σ_1, …, σ_m, syn, i_0), where
● O = {a} (the alphabet of objects contains only one object, called a spike)
● σ_1, …, σ_m are neurons, identified by pairs σ_i = (n_i, R_i), 1 ≤ i ≤ m, where:
1) n_i ≥ 0 is the initial number of spikes in neuron i

5 Definition of SN P Systems (contd.)
Π = (O, σ_1, …, σ_m, syn, i_0), where
2) R_i is a finite set of rules:
a) E/a^r → a; d, where E is a regular expression over O, r ≥ 1, d ≥ 0 (spiking rules)
b) a^s → λ, for some s ≥ 1, such that a^s ∉ L(E) for any rule of type (a) in R_i (forgetting rules)
(Figure: example neurons containing a^2 and a, with rules a^2/a^2 → a; 0, (aa)*/a^3 → a; 1, a → a; 0 and a^2 → λ)
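As a small aside, the regular expression E that guards a spiking rule can be checked directly against a neuron's content. The sketch below is illustrative only (the helper name rule_enabled is mine); it assumes the spikes are written out as a string of a's.

import re

def rule_enabled(E, spikes):
    """True if the neuron's content a^spikes belongs to L(E)."""
    return re.fullmatch(E, "a" * spikes) is not None

# The rule (aa)*/a^3 -> a; 1 from the figure above is enabled only when the
# number of spikes is even (and at least 3, so that a^3 can be consumed).
print(rule_enabled("(aa)*", 4))   # True
print(rule_enabled("(aa)*", 3))   # False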

6 Definition of SN P Systems (contd.)
Π = (O, σ_1, …, σ_m, syn, i_0), where
● syn ⊆ {1, 2, …, m} × {1, 2, …, m}, with (i, i) ∉ syn for 1 ≤ i ≤ m (synapses)
● i_0 ∈ {1, 2, …, m} indicates the output neuron

7 Configuration of an SN P system
● The initial configuration of an SN P system is described by the number of spikes present in each neuron.
● During a computation, the state of the system is described by the number of spikes present in each neuron and the open/closed status of each neuron.

8 Working of an SN P system
● A global clock is assumed and all neurons work in parallel, but each neuron can use only one rule at a time.
● If more than one rule is enabled in a neuron at a given time, one of them is chosen non-deterministically.
● Using the rules, the system passes from one configuration to another. Such a step is called a transition.
● A computation of an SN P system is a finite or infinite sequence of transitions starting from the initial configuration.

9 Result of a computation
● Halting configuration (a configuration in which no rule can be used)
● Count the number of spikes present in the output neuron, or sent to the environment by the output neuron.
● The number of steps elapsed between the first two spikes sent to the environment. In this paper we use this mode of computation.

10 Matrix representation for SN P Systems with delay
● Number of neurons: m
● Number of rules: n
● Fix a total order on all the rules
● For a rule d_i: a^r → a^s; t, define lhs(d_i) = r, rhs(d_i) = s and delay(d_i) = t.

11 Matrix representation for SN P Systems with delay
● Vectors used
● Configuration vector
● Denotes the number of spikes present in each neuron at step i
● The i-th step configuration vector is represented by c_i = (n_i1, n_i2, …, n_im), where n_ij is the number of spikes contained in neuron j at step i

12 Matrix representation for SN P Systems with delay
● Vectors used (contd.)
● Spiking vector
● Denotes which rules are selected and applied at step i
● The i-th step spiking vector is represented by s_i = (s_i1, s_i2, …, s_in), where s_ij is equal to 1 if the j-th rule is selected at step i, and 0 otherwise

13 Matrix representation for SN P Systems with delay
● Vectors used (contd.)
● Delay vector
● Describes which neurons are inactive at step i
● The i-th step delay vector is represented by dv_i = (d_i1, d_i2, …, d_im), where d_ij is greater than 0 if the j-th neuron is inactive at step i, and 0 if it is active

14 Matrix representation for SN P Systems with delay
● Vectors used (contd.)
● Transfer vector
● Used to transfer spikes to neurons whose state changes from inactive to active at step i
● The i-th step transfer vector is represented by tv_i = (t_i1, t_i2, …, t_im); if t_ij = 1 then the spike produced by rule j can be sent to the active neurons connected to the neuron containing rule j

15 Matrix representation for SN P Systems with delay
● Matrices used
● Consumption matrix
● When rules are chosen and applied at a step, the reduction in the number of spikes in the different neurons is represented by the consumption matrix.
● An n × m matrix C where c_ij = −lhs(d_i) if there is a rule d_i: a^r → a^s; t in neuron j, and 0 otherwise

16 Matrix representation for SN P Systems with delay
● Matrices used
● Consumption matrix (contd.)
● For example, for the SN P system in the figure (neuron 1 contains a^3 and rule (1) a+/a → a; 2, neuron 2 contains rule (2) a^2 → a; 1, neuron 3 contains a and rule (3) a → a; 0, with synapses (1, 2) and (2, 3) and output neuron 3), C has rows (−1, 0, 0), (0, −2, 0) and (0, 0, −1).
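If the example system is read as above, C can be rebuilt mechanically from the rule data. The NumPy sketch below is illustrative only; the rules encoding and the variable names are my own, not notation from the paper.

import numpy as np

# Running example: neuron 1 holds a^3 with rule (1) a+/a -> a; 2,
# neuron 2 has rule (2) a^2 -> a; 1, neuron 3 holds a with rule (3) a -> a; 0.
# Each rule is encoded as (neuron, lhs, rhs, delay).
rules = [(1, 1, 1, 2),   # rule 1 in neuron 1
         (2, 2, 1, 1),   # rule 2 in neuron 2
         (3, 1, 1, 0)]   # rule 3 in neuron 3
n, m = 3, 3              # number of rules, number of neurons

C = np.zeros((n, m), dtype=int)
for i, (j, lhs, rhs, delay) in enumerate(rules):
    C[i, j - 1] = -lhs   # c_ij = -lhs(d_i) if rule d_i is in neuron j

print(C)                 # [[-1  0  0]
                         #  [ 0 -2  0]
                         #  [ 0  0 -1]]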

17 Matrix representation for SN P Systems with delay
● Matrices used (contd.)
● Transfer matrix
● When rules are chosen and applied at a step, the gain in the number of spikes in the different neurons is represented by the transfer matrix
● An n × m matrix T where t_ij = rhs(d_i) if there is a rule d_i: a^r → a^s; t in neuron k and (k, j) ∈ syn, and 0 otherwise

18 Matrix representation for SN P Systems with delay
● Matrices used
● Transfer matrix (contd.)
● For example, for the same SN P system, T has rows (0, 1, 0), (0, 0, 1) and (0, 0, 0).
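Continuing the sketch above (reusing rules, n and m), the transfer matrix of the example follows from the synapses (1, 2) and (2, 3); again this is an illustrative reconstruction, not code from the paper.

syn = {(1, 2), (2, 3)}   # synapses of the running example; neuron 3 is the output

T = np.zeros((n, m), dtype=int)
for i, (k, lhs, rhs, delay) in enumerate(rules):
    for j in range(1, m + 1):
        if (k, j) in syn:
            T[i, j - 1] = rhs   # t_ij = rhs(d_i) if rule d_i sits in neuron k and (k, j) is a synapse

print(T)                 # [[0 1 0]
                         #  [0 0 1]
                         #  [0 0 0]]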

19 Matrix representation for SN P Systems with delay
● Matrices used (contd.)
● Delay matrix
● The delay induced by the rules in the neurons is represented by the delay matrix
● An n × m matrix D where d_ij = delay(d_i) + 1 if there is a rule d_i: a^r → a^s; t in neuron j and t > 0, and 0 otherwise

20 Matrix representation for SN P Systems with delay
● Matrices used
● Delay matrix (contd.)
● For example, for the same SN P system, D has rows (3, 0, 0), (0, 2, 0) and (0, 0, 0).
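The delay matrix of the example can be rebuilt the same way, continuing the sketch above.

D = np.zeros((n, m), dtype=int)
for i, (j, lhs, rhs, delay) in enumerate(rules):
    if delay > 0:
        D[i, j - 1] = delay + 1   # d_ij = delay(d_i) + 1 when the rule has a positive delay

print(D)                 # [[3 0 0]
                         #  [0 2 0]
                         #  [0 0 0]]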

21 Matrix representation for SN P Systems with delay
● Matrices used (contd.)
● Firing matrix
● A firing matrix is used to control the transfer of spikes from a neuron when it comes out of its inactive state
● An n × m matrix F where f_ij = delay(d_i) + 1 if there is a rule d_i: a^r → a^s; t in neuron j and, when j is the output neuron, there exists a neuron l such that (j, l) ∈ syn; otherwise f_ij = 0

22 Matrix representation for SN P Systems with delay
● Matrices used
● Firing matrix (contd.)
● For example, for the same SN P system, F has rows (3, 0, 0), (0, 2, 0) and (0, 0, 0) (the last row is zero because the output neuron 3 has no outgoing synapse).
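Likewise the firing matrix, with the special case for the output neuron (continuing the sketch above; output_neuron is my name for i_0).

output_neuron = 3

F = np.zeros((n, m), dtype=int)
for i, (j, lhs, rhs, delay) in enumerate(rules):
    # f_ij = delay(d_i) + 1, except for rules in the output neuron when it has
    # no outgoing synapse (those spikes go only to the environment)
    if j != output_neuron or any(src == j for (src, _) in syn):
        F[i, j - 1] = delay + 1

print(F)                 # [[3 0 0]
                         #  [0 2 0]
                         #  [0 0 0]]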

23 Using a rule in SN P Systems with delay
● Consider a rule d_i: a^r → a^s; t in neuron j and suppose that it is used in the k-th step
● Using the rule consumes r spikes from neuron j and changes its state to inactive because of the delay
● Neuron j cannot send or receive any spikes until its state changes back to active
● At step k + t it can transfer s spikes to all active neurons connected to it
● At step k + t + 1 it becomes active again, so it can receive spikes and use its rules

24 Computation by Matrices
● Steps involved in a single computation step
● At the beginning of every step, decrement the delay and transfer vectors
● Obtain a spiking vector
● Update the configuration vector by taking spikes away from the active neurons, using the spiking vector and the consumption matrix
● Update the configuration vector by transferring spikes to active neurons, if the delay caused by rules used in previous steps has expired, using the delay vector, the transfer vector and the transfer matrix

25 Computation by Matrices
● Steps involved in a single computation step (contd.)
● Update the transfer vector and the delay vector using the firing matrix and the delay matrix
● If a zero-delay rule is used in the step, update the configuration vector by adding the appropriate number of spikes (a code sketch of this update loop follows below)
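The update loop above can be written as a short NumPy routine. The sketch below is my own reading of these slides, not the paper's implementation: it assumes, as in the running example, that neuron i has exactly one rule (rule i), so n = m and the per-neuron delay/transfer vectors line up with the matrix rows, and it fires every enabled rule deterministically instead of choosing non-deterministically. The names step and guards are mine.

import numpy as np

def step(c, dv, tv, C, T, D, F, guards):
    """One transition: c = configuration, dv = delay vector, tv = transfer vector."""
    # 1. decrement the delay and transfer vectors at the start of the step
    dv = np.maximum(dv - 1, 0)
    tv = np.maximum(tv - 1, 0)
    open_ = dv == 0                      # neuron j is open (active) iff dv[j] == 0

    # 2. spiking vector: rule i fires if its neuron is open and its guard holds
    s = np.array([1 if open_[i] and g(c[i]) else 0 for i, g in enumerate(guards)])

    # 3. consume spikes from the neurons whose rules fired
    c = c + s @ C

    # 4. rules whose delay expires now (tv == 1) deliver their spikes,
    #    but only into neurons that are currently open
    c = c + ((tv == 1).astype(int) @ T) * open_

    # 5. rules fired in this step register their delay and pending transfer
    dv = dv + s @ D
    tv = tv + s @ F

    # 6. zero-delay rules deliver their spikes immediately (to open neurons)
    zero_delay = s * (D.max(axis=1) == 0)
    c = c + (zero_delay @ T) * (dv == 0)

    return c, dv, tv, s

# Guards of the running example, written as predicates on the spike count
# instead of regular expressions (a simplification of E):
guards = [lambda k: k >= 1,   # rule 1: a+/a -> a; 2
          lambda k: k == 2,   # rule 2: a^2  -> a; 1
          lambda k: k == 1]   # rule 3: a    -> a; 0

Iterating step from c_0 = (3, 0, 1) and dv_0 = tv_0 = (0, 0, 0), with the matrices built in the earlier sketches, should reproduce the configurations listed on the following slides.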

26 Computation by Matrices
● Step 0
– c_0 = (3, 0, 1), dv_0 = (0, 0, 0), tv_0 = (0, 0, 0)
● Step 1
– dv_1 = (0, 0, 0), tv_1 = (0, 0, 0)
– s_1 = (1, 0, 1), since rules 1 and 3 can be used
– c_1 = c_0 + s_1 · C = (3, 0, 1) + (−1, 0, −1) = (2, 0, 0)

27 Computation by Matrices
● Step 1 (contd.)
● Rules 1 and 3 are used. Since rule 1 has a delay of 2, its spike is not passed to neuron 2 immediately, whereas one spike is sent to the environment by rule 3, which has zero delay
● Update the delay and transfer vectors: dv_1 = s_1 · D = (3, 0, 0), tv_1 = s_1 · F = (3, 0, 0)
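Reusing the matrices from the earlier sketches, the step-1 bookkeeping can be checked with three matrix products (the array names are mine).

c0 = np.array([3, 0, 1])
s1 = np.array([1, 0, 1])   # rules 1 and 3 fire at step 1

c1  = c0 + s1 @ C          # consume spikes: (3, 0, 1) + (-1, 0, -1)
dv1 = s1 @ D               # delays registered by the rules just fired
tv1 = s1 @ F               # pending transfers of the rules just fired

print(c1, dv1, tv1)        # [2 0 0] [3 0 0] [3 0 0]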

28 Computation by Matrices
● Step 2
● dv_2 = (2, 0, 0), tv_2 = (2, 0, 0)
● Neuron 1 is inactive
● Since no rule can be selected and applied, the spiking vector is s_2 = (0, 0, 0)
● No change in configuration: c_2 = (2, 0, 0)

29 Computation by Matrices
● Step 3
● dv_3 = (1, 0, 0), tv_3 = (1, 0, 0)
● Neuron 1 is inactive
● Since no rule can be selected and applied, the spiking vector is s_3 = (0, 0, 0)
● Since tv_3,1 = 1 and neuron 2 is active, a spike can be transferred from neuron 1 to neuron 2. So the configuration is c_3 = (2, 1, 0)

30 Computation by Matrices
● Step 4
● dv_4 = (0, 0, 0), tv_4 = (0, 0, 0)
● Neuron 1 is active again
● Since rule 1 can be selected and applied, the spiking vector is s_4 = (1, 0, 0)
● One spike is removed from neuron 1. So the configuration is c_4 = (1, 1, 0)

31 Computation by Matrices
● Step 4 (contd.)
● Since rule 1 was used, the state of neuron 1 changes and the delay and transfer vectors are updated: dv_4 = (3, 0, 0), tv_4 = (3, 0, 0)
● Since neuron 1 is inactive, its spike cannot be passed to neuron 2 yet. So the configuration remains c_4 = (1, 1, 0)

32 Computation by Matrices
● Step 5
● dv_5 = (2, 0, 0), tv_5 = (2, 0, 0)
● Neuron 1 is inactive
● Since no rule can be selected and applied, the spiking vector is s_5 = (0, 0, 0)
● No change in configuration: c_5 = (1, 1, 0)

33 Computation by Matrices
● Step 6
● dv_6 = (1, 0, 0), tv_6 = (1, 0, 0)
● Neuron 1 is inactive
● Since no rule can be selected and applied, the spiking vector is s_6 = (0, 0, 0)
● Since tv_6,1 = 1 and neuron 2 is active, a spike can be transferred from neuron 1 to neuron 2. So the configuration is c_6 = (1, 2, 0)

34 Computation by Matrices
● Step 7
● dv_7 = (0, 0, 0), tv_7 = (0, 0, 0)
● Neuron 1 is active again
● Since rules 1 and 2 can be selected and applied, the spiking vector is s_7 = (1, 1, 0)
● One spike is removed from neuron 1 and two spikes from neuron 2. So the configuration is c_7 = (0, 0, 0)

35 Computation by Matrices
● Step 7 (contd.)
● Since rules 1 and 2 were used, the states of neurons 1 and 2 change and the delay and transfer vectors are updated: dv_7 = (3, 2, 0), tv_7 = (3, 2, 0)
● Since neurons 1 and 2 are inactive, the spike from neuron 1 cannot be passed to neuron 2, and neuron 2 cannot receive any spikes. So the configuration remains c_7 = (0, 0, 0)

36 Computation by Matrices
● Step 8
● dv_8 = (2, 1, 0), tv_8 = (2, 1, 0)
● Neurons 1 and 2 are inactive
● Since no rule can be selected and applied, the spiking vector is s_8 = (0, 0, 0) and there is no change in the configuration

37 Computation by Matrices
● Step 8 (contd.)
● Since tv_8,2 = 1, one spike is transferred from neuron 2 to neuron 3
● The configuration is c_8 = (0, 0, 1)

38 Computation by Matrices
● Step 9
● dv_9 = (1, 0, 0), tv_9 = (1, 0, 0)
● Neuron 1 is inactive
● Since rule 3 can be selected and applied, the spiking vector is s_9 = (0, 0, 1)
● So the configuration is c_9 = (0, 0, 0), and the spike produced by rule 3 is sent to the environment

39 Computation by Matrices
● Step 9 (contd.)
● dv_9 = (1, 0, 0), tv_9 = (1, 0, 0)
● Since tv_9,1 = 1, one spike is transferred from neuron 1 to neuron 2 by rule 1. So the configuration is c_9 = (0, 1, 0)
● Since no rule can be applied in any subsequent step, the computation halts.

40 Conclusion
● The matrix representation can be used to build a simulator for SN P systems with delay
● Applications using SN P systems can be implemented on parallel computing architectures such as the Compute Unified Device Architecture (CUDA)

41 References
● M. Ionescu, Gh. Păun and T. Yokomori, Spiking Neural P Systems, Fundamenta Informaticae, vol. 71, no. 2-3, 2006.
● X. Zeng, H. Adorna, M. A. Martínez-del-Amor, L. Pan and M. J. Pérez-Jiménez, Matrix Representation of Spiking Neural P Systems, in M. Gheorghe et al. (Eds.): CMC 2010, LNCS 6501.
● L. Pan and Gh. Păun, Spiking Neural P Systems with Anti-Spikes, Int. J. of Computers, Communications and Control, 4, 2009.
● The P System Web Page:

42 THANK YOU