NEURONAL DYNAMICS 2: ACTIVATION MODELS


2002.10.8

Chapter 3. Neuronal Dynamics 2: Activation Models

3.1 Neuronal dynamical systems

Neuronal activations change with time. How they change depends on the dynamical equations

\dot{x}_i = g_i(F_X, F_Y, \ldots)   (3-1)

\dot{y}_j = h_j(F_X, F_Y, \ldots)   (3-2)

which govern the time evolution of the activations x_i in field F_X and y_j in field F_Y.

3.1 ADDITIVE NEURONAL DYNAMICS

First-order passive decay model. In the absence of external or neuronal stimuli, the simplest activation dynamics model is

\dot{x}_i = -x_i   (3-3)

with solution

x_i(t) = x_i(0) e^{-t}   (3-4)

3.1 ADDITIVE NEURONAL DYNAMICS

Since x_i(t) = x_i(0) e^{-t} \to 0 as t \to \infty for any finite initial condition, the membrane potential decays exponentially quickly to its zero potential.

Passive Membrane Decay

The passive-decay rate A_i > 0 scales the rate of decay to the membrane's resting potential:

\dot{x}_i = -A_i x_i

Solution: x_i(t) = x_i(0) e^{-A_i t}

The passive-decay rate measures the cell membrane's resistance, or "friction," to current flow.

Property

The larger the passive-decay rate A_i, the faster the decay, and the less the resistance to current flow.
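To make the decay behavior concrete, here is a minimal numerical sketch (assuming NumPy; the function name and parameter values are illustrative, not from the slides) that Euler-integrates the passive-decay model and compares it with the closed-form solution:

```python
import numpy as np

def passive_decay(x0, A, T, steps=5000):
    """Euler-integrate dx/dt = -A*x from x(0) = x0 up to time T."""
    x, dt = x0, T / steps
    for _ in range(steps):
        x += dt * (-A * x)
    return x

x0, A, T = 2.0, 3.0, 5.0
print(passive_decay(x0, A, T))      # numerical trajectory endpoint
print(x0 * np.exp(-A * T))          # closed-form x(T); both are near zero
```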

Membrane Time Constants

The membrane time constant scales the time variable of the activation dynamical system. The multiplicative constant model:

C_i \dot{x}_i = -A_i x_i   (3-8)

Solution and property

Solution: x_i(t) = x_i(0) e^{-A_i t / C_i}

Property: the smaller the capacitance C_i, the faster things change. As the membrane capacitance increases toward positive infinity, membrane fluctuation slows to a stop.
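A one-line check of this property (a sketch, NumPy assumed): evaluating the solution x_i(t) = x_i(0) e^{-A_i t / C_i} for growing capacitances shows the decay slowing toward a stop:

```python
import numpy as np

x0, A, t = 1.0, 2.0, 1.0
for C in (0.1, 1.0, 10.0, 100.0):
    # time-scaling capacitance: larger C stretches the time axis
    print(C, x0 * np.exp(-A * t / C))
# C = 0.1 decays almost to zero; C = 100 barely moves from x0
```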

Membrane Resting Potentials

Definition: the resting potential is the activation value to which the membrane potential equilibrates in the absence of external or neuronal inputs:

C_i \dot{x}_i = -A_i x_i + P_i   (3-11)

Solution:

x_i(t) = x_i(0) e^{-A_i t / C_i} + \frac{P_i}{A_i}\left(1 - e^{-A_i t / C_i}\right)   (3-12)

Note

The capacitance C_i appears in the exponent of the solution, so it is called the time-scaling capacitance. It does not affect the steady-state solution x_i(\infty) = P_i / A_i, which also does not depend on the finite initial condition. In the resting case we can therefore read off the solution immediately.
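A quick numerical confirmation (a sketch, assuming NumPy; names are illustrative) that the steady state of (3-12) is P_i / A_i regardless of the capacitance and the initial condition:

```python
import numpy as np

def membrane(x0, A, P, C, t):
    """Closed-form solution (3-12) of C*dx/dt = -A*x + P."""
    decay = np.exp(-A * t / C)
    return x0 * decay + (P / A) * (1.0 - decay)

A, P = 2.0, 6.0
for x0 in (-1.0, 0.0, 5.0):
    for C in (0.5, 10.0):
        print(x0, C, membrane(x0, A, P, C, t=200.0))  # all print P/A = 3.0
```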

Additive External Input

Apply a relatively constant external input I_i to a neuron:

\dot{x}_i = -A_i x_i + I_i   (3-13)

Solution:

x_i(t) = x_i(0) e^{-A_i t} + \frac{I_i}{A_i}\left(1 - e^{-A_i t}\right)   (3-14)

Meaning of the input

The input can represent the magnitude of directly experienced sensory information or of directly applied control information. The input changes slowly enough that it can be assumed constant.

3.2 ADDITIVE NEURONAL FEEDBACK

Neurons do not compute alone. Neurons modify their activations with external input and with feedback from one another. This feedback takes the form of path-weighted signals from synaptically connected neurons.

Synaptic Connection Matrices

Suppose there are n neurons in field F_X and p neurons in field F_Y. The axon of the ith neuron in F_X terminates in a synapse m_{ij} on the jth neuron in F_Y. The synaptic efficacy m_{ij} is constant and can be positive, negative, or zero.

Meaning of the connection matrix

The synaptic matrix or connection matrix M is an n-by-p matrix of real numbers whose entries m_{ij} are the synaptic efficacies. The ijth synapse is excitatory if m_{ij} > 0, inhibitory if m_{ij} < 0. The matrix M describes the forward projections from neuron field F_X to neuron field F_Y. The matrix N describes the backward projections from neuron field F_Y to neuron field F_X.

Bidirectional and Unidirectional Connection Topologies

Bidirectional networks: M and N^T have the same or approximately the same structure. Unidirectional network: a neuron field synaptically intraconnects to itself through a square matrix M. BAM: if M is symmetric, M = M^T, the unidirectional network is a BAM.

Augmented field and augmented matrix

M connects F_X to F_Y, and N connects F_Y to F_X. The augmented field F_Z = F_X \cup F_Y then intraconnects to itself through the square block matrix

B = \begin{pmatrix} 0 & M \\ N & 0 \end{pmatrix}

Augmented field and augmented matrix

In the BAM case, when N = M^T,

B = \begin{pmatrix} 0 & M \\ M^T & 0 \end{pmatrix} = B^T

hence a BAM symmetrizes an arbitrary rectangular matrix M. In the general case

B = \begin{pmatrix} P & M \\ N & Q \end{pmatrix}

where P is an n-by-n matrix and Q is a p-by-p matrix of intrafield connections. B = B^T if and only if P = P^T, Q = Q^T, and N = M^T; if and only if this holds, the neurons in F_Z are symmetrically intraconnected.
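The block structure is easy to verify numerically. A sketch (NumPy assumed; the matrix entries are arbitrary stand-ins) of the BAM augmentation with P = 0, Q = 0, and N = M^T:

```python
import numpy as np

n, p = 4, 3
M = np.random.default_rng(0).integers(-1, 2, size=(n, p)).astype(float)

# BAM case: no intrafield connections (P = 0, Q = 0) and N = M^T
B = np.block([[np.zeros((n, n)), M],
              [M.T,              np.zeros((p, p))]])
print(np.array_equal(B, B.T))   # True: a BAM symmetrizes any rectangular M
```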

3.3 ADDITIVE ACTIVATION MODELS

The system of n + p coupled first-order differential equations

\dot{x}_i = -A_i x_i + \sum_{j=1}^{p} S_j(y_j) n_{ji} + I_i   (3-15)

\dot{y}_j = -A_j y_j + \sum_{i=1}^{n} S_i(x_i) m_{ij} + J_j   (3-16)

defines the additive activation model.

Additive autoassociative model

The additive autoassociative model corresponds to a system of n coupled first-order differential equations:

\dot{x}_i = -A_i x_i + \sum_{j=1}^{n} S_j(x_j) m_{ji} + I_i   (3-17)

Additive autoassociative model: a special case

A special case of the additive autoassociative model treats the neuron as a resistance-capacitance circuit:

C_i \dot{x}_i = -\frac{x_i}{R_i} + \sum_{j=1}^{n} S_j(x_j) m_{ij} + I_i   (3-18)

where the synaptic efficacy is a conductance,

m_{ij} = \frac{1}{r_{ij}}   (3-20)

and r_{ij} measures the cytoplasmic resistance between neurons i and j.

Hopfield circuit and continuous additive bidirectional associative memories

The Hopfield circuit arises from (3-18) if each neuron has a strictly increasing signal function and if the synaptic connection matrix is symmetric, m_{ij} = m_{ji}:

C_i \dot{x}_i = -\frac{x_i}{R_i} + \sum_{j=1}^{n} S_j(x_j) m_{ij} + I_i   (3-21)

The continuous additive bidirectional associative memory:

\dot{x}_i = -A_i x_i + \sum_{j=1}^{p} S_j(y_j) m_{ij} + I_i   (3-22)

\dot{y}_j = -A_j y_j + \sum_{i=1}^{n} S_i(x_i) m_{ij} + J_j   (3-23)

3.4 ADDITIVE BIVALENT FEEDBACK

Discrete additive activation models correspond to neurons with threshold signal functions. The neurons can assume only two values: ON and OFF. ON represents the signal value +1; OFF represents 0 or -1. Bivalent models can represent asynchronous and stochastic behavior.

Bivalent Additive BAM

BAM: bidirectional associative memory. Define a discrete additive BAM with threshold signal functions, arbitrary thresholds and inputs, an arbitrary but constant synaptic connection matrix M, and discrete time steps k:

x_i^{k+1} = \sum_{j=1}^{p} S_j(y_j^k) m_{ij} + I_i   (3-24)

y_j^{k+1} = \sum_{i=1}^{n} S_i(x_i^k) m_{ij} + J_j   (3-25)

Threshold binary signal functions

S_i(x_i^{k+1}) = \begin{cases} 1 & \text{if } x_i^{k+1} > U_i \\ S_i(x_i^k) & \text{if } x_i^{k+1} = U_i \\ 0 & \text{if } x_i^{k+1} < U_i \end{cases}   (3-26)

S_j(y_j^{k+1}) = \begin{cases} 1 & \text{if } y_j^{k+1} > V_j \\ S_j(y_j^k) & \text{if } y_j^{k+1} = V_j \\ 0 & \text{if } y_j^{k+1} < V_j \end{cases}   (3-27)

for arbitrary real-valued thresholds U = (U_1, \ldots, U_n) for F_X neurons and V = (V_1, \ldots, V_p) for F_Y neurons.
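A direct transcription of (3-26) as code (a sketch; the F_Y function (3-27) is identical with thresholds V_j). Note the middle case: at exactly the threshold, the neuron keeps its previous signal value:

```python
def threshold_signal(x_new, s_prev, U):
    """Binary threshold signal function (3-26) with threshold U."""
    if x_new > U:
        return 1
    if x_new < U:
        return 0
    return s_prev   # x_new == U: hold the previous signal value

print(threshold_signal(0.7, 0, U=0.5))   # -> 1
print(threshold_signal(0.5, 0, U=0.5))   # -> 0 (previous value kept)
```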

An example of the BAM model

A 4-by-3 matrix M represents the forward synaptic projections from F_X to F_Y. A 3-by-4 matrix M^T represents the backward synaptic projections from F_Y to F_X.

An example of the BAM model

Suppose at initial time k all the neurons in F_Y are ON, so the signal state vector at time k corresponds to

S(Y_k) = (1\ 1\ 1)

Suppose also that constant inputs are applied.

An example of the BAM model

First: at time k+1, synchronous operation produces the new F_X signal state vector S(X_{k+1}).

Next: at time k+1 these signals pass "forward" through the filter M to affect the activations of the F_Y neurons. The three F_Y neurons compute three dot products, or correlations: the signal state vector S(X_{k+1}) multiplies each of the three columns of M.

An example of the BAM model

The result is the new F_Y activation state vector. Synchronously thresholding it computes the new signal state vector S(Y_{k+1}).

An example of the BAM model

The signal vector S(Y_{k+1}) passes "backward" through the synaptic filter M^T at time k+2. Synchronous thresholding computes the new signal state vector S(X_{k+2}).

An example of the BAM model

Since the new signal state vectors repeat the earlier ones, the system has reached equilibrium.

Conclusion: these same two signal state vectors will pass back and forth in bidirectional equilibrium forever, or until new inputs perturb the system out of equilibrium.

An example of the BAM model

Asynchronous state changes may lead to a different bidirectional equilibrium. Keep the first neuron ON, and update only the second and third neurons. At time k all neurons are ON; thresholding the updated neurons gives a new signal state vector at time k+1.

An example of the BAM model

The new activation state vector follows by passing these signals through the synaptic filter. Synchronously thresholding and passing this vector forward to F_Y gives the new F_Y signal state vector.

An example of the BAM model

The same holds for any asynchronous state-change policy we apply to the neurons: the system reaches a new equilibrium, and the binary pair of signal state vectors represents a fixed point of the system.
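The slide's specific matrix and vectors did not survive the transcript, so here is a hedged stand-in (NumPy assumed; a random bipolar matrix, zero inputs, and zero thresholds) showing the synchronous update cycle running to a bidirectional fixed point:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.choice([-1.0, 1.0], size=(4, 3))   # stand-in for the slide's 4x3 matrix
sx, sy = np.ones(4), np.ones(3)            # bipolar signals, all neurons ON

for k in range(20):
    sy_new = np.where(sx @ M > 0, 1.0, -1.0)      # forward pass through M
    sx_new = np.where(M @ sy_new > 0, 1.0, -1.0)  # backward pass through M^T
    if np.array_equal(sx_new, sx) and np.array_equal(sy_new, sy):
        break                              # the signal pair repeats: fixed point
    sx, sy = sx_new, sy_new
print(sx, sy)                              # an equilibrium pair
```

For brevity this sketch sends a zero activation to -1 instead of holding the previous signal as (3-26) prescribes.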

Conclusion

Different subset asynchronous state-change policies applied to the same data need not produce the same fixed-point equilibrium, though they tend to produce the same equilibria. All BAM state changes lead to fixed-point stability.

Bidirectional Stability

Definition: a BAM system is bidirectionally stable if all inputs converge to fixed-point equilibria. A denotes a binary n-vector in \{0, 1\}^n; B denotes a binary p-vector in \{0, 1\}^p.

Bidirectional Stability

Represent a BAM system equilibrating to a bidirectional fixed point (A_f, B_f) as the alternating sequence of forward and backward passes

A \to B, \quad A' \gets B, \quad A' \to B', \quad A'' \gets B', \ \ldots, \ A_f \to B_f, \quad A_f \gets B_f

Lyapunov Functions

L maps system state variables to real numbers and decreases with time. In the BAM case, L maps the bivalent product space \{0, 1\}^n \times \{0, 1\}^p to real numbers. Suppose L is sufficiently differentiable to apply the chain rule:

\dot{L} = \frac{dL}{dt} = \sum_{i=1}^{n} \frac{\partial L}{\partial x_i} \frac{dx_i}{dt}   (3-28)

Lyapunov Functions

The quadratic choice of L:

L = \frac{1}{2} \sum_{i=1}^{n} x_i^2   (3-29)

Suppose the dynamical system describes the passive-decay system

\dot{x}_i = -x_i   (3-30)

with solution

x_i(t) = x_i(0) e^{-t}   (3-31)

Lyapunov Functions

The partial derivative of the quadratic L:

\frac{\partial L}{\partial x_i} = x_i   (3-32)

so

\dot{L} = \sum_i x_i \dot{x}_i   (3-33)

or, using (3-30) and its solution (3-31),

\dot{L} = -\sum_i x_i^2   (3-34)

\dot{L} = -\sum_i x_i^2(0) e^{-2t}   (3-35)

In either case

\dot{L} \le 0   (3-36)

At equilibrium \dot{L} = 0. This occurs if and only if all velocities \dot{x}_1, \ldots, \dot{x}_n equal zero.
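A numerical sketch (NumPy assumed) of (3-29) through (3-36): along any trajectory of the passive-decay system, the quadratic L strictly decreases until equilibrium:

```python
import numpy as np

x = np.array([2.0, -1.0, 0.5])    # state of the passive-decay system (3-30)
dt, L_prev = 0.1, None
for _ in range(50):
    L = 0.5 * np.sum(x**2)        # quadratic Lyapunov function (3-29)
    if L_prev is not None:
        assert L < L_prev         # L strictly decreases away from equilibrium
    L_prev = L
    x += dt * (-x)                # Euler step of dx/dt = -x
print(L_prev)                     # decays toward 0, the equilibrium value
```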

Conclusion

A dynamical system is stable if some Lyapunov function L decreases along trajectories. A dynamical system is asymptotically stable if some Lyapunov function strictly decreases along trajectories. Monotonicity of a Lyapunov function provides a sufficient condition for stability and asymptotic stability.

Linear system stability

For a symmetric matrix A and a square matrix B, the quadratic form L = x^T A x behaves as a strictly decreasing Lyapunov function for the linear dynamical system \dot{x} = Bx if and only if the matrix A B + B^T A is negative definite, since \dot{L} = \dot{x}^T A x + x^T A \dot{x} = x^T (B^T A + A B) x.
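A sketch of this test (NumPy assumed; the particular A and B are illustrative choices): since \dot{L} = x^T (A B + B^T A) x along \dot{x} = Bx, it suffices to check the eigenvalues of the symmetric matrix A B + B^T A:

```python
import numpy as np

A = np.eye(2)                     # symmetric matrix defining L(x) = x^T A x
B = np.array([[-1.0,  0.5],
              [-0.5, -2.0]])      # linear dynamical system dx/dt = B x

Q = A @ B + B.T @ A               # d/dt (x^T A x) = x^T Q x
eigs = np.linalg.eigvalsh(Q)      # Q is symmetric, so eigenvalues are real
print(eigs, np.all(eigs < 0))     # all negative => strictly decreasing L
```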

The relation between convergence rate and eigenvalue sign

A general theorem in dynamical systems theory relates convergence rate and eigenvalue sign: a nonlinear dynamical system converges exponentially quickly if its system Jacobian has eigenvalues with negative real parts. Locally, such nonlinear systems behave linearly. A Lyapunov function summarizes total system behavior. A Lyapunov function often measures the energy of a physical system.

Potential-energy function represented by a quadratic form

Consider a system of n variables x_1, \ldots, x_n and its potential-energy function E. Suppose the coordinate x_i measures the displacement from equilibrium of the ith unit. The energy depends only on the coordinates, so E = E(x_1, \ldots, x_n). Since E is a physical quantity, we assume it is sufficiently smooth to permit a multivariable Taylor-series expansion about the origin:

E(x_1, \ldots, x_n) = E(0) + \sum_i \frac{\partial E}{\partial x_i}\bigg|_0 x_i + \frac{1}{2} \sum_i \sum_j \frac{\partial^2 E}{\partial x_i \partial x_j}\bigg|_0 x_i x_j + \cdots \approx \sum_i \sum_j a_{ij} x_i x_j = x^T A x   (3-42)

with a_{ij} = \frac{1}{2} \frac{\partial^2 E}{\partial x_i \partial x_j}\big|_0.

Potential-energy function represented by a quadratic form

Here A is symmetric, since

\frac{\partial^2 E}{\partial x_i \partial x_j} = \frac{\partial^2 E}{\partial x_j \partial x_i}

The reason (3-42) follows:

First, we defined the origin as an equilibrium of zero potential energy, so E(0) = 0.

Second, the origin is an equilibrium only if all first partial derivatives equal zero.

Third, we can neglect the higher-order terms for small displacements, since we assume the higher-order products are smaller than the quadratic products.

Bivalent BAM theorem

The average signal energy L of the forward pass of the signal state vector S(X) through M and the backward pass of the signal state vector S(Y) through M^T:

L = -\frac{S(X) M S(Y)^T + S(Y) M^T S(X)^T}{2} = -S(X) M S(Y)^T

since S(Y) M^T S(X)^T = \left[S(X) M S(Y)^T\right]^T = S(X) M S(Y)^T (a 1-by-1 matrix equals its transpose).

Lower bound of the Lyapunov function

The signal-energy Lyapunov function is clearly bounded below. For binary or bipolar signal vectors, the matrix coefficients define the attainable bound:

L = -S(X) M S(Y)^T \ge -\sum_{i=1}^{n} \sum_{j=1}^{p} |m_{ij}|

The attainable upper bound is the negative of this expression.
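A brute-force check of the bound (a sketch, NumPy assumed; the 3-by-2 matrix is illustrative): enumerating all bipolar signal pairs shows L never drops below the negative sum of |m_{ij}|, and for a matrix whose entries share one sign the bound is attained:

```python
import numpy as np
from itertools import product

M = np.abs(np.random.default_rng(2).normal(size=(3, 2)))  # one-sign entries
bound = -np.abs(M).sum()

values = [-(np.array(sx) @ M @ np.array(sy))   # L = -S(X) M S(Y)^T
          for sx in product([-1, 1], repeat=3)
          for sy in product([-1, 1], repeat=2)]
print(min(values) >= bound - 1e-12)            # True: the bound holds
print(np.isclose(min(values), bound))          # True: and here it is attained
```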

Lyapunov function for the general BAM system

The signal-energy Lyapunov function for the general BAM system takes the form

L = -S(X) M S(Y)^T - S(X) I^T - S(Y) J^T + S(X) U^T + S(Y) V^T

with inputs I and J and constant vectors of thresholds U and V. The attainable bound of this function is

L \ge -\sum_i \sum_j |m_{ij}| - \sum_i |I_i - U_i| - \sum_j |J_j - V_j|

Bivalent BAM theorem

Bivalent BAM theorem. Every matrix is bidirectionally stable for synchronous or asynchronous state changes.

Proof. Consider the signal state changes that occur from time k to time k+1. Define the vectors of signal state changes as

\Delta S(X) = S(X^{k+1}) - S(X^k) = (\Delta S_1, \ldots, \Delta S_n)

\Delta S(Y) = S(Y^{k+1}) - S(Y^k) = (\Delta S_1, \ldots, \Delta S_p)

Bivalent BAM theorem

Define the individual state changes as

\Delta S_i = S_i(x_i^{k+1}) - S_i(x_i^k), \qquad \Delta S_j = S_j(y_j^{k+1}) - S_j(y_j^k)

We assume at least one neuron changes state from time k to time k+1. Any subset of neurons in a field can change state, but in only one field at a time. For binary threshold signal functions, if a state change is nonzero, then

\Delta S_i = 1 \quad \text{or} \quad \Delta S_i = -1

Bivalent BAM theorem

For bipolar threshold signal functions,

\Delta S_i = 2 \quad \text{or} \quad \Delta S_i = -2

The "energy" change

\Delta L = L^{k+1} - L^k

differs from zero because of changes in field F_X or in field F_Y.

Bivalent BAM theorem

For a state change in field F_X,

\Delta L = -\Delta S(X) \left[ M S(Y^k)^T + I^T - U^T \right] = -\sum_{i=1}^{n} \Delta S_i \left( x_i^{k+1} - U_i \right)

using the BAM update (3-24); the F_Y case is the same with M^T, J, and V.

Bivalent BAM theorem

Suppose \Delta S_i > 0. Then S_i(x_i^{k+1}) = 1 and S_i(x_i^k) = 0. By the threshold law (3-26) this implies x_i^{k+1} > U_i, so the product is positive:

\Delta S_i \left( x_i^{k+1} - U_i \right) > 0

For the other case, suppose \Delta S_i < 0. Then S_i(x_i^{k+1}) = 0 and S_i(x_i^k) = 1.

Bivalent BAM theorem

This implies x_i^{k+1} < U_i, so the product is again positive:

\Delta S_i \left( x_i^{k+1} - U_i \right) > 0

So \Delta L < 0 for every state change. Since L is bounded below, L behaves as a Lyapunov function for the additive BAM dynamical system defined by (3-24) through (3-27). Since the matrix M was arbitrary, every matrix is bidirectionally stable. The bivalent BAM theorem is proved.
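An empirical sketch of the proof's key step (NumPy assumed; zero inputs and thresholds for brevity): every asynchronous single-neuron update leaves L = -S(X) M S(Y)^T unchanged or strictly lowers it, so random update schedules slide monotonically into a fixed point:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(4, 3))
sx = rng.choice([-1.0, 1.0], size=4)
sy = rng.choice([-1.0, 1.0], size=3)
energy = lambda: -(sx @ M @ sy)            # L = -S(X) M S(Y)^T

for _ in range(200):                       # random asynchronous schedule
    before = energy()
    if rng.random() < 0.5:                 # update one F_X neuron
        i = rng.integers(4)
        sx[i] = 1.0 if M[i] @ sy > 0 else -1.0
    else:                                  # update one F_Y neuron
        j = rng.integers(3)
        sy[j] = 1.0 if sx @ M[:, j] > 0 else -1.0
    assert energy() <= before              # Delta L <= 0 at every step
print(energy())                            # settles at a local minimum of L
```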

Property of a globally stable dynamical system

Every input trajectory converges to some fixed-point equilibrium.

Two insights about the rate of convergence

First, the individual energies decrease nontrivially: the BAM system does not creep arbitrarily slowly down the energy surface toward the nearest local minimum; the system takes definite hops into the basin of attraction of the fixed point.

Second, a synchronous BAM tends to converge faster than an asynchronous BAM. In other words, asynchronous updating tends to take more iterations to converge.