1 MACHINE LEARNING Continuous Time-Delay NN Limit-Cycles, Stability and Convergence

2 Recurrent Neural Networks. So far we have considered only feed-forward neural networks (apart from Hebbian learning). Most biological networks, however, have recurrent connections. This change in the direction of information flow is interesting because it allows the network: to keep a memory of the activation of a neuron; to propagate information across output neurons.

3 Neuron models (from abstract to realistic):
- Binary neurons, discrete time: Perceptron NNs, Hopfield network
- Real-number neurons, discrete time: BackProp NNs, Kohonen map
- Real-number neurons, continuous time: continuous-time recurrent NNs, echo-state networks, several CPG models

4 Dynamical Systems and NNs. Dynamical systems are at the core of the control systems underlying skillful motion in many vertebrates. Central pattern generators (CPGs): purely cyclic patterns underlying basic locomotion.

5 Dynamical Systems and NNs. Dynamical systems are at the core of the control systems underlying skillful motion in many vertebrates. Adaptive controllers: dynamical modulation of the CPG.

6 Dynamical Systems


17 Dynamical Systems: Applications. Model of human three-dimensional reaching movements. Goal: to find a generic representation of motions that allows both robust visual recognition and flexible regeneration of motion.

18 Dynamical Systems: Applications. Dynamical system modulation: adaptation to sudden target displacement; different initial conditions.

19 Dynamical Systems: Applications. Adaptation to sudden target displacement; different initial conditions.

20 Dynamical Systems: Applications. Adaptation to different contexts; online adaptation to changes in the context.

21 Neuron models (from abstract to realistic):
- Binary neurons, discrete time: Perceptron NNs, Hopfield network
- Real-number neurons, discrete time: BackProp NNs, Kohonen map
- Real-number neurons, continuous time: continuous-time recurrent NNs, echo-state networks, several CPG models

22 Leaky integrator neuron model. Idea: add a state variable m_j (analogous to a membrane potential) that is governed by a differential equation (given on the slide in both discrete-time and continuous-time form).

23 Leaky integrator neuron model. Idea: add a state variable m_j (membrane potential) that is governed by a differential equation.
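The differential equation itself appears on the original slides only as an image. A standard leaky-integrator formulation, consistent with the parameter names (tau, D, b, w, S) used on the following slides but an assumption as to the exact form, is

    \tau_j \, \dot{m}_j = -m_j + \sum_i w_{ji} \, x_i + S_j, \qquad x_j = \sigma\big(D (m_j + b_j)\big),

where m_j is the membrane potential of neuron j, x_j its output (firing rate), \sigma a logistic sigmoid, D a gain, b_j a bias, w_{ji} the weight of the connection from neuron i to neuron j, and S_j an external input.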

24 Leaky integrator neuron model. This type of neuron model is used in: recurrent neural networks for time-series analysis (e.g. echo-state networks); neural oscillators; several CPG models; associative memories, e.g. the continuous-time version of the Hopfield model.

25 Behavior of a single neuron. A single leaky-integrator neuron without self-connection is described by a linear differential equation that can be solved analytically. Here S is a constant input.
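With the formulation above and no self-connection (w_11 = 0), the equation reduces to \tau \dot{m} = -m + S, a linear ODE whose exact solution is

    m(t) = S + (m_0 - S) \, e^{-t/\tau},

so the membrane potential relaxes exponentially to the input S with time constant \tau.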

26 Behavior of a single neuron. Parameters: tau = 0.2; D = 1.0; m_0 = 0.0; S = 3.0; b = 0.0.

27 Behavior of a single neuron. A single leaky-integrator neuron with a self-connection obeys a nonlinear differential equation (the self-connection feeds back through the sigmoid, introducing a nonlinear term) that cannot be solved analytically.

28 Behavior of a single neuron: numerical simulation. Parameters: tau = 0.2; D = 1; w_11 = -0.5; b = 0.0; S = 3.0.
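A minimal sketch of this simulation in Python, assuming the leaky-integrator form given above and simple forward-Euler integration (the function names and step sizes are ours, not from the slides):

    import numpy as np
    import matplotlib.pyplot as plt

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    tau, D, w11, b, S = 0.2, 1.0, -0.5, 0.0, 3.0   # parameters from the slide

    dt, T = 0.001, 2.0
    steps = int(T / dt)
    m = 0.0                                  # initial membrane potential
    trace = np.empty(steps)
    for k in range(steps):
        x = sigmoid(D * (m + b))             # firing rate of the neuron
        m += dt * (-m + w11 * x + S) / tau   # forward-Euler step of the ODE
        trace[k] = m

    plt.plot(np.arange(steps) * dt, trace)
    plt.xlabel("t"); plt.ylabel("m(t)")
    plt.show()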

29 Fixed points with inhibitory self-connection. Finding the (stable or unstable) fixed points, i.e. the values of m at which dm/dt = 0. Parameters: tau = 0.2; D = 1; w_11 = -20; b = 0.0; S = 30.

30 Fixed points with inhibitory self-connection (continued). Same parameters: tau = 0.2; D = 1; w_11 = -20; b = 0.0; S = 30.

31 Fixed points with excitatory self-connection. Finding the (stable or unstable) fixed points. Parameters: tau = 0.2; D = 1; w_11 = 20; b = 0.0; S = -10.

32 Fixed points with excitatory self-connection. Parameters: tau = 0.2; D = 1; w_11 = 20; b = 0.0; S = -10. With an excitatory self-connection there are three fixed points; the neuron converges to one of them depending on the initial conditions.
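Under the same assumed model, the fixed points are the roots of f(m) = -m + w_11 \sigma(D(m + b)) + S. A sketch that locates all three numerically by scanning a grid for sign changes and refining each bracket with scipy's brentq:

    import numpy as np
    from scipy.optimize import brentq

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    tau, D, w11, b, S = 0.2, 1.0, 20.0, -0.0, -10.0

    def f(m):
        # dm/dt is proportional to f(m); fixed points satisfy f(m) = 0
        return -m + w11 * sigmoid(D * (m + b)) + S

    grid = np.linspace(-40, 40, 4001)
    vals = f(grid)
    fixed_points = []
    for i in range(len(grid) - 1):
        if vals[i] == 0.0:                 # root exactly on the grid
            fixed_points.append(grid[i])
        elif vals[i] * vals[i + 1] < 0:    # sign change: root in between
            fixed_points.append(brentq(f, grid[i], grid[i + 1]))

    # Stability: a fixed point m* is stable when f'(m*) < 0
    print(fixed_points)   # three roots for this excitatory self-connection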

33 Stable fixed point

34 Stable and unstable fixed points

35 Bifurcation. With w_11 = -20 the fixed point is stable; with w_11 = 20 there are both unstable and stable fixed points. By changing the value of w_11, the stability properties of the neuron change: the system has undergone a bifurcation.

36 Can we create a two-neuron oscillator? Yes, but it is not easy.

37 Does this network oscillate? No. (Two neurons coupled by mutual inhibitory connections.)

38 Does this network oscillate? No.

39 Two-neuron oscillator. Yes, with: tau_1 = tau_2 = 0.1; D = 1; b_1 = -2.75, b_2 = -1.75; w_11 = 4.5, w_12 = -1, w_21 = 1, w_22 = 4.5 (self-excitation on both neurons, one inhibitory and one excitatory cross-connection). See Beer (1995), Adaptive Behavior, Vol. 3, No. 4.
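A sketch of this two-neuron network with Beer's parameters, under the same assumed model and reading w_ij as the weight from neuron j to neuron i; the trajectory settles onto a limit cycle:

    import numpy as np
    import matplotlib.pyplot as plt

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    tau = np.array([0.1, 0.1])
    b   = np.array([-2.75, -1.75])
    D   = 1.0
    W   = np.array([[4.5, -1.0],    # w_11, w_12: into neuron 1
                    [1.0,  4.5]])   # w_21, w_22: into neuron 2

    dt, T = 0.001, 5.0
    steps = int(T / dt)
    m = np.zeros(2)
    trace = np.empty((steps, 2))
    for k in range(steps):
        x = sigmoid(D * (m + b))        # firing rates
        trace[k] = x
        m += dt * (-m + W @ x) / tau    # forward-Euler step (no external input)

    plt.plot(trace[:, 0], trace[:, 1])  # phase plot: x1 against x2
    plt.xlabel("x1"); plt.ylabel("x2")
    plt.show()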

40 Two-neuron oscillator.

41 Phase plot.

42 Two-neuron network: possible behaviors. Phase portraits can feature: one stable point; one unstable point; one saddle; one limit cycle; three stable points; two saddles; four stable points; one unstable point; four saddles. See Beer (1995), Adaptive Behavior, Vol. 3, No. 4.

43 Conclusion: even very simple leaky-integrator neural networks can exhibit rich dynamics.

44 Half-center oscillators. Brown suggested in 1914 that rhythms could be generated centrally (as opposed to peripherally) by systems of coupled neurons with reciprocal inhibition. Brown understood that mechanisms for producing the transitions between activity in the two halves of the circuit were required.

45 Four-neuron oscillator.

46 Parameters: D = 1; tau = [0.02, 0.02, 0.1, 0.1]; b = [3.0, 3.0, -3.0, -3.0]; w(1,2) = w(1,4) = w(2,1) = w(2,3) = -5; w(1,3) = w(2,4) = 5; w(3,1) = w(4,2) = -5.
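The two-neuron simulation loop above extends directly to this network; only the parameter arrays change. A sketch of the setup, reading w(i,j) as the weight from neuron j to neuron i (an assumption) and leaving unspecified weights at zero:

    import numpy as np

    tau = np.array([0.02, 0.02, 0.1, 0.1])
    b   = np.array([3.0, 3.0, -3.0, -3.0])
    D   = 1.0
    W   = np.zeros((4, 4))
    W[0, 1] = W[0, 3] = W[1, 0] = W[1, 2] = -5.0   # w(1,2), w(1,4), w(2,1), w(2,3)
    W[0, 2] = W[1, 3] = 5.0                        # w(1,3), w(2,4)
    W[2, 0] = W[3, 1] = -5.0                       # w(3,1), w(4,2)
    m = np.zeros(4)   # then integrate exactly as in the two-neuron sketch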

47 Four-neuron oscillator.

48 Modulation of a four-neuron oscillator.


50 Applications of a four-neuron oscillator. Each neuron's activation is governed by the leaky-integrator dynamics introduced above.

51 Applications of a four-neuron oscillator. Transition from a walking to a trotting and then a galloping gait, following an increase of the tonic input from 1 to 1.4 and 1.6, respectively.

52 Applications of a four-neuron oscillator. A simple circuit implements sitting and lying-down behaviors by sequential inhibition of the legs.

53 Applications of a four-neuron oscillator

54 How to design leaky-integrator neural networks? Options include: recurrent back-propagation with the use of an energy function (cf. Hopfield); genetic algorithms; linear regression (echo-state networks); guidance from dynamical systems theory.

55 Examples of leaky-integrator neural networks: six-legged locomotion control (Randall Beer and colleagues); lamprey and salamander locomotion (Auke Ijspeert and colleagues); echo-state networks (Herbert Jaeger and colleagues); associative memory (John Hopfield and colleagues).

56 Application of leaky-integrator neural networks: modeling human data. Components: a muscle model; coupled oscillators for basic cyclic motion and reflexes; a time-delay NN acting as an associative memory for storing sequences of activation.

57 Application of leaky-integrator neural networks: modeling human data. Muscle model; comparison of human data and simulated data.

58 Schematic setup of Echo State Network

59 Schematic setup of ESN (II). Signal flow: inputs (time series) → internal state → output (time series). Input weights: random values. Internal weights: random values. Output weights: trained.

60 How do we train W_out? It is a supervised learning algorithm: the training dataset consists of the input time series and the desired output time series.

61 Simply do a linear regression: linear regression on the (high-dimensional) space of the inputs AND the internal states. Geometrical illustration with a 3-unit network.
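A minimal echo-state-network sketch in numpy illustrating exactly this recipe: input and internal weights are drawn at random and left fixed, and W_out comes from a single least-squares regression on the inputs and internal states. All sizes, scalings and the toy task are our choices, not from the slides:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 1, 100
    W_in  = rng.uniform(-0.5, 0.5, (n_res, n_in))    # random, fixed
    W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))   # random, fixed
    # Scale the internal matrix to spectral radius < 1, the usual
    # heuristic for the echo-state property
    W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))

    def run_reservoir(u):
        """Collect internal states for an input time series u (T x n_in)."""
        states = np.zeros((len(u), n_res))
        x = np.zeros(n_res)
        for t in range(len(u)):
            x = np.tanh(W_in @ u[t] + W_res @ x)
            states[t] = x
        return states

    # Toy task: predict a sine wave one step ahead
    T = 1000
    u = np.sin(0.1 * np.arange(T))[:, None]
    d = np.sin(0.1 * (np.arange(T) + 1))         # desired output
    X = run_reservoir(u)
    Z = np.hstack([u, X])                        # inputs AND internal states
    W_out, *_ = np.linalg.lstsq(Z, d, rcond=None)
    print("training MSE:", np.mean((Z @ W_out - d) ** 2))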

62 Data acquisition

63 Network inputs and outputs. Blue line: desired output; red line: network output.

64 Neuron models (from abstract to realistic):
- Binary neurons, discrete time: Perceptron NNs, Hopfield network
- Real-number neurons, discrete time: BackProp NNs, Kohonen map

65 BACKPROPAGATION. A two-layer feed-forward neural network: inputs feed the input neurons, which feed the hidden neurons, which feed the output neurons. The target output of the hidden nodes is unknown; thus the error must be back-propagated from the output neurons to the hidden neurons.

66 BPRNN. Backpropagation has also been generalized to allow learning in recurrent neural networks (Elman- and Jordan-type RNNs), enabling the learning of time series.

67 Recurrent Neural Networks. JORDAN NETWORK: a recurrent neural network with input units, hidden units, an output layer, and context units. The context units store the content of the output layer, which is fed back as input at the next time step.

68 Recurrent Neural Networks. ELMAN NETWORK: the same architecture, but the context units store the content of the hidden units.
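A sketch of the Elman forward pass in numpy (all names and sizes are illustrative): the context units hold a copy of the previous hidden state and are fed back as additional input. A Jordan network would be obtained by copying the output layer instead.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 3, 5, 2

    W_ih = rng.normal(0, 0.1, (n_hid, n_in))    # input -> hidden
    W_ch = rng.normal(0, 0.1, (n_hid, n_hid))   # context -> hidden
    W_ho = rng.normal(0, 0.1, (n_out, n_hid))   # hidden -> output

    def elman_forward(inputs):
        """Run a sequence through the network; returns the output sequence."""
        context = np.zeros(n_hid)                # context units start at zero
        outputs = []
        for u in inputs:
            hidden = np.tanh(W_ih @ u + W_ch @ context)
            outputs.append(W_ho @ hidden)
            context = hidden.copy()              # context stores the hidden state
        return np.array(outputs)

    seq = rng.normal(size=(10, n_in))
    print(elman_forward(seq).shape)   # (10, 2)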

69 Recurrent Neural Networks. Application: associate sequences of sensorimotor perceptions, mapping robot perceptions to robot actions (context units, input units, hidden units, output layer).

70 Recurrent Neural Networks: Robotics Applications. Associating sequences of sensorimotor perceptions; generalization.

71 Recurrent Neural Networks: Robotics Applications. Associating sequences of sensorimotor perceptions; generalization.

72 Recurrent Neural Networks: Robotics Applications. Associating sequences of sensorimotor perceptions; generalization. Ito, Noda, Hashino & Tani, Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model, Neural Networks, April 2006.

73 Recurrent Neural Networks: Robotics Applications


75 Recurrent Neural Networks: Robotics Applications. Associating sequences of sensorimotor perceptions; generalization.


77 Neuron models (from abstract to realistic):
- Binary neurons, discrete time: Perceptron NNs, Hopfield network
- Real-number neurons, discrete time: BackProp NNs, Kohonen map
- Real-number neurons, continuous time: continuous-time recurrent NNs, echo-state networks, several CPG models
- Spiking neurons (integrate-and-fire): liquid-state machines, several computational-neuroscience models

78 Rate coding versus spike coding. Important question: is information in the brain encoded in the rates of spikes or in the timing of individual spikes? Answer: probably both! Rates encode the information sent to muscles, while visual processing can be done very quickly (~150 ms), with just a few spikes (Thorpe, Fize and Marlot, 1996, Nature).

79 Rate coding versus spike coding (figure: spike trains over time, comparing rate coding and spike coding).

80 Integrate-and-fire neuron. Integrate-and-fire models are like leaky-integrator models, but produce a spike when the membrane potential exceeds a threshold: they combine leaky integration and reset. See Spiking Neuron Models: Single Neurons, Populations, Plasticity, Gerstner and Kistler, Cambridge University Press, 2002.
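A sketch of a leaky integrate-and-fire neuron in this spirit (textbook form, cf. Gerstner and Kistler 2002; the constants are illustrative): the membrane potential integrates its input leakily and is reset each time it crosses the threshold.

    import numpy as np
    import matplotlib.pyplot as plt

    tau, R = 0.02, 1.0          # membrane time constant (s), resistance
    v_th, v_reset = 1.0, 0.0    # spike threshold and reset value
    I = 1.2                     # constant input current (above threshold)

    dt, T = 1e-4, 0.2
    steps = int(T / dt)
    v = 0.0
    spikes, trace = [], np.empty(steps)
    for k in range(steps):
        v += dt * (-v + R * I) / tau   # leaky integration of the input
        if v >= v_th:                  # threshold crossed: emit a spike...
            spikes.append(k * dt)
            v = v_reset                # ...and reset the membrane potential
        trace[k] = v

    plt.plot(np.arange(steps) * dt, trace)
    plt.xlabel("t (s)"); plt.ylabel("v(t)")
    plt.show()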

81 Neuron models (from abstract to realistic):
- Binary neurons, discrete time: Perceptron NNs, Hopfield network
- Real-number neurons, discrete time: BackProp NNs, Kohonen map
- Real-number neurons, continuous time: continuous-time recurrent NNs, echo-state networks, several CPG models
- Spiking neurons (integrate-and-fire): liquid-state machines, several computational-neuroscience models
- Biophysical models: squid-axon neuron (Hodgkin & Huxley), numerous computational-neuroscience models

82 Hodgkin and Huxley neuron model. A very influential model of the spiking property of a neuron, based on ionic currents. The details are beyond the scope of this course. (Figure: voltage versus time.)

83 Hodgkin and Huxley neuron model. A very influential model of the spiking property of a neuron, based on ionic currents.

84 Hodgkin and Huxley neuron model: REFERENCES.
Original paper: A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J. Physiol., 117(4): 500-544, 28 August 1952. http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1392413&blobtype=pdf
Recent update: Blaise Agüera y Arcas, Adrienne L. Fairhall and William Bialek, Computation in a Single Neuron: Hodgkin and Huxley Revisited, Neural Computation, 15(8): 1715-1749, 2003. http://www.mitpressjournals.org/doi/pdfplus/10.1162/08997660360675017

85 FURTHER READING I
Ito, Noda, Hashino & Tani, Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model, Neural Networks, April 2006. http://www.bdc.brain.riken.go.jp/~tani/papers/NN2006.pdf
H. Jaeger, The echo state approach to analysing and training recurrent neural networks, GMD Report 148, German National Research Institute for Computer Science, 2001. ftp://borneo.gmd.de/pub/indy/publications_herbert/EchoStatesTechRep.pdf
B. Mathayomchan and R. D. Beer, Center-crossing recurrent neural networks for the evolution of rhythmic behavior, Neural Comput., 14(9): 2043-2051, September 2002. http://www.mitpressjournals.org/doi/pdf/10.1162/089976602320263999
R. D. Beer, Parameter space structure of continuous-time recurrent neural networks, Neural Comput., 18(12): 3009-3051, December 2006. http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.12.3009
Pham, Q.C. and Slotine, J.J.E., Stable concurrent synchronization in dynamic system networks, Neural Networks, 20(1), 2007. http://web.mit.edu/nsl/www/preprints/Polyrhythms05.pdf
Billard, A. and Ijspeert, A.J. (2000) Biologically inspired neural controllers for motor control in a quadruped robot. In Proceedings of the International Joint Conference on Neural Networks, Como (Italy), July. http://lasa.epfl.ch/publications/uploadedFiles/AB_Ijspeert_IJCINN2000.pdf
Billard, A. and Mataric, M. (2001) Learning human arm movements by imitation: Evaluation of a biologically-inspired connectionist architecture. Robotics & Autonomous Systems 941, 1-16. http://lasa.epfl.ch/publications/uploadedFiles/AB_Mataric_RAS2001.pdf

86 FURTHER READING II
Herbert Jaeger and Harald Haas, Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication, Science, Vol. 304, No. 5667, pp. 78-80. http://www.sciencemag.org/cgi/reprint/304/5667/78.pdf
S. Psujek, J. Ames and R. D. Beer, Connection and coordination: the interplay between architecture and dynamics in evolved model pattern generators, Neural Comput., 18(3): 729-747, March 2006. http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.3.729

