1
Closed Loop Temporal Sequence Learning: Learning to act in response to sequences of sensor events
2
Overview of the different methods. You are here!
3
Natural temporal sequences in life and in “control” situations. Real life: heat radiation predicts the pain of touching a hot surface; the sound of prey may precede its smell, which in turn precedes its taste. Control: a force precedes a position change; an electrical (disturbance) pulse may precede a change in a controlled plant. Psychological experiment: Pavlovian and/or operant conditioning, where the bell precedes the food.
4
Early: “Bell”, Late: “Food”. What we had so far!
5
Temporal sequence (Pavlov, 1927): the Bell precedes the Food; Sensor 2 carries the conditioned input; the output is salivation. This is an open-loop system!
6
Adaptable Neuron Env. Closed loop Sensing Behaving
7
How do we assure behavioral & learning convergence? This is achieved by starting with a stable reflex-like action and learning to supersede it by an anticipatory action. Remove before being hit!
8
Robot application. Early: “Vision”, Late: “Bump”.
9
Reflex only (compare to an electronic closed-loop controller!). This structure assures initial (behavioral) stability (“homeostasis”). Think of a thermostat!
10
This is a closed-loop system before learning. The basic control structure: schematic diagram of a pure reflex loop, bump → retraction reflex.
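The pure reflex loop can be sketched in a few lines. This is a minimal illustration, not the robot's actual controller: the one-dimensional "environment", the gain values, and the function name are all assumptions. The point it demonstrates is that the bump sensor fires only on contact, so the reflex can act only AFTER the robot has hit the obstacle.

```python
# Minimal sketch of a pure reflex loop (illustrative names and gains).
# The robot drifts toward an obstacle; a retraction reflex fires on bump.

def reflex_loop(distance, steps=20, reflex_gain=0.8, drift=0.1):
    """Return the distance-to-obstacle trajectory under reflex-only control."""
    trajectory = []
    for _ in range(steps):
        bump = 1.0 if distance <= 1e-9 else 0.0   # x0: bump sensor (contact)
        distance += reflex_gain * bump - drift    # retraction vs. drift
        trajectory.append(distance)
    return trajectory

traj = reflex_loop(distance=0.5)
# The distance repeatedly reaches the obstacle: stability ("homeostasis"),
# but only through collisions.
```

This is exactly the "thermostat" situation of the slide above: the loop is stable, but the corrective action always comes after the error has occurred.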
11
Robot application. Learning goal: correlate the vision signals with the touch signals and navigate without collisions. Initially built-in behavior: a retraction reaction whenever an obstacle is touched.
12
Let's look at the reflex first.
13
Closed-loop TS learning. This is an open-loop system; this is a closed-loop system before learning; this is a closed-loop system during learning. The T represents the temporal delay between vision and bump.
14
Let's see how learning continues.
15
Closed-loop TS learning: this is the system after learning. Anatomically this loop still exists, but ideally it should never be active again!
16
What has the learning done? Elimination of the late reflex input corresponds mathematically to the condition x0 = 0. This was the major convergence condition for the ISO/ICO rules (the other one was T = 0). What is interesting here is that the learning-induced behavior self-generates this condition. This is non-trivial, as it assures that the synaptic weight AND the behavior stabilize at exactly the same moment in time.
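The x0 = 0 condition can be made concrete with a few lines of code. This is a sketch with made-up signals and learning rate, not the actual experiment: the ICO rule changes the weight in proportion to the early input times the DERIVATIVE of the reflex input x0, so once behavior prevents the reflex from firing, x0 stays 0, its derivative is 0, and the weight stops moving at the same moment.

```python
# Sketch: ICO-style weight change stops exactly when the reflex input
# x0 vanishes (the x0 = 0 convergence condition). Signals are synthetic.

mu = 0.1                                  # learning rate (assumed)
x1 = [0.0, 1.0, 1.0, 0.0, 0.0]            # early, predictive input
x0_before = [0.0, 0.0, 1.0, 1.0, 0.0]     # reflex still fires
x0_after = [0.0, 0.0, 0.0, 0.0, 0.0]      # reflex eliminated: x0 = 0

def ico_weight_change(x0, x1, mu):
    dw = 0.0
    for t in range(1, len(x0)):
        dx0 = x0[t] - x0[t - 1]           # derivative of reflex input
        dw += mu * x1[t] * dx0            # ICO: dw proportional to u1 * u0'
    return dw

print(ico_weight_change(x0_before, x1, mu))  # 0.1: weight still changes
print(ico_weight_change(x0_after, x1, mu))   # 0.0: learning has stopped
```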
17
This is the system after learning. What has happened in engineering terms? The inner pathway has now become a pure feed-forward path.
18
Formally: a delta-pulse disturbance D. Calculations in the Laplace domain!
19
Phase shift: the learner has learned the inverse transfer function of the world and can therefore compensate the disturbance D at the summation node, where the reflex path's delayed contribution e^(-sT) through P1 meets the learned path's inverted contribution -e^(-sT), and the sum becomes 0.
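The cancellation on this slide can be written out explicitly. This is a reconstruction from the slide's symbols (D, P1, e^(-sT), 0); the sign convention and the placement of P1 are assumptions:

```latex
% Disturbance D drives the plant P_1 and reaches the summation node
% through the reflex path with delay T; after learning, the anticipatory
% path supplies the sign-inverted, equally delayed contribution:
X(s) = D(s)\,P_1(s)\,e^{-sT} + D(s)\,P_1(s)\,\bigl(-e^{-sT}\bigr) = 0
```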
20
The outer path has, through learning, built a forward model of the inner path. This is a kind of "holy grail" for engineers, called "model-free feed-forward compensation". For example: if you want to keep the temperature in an outdoor container constant, you can employ a thermostat. You may, however, also first measure how the sun warms the liquid in the container, relating outside temperature to inside temperature (a calibration curve, "Eichkurve"). From this curve you can extract a control signal for cooling the liquid and react BEFORE the liquid inside starts warming up. As you react BEFORE, this is called feed-forward compensation (FFC). As you have used a pre-measured curve, you have performed model-based FFC. Our robot learns the model itself, hence it performs model-free FFC.
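The container example can be sketched directly. The calibration values, gains, and function names below are illustrative assumptions, not a real controller; the contrast is what matters: feedback reacts to an error that already exists, feed-forward anticipates it.

```python
# Sketch: feedback control (thermostat) vs. feed-forward compensation
# for the container example. All numbers are illustrative assumptions.

def thermostat(inside_temp, setpoint=20.0, gain=0.5):
    """Feedback: acts only AFTER the inside temperature has deviated."""
    return gain * (setpoint - inside_temp)       # cooling/heating command

def feed_forward(outside_temp, calibration):
    """Feed-forward: acts BEFORE, using a pre-measured calibration curve
    (the "Eichkurve") from outside temperature to cooling command."""
    return calibration.get(round(outside_temp), 0.0)

# Assumed calibration measurements: outside temperature -> cooling command.
calibration = {25: -1.0, 30: -2.5, 35: -4.5}

print(thermostat(23.0))                 # -1.5: reacts to the existing error
print(feed_forward(30.2, calibration))  # -2.5: anticipates the warming
```

In the slide's terms, the lookup table makes this model-based FFC; the robot instead learns the mapping itself, which is what makes its compensation model-free.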
21
Example of ISO instability as compared to ICO stability! Remember the auto-correlation instability of ISO? ISO: dw/dt = μ u1 v′; ICO: dw/dt = μ u1 u0′.
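The difference between the two rules can be shown numerically. This is a sketch with synthetic signals and assumed constants: ISO correlates the early input with the derivative of the OUTPUT v, so even when the reflex input is silent (x0 = 0), the autocorrelation term w1·u1·u1′ keeps driving the weight; ICO uses the derivative of the reflex INPUT and is therefore quiet.

```python
# Sketch contrasting ISO and ICO weight dynamics (synthetic signals,
# assumed learning rate and weights).

mu, w0 = 0.05, 1.0
u1 = [0.0, 0.5, 1.0, 0.5, 0.0]    # early input pulse
u0 = [0.0, 0.0, 0.0, 0.0, 0.0]    # reflex input already silent (x0 = 0)

def total_drift(rule):
    w1, prev_v, drift = 0.5, 0.0, 0.0
    for t in range(1, len(u1)):
        v = w1 * u1[t] + w0 * u0[t]        # output
        dv = v - prev_v                    # v'
        du0 = u0[t] - u0[t - 1]            # u0'
        dw = mu * u1[t] * (dv if rule == "ISO" else du0)
        w1 += dw
        drift += abs(dw)
        prev_v = v
    return drift

print(total_drift("ISO") > 0)   # True: ISO weight keeps drifting via v'
print(total_drift("ICO"))       # 0.0: ICO weight is frozen
```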
23
Old ISO-Learning
24
New ICO-Learning
25
Full compensation. One-shot learning. Overcompensation.
26
Statistical analysis: conventional differential Hebbian learning. One-shot learning (in some instances); highly unstable.
27
Statistical analysis: input correlation learning. Stable in a wide domain; one-shot learning (common).
28
Stable learning is possible with an about 10-100-fold higher learning rate.
29
Two more applications: RunBot, with different convergence conditions! Walking as such is a very difficult problem and requires a good, multi-layered control structure; only on top of this can learning take place.
30
An excursion into multi-layered neural-network control. Some statements: animals are behaving agents. They receive inputs from MANY sensors and have to control the reaction of MANY muscles. This is a terrific problem, and it is solved by our brain through many control layers. Attempts to achieve this with robots have so far not been very successful, as network control is somewhat fuzzy and unpredictable.
31
Adaptive control during walking: three loops. Central control (brain); spinal cord (oscillations); motor neurons & sensors (reflex generation); muscles & skeleton (biomechanics). Signals: muscle length, ground contact. Functions: step control, terrain control.
32
Muscles & skeleton (biomechanics): self-stabilization by passive biomechanical properties (pendulum).
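The pendulum idea can be made quantitative with a back-of-the-envelope calculation. This is a sketch under strong assumptions: the leg length is made up, and a real leg is a compound pendulum, not the point-mass pendulum used here. It nonetheless shows why passive dynamics set a natural walking cadence.

```python
# Sketch: a leg swinging passively behaves roughly like a pendulum with
# natural period T = 2*pi*sqrt(L/g). Passive walkers exploit this
# self-stabilizing dynamic instead of actuating every joint.
import math

def natural_period(leg_length_m, g=9.81):
    """Natural swing period of a simple (point-mass) pendulum."""
    return 2.0 * math.pi * math.sqrt(leg_length_m / g)

print(round(natural_period(0.9), 2))  # 1.9: roughly a ~2 s swing cycle
```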
33
Passive Walking Properties
34
RunBot's network of the lowest loop: motor neurons & sensors (reflex generation); muscles & skeleton (biomechanics); muscle length.
35
Leg Control of RunBot Reflexive Control (cf. Cruse)
36
Instantaneous Parameter Switching
37
Body (UBC) Control of RunBot
38
Central control (brain): terrain control. A long-loop reflex of one of the upper loops. Before or during a fall: leaning forward of trunk and arms.
39
Central control (brain): terrain control. Long-loop reflex of the uppermost loop: forward-leaning UBC, backward-leaning UBC.
40
The Leg Control as Target of the Learning
41
Learning in RunBot Differential Hebbian Learning related to Spike-Timing Dependent Plasticity
42
RunBot: Learning to climb a slope Spektrum der Wissenschaft, Sept. 06
43
Human walking versus machine walking ASIMO by Honda
44
(Figure: RunBot's joints on the lower floor, the ramp, and the upper floor; labeled “Falls” vs. “Manages”.)
45
Change of Gait when walking upwards
46
Learning-relevant parameters: too late vs. early enough. x0 = 0: the weights stabilize together with the behaviour.
47
A different learning control circuitry. AL (AR): stretch receptor for the anterior angle of the left (right) hip; GL (GR): sensor neuron for ground contact of the left (right) foot; EI (FI): extensor (flexor) reflex interneuron; EM (FM): extensor (flexor) reflex motor neuron; ES (FS): extensor (flexor) reflex sensor neuron.
48
Stability of STDP: motor-neuron signal structure (spike rate), x0 and x1 for the left and right leg. Learning reverses the pulse sequence.
50
The actual experiment. Note: here our convergence condition has been T = 0 (oscillatory!) and NOT, as usual, x0 = 0.
51
Speed change by learning. Note the oscillations! T ≈ 0!
54
Stable learning is possible with an about 10-100-fold higher learning rate. ISO: dw/dt = μ u1 v′; ICO: dw/dt = μ u1 u0′.
55
Driving school: learning to follow a track and to develop receptive fields (RFs) in a closed-loop behavioral context.
56
ISO learning for temporal sequence learning (Porr and Wörgötter, 2003). Central features: filtering of all inputs with low-pass filters (a membrane property); reproduces the asymmetric weight-change curve. The weight change is the derivative of the output times the filtered input. Differential Hebbian learning used for STDP.
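The two central features (low-pass filtered inputs and the differential Hebbian update) can be combined in a short sketch. The filter constant, learning rate, and pulse timing are illustrative assumptions; the point is that when the early input x1 precedes the reflex input x0 by T, the rule dw1 ∝ u1·v′ grows the predictive weight.

```python
# Sketch of ISO-style learning with low-pass filtered inputs (the
# "membrane property"). All constants are illustrative assumptions.

def lowpass(signal, alpha=0.3):
    """First-order low-pass filter (crude membrane model)."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

T = 3                                                  # delay x1 -> x0
x1 = [1.0 if t == 2 else 0.0 for t in range(30)]       # early input (CS)
x0 = [1.0 if t == 2 + T else 0.0 for t in range(30)]   # late reflex (US)
u1, u0 = lowpass(x1), lowpass(x0)

mu, w0, w1, prev_v = 0.5, 1.0, 0.0, 0.0
for t in range(30):
    v = w1 * u1[t] + w0 * u0[t]       # output
    w1 += mu * u1[t] * (v - prev_v)   # ISO rule: dw1 = mu * u1 * v'
    prev_v = v

print(w1 > 0)  # True: the early input has become predictive
```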
57
Differential Hebbian learning rule: the transfer of a computational-neuroscience approach to a technical domain. Modelling STDP (plasticity) and using this approach, in a simplified way, to control behavioral learning. An ecological approach to some aspects of theoretical neuroscience.
58
Adding a secondary loop: this is a closed-loop system before learning and during learning. This delay constitutes the temporal sequence of sensor events! Pavlov in “real life”!
59
What has happened to the system during learning? The primary reflex reaction has effectively been eliminated and replaced by an anticipatory action.