Lyapunov Functions and Memory Justin Chumbley

Why do we need more than linear analysis? What is Lyapunov theory? – Its components? – What does it bring? Application: episodic learning/memory

Linearized stability of nonlinear systems: failures. Is the steady state stable when the eigenvalues are purely imaginary? – Theorem 8 doesn't say. What is the size and nature of the basin of attraction? – linearization only describes a small neighborhood of the steady state. Lyapunov theory gives a geometric interpretation of state-space trajectories.

Important geometric concepts (in 2D for convenience). State function – a scalar function U of the state with continuous partial derivatives – a landscape. The aim: define a landscape whose steady state sits at the bottom of a valley.

Positive definite state function: U = 0 at the steady state and U > 0 at every other point in a neighborhood of it.

e.g. U(x, y) = x^2 + y^2 has a unique singular point at 0. But U is not unique – many positive definite functions share the same minimum.

U defines the valley – but do state trajectories actually travel downhill? Ask for the temporal change of the positive definite state function along trajectories – time enters U only implicitly, through the state.

e.g. the N-dimensional case: by the chain rule, dU/dt = sum_i (dU/dx_i)(dx_i/dt) = gradU . f(x) along trajectories of x' = f(x).
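This downhill test can be checked numerically. A minimal Python sketch (my example, not from the slides), using a linear system with a stable steady state at the origin and U = x1^2 + x2^2:

```python
import numpy as np

# A linear system x' = f(x) with a stable steady state at the origin
def f(x):
    return np.array([-x[0] + x[1], -x[0] - x[1]])

def U(x):                      # positive definite state function
    return x[0]**2 + x[1]**2

def gradU(x):
    return np.array([2 * x[0], 2 * x[1]])

# Along a trajectory, time enters U only through the state, so by the
# chain rule dU/dt = gradU(x) . f(x); for this system it equals -2*U(x).
x = np.array([1.0, 0.5])
downhill = True
for _ in range(1000):
    downhill &= gradU(x) @ f(x) <= 0
    x = x + 0.01 * f(x)        # forward-Euler step, dt = 0.01

print(downhill, U(x))          # True, and U far below its initial value
```

Here gradU . f = -2U < 0 away from the origin, so the sign condition holds exactly; the loop just confirms it along a simulated trajectory.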

Lyapunov functions and asymptotic stability. Intuition – water flowing down a valley: all trajectories in a neighborhood approach the singular point as t -> infinity.

U is a Lyapunov function for the steady state if it satisfies (a) U is positive definite there, and (b) dU/dt <= 0 along trajectories nearby. Theorem 12: if such a U exists, the steady state is stable; if moreover dU/dt < 0 except at the steady state itself, it is asymptotically stable.

Ch 8: Hopf bifurcation. Van der Pol model for a heart-beat – analyzed at the bifurcation point (where the linearized eigenvalues are purely imaginary). At this point: (0,0) is the only steady state; linearized analysis can't be applied (purely imaginary eigenvalues); but the positive definite state function has computable time derivatives along trajectories.

dU/dt satisfies (b): dU/dt <= 0 everywhere, except on the x and y axes, where dU/dt = 0. But when x = 0, dx/dt = y is nonzero, so trajectories immediately move on to points where dU/dt < 0. So U is a Lyapunov function for the system, and the steady state at (0,0) is asymptotically stable. Conclusion: we have proven stability where linearization fails.
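The slide's equations are not preserved in this transcript, so the following Python sketch assumes a Van der Pol-type form x'' + eps*x^2*x' + x = 0, whose linearization at (0,0) has purely imaginary eigenvalues, with U = x^2 + y^2; the particular equation and the value of eps are my placeholders:

```python
import numpy as np

eps = 0.5   # nonlinear damping strength (assumed placeholder value)

# Assumed system: x' = y, y' = -x - eps * x**2 * y
# Linearization at (0,0) has eigenvalues +/- i, so Theorem 8 is silent.
def f(s):
    x, y = s
    return np.array([y, -x - eps * x**2 * y])

def U(s):                                  # U = x^2 + y^2
    return s[0]**2 + s[1]**2

# dU/dt = -2*eps*x^2*y^2 <= 0: zero only on the axes, which trajectories
# cannot remain on, so U creeps downhill and (0,0) is asymptotically stable.
s, dt = np.array([1.0, 0.0]), 0.01
U0 = U(s)
for _ in range(20000):                     # integrate to t = 200
    k1 = f(s)                              # classical RK4 step
    k2 = f(s + dt / 2 * k1)
    k3 = f(s + dt / 2 * k2)
    k4 = f(s + dt * k3)
    s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(U0, U(s))   # U has decayed, though slowly (no linear damping term)
```

The slow decay is the signature of a case linearization cannot decide: the contraction comes entirely from the nonlinear term.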

Another failure of Theorem 8: it only says that points 'sufficiently close' to an asymptotically stable steady state go there as t -> infinity. But U defines ALL points in the valley in which the steady state lies! – Intuition: any trajectory starting within the valley flows to the steady state.

Formally, with many steady states and basins: assume we have a Lyapunov function U for one steady state. It delimits a region R within which Theorem 12 holds. A constraint U < K (a sublevel set contained in R) then defines a subregion guaranteed to lie within the basin of attraction.
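A toy one-dimensional illustration of the sublevel-set idea (my example, not from the slides): for x' = -x + x^3 the steady state 0 has basin (-1, 1), and U = x^2 recovers exactly that bound:

```python
import numpy as np

def f(x):         # x' = -x + x**3: stable at 0, unstable at +/-1
    return -x + x**3

def U(x):         # candidate Lyapunov function
    return x**2

def dUdt(x):      # dU/dt = 2*x*f(x) = 2*x**2*(x**2 - 1)
    return 2 * x * f(x)

# Find where dU/dt fails to be negative (away from the steady state itself)
xs = np.linspace(-2, 2, 4001)
bad = xs[(np.abs(xs) > 1e-9) & (dUdt(xs) >= 0)]

# Largest safe sublevel set U < K: K = min of U over the 'bad' set
K = U(bad).min()
print(K)   # ~1.0, i.e. the certified region is |x| < 1
```

Here the certified sublevel set happens to equal the true basin; in general U < K gives only a lower bound.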

Where does U come from? There is no general rule. Another example: a system with divisive feedback.

Memory: declarative (episodic, semantic), procedural, …

Episodic memory (then learning)

The model: a network of 16*16 pyramidal cells – completely connected but not self-connected – plus 1 interneuron for feedback inhibition.

Aim – understand generalization/discrimination. Strategy – an input in the basin will be 'recognized', i.e. identified (asymptotically) with the stored pattern; Lyapunov theory lets us assess the basins of attraction. Notation: etc…

Theorem 14: the network dynamics admit a Lyapunov function, so every trajectory converges to a steady state.

For reference: this can be generalized to higher order.
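Theorem 14's equation is lost from this transcript. As a stand-in illustration of a network Lyapunov function, the sketch below uses the standard Hopfield-style energy E(s) = -1/2 s.W.s, which never increases under asynchronous threshold updates when the coupling W is symmetric with zero diagonal (completely connected, not self-connected); the network size and random weights are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # number of units (placeholder size)

# Symmetric coupling, zero diagonal: connected but not self-connected
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

def energy(s):                           # E(s) = -1/2 s.W.s
    return -0.5 * s @ W @ s

# Asynchronous threshold updates: a single-unit flip changes E by
# -(s_new - s_old) * (W[i] @ s) <= 0, so E is a Lyapunov function.
s = rng.choice([-1.0, 1.0], size=n)
E = [energy(s)]
for _ in range(50):                      # sweeps over all units
    for i in rng.permutation(n):
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    E.append(energy(s))

monotone = all(b <= a + 1e-9 for a, b in zip(E, E[1:]))
print(monotone, E[0], E[-1])             # True, and E only ever decreases
```

Every trajectory thus runs downhill in E and settles into some fixed point, which is what makes basin-of-attraction analysis of stored patterns possible.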

s ,

Pattern recognition (matlab)

Hebb rule. Empirical results – implicate cortical and hippocampal NMDA receptors – a millisecond-scale window for co-occurrence – presynaptic Glu release plus postsynaptic depolarisation by backpropagation from the postsynaptic axon (removal of the Mg ion block) -> chemical events change the synapse.

For simplicity: M = maximum firing rate – a synapse is modified only when both pre- and postsynaptic units fire above half-maximum. A modified synapse changes to a fixed value k. Synaptic change is irreversible. All pairs are symmetrically coupled.
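The rule can be sketched as follows; the values of M and k and the example firing-rate patterns are placeholders, not the slides' actual parameters:

```python
import numpy as np

M = 100.0   # maximum firing rate (placeholder)
k = 1.0     # fixed weight a modified synapse jumps to (placeholder)

def hebb_update(W, rates):
    """All-or-none Hebb rule: a synapse is set to k when BOTH its pre- and
    postsynaptic units fire above half-maximum (M/2). Changes are
    irreversible, coupling is symmetric, and there are no self-connections."""
    active = rates > M / 2.0
    pair = np.outer(active, active)          # symmetric co-activity matrix
    np.fill_diagonal(pair, False)            # not self-connected
    return np.where(pair, k, W)              # set to k; others unchanged

n = 6
W = np.zeros((n, n))

# One stimulus: units 0-2 fire above half-maximum
W = hebb_update(W, np.array([90.0, 80.0, 70.0, 10.0, 5.0, 0.0]))

# Multiple stimuli: a second pattern recruits units 1, 3 and 4;
# synapses formed by the first stimulus persist (irreversibility)
W = hebb_update(W, np.array([0.0, 60.0, 0.0, 95.0, 80.0, 0.0]))

print(W[0, 1], W[3, 4], W[0, 3], W[0, 0])   # 1.0 1.0 0.0 0.0
```

Because modified synapses never revert, each stimulus carves its pattern into W, and the symmetry of W is what lets the Lyapunov (energy) analysis above apply to the learned network.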

Learning (MATLAB) – one stimulus – multiple stimuli.

Pros and limitations of Lyapunov theory. Pros: more general stability analysis; basins of attraction; elegance and power. Limitations: no algorithm for finding U; U is not unique, and each choice gives only a lower bound on the basin.