Attractors in Neurodynamical Systems

Attractors in Neurodynamical Systems Włodzisław Duch, Krzysztof Dobosz, Department of Informatics, Nicolaus Copernicus University, Toruń, Poland. Google: W. Duch. ICNN, Hangzhou, Nov. 2009

Motivation Neural respiratory rhythm generator (RRG): hundreds of neurons; what is the system doing? Analysis of multi-channel, non-stationary time series data. The information is in the trajectories, but how can we see it in high dimensions? Component-based analysis: ICA, NNMF, wavelets ... Time-frequency analysis, bumps ... Recurrence plots, state portraits: limited information about trajectories. Fuzzy Symbolic Dynamics (FSD): visualize + understand. Understand FSD mappings using simulated data. A first look at some real data. Examples from simulations of semantic word recognition.

Brain Spirography: an example of pathological signal analysis.

Recurrence plots and trajectories Trajectory of a dynamical system (neural activities, average rates): $x(t) = [x_1(t), \dots, x_N(t)]$. Use time as an indicator of minimal distance: $R_{ij} = \Theta(\varepsilon - \|x(t_i) - x(t_j)\|)$; for discretized time steps a binary matrix $R_{ij}$ is obtained. Many measures of complexity and dynamical invariants may be derived from RPs: generalized entropies, correlation dimensions, mutual information, redundancies, etc. N. Marwan et al., Physics Reports 438 (2007) 237-329. Embedding of time series or multidimensional trajectories.
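A minimal NumPy sketch of this construction (the threshold eps and the test trajectory below are illustrative, not from the talk):

import numpy as np

def recurrence_matrix(x, eps):
    # x: (T, N) trajectory, one row per time step; eps: distance threshold.
    # R[i, j] = 1 when the states at times i and j are closer than eps.
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d < eps).astype(int)

# A noisy circular trajectory revisits itself once per period, which
# appears as diagonal stripes parallel to the main diagonal of R.
t = np.linspace(0, 4 * np.pi, 400)
x = np.stack([np.cos(t), np.sin(t)], axis=1) + 0.05 * np.random.randn(400, 2)
R = recurrence_matrix(x, eps=0.3)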

Recurrence plots Unfold the trajectory in time and show when it comes close to a previously visited point x(t).

Fuzzy Symbolic Dynamics (FSD) Trajectory of a dynamical system (neural activities, average rates): $x(t) = [x_1(t), \dots, x_N(t)]$. 1. Standardize the data. 2. Find cluster centers (e.g. by the k-means algorithm): m_1, m_2 ... 3. Use a non-linear mapping to reduce dimensionality, e.g. Gaussian membership functions $y_k(t) = \exp(-\|x(t) - m_k\|^2 / 2\sigma_k^2)$. Localized membership functions: sharp indicator functions => symbolic dynamics, strings; soft membership functions => fuzzy symbolic dynamics, dimensionality reduction => visualization.
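A minimal Python sketch of the three steps, assuming Gaussian membership functions with a single shared width sigma (the method allows other localized functions and per-center parameters):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def fsd_map(X, n_centers=2, sigma=1.0):
    # X: (T, N) trajectory; returns (T, n_centers) membership values.
    Xs = StandardScaler().fit_transform(X)                 # 1. standardize
    km = KMeans(n_clusters=n_centers, n_init=10).fit(Xs)   # 2. cluster centers m_k
    d = np.linalg.norm(Xs[:, None, :] - km.cluster_centers_[None, :, :], axis=-1)
    return np.exp(-d**2 / (2 * sigma**2))                  # 3. soft membership

# Plotting y[:, 0] against y[:, 1] gives the 2D FSD picture of the trajectory;
# replacing the Gaussian with an indicator (d < r) recovers crisp symbolic dynamics.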

Model: radial/linear sources Sources generate waves on a grid (figures: flat wave, radial wave). Relatively simple patterns arise, but slow sampling shows numerical artifacts. Examples: one and two radial waves.
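For illustration, model data of this kind can be generated directly; the grid size, source position, and wave speed below are arbitrary. Each grid point records a cosine whose phase lags with its distance from a radial source, and the rows of X form the multichannel series fed to the FSD mapping; using too few time steps T reproduces the sampling artifacts mentioned above.

import numpy as np

n, T, c = 8, 500, 3.0                          # grid size, time steps, wave speed
gx, gy = np.meshgrid(np.arange(n), np.arange(n))
r = np.sqrt((gx - 2.0)**2 + (gy - 5.0)**2)     # distance from a source at (2, 5)
t = np.linspace(0, 10, T)[:, None]             # sampled times, one row per step
X = np.cos(2 * np.pi * (t - r.ravel()[None, :] / c))   # (T, n*n) multichannel series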

Radial + plane waves Radial sources are turned on and off: 5 events plus transients.

Respiratory Rhythm Generator 3 layers, spiking neurons, output layer with 50 neurons

Sensitive differences?

Sensitive differences?

FSD development Optimization of the parameters of the membership functions to see more structure from the point of view of the relevant task. Learning: supervised clustering, projection pursuit based on the quality of clusters => projection onto interesting directions. Measures to characterize dynamics: position and size of the basins of attractors, transition probabilities, types of oscillations around each attractor (following the theory of recurrence plots). Visualization in 3D and higher dimensions (lattice projections etc.). Tests on model data and on real data.

BCI EEG example Data from two electrodes, BCI competition III dataset IIIa.

Alcoholics vs. controls Colors: from blue at the beginning of the sequence to red at the end. Left: normal subject; right: alcoholic. Task: two matched stimuli; 64 channels (3 after projection pursuit), 256 Hz sampling, 1 sec, 10 trials; a single trial is shown for the alcoholic.

Model of reading Emergent neural simulator: Aisa, B., Mingus, B., and O'Reilly, R. The Emergent neural modeling system. Neural Networks, 21, 1146-1152, 2008. 3-layer model of reading: orthography, phonology, semantics (a distribution of activity over 140 microfeatures of concepts), with hidden layers in between. Learning: mapping one of the 3 layers to the other two. Fluctuations around the final configuration = attractors representing concepts. How can we see the properties of their basins and their relations?

Attractors FSD representation of 140-dim. trajectories in 2 or 3 dimensions. Attractor landscape changes in time due to neuron accommodation.

2D attractors for words Dobosz K, Duch W, Fuzzy Symbolic Dynamics for Neurodynamical Systems. Neural Networks (in print, 2009). The same 8 words with more synaptic noise.

Depth of attractor basins Variance around the center of a cluster grows with synaptic noise; for narrow and deep attractors it will grow slowly, but for wide basins it will grow fast. Jumping out of the attractor basin reduces the variance due to inhibition of desynchronized neurons.
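A sketch of this variance measure, assuming trajectory segments recorded around the same attractor at increasing levels of synaptic noise (names and shapes are illustrative):

import numpy as np

def basin_variance(segments):
    # segments: list of (T, N) arrays, one per noise level.
    # Returns the mean per-dimension variance around the segment center;
    # slow growth across noise levels suggests a deep, narrow basin,
    # fast growth a wide one.
    return [np.var(seg, axis=0).mean() for seg in segments]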

3D attractors for words Non-linear visualization of the activity of the semantic layer with 140 units, for the model of reading that includes phonological, orthographic and semantic layers + hidden layers. Cost/wage and hind/deer have semantic associations, so their attractors are close to each other; but without neuron accommodation the attractor basins are tight and narrow, and poor generalization is expected. Training with more variance in the phonological and written forms of words may help to increase the attractor basins and improve generalization.

Connectivity effects With small synaptic noise (var = 0.02) the network starts by reaching an attractor, then moves on, creating a "chain of thoughts". In the same situation but with stronger recurrent connections within layers, fewer but larger attractors are reached, and more time is spent in each attractor.

Inhibition effects Increasing the inhibitory gain g_i from 0.9 to 1.1 reduces the attractor basins and simplifies trajectories. Prompting the system with a single word and following the noisy dynamics: not all attractors correspond to real words.

Exploration Same parameters but different runs: each time a single word is presented and the dynamics run on, exploring different attractors. As in molecular dynamics, a long time is needed to explore the various potential attractors, depending on priming (previous dynamics or context) and chance.

Neurons and dynamics Trajectories show spontaneous shifts of attention. Attention shifts may be impaired by deep and narrow attractor basins that entrap the dynamics: a dysfunction of leak channels (~15 types)? In memory models overspecific memories are then created (as in ASD): unusual attention to details, inability to generalize visual and other stimuli. Accommodation: if voltage-dependent K+ channels (~40 types) do not decrease depolarization in the normal way, attractors do not shrink. This should slow down attention shifts and reduce jumps to unrelated thoughts or topics (in comparison to the average person). Neural fatigue temporarily turns some attractors off, making all attractors that code significantly overlapping concepts inaccessible. This is a truly dynamic picture: the attractor landscape changes in time! What behavioral changes are expected, depending on connectivity, inhibition, accommodation dynamics, leak currents, etc.?

What can we learn? Visualization should give insight into the general behavior of neurodynamical systems; measures of complexity and dynamical invariants may be derived along the lines of recurrence plots. How many attractors can be identified? Where does the system spend most of its time, i.e. where is the trajectory most of the time? What are the properties of the basins of attractors (size, depth, time spent)? What are the probabilities of transitions between them (distances)? How fast do transitions occur? What types of oscillations occur around the attractors? Chaos? FSD shows a global mapping of the whole trajectory (do we want that?). Different conditions are more easily distinguished and interpreted than in recurrence plots, so FSD is potentially useful in classification and diagnosis.
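Several of these quantities follow directly once each state is assigned to its nearest attractor; a sketch, assuming the attractor centers are already known (e.g. from clustering the FSD map):

import numpy as np

def attractor_statistics(y, centers):
    # y: (T, D) low-dimensional trajectory; centers: (K, D) attractor positions.
    labels = np.argmin(np.linalg.norm(y[:, None, :] - centers[None, :, :], axis=-1), axis=1)
    K = len(centers)
    occupancy = np.bincount(labels, minlength=K) / len(labels)  # time spent per attractor
    trans = np.zeros((K, K))
    for a, b in zip(labels[:-1], labels[1:]):                   # count step-to-step moves
        trans[a, b] += 1
    trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1)    # transition probabilities
    return occupancy, trans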

Future plans Relations between FSD, symbolic dynamics, and recurrence plots. Simulated EEG models to understand how to interpret FSD plots. Other visualization methods: MDS, LLE, Isomap, LTSA, diffusion maps ... Effects of various component-based transformations: PCA, ICA, NNMF ... Supervised learning of membership function parameters to find interesting structures in low-dimensional maps: adding projection pursuit to find interesting views; projection pursuit in space and time to identify interesting segments. Combining projection pursuit with time-frequency analysis and FSD for EEG analysis. Systematic investigation of the influence of neurodynamics parameters on basins of attractors. BCI and other applications + many other things ...

Thank you for lending your ears ... Google: W. Duch => Papers & presentations. See also http://www.e-nns.org