Attractors in Neurodynamical Systems
Włodzisław Duch, Krzysztof Dobosz
Department of Informatics, Nicolaus Copernicus University, Toruń, Poland
Attractors in Neurodynamical Systems
Włodzisław Duch, Krzysztof Dobosz
Department of Informatics, Nicolaus Copernicus University, Toruń, Poland
Google: W. Duch
ICNN, Hangzhou, Nov 2009

Motivation
Neural respiratory rhythm generator (RRG): hundreds of neurons; what is the system doing?
Analysis of multi-channel, non-stationary time series data.
Information is in the trajectories, but how to see it in high dimensions?
Component-based analysis: ICA, NNMF, wavelets...
Time-frequency analysis, bumps...
Recurrence plots, state portraits: limited information about trajectories.
Fuzzy Symbolic Dynamics (FSD): visualize + understand.
1. Understand FSD mappings using simulated data.
2. A first look at some real data.
3. Examples from simulations of semantic word recognition.

Brain Spirography: an example of pathological signal analysis.

Recurrence plots and trajectories
Trajectory of a dynamical system (neural activities, average rates): x(t) = [x_1(t), ..., x_N(t)].
Use time as an indicator of minimal distance: for discretized time steps the binary matrix
R_ij = Θ(ε − ||x(t_i) − x(t_j)||)
is obtained, where Θ is the Heaviside step function and ε a distance threshold.
Many measures of complexity and dynamical invariants may be derived from RPs: generalized entropies, correlation dimensions, mutual information, redundancies, etc.
N. Marwan et al., Phys. Reports 438 (2007).
Embedding of time series or multidimensional trajectories.
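The thresholded-distance construction above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' tooling; the toy circular orbit, the threshold value and all names are assumptions made for the example:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix R_ij = Theta(eps - ||x(t_i) - x(t_j)||)
    for a trajectory x of shape (T, d)."""
    # pairwise Euclidean distances between all pairs of time points
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(int)

# toy trajectory: two turns around a circle in 2D (a periodic orbit)
t = np.linspace(0, 4 * np.pi, 200)
traj = np.stack([np.cos(t), np.sin(t)], axis=1)
R = recurrence_matrix(traj, eps=0.1)
# the main diagonal is always recurrent; off-diagonal lines parallel
# to it mark the period of the orbit (here ~100 samples)
```

For a periodic orbit the plot shows diagonal stripes at multiples of the period; for noisy or chaotic dynamics the line structure degrades, which is what the RP-based complexity measures quantify.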

Recurrence plots
Unfold the trajectory at time t and show when it comes close to x(t) again.

Fuzzy Symbolic Dynamics (FSD)
Trajectory of a dynamical system (neural activities, average rates):
1. Standardize the data.
2. Find K cluster centers (e.g. with the k-means algorithm): μ_1, ..., μ_K.
3. Use a non-linear mapping to reduce dimensionality, e.g. Gaussian membership functions y_k(t) = exp(−||x(t) − μ_k||² / 2σ_k²).
Localized membership functions:
sharp indicator functions => symbolic dynamics, strings;
soft membership functions => fuzzy symbolic dynamics, dimensionality reduction => visualization.
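The three FSD steps can be sketched as follows. This is a minimal illustration, assuming Gaussian membership functions for the soft case and a hand-rolled k-means; the toy two-region trajectory, the width σ and all names are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy high-D "trajectory": noisy oscillation between two regions in 10-D
T, D = 400, 10
a, b = rng.normal(size=D), rng.normal(size=D)
phase = (np.sin(np.linspace(0, 8 * np.pi, T)) > 0)[:, None]
X = np.where(phase, a, b) + 0.1 * rng.normal(size=(T, D))

# 1. standardize each channel
X = (X - X.mean(0)) / X.std(0)

# 2. find K cluster centers with a few Lloyd (k-means) iterations
K = 2
centers = X[rng.choice(T, K, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.stack([X[labels == k].mean(0) if np.any(labels == k)
                        else centers[k] for k in range(K)])

# 3. soft membership: Gaussian of the distance to each center
sigma = 2.0   # assumed membership width
dist = np.linalg.norm(X[:, None] - centers[None], axis=-1)
Y = np.exp(-dist ** 2 / (2 * sigma ** 2))   # (T, K): the FSD trajectory
```

Each row of `Y` is one point of the reduced trajectory; with K = 2 or 3 it can be plotted directly, which is the visualization used throughout the following slides.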

Model, radial/linear sources
Sources generate waves on a grid: a flat (plane) wave and radial waves.
Relatively simple patterns arise, but slow sampling shows numerical artifacts. Examples: one and two radial waves.

Radial + plane waves
Radial sources are turned on and off: 5 events + transients.

Respiratory Rhythm Generator 3 layers, spiking neurons, output layer with 50 neurons

Sensitive differences?

FSD development
o Optimization of the parameters of membership functions to see more structure from the point of view of the relevant task.
o Learning: supervised clustering, projection pursuit based on the quality of clusters => projection on interesting directions.
o Measures to characterize dynamics: position and size of basins of attractors, transition probabilities, types of oscillations around each attractor (following the theory of recurrence plots).
o Visualization in 3D and higher dimensions (lattice projections etc.).
o Tests on model data and on real data.

BCI EEG example
Data from two electrodes, BCI IIIa.

Alcoholics vs. controls
Colors: from blue at the beginning of the sequence to red at the end. Left: normal subject; right: alcoholic. Task: two matched stimuli; 64 channels (3 after PP), 256 Hz sampling, 1 sec, 10 trials; single st alc.

Model of reading
Emergent neural simulator: Aisa, B., Mingus, B., and O'Reilly, R. The emergent neural modeling system. Neural Networks, 21.
3-layer model of reading: orthography, phonology, semantics, or distribution of activity using 140 microfeatures of concepts; hidden layers in between.
Learning: mapping one of the 3 layers to the other two.
Fluctuations around the final configuration = attractors representing concepts. How to see the properties of their basins and their relations?

Attractors
FSD representation of 140-dimensional trajectories in 2 or 3 dimensions. The attractor landscape changes in time due to neuron accommodation.

2D attractors for words Dobosz K, Duch W, Fuzzy Symbolic Dynamics for Neurodynamical Systems. Neural Networks (in print, 2009). Same 8 words, more synaptic noise.

Depth of attractor basins
Variance around the center of a cluster grows with synaptic noise: for narrow, deep attractors it grows slowly, but for wide basins it grows fast. Jumping out of the attractor basin reduces the variance due to inhibition of desynchronized neurons.
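The variance-based probe of basin depth can be illustrated on the simplest possible attractor: an overdamped linear system driven by noise (an Ornstein-Uhlenbeck process). This is a toy stand-in for the network dynamics, not the reading model itself; stiffness plays the role of basin narrowness, and all names and parameter values are assumptions of the sketch:

```python
import numpy as np

def stationary_variance(stiffness, noise, steps=20000, dt=0.01, seed=1):
    """Simulate overdamped dynamics x' = -stiffness*x + noise (an
    Ornstein-Uhlenbeck process) and return the empirical variance
    around the attractor at x = 0."""
    rng = np.random.default_rng(seed)
    x, xs = 0.0, []
    for _ in range(steps):
        x += -stiffness * x * dt + noise * np.sqrt(dt) * rng.normal()
        xs.append(x)
    return np.var(xs[steps // 2:])   # discard the transient

# deep/narrow basin (large stiffness) vs shallow/wide basin
v_narrow = stationary_variance(stiffness=5.0, noise=0.5)
v_wide = stationary_variance(stiffness=0.5, noise=0.5)
# the same noise produces much larger variance in the wide basin
```

For this linear toy model the stationary variance is noise²/(2·stiffness), so sweeping the noise level and watching how fast the cluster variance grows gives a direct readout of basin narrowness, which is the idea behind the slide.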

3D attractors for words
Non-linear visualization of the activity of the semantic layer (140 units) in the model of reading, which includes phonological, orthographic and semantic layers + hidden layers.
Cost/wage and hind/deer have semantic associations, so their attractors are close to each other, but without neuron accommodation the attractor basins are tight and narrow, and poor generalization is expected.
Training with more variance in the phonological and written forms of words may help to enlarge attractor basins and improve generalization.

Connectivity effects
Same situation, but recurrent connections within layers are stronger: fewer but larger attractors are reached, and more time is spent in each attractor. With small synaptic noise (var = 0.02) the network starts by reaching an attractor and then moves on, creating a "chain of thoughts".

Inhibition effects
Prompting the system with a single word and following the noisy dynamics: not all attractors are real words. Increasing g_i from 0.9 to 1.1 reduces the attractor basins and simplifies trajectories.

Exploration
As in molecular dynamics, a long time is needed to explore the various potential attractors, depending on priming (previous dynamics or context) and chance. Same parameters but different runs: each time a single word is presented and the dynamics run, exploring different attractors.

Neurons and dynamics
Trajectories show spontaneous shifts of attention.
o Attention shifts may be impaired due to deep and narrow attractor basins that entrap the dynamics: a dysfunction of leak channels (~15 types)? In memory models overspecific memories are created (as in ASD): unusual attention to details, inability to generalize visual and other stimuli.
o Accommodation: voltage-dependent K+ channels (~40 types) do not decrease depolarization in the normal way, so attractors do not shrink. This should slow down attention shifts and reduce jumps to unrelated thoughts or topics (in comparison to the average person).
Neural fatigue temporarily turns some attractors off, making all attractors that code significantly overlapping concepts inaccessible.
This is a truly dynamic picture: the attractor landscape changes in time!
What behavioral changes are expected depending on connectivity, inhibition, accommodation dynamics, leak currents, etc.?

What can we learn?
o Visualization should give insight into the general behavior of neurodynamical systems; measures of complexity and dynamical invariants may be derived along the lines of recurrence plots.
o How many attractors can be identified?
o Where does the system spend most of its time?
o What are the properties of the basins of attractors (size, depth, time spent)?
o What are the probabilities of transitions between them (distances)?
o How fast do the transitions occur?
o What types of oscillations occur around the attractors? Chaos?
o FSD shows a global mapping of the whole trajectory (do we want that?).
o Different conditions are more easily distinguished and interpreted than in recurrence plots; potentially useful in classification and diagnosis.
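Several of the questions above (occupancy, transition probabilities between attractors) reduce to simple statistics once each trajectory point is labeled by its nearest attractor center. A hypothetical sketch, with an artificial label sequence standing in for a real labeled trajectory:

```python
import numpy as np

def attractor_stats(labels, n):
    """Occupancy fractions and between-attractor transition probabilities
    from a sequence of per-time-step attractor labels in 0..n-1."""
    occupancy = np.bincount(labels, minlength=n) / len(labels)
    trans = np.zeros((n, n))
    for a, b in zip(labels[:-1], labels[1:]):
        if a != b:                      # count only real jumps
            trans[a, b] += 1
    row = trans.sum(1, keepdims=True)
    probs = np.divide(trans, row, out=np.zeros_like(trans), where=row > 0)
    return occupancy, probs

# toy label sequence: system dwells in 0, visits 1, returns, visits 2
seq = np.array([0] * 50 + [1] * 20 + [0] * 50 + [2] * 30)
occ, P = attractor_stats(seq, 3)
```

Dwell-time distributions per attractor can be read off the same label sequence by measuring run lengths; together with the variance probe of basin depth this answers most of the quantitative questions on the slide.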

Future plans
Relations between FSD, symbolic dynamics, and recurrence plots.
Simulated EEG models to understand how to interpret FSD plots.
Other visualization methods: MDS, LLE, Isomap, LTSA, diffusion maps...
Effects of various component-based transformations: PCA, ICA, NNMF...
Supervised learning of membership function parameters to find interesting structures in low-dimensional maps: adding projection pursuit to find interesting views; projection pursuit in space and time to identify interesting segments.
Combining projection pursuit with time-frequency analysis and FSD for EEG analysis.
Systematic investigation of the influence of neurodynamics parameters on basins of attractors.
BCI and other applications + many other things...

Thank you for lending your ears...
Google: W. Duch => Papers & presentations