Exploring the Landscape of Brain States
Włodzisław Duch & Krzysztof Dobosz (Nicolaus Copernicus University, Toruń, Poland), Aleksandar Jovanovic (University of Belgrade, Serbia), W. Klonowski (Nałęcz Institute of Biocybernetics & Biomedical Engineering, Polish Academy of Sciences, Warsaw, Poland).

References
1. M. Spivey, Continuity of Mind. Oxford University Press, 2007.
2. Dobosz K, Duch W, Understanding Neurodynamical Systems via Fuzzy Symbolic Dynamics. Neural Networks (2009).
3. Dobosz K, Duch W, Fuzzy Symbolic Dynamics for Neurodynamical Systems. Lecture Notes in Computer Science 5164, 2008.
4. Dobosz K, Duch W, Global Visualization of Neural Dynamics. Neuromath Workshop, Jena, Germany, 2008.
5. Duch W, Consciousness and Attention in Autism Spectrum Disorders. Coma and Consciousness: Clinical, Societal and Ethical Implications, Berlin, 4-5 June 2009, p. 46.
Google: W Duch for papers/presentations.

Introduction
The goal is to understand neurodynamics: to identify attractor states and their properties, and to understand how neural processes give rise to attention and conscious mental states during spontaneous thinking or mind wandering. Using a trained, biologically plausible neural network model that has learned associations between the phonology, orthography and semantics of many words, implemented in the Emergent simulator, we investigate the landscape of attractor states using fuzzy symbolic dynamics visualization and recurrence plots. Hypothesis: transient states are not perceived in a conscious way; perception requires the formation of a quasi-stable state, in which the trajectory of the system falls into the basin of some attractor, and transitions between attractors generate the "stream of consciousness".

Model of Reading
Emergent neural simulator: Aisa, B., Mingus, B., and O'Reilly, R., The Emergent neural modeling system. Neural Networks 21, 2008. A 3-layer model of reading: orthography, phonology and semantics (a distribution of activity over 140 microfeatures of concepts), with hidden layers in between. Learning: mapping one of the 3 layers to the other two. Fluctuations around the final configuration show the basins of attractors representing concepts. How can their properties and relations be seen? Trajectories of the activity of the 140 neurons in the semantic layer, subject to synaptic noise and neural fatigue, oscillate in attractor basins that correspond to word semantics. Trajectories of 4 pairs of correlated words are shown, with the concrete words of the first two pairs (flag, coat, hind, deer) marked in blue, and the abstract words of the last two pairs (wage, cost, loss, gain) marked in red-yellow.

Fuzzy Symbolic Dynamics
Fuzzy Symbolic Dynamics (FSD) is used to:
1. Reduce the dimensionality of the trajectory x(t) and visualize it in 2 or 3 dimensions.
2. Estimate the properties of attractors: number, position, strength.
FSD can be combined with time-frequency or component analysis.
Algorithm:
1. Standardize the data.
2. Find cluster centers (e.g. with the k-means algorithm): $\mu_1, \mu_2, \dots$
3. Use a non-linear mapping to reduce dimensionality, e.g. Gaussian membership functions $y_k(t) = \exp(-\|x(t)-\mu_k\|^2 / 2\sigma_k^2)$.
Reference points $\mu_i$ may be optimized to resolve minimal distances; see the sketch after this list.
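A minimal sketch of the FSD pipeline in Python, assuming Gaussian membership functions for the non-linear mapping and k-means for the reference points; variable and function names are illustrative, not taken from the original implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def fsd_map(X, n_ref=2, sigma=1.0, seed=0):
    """Fuzzy Symbolic Dynamics sketch: map a trajectory X (time x neurons)
    to n_ref membership coordinates suitable for a 2D or 3D plot."""
    # 1. Standardize each dimension of the trajectory.
    Xs = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    # 2. Find reference points, e.g. k-means cluster centers mu_k.
    mu = KMeans(n_clusters=n_ref, n_init=10, random_state=seed).fit(Xs).cluster_centers_
    # 3. Non-linear mapping: Gaussian membership of x(t) in each reference point.
    d2 = ((Xs[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # squared distances to mu_k
    return np.exp(-d2 / (2.0 * sigma ** 2))                    # shape: (time, n_ref)

# Usage: Y = fsd_map(trajectory, n_ref=2); plotting Y[:, 0] against Y[:, 1]
# gives the 2D view of the trajectory; attractor basins appear as dense knots.
```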
Recurrence Plots
Eckmann et al. (1987) introduced recurrence plots, a general method for the analysis of dynamical systems. A trajectory $x_i = x(t_i)$ returns to almost the same area (within a distance $\varepsilon$) at some other point in time, creating the non-zero elements (black dots in the plot) of the recurrence matrix:

$R_{ij} = \Theta(\varepsilon - \|x_i - x_j\|)$

where $\Theta$ is the Heaviside step function. With some experience such plots allow for the identification of many interesting behaviors of the system. However, these plots depend strongly on the choice of $\varepsilon$ and on the choice of the time step $\Delta t = t_{i+1} - t_i$. In the analysis of high-dimensional dynamical systems it is better to plot all distances between trajectory points, using a color scale to encode the distances. The recurrence matrix is then simply:

$D_{ij} = \|x_i - x_j\|$

Such recurrence plots may be smoothed using a non-linear distance function $\|\cdot\|$, capturing the essential information about the system's behavior, especially its attractor dynamics. Recurrence plots have wide applications in the analysis of time series, including high-dimensional neurodynamical systems, but their interpretation is not always easy; FSD complements them. Both variants are sketched below.
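A minimal sketch of both variants in Python (the thresholded recurrence matrix and the color-coded distance matrix); the toy trajectory and all names are illustrative assumptions, not the model from the poster:

```python
import numpy as np
import matplotlib.pyplot as plt

def recurrence_matrices(X, eps=0.1):
    """X: trajectory (time x dimensions). Returns the thresholded
    recurrence matrix R and the full distance matrix D."""
    # All pairwise Euclidean distances between trajectory points x_i, x_j.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Classic recurrence plot: 1 where the trajectory returns within eps.
    R = (D < eps).astype(float)
    return R, D

# Toy trajectory that lingers in two regions, mimicking two attractor basins.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 0.05, (200, 3)),
                    rng.normal(1.0, 0.05, (200, 3))])
R, D = recurrence_matrices(X, eps=0.2)
plt.imshow(D, cmap='jet')          # blue blocks on the diagonal = attractor basins
plt.colorbar(label='distance')
plt.show()
```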
Recurrence Analysis
The model of reading (used also for dyslexia) was modified by adding an accommodation (neural fatigue) mechanism to the neuronal units, based on K⁺ (Ca²⁺-dependent) ion channels, plus synaptic Gaussian noise with zero mean and 0.02 variance that provides extra energy to facilitate free evolution. Interesting properties can be obtained from the recurrence plots:
1. Blue squares along the diagonal line show attractor basins.
2. The size of each square represents the time that the system spends in each attractor.
3. The color intensity allows estimation of the strength of the attractor, e.g. a dark blue square indicates a very strong attractor with small variance; the color distribution gives some idea about the shape of the attractor basin.
4. Recurrence analysis shows the states to which the system returns during evolution, displaying the distances between attractors. Blue squares outside the diagonal indicate that the distance between two attractor basins is small; if such a square is uniformly dark blue, the system has returned to the same basin of attraction.
5. The block structure depends mostly on the accommodation parameters of the neurons; attractor basins vanish when the neurons are fatigued, and this facilitates jumps to distant attractors.

Trajectories are good for …
A new area in psycholinguistics: the investigation of dynamical cognition, the influence of masking on semantic and phonological errors, temporal dynamics in perception, effects of priming, associations, continuous mental trajectories, the formation of memes and their influence on thinking, and relating mental states to neural/brain properties.
M. Spivey, Continuity of Mind. Oxford University Press, 2007.
P. McLeod, T. Shallice, D.C. Plaut, Attractor dynamics in word recognition: converging evidence from errors by normal subjects, dyslexic patients and a connectionist model. Cognition 74, 2000.

Conclusions
1. Spivey (2007) recommended symbolic dynamics for the investigation of mental states, but recurrence plots and fuzzy symbolic dynamics may show the continuity of mental states in a much better way.
2. FSD/RP visualization shows the microstructure of cognition, at the conscious as well as the subconscious level, with fast transient states and quasi-stable states in attractor basins.
3. With more detailed models of neurons, many phenomena related to mental states, associative thinking, attention deficits, temporo-spatial processing disorders and other mental problems may be investigated and connected to experimental observations.
4. FSD/RP analysis may be applied to any multi-channel signal; applications to real EEG data are in progress.

Results
The neurodynamics of this model shows many interesting properties:
1. The landscape of attractor basins available to the neurodynamics is not fixed, but changes depending on the history and context of the current event – the same stream is never entered twice.
2. The structure of spontaneous transitions between states of the semantic network is regulated by the synaptic noise and the neural accommodation, giving rise to different speeds of mental time flow.
3. Depending on the network parameters (inhibition vs. excitation, neural accommodation due to leak ion channels, etc.) and the availability of neurotransmitters, attractor basins may be deep and entrap the neurodynamics, making attention shifts difficult (leading to autism-like behavior), or they may be too shallow, so that the system cannot focus attention (ADHD-like behavior). This may be investigated by plotting the trajectory variance in an attractor basin against the noise variance: a steep rise signifies weak attractors, while a slow rise means strong (deep and narrow) attractors; see the sketch after this list.
4. Strong recurrent connections lead to fewer but larger basins of attraction.
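A minimal sketch of the diagnostic from point 3, using a toy one-dimensional unit x' = -k·x + noise, where the pull strength k stands in for the depth of the attractor basin; all parameters and names are illustrative assumptions, not the original model:

```python
import numpy as np
import matplotlib.pyplot as plt

def trajectory_variance(k, noise_var, n_steps=20000, dt=0.01, seed=0):
    """Variance of a toy 1D trajectory x' = -k*x + Gaussian noise:
    a single attractor at 0 whose depth is controlled by k."""
    rng = np.random.default_rng(seed)
    x, xs = 0.0, []
    for _ in range(n_steps):
        x += -k * x * dt + rng.normal(0.0, np.sqrt(noise_var * dt))
        xs.append(x)
    return np.var(xs)

noise_vars = np.linspace(0.001, 0.05, 20)
for k, label in [(0.5, 'weak (shallow) attractor'), (5.0, 'strong (deep) attractor')]:
    plt.plot(noise_vars, [trajectory_variance(k, v) for v in noise_vars], label=label)
plt.xlabel('noise variance'); plt.ylabel('trajectory variance in the basin')
plt.legend(); plt.show()   # steep rise = weak attractor, slow rise = strong attractor
```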