A Geodesic Method for Spike Train Distances. Neko Fisher and Nathan VanderKraats.

Presentation transcript:

A Geodesic Method for Spike Train Distances. Neko Fisher and Nathan VanderKraats

Neuron: The Device. Input: dendrites. Output: axon. A dendrite/axon connection is a synapse.

Neuron: The Device. [Figure: input/output schematic with labels Synapse, Dendrites (Input), Cell Body, Axon (Output), Threshold, Time. Slide courtesy of Arunava Banerjee.] How is information transmitted? Spikes. The soma's membrane potential (V) is a weighted sum of input spike effects, and each spike's effect decays over time.
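As a rough sketch of this transmission model (not from the slides; the exponential decay kernel, weights, and threshold below are illustrative assumptions), in Python:

    import numpy as np

    def membrane_potential(t, spike_times, weights, tau=10.0):
        # Membrane potential at time t: a weighted sum over synapses of
        # past spike effects, each assumed to decay exponentially with
        # time constant tau (ms).
        v = 0.0
        for w, times in zip(weights, spike_times):
            for s in times:
                if s <= t:  # only past spikes contribute
                    v += w * np.exp(-(t - s) / tau)
        return v

    # Two input synapses; the neuron spikes when V crosses a threshold.
    v = membrane_potential(25.0, [[5.0, 20.0], [18.0]], [0.8, -0.3])
    print(v > 1.0)  # assumed threshold of 1.0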

Systems of Spiking Neurons. Because spike effects decay over time, each neuron's relevant history is a bounded window, which is then discretized. [Figure: spike rasters for Neuron 1, Neuron 2, and Neuron 3 over time.]
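A minimal sketch of the discretization, assuming each neuron's bounded window is binned into binary spike/no-spike timesteps (function and parameter names are illustrative):

    import numpy as np

    def discretize(spike_times, n_neurons, window=100, dt=1.0):
        # Bin each neuron's spike times (ms) into `window` bins of
        # width dt, yielding one point in the phase space.
        x = np.zeros((n_neurons, window), dtype=np.uint8)
        for i, times in enumerate(spike_times):
            for t in times:
                b = int(t / dt)
                if 0 <= b < window:
                    x[i, b] = 1
        return x

    point = discretize([[3.0, 40.5], [12.2], [0.4, 77.7, 99.0]], n_neurons=3)
    print(point.shape)  # (3, 100): 3 neurons x a 100-timestep window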

Systems of Spiking Neurons. Spike train windows are points in phase space. [Figure: spike rasters for Neurons 1-3 over time.] This is a dynamical system: each point has a well-defined point following it.

Phase Space Overview. Extremely high dimensionality: for a system of 1000 neurons and reasonable simulation parameters (e.g., a 100-timestep window, since 1000 × 100 = 100,000), we have up to 100,000 dimensions. Sensitive to initial conditions: chaotic attractors.

Phase Space Overview. Degenerate states: the quiescent state and the seizure state. [Figure: phase space with quiescence and seizure regions.]

Phase Space Overview. Stable states: zones of attraction. [Figure: phase space with quiescence, seizure, and stable-state regions.]

Phase Space Overview. Problem: given a point (spike train), how can we tell what state we're in? We need a distance metric between points in our space. [Figure: phase space with quiescence, seizure, and stable-state regions.]

Distance Metrics. Existing approaches: spike count, and the Victor/Aronov multi-neuronal edit distance (the leader in the field). Our work: the Neuronal Edit Distance (NED) and a distance metric using a geodesic path.

Spike Count. Count the number of spikes. This can distinguish quiescent, stable, and seizure-state spike trains, but it is hard to differentiate between spike trains from the same state (quiescent, stable, or seizure).

Spike Count. [Figure: example spike trains with counts n = 0, n = 71, n = 16, and n = 17.]

Edit Distance. The standard approach for spike train distance metrics, derived from the edit distance used for genetic sequence alignment. It considers both the number of spikes and their temporal locality, using standard operations (insert/delete and shift) to transform one spike train into the other.

Victor/Aronov Multi-neuronal Edit Distance. Insert/delete: cost 1. Shifting a spike within a neuron: cost q|Δt|. Shifting a spike between neurons: cost k.

Victor/Aronov: Delete/Insert. [Figure: one spike deleted from a train and another inserted.] Cost: 1 per insertion or deletion.

Victor/Aronov: Shifting Spikes Within Neurons. [Figure: a spike shifted by Δt within one neuron's train.] Cost: q|Δt|.

Victor/Aronov: Shifting Spikes Within Neurons. D = q|Δt|. The parameter q determines sensitivity to spike count versus spike timing: q = 0 reduces to the spike count metric, and increasing q increases sensitivity to spike timing. Two spikes are comparable only if they are within 2/q seconds of each other, since a shift beats a delete plus an insert only when q|Δt| ≤ 2 (the cost of inserting and deleting).
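For a single neuron, this metric can be computed with the standard edit-distance dynamic program; the sketch below follows the usual single-train formulation (insert/delete cost 1, shift cost q|Δt|) rather than any code from the talk:

    def victor_distance(a, b, q):
        # Distance between spike trains a and b (sorted spike times).
        # D[i][j]: distance between the first i spikes of a and the
        # first j spikes of b.
        n, m = len(a), len(b)
        D = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            D[i][0] = float(i)  # delete i spikes
        for j in range(1, m + 1):
            D[0][j] = float(j)  # insert j spikes
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i][j] = min(D[i - 1][j] + 1,  # delete a[i-1]
                              D[i][j - 1] + 1,  # insert b[j-1]
                              D[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]))  # shift
        return D[n][m]

    # Shift 10 -> 12 costs 0.2; matching 50 to 90 would cost 4.0, so it is
    # cheaper to delete 50 and insert 90 (cost 2.0): total 2.2.
    print(victor_distance([10.0, 50.0], [12.0, 90.0], q=0.1))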

Victor/Aronov: Shifting Spikes Between Neurons. [Figure: a spike moved from one neuron's train to another's.] Cost: k. This operation is not biologically plausible.

Victor/Aronov: Shifting Spikes Between Neurons. d = k. With k = 0, the neuron producing a spike is irrelevant; with k > 2, spikes are effectively never switched between neurons, since the cost would exceed that of deleting and reinserting.

Problems with the Victor/Aronov Edit Distance. It allows switching spikes between neurons, and its insert/delete costs are constant. Moreover, edit distances are Euclidean, but the phase space needs a manifold treatment: a Euclidean distance cuts through the manifold, so we should instead define a local Euclidean distance and move along the manifold.

Our Work. Respect the phase space: treat it as a Riemannian manifold and use geodesics for distances. Use a better local metric: a biologically motivated edit distance (the Neuronal Edit Distance, NED) with a modification for geodesics (the Distance Metric for Geodesic Paths, DMGP). Testing: simulations.

NED Operations. Operations are considered within each neuron independently; the total distance is the sum over all neurons. Which situation is better: 6 spikes each moving 1 timestep, or 1 spike moving 6 timesteps? To reward small per-spike distances, the cost of shifting a spike is (Δt)²: six 1-timestep shifts cost 6 · 1² = 6, while one 6-timestep shift costs 6² = 36.

NED Operations. Which is better: an extra spike in the middle of the time window, or one at the beginning? There may be potential spikes just off the window edge! So a spike at time t is inserted by shifting a virtual spike in from just before the window, at cost (t − (−1))², and deleted by shifting it out past the end of the window, at cost (t − WINDOW_SIZE)².

NED Equation. Basically, take the minimum over all possible matchups of spikes. [The equation itself did not survive the transcript; see the reconstruction after the definitions below.]

NED Equation. Given two spike trains (points) x and y, with n neurons and window size w: let x_i denote the i-th neuron of x; let S(x_i) denote the number of spikes in x_i; let f(x_i, p, q) = (−1)^p · x_i · (w)^q, i.e., the concatenation of p spikes at time −1 to the beginning of x_i and q spikes at time w to the end; and let f_k(·) denote the k-th spike time, in order, of the padded train.
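One plausible reconstruction from these definitions (an assumption, not necessarily the authors' exact formula) pads both trains so their spike counts match, then matches the padded spikes in order and minimizes the summed squared time differences:

    NED(x, y) = Σ_{i=1}^{n}  min_{p,q,p′,q′ ≥ 0 : S(x_i)+p+q = S(y_i)+p′+q′}  Σ_{k=1}^{S(x_i)+p+q} ( f_k(x_i, p, q) − f_k(y_i, p′, q′) )²

This reading is consistent with the boundary costs above: a spike at time t matched against padding at time −1 contributes (t − (−1))², and one matched against padding at time w contributes (t − w)².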

Geodesic. The Euclidean metric is only good as a local approximation; globally, we need to respect the phase space. The system's dynamics come from points advancing in time, so we include small time changes locally, define small Euclidean distances from any of these "close in time" points, and compute global distances recursively.

Geodesic: New Local Distance (DMGP), the Distance Metric for Geodesic Paths. Given a point x(t), the next point in time should have very low distance: compute x(t+1) and set DMGP[x(t) || x(t+1)] = 0. For symmetry, define the previous time similarly: compute all possibilities for x(t−1) and set DMGP[x_i(t−1) || x(t)] = 0 for all i.

Geodesic Initialization. The geodesic algorithm must be given a starting path with a set number of timesteps. How do we find an initial path? Our idea: trace the NED and subdivide recursively to create a path of arbitrary length. [Figure: the segment from x to y, with NED 1,000,000, is subdivided into 385,000 + 615,000, then further into smaller intervals (170,000, 215,000, 320,000, ...).]

Geodesic Initialization. How do we subdivide a given interval between x and y? Randomly select individual spikes from y and move them toward x, using the minimum distance as defined by NED[x||y], to create a new point x₁. Continue until NED[x||x₁] is roughly half of NED[x||y], then repeat until all intervals are sufficiently small. This guarantees smooth transitions from one point to the next.
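A sketch of the recursive subdivision under these assumptions, where midpoint(x, y) stands in for the spike-moving step just described (move spikes of y toward x until the NED is roughly halved); all names are illustrative:

    def initial_path(x, y, ned, midpoint, max_gap):
        # Recursively bisect the segment x--y until every consecutive
        # pair of points is within max_gap under the NED.
        if ned(x, y) <= max_gap:
            return [x, y]
        m = midpoint(x, y)  # point with NED(x, m) ~ NED(x, y) / 2
        left = initial_path(x, m, ned, midpoint, max_gap)
        right = initial_path(m, y, ned, midpoint, max_gap)
        return left[:-1] + right  # drop the duplicated midpoint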

Geodesic Algorithm. Initialize the path. For each point x(t) along the geodesic trajectory: for some fixed NED distance ε, consider the local neighborhood of all points x′ in {NED(x(t)||x′) < ε} ∪ {NED(x(t+1)||x′) < ε} ∪ {NED(x(t−1)||x′) < ε}. Repeat until the total distance stops decreasing.
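Read as code, the relaxation might look like the sketch below; ned() and neighborhood() are assumed helpers, and the update rule (replace x(t) by any neighbor that shortens the path) is an interpretation of the slide, not the authors' implementation:

    def geodesic(path, ned, neighborhood, eps, max_iters=100):
        # Shorten an initialized path until its total NED length
        # stops decreasing.
        def length(p):
            return sum(ned(a, b) for a, b in zip(p, p[1:]))

        best = length(path)
        for _ in range(max_iters):
            for t in range(1, len(path) - 1):
                # candidates within eps (NED) of x(t-1), x(t), or x(t+1)
                for cand in neighborhood(path, t, eps):
                    old = ned(path[t - 1], path[t]) + ned(path[t], path[t + 1])
                    new = ned(path[t - 1], cand) + ned(cand, path[t + 1])
                    if new < old:
                        path[t] = cand
            cur = length(path)
            if cur >= best:  # total distance stopped decreasing
                return path, cur
            best = cur
        return path, best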

Testing. K-means clustering: sample points in different attractors (seizure versus stable states; rate-differentiable stable states), and sample points from the same attractor. Other ideas?
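A minimal sketch of such a test, assuming the discretized windows are flattened to vectors and clustered with scikit-learn (the slides do not name an implementation; the data here is a random placeholder):

    import numpy as np
    from sklearn.cluster import KMeans

    # points: discretized spike-train windows, one flattened window per
    # row, sampled from (say) a seizure attractor and a stable attractor.
    points = np.random.randint(0, 2, size=(200, 1000)).astype(float)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(points)
    print(labels[:10])  # cluster assignment per sampled point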

The End

Dynamical Systems Overview. Fixed-point attractor: for y = ½x, iterating gives x = 4 → f(x) = 2 → 1 → 0.5 → 0.25 → ..., converging to the fixed point x = 0.
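A few lines suffice to confirm the convergence (illustrative only):

    x = 4.0
    for _ in range(20):
        x = 0.5 * x  # f(x) = x/2
    print(x)  # ~3.8e-06, approaching the fixed point x = 0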

Dynamical Systems Overview. Periodic attractor (aka limit cycle). Online example (Univ. of Delaware): http://gorilla.us.udel.edu/plotapplet/examples/LimitCycle/sample.html

Dynamical Systems Overview. Chaotic attractor: F(x) = 2x if 0 ≤ x ≤ ½; 2(1 − x) if ½ ≤ x ≤ 1; ½x if x ≥ 1; −½x if x ≤ 0. F attracts trajectories to the interval [0, 1], then settles into any of an infinite number of periodic orbits. It is sensitive to initial conditions: a minor change causes a different orbit. For example, x = 1.7 → 0.85 → 0.3 → 0.6 → 0.8 → 0.4 → 0.8 → ... (a period-2 orbit).
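A short sketch of this map, reproducing the orbit above (illustrative only):

    def F(x):
        if 0.0 <= x <= 0.5:
            return 2.0 * x
        if 0.5 < x <= 1.0:
            return 2.0 * (1.0 - x)
        if x > 1.0:
            return 0.5 * x
        return -0.5 * x  # x < 0

    x, orbit = 1.7, []
    for _ in range(7):
        orbit.append(round(x, 2))
        x = F(x)
    print(orbit)  # [1.7, 0.85, 0.3, 0.6, 0.8, 0.4, 0.8]: a period-2 orbit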

[Figure: example of a stable spike train (1000 neurons for 800 ms).]