Intuition for latent factors and dynamical systems.


Intuition for latent factors and dynamical systems.

A, n(t) is a vector representing observed spiking activity. Each element of the vector captures the number of spikes a given neuron emits within a short time window around time t. n(t) can typically be captured by the neural state variable x(t), an abstract, lower-dimensional representation that captures the state of the network. Dynamics are the rules that govern how the state updates in time. For a completely autonomous dynamical system without noise, if the dynamics f(x) are known, then the upcoming states are completely predictable from an initial state x(0).

B, In a simple three-neuron example, the ensemble's activity at each point in time traces out a trajectory in a 3-D state space, where each axis represents the activity of a given neuron. Not all possible patterns of activity are observed; rather, activity is confined to a 2-D plane within the 3-D space. The axes of this plane represent the neural state dimensions. Adapted from Cunningham and Yu, 2014.

C, Conceptual low-dimensional dynamical system: a 1-D pendulum. A pendulum released from point p1 or p2 traces out different positions and velocities over time, and the state of the system can be captured by two state variables (position and velocity).

D, The evolution of the system over time follows a fixed set of dynamical rules, i.e., the pendulum's equations of motion. Knowing the pendulum's initial state [x(0), filled circles] and the dynamical rules that govern its evolution [f(x), gray vector flow-field] is sufficient to predict the system's state at all future time points.

Chethan Pandarinath et al. J. Neurosci. 2018;38:9390–9401. ©2018 by Society for Neuroscience
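The three-neuron example in panel B can be illustrated numerically. In this minimal sketch (not from the original figure), two hypothetical latent factors x(t) drive three observed firing rates through an assumed loading matrix C; a singular-value decomposition of the rate matrix then shows that the activity occupies only a 2-D plane within the 3-D neuron space, because the third singular value is (numerically) zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two latent factors x(t) drive three observed rates n(t) = C x(t).
# The loading matrix C below is an arbitrary illustrative choice.
T = 200
latents = rng.standard_normal((T, 2))           # x(t): 2-D neural state over time
C = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.8, 0.3]])                      # maps 2-D state -> 3 neurons
rates = latents @ C.T                           # n(t): T x 3 matrix of activity

# SVD of the mean-centered activity reveals its dimensionality:
# only two singular values are non-negligible.
_, s, _ = np.linalg.svd(rates - rates.mean(axis=0), full_matrices=False)
print(s)  # third singular value ~0: activity is confined to a 2-D plane
```

With real, noisy spike counts the third value would be small rather than exactly zero, which is why dimensionality-reduction methods such as PCA are used to estimate the neural state dimensions.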
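The pendulum in panels C and D can likewise be sketched in code. Assuming a frictionless pendulum with illustrative parameters (g = 9.81 m/s², L = 1 m) and simple Euler integration, the state x = [angle, angular velocity] evolves under fixed dynamics f(x); two release points, standing in for p1 and p2, yield different state-space trajectories even though the rules are identical:

```python
import numpy as np

def pendulum_dynamics(x, g=9.81, L=1.0):
    """f(x): dynamics of a frictionless 1-D pendulum.
    x = [theta, omega] (angle in rad, angular velocity in rad/s)."""
    theta, omega = x
    return np.array([omega, -(g / L) * np.sin(theta)])

def simulate(x0, dt=0.001, steps=5000):
    """Euler integration: every future state follows from x(0) and f(x) alone."""
    traj = np.empty((steps + 1, 2))
    traj[0] = x0
    for t in range(steps):
        traj[t + 1] = traj[t] + dt * pendulum_dynamics(traj[t])
    return traj

# Two release points (analogous to p1 and p2 in panel C), released from rest
traj1 = simulate(np.array([0.5, 0.0]))
traj2 = simulate(np.array([1.0, 0.0]))
```

Plotting each trajectory's columns against one another (angle vs. angular velocity) reproduces the closed orbits of panel D's flow field; the same picture for neural data motivates estimating f(x) directly from population recordings.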