Positioning a rat using neuronal activity in the hippocampus


Positioning a rat using neuronal activity in the hippocampus
Tambet Matiisen
Deep Learning for NLP seminar, 25.01.2016

Place cells
In 2014, the Nobel Prize in Physiology or Medicine was awarded to John O’Keefe, May-Britt Moser and Edvard I. Moser for discovering “place cells” in the rat brain.

Dataset
Rat recordings from University College London.
1 m × 1 m arena for the rat to run around.
Number of spikes for 172 neurons.
20 ms time resolution, 54,100 measurements.

Features
In total 5,400 datapoints with 344 features.
[Figure: spike counts in 200 ms bins, aligned with (x, y) positions over 1 s windows.]
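
The slide's figure did not survive, but the numbers suggest the 20 ms spike counts are summed into 200 ms bins (54,100 / 10 ≈ 5,400 datapoints). A minimal sketch of that binning in Python; the file name, array shapes, and the way 344 features are assembled from 172 neurons are all assumptions, not stated on the slide.

import numpy as np

# Hypothetical input: 20 ms spike counts for 172 neurons
spikes = np.load("spikes.npy")               # assumed shape: (54100, 172)

# Sum every 10 consecutive 20 ms steps into one 200 ms bin
n_bins = spikes.shape[0] // 10               # ~5410 bins; the slide reports 5400
binned = spikes[:n_bins * 10].reshape(n_bins, 10, -1).sum(axis=1)

# One plausible route to 344 features: current bin + previous bin
# (172 neurons x 2 = 344); this pairing is a guess, not from the slide.
features = np.hstack([binned[1:], binned[:-1]])   # shape (n_bins - 1, 344)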

Baseline

Algorithm                    Train mean distance  Validation mean distance
Linear regression            13.27 cm             20.85 cm
K nearest neighbors          0 cm                 24.05 cm
Random forest                2.47 cm              16.02 cm
Extremely randomized trees   —                    15.32 cm
Gradient boosted trees       1.45 cm              15.30 cm
Linear SVM                   13.04 cm             21.09 cm
Polynomial SVM               14.25 cm             18.77 cm
Radial basis SVM             0.16 cm              15.66 cm

Used 4000/1400 train-test split and grid search for hyperparameter optimization.
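
For concreteness, a minimal scikit-learn sketch of this baseline setup, here for the gradient boosted trees row; the array names, file names, and the exact hyperparameter grid are assumptions — only the 4000/1400 split and the use of grid search come from the slide.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import GridSearchCV

# Hypothetical arrays: X (5400, 344) spike features, y (5400, 2) positions in cm
X, y = np.load("features.npy"), np.load("positions.npy")
X_train, y_train, X_val, y_val = X[:4000], y[:4000], X[4000:], y[4000:]

# Gradient boosted trees are single-output, so wrap one model per coordinate
model = MultiOutputRegressor(GradientBoostingRegressor())
grid = GridSearchCV(model,
                    {"estimator__n_estimators": [100, 300],
                     "estimator__max_depth": [3, 5]},
                    cv=3)
grid.fit(X_train, y_train)

# Mean Euclidean distance between predicted and true positions
pred = grid.predict(X_val)
print(np.mean(np.linalg.norm(pred - y_val, axis=1)), "cm")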

Recurrent Neural Networks
[Figures: RNN and bi-directional RNN, from http://colah.github.io/posts/2015-09-NN-Types-FP/]
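
For reference, the update that the RNN figure depicts, in standard notation (the bi-directional variant runs one such pass in each time direction and concatenates the hidden states):

h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h), \qquad y_t = W_{hy} h_t + b_y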

Long Short-Term Memory
[Figure: LSTM cell, from http://colah.github.io/posts/2015-08-Understanding-LSTMs/]
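
Written out, the LSTM cell from that post computes (with \sigma the logistic sigmoid and \odot elementwise multiplication):

\begin{aligned}
f_t &= \sigma(W_f [h_{t-1}, x_t] + b_f) && \text{(forget gate)} \\
i_t &= \sigma(W_i [h_{t-1}, x_t] + b_i) && \text{(input gate)} \\
\tilde{C}_t &= \tanh(W_C [h_{t-1}, x_t] + b_C) && \text{(candidate state)} \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{(cell state)} \\
o_t &= \sigma(W_o [h_{t-1}, x_t] + b_o) && \text{(output gate)} \\
h_t &= o_t \odot \tanh(C_t) && \text{(hidden state)}
\end{aligned}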

Reference network architecture
1 LSTM layer (+ linear fully connected layer), 1024 hidden nodes, dropout 0.5, sequence length 100, batch size 1, RMSProp, early stopping.
10.39 cm validation error (5-fold cross-validation).
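
A minimal Keras sketch of this reference network; the hyperparameters come from the slide, but the per-timestep input size, the placement of dropout, and predicting a position at every timestep are assumptions.

from tensorflow import keras
from tensorflow.keras import layers

seq_len, n_features = 100, 344   # sequence length from the slide; feature count assumed

model = keras.Sequential([
    layers.Input(shape=(seq_len, n_features)),
    layers.LSTM(1024, return_sequences=True),   # 1 LSTM layer, 1024 hidden nodes
    layers.Dropout(0.5),                        # dropout 0.5 (placement assumed)
    layers.Dense(2),                            # linear fully connected layer -> (x, y)
])
model.compile(optimizer=keras.optimizers.RMSprop(), loss="mse")

# Early stopping on validation loss; batch size 1 as on the slide
early_stop = keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
# model.fit(X_train, y_train, batch_size=1, epochs=100,
#           validation_data=(X_val, y_val), callbacks=[early_stop])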

Number of layers

Experiment   Train mean distance  Validation mean distance
1 layer      5.05 cm              10.39 cm
2 layers     5.63 cm              10.61 cm
3 layers     12.13 cm             16.99 cm

Hidden nodes

Experiment    Train mean distance  Validation mean distance
Hidden 512    4.56 cm              10.60 cm
Hidden 1024   5.05 cm              10.39 cm
Hidden 2048   6.27 cm              10.97 cm

Dropout

Experiment    Train mean distance  Validation mean distance
No dropout    5.08 cm              10.30 cm
Dropout 0.2   5.43 cm              10.62 cm
Dropout 0.5   5.05 cm              10.39 cm

Sequence length

Experiment   Train mean distance  Validation mean distance
Seqlen 10    7.81 cm              13.65 cm
Seqlen 100   5.05 cm              10.39 cm
Seqlen 200   5.32 cm              10.92 cm

Batch size

Experiment      Train mean distance  Validation mean distance
Batch size 1    5.05 cm              10.39 cm
Batch size 5    4.59 cm              10.50 cm
Batch size 10   4.94 cm              10.87 cm
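
With sequence length 100, the 5,400 datapoints cut into exactly 54 non-overlapping sequences, which also explains the 54-fold cross-validation on the final results slide. A sketch of the reshaping, assuming the features and positions arrays from earlier:

seq_len = 100
n_seq = features.shape[0] // seq_len                        # 54 sequences
X = features[:n_seq * seq_len].reshape(n_seq, seq_len, -1)  # (54, 100, 344)
y = positions[:n_seq * seq_len].reshape(n_seq, seq_len, 2)  # (54, 100, 2)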

Direction

Experiment       Train mean distance  Validation mean distance
Reference        5.05 cm              10.39 cm
Backwards        33.34 cm             34.79 cm
Bi-directional   4.69 cm              10.51 cm
Stateful         6.54 cm              11.81 cm

Optimization method

Experiment   Train mean distance  Validation mean distance
RMSProp      5.05 cm              10.39 cm
Adam         7.35 cm              13.30 cm
Momentum     ?                    ?
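
In Keras terms, the direction experiments would only change the LSTM layer of the reference model above; a sketch, with the caveat that whether the original work used these exact mechanisms is an assumption:

# Backwards: process each sequence in reverse time order; note that Keras
# also returns the outputs reversed, so targets must be flipped to match.
backwards = layers.LSTM(1024, return_sequences=True, go_backwards=True)

# Bi-directional: run forward and backward passes and concatenate them
bidirectional = layers.Bidirectional(layers.LSTM(1024, return_sequences=True))

# Stateful: carry the cell state across consecutive batches; requires a
# fixed batch size and manual state resets between epochs.
stateful = layers.LSTM(1024, return_sequences=True, stateful=True)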

Do place cells code the future or the past?

Are errors caused by having less data?

Are errors caused by the rat's speed or the direction it is facing?
[Figures: errors by speed; errors by angle]

Are errors bigger at the start of a sequence?

Latest results

Experiment           Train mean distance  Validation mean distance
LSTM 9-fold CV       5.24 cm              10.19 cm
Bi-LSTM 9-fold CV    4.58 cm              9.86 cm
LSTM 54-fold CV      8.42 cm              9.84 cm
Bi-LSTM 54-fold CV   6.86 cm              9.65 cm
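
With 54 sequences, 54-fold cross-validation is leave-one-sequence-out. A sketch of such a loop; X and y are the (54, 100, ...) arrays from above, and build_model() is a hypothetical helper that rebuilds the reference network:

import numpy as np
from sklearn.model_selection import KFold

errors = []
for train_idx, val_idx in KFold(n_splits=54).split(X):
    model = build_model()    # hypothetical: fresh copy of the reference network
    model.fit(X[train_idx], y[train_idx], batch_size=1, epochs=50, verbose=0)
    pred = model.predict(X[val_idx])
    errors.append(np.mean(np.linalg.norm(pred - y[val_idx], axis=-1)))

print(np.mean(errors), "cm")   # mean distance across folds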

Thanks!
Ardi Tampuu, Sander Tanni, Zurab Bzhalava, Jaan Aru, Raul Vicente