Ch 7. Computing with Population Coding
Bayesian Brain: Probabilistic Approaches to Neural Coding, P. Latham & A. Pouget
Summarized by Kim, Kwonill, 2008.12.22

Presentation transcript:

Summary
Designing a neural network ≡ implementing a function
General building algorithm for an arbitrary smooth function
– 1. Feed-forward connections from the input to the intermediate layer
– 2. Recurrent connections to remove the ridges
– 3. Feed-forward connections from the intermediate to the output layer
Analysis & optimal network
– Add feedback → recurrent network → a dynamical system with an attractor
– Minimize the variance of the estimate
Not suitable for high-dimensional problems (the intermediate layer must grow with the number of input dimensions)

Contents
Introduction
– Computing, invariance, throwing away information
Computing Functions with Networks of Neurons: A General Algorithm
Efficient Computing
– Qualitative analysis
– Quantitative analysis

Introduction
Encoding information in population activity
Computing with population codes
– Ex) sensorimotor transformations
Invariance
– Throwing away information
– Ex) face recognition: the output must be invariant to variations in the input pattern (e.g., due to noise)

Computing Functions with Networks of Neurons: A General Algorithm
Goal: compute a smooth function from an input variable to an output variable
– The input variable is encoded in the activity of an input population; Eq. (7.1) gives the i-th neuron's activity
– The output variable is encoded in the activity of an output population (7.2)
Network structure: input layer, 2-D intermediate layer, output layer
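
The transcript preserves only the equation numbers; Eqs. (7.1) and (7.2) themselves are not reproduced. As a rough placeholder, a standard tuning-curve-plus-noise encoding model of the kind these equations describe looks like the following (the Gaussian shape, the gain g, and the noise term are assumptions here, not the book's exact expressions):

    r_i = f_i(x) + \eta_i,
    \qquad
    f_i(x) = g \exp\!\left( -\frac{(x - x_i)^2}{2\sigma^2} \right),

with x_i the preferred value of neuron i. The output population encodes y = \phi(x) in the same way, e.g. r^{\text{out}}_j = f_j(\phi(x)) + \eta_j.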

Computing Functions with Networks of Neurons: A General Algorithm
General building algorithm for an arbitrary smooth function
– 1. Feed-forward connections from the input to the intermediate layer
– 2. Recurrent connections to remove the ridges
– 3. Feed-forward connections from the intermediate to the output layer

1. Feed-forward connections from the input to the intermediate layer
– Intermediate layer: a 2-D array of units
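
A minimal numerical sketch of this step, assuming two scalar input variables encoded by Gaussian tuning curves with Poisson noise and an additive projection onto the 2-D grid; all names and parameter values are illustrative, not taken from the book:

    import numpy as np

    n = 40                                    # units per input population
    pref = np.linspace(-10.0, 10.0, n)        # preferred values of the input units

    def tuning(s, width=1.5, gain=20.0):
        """Gaussian tuning curves: mean firing rate of each unit for stimulus s."""
        return gain * np.exp(-0.5 * ((s - pref) / width) ** 2)

    rng = np.random.default_rng(0)
    x_true, y_true = 2.0, -3.0
    r_x = rng.poisson(tuning(x_true))         # noisy input population encoding x
    r_y = rng.poisson(tuning(y_true))         # noisy input population encoding y

    # Step 1: feedforward projection onto the 2-D intermediate layer.
    # Unit (i, j) receives input from unit i of the x-population and unit j of
    # the y-population, producing two noisy ridges that cross at (x_true, y_true).
    A = r_x[:, None] + r_y[None, :]           # intermediate activity, shape (n, n)

The crossing point of the two ridges is the only place where the intermediate layer reflects both inputs at once; the next two steps isolate it and map it to the output.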

2. Recurrent connections to remove the ridges
– Mexican-hat connectivity (7.3)
– Winner-takes-all dynamics
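
A sketch of the recurrent clean-up, assuming a difference-of-Gaussians ("Mexican hat") kernel and simple rate dynamics with rectification; Eq. (7.3) is not reproduced in the transcript, so the kernel, the update rule, and all parameters below are illustrative assumptions:

    import numpy as np

    n = 40
    d = np.arange(n)[:, None] - np.arange(n)[None, :]   # pairwise distances on the grid

    # 1-D Mexican hat: short-range excitation minus longer-range inhibition.
    W = 1.0 * np.exp(-0.5 * (d / 2.0) ** 2) - 0.5 * np.exp(-0.5 * (d / 6.0) ** 2)

    def cleanup(A, W, steps=50, dt=0.1):
        """Relax the 2-D intermediate layer under separable Mexican-hat coupling.
        The intent is that activity near the ridge crossing reinforces itself
        while the isolated ridges decay (a soft winner-take-all); how well this
        works depends on the parameters chosen above."""
        u = A.astype(float).copy()
        for _ in range(steps):
            rec = W @ np.maximum(u, 0.0) @ W.T           # 2-D coupling built from the 1-D kernel
            u += dt * (-u + rec)
        return np.maximum(u, 0.0)

    # Example: two noisy ridges crossing at grid position (25, 10).
    rng = np.random.default_rng(1)
    A = rng.random((n, n))
    A[25, :] += 2.0
    A[:, 10] += 2.0
    bump = cleanup(A, W)                                  # ideally a single bump near (25, 10)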

3. Feed-forward connections from the intermediate to the output layer (7.4)
– More input dimensions require more intermediate-layer dimensions
→ A general algorithm for computing any smooth function
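
A sketch of the final feedforward stage for one concrete choice of function, the sum z = x + y (e.g., a simple coordinate transform); Eq. (7.4) and the book's actual weight formula are not reproduced here, so the Gaussian weight profile and all names are assumptions:

    import numpy as np

    n, m = 40, 60
    pref_in = np.linspace(-10.0, 10.0, n)      # preferred values along each intermediate axis
    pref_out = np.linspace(-20.0, 20.0, m)     # preferred values of the output population

    # Intermediate unit (i, j) represents the input pair (pref_in[i], pref_in[j]);
    # for z = x + y it should drive output units tuned near pref_in[i] + pref_in[j].
    z_ij = pref_in[:, None] + pref_in[None, :]                                    # (n, n)
    W_out = np.exp(-0.5 * ((pref_out[None, None, :] - z_ij[:, :, None]) / 2.0) ** 2)

    def read_out(bump):
        """Output population activity: each output unit sums the cleaned-up
        intermediate bump through its feedforward weights."""
        return np.einsum('ij,ijk->k', bump, W_out)

    # Example: a bump at grid position (25, 10) yields an output bump centered
    # near pref_in[25] + pref_in[10].
    bump = np.zeros((n, n)); bump[25, 10] = 1.0
    z_activity = read_out(bump)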

Efficient Computing: Qualitative Analysis
Two ways to compute more efficiently
– Replace feedforward connections with recurrent ones
– Use optimal networks
Multi-dimensional attractor networks
– Ex) 1-D manifold: the attractor itself (providing the invariance); (n-1)-D manifold: the directions along which activity converges onto the attractor
(Figure: input, intermediate, and output layers)

Efficient Computing: Quantitative Analysis
Transient dynamics
– t = 0: transient, noisy population input
– t = ∞: a smooth population bump
The network's final state provides an estimate of the encoded variable.
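
One simple way to turn the final smooth bump into an estimate of the encoded variable is a center-of-mass (population-vector-style) readout; this particular estimator is an illustration, not necessarily the one analyzed in the chapter:

    import numpy as np

    def estimate_from_bump(activity, pref):
        """Estimate the encoded value as the activity-weighted mean of the
        units' preferred values."""
        a = np.maximum(np.asarray(activity, dtype=float), 0.0)
        return float(np.sum(a * pref) / np.sum(a))

    # Example: a bump centered near 3.0 gives an estimate close to 3.0.
    pref = np.linspace(-10.0, 10.0, 40)
    bump = np.exp(-0.5 * ((pref - 3.0) / 1.5) ** 2)
    print(estimate_from_bump(bump, pref))      # ~3.0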

Efficient Computing: Quantitative Analysis
Question:
– How accurate are those estimates? That is, what is the variance of the estimates?
Single-variable case

Efficient Computing: Quantitative Analysis
State equation of the single-variable case (Eqs. 7.5–7.7)
– It can have a line attractor: there exists a one-parameter family of fixed points, one for each value of the encoded variable
– Initial condition: the noisy population activity from (7.1)
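
Eqs. (7.5)–(7.7) are not reproduced in the transcript. A generic form of the statement, with notation assumed here rather than taken from the book, is

    \tau \frac{d\mathbf{u}}{dt} = -\mathbf{u} + H(\mathbf{u}),
    \qquad
    \mathbf{u}(0) = \mathbf{r} \ \ (\text{the noisy input of Eq. 7.1}),

and "line attractor" means there exists a one-parameter family of states \mathbf{u}_\infty(x) with

    H\big(\mathbf{u}_\infty(x)\big) = \mathbf{u}_\infty(x) \quad \text{for every } x,

i.e., every point of the family is a fixed point of the dynamics, so the network can settle anywhere along it depending on the input.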

Efficient Computing: Quantitative Analysis
Solving steps (7.8, 7.9)
– 1. Assume small noise
– 2. Linearize the dynamics around an equilibrium point on the attractor
– 3. Solve for the variance

Efficient Computing: Quantitative Analysis
Coordinate transform and linearization (Eqs. 7.8–7.10)
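
A sketch of what the linearization step looks like in generic notation (again an assumption about the form, not the book's Eqs. 7.8–7.10): write the state as a small perturbation of a point on the attractor and keep first-order terms,

    \mathbf{u}(t) = \mathbf{u}_\infty(x_0) + \delta\mathbf{u}(t),
    \qquad
    \tau \frac{d\,\delta\mathbf{u}}{dt} \approx J\,\delta\mathbf{u},
    \qquad
    J = -I + \left.\frac{\partial H}{\partial \mathbf{u}}\right|_{\mathbf{u}_\infty(x_0)}.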

Efficient Computing: Quantitative Analysis
Eigenvalue analysis of the linearized dynamics (Eqs. 7.11–7.14)

Efficient Computing: Quantitative Analysis
Variance (7.14)
– The efficiency of the network depends only on the adjoint eigenvector of the linearized dynamics whose eigenvalue is zero.
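
Schematically, and with notation assumed here (the precise result is the book's Eq. 7.14): expand the initial perturbation in the eigenvectors of J. Because the attractor is a line of fixed points, J has one zero eigenvalue whose eigenvector points along the attractor; all other modes decay, so only the projection of the initial noise onto the corresponding adjoint (left) eigenvector survives and shifts the estimate,

    \delta\mathbf{u}(t) = \sum_k c_k\, e^{\lambda_k t/\tau}\, \mathbf{e}_k,
    \qquad
    c_k = \mathbf{e}_k^{\dagger} \cdot \delta\mathbf{u}(0),
    \qquad
    \lambda_0 = 0,\ \lambda_{k \ge 1} < 0,

so as t → ∞ only the k = 0 term remains, and the variance of the estimate is governed by something of the form

    \operatorname{Var}(\hat{x}) \;\propto\; \mathbf{e}_0^{\dagger\,\top}\, \Sigma\, \mathbf{e}_0^{\dagger},

where \mathbf{e}_0^{\dagger} is the adjoint eigenvector with zero eigenvalue and \Sigma the covariance of the input noise. This is why the network's efficiency depends only on that eigenvector.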

Efficient Computing: Quantitative Analysis
Optimal network (7.16)
– Minimize the variance
– The optimal variance depends on the correlation structure of the noise
Not suitable for high-dimensional problems

Summary
Designing a neural network ≡ implementing a function
General building algorithm for an arbitrary smooth function
– 1. Feed-forward connections from the input to the intermediate layer
– 2. Recurrent connections to remove the ridges
– 3. Feed-forward connections from the intermediate to the output layer
Analysis & optimal network
– Add feedback → recurrent attractor network
– Minimize the variance of the estimate
Not suitable for high-dimensional problems