Bayesian Brain - Chapter 11: Neural Models of Bayesian Belief Propagation. Rajesh P.N. Rao. 2008-12-29. Summary by B.-H. Kim, Biointelligence Lab, School of Computer Sci. & Eng., Seoul National University.



(c) SNU CSE Biointelligence Lab

Outline
- A cortical neuron as computing P(preferred state | current/past inputs)
- Analogy between neurophysiology and Bayesian inference:
  - Spiking probability ∝ P(S|D)
  - Input from excitatory neurons ↔ transition probabilities between states
  - Input from inhibitory neurons ↔ probability normalization
  - Feedback from higher to lower areas ↔ prior probabilities
- Models for probabilistic computation in networks of neuron-like elements

Introduction
- [A] Models for neural implementation of the belief propagation algorithm for Bayesian inference:
  - Inference over time using a hidden Markov model (HMM)
  - Inference in a hierarchical graphical model
- [B] Model applications:
  - Visual motion detection and decision-making
  - Understanding attentional effects in the primate visual cortex

Bayesian Inference through Belief Propagation
- Summing directly over many random variables causes exponential growth of computation time
- Belief propagation, using only local operations, allows efficient computation of the posterior probabilities
[Figure: local messages (e.g. m_{C→R}) passed between nodes with probabilities P(R), P(M)]
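To make the cost contrast concrete, here is a minimal Python sketch (not from the chapter) comparing brute-force marginalization, which sums over all k^n joint configurations, with message passing on a simple Markov chain, which needs only n-1 local matrix-vector products. The chain model and variable names are illustrative assumptions.

```python
import numpy as np
from itertools import product

def marginal_brute_force(prior, T, n):
    """P(x_n) for a Markov chain, summing over all k**n configurations."""
    k = len(prior)
    p = np.zeros(k)
    for xs in product(range(k), repeat=n):
        joint = prior[xs[0]]
        for a, b in zip(xs, xs[1:]):
            joint *= T[b, a]          # T[b, a] = P(x_next = b | x = a)
        p[xs[-1]] += joint
    return p

def marginal_by_messages(prior, T, n):
    """Same marginal via n-1 local message updates: cost O(n * k**2)."""
    m = prior
    for _ in range(n - 1):
        m = T @ m                     # one local sum-product step
    return m
```

Both functions return the same marginal; only the second remains tractable as the chain grows.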

Belief Propagation over Time
- In an HMM (hidden Markov model), belief propagation reduces to the forward algorithm: each forward message combines the emission probabilities for the current observation with the transition-weighted message from the previous time step.
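The standard HMM forward recursion, m_t(i) = P(I_t | θ_t = i) Σ_j P(θ_t = i | θ_{t-1} = j) m_{t-1}(j), can be sketched in a few lines of Python; the matrix names and shapes here are my own conventions, not the chapter's notation.

```python
import numpy as np

def forward_messages(T, E, prior, observations):
    """Normalized forward messages for an HMM.

    T[i, j] = P(state_t = i | state_{t-1} = j)  (transition matrix)
    E[k, i] = P(observation k | state i)        (emission matrix)
    Returns the posterior P(state_t | I_1..I_t) at every step.
    """
    m = prior.copy()
    posteriors = []
    for obs in observations:
        # message update: emission likelihood times transition-weighted prior
        m = E[obs] * (T @ m)
        m /= m.sum()          # normalize -> posterior over states
        posteriors.append(m.copy())
    return np.array(posteriors)
```

With sticky transitions and informative emissions, repeated observations of the same symbol make the posterior concentrate on the matching state.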

Hierarchical Belief Propagation
- An example of a 3-level graphical model for images
- Messages are passed both up and down the hierarchy; the posterior probability at each level combines bottom-up evidence with top-down feedback.

Belief Propagation over Time – Approximate Inference in Linear Recurrent Networks
- Linear recurrent network with firing-rate dynamics
  - A commonly used neural architecture for modeling cortical response properties
  - v: output firing rates; W: feedforward weight matrix; U: recurrent weight matrix; I: input
  - Discrete form (11.5): v(t+1) = W I(t) + U v(t)
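A minimal simulation of these discrete-time dynamics, assuming eq. (11.5) has the standard form v(t+1) = W I(t) + U v(t); the weight values in the usage below are arbitrary placeholders, not fitted to any data.

```python
import numpy as np

def run_linear_recurrent(W, U, inputs, v0=None):
    """Iterate v(t+1) = W I(t) + U v(t) over an input sequence."""
    v = np.zeros(W.shape[0]) if v0 is None else v0
    trace = []
    for I in inputs:
        v = W @ I + U @ v          # feedforward drive + recurrent drive
        trace.append(v.copy())
    return np.array(trace)
```

As long as the spectral radius of U stays below 1, the recurrent activity remains bounded, which is the regime usually assumed when such networks model sustained cortical responses.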

Belief Propagation over Time – Exact Inference in Nonlinear Networks
- Firing-rate model that takes into account some of the effects of nonlinear filtering in dendrites
- Generalizes the linear recurrent network: the feedforward input W I and the recurrent input U v each pass through nonlinear dendritic filtering functions (f, g)
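One way to see why a dendritic nonlinearity helps: if v holds log messages, the exact forward update requires a log-sum-exp over the recurrent inputs, i.e. a saturating nonlinearity applied to summed synaptic input. The sketch below is my reading of that idea, not the chapter's exact equations.

```python
import numpy as np

def log_domain_forward_step(v, log_T, log_e):
    """Exact forward update with v = log messages.

    The recurrent term log sum_j exp(log_T[i, j] + v[j]) plays the
    role of the nonlinear dendritic filtering function.
    """
    a = log_T + v                  # broadcasts v over columns j
    m = a.max(axis=1)              # shift for numerical stability
    recurrent = m + np.log(np.exp(a - m[:, None]).sum(axis=1))
    return log_e + recurrent       # add log emission probabilities
```

Exponentiating the result recovers exactly the linear-domain forward message E * (T @ m), so this nonlinear network performs exact, not approximate, inference.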

Neural Circuits

Results Example 1: Detecting Visual Motion
- A prominent property of visual cortical cells in areas such as V1 and MT is selectivity to the direction of visual motion
- Interpretation of the activity of these cells:
  - the posterior probability of stimulus motion in a particular direction, given a series of input images
- Experiment
  - 1D motion in an image with two possible motion directions: L or R

Visual Cortex in the Brains of Primates

Results Example 1: Detecting Visual Motion (NIPS 2005)

Results Example 2: Bayesian Decision-Making in a Random-Dots Task
- Dots motion discrimination task
  - Stimulus: an image sequence showing a group of moving dots; a fixed fraction of the dots is randomly selected at each frame and moved in a fixed direction (the rest move in random directions)
    - Coherence: the fraction of dots moving in the same direction
  - Task: decide the direction of motion of the coherently moving dots
  - Data: psychophysical performance of humans and monkeys, plus neural responses in brain areas such as MT and LIP
- Goal of the experiment
  - Explore the extent to which the proposed models for neural belief propagation can explain the existing data
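A common reduced model of such decisions, to which belief-propagation accounts of LIP activity are often related, accumulates the log posterior odds of the two directions until a bound is hit. The sketch below is a generic drift-to-bound simulation with made-up parameters, not the fitted model from the chapter.

```python
import numpy as np

def random_dots_trial(drift, bound=3.0, noise=1.0, max_steps=10_000, rng=None):
    """Accumulate noisy log-odds evidence until a decision bound is hit.

    drift stands in for motion coherence; returns (choice, reaction time
    in steps), where choice is +1 or -1 (or 0 if no bound is reached).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = 0.0
    for t in range(1, max_steps + 1):
        x += drift + noise * rng.standard_normal()   # one evidence sample
        if abs(x) >= bound:
            return (1 if x > 0 else -1), t
    return 0, max_steps
```

Larger drift (higher coherence) yields faster and more accurate decisions, qualitatively matching the psychophysics this slide's task was designed to probe.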

Results Example 2: Bayesian Decision-Making in a Random-Dots Task (continued)

Hierarchical Belief Propagation - Noisy Spiking Neuron Model
- v now represents the membrane potentials of neurons rather than their firing rates
- Recurrent network of leaky integrate-and-fire neurons
  - If v_i crosses a threshold T, the neuron fires a spike and v_i is reset to the potential v_reset
  - Background inputs and random openings of membrane channels are modeled as Gaussian white noise
  - Spike generation can also be described probabilistically via an escape function
- Discrete form and a nonlinear variant of the dynamics parallel the firing-rate models above

Results Example 3: Attention in the Visual Cortex
- The response modulation of neurons in cortical areas V2 and V4 by attention to a particular location within an input image
- Multiplicative modulation due to attention
- Input image configuration and conditional probabilities

Effects of Attention on Responses in the Presence of Distractors

Effects of Attention on Neighboring Spatial Locations

Related Models
- Models based on log-likelihood ratios
- Inference using distributional codes
- Hierarchical inference

Open Problems and Future Challenges
- Learning and adaptation
- The use of spikes in probabilistic representations
- How dendritic nonlinearities could be exploited to implement belief propagation
- Exploring graphical models that are inspired by neurobiology