Tahereh Toosi IPM

Recap [Churchland and Abbott, 2012]

Outline
- Some general problems in Neuroscience
- Information Theory
- IT in Neuroscience
- Entropy
- Mutual Information
- How to use Mutual Information?
- Summary

Some general problems
- Coding scheme: What is being encoded? How is it being encoded? With what precision? With what limitations?
- Multi-trial or single-trial?
- Goodness of our neural decoding model
[Borst and Theunissen, 1999]

Stimulus space
[Figure from Spikes: Exploring the Neural Code — a two-dimensional stimulus space (axes: stimulus parameters S1 and S2) showing the accessible region for all stimuli in P[S], the distribution of stimuli P[S|{t_i}] given a spike train, and the resulting estimate of the stimulus.]

Statistical significance of how neural responses vary with different stimuli [Borst and Theunissen, 1999]

Shannon helps… [Borst and Theunissen, 1999]

Information Theory
Information is reported in units of 'bits'. The surprise associated with a response that occurs with probability P is
h = −log2 P
- The minus sign makes h a decreasing function of its argument, as required.
- Note that information is really a dimensionless number.
[Abbott and Dayan, 2001]

Encoding numbers in a digital code!
One hand can carry log2(6) ≈ 2.58 bits (six distinguishable states: 0-5 raised fingers).
[Spikes: Exploring the Neural Code]
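A quick numerical check of these counts (a minimal Python sketch; the reading of the six states as 0-5 raised fingers, and the contrast with a five-finger binary code, are illustrations rather than content from the slide):

    import math

    # Counting code: one hand showing 0-5 raised fingers has 6 distinguishable states.
    counting_states = 6
    print(math.log2(counting_states))   # ~2.58 bits

    # Digital (binary) code: if each of the 5 fingers is an independent up/down digit,
    # the same hand has 2**5 = 32 states.
    digital_states = 2 ** 5
    print(math.log2(digital_states))    # 5.0 bits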

Channel capacity of a neuron [Borst and Theunissen, 1999]

Information loss

Entropy
h = −log2 P[r] quantifies the surprise or unpredictability associated with a particular response r. Shannon's entropy is just this measure averaged over all responses:
H = −Σ_r P[r] log2 P[r]
The entropy of a binary code (a response that occurs with probability P[r+]):
H = −(1 − P[r+]) log2(1 − P[r+]) − P[r+] log2 P[r+]
[Abbott and Dayan, 2001]
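A minimal sketch of the binary-entropy formula above in Python (the probabilities passed in are just illustrative values):

    import math

    def binary_entropy(p_plus):
        """Entropy in bits of a binary response that occurs with probability p_plus."""
        if p_plus in (0.0, 1.0):
            return 0.0  # a certain outcome carries no surprise
        p_minus = 1.0 - p_plus
        return -(p_minus * math.log2(p_minus) + p_plus * math.log2(p_plus))

    print(binary_entropy(0.5))   # 1.0 bit: maximally unpredictable response
    print(binary_entropy(0.1))   # ~0.47 bits: mostly predictable response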

Mutual Information idea
To convey information about a set of stimuli, neural responses must be different for different stimuli; entropy is a measure of this response variability. The idea: compare the responses obtained using a different stimulus on every trial with those measured in trials involving repeated presentations of the same stimulus. The mutual information is the difference between the total response entropy and the average response entropy on trials that involve repeated presentation of the same stimulus.

Mutual Information
The entropy of the responses to a given stimulus s:
H_s = −Σ_r P[r|s] log2 P[r|s]
The noise entropy is this quantity averaged over stimuli:
H_noise = Σ_s P[s] H_s
This is the entropy associated with that part of the response variability that is not due to changes in the stimulus, but arises from other sources. The mutual information is the difference between the total response entropy and the noise entropy:
I_m = H − H_noise = Σ_s Σ_r P[s] P[r|s] log2( P[r|s] / P[r] )
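These definitions translate directly into a few lines of code. A minimal sketch, assuming discrete stimuli and responses given as probability tables (the function name and the toy distribution are mine, not from the slides):

    import numpy as np

    def mutual_information(p_s, p_r_given_s):
        """I_m = H - H_noise for discrete stimuli and responses.

        p_s         : shape (n_stimuli,)              -- P[s]
        p_r_given_s : shape (n_stimuli, n_responses)  -- P[r|s]
        """
        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        p_r = p_s @ p_r_given_s                  # P[r] = sum_s P[s] P[r|s]
        H_total = entropy(p_r)                   # total response entropy H
        H_noise = np.sum(p_s * np.array([entropy(row) for row in p_r_given_s]))
        return H_total - H_noise                 # bits

    # Toy example: two equiprobable stimuli, responses only weakly stimulus-dependent.
    p_s = np.array([0.5, 0.5])
    p_r_given_s = np.array([[0.8, 0.2],
                            [0.3, 0.7]])
    print(mutual_information(p_s, p_r_given_s))  # ~0.19 bits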

How to use Information Theory:
1. Show your system stimuli.
2. Measure neural responses.
3. Estimate P(neural response | stimulus presented).
4. From that, estimate P(neural response).
5. Compute H(neural response) and H(neural response | stimulus presented).
6. Calculate I(response; stimulus).
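A hedged sketch of steps 3-6 as a plug-in estimator working from paired trial data (the function and variable names are hypothetical, and it assumes responses have already been discretised, e.g. into spike-count bins):

    import numpy as np

    def plugin_mutual_information(stimuli, responses):
        """Steps 3-6: estimate P(r|s), P(r), H(r), H(r|s) and I(r;s) from paired trials.

        stimuli, responses : 1-D integer arrays of equal length (one entry per trial).
        Returns the plug-in estimate of the mutual information in bits.
        """
        stimuli = np.asarray(stimuli)
        responses = np.asarray(responses)
        s_vals = np.unique(stimuli)
        r_vals = np.unique(responses)

        # Steps 3-4: joint counts -> P(s, r), then marginals P(s) and P(r).
        counts = np.zeros((s_vals.size, r_vals.size))
        for i, s in enumerate(s_vals):
            for j, r in enumerate(r_vals):
                counts[i, j] = np.sum((stimuli == s) & (responses == r))
        p_sr = counts / counts.sum()
        p_s = p_sr.sum(axis=1)
        p_r = p_sr.sum(axis=0)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        # Step 5: total response entropy and noise (conditional) entropy.
        H_r = entropy(p_r)
        H_r_given_s = sum(p_s[i] * entropy(p_sr[i] / p_s[i]) for i in range(s_vals.size))

        # Step 6: mutual information.
        return H_r - H_r_given_s

With binned spike counts as the response, this is the direct (plug-in) estimate; its bias under limited data is exactly the pitfall warned about on the next slide.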

Entropy: spike count
[Figure: a set of stimuli, the evoked responses, and the resulting spike-count distribution.]

How to screw it up:
- Choose stimuli which are not representative.
- Measure the "wrong" aspect of the response.
- Don't take enough data to estimate P( ) well.
- Use a crappy method of computing H( ).
- Calculate I( ) and report it without comparing it to anything.
[Math for Neuroscience, Stanford]
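The "not enough data" pitfall is easy to demonstrate: with few trials, the plug-in estimate is biased upward even when stimulus and response are truly independent. A small illustration, reusing the plugin_mutual_information sketch above (the trial counts and alphabet sizes are arbitrary):

    import numpy as np

    # True mutual information here is 0 bits, since responses ignore the stimuli.
    rng = np.random.default_rng(0)
    for n_trials in (20, 200, 2000):
        estimates = [
            plugin_mutual_information(rng.integers(0, 4, size=n_trials),   # 4 stimuli
                                      rng.integers(0, 6, size=n_trials))   # 6 response bins
            for _ in range(200)
        ]
        print(n_trials, round(float(np.mean(estimates)), 3))  # bias shrinks with more data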

Summary
Strengths of IT in Neuroscience:
- Coding efficiency: comparing the overall information transfer to the maximum spike-train entropy
- Calculating the absolute amount of information transmitted
- Identifying system nonlinearities and validating nonlinear system models
- Neural code precision: the measure of mutual information
- Goodness of our neuronal decoding model: comparison of the upper/lower estimates to the direct estimate

THANK YOU!
