The search for organizing principles of brain function. Needed at multiple levels: synapse => cell => brain area (cortical maps) => hierarchy of areas.

Presentation transcript:

The search for organizing principles of brain function
Needed at multiple levels: synapse => cell => brain area (cortical maps) => hierarchy of areas
Self-organization: Hebbian learning => feature-analyzing cells => cortical maps
Information theory, a neural optimization principle, and applications
Prediction, control, and the "local cortical circuit" (LCC)

Self-organization
Pattern formation (Turing, 1952) from simple local rules (e.g., Hebb, 1949)
– Hebb rule: When the firing of cell A contributes to that of cell B, increase the efficiency (synaptic strength) with which A excites B to fire.
– An early puzzle: How does a layer of orientation-selective cells (Hubel & Wiesel, 1960s) form?
– An early example of the power of Hebbian learning: Hebb rule + short connections + locally-correlated random electrical activity can => orientation-selective cells & their patterning in a layer (RL)
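A minimal numerical sketch of this idea (an illustration only, not the Linsker 1986 model itself): a single linear unit driven by locally correlated random activity, updated with a normalized Hebb rule, develops weights that pick out the dominant correlation pattern of its inputs, i.e., it becomes a feature-analyzing cell.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(w, x, eta=0.01):
    """One Hebb-rule update for a linear unit: when presynaptic activity x
    helps drive postsynaptic activity y = w.x, strengthen those synapses."""
    y = w @ x
    w = w + eta * y * x            # Hebb: change proportional to (pre) * (post)
    return w / np.linalg.norm(w)   # normalization keeps the weights bounded

# "Locally correlated random activity": smoothed noise on a 1-D input layer.
n = 50
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
w = rng.normal(size=n)
w /= np.linalg.norm(w)
for _ in range(5000):
    x = np.convolve(rng.normal(size=n), kernel, mode="same")
    w = hebbian_step(w, x)

# w now approximates the leading eigenvector of the input correlation matrix:
# a cell selective for the dominant local feature of its inputs.
```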

Self-organization in cortical models
Movie: J Sirosh, R Miikkulainen, & JA Bednar (UT Austin), 1996 [courtesy JA Bednar]
~nn/web-pubs/htmlbook96/sirosh/or_quad.mpg
Click for movie: or_quad.mov
Orientation map (below; R Linsker, 1986)

Some higher-level properties that can result from Hebbian learning
– Feature-analyzing (selective) cells.
– "Infomax" principle (RL): Create a layer of cells whose outputs convey maximum (Shannon) information about its inputs, subject to biological constraints & costs (types of allowed processing, wiring length, energy cost, etc.). An optimal encoding principle.
Various uses of infomax
– Models of neural learning & development
– Qualitative (RL, others) and quantitative (Atick et al.) experimental agreement
– Infomax-based ICA (independent component analysis) (Bell & Sejnowski, 1995): Reconstructs N statistically independent sources, given N linear combinations of them.
– Nonlinear infomax is one way to generate "sparse representations." Sparse coding was used to reconstruct 3 speech sources given only the composite signal at each of 2 receivers (RL, 2001).
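As a concrete illustration of infomax-based ICA, here is a compact natural-gradient form of the Bell & Sejnowski update (a textbook sketch, not code from the cited work; the tanh nonlinearity assumes super-Gaussian sources such as speech):

```python
import numpy as np

def infomax_ica(X, n_iter=2000, eta=0.01, seed=0):
    """Natural-gradient infomax ICA (Bell & Sejnowski 1995, Amari's form).
    X: (n_mixtures, n_samples) zero-mean linear mixtures of independent sources.
    Returns an unmixing matrix W such that W @ X approximates the sources."""
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.01 * rng.normal(size=(n, n))
    for _ in range(n_iter):
        U = W @ X                      # current source estimates
        Y = np.tanh(U)                 # score function for super-Gaussian sources
        W += eta * (np.eye(n) - (Y @ U.T) / T) @ W   # ascend output entropy
    return W

# Demo: recover two independent (Laplace-distributed) sources from two mixtures.
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 4000))
A = np.array([[1.0, 0.6], [0.5, 1.0]])      # unknown mixing matrix
X = A @ S
X -= X.mean(axis=1, keepdims=True)
W = infomax_ica(X)
recovered = W @ X                            # sources, up to permutation & scaling
```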

Sparse representation of a mixture of sources
[Figure: spectrogram, frequency vs. time]

Labeling using a source signature
[Figure: spectrogram, frequency vs. time]
Can obtain a source signature from:
– Relative transfer function (attenuation & phase shift at each frequency) from the source to the two receivers (used here).
– Other methods: pitch tracking; phoneme properties; de-mixing two overlapping sources using two received mixtures; etc. (None used here.)
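A hedged sketch of the relative-transfer-function idea (the function name and the `mask` input are hypothetical, not the pipeline from RL 2001): for time-frequency bins already attributed to one source, the per-frequency ratio of the two receiver spectrograms gives that source's attenuation-and-phase signature.

```python
import numpy as np
from scipy.signal import stft

def source_signature(left, right, mask, fs=16000, nperseg=512):
    """Estimate a per-frequency relative transfer function between two
    receivers (attenuation & phase shift) from the time-frequency bins
    in `mask` that are attributed to a single source."""
    _, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)
    ratio = np.where(mask, R / (L + 1e-12), 0.0)          # bin-wise receiver ratio
    rtf = ratio.sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
    return np.abs(rtf), np.angle(rtf)                     # attenuation, phase per frequency
```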

Masking & reconstruction
[Figure: spectrogram, frequency vs. time]
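The masking-and-reconstruction step can be sketched generically (an illustrative assumption, not the original implementation): because speech is sparse in the time-frequency domain, most bins are dominated by a single source, so keeping only the bins labeled as one source and inverting the STFT yields a usable estimate of that source.

```python
import numpy as np
from scipy.signal import stft, istft

def reconstruct_source(mixture, labels, source_id, fs=16000, nperseg=512):
    """Binary time-frequency masking: keep only the bins labeled as
    belonging to `source_id`, zero the rest, and invert the STFT."""
    _, _, Z = stft(mixture, fs=fs, nperseg=nperseg)
    masked = np.where(labels == source_id, Z, 0.0)
    _, x_hat = istft(masked, fs=fs, nperseg=nperseg)
    return x_hat
```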

Acoustic separation demo
Mixture of 3 stereo speech sources
Source 1: reconstruction & original
Source 2: reconstruction & original
Source 3: reconstruction & original

The "local cortical circuit" (LCC)
Substantial uniformity of cell organization & connectivity across neocortical areas (Mountcastle)
Core functions of the LCC "module"?
– A recurrent neural net that can combine "bottom-up" data and "top-down" expectations.
– LCC role in: forming generalizations? stabilizing feature analysis within each cortical processing area? Bayesian inference?
– It has long been clear that prediction, estimation, inference, & goal-directed motor control are important functions of mammalian brains.
– Recent work (RL): a neural-net algorithm for optimal Kalman estimation (prediction) and control. The algorithm implies a set of constraints on the NN circuitry & signal flows; this architecture turns out to be similar to the LCC.
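For reference, the computation such a circuit would have to implement is the standard Kalman predict/update cycle shown below (the textbook algorithm, not the neural-network version referenced on the slide): a top-down prediction from an internal model is corrected by bottom-up sensory evidence, each weighted by its reliability.

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One Kalman-filter step: combine a top-down prediction (internal model A)
    with a bottom-up observation z = C x + noise, weighting each by its
    reliability (process covariance Q, observation covariance R)."""
    # Predict: propagate the estimate and its uncertainty forward in time.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the observed prediction error.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```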

Some other important unsolved problems
– "Fast learning": animals vs. neural nets
– Learning causal relations: deterministic or statistical?
– Learning powerful invariances and the "right" representations. Is statistical learning over-emphasized?
– Principles governing the processing, segregation, & integration of information streams (e.g., color, form, "what" & "where")?
– Common ground between perception & human concept formation: learning similarity metrics that are useful for forming generalizations & for behavior.
– How is information coded? (Firing rates, spike timing, place coding, synchrony & phase-locking, ...?)
– What representations are really used by the brain? Some surprises, e.g., "change blindness" (R Rensink demo).
– The "binding problem"; self-awareness & consciousness
– Tools: How to probe circuit dynamics (of multiple interconnected cells) at fine spatial & temporal resolution?