Brain, Mind, and Computation
Part I: Computational Brain
Brain-Mind-Behavior Seminar, May 14, 2012
Byoung-Tak Zhang
Biointelligence Laboratory, Computer Science and Engineering & Brain Science, Cognitive Science, and Bioinformatics Programs & Brain-Mind-Behavior Concentration Program, Seoul National University

Lecture Overview
Part I: Computational Brain - How does the brain encode and process information?
Part II: Brain-Inspired Computation - How can we build intelligent machines inspired by brain processes?
Part III: Cognitive Brain Networks - How do brain networks perform cognitive processing?

Human Brain: Functional Architecture
Brodmann's areas and their functions

Cortex: Perception, Action, and Cognition
[Fig. 3-18] Primary sensory and motor cortex & association cortex

Mind, Brain, Cell, Molecule
[Diagram relating the mind to the brain, its cells, and its molecules, with memory spanning these levels.]

Computational Neuroscience


From Molecules to the Whole Brain

Cortical Parameters

The Structure of Neurons

Information Transmission between Neurons
Overview of signaling between neurons:
- Synaptic inputs produce a postsynaptic current.
- Passive depolarizing currents spread along the membrane.
- An action potential depolarizes the membrane and triggers another action potential.
- The inward current is conducted down the axon.
- This leads to depolarization of adjacent regions of the membrane.

Voltage-gated channel in the neuronal membrane. Mechanisms of neurotransmitter receptor molecules.


Hodgkin-Huxley Model
The membrane potential V obeys C dV/dt = -(I_Na + I_K + I_L) + I(t), where C is the membrane capacitance, I(t) is the external current, and the three ionic currents are the sodium, potassium, and leakage currents. [Fig. 2.7]
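As a rough illustration (not part of the original slides), here is a minimal forward-Euler integration sketch of the Hodgkin-Huxley equations. The rate functions and conductances are the standard textbook values; the external current I_ext, time step dt, and duration are arbitrary choices for the example.

```python
import numpy as np

# Minimal forward-Euler sketch of the Hodgkin-Huxley model (standard parameters).
C = 1.0                               # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials (mV)

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

dt, T, I_ext = 0.01, 50.0, 10.0       # ms, ms, uA/cm^2 (arbitrary for the sketch)
V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting-state initial conditions
trace = []
for _ in range(int(T / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    V += dt * (-(I_Na + I_K + I_L) + I_ext) / C   # C dV/dt = -(I_Na + I_K + I_L) + I(t)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    trace.append(V)

print("peak membrane potential: %.1f mV" % max(trace))  # action potentials overshoot 0 mV
```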

Molecular Basis of Learning and Memory in the Brain

Neuronal Connectivity

Population Coding
The average population activity A(t) of a pool of neurons, measured over very small time windows. A pool, or local population, is a set of neurons with similar response characteristics. The pool average is defined as the average firing rate over the neurons in the pool within a relatively small time window.
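As a small sketch of the pool average (not from the lecture notes): A(t) counts the spikes emitted by the N neurons of the pool in a short window [t, t+dt) and divides by N*dt. The synthetic Poisson spike trains, pool size, and window length below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pool: N neurons firing as independent Poisson processes at `rate` Hz.
N, rate, duration = 100, 20.0, 1.0     # neurons, Hz, seconds (arbitrary for the sketch)
spikes = [np.sort(rng.uniform(0.0, duration, rng.poisson(rate * duration)))
          for _ in range(N)]

def population_activity(spike_trains, t, dt):
    """A(t) = (spikes in [t, t+dt) summed over the pool) / (N * dt)."""
    count = sum(np.count_nonzero((s >= t) & (s < t + dt)) for s in spike_trains)
    return count / (len(spike_trains) * dt)

dt = 0.01                              # a relatively small time window (10 ms)
A = [population_activity(spikes, t, dt) for t in np.arange(0.0, duration, dt)]
print("mean population activity: %.1f Hz" % np.mean(A))   # close to the underlying rate
```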

Associative Networks
Associative node and network architecture. (A) A simplified neuron that receives a large number of inputs r_i^in. The synaptic efficiency is denoted by w_i. The output of the neuron, r^out, depends on the particular input stimulus. (B) A network of associative nodes. Each component of the input vector, r_i^in, is distributed to each neuron in the network. However, the effect of the input can be different for each neuron, as each individual synapse can have a different efficiency value w_ij, where j labels the neuron in the network.
Auto-associative node and network architecture. (A) Schematic illustration of an auto-associative node, distinguished from the associative node of Fig. 7.1A in that it also has a recurrent feedback connection. (B) An auto-associative network consisting of associative nodes that not only receive external input from other neural layers but also have many recurrent collateral connections between the nodes in the neural layer.
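A minimal sketch (not from the lecture notes) of a layer of associative nodes: each node computes r_j^out = g(sum_i w_ij r_i^in), and a single Hebbian outer-product step sets the weights from an input/target pair. The binary patterns, the outer-product learning rule, and the threshold nonlinearity g are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Associative network: n_out nodes, each receiving the full input vector r_in
# through its own synaptic efficiencies w_ij (rows index outputs, columns inputs).
n_in, n_out = 20, 5
r_in  = rng.choice([0.0, 1.0], size=n_in)      # example input pattern
r_tgt = rng.choice([0.0, 1.0], size=n_out)     # desired output pattern

# Hebbian learning: strengthen w_ij when input i and target output j are co-active.
W = np.outer(r_tgt, r_in)

def g(x, theta=1.0):
    """Threshold nonlinearity: a node fires if its summed input reaches theta."""
    return (x >= theta).astype(float)

r_out = g(W @ r_in)                            # r_j^out = g(sum_i w_ij r_i^in)
print("recalled pattern matches target:", np.array_equal(r_out, r_tgt))
```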

Principles of Brain Processing

Memory, Learning, and the Brain
Memory and learning are the basic mechanisms underlying the brain's thought, behavior, and cognition.
"It is our memory that enables us to value everything else we possess. Lacking memory, we would have no ability to be concerned about our hearts, achievements, loved ones, and incomes." (McGaugh, J. L., Memory & Emotion: The Making of Lasting Memories)
Our brain has an amazing capacity to integrate the combined effects of our past experiences with our present experiences in creating our thoughts and actions. All of this is made possible by memory, and memories are formed by the learning process.

Memory Systems in the Brain
Source: Gazzaniga et al., Cognitive Neuroscience: The Biology of the Mind, 2002.

Summary: Principles of Cognitive Learning
- Continuity. Learning is a continuous, lifelong process. "The experiences of each immediately past moment are memories that merge with current momentary experiences to create the impression of seamless continuity in our lives" [McGaugh, 2003].
- Glocality. "Perception is dependent on context," and it is important to maintain both global and local, i.e. glocal, representations [Peterson and Rhodes, 2003].
- Compositionality. "The brain activates existing metaphorical structures to form a conceptual blend, consisting of all the metaphors linked together" [Feldman, 2006]. "Mental chemistry" [J. S. Mill].
[Zhang, IEEE Computational Intelligence Magazine, 2008]

1. Temporal Nature of Memory and Learning

2. Multiple Levels of Representation Source: J. W. Rudy, The Neurobiology of Learning and Memory, 2008.

3. Creation of New Memory Source: J. W. Rudy, The Neurobiology of Learning and Memory, 2008.

What is the information processing principle underlying human intelligence?

Von Neumann's The Computer and the Brain (1958)
John von Neumann (1903-1957)

Some Facts about the Brain
- Volume and mass: 1.35 liters & 1.35 kg
- Processors: 10^11 neurons
- Communication: 10^14 synapses
- Speed: 10^-3 sec (computer: 1 GHz = 10^-9 sec)
- Memory: 2.8 x 10^20 bits = 14 bits/sec x 10^10 neurons x (2 x 10^9) sec, where 2 x 10^9 sec is roughly 60 years of lifetime (computer disk: terabits = 10^12 bits)
- Reliability: 10^4 neurons dying every day
- Plasticity: biochemical learning
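A quick arithmetic check of the von Neumann-style memory estimate quoted above (a sketch; the per-neuron information rate, neuron count, and lifetime are the values as reconstructed on this slide, not independently established facts):

```python
# Von Neumann-style estimate of lifetime memory capacity (values as quoted above).
bits_per_sec_per_neuron = 14          # assumed information rate per neuron
neurons = 1e10                        # neuron count used in the estimate
lifetime_sec = 2e9                    # roughly 60 years

memory_bits = bits_per_sec_per_neuron * neurons * lifetime_sec
print(f"estimated memory: {memory_bits:.1e} bits")            # ~2.8e+20 bits
print(f"60 years in seconds: {60 * 365.25 * 24 * 3600:.1e}")  # ~1.9e+09 s
```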

Principles of Information Processing in the Brain
- The Principle of Uncertainty: precision vs. prediction
- The Principle of Nonseparability ("UN-IBM"): processor vs. memory
- The Principle of Infinity: limited matter vs. unbounded memory
- The Principle of "Big Numbers Count": hyperinteraction of huge numbers of neurons (or even larger numbers of molecules)
- The Principle of "Matter Matters": material basis of "consciousness"
[Zhang, 2005]

Neural Computers

Learning to extract the orientation of a face patch (Salakhutdinov & Hinton, NIPS 2007)

The training and test sets for predicting face orientation: 11,000 unlabeled cases; 100, 500, or 1000 labeled cases; face patches from new people for testing.

The root mean squared error in the orientation when combining GPs with deep belief nets. Three models are compared, each with 100, 500, and 1000 labels: a GP on the pixels, a GP on top-level features, and a GP on top-level features with fine-tuning. [RMSE table not reproduced.] Conclusion: the deep features are much better than the pixels, and fine-tuning helps a lot.
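The combination described above can be sketched roughly as follows (not the authors' code): extract a low-dimensional feature vector, then fit a Gaussian process regressor on the small labeled set. In the paper the features come from a deep belief net pretrained on the unlabeled patches; here `encode` is just a fixed random projection standing in for that feature extractor, the data are synthetic, and scikit-learn's GaussianProcessRegressor replaces the GP implementation used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Stand-in data: flattened "face patches" with an orientation label (synthetic).
n_labeled, n_pixels, n_features = 100, 625, 30
X_pixels = rng.normal(size=(n_labeled, n_pixels))
y_orientation = rng.uniform(-90.0, 90.0, size=n_labeled)

projection = rng.normal(size=(n_pixels, n_features)) / np.sqrt(n_pixels)
def encode(x):
    """Placeholder for the pretrained network's top-level feature extractor."""
    return np.tanh(x @ projection)

# GP regression on the raw pixels vs. on the top-level features.
for name, X in [("pixels", X_pixels), ("top-level features", encode(X_pixels))]:
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X, y_orientation)
    rmse = np.sqrt(np.mean((gp.predict(X) - y_orientation) ** 2))
    print(f"GP on {name}: training RMSE = {rmse:.2f} degrees")
```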

Deep Autoencoders (Hinton & Salakhutdinov, 2006)
They always looked like a really nice way to do non-linear dimensionality reduction, but it is very difficult to optimize deep autoencoders using backpropagation. We now have a much better way to optimize them:
- First train a stack of 4 RBMs.
- Then "unroll" them.
- Then fine-tune with backprop.
Architecture: 28x28 image -> 1000 neurons -> 500 neurons -> 250 neurons -> 30 linear units.
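A rough PyTorch sketch of the unrolled architecture described above (28x28 pixels down to 30 linear code units, mirrored for the decoder). The original method pretrains each layer as an RBM before unrolling; this sketch skips pretraining and only shows the fine-tuning step with backprop on a random stand-in batch, so it illustrates the architecture rather than the full method.

```python
import torch
import torch.nn as nn

# Unrolled deep autoencoder: 784 -> 1000 -> 500 -> 250 -> 30 -> 250 -> 500 -> 1000 -> 784.
# In Hinton & Salakhutdinov (2006) each layer is first pretrained as an RBM;
# this sketch omits pretraining and only shows fine-tuning with backprop.
encoder = nn.Sequential(
    nn.Linear(784, 1000), nn.Sigmoid(),
    nn.Linear(1000, 500), nn.Sigmoid(),
    nn.Linear(500, 250), nn.Sigmoid(),
    nn.Linear(250, 30),                  # 30 linear units form the code
)
decoder = nn.Sequential(
    nn.Linear(30, 250), nn.Sigmoid(),
    nn.Linear(250, 500), nn.Sigmoid(),
    nn.Linear(500, 1000), nn.Sigmoid(),
    nn.Linear(1000, 784), nn.Sigmoid(),  # reconstructed pixel intensities in [0, 1]
)
autoencoder = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)                  # stand-in for a batch of 28x28 digit images
for step in range(10):                   # a few illustrative fine-tuning steps
    optimizer.zero_grad()
    loss = loss_fn(autoencoder(x), x)
    loss.backward()
    optimizer.step()
print(f"reconstruction loss after fine-tuning: {loss.item():.4f}")
```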

A comparison of methods for compressing digit images to 30 real numbers: real data vs. 30-D deep autoencoder vs. 30-D logistic PCA vs. 30-D PCA.

Retrieving documents that are similar to a query document
We can use an autoencoder to find low-dimensional codes for documents that allow fast and accurate retrieval of similar documents from a large set. We start by converting each document into a "bag of words": a 2000-dimensional vector that contains the counts for each of the 2000 commonest words.
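The bag-of-words step can be sketched with scikit-learn's CountVectorizer (an illustration, not the original preprocessing; the toy documents are invented for the example):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Convert each document into a vector of counts over at most the 2000 commonest words.
docs = [
    "the quarterly earnings rose as the company expanded overseas",
    "the central bank cut interest rates to support growth",
    "new chip designs cut power use in data centers",
]

vectorizer = CountVectorizer(max_features=2000)   # keep only the most frequent words
counts = vectorizer.fit_transform(docs)           # sparse (n_docs x vocabulary) count matrix
print(counts.shape)
print(counts.toarray()[0])                        # count vector for the first document
```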

How to compress the count vector
We train the neural network to reproduce its input vector as its output. This forces it to compress as much information as possible into the 10 numbers in the central bottleneck. These 10 numbers are then a good way to compare documents.
Architecture: 2000 word counts (input vector) -> 500 neurons -> 250 neurons -> 10 -> 250 neurons -> 500 neurons -> 2000 reconstructed counts (output vector).

Performance of the autoencoder at document retrieval
- Train on bags of 2000 words for 400,000 training cases of business documents: first train a stack of RBMs, then fine-tune with backprop.
- Test on a separate 400,000 documents: pick one test document as a query and rank-order all the other test documents by using the cosine of the angle between codes.
- Repeat this using each of the 400,000 test documents as the query (requires 0.16 trillion comparisons).
- Plot the number of retrieved documents against the proportion that are in the same hand-labeled class as the query document.
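The retrieval step described above (rank all other documents by the cosine of the angle between their 10-dimensional codes) can be sketched as follows; the random codes are a stand-in for the autoencoder's bottleneck activities, and the corpus size is shrunk for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in: 10-dimensional codes for a set of documents (in the real system these
# would be the bottleneck activities of the trained autoencoder).
codes = rng.normal(size=(1000, 10))

def retrieve(query_idx, codes, k=10):
    """Rank all other documents by cosine similarity to the query document's code."""
    normed = codes / np.linalg.norm(codes, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]          # cosine of the angle between codes
    ranked = np.argsort(-sims)                 # most similar first
    return [i for i in ranked if i != query_idx][:k]

print(retrieve(0, codes))   # indices of the 10 documents most similar to document 0
```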

[Plot: proportion of retrieved documents in the same class as the query vs. number of documents retrieved.]

First compress all documents to 2 numbers using a type of PCA. Then use different colors for different document categories.

First compress all documents to 2 numbers. Then use different colors for different document categories.
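The two-dimensional visualizations on these slides can be mimicked as follows (a sketch, not the original figure code): compress each count vector to 2 numbers, here with truncated SVD as the "type of PCA", and scatter-plot the points colored by document category. The count vectors and category labels below are random stand-ins.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)

# Stand-in: 2000-dimensional word-count vectors for documents from 3 categories.
counts = rng.poisson(1.0, size=(300, 2000)).astype(float)
labels = rng.integers(0, 3, size=300)

# Compress every document to 2 numbers (truncated SVD as the "type of PCA").
coords = TruncatedSVD(n_components=2).fit_transform(counts)

plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap="tab10", s=10)
plt.title("Documents compressed to 2 numbers, colored by category")
plt.show()
```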