
Brain, Mind, and Computation, Part I: Computational Brain. Brain-Mind-Behavior Seminar, May 14, 2012.




1 Brain, Mind, and Computation, Part I: Computational Brain. Brain-Mind-Behavior Seminar, May 14, 2012. Byoung-Tak Zhang, Biointelligence Laboratory, Computer Science and Engineering & Brain Science, Cognitive Science, and Bioinformatics Programs & Brain-Mind-Behavior Concentration Program, Seoul National University. http://bi.snu.ac.kr/

2 Lecture Overview. Part I, Computational Brain: how does the brain encode and process information? Part II, Brain-Inspired Computation: how can we build intelligent machines inspired by brain processes? Part III, Cognitive Brain Networks: how do brain networks perform cognitive processing?

3 Human Brain: Functional Architecture. Brodmann's areas and their functions.

4 Cortex: Perception, Action, and Cognition. [Fig. 3-18: primary sensory and motor cortex and association cortex.]

5 Mind, Brain, Cell, Molecule. [Diagram relating mind, brain (10^11 cells), cell, and molecule (10^10 molecules), with memory spanning the levels.]

6 Computational Neuroscience


8 From Molecules to the Whole Brain.

9 Cortical Parameters.

10 The Structure of Neurons.

11 Information Transmission between Neurons. Overview of signaling between neurons: synaptic inputs produce a postsynaptic current; passive depolarizing currents spread along the membrane; an action potential depolarizes the membrane and triggers another action potential; the inward current is conducted down the axon, which depolarizes adjacent regions of membrane.

12 Voltage-gated channels in the neuronal membrane. Mechanisms of neurotransmitter receptor molecules.


14 Hodgkin-Huxley Model. The membrane potential obeys C dV/dt = -(I_Na + I_K + I_L) + I(t), where C is the membrane capacitance and I(t) the external current. The three ionic currents are the sodium current I_Na = g_Na m^3 h (V - E_Na), the potassium current I_K = g_K n^4 (V - E_K), and the leak current I_L = g_L (V - E_L), with the gating variables m, h, and n governed by first-order kinetics. (Fig. 2.7)
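To make the model concrete, here is a minimal NumPy sketch that integrates the Hodgkin-Huxley equations with forward Euler. The parameter values and rate functions are the standard textbook ones, not taken from the slide, and the step current is an arbitrary choice for illustration.

```python
import numpy as np

# Minimal forward-Euler sketch of the Hodgkin-Huxley equations.
# Parameters are standard textbook values (assumed; not from the slide).
C = 1.0                                  # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4      # reversal potentials (mV)

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1 / (1 + np.exp(-(V + 35) / 10))

dt, T = 0.01, 50.0                       # time step and duration (ms)
V, n, m, h = -65.0, 0.32, 0.05, 0.6      # initial conditions near rest
for t in np.arange(0, T, dt):
    I_ext = 10.0 if t > 5 else 0.0       # external current I(t), uA/cm^2
    # The three ionic currents of the model:
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4     * (V - E_K)
    I_L  = g_L             * (V - E_L)
    # Membrane equation: C dV/dt = -(I_Na + I_K + I_L) + I(t)
    V += dt * (-(I_Na + I_K + I_L) + I_ext) / C
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
print(f"final membrane potential: {V:.1f} mV")
```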

15 Molecular Basis of Learning and Memory in the Brain.

16 Neuronal Connectivity.

17 Population Coding. The average population activity A(t) is defined over a pool (a local population) of neurons with similar response characteristics: the pool average is the firing rate averaged over the neurons in the pool within a very small time window.
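As an illustration of the pool average, the following sketch computes A(t) from binary spike trains; the synthetic spike data, pool size, and 5 ms window are assumptions of the example.

```python
import numpy as np

# Sketch: pool-averaged population activity A(t) from spike trains.
# spikes[i, k] = 1 if neuron i fired in time bin k (synthetic data here).
rng = np.random.default_rng(0)
N, n_bins, dt = 100, 1000, 0.001            # 100 neurons, 1 ms bins
spikes = rng.random((N, n_bins)) < 20 * dt  # ~20 Hz Poisson-like firing

window = 5                                  # 5 ms averaging window
# A(t): spike count per window, averaged over the pool, divided by window length
counts = spikes.reshape(N, n_bins // window, window).sum(axis=2)
A = counts.mean(axis=0) / (window * dt)     # population rate in Hz, one value per window
print(A[:10])
```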

18 Associative Networks. Associative node and network architecture. (A) A simplified neuron that receives a large number of inputs r_i^in. The synaptic efficiency is denoted by w_i; the output of the neuron, r^out, depends on the particular input stimulus. (B) A network of associative nodes. Each component of the input vector, r_i^in, is distributed to each neuron in the network. However, the effect of the input can differ for each neuron, since each individual synapse can have a different efficiency value w_ij, where j labels the neuron in the network. Auto-associative node and network architecture. (A) Schematic illustration of an auto-associative node, distinguished from the associative node of Fig. 7.1A in that it also has a recurrent feedback connection. (B) An auto-associative network that consists of associative nodes that not only receive external input from other neural layers but also have many recurrent collateral connections between the nodes in the neural layer.
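A small NumPy sketch of the associative node, computing r_out_j = g(sum_i w_ij r_i^in) with a simple Hebbian update standing in for the learning rule; the threshold activation, learning rate, and random data are illustrative assumptions, not details from the slide.

```python
import numpy as np

# Sketch of an associative network: each node j computes
# r_out_j = g(sum_i w_ij * r_in_i), with a Hebbian weight update.
rng = np.random.default_rng(1)
n_in, n_nodes = 20, 5
W = rng.normal(0, 0.1, size=(n_nodes, n_in))  # w_ij: efficiency of input i at node j

def g(x):                    # activation function (a simple threshold here)
    return (x > 0).astype(float)

r_in = rng.random(n_in)      # input firing rates r_i^in, distributed to every node
r_out = g(W @ r_in)          # output rates of the n_nodes associative nodes

eta = 0.01                   # learning rate (assumed)
W += eta * np.outer(r_out, r_in)  # Hebbian: strengthen co-active input/output pairs
print(r_out)
```

An auto-associative network would add a recurrent term, feeding r_out back through collateral weights on the next step.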

19 Principles of Brain Processing

20 Memory, Learning, and the Brain. Memory and learning are the brain's foundational mechanisms for thought, behavior, and cognition. McGaugh, J. L., Memory & Emotion: The Making of Lasting Memories, 2003. It is our memory that enables us to value everything else we possess. Lacking memory, we would have no ability to be concerned about our hearts, achievements, loved ones, and incomes. Our brain has an amazing capacity to integrate the combined effects of our past experiences together with our present experiences in creating our thoughts and actions. All of this is made possible by memory, and memories are formed by the learning process.

21 Memory Systems in the Brain. Source: Gazzaniga et al., Cognitive Neuroscience: The Biology of the Mind, 2002.

22 Summary: Principles of Cognitive Learning. Continuity: learning is a continuous, lifelong process. "The experiences of each immediately past moment are memories that merge with current momentary experiences to create the impression of seamless continuity in our lives" [McGaugh, 2003]. Glocality: "perception is dependent on context," and it is important to maintain both global and local, i.e. glocal, representations [Peterson and Rhodes, 2003]. Compositionality: "the brain activates existing metaphorical structures to form a conceptual blend, consisting of all the metaphors linked together" [Feldman, 2006]; "mental chemistry" [J. S. Mill]. [Zhang, IEEE Computational Intelligence Magazine, 2008]

23 1. Temporal Nature of Memory and Learning.

24 2. Multiple Levels of Representation Source: J. W. Rudy, The Neurobiology of Learning and Memory, 2008.

25 3. Creation of New Memory Source: J. W. Rudy, The Neurobiology of Learning and Memory, 2008.

26 What is the information processing principle underlying human intelligence?

27 Von Neumann's The Computer and the Brain (1958). John von Neumann (1903-1957).

28 Some Facts about the Brain. Volume and mass: 1.35 liters and 1.35 kg. Processors: 10^11 neurons. Communication: 10^14 synapses. Speed: 10^-3 s (a 1 GHz computer: 10^-9 s). Memory: 2.8 x 10^21 bits, estimated as 14 bits/s x 10^11 neurons x 2 x 10^9 s, where 2 x 10^9 s is about 60 years of lifetime (a computer disk holds terabits, i.e. 10^12 bits). Reliability: some 10^4 neurons die every day. Plasticity: biochemical learning.
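The memory figure is a back-of-the-envelope product of the three quantities on the slide; a one-line check:

```python
# Check of the slide's memory estimate: bits/s per neuron x neurons x lifetime seconds.
bits_per_sec = 14
neurons = 1e11
lifetime_sec = 2e9                 # roughly 60 years
total_bits = bits_per_sec * neurons * lifetime_sec
print(f"{total_bits:.1e} bits")    # 2.8e+21 bits, matching the slide
```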

29 Principles of Information Processing in the Brain. The Principle of Uncertainty: precision vs. prediction. The Principle of Nonseparability ("UN-IBM"): processor vs. memory. The Principle of Infinity: limited matter vs. unbounded memory. The Principle of "Big Numbers Count": hyperinteraction of 10^11 neurons (or more than 10^17 molecules). The Principle of "Matter Matters": the material basis of "consciousness." [Zhang, 2005]

30 Neural Computers

31 Learning to extract the orientation of a face patch (Salakhutdinov & Hinton, NIPS 2007)

32 The training and test sets for predicting face orientation: 11,000 unlabeled cases; 100, 500, or 1000 labeled cases; test face patches from new people.

33 The root mean squared error in the orientation when combining GPs with deep belief nets:

Method                                       100 labels   500 labels   1000 labels
GP on the pixels                                   22.2         17.9          15.2
GP on top-level features                           17.2         12.7           7.2
GP on top-level features with fine-tuning          16.3         11.2           6.4

Conclusion: the deep features are much better than the pixels, and fine-tuning helps a lot.

34 Deep Autoencoders (Hinton & Salakhutdinov, 2006). They always looked like a really nice way to do non-linear dimensionality reduction, but it is very difficult to optimize deep autoencoders using backpropagation. We now have a much better way to optimize them: first train a stack of 4 RBMs, then "unroll" them, then fine-tune with backprop. [Encoder diagram: 28x28 linear input units, then layers of 1000, 500, and 250 neurons, down to a 30-D code.]
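A minimal sketch of the pretraining step: one contrastive-divergence (CD-1) update for a single RBM in the stack. The layer sizes follow the slide's first stage (784 visible units for 28x28 inputs, 1000 hidden units); the learning rate, random batch, and omitted bias updates are simplifications of this sketch.

```python
import numpy as np

# One CD-1 update for a single RBM, the building block of the stack.
rng = np.random.default_rng(2)
n_vis, n_hid = 784, 1000                   # 28x28 inputs -> first hidden layer
W = rng.normal(0, 0.01, size=(n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)  # biases (updates omitted for brevity)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def cd1_step(v0, W, lr=0.01):
    """One CD-1 weight update on a batch of visible vectors v0 (batch x n_vis)."""
    p_h0 = sigmoid(v0 @ W + b_h)                        # positive phase
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden states
    v1 = sigmoid(h0 @ W.T + b_v)                        # one-step reconstruction
    p_h1 = sigmoid(v1 @ W + b_h)                        # negative phase
    grad = (v0.T @ p_h0 - v1.T @ p_h1) / len(v0)        # approximate gradient
    return W + lr * grad

batch = rng.random((32, n_vis))            # stand-in for a batch of image vectors
W = cd1_step(batch, W)
```

In the full procedure, the hidden activities of one trained RBM become the training data for the next RBM up the stack, and the stack is then unrolled into an encoder-decoder for backprop fine-tuning.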

35 A comparison of methods for compressing digit images to 30 real numbers. [Image rows: real data; 30-D deep autoencoder; 30-D logistic PCA; 30-D PCA.]

36 Retrieving documents that are similar to a query document. We can use an autoencoder to find low-dimensional codes for documents that allow fast and accurate retrieval of similar documents from a large set. We start by converting each document into a "bag of words": a 2000-dimensional vector that contains the counts for each of the 2000 commonest words.
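A sketch of the bag-of-words step; the toy corpus here is invented, and in practice the 2000-word vocabulary would be built from the training documents.

```python
from collections import Counter

# Convert documents to count vectors over the commonest words.
docs = ["profits rose as sales grew", "sales fell while costs rose"]  # toy corpus
word_freq = Counter(w for d in docs for w in d.split())
vocab = [w for w, _ in word_freq.most_common(2000)]  # 2000 commonest words
index = {w: i for i, w in enumerate(vocab)}

def bag_of_words(doc):
    """Return the count vector of doc over the fixed vocabulary."""
    vec = [0] * len(vocab)
    for w in doc.split():
        if w in index:
            vec[index[w]] += 1
    return vec

print(bag_of_words(docs[0]))
```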

37 How to compress the count vector. We train the neural network to reproduce its input vector as its output. This forces it to compress as much information as possible into the 10 numbers in the central bottleneck. These 10 numbers are then a good way to compare documents. [Architecture: 2000 word counts in, layers of 500 and 250 neurons, a 10-unit bottleneck, then mirrored layers out to 2000 reconstructed counts.]
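After unrolling, the encoder half maps each count vector to its 10-D code. The sketch below uses random weights as stand-ins for the pretrained, fine-tuned ones, with logistic hidden units; the linear code layer is an assumption of this sketch.

```python
import numpy as np

# Encoder half of the unrolled autoencoder: 2000 -> 500 -> 250 -> 10.
rng = np.random.default_rng(3)
sizes = [2000, 500, 250, 10]
weights = [rng.normal(0, 0.01, size=(a, b)) for a, b in zip(sizes, sizes[1:])]

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def encode(counts):
    """Map a 2000-D word-count vector to its 10-D code."""
    h = counts
    for W in weights[:-1]:
        h = sigmoid(h @ W)    # logistic hidden layers
    return h @ weights[-1]    # linear code layer (an assumption of this sketch)

code = encode(rng.random(2000))
print(code.shape)             # (10,)
```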

38 Performance of the autoencoder at document retrieval. Train on bags of 2000 words for 400,000 training cases of business documents: first train a stack of RBMs, then fine-tune with backprop. Test on a separate 400,000 documents: pick one test document as a query and rank-order all the other test documents by the cosine of the angle between their codes; repeat this using each of the 400,000 test documents as the query (requiring 0.16 trillion comparisons). Plot the number of retrieved documents against the proportion that are in the same hand-labeled class as the query document.
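The ranking step might look like the sketch below; the random codes stand in for the autoencoder's 10-D outputs.

```python
import numpy as np

# Rank documents by the cosine of the angle between their codes.
rng = np.random.default_rng(4)
codes = rng.normal(size=(400_000, 10))  # stand-in codes for every test document

def retrieve(query_idx, k=10):
    """Return the indices of the k documents whose codes are nearest the query's."""
    q = codes[query_idx]
    sims = codes @ q / (np.linalg.norm(codes, axis=1) * np.linalg.norm(q))
    sims[query_idx] = -np.inf           # exclude the query itself
    return np.argsort(-sims)[:k]        # highest cosine similarity first

print(retrieve(0))
```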

39 [Plot: proportion of retrieved documents in the same class as the query, against the number of documents retrieved.]

40 First compress all documents to 2 numbers using a type of PCA; then use different colors for the different document categories.

41 First compress all documents to 2 numbers; then use different colors for the different document categories.




