A Theory of Cerebral Cortex (or, “How Your Brain Works”) Andrew Smith (CSE)

Outline
- Questions
- Preliminaries
- Feature Attractor Networks
- Antecedent Support Networks
- Attractive properties of the theory / Conclusions

Questions (to be answered!)
- What is cortical knowledge and how is it stored?
- How is it used to carry out thinking?
- How is it integrated with sensory input and motor output?

Preliminaries
- Thinking is a symbolic process.
- Thinking relies only on classical mechanics (unlike the Penrose/Hameroff model).
- Thinking is not a mathematically grounded reasoning process, but rather confabulation!

Feature Attractor Neuronal Networks
An object (sensory, abstract, etc.) or action (movement process, thought process, etc.) is represented by a collection of feature attractor networks, each expressing a single token (node) from its lexicon.
[Figure: a feature attractor network, i.e. a cortical region (one of about 120,000) in the cerebral cortex joined by bidirectional connections to a paired thalamic region in the thalamus. Each feature attractor network implements one 'column' of tokens.]
The human cortical surface area is roughly 240,000 mm²; each region encompasses a cortical surface area of roughly 2 mm² and possesses a total of about 200,000 neurons.
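The rounded figures above hang together; here is a quick back-of-the-envelope check in Python (only the numbers shown on the slide are used; the script is just arithmetic, not part of the theory):

```python
# Back-of-the-envelope check of the slide's rounded anatomy figures.
cortical_surface_mm2 = 240_000   # human cortical surface area (slide figure)
region_area_mm2 = 2              # surface area of one cortical region
neurons_per_region = 200_000     # neurons in one region

regions = cortical_surface_mm2 // region_area_mm2
print(regions)                        # 120,000 regions, matching the slide
print(regions * neurons_per_region)   # 2.4e10 neurons in total, order of 10^10
```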

Feature Attractor Networks
Each network has a lexicon of random (!) tokens, sparsely encoded; each token has hundreds of neurons on at a time, out of 50,000. This lexicon is fixed very early in life and never changes.
The function of the network is to change the pattern of activation within a particular region so that it expresses the token in its lexicon "closest" to the original pattern of activation (a.k.a. "vector quantizers").
The feature attractor networks are extremely robust to noise/partial tokens:
- A region can start out with 10% of a particular token and, within one iteration, express the complete token.
- A region can start out expressing many (hundreds of) partial tokens and, within one iteration, express just the one token that was most complete. (More on this later…)
Now we have ~120,000 powerful pattern recognizers; let's wire them up…
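As a rough illustration of the "vector quantizer" idea, here is a minimal Python sketch of a single feature attractor: a fixed lexicon of random, sparse tokens and a settling step that snaps any partial pattern onto the closest stored token. Only the 50,000-neuron figure and "hundreds active per token" come from the slide; the lexicon size, the settling rule, and the recovery demo are illustrative assumptions.

```python
# Minimal sketch of one feature attractor network as a vector quantizer over
# sparse binary tokens. 50,000 neurons and "hundreds active" come from the
# slide; the lexicon size and settling rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 50_000        # neurons available in one cortical region
ACTIVE_PER_TOKEN = 300    # hundreds of neurons on at a time per token
LEXICON_SIZE = 1_000      # hypothetical lexicon size for this sketch

# Fixed lexicon: each token is a random, sparse set of active neurons.
lexicon = [frozenset(rng.choice(N_NEURONS, ACTIVE_PER_TOKEN, replace=False))
           for _ in range(LEXICON_SIZE)]

def settle(active_neurons):
    """Snap the current activation pattern to the closest stored token
    (largest overlap), i.e. one vector-quantization step."""
    active = set(active_neurons)
    return max(range(LEXICON_SIZE), key=lambda t: len(lexicon[t] & active))

# A region starting with ~10% of token 42 recovers the complete token:
partial = rng.choice(sorted(lexicon[42]), ACTIVE_PER_TOKEN // 10, replace=False)
print(settle(partial))   # -> 42
```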

Antecedent Support Networks (ASNs)
The role of the ASN is to do the thinking.
- If several active tokens have strong links to an inactive token, the ASN will activate that token (e.g. "smoke" + "heat" -> "fire").
- Learning occurs when the ASN increases the link weight between two tokens.
Short-term memory = which tokens are currently active.
Long-term memory = the link strengths between tokens.
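A minimal Python sketch of how antecedent support might look in code: link weights play the role of long-term memory, the set of currently active tokens is the short-term memory, and "thinking" activates the candidate token with the strongest total support from active antecedents. Token names and weight values are illustrative, not taken from the slides.

```python
# Minimal sketch of antecedent support / confabulation (illustrative only).
from collections import defaultdict

links = defaultdict(float)   # long-term memory: link strengths between tokens

def learn(antecedent, consequent, increment=1.0):
    """Learning: the ASN increases the link weight between two tokens."""
    links[(antecedent, consequent)] += increment

def confabulate(active_tokens, candidate_tokens):
    """Activate the inactive token receiving the most support from the
    currently active tokens (the short-term memory)."""
    return max(candidate_tokens,
               key=lambda c: sum(links[(a, c)] for a in active_tokens))

learn("smoke", "fire")
learn("heat", "fire")
learn("heat", "summer")
print(confabulate({"smoke", "heat"}, ["fire", "summer", "rain"]))   # -> fire
```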

Antecedent Support Neuronal Network Implementation – Randomness to the rescue!
[Figure: neurons of token i in a source region connect, via transponder neurons in the cerebral cortex, to neurons of token j in a target region; the synapses onto token j are the ones that are strengthened.]
"Axons from neurons of token i send their collaterals randomly to millions of neurons in the local area. Of these, a few thousand transponder neurons just happen to receive sufficient input from i to become active. Of those, a few hundred just happen to send axons to neurons belonging to token j on the target region, activating (part of) token j."
The wiring of transponder neurons (pyramidal neurons) is also fixed at a very early age.
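To see why "a few thousand transponder neurons just happen to receive sufficient input" is statistically plausible, here is a toy Python calculation under assumed wiring numbers of my own (300 source neurons, a 2% collateral-contact probability, a firing threshold of 12 coincident inputs; none of these figures appear on the slide): with independent random wiring, each transponder's input count is roughly binomial, and only the upper tail crosses threshold.

```python
# Toy tail calculation with illustrative numbers (not from the slides).
import numpy as np

rng = np.random.default_rng(1)

SOURCE_NEURONS = 300      # neurons expressing token i in the source region
TRANSPONDERS = 100_000    # candidate pyramidal neurons in the local area
P_COLLATERAL = 0.02       # chance a given source neuron contacts a given transponder
THRESHOLD = 12            # coincident inputs needed for a transponder to fire

# Under independent random wiring, each transponder's input count is binomial.
inputs = rng.binomial(SOURCE_NEURONS, P_COLLATERAL, size=TRANSPONDERS)
active = np.flatnonzero(inputs >= THRESHOLD)
print(f"{active.size} of {TRANSPONDERS} transponders happen to relay token i")
# With these numbers roughly two thousand fire, i.e. the "few thousand"
# scale the slide describes.
```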

Input / Output
Input: Sensory neurons connect to token neurons (layers III and IV), just like transponder neurons.
Output: Motor neurons can receive their inputs from the token neurons, just like transponder neurons.

Attractive features (no pun intended…)
The Hecht-Nielsen model shows:
- how neurons can grow randomly and become organized;
- that a large range of synaptic weights is not necessary;
- how you can get a song stuck in your head (you're unable to reset regions of your cortex; one bar evokes the next…);
- how it can be viewed as implementing Paul Churchland's "semantic maps" from the last lecture of CogSci 200. (IMHO…)
A simulation of this model has solved the classic "cocktail-party problem."

Conclusions
“[All knowledge comes from creating associations between experiences.]” - Aristotle
“Within 12 to 36 months, this theory will revolutionize Artificial Intelligence.” - Hecht-Nielsen (as of last week…)