CSC321: Neural Networks Lecture 19: Simulating Brain Damage Geoffrey Hinton.

The Effects of Brain Damage

Performance deteriorates in some unexpected ways when the brain is damaged, and this can tell us a lot about how information is processed.
– Damage to the right hemisphere can cause neglect of the left half of visual space and a loss of the sense of ownership of body parts.
– Damage to parts of the infero-temporal cortex can prevent face recognition.
– Damage to other areas can destroy the perception of color or of motion.
Before brain scans, the performance deficits caused by physical damage were the main way to localize functions in the human brain, since recording from human brain cells is not usually allowed (though it can give surprising results!).

Acquired dyslexia

Occasionally, damage to the brain of an adult causes bizarre reading deficits.
– Surface dyslexics can read regular nonsense words like “mave” but mispronounce irregular words like “yacht”.
– Deep dyslexics cannot deal with nonsense words at all. They sometimes read “yacht” correctly, but sometimes misread it as “boat”. They are also much better at concrete nouns than at abstract nouns (like “peace”) or verbs.

Some weird effects

Example errors made by deep dyslexics (stimulus → response):
– PEACH → “apricot”
– SYMPATHY → “orchestra”
– HAM → “food”

The dual route theory of reading

Marshall and Newcombe proposed that there are two routes from print to speech that can be damaged separately.
– Deep dyslexics have lost the phonological route and may also have damage to the semantic route.
But there are consistent peculiarities that are hard to explain this way.
[Diagram: the visual features of the word (“PEACH”) feed two routes – one via the meaning of the word, one via the sound of the word – both converging on speech production.]

An advantage of neural network models

Until quite recently, nearly all the models of information processing that psychologists used were inadequate for explaining the effects of damage.
– Either they were symbol-processing models that had no direct relationship to hardware,
– or they were just vague descriptions that could not actually do the information processing.
There is no easy way to make detailed predictions of how hardware damage will affect performance in models of these kinds. Neural net models have several advantages:
– They actually do the required information processing rather than just describing it.
– They can be physically damaged and the effects observed.

A model of the semantic route
– Grapheme units: one unit for each letter/position pair.
– Intermediate units: allow a non-linear mapping.
– Sememe units: one per feature of the meaning.
– Recurrently connected clean-up units: capture regularities among the sememes.
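A minimal NumPy sketch of this architecture, for concreteness. The layer sizes, weight scales, and settling schedule below are made up (the slide does not give them), and the random weights stand in for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes -- not the original model's.
N_GRAPHEME = 26 * 4   # one unit per letter/position pair (26 letters x 4 positions)
N_HIDDEN   = 40       # intermediate units for the non-linear mapping
N_SEMEME   = 68       # one unit per semantic feature
N_CLEANUP  = 60       # recurrently connected clean-up units

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random weights stand in for learned ones.
W_gh = rng.normal(0, 0.1, (N_GRAPHEME, N_HIDDEN))  # grapheme -> intermediate
W_hs = rng.normal(0, 0.1, (N_HIDDEN, N_SEMEME))    # intermediate -> sememe
W_sc = rng.normal(0, 0.1, (N_SEMEME, N_CLEANUP))   # sememe -> clean-up
W_cs = rng.normal(0, 0.1, (N_CLEANUP, N_SEMEME))   # clean-up -> sememe (recurrent)

def run(grapheme, steps=6):
    """One bottom-up pass, then let the sememe/clean-up loop settle."""
    hidden = sigmoid(grapheme @ W_gh)
    bottom_up = hidden @ W_hs          # fixed bottom-up input to the sememe layer
    sememe = sigmoid(bottom_up)
    for _ in range(steps):
        cleanup = sigmoid(sememe @ W_sc)
        sememe = sigmoid(bottom_up + cleanup @ W_cs)
    return sememe

word = rng.integers(0, 2, N_GRAPHEME).astype(float)  # a fake letter/position code
print(run(word).shape)  # (68,)
```

In a sketch like this, "lesioning" the model is just zeroing rows or columns of a weight matrix, or clamping some units to zero, and re-running the settling loop.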

What the network learns

We used recurrent back-propagation for six time steps, with the sememe vector as the desired output for the last three time steps.
– The network creates semantic attractors. Each word meaning is a point in semantic space and has its own basin of attraction.
– Damage to the sememe or clean-up units can change the boundaries of the attractors. This explains semantic errors: meanings fall into a neighboring attractor.
– Damage to the bottom-up input can change the initial conditions for the attractors. This explains why early damage can cause semantic errors.
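To make the training signal concrete, here is a hedged sketch (toy sizes, untrained random weights, and a hypothetical `unrolled_loss` helper): the sememe/clean-up loop is unrolled for six steps, but only the last three steps are compared to the desired sememe vector, so the net is free to settle before it is scored.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

N_SEM, N_CLEAN = 10, 8       # toy sizes, not the original model's
STEPS, TARGET_STEPS = 6, 3   # score only the last three of six steps
W_sc = rng.normal(0, 0.5, (N_SEM, N_CLEAN))   # sememe -> clean-up
W_cs = rng.normal(0, 0.5, (N_CLEAN, N_SEM))   # clean-up -> sememe

def unrolled_loss(bottom_up, target):
    """Squared error summed over the last TARGET_STEPS of the unrolled loop.
    Recurrent back-propagation would differentiate this through all six steps."""
    sememe = sigmoid(bottom_up)
    loss = 0.0
    for t in range(STEPS):
        cleanup = sigmoid(sememe @ W_sc)
        sememe = sigmoid(bottom_up + cleanup @ W_cs)
        if t >= STEPS - TARGET_STEPS:
            loss += np.sum((sememe - target) ** 2)
    return loss

bottom_up = rng.normal(0, 1, N_SEM)
target = rng.integers(0, 2, N_SEM).astype(float)
print(unrolled_loss(bottom_up, target))
```

Because the early steps carry no error, gradient descent shapes the dynamics so that trajectories end up sitting still at the target vector, which is what creates a point attractor at each word meaning.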

Sharing the work between attractors and the bottom-up pathway

Feed-forward nets prefer to produce similar outputs for similar inputs.
– Attractors can be used to make life easy for the feed-forward pathway. Damaging the attractors can therefore cause errors involving visually similar words.
– This explains why patients who make semantic errors always make some visual errors as well.
– It also explains why errors that are both visually and semantically similar are particularly frequent.
[Diagram: “cot” and “cat” shown as nearby points in both visual space and semantic space.]

Can very different meanings be next to each other in semantic space?

Take two random binary vectors in a high-dimensional space.
– Their scalar product depends on the fraction of the bits that are on in each vector.
– The average of two random binary vectors is much closer to each of them than other random binary vectors are!
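This is easy to check numerically. A small sketch (the dimensionality and seed are arbitrary): the midpoint of two random binary vectors lies at exactly half their mutual distance, so it is far closer to each of them than a third unrelated random vector is.

```python
import numpy as np

rng = np.random.default_rng(42)
D = 1000  # an arbitrary high dimensionality

a = rng.integers(0, 2, D).astype(float)
b = rng.integers(0, 2, D).astype(float)
c = rng.integers(0, 2, D).astype(float)  # a third, unrelated random vector
midpoint = (a + b) / 2.0

d_ab = np.linalg.norm(a - b)         # distance between two random vectors
d_am = np.linalg.norm(a - midpoint)  # exactly d_ab / 2
d_ac = np.linalg.norm(a - c)         # roughly the same size as d_ab

print(d_am < d_ac)  # True: the blend is much closer to a than a random vector is
```

So two quite different meanings can share a nearby "blend" point in the space, which is why the boundaries between their basins of attraction matter so much.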

Fractal attractors?

The semantic space may have structure at several different scales.
– Large-scale structure represents broad categories.
– Fine-scale structure represents finer distinctions.
Severe damage could blur out all the fine structure:
– Meanings get cleaned up to the meaning of the broad category, and complex features get lost.
– May this explain the regression to childhood seen in senility?
[Diagram: a semantic space with coarse attractors for broad categories – tools (hammer, saw, drill) and animals (weasel, cat, dog) – each containing finer attractors for individual meanings.]

The advantage of concrete words

We assume that concrete nouns have many more semantic features than abstract words.
– So they can benefit much more from the semantic clean-up: the right meaning can be recovered even if the bottom-up input is severely damaged.
– But severe damage to the semantic part of the network will hurt concrete nouns more, because they rely more on the clean-up.