The PDP Approach to Understanding the Mind and Brain Jay McClelland Stanford University January 21, 2014.

Early Computational Models of Human Cognition
–The digital computer instantiates a 'physical symbol system'.
–Simon announces that he and Allen Newell have programmed a computer to 'think'.
–Symbol processing languages are introduced, allowing success at theorem proving, problem solving, etc.
–Human subjects asked to give verbal reports while problem solving follow paths similar to those followed by Newell and Simon's programs.
–Psychologists investigate mental processes as sequences of discrete stages.
–Early neural network models fail to live up to expectations; Minsky and Papert kill them off.
–Cognitive psychologists distinguish between algorithm and hardware; Neisser deems physiology to be only of 'peripheral interest'.

Ubiquity of the Constraint Satisfaction Problem
In sentence processing:
–I saw the grand canyon flying to New York
–I saw the sheep grazing in the field
In comprehension:
–Margie was sitting on the front steps when she heard the familiar jingle of the "Good Humor" truck. She remembered her birthday money and ran into the house.
In reaching, grasping, typing…
David E. Rumelhart

Graded and variable nature of neuronal responses

Lateral Inhibition in Eye of Limulus (Horseshoe Crab)

The Interactive Activation Model

Input and activation of units in PDP models
The net input to unit i sums the weighted influences of other units plus any external input:
  net_i = Σ_j w_ij a_j + ext_i
General form of the unit update (with max = 1, min = −0.2, rest = 0):
  Δa_i = net_i (max − a_i)  if net_i > 0
  Δa_i = net_i (a_i − min)  if net_i ≤ 0
An activation function that links PDP models to Bayesian computation:
  a_i = 1 / (1 + e^(−net_i))
Or set the activation to 1 probabilistically, with p_i = 1 / (1 + e^(−net_i)).
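The update rule above can be sketched in a few lines of numpy. This is a minimal illustration, not the full interactive activation model: the function names, the two-unit usage, and the decay-toward-rest term (standard in IA models but not written out on this slide) are assumptions for the sketch.

```python
import numpy as np

def ia_update(a, w, ext, max_act=1.0, min_act=-0.2, rest=0.0, decay=0.1):
    """One interactive-activation update step.

    a:   current activations of all units
    w:   weight matrix, w[i, j] = weight from unit j to unit i
    ext: external input to each unit
    Parameters follow the slide: max = 1, min = -0.2, rest = 0.
    The decay term pulling activation back toward rest is an added assumption.
    """
    # Only units with positive activation send signals to their neighbors.
    net = w @ np.clip(a, 0.0, None) + ext
    # Excitatory net input drives a toward max; inhibitory input drives it toward min.
    delta = np.where(net > 0, net * (max_act - a), net * (a - min_act))
    delta -= decay * (a - rest)
    return np.clip(a + delta, min_act, max_act)

def logistic(net):
    """The activation function that links PDP units to Bayesian computation."""
    return 1.0 / (1.0 + np.exp(-net))
```

Running `ia_update` repeatedly on two mutually inhibiting units shows the constraint-satisfaction behavior: the unit with stronger external support settles near max while suppressing its competitor.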

Rules or Connections? The IA model only knows words, but human perceivers show perceptual facilitation when they perceive letters in non-words as well. Does our perceptual system follow rules based on a 'grammar' of legal forms?
  Syl -> {Ons} + Body
  Body -> Vwl + {Coda}
The IA model simulates perceptual facilitation in pseudowords as well as words. The knowledge is in the connections.

IA Model as a Bridge to a New Framework
It is different from the PSS framework in that:
–Knowledge is in the connections, hence
–Directly wired into the processing machinery rather than stored as such
–Patterns are not retrieved but constructed
–Intrinsically inaccessible to inspection
But it is similar in that:
–Programmed by its designer
–Embodies designer's choices about how to represent knowledge
–Units correspond directly to cognitive entities

Distributed Connectionist Models
What if we could learn from experience, without making prior commitments to the way cognitive entities are represented?
–Do there have to be units corresponding to such entities in our minds?
–Do we need separate subsystems for items that follow the rules and items that do not?
Two prominent application areas:
–Past tense inflection: Pay – paid, Lay – laid, Tay – taid; See – saw, Say – said, Have – had…
–Spelling to sound: HINT, MINT, PINT

Core Principles of Parallel Distributed Processing Models Using Learned Distributed Representations
–Processing occurs via interactions among neuron-like processing units via weighted connections.
–A representation is a pattern of activation.
–The knowledge is in the connections.
–Learning occurs through gradual connection adjustment, driven by experience.
–Learning affects both representation and processing.
(Figure: network mapping letter units H, I, N, T to phoneme units /h/ /i/ /n/ /t/.)

Learning in a Feedforward PDP Network
–Propagate activation 'forward', producing an activation a_r for all units using the logistic activation function.
–Calculate error at the output layer: δ_r = f′(net_r)(t_r − a_r)
–Propagate error backward to calculate error information at the 'hidden' layer: δ_s = f′(net_s) Σ_r w_rs δ_r
–Change weights: Δw_rs = ε δ_r a_s
(For the logistic function, f′(net) = a(1 − a).)
(Figure: network mapping letter units H, I, N, T to phoneme units /h/ /i/ /n/ /t/.)

Characteristics of the Past Tense and Spelling-Sound Models
–They use a single system of connections to correctly capture performance with regular, exceptional, and novel items: MINT, PINT, VINT; LIKE, TAKE, FIKE.
–They tend to over-regularize exceptions early in learning, as if they have 'discovered' a rule.
–The knowledge in the connections that informs processing of regular items also informs processing of the regular aspects of exceptions.
  Quasi-regularity: the tendency for exceptions to exhibit characteristics of fully regular items (PINT, YACHT; said, thought).
–They exhibit graded sensitivity to frequency and regularity, and a frequency by regularity interaction.

Frequency by Regularity Interaction (figure; example items: PINT, TREAD, MINT, LAKE)

Descartes' Legacy
Descartes' idea that the 'mind' and higher cognition are distinct from the physiological mechanisms of sensation and action persists to a degree to this day. Everyone accepts that neural networks underlie sensory and motor processes, but are they really relevant to understanding thought? Recent and future work with neural networks addressing semantic and mathematical cognition speaks to this question.