The Origins of Knowledge Debate

How do people gain knowledge about the world around them? Are we born with some fundamental knowledge about concepts like object, number, or space, or do we have to learn everything? On April 16, 2010, the Ohio State Center for Cognitive Science will host a debate on this question, which has intrigued scientists and philosophers for more than 2,000 years. The debate will bring together two outstanding scientists representing two different answers to the question of the origins of knowledge. Professor Susan Carey of Harvard University has been advocating the innate-knowledge position through her brilliant work on infant understanding of object and number. Professor James McClelland of Stanford University has been advocating the learning position through his pioneering research on learning in networks of interacting neuron-like elements. Each speaker will have 45 minutes to outline their case. There will then be an hour-and-a-half question-and-answer session, with a reception to follow.

Origins of Cognitive Abilities
Jay McClelland, Stanford University

Three Questions
– What is the basis of cognitive abilities?
– What do we start with?
– What causes abilities to change?
The answers to these questions are interrelated and need to be considered together.

What is the Basis of Cognitive Abilities?
Explicit data representations used in reasoning and behavior even though inaccessible to overt report
– Systems of rules (e.g., of grammar, math, or logic), 'written down as though in a book' (Fodor, 1982)
– Propositions, principles (Spelke)
– Trees, graphs, maps… (Tenenbaum et al.)
Wired-in dispositions to represent and to respond in particular ways
– As in neural networks and connectionist models
Explicit, culturally transmitted systems of representation and reasoning

Should We Care?
Some seek to characterize the basis of our cognitive abilities at an abstract level.
Perhaps the actual substrate doesn't matter, if the goal is to provide a perspicuous account of the 'knowledge' itself rather than the details of how it is actually used, acquired, or represented.
So one proceeds 'as though' people reason over explicit data structures, whether one really thinks they actually do or not.

Why the Choice Makes a Difference
Representation
– Neural networks can exhibit emergent behavior that approximates a (series of) explicit structures, but need not conform to any such structure exactly at any point
– These networks may actually capture domain structure and/or human abilities better than such data structures
Learning
– If we think we are using rules or propositions when we think and act, we must have a mechanism for rule induction and, it is often argued, a set of starting principles on which to proceed
– If we are learning by adjusting connections, there must still be a starting place and a mechanism for change, but their nature might be very different

Generic Principles of Learning for Neural Networks
– Adjust connections in proportion to the product of pre- and post-synaptic activation
– Adjust connections to reduce the discrepancy between expectation and observation
– Adjust connections to capture the input with neurons whose activations are sparse and independent
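
The three principles above correspond, roughly, to Hebbian learning, error-driven (delta-rule) learning, and sparse coding. Below is a minimal sketch of single-layer versions of each rule, assuming rate-coded units and numpy; the function names, shapes, and learning rates are illustrative, not taken from the talk.

import numpy as np

def hebbian_update(W, pre, post, lr=0.01):
    # Principle 1: adjust each connection in proportion to the product of
    # pre- and post-synaptic activation.
    return W + lr * np.outer(post, pre)

def error_driven_update(W, pre, target, lr=0.01):
    # Principle 2: adjust connections to reduce the discrepancy between
    # expectation (the network's prediction) and observation (the target).
    prediction = W @ pre
    error = target - prediction
    return W + lr * np.outer(error, pre)

def sparse_coding_update(W, x, code, lr=0.01, sparsity=0.1):
    # Principle 3: adjust connections (and the code) so that sparse, largely
    # independent unit activations reconstruct the input.
    residual = x - W.T @ code
    W_new = W + lr * np.outer(code, residual)
    code_new = code + lr * (W @ residual - sparsity * np.sign(code))
    return W_new, code_new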

Origins of Sensory Representations
– Hebbian learning, local within-eye correlations, and lateral excitation and inhibition lead to ocular dominance columns before the eyes open (Miller, 1989)
– Representations chosen to maximize sparsity and independence lead to the emergence of Gabor filters like those of V1 neurons when trained on natural images (Olshausen & Field, 2004)
How important is experience?

Merzenich’s Joined Finger Experiment

Generic Principles of Learning for Neural Networks
– Adjust connections in proportion to the product of pre- and post-synaptic activation
→ Adjust connections to reduce the discrepancy between expectation and observation
– Adjust connections to capture the input with neurons whose activations are sparse and independent

The Balance Scale Task

The Torque Difference Effect

Natural Structure and Connectionist Networks
Natural language structure is quasi-regular
– paid / said; baked / kept
– mint / pint; hive / give
– hairy / sporty; dirty
Approaches based on 'algebra-like' rules vs. exceptions don't capture quasi-regularity well
– All exceptions are cast out of the regular system, thereby failing to exploit what is known about the regulars
Connectionist networks naturally capture quasi-regularity in exceptions
– Problems with early models have been addressed
– Current models are the state of the art in tasks ranging from digit recognition and single-word reading to backgammon and semantic cognition
[Figure: a network mapping the spelling H-I-N-T onto the pronunciation /h/ /i/ /n/ /t/]

Quasi-regularity is pervasive in nature as well as in language
– Typicality, like regularity, is a matter of degree
– Some properties are more exceptional than others
– Typicalization errors occur in both lexical and object decision

Lexical and Object Decision
fruit / frute / flute / fluit

Conceptual Development in a Simple PDP Model (Rumelhart, 1990; Rogers & McClelland, 2004)
– Progressive differentiation (Keil; J. Mandler)
– U-shaped patterns of over-generalization (Mervis & others)
– Advantage of the basic level (Rosch)
– Frequency and expertise effects
– Sensitivity to linguistic distinctions: lumping vs. splitting, both idiosyncratic (lexical) and systematic (gender, classifiers…)
– Conceptual reorganization (Carey)
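
The model referred to here pairs an item with a relation and is trained to produce the item's attributes, with an item-specific Representation layer feeding a shared Hidden layer. A minimal sketch of that architecture, assuming PyTorch; the class name, layer sizes, and initialization range are illustrative assumptions rather than the published parameters.

import torch
import torch.nn as nn

class RumelhartStyleNet(nn.Module):
    # Item and relation enter as one-hot vectors; the item passes through a
    # Representation layer before joining the relation at the Hidden layer.
    def __init__(self, n_items=8, n_relations=4, n_attributes=36,
                 rep_size=8, hidden_size=15):
        super().__init__()
        self.item_to_rep = nn.Linear(n_items, rep_size)
        self.to_hidden = nn.Linear(rep_size + n_relations, hidden_size)
        self.to_attributes = nn.Linear(hidden_size, n_attributes)

    def forward(self, item, relation):
        rep = torch.sigmoid(self.item_to_rep(item))
        hidden = torch.sigmoid(self.to_hidden(torch.cat([rep, relation], dim=-1)))
        return torch.sigmoid(self.to_attributes(hidden))   # predicted attributes

def init_small(module):
    # Small initial weights: one of the inductive biases discussed later in the talk.
    if isinstance(module, nn.Linear):
        nn.init.uniform_(module.weight, -0.05, 0.05)
        nn.init.zeros_(module.bias)

model = RumelhartStyleNet()
model.apply(init_small)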

[Figure: snapshots of the model's representations at three points along the experience axis: Early, Later, Later Still]

Patterns of Coherent Covariation That Drive Learning

Conceptual Reorganization (Carey, 1985)
Carey demonstrated that young children 'discover' the unity of plants and animals as living things with many shared properties only around the age of 10. She suggested that the coalescence of the concept of living thing depends on learning about diverse aspects of plants and animals, including:
– The nature of life-sustaining processes
– What it means to be dead vs. alive
– Reproductive properties
Can reorganization occur in a connectionist net?

Conceptual Reorganization in the Model
Suppose superficial appearance information, which is not coherent with much else, is always available, and there is a pattern of coherent covariation across information that is contingently available in different contexts. The model forms initial representations based on superficial appearances. Later, it discovers the shared structure that cuts across the different contexts, reorganizing its representations.

Organization of Conceptual Knowledge Early and Late in Development

A Challenge to the Core Knowledge Position?
"The existence of conceptual change [...] challenges the view that knowledge develops by enrichment around a constant core, and it raises the possibility that there are no cognitive universals: no core principles of reasoning that are immune to cultural variation." (Carey & Spelke, 1994)
The simulation also raises the possibility that what we see early reflects simpler regularities that are easy to detect, and what we see later reflects less patently obvious regularities.

Inductive Biases that Affect Learning
Like other approaches, connectionist models require inductive biases to avoid over-fitting and to promote good generalization. The idea that such biases exist is not in dispute; the only question is their nature, and the degree to which they are domain-specific.

What has to be built in?
Theory-theory and related approaches
– To learn and generalize correctly, we need a domain theory to constrain our inferences
– To get started, we need initial domain-specific knowledge to guide the learning process
Connectionist and other learning-based approaches
– There are initial biases that constrain learning in connectionist systems, but they may be less domain-specific
– Domain-specific constraints can emerge from the learning process

Inductive Biases of the Rumelhart Model
– The architecture promotes sensitivity to shared structure across contexts
– Small initial weights promote initial sensitivity to broad generalizations
– These properties work together to allow patterns of coherent covariation to drive the network's representations, explaining differentiation and reorganization
– These properties also promote cross-domain generalization, leading to abstraction and sharing of knowledge across domains, and thus to implicit metaphor and the grounding of abstract concepts

Abstracting Cross-Domain Structure: Input Similarities and Learned Similarities

How Important Is Structure Represented in the Input to Learning?
– Coding of the input can bias a network's learning and generalization
– But that coding itself may arise from a learning process
– Helpful representations of the input can be learned and may not have to be pre-specified
– The choice of representation can arise strictly from relationships among inputs and outputs, and even from second-order relationships (similarities across domains in the pattern of similarities)

Emergence of Explicit Knowledge
Humans can and do acquire explicit knowledge through instruction and explicit reasoning. By this I mean:
– One or more stated propositions or observed events can lead to a new proposition or inferred state of affairs
– These inferences can be used to make further inferences or stored for later use
Note that the inference process need not be governed by explicit knowledge, as we illustrate by showing how such inferences occur in the Rumelhart network.

Start with a neutral representation on the representation units. Use backprop to adjust the representation to minimize the error.

The result is a representation similar to that of the average bird…

Use the representation to infer what this new thing can do.
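
A self-contained sketch of this 'backprop to representation' procedure, assuming PyTorch. The network's weights are frozen (random placeholders here stand in for trained weights) and gradient descent adjusts only the representation vector until it accounts for the observed properties; the settled representation can then be queried with other relations. Sizes, targets, and optimizer settings are illustrative.

import torch
import torch.nn as nn

rep_size, n_relations, hidden_size, n_attributes = 8, 4, 15, 36

# Placeholder layers standing in for a trained (representation, relation) -> attributes network.
to_hidden = nn.Linear(rep_size + n_relations, hidden_size)
to_attributes = nn.Linear(hidden_size, n_attributes)
for p in list(to_hidden.parameters()) + list(to_attributes.parameters()):
    p.requires_grad_(False)          # the weights stay fixed during this kind of inference

def predict(rep, relation):
    hidden = torch.sigmoid(to_hidden(torch.cat([rep, relation])))
    return torch.sigmoid(to_attributes(hidden))

# Observed properties of the new thing, expressed as target attributes for one relation.
relation = torch.zeros(n_relations); relation[0] = 1.0
observed = torch.zeros(n_attributes); observed[:3] = 1.0   # illustrative targets

# Start with a neutral representation and adjust it (not the weights) by backprop.
rep = torch.full((rep_size,), 0.5, requires_grad=True)
optimizer = torch.optim.SGD([rep], lr=0.5)
for _ in range(200):
    optimizer.zero_grad()
    loss = ((predict(rep, relation) - observed) ** 2).sum()
    loss.backward()
    optimizer.step()

# Query the settled representation with a different relation to infer
# what else this new thing can do or have.
other_relation = torch.zeros(n_relations); other_relation[1] = 1.0
inferred_attributes = predict(rep, other_relation)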

Three Questions
– What is the basis of cognitive abilities?
– What do we start with?
– What causes abilities to change?
The answers to these questions are interrelated and need to be considered together.

Some Tentative Concluding Suggestions
– Perhaps most explicit principles and systems of representation are cultural, scholarly, and scientific in origin; new ones are discovered by individuals, by processes that may have implicit as well as explicit components
– Perhaps the basis of many of our natural cognitive abilities is knowledge stored in connections, and perhaps this knowledge is the source of the intuitions that lead to genuine scientific discoveries
– Early development gives us starting places for further learning, but we might get there from rather different starting places
– It remains unclear how much domain-specific constraint needs to be built in for successful learning of interesting structure

Quick Points to Discuss More Later
– How do domain-specific constraints on generalization emerge from domain-general learning? In Rogers and McClelland (2004) we showed how this can occur, and I will be happy to explain.
– People can learn new things in a single trial; how does this happen in your approach? It happens through the use of a complementary learning system in the hippocampus, as discussed in McClelland, McNaughton, & O'Reilly (1995).

Proposed Architecture for the Organization of Semantic Memory
[Figure: regions for color, form, motion, action, valence, and name connected through the temporal pole, with the medial temporal lobe also shown]

Different Features Matter for Toys and Foods (Marcario, 1991)
3- to 4-year-old children see a puppet and are told he likes to eat, or play with, a certain object (e.g., top object at right).
– Children then must choose another one that will 'be the same kind of thing to eat' or that will be 'the same kind of thing to play with'.
– In the first case they tend to choose the object with the same color.
– In the second case they tend to choose the object with the same shape.

Adjustments to Training Environment
Among the plants:
– All trees are large
– All flowers are small
– Either can be bright or dull
Among the animals:
– All birds are bright
– All fish are dull
– Either can be small or large
In other words:
– Size covaries with properties that differentiate different types of plants
– Brightness covaries with properties that differentiate different types of animals
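
A sketch of how a training environment with this covariation structure could be enumerated before being coded as input/target patterns for the network; the kind and attribute names come from the list above, everything else is illustrative.

# Plants: size is tied to kind (trees large, flowers small); brightness varies freely.
# Animals: brightness is tied to kind (birds bright, fish dull); size varies freely.
items = []
for kind, size in [("tree", "large"), ("flower", "small")]:
    for brightness in ("bright", "dull"):
        items.append({"domain": "plant", "kind": kind,
                      "size": size, "brightness": brightness})
for kind, brightness in [("bird", "bright"), ("fish", "dull")]:
    for size in ("large", "small"):
        items.append({"domain": "animal", "kind": kind,
                      "size": size, "brightness": brightness})

for item in items:
    print(item)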

Use Backprop to Representation to Assign Representations
– 'Has skin' plus all combinations of big or small, bright or dull
– 'Has roots' plus all combinations

Similarities of Obtained Representations
– Size is relevant for plants
– Brightness is relevant for animals