PSY 324 Topic 8: Knowledge Dr. Ellen Campana Arizona State University


Memory and Cognition, PSY 324 Topic 8: Knowledge. Dr. Ellen Campana, Arizona State University

Concepts and Categories

Why do we have concepts? Last time we saw that memories are constructed through the process of inference: "War of the Ghosts," estimating high school grades, flashbulb memories (and other episodic memories). Inference happens all the time, not just in memory: when a new store opens, scripts help us know how to buy; a strange, hungry (but healthy) cat shows up at the door… do we offer food?

Concepts: Concepts are mental representations that make it possible to draw inferences, understand language, reason, and remember. They make up the semantic memories from the last section. They are also used for construction and other inferences, so they are part of episodic memory as well. They can be used for categorization: placing entities into groups called categories.

Categories: Knowing what category an entity belongs to gives you a lot of information about it. Without concepts, all knowledge about an entity would have to come from direct experience with that entity: each restaurant, each person, each cat, each class, etc. When a broken pencil is replaced, you would have to learn all over again that the new one can be used for writing. Categories also help explain things that would otherwise seem odd (e.g., the behavior of a Pittsburgh Steelers fan).

Categories: CAT. Difficult to train; likes to rub up against people and objects; has a tail; likes milk and fish; catches mice; a feline, related to lions and tigers; sleeps a lot, but is more active at night.

Categories TEAPOT

Models of Categorization Definitional Approach (Aristotle) Membership by definition, like a checklist Family Resemblances (Wittgenstein) Membership by similarity Prototype Approach (Rosch) Membership by similarity to “average” of category Exemplar Approach Membership by similarity to examples of members More specific version of family resemblances

Definitional Approach In geometry a square can be defined as “a plane figure having four equal sides” Features of a square: planar figure, four sides, sides are equal If something has all of these, it is a square If something is a square it must have all of these Definitional approach uses definitions like this for everything Bachelor: Male, unmarried, adult, human What about the pope?

Definitional Approach Assumes Sharp category boundary (in or out) Equality of members Representation of category is list of necessary and sufficient features If an entity meets the conditions it is a member If an entity is a member we know it meets the conditions In practice it is difficult to find such features Is a bookend furniture?

Family Resemblance: There is no one feature that all members have in common, yet each is in some way similar to the other category members. Variation within categories is OK.

Family Resemblance assumes: No strict "definition" of what's in or out based on individual features; membership is based on similarity; some members can be "better" examples than others. Think about it long enough and it gets confusing: similarity, but similarity to what? The next two approaches are more detailed versions.

Prototype Approach: What's a prototype? An "average" of all members of the category. The guy in the center is closest to the prototype (highest prototypicality), but he isn't the prototype itself; the prototype is not any actual member shown here.

Prototype Approach. Features: glasses (yes/no), hair (dark/light), nose (big/small), ears (big/small), mustache (yes/no). Prototype: 2/3 have glasses, 7/9 light hair, 7/9 big nose, 7/9 big ears, 5/9 mustache. The center guy has the highest prototypicality.
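A minimal sketch of how the prototype idea can be made concrete (the 0/1 feature codes for the nine cartoon faces are hypothetical, chosen only so the column tallies match the slide): the prototype is the feature-wise average of the members, and prototypicality is closeness to that average.

```python
import numpy as np

FEATURES = ["glasses", "light_hair", "big_nose", "big_ears", "mustache"]

# Nine hypothetical faces, one per row; 1 = feature present, 0 = absent.
# Column sums match the slide: 6/9 glasses, 7/9 light hair, 7/9 big nose,
# 7/9 big ears, 5/9 mustache.
members = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 1, 1],
    [0, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])

prototype = members.mean(axis=0)   # the "average" member, e.g. 5/9 have a mustache

def prototypicality(item, prototype):
    """Higher = closer to the category average (smaller total feature distance)."""
    return -np.abs(item - prototype).sum()

scores = [prototypicality(m, prototype) for m in members]
print("Prototype:", dict(zip(FEATURES, prototype.round(2))))
print("Most prototypical member (row index):", int(np.argmax(scores)))
```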

Support for Prototypes Rosch (1975)- prototypicality rating Participants got category names (bird) and lists of 50 members (robin, canary, ostrich, penguin, sparrow…) Provided rating on how well the item represented the category Results: Much agreement on ratings between participants

Rosch (1975) – Figure: typicality rating scales running from Poor to Very Good. CATEGORY: BIRDS — items ordered from Poor to Very Good: telephone, mirror, bat, penguin, owl, sparrow. CATEGORY: FURNITURE — items ordered from Poor to Very Good: telephone, mirror, china closet, chair and sofa.

Rosch and Mervis (1975): Participants wrote down as many characteristics/attributes as they could think of for each item. Bicycle: two wheels, you ride them, handlebars, pedals, doesn't use fuel… Dog: has four legs, barks, has fur… When an item's attributes overlap with many other members' attributes, family resemblance is high. Results: items with high prototypicality ratings also have high family resemblance (chairs and sofas, birds, etc.).

Other Studies about Prototypes Smith and Coworkers (1974)- Typicality Effect Used sentence verification T/F: an apple is a fruit T/F: a pomegranate is a fruit Results: sentences about highly prototypical objects are judged more quickly (typicality effect) Mervis and Coworkers (1976) When people name objects in a category, the most prototypical objects tend to come first

Other Studies about Prototypes. Rosch (1975b) – priming: prototypical members of a category are affected by a priming stimulus more than nonprototypical ones. Task: ignore the word you hear, just say whether the two colored circles match or not. Figure: after hearing "green," "same" responses to two highly prototypical greens took about 610 ms, while "same" responses to less prototypical greens took about 780 ms.

Other Studies about Prototypes. Rosch (1975b) – priming. Take-away message: people are faster to respond "same" when the colors are more highly prototypical. The word "green" may be linked to the prototype.

Exemplar Approach: Exemplars are specific examples; they provide another account of the effects we have seen. Examples of category members are saved in memory (typical as well as atypical). Potential members are compared to all exemplars; those with high family resemblance are like more of the exemplars. Remember: family resemblance correlates with prototypicality. The exemplar approach may be more useful for smaller categories (US presidents, very tall mountains).

Exemplar Approach. Features: glasses (yes/no), hair (dark/light), nose (big/small), ears (big/small), mustache (yes/no). Why does the center guy fit? His glasses match 2/3 of the exemplars, light hair matches 7/9, big nose 7/9, big ears 7/9, mustache 5/9. The center guy has high family resemblance.
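For contrast, a minimal sketch of the exemplar idea (illustration only, not a model from the lecture): instead of one averaged prototype, every stored exemplar is kept, and a candidate's fit is its summed similarity to all of them. The feature vectors are hypothetical, and the exponential decay of similarity with distance is a common choice in exemplar models rather than something stated on the slides.

```python
import numpy as np

# Hypothetical stored exemplars on the same five features as above:
# glasses, light hair, big nose, big ears, mustache (1 = present, 0 = absent).
exemplars = np.array([
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 1, 0, 1, 0],
])

def exemplar_fit(item, exemplars, sensitivity=1.0):
    """Summed similarity to every stored exemplar (higher = better category fit).
    Similarity decays exponentially with city-block feature distance."""
    distances = np.abs(exemplars - item).sum(axis=1)
    return np.exp(-sensitivity * distances).sum()

# Candidate face: glasses, light hair, big nose, big ears, no mustache.
candidate = np.array([1, 1, 1, 1, 0])
print("Fit to the category:", round(exemplar_fit(candidate, exemplars), 3))
```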

Models of Categorization Four Approaches Definitional Approach (Aristotle) Family Resemblances (Wittgenstein) Prototype Approach (Rosch) Exemplar Approach Current consensus – people use both the prototype approach and the exemplar approach (perhaps for different types of categories)

Levels of Categories

Levels of Categories: Categories have a hierarchical organization. Is one level more important or "privileged"? Example hierarchy: Furniture (superordinate level) → chair, table (basic level) → kitchen chair, dining-room chair, kitchen table, dining-room table (subordinate level).

Levels of Categories

LEVEL           EXAMPLE          COMMON FEATURES
Superordinate   Furniture        3       (going up from basic: lose a lot of information)
Basic           Table            9
Subordinate     Kitchen table    10.3    (going down from basic: gain just a little information)

Levels of Categories Rosch, Mervis and Coworkers (1976) Rating of common features for each category BASIC level seems to be “special” (go higher and you lose a lot of information, go lower and it doesn’t help much) When given a picture (Levis/Jeans/Clothes) people label it with the basic level category Rosch, Simpson and Coworkers (1976) Category label followed by picture – is it a member? People were faster for basic level categories Coley and Coworkers (1997) Students described trees with the word “trees” rather than “oak tree” or “plant”

Experience and Categories: When people become experts at a category, they tend to use more specific categories. Tanaka & Taylor (1991) – bird experts and novices. Experts: "robin, sparrow, jay, cardinal"; nonexperts: "bird." Members of the Itza culture use "oak tree," not "tree," partly because they live in close contact with the natural environment. The basic level may not be so "special" for everyone (it depends on experience).

Semantic Networks

Semantic Networks Semantic networks describe a theory for how concepts are organized in the mind Goal: to develop a computer model of memory Model we’ll discuss is Collins & Quillian (1969) Network of nodes connected by links Nodes = individual categories or concepts Links = relationships between categories or concepts Nodes are also associated with properties

Semantic Network, Collins & Quillian (1969) – Figure: Animal (breathes, has skin, can move around, eats) links down to Bird (has wings, can fly, has feathers) and Fish (has fins, can swim, has gills). Bird links down to Canary (can sing, is yellow) and Ostrich (has long thin legs, is tall, can't fly). Fish links down to Shark (can bite, is dangerous) and Salmon (is pink, is edible, swims upstream to lay eggs).

Semantic Network, Collins & Quillian (1969): Information about the properties of a category is retrieved by accessing its node, then following links until we find the desired property (or its opposite). Properties are stored at the highest-level category to which they generally apply, which saves space in memory (cognitive economy). Exceptions are added at lower nodes to deal with unusual cases. Example: properties of a canary.

Properties of a Canary – stored at the Canary node: can sing, is yellow; inherited from Bird: has wings, can fly, has feathers; inherited from Animal: breathes, has skin, can move around, eats.

Properties of an Ostrich – stored at the Ostrich node: can't fly (an exception), has long thin legs, is tall; inherited from Bird: has wings, has feathers; inherited from Animal: breathes, has skin, can move around, eats.
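Below is a minimal sketch of how this kind of hierarchy could be coded (my illustration, not Collins & Quillian's original program): each node stores only its own properties plus an "isa" link, exceptions sit at the lower node, and verifying a property means walking up the links, so the level count returned mirrors the reaction-time predictions discussed next.

```python
# Cognitive economy: properties live at the highest node they generally apply to;
# the ostrich's "can fly": False is an exception stored at the lower node.
network = {
    "animal":  {"isa": None,     "props": {"breathes": True, "has skin": True,
                                           "can move around": True, "eats": True}},
    "bird":    {"isa": "animal", "props": {"has wings": True, "can fly": True,
                                           "has feathers": True}},
    "canary":  {"isa": "bird",   "props": {"can sing": True, "is yellow": True}},
    "ostrich": {"isa": "bird",   "props": {"can fly": False,
                                           "has long thin legs": True, "is tall": True}},
}

def verify(concept, prop):
    """Walk up 'isa' links until the property (or its exception) is found.
    Returns (truth value, number of levels traversed)."""
    node, levels = concept, 0
    while node is not None:
        if prop in network[node]["props"]:
            return network[node]["props"][prop], levels
        node = network[node]["isa"]
        levels += 1
    return False, levels

print(verify("canary", "can sing"))   # (True, 0)  -- stored at the canary node
print(verify("canary", "can fly"))    # (True, 1)  -- inherited from bird
print(verify("canary", "has skin"))   # (True, 2)  -- inherited from animal
print(verify("ostrich", "can fly"))   # (False, 0) -- exception at the ostrich node
```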

Semantic Network Collins & Quillian (1969) Network is a functional model, not a physiological one It is designed to address how concepts are organized in the mind, not the brain Nodes DO NOT correspond to specific brain areas Links DO NOT correspond to connections between neurons or networks of neurons But how well does the model fit with data on how the mind organizes information?

Properties of a Canary, by levels away from "canary": 0 levels (Canary node): can sing, is yellow; 1 level (Bird node): has wings, can fly, has feathers; 2 levels (Animal node): breathes, has skin, can move around, eats.

Collins and Quillian (1969) Experiment: timed true/false judgments Compared properties with different distances from the original concept Measured reaction time to simple statements

Collins and Quillian (1969) – Figure: reaction time (higher is slower) plotted against levels away from "canary" (0, 1, 2). Property statements: "A canary can sing" (0 levels) is verified faster than "A canary can fly" (1 level), which is faster than "A canary has skin" (2 levels). Category statements: "A canary is a canary" is faster than "A canary is a bird," which is faster than "A canary is an animal."

Collins and Quillian (1969) Experiment: timed true/false judgments Compared properties with different distances from the original concept Measured reaction time to simple statements Results: As properties increased in distance from the concept node, reaction times increased Provides support for counter-intuitive claims related to cognitive economy Provides support for the model in general

Spreading Activation We have been talking about retrieval as traveling through the semantic network, but the model actually claims that retrieval happens through the process of spreading activation Whenever a node becomes “active”, some activity travels through links to closely associated nodes Spreading activation leads to further predictions in the model Priming of associated concepts

Spreading Activation – Figure: the network as a person searches from canary to bird. Activation spreads from bird to all linked nodes (animal, canary, ostrich, robin). Nodes that receive activation are primed (faster to recognize).
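A minimal sketch of spreading activation (the decay rate and threshold are arbitrary illustration values): activation starts at one node, travels along links while losing strength, and every node it reaches counts as primed.

```python
# Small undirected-style link table for the fragment in the figure above.
links = {
    "canary":  ["bird"],
    "ostrich": ["bird"],
    "robin":   ["bird"],
    "bird":    ["canary", "ostrich", "robin", "animal"],
    "animal":  ["bird"],
}

def spread(start, decay=0.5, threshold=0.1):
    """Spread activation outward from `start`; each hop passes `decay` times
    the sender's activation, stopping once it falls below `threshold`."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor in links.get(node, []):
                passed = activation[node] * decay
                if passed > threshold and passed > activation.get(neighbor, 0.0):
                    activation[neighbor] = passed
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

# Searching from "canary" activates "bird", which in turn primes its neighbors.
print(spread("canary"))
# {'canary': 1.0, 'bird': 0.5, 'ostrich': 0.25, 'robin': 0.25, 'animal': 0.25}
```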

Priming: Priming leads to faster recognition of concepts. One type is repetition priming, which you already know. Meyer and Schvaneveldt (1971) looked at priming of associates using a lexical decision method: they presented pairs of words, and participants decided whether or not both items in the pair were real words. YES: chair/money or bread/wheat. NO: fundt/glurb or bleem/dress. Critical comparison: pairs of real words that are close associates vs. distantly related concepts.

Meyer & Schvaneveldt (1971) – Figure: reaction time was shorter for associated word pairs than for non-associated word pairs.

Problems with the C & Q model: It didn't account for the typicality effect: "A canary is a bird" is verified faster than "An ostrich is a bird," but the model predicts equal reaction times for both. Evidence against the cognitive economy idea: people store some properties at the concept node. Evidence against the hierarchical structure: "A pig is an animal" is verified faster than "A pig is a mammal," but the model predicts the opposite because mammal is supposed to be between pig and animal.

Collins and Loftus Version: Abandons the hierarchical structure in favor of a structure based on individual experience. Allows multiple links between concepts (pig to animal AND pig to mammal to animal). Link distances can affect the spread of activation: shorter links for closely related concepts, longer links for more distantly related concepts. This accounts for the typicality effect.

Collins and Loftus Version – Figure: a fragment of the network in which "vehicle" is connected by shorter links to closely related concepts (car, truck, bus) and by longer links to more distantly related ones (ambulance, fire engine, street).
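A minimal sketch of the Collins & Loftus idea that link length governs how much activation spreads (the link lengths below are made up for illustration): close associates receive more activation, which is one way the typicality effect falls out, since a short canary–bird link would be traversed faster than a long ostrich–bird link.

```python
# Hypothetical link lengths from "vehicle" to its neighbors (shorter = more related).
weighted_links = {
    "vehicle": {"car": 1.0, "truck": 1.0, "bus": 1.5,
                "ambulance": 3.0, "fire engine": 3.0, "street": 4.0},
}

def spread_from(start, links=weighted_links):
    """Activation reaching each neighbor decays with link length."""
    return {neighbor: round(1.0 / length, 2)
            for neighbor, length in links.get(start, {}).items()}

print(spread_from("vehicle"))
# Close associates (car, truck) receive more activation than distant ones (street).
```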

A Theory that Explains Too Much? Collins & Loftus’s model rejected for being too powerful (Johnson-Laird & Coworkers, 1984) Too difficult to falsify Model can “explain” any pattern of data by adjusting link length No definite methods for determining length Can vary from person to person No constraints on how long activation hangs around, or how much information is needed to trigger a node

Elements of Good Psychological Theories Explanatory power – the theory can tell us what caused a specific behavior Predictive power – the theory can make predictions about future experiments Falsifiability – the theory can be shown to be wrong through specific experimental outcomes Generation of experiments – the theory stimulates a lot of research to test the theory, improve it, use methods suggested by it, and/or study new questions that it raises

The Connectionist Approach

Connectionist Networks: Research on semantic networks dropped off by the 1980s but came back with the rise of connectionism. Book series: Parallel Distributed Processing… (McClelland & Rumelhart, 1986). Connectionist models contain structures like nodes and links, but they operate very differently from semantic networks. They are inspired by the biology of the nervous system, specifically neurons in the brain.

Connectionist Networks: Units are "neuron-like"; they connect to form circuits, can be activated, and can inhibit or excite other units. Input units are activated by stimulation from the environment (like receptors). Hidden units get input from input units and connect to output units. Output units get input only from hidden units. A concept is a pattern of activation across units: distributed coding (unlike semantic nets!).

Connectionist Networks: Processing is governed by a weight at each connection between units. Positive weights correspond to excitation in the neural system; negative weights correspond to inhibition. Connections can also vary in strength. The connectionist approach is also called the Parallel Distributed Processing (PDP) approach: representation is distributed across units, and processing happens in parallel.

A Connectionist Network – Figure: input units feed hidden units, which feed output units; the output units' activation pattern is shown as +5, +3, +10, +4.
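A minimal sketch of such a network (random weights, made-up layer sizes, and a tanh hidden-unit activation are all assumptions): input units drive hidden units, hidden units drive output units, and a concept is the whole pattern of activation across units rather than a single node.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 4, 3, 4
W_in_hidden  = rng.normal(size=(n_input, n_hidden))    # positive weights = excitatory,
W_hidden_out = rng.normal(size=(n_hidden, n_output))   # negative weights = inhibitory

def forward(input_pattern):
    """Propagate activation from input units through hidden units to output units."""
    hidden = np.tanh(input_pattern @ W_in_hidden)   # hidden-unit activation pattern
    output = hidden @ W_hidden_out                  # output-unit activation pattern
    return hidden, output

canary_input = np.array([-1.0, -1.0, 1.0, 1.0])     # input pattern from the slides
hidden, output = forward(canary_input)
print("Hidden pattern:", hidden.round(2))
print("Output pattern:", output.round(2))
```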

Supervised Learning Weights modified through supervised learning Like a child, making mistakes and being corrected In the beginning all weights are random This is a computer program (not a person)

Steps in Supervised Learning: Step 1: input presented, activation propagates through the layers to the output layer.

Figure: the input pattern for CANARY (−1, −1, +1, +1) is presented to the input units; activation propagates through the hidden units and produces the output pattern +10, +6, +5, +2.

Steps in Supervised Learning: Step 1: input presented, activation propagates through the layers to the output layer. Step 2: output compared to the correct output; the difference is an error signal.

Figure: the network's output for CANARY (+10, +6, +5, +2) is compared to the correct pattern (+5, +3, +10, +4); the difference (−5, −3, +5, +2) is the error signal.

Steps in Supervised Learning: Step 1: input presented, activation propagates through the layers to the output layer. Step 2: output compared to the correct output; the difference is an error signal. Step 3: the error signal is used to adjust the weights through a process called back propagation. Connections that contributed the most error are adjusted the most.

Figure: the error signal (−5, −3, +5, +2) is propagated back through the network and all of the weights are adjusted.

Steps in Supervised Learning: Step 1: input presented, activation propagates through the layers to the output layer. Step 2: output compared to the correct output; the difference is an error signal. Step 3: the error signal is used to adjust the weights through back propagation; connections that contributed the most error are adjusted the most. Repeat the whole thing many times, and stop when the error signal is 0.
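A minimal sketch of the loop described above (toy layer sizes and a simplified backpropagation rule; the input and target patterns are the ones shown on the slides): present the input, compute the error signal against the correct pattern, adjust all the weights, and repeat.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(4, 3))   # input -> hidden weights (random at the start)
W2 = rng.normal(scale=0.5, size=(3, 4))   # hidden -> output weights

canary_input   = np.array([-1.0, -1.0, 1.0, 1.0])
correct_output = np.array([ 5.0,  3.0, 10.0, 4.0])   # target pattern from the slides

learning_rate = 0.05
for trial in range(2500):                             # "0-2500 trials" on the slides
    # Step 1: activation propagates to the output layer.
    hidden = np.tanh(canary_input @ W1)
    output = hidden @ W2
    # Step 2: the difference from the correct output is the error signal.
    error = correct_output - output
    # Step 3: back-propagate and nudge all the weights (largest errors move most).
    delta_hidden = (W2 @ error) * (1 - hidden ** 2)   # error passed back through tanh
    W2 += learning_rate * np.outer(hidden, error)
    W1 += learning_rate * np.outer(canary_input, delta_hidden)

final_output = np.tanh(canary_input @ W1) @ W2
print("Final output:", final_output.round(2))          # should be close to the target
```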

Figure: after training, the CANARY input pattern produces the correct output pattern (+5, +3, +10, +4) and the error signal is zero.

Supervised Learning: Supervised learning may seem straightforward, but the trick is training the network to represent many concepts at the same time. It can do this if it learns all of the concepts at the same time, and making only small changes to the weights on each trial helps. Hidden units are free to respond in any pattern. Over time (0–2500 trials), hidden-unit patterns come to differ for different concepts, and similar concepts have similar hidden-unit activation.

Advantages of Connectionist Models (goals of the approach): A slow learning process creates a network that can handle many different inputs. Representation is distributed (as in the brain). Graceful degradation: damage and/or incomplete information does not completely disrupt a trained network. Learning can be generalized: because similar concepts have similar patterns, the network can make predictions about concepts it has never seen. Computer models have been developed that match some aspects of human performance and of deficits related to brain damage.

Connectionism Today Opinions are divided Some like the connection with biology, while some think the connection is tenuous at best Recent trend in cognitive science to return to ideas about semantic networks and combine them with connectionist approaches Those who are really interested in connectionist models and what they can express have begun to call their work connection science (not cognitive science or cognitive psychology)

Categories in the Brain: Warrington and Shallice (1984) – patients with damage to the inferior temporal lobe (IT). Visual agnosia: patients can see objects but cannot name them. A double dissociation for living vs. nonliving things. Hillis and Caramazza (1991) – IT damage as well: a patient could not name nonliving things or some living things (fruits and vegetables) but could name other living things (animals). Chao and Coworkers (1999) – fMRI shows specialized areas for categories… with overlap.

Categories in the Brain It looks like there are areas in the brain that are specialized for different categories Distributed coding with overlap Categories with similar features, similar activation Category-specific neurons respond to individual categories like “house” or “snake” Result from single-cell recording Specific example may cause a specialized pattern of activation across many category-specific neurons Example: faces from chapter 2

The End. Reminder: Midterm 2 is next week!