Categorization
Classical View
– Defining properties, e.g., triangles have 3 sides and 3 angles adding up to 180 degrees
– Unquestioned for most of history
Challenge to Classical View: Wittgenstein (1953)
– Some categories don't have necessary and sufficient properties (e.g., a game)
– Family resemblances: members of a category may be related to each other yet share no single common property (a cloud of points in space)
– Centrality: some members are better examples than others
– Gradience: some categories have degrees of membership
Challenge: Typicality
If the classical view were right, all members should be equally "good", but they're not.
Rosch's Typicality Effects
– Typicality varies (e.g., sparrow vs. ostrich)
– Typicality is associated with how often a member's properties occur in the category
– Typical members are categorized faster
– Typical members are generated first and more often
– Typical members are learned first by kids
– Similarity asymmetries
– Generalization asymmetries
Prototype Theory
– The most typical member is the basis of the category
– The prototype is explicitly stored
– Category membership is determined by similarity to the prototype
– Not classical: no defining properties
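To make this concrete, here is a minimal sketch (mine, not from the lecture) of prototype-based classification, assuming made-up two-dimensional feature vectors and toy category names:

```python
# Hypothetical sketch: the prototype is the central tendency (mean)
# of a category's instances; classification picks the nearest prototype.
import math

def centroid(instances):
    # Mean feature vector across a category's instances.
    n = len(instances)
    return [sum(dim) / n for dim in zip(*instances)]

# Toy (size, flies) feature vectors; all values are invented.
categories = {
    "bird":   [[0.2, 0.9], [0.3, 1.0], [0.6, 0.4]],
    "mammal": [[0.7, 0.0], [0.9, 0.1], [0.5, 0.0]],
}
prototypes = {name: centroid(items) for name, items in categories.items()}

def classify(item):
    # Membership is determined by similarity (here, inverse distance)
    # to the explicitly stored prototype.
    return min(prototypes, key=lambda name: math.dist(item, prototypes[name]))

print(classify([0.25, 0.8]))  # near the bird prototype -> 'bird'
```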
Demonstration of the Prototype Effect
– Franks and Bransford: a prototype plus distortions of it, ranging from low to high
Experiment
– Phase I: view instances (the prototype itself is never shown)
– Phase II: view new instances and rate old/new confidence
  – Old distortions
  – New distortions
  – Prototypes
Results of the Prototype Experiment
– Subjects were most confident of having seen the prototypes
– The more distorted an item was from the prototype, the lower the confidence (regardless of whether it had actually been seen)
Problems with Prototypes
Too limited: a prototype contains only the central tendency of the category, so it loses information about
– Variance (how much tolerance for distortion is OK)
– Correlations among properties (e.g., only small birds sing)
– Category size
As a result, an instance may be closer to the prototype of one category but still belong to another.
Exemplar-Based Theories
– Store individual instances rather than prototypes
– How do you classify a new stimulus?
  – Compare it to all instances of all categories
  – Assign it to the category of the best-matching instance
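For contrast, a similar sketch (same toy feature vectors, also purely illustrative) of exemplar-based classification: nothing is abstracted, and the new stimulus simply takes the label of its best-matching stored instance:

```python
# Hypothetical sketch of exemplar-based categorization: store every
# instance and classify by the single nearest neighbor.
import math

# Each stored exemplar keeps its own feature vector; labels are toys.
exemplars = [
    ("bird",   [0.2, 0.9]),   # sparrow
    ("bird",   [0.6, 0.4]),   # ostrich
    ("mammal", [0.7, 0.0]),   # dog
    ("mammal", [0.9, 0.1]),   # horse
]

def classify(item):
    # Compare the stimulus to ALL instances of ALL categories,
    # then take the label of the closest one.
    label, _ = min(exemplars, key=lambda ex: math.dist(item, ex[1]))
    return label

print(classify([0.55, 0.35]))  # closest to the ostrich exemplar -> 'bird'
```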
Problems with Similarity as the Basis of Categories
– A drawback shared by prototype and exemplar-based theories
– Both rely on "similarity", which is hard to define
– Is similarity a matter of shared "features"? Which ones?
Similarity Depends on Context
– Tversky faces demonstration (split-class demo)
Similarity Is Especially Bad for Certain Categories
– Superordinate categories
  – How is a car similar to a boat?
  – How is an amoeba similar to an elephant?
– Ad-hoc categories
  – What is this category: children, money, photographs, jewelry, pets?
  – On-the-fly categories: what would you take on a camping trip?
Theory-Based Categories
– An animal starts out with bird-like features
– Due to an accident, it acquires insect features
– The animal mates and produces bird-like offspring
– The animal is judged to be a "bird", even with its insect features
– The judgment is based on some internal "birdness", not on appearance
Hierarchical (Taxonomic) Categories
Thing - Living Thing - Animal - Mammal - Dog - Schnauzer - "Smokey"
Basic Level (Rosch)
– The most natural level
– The middle level (not too specific, not too general)
– More precisely, the basic level
  – Maximizes within-category perceptual similarity
  – Minimizes between-category perceptual similarity
Hierarchical Categories: Examples
– Superordinate: vehicles, furniture
– Basic: cars, boats
– Subordinate: Accord, Camry
– Instance: Sean's Camry, Sue's Camry
Similarity and Hierarchical Categories
Basic-Level Effects
– Basic-level names are generated fastest and first
– Learned first by kids
– Shorter words ("dog" vs. "Schnauzer")
– Relatively universal across cultures
– Biederman's RBC theory is mostly basic-level recognition
Semantic Memory: Concepts
Geometric approach: concepts and items are represented as points in a high-dimensional space.
– Similarity between items is the inverse of the distance between their points.
– Categorization is the task of finding which concept point is closest to the point that represents the item in question (i.e., "is it a cat?" asks whether the point representing "it" is closer to the "cat" point than to any other point in the space).
– Figure: cat, dog, horse, pig, and duck plotted as points; closer together = more similar.
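A small illustration of the geometric approach (all coordinates invented for this example): similarity as inverse distance, and categorization as finding the nearest concept point:

```python
# Hypothetical sketch: concepts live as points in a feature space,
# similarity is inverse distance, and categorization picks the
# nearest concept point.
import math

concepts = {
    "cat":   [0.2, 0.3],
    "dog":   [0.3, 0.35],
    "horse": [0.8, 0.7],
    "pig":   [0.7, 0.5],
    "duck":  [0.1, 0.9],
}

def similarity(p, q):
    # Inverse of distance (+1 avoids division by zero for identical points).
    return 1.0 / (1.0 + math.dist(p, q))

def categorize(item_point):
    # "Is it a cat?" becomes: is the 'cat' point the nearest one?
    return max(concepts, key=lambda c: similarity(item_point, concepts[c]))

print(categorize([0.22, 0.31]))  # nearest concept point -> 'cat'
```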
Semantic Memory: Concepts
Geometric axioms:
– Minimality: similarity between an object and itself is always maximal (d[A,A] = 0)
– Symmetry: similarity between A and B is the same as between B and A (d[A,B] = d[B,A])
– Triangle inequality: if A is similar to B and B is similar to C, then A can't be too dissimilar to C (d[A,C] ≤ d[A,B] + d[B,C])
These axioms DON'T WORK FOR PEOPLE:
– Familiar things are more similar to themselves than unfamiliar things are: S(apple, apple) > S(pomegranate, pomegranate)
– Unfamiliar things are more similar to familiar things than vice versa: S(pomegranate, apple) > S(apple, pomegranate)
– Things can be similar for different reasons (the Jamaica, Cuba, North Korea example)
Semantic Memory: Concepts
Featural approach: concepts and items are represented as lists of features. Similarity between items is given by:
S(A,B) = a·features(A and B) - b·features(A not B) - c·features(B not A)
So similarity increases as two items have more features in common, and decreases as each has its own non-shared features. Notice there can be biases: the coefficients a, b, and c can be weighted differently, so that each set of features has a different effect. This is how the model can account for violations of the metric axioms...
Semantic Memory: Concepts

Feature          apple   orange   banana   pomegranate
edible             +       +        +          +
has a skin         +       +        +          +
round              +       +                   +
red                +                           +
edible skin        +
edible seeds                        +          +
good for pies      +
good for juice     +       +

Suppose the equation is: S(A,B) = 1·(A and B) - 1·(A not B) - 0.5·(B not A)
S(apple, apple) = 7 - 0 - 0 = 7
S(pomegranate, pomegranate) = 5 - 0 - 0 = 5 (violation of minimality)
S(apple, pomegranate) = 4 - 3 - 0.5·1 = 0.5
S(pomegranate, apple) = 4 - 1 - 0.5·3 = 1.5 (violation of symmetry)
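The slide's arithmetic can be reproduced directly; this sketch implements the contrast model over the feature sets in the table above (the function name S and the weights follow the slide's equation):

```python
# Runnable sketch of the featural contrast model, using the
# apple/pomegranate feature sets from the table above.
apple       = {"edible", "has a skin", "round", "red",
               "edible skin", "good for pies", "good for juice"}
pomegranate = {"edible", "has a skin", "round", "red", "edible seeds"}

def S(A, B, a=1.0, b=1.0, c=0.5):
    """Shared features count positively; each item's distinctive
    features count negatively, with separate weights."""
    return (a * len(A & B)    # features in both A and B
            - b * len(A - B)  # features of A not in B
            - c * len(B - A)) # features of B not in A

print(S(apple, apple))              # 7.0
print(S(pomegranate, pomegranate))  # 5.0 -> violates minimality
print(S(apple, pomegranate))        # 0.5
print(S(pomegranate, apple))        # 1.5 -> violates symmetry
```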
Semantic Memory: Concepts
How can we implement the featural model in a network?
– Units represent concepts and features, with links connecting related concepts and the features that describe them.
– Assume spreading activation: when one unit is activated, activation automatically spreads to all connected units over time.
– Assume the fan effect: the more units activation has to spread across, the weaker it becomes.
– When we compare two things, both units are activated and activation spreads outward from them. Their similarity is inversely proportional to how long it takes for a certain amount of activation from the two sources to overlap.
Semantic Memory: Concepts
How can we implement the featural model in a network? Units represent concepts and features, linked as above; assume spreading activation and the fan effect.
– Figure: "apple" and "pomegranate" units linked to feature units (edible, has a skin, round, red, edible skin, edible seeds, good for pies, good for juice).
– The more units are shared, the more activation will overlap.
– Units not shared decrease the overlapping activation by spreading it thinner (the fan effect).
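A toy sketch (my own simplification, not from the slides) of one step of spreading activation with a fan effect over this graph: each concept divides its activation among its links, and similarity is read off as the activation arriving at shared features from both sources:

```python
# Hypothetical one-step spreading-activation model with a fan effect.
links = {
    "apple":       ["edible", "has a skin", "round", "red",
                    "edible skin", "good for pies", "good for juice"],
    "pomegranate": ["edible", "has a skin", "round", "red", "edible seeds"],
}

def spread(source, amount=1.0):
    # Fan effect: the source's activation is divided among all of
    # its connected feature units, so more links = thinner spread.
    fan = len(links[source])
    return {feature: amount / fan for feature in links[source]}

def overlap(a, b):
    # Activation reaching the same feature from both sources overlaps;
    # non-shared features soak up activation without contributing.
    act_a, act_b = spread(a), spread(b)
    return sum(min(act_a[f], act_b[f]) for f in act_a if f in act_b)

# 4 shared features, each receiving 1/7 from apple (its activation is
# spread over 7 links) -> overlap of 4/7, about 0.571.
print(overlap("apple", "pomegranate"))
```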
Semantic Memory: Concepts
This model can also account for categorization and typicality effects:
– Categorization: it takes longer to verify "a dog is an animal" than "a dog is a mammal", because activation has farther to travel.
– Typicality: weights between units can indicate how typical an instance is of a superordinate category, changing how strongly activation from one is spread to the other.
– Figure: two small hierarchies, animal over mammal and bird (with dog and cat under mammal), and animal over mammal and bird (with robin and penguin under bird).
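A hypothetical sketch of how link weights could carry typicality; all weights are invented, and verification time is modeled simply as the inverse of the activation flowing across the connecting link(s):

```python
# Hypothetical link strengths: typical instances get strong links.
link_strength = {
    ("robin", "bird"):   0.9,  # typical: strong link
    ("penguin", "bird"): 0.3,  # atypical: weak link
    ("bird", "animal"):  0.8,
}

def verification_time(instance, category):
    """Time to verify 'instance is a category', as the inverse of
    the activation flowing across the connecting link(s)."""
    direct = link_strength.get((instance, category))
    if direct is not None:
        return 1.0 / direct
    # Two-step path (e.g., robin -> bird -> animal): activation
    # weakens across each link, so verification takes longer.
    via = link_strength[(instance, "bird")] * link_strength[("bird", category)]
    return 1.0 / via

print(verification_time("robin", "bird"))    # ~1.1 (fast: typical)
print(verification_time("penguin", "bird"))  # ~3.3 (slower: atypical)
print(verification_time("robin", "animal"))  # ~1.4 (farther to travel)
```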
Semantic Memory: Concepts
What do feature lists leave out?
– Causal relations (e.g., the fact that fertilizer tends to grow plants)
– Relational dependencies (e.g., the fact that only small birds sing)
In short, feature lists leave out structured information. Recall from our discussion of episodic memory that this problem can be solved with schemata: complex structured frameworks. Schemata can be applied to semantic memory too, to tell us what kinds of items are typically found in offices, what kinds of events typically happen in a restaurant, and so on.
Modeling Schemata?
Challenge for the future: how can we represent structured relational information in a network?
Relational information (e.g., "Chris loves Pat") poses a problem for networks with distributed representations, similar to the binding problem: the catastrophic superposition problem.
– Figure: distinct patterns represent "Chris", "Pat", "Harry", "Sally", and "loves"; the superimposed patterns for "Chris loves Pat", "Pat loves Chris", and "Harry loves Sally" come out identical.
There is no way to tell the difference!
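The problem is easy to demonstrate numerically (patterns invented for illustration): if a proposition is represented by superimposing, i.e. summing, the distributed patterns of its parts, word order is lost:

```python
# Made-up distributed patterns for the fillers and the relation.
chris = [1, 0, 1, 0]
pat   = [0, 1, 0, 1]
loves = [1, 1, 0, 0]

def superimpose(*patterns):
    # Represent a proposition by summing its constituents' patterns.
    return [sum(bits) for bits in zip(*patterns)]

chris_loves_pat = superimpose(chris, loves, pat)
pat_loves_chris = superimpose(pat, loves, chris)

# Addition is order-blind, so the two propositions collapse into
# one and the same pattern: catastrophic superposition.
print(chris_loves_pat == pat_loves_chris)  # True: no way to tell them apart
```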
Modeling Schemata?
One answer: temporal synchrony.
– Figure: patterns representing "Chris", "Sally", and "loves", firing in time.
But how do we distinguish between "Chris loves Sally" and "Sally loves Chris"?
Modeling Schemata?
We need to combine structural and semantic information.
LISA (Learning and Inference with Schemas and Analogies), Hummel & Holyoak:
– Binds semantic information (e.g., "Chris") to roles (e.g., the agent role of "loves")
– Can then make inferences the way we do:
  Chris loves Mary. Chris gives flowers to Mary.
  Bill likes Sally. Bill gives candy to ??? (Sally)
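A toy sketch of the idea (not the actual LISA model): with fillers bound to explicit roles, "Chris loves Mary" and "Mary loves Chris" are distinct structures, and a role mapping supports the analogical inference from the slide:

```python
# Hypothetical role-filler bindings: each proposition is a mapping
# from roles to fillers, so argument order is preserved.
source = [
    {"pred": "loves", "agent": "Chris", "patient": "Mary"},
    {"pred": "gives", "giver": "Chris", "gift": "flowers", "recipient": "Mary"},
]
target = {"pred": "likes", "agent": "Bill", "patient": "Sally"}

# Map source fillers onto target fillers by the roles they occupy
# in the matching proposition (loves ~ likes).
mapping = {"Chris": "Bill", "Mary": "Sally"}

# Infer the missing filler: Bill gives candy to whom?
question = {"pred": "gives", "giver": "Bill", "gift": "candy", "recipient": None}
question["recipient"] = mapping[source[1]["recipient"]]  # Mary -> Sally
print(question["recipient"])  # Sally
```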
Mental Imagery: What Is It Good For?
1. Retrieving information from memory, when
– the remembered information is a subtle visual property,
– the property was not explicitly considered before, and
– the property is not easily deduced from other stored information
(e.g., what is the shape of Snoopy's ears?)
2. Anticipating navigation (what happens if I move my arm this way?)
Mental Imagery: What Are Its Parts?
– Image generation
– Image inspection
– Image maintenance
– Image transformation
Mental Imagery: Image Generation
– Images can retain perceptual input or be generated from memory
– Images with more parts take longer to generate
– The identity of an image is separate from its location (imagine Bush on a rocket ship heading into space)
– Global images versus images built from parts
  – Generating parts needs the left hemisphere
  – The left hemisphere is better at categorical relations (above, below)
  – The right hemisphere is better at metric relations (precise distances)
Mental Imagery: Image Generation
– People can generate images by selectively attending to bathroom tiles
– The visual-memory parts of the brain are not active during this task
Mental Imagery: Image Inspection
– Holding a visual image impairs visual detection, but not auditory detection
– Smaller images are harder to inspect
– Hemispatial neglect affects imagery as well as perception
– The evidence generally supports the idea that imagery uses perceptual mechanisms
– BUT patient D.F. can do imagery fine although her perceptual system is mangled
Mental Imagery: Image Maintenance
– We can hold only a small number of "chunks" in visual memory
– Images fade fast without active attention
Mental Imagery: Image Transformation
– Shepard and Metzler showed that the time to judge rotated cuboid objects is strongly related to the angle of rotation
– They argue that the process mimics real-world rotation
– But introspection seems to argue against this (we don't rotate the whole object)
– Appears to be a right-hemisphere function
Mental Images: What Are They?
– Propositional vs. depictive representations
– Each has a syntax and a semantics
Mental Images: Propositional?
Syntax:
– ON(BALL, BOX)
– A relation is needed to connect the arguments; (BALL, BOX) alone has no meaning
Semantics:
– The meaning of individual symbols is arbitrary
– The representation is unambiguous
– Abstract:
  – Can refer to non-picturable entities
  – Can refer to classes of objects
  – Not tied to a specific modality
Is this fair?
Mental Images: Depictive?
Syntax:
– Points and empty space (pixel-like)
– Points arranged to make continuous pictures (like comic-strip dots)
– Points placed in spatial relation to each other
Semantics:
– The meaning of individual symbols is the actual object depicted
– Distance is maintained
Mental Images: Depictive or Propositional?
SCANNING: do the results favor depictive?
– Experiments show that things farther from the center of an object take longer to "see"
– YES, BUT what if propositions are linked spatially (see page 285)? That would produce the same result.
Mental Images: Depictive or Propositional?
SCANNING, the sequel: do the results favor depictive?
– Island-map experiments: scanning between map items farther apart took longer; in a propositional network, the same number of links between nodes would predict no distance effect
– YES, BUT what if dummy nodes are inserted to stand for the space in between? (Starting to look more and more depictive to me.)
Mental Images: Depictive or Propositional?
SCANNING, the triquel: do the results favor depictive?
– Island-map experiments: verification times for other objects do not depend on distance, contrary to what the propositional account predicts
– YES, BUT what if verifying other objects uses a different mechanism?
Mental Images: Depictive or Propositional?
The effect of demand characteristics: an experiment showed an effect of experimenter expectancy on results.
– Subjects told that image distance would affect scanning time showed an effect of distance on scan times
– Subjects told that image distance would NOT affect scanning time showed NO such effect
– A follow-up study by another experimenter showed that even when subjects expected a U-shaped curve (among other patterns), scanning time was always related to distance
Mental Images: Depictive or Propositional?
Cognitive neuroscience:
– Connections run from topographic areas (e.g., V1) to non-topographic areas (e.g., object recognition) that do not care about location
– Presumably, an image is generated at a particular location through backward connections to the topographic areas
Mental Images: Depictive or Propositional?
fMRI to the rescue?
– Larger images activated larger topographic regions of visual cortex (e.g., the fovea projects to the posterior of visual areas, while the parafovea projects to more anterior portions)
– YES, BUT is this epiphenomenal (concurrent but not causal)? Perhaps not, because damage to these areas impairs imagery
Mental Images: Depictive or Propositional?
A final YES, BUT: back to D.F. Why can she do imagery fine if her early visual areas are so damaged?
IF YOU HAVE THE ANSWER TO THIS, LET ME KNOW AND WE WILL BOTH BECOME FAMOUS