Processing and Constraint Satisfaction: Psychological Implications
The Interactive-Activation (IA) Model of Word Recognition
Psychology 85-419/719
January 30, 2001
Background: Fodor’s Definition of Modularity
Input systems are composed of distinct processing modules.
Modules have certain properties:
–Partial results are not shared between modules; communication is all-or-nothing.
–Information is encapsulated within a module; only results are shared. Access to global data is limited.
Contrast this with the view we’ve seen so far...
More Background: Models and Demonstrations
A model is our best attempt at simulating a system. We think it’s basically true (to a certain level of approximation, anyway).
A demonstration is a simulation that we know isn’t right, but that demonstrates a useful point; say, about computational principles.
Which one is the IA model?
Letter Perception In Context: The Phenomena
DOG  ####   (O or U?)
BNR  ####   (N or M?)
The Empirical Findings...
Subjects are more accurate at identifying letters in briefly presented stimuli if the letter appears in a word (as opposed to in a random letter string, or alone).
Pronounceable nonwords (e.g., MAVE) show an advantage over unpronounceable strings (e.g., MVAE).
Assumptions of the Model
Fodor’s wrong.
–Processing is interactive and parallel, with partial results feeding different representational areas.
There are (at least) 3 levels of analysis: features, letters, and words.
The levels inhibit or excite each other depending on whether they are consistent with one another.
Context effects can emerge from interactions between levels of representation.
The Overall Model
[Diagram: on the spelling side, Visual Input → Feature Level → Letter Level → Word Level; on the speech side, Acoustic Input → Acoustic Level → Phoneme Level → Word Level; “Context” feeds the Word Level from above. The implemented model covers the visual (spelling) pathway.]
Representations of Visual Features
16 features, each corresponding to a line segment.
4 slots, one for each letter position.
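The feature layer can be pictured as a 4 × 16 binary array, one 16-feature vector per letter slot. A minimal sketch follows; the specific segment patterns below are made-up placeholders, not the model's actual font encoding.

```python
import numpy as np

N_SLOTS, N_FEATURES = 4, 16   # 4 letter positions x 16 line-segment features

# Placeholder feature vectors for a few letters (NOT the real font encoding).
FONT = {
    "W": np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]),
    "O": np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]),
    "R": np.array([1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]),
    "D": np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]),
}

def encode(word):
    """Stack one 16-feature vector per letter slot into a 4x16 input array."""
    assert len(word) == N_SLOTS
    return np.stack([FONT[ch] for ch in word])

x = encode("WORD")   # shape (4, 16): each row is one slot's feature vector
```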
Levels of Representation
[Diagram: Word level (e.g., cat, dog, lake) with inhibitory connections among word nodes; Letter level (e.g., c, a) with inhibitory and excitatory connections to the word level; Feature level below, with still more connections.]
Pre-Set Weights
Negative (inhibitory) weights between word nodes, all of the same value.
Positive or negative weights between letter nodes and word nodes, and between feature nodes and letter nodes; the same values for all weights of a given type.
Biases on word nodes are a function of the word’s frequency.
Processing
Generally, the same formulation that we’ve been working with:
–A network of weights, with unit activities evolving over time.
...with additional mathematics to simulate a forced-choice response.
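One IA-style update step can be sketched as follows. This is a reconstruction for illustration, not the course's code; the parameter values (ceiling, floor, resting level, decay rate) are assumed, and only the general shape of the rule (excitation drives a unit toward its ceiling, inhibition toward its floor, with decay toward rest) is taken from the model.

```python
# Sketch of an interactive-activation update step (illustrative parameters).
import numpy as np

M, m = 1.0, -0.2   # activation ceiling and floor (assumed values)
rest = -0.1        # resting activation (assumed)
decay = 0.07       # decay rate toward rest (assumed)

def ia_step(a, W):
    """One synchronous update of activations `a` given weight matrix `W`.
    Only units with positive activation send signals, as in the IA model."""
    sending = np.clip(a, 0.0, None)   # negative activations send nothing
    net = W @ sending                 # net input to each unit
    # Excitatory net input drives a unit toward the ceiling M;
    # inhibitory net input drives it toward the floor m.
    effect = np.where(net > 0, net * (M - a), net * (a - m))
    a_new = a + effect - decay * (a - rest)
    return np.clip(a_new, m, M)

# Tiny example: two mutually inhibitory word units.
W = np.array([[0.0, -0.2],
              [-0.2, 0.0]])
a = ia_step(np.array([0.3, 0.1]), W)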
The Running Average (Eq. 5)
[Equation not transcribed.] In the simulations, r = 0.05.
Response Probability (Eq. 6 & 7)
Strength of option i is: [equation not transcribed]
Probability of response for i is: [equation not transcribed]
In simulations, u = 10.
Model Behavior: Degraded Input
[Plot: activation over time for the word nodes work, word, and wear.]
Letter Activations Too
[Plot: activation over time for the letter nodes k, a, and r.]
The Word Preference Effect
When stimuli are masked, letters embedded in words are perceived more accurately than letters standing alone.
[Plot: activation over time for “e” in “read” vs. “e” alone, with the point where the mask is applied marked.]
Probabilities...
[Plot: response probability (0 to 1) over time for “e” in “read” vs. “e” alone.]
Subjects: 80% correct for the word, 65% alone.
Model: 81% for the word, 66% alone.
Difference Between Masked and Degraded Stimuli
When stimuli are masked, there is actual information actively disrupting the visual system.
–By hypothesis, this actively turns off the letter representations.
In contrast, when stimuli are simply degraded, there is still some activity in the letter units. It’s noisy, but not obliterated.
Simulating Masking and Degraded Stimuli
Masking: present the stimulus reliably for a period of time, then activate all feature segments (corresponding to the mask).
–Result: all letter nodes are suppressed.
Degraded stimuli: present a stimulus in which each feature has some probability of being detected.
Interactions...
Why?
In the masked condition, the letters lose all visual excitation. Any remaining activity is a result of top-down influences, which are much larger for words than for single letters.
With degraded stimuli, there is still some visual information, so single letters are not as reliant on top-down information.
Letter Perception in Nonwords
Nonwords that look like words (e.g., MAVE) also show a letter advantage over letters in isolation.
IA model account:
–Even though MAVE may not “win” at the word level, it overlaps with enough word nodes (GAVE, SAVE, HAVE) for its letters to get some top-down support.
Neighbors, Friends and Enemies
A neighbor of a word is a word that differs from it by only one letter.
A letter (e.g., M) in a spelling pattern like MAVE has friends: neighbors that have an M in the 1st position (MOVE, MAKE, MADE).
It also has enemies: neighbors that don’t have an M in the 1st position (HAVE, GAVE, SAVE).
The “Rich Get Richer” Effect
[Plot: activations of have (high frequency), gave (medium frequency), and save (low frequency).]
The “Gang Effect”
[Plot: activations of save (part of a large gang), male (also a large gang), and move (member of a smaller gang).]
Other Phenomena?
[Diagram: the overall model again, with its spelling and speech pathways.]
Lesch & Pollatsek ‘93: TOWED primes FROG.
Semantic priming: TOAD primes FROG.
Two-hop: STRIPES primes LION.
Impairments?
The “Slot Problem”
Wrapping Up Section I: Constraint Satisfaction
Complex patterns of behavior arise from “simple” interactions between processing units.
Weights encode knowledge about relationships between atomic facts, propositions, and perceptions.
Networks are dynamic; representations evolve over time.
Next Section: Simple Learning
For next class: read the handout (from the Handbook, Chapter 4, pages 83-89; see the web page).
Homework 1 is due (with a two-day grace period).
The next homework will be handed out; due Feb 15.