‘Signed and Spoken Thinking’?
Towards the Influence of Language Modality on Thought

Klaudia Grote
Center for Cultural Studies “Media and Cultural Communication”, University of Cologne / DESIRE (Deaf and Sign Language Research Team), Aachen University of Technology

I. Research Question

The empirical work described in this poster explores structural diversity in the semantic systems of deaf signers and hearing non-signers. The main question addressed is whether different language modalities (auditory-vocal vs. visual-gestural) evoke modality-specific semantic associations and affect cognitive categorization.

II. Method

Four experiments assessed the strength of semantic relations between semantic categories. Experiments 1 and 2 examined whether deaf and hearing participants (n=40) show different response times (RTs) in a verification task in which they judged the presence or absence of a semantic relation (paradigmatic vs. syntagmatic) between two signs/two spoken words (Experiment 1) and two pictures (Experiment 2), respectively. A reference item (e.g. shirt) was combined with either a related superordinate (e.g. clothing), coordinate (e.g. trousers), action (e.g. to iron), or attribute target item (e.g. light), and with unrelated distractors of the same types as the target items. Experiment 3 involved only deaf subjects. They performed the same verification task as in Experiments 1 and 2, but this time only action- and attribute-related items were presented. Half of the items were multimorphemic classifier predicates, representing action- and attribute-related items while simultaneously referring to the target item by a verb classifier or adjectival classifier.
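The trial structure of the verification task can be sketched as follows. The related target items (shirt, clothing, trousers, to iron, light) come from the poster's own example; the distractor items and all identifier names are illustrative assumptions, not the study's materials:

```python
# A minimal sketch of one stimulus set from the verification task.
# Related items follow the poster's example; distractor items are
# invented placeholders for illustration only.
stimulus_set = {
    "reference": "shirt",
    "targets": {                     # semantically related items
        "superordinate": "clothing", # paradigmatic relation
        "coordinate": "trousers",    # paradigmatic relation
        "action": "to iron",         # syntagmatic relation
        "attribute": "light",        # syntagmatic relation
    },
    "distractors": {                 # unrelated items of the same types
        "superordinate": "furniture",
        "coordinate": "hammer",
        "action": "to swim",
        "attribute": "loud",
    },
}

def make_trials(s):
    """Pair the reference item with every target and distractor;
    the participant judges whether a semantic relation is present."""
    trials = []
    for relation, item in s["targets"].items():
        trials.append((s["reference"], item, relation, True))
    for relation, item in s["distractors"].items():
        trials.append((s["reference"], item, relation, False))
    return trials

trials = make_trials(stimulus_set)   # 8 trials per stimulus set
```

With 20 such sets this yields the 20 reference items, 80 target items, and 80 distractors described below.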
Experiment 1: verification task with signs and spoken words.
Experiment 2: verification task with pictures.
Stimulus material: 20 different stimulus sets (20 reference items + 80 target items + 80 distractors); 60 / 50 / 50 trials.

In Experiment 4 the deaf subjects performed a recognition memory task. Its purpose was to determine whether subjects would show a different pattern of false recognition for signs that were either (1) not related (distractors), (2) associatively related to the studied target item (monomorphemic), or (3) associatively related and using a classifier that referred to the target item (multimorphemic).

[Figure: mean response times (msec) by semantic relation (superordinate, coordinate, action, attribute). Left panel: deaf subjects, verification task with sign language/pictures; interaction Modality x Semantic Relation, F(3,36) (p=.001). Middle panel: Experiment 1, verification task with sign language vs. spoken words; F(1,38)=9.595 (p=.004); r=.57** (picture complexity). Right panel: hearing subjects, verification task with spoken words/pictures; interaction Modality x Semantic Relation, F(3,36) (p=.001).]

III. Results

In Experiments 1 and 2 the obtained data show significantly different patterns of results for the two groups of participants. A t-test for paired samples revealed no differences between the two groups in judging superordinates and coordinates. In judging actions (T(19)=-7.645; p=.00) and attributes (T(19)=-5.877; p=.001), the deaf participants were significantly faster than the hearing participants.
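The paired-samples t-test used above can be sketched with standard-library Python. The response-time values here are invented purely for illustration and are not the study's data:

```python
import math
import statistics

def paired_t(x, y):
    """Paired-samples t-test: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the pairwise differences. Returns (t, df)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)        # sample SD of differences
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

# Hypothetical per-item mean RTs (msec), NOT the study's data:
# deaf vs. hearing group means for action-related items.
deaf_rt = [820, 790, 850, 805, 830]
hearing_rt = [1180, 1150, 1230, 1190, 1210]

t, df = paired_t(deaf_rt, hearing_rt)     # t < 0: deaf group faster
```

In the study the pairing was over the 20 stimulus sets (hence df = 19); the five-item lists here only keep the sketch short.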
No significant difference in RTs between the different types of distractors was found. The results of Experiment 3 show a significant facilitation effect for the action- and attribute-related multimorphemic signs. The false-positive rate (%) in Experiment 4 differed significantly between all three classes of stimulus items: the lowest rate was found for distractors, followed by monomorphemic signs, with the highest rate for multimorphemic signs.

[Figure: results of Experiment 3, verification task with monomorphemic and multimorphemic signs (actions and attributes; F(1,22) (p=.001)), and Experiment 4, memory recognition task with monomorphemic and multimorphemic associative signs (F(2,19)=39.272 (p=.001)).]

IV. Conclusion

The results of the experiments imply that there are qualitative differences between the structures of the verbal and visual semantic systems of deaf and hearing subjects. These differences may lie in the constraints of the visual-gestural and auditory-vocal language modalities: in sign languages, syntagmatically related concepts can be expressed simultaneously in space. Merging of signs is inherently economical and an advantage of languages in the visual-gestural mode.

Architectures and Mechanisms of Language Processing, Leiden (NL), September 2000

