Chapter 13: Speech Perception
Overview of Questions
Can computers perceive speech as well as humans? Why does an unfamiliar foreign language often sound like a continuous stream of sound, with no breaks between words? Does each word that we hear have a unique pattern of air pressure changes associated with it? Are there specific areas in the brain that are responsible for perceiving speech?
The Acoustic Signal
Speech is produced by air that is pushed up from the lungs, through the vocal cords, and into the vocal tract. Vowels are produced by vibration of the vocal cords and by changes in the shape of the vocal tract made by moving the articulators. These changes in shape alter the resonant frequencies of the tract and produce peaks in pressure at a number of frequencies, called formants.
Figure 13.1 The vocal tract includes the nasal and oral cavities and the pharynx, as well as components that move, such as the tongue, lips, and vocal cords.
The Acoustic Signal - continued
The first formant has the lowest frequency, the second the next highest, and so on. Sound spectrograms show the changes in frequency and intensity over time for speech. Consonants are produced by a constriction of the vocal tract. Formant transitions are the rapid changes in frequency preceding or following consonants.
Figure 13.2 Left: the shape of the vocal tract for the vowels /I/ and /oo/. Right: the amplitude of the pressure changes produced for each vowel. The peaks in the pressure changes are the formants. Each vowel sound has a characteristic pattern of formants that is determined by the shape of the vocal tract for that vowel. (From Denes and Pinson, 1993)
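As a concrete illustration of the formants and spectrograms described above, here is a minimal Python sketch, assuming NumPy and SciPy are available. The 120 Hz source and the 500 Hz and 1500 Hz "formant" frequencies are illustrative choices, not values taken from the figures.

```python
# Minimal sketch: synthesize a toy vowel-like sound with two "formants"
# and compute its spectrogram. Frequencies are illustrative only.
import numpy as np
from scipy import signal

fs = 16000                      # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)   # 500 ms of signal

# Glottal source: a 120 Hz tone plus harmonics (vocal cord vibration)
source = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 40))

def formant_filter(x, center_hz, bw_hz=100):
    """Crude "vocal tract": bandpass the source around one formant peak."""
    sos = signal.butter(2, [center_hz - bw_hz, center_hz + bw_hz],
                        btype="band", fs=fs, output="sos")
    return signal.sosfilt(sos, x)

# Shape the "tract" as two resonances, at 500 and 1500 Hz
vowel = formant_filter(source, 500) + formant_filter(source, 1500)

# Spectrogram: intensity as a function of frequency (rows) and time
# (columns); the formants appear as horizontal bands of high energy.
f, times, Sxx = signal.spectrogram(vowel, fs=fs, nperseg=512)
print(f"strongest band near {f[Sxx.mean(axis=1).argmax()]:.0f} Hz")
```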
Basic Units of Speech
Phoneme: the smallest unit of speech that changes the meaning of a word. In English there are 47 phonemes: 13 major vowel sounds and 24 major consonant sounds. The number of phonemes in other languages varies, from 11 in Hawaiian to 60 in some African dialects.
Table 13.1 Major consonants and vowels of English and their phonetic symbols
The Relationship between Phonemes and the Acoustic Signal
The variability problem: there is no simple correspondence between the acoustic signal and individual phonemes. Variability comes from a phoneme’s context. Coarticulation, the overlap between the articulation of neighboring phonemes, also causes variation.
Figure 13.5 Hand-drawn spectrograms for /di/ and /du/. (From Liberman et al., 1967)
The Relationship between the Speech Stimulus and Speech Perception - continued
Variability from different speakers: speakers differ in pitch, accent, speaking rate, and pronunciation. This acoustic signal must be transformed into familiar words. Because of perceptual constancy, people perceive speech easily in spite of this variability.
Categorical Perception
Categorical perception occurs when a wide range of acoustic cues results in the perception of a limited number of sound categories. An example comes from experiments on voice onset time (VOT), the time delay between when a sound starts and when voicing begins. The stimuli are /da/ (VOT of 17 ms) and /ta/ (VOT of 91 ms).
Categorical Perception - continued
Computers were used to create stimuli with a range of VOTs from long to short. Listeners do not hear the incremental changes; instead, they hear a sudden change from /da/ to /ta/ at the phonetic boundary. Thus, we experience perceptual constancy for the phonemes within a given range of VOTs.
Figure The results of a categorical perception experiment indicate that /da/ is perceived for VOTs to the left of the phonetic boundary, and that /ta/ is perceived at VOTs to the right of the phonetic boundary. (From Eimas & Corbit, 1973)
Figure In the discrimination part of a categorical perception experiment, two stimuli are presented, and the listener indicates whether they are the same or different. The typical result is that two stimuli with VOTs on the same side of the phonetic boundary (VOT = 0 and 25 ms; solid arrows) are judged to be the same, and two stimuli on different sides of the phonetic boundary (VOT = 25 and 50 ms; dashed arrows) are judged to be different.
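The labeling and discrimination results just described can be made concrete with a short Python sketch. The logistic labeling rule, the 35 ms boundary, and the sharpness value below are illustrative assumptions, not parameters estimated from Eimas and Corbit's data; the point is only that a smoothly varying VOT yields an abrupt change in the perceived category.

```python
# Sketch of categorical labeling along a VOT continuum.
# BOUNDARY_MS and SHARPNESS are illustrative assumptions.
import math

BOUNDARY_MS = 35.0   # assumed phonetic boundary between /da/ and /ta/
SHARPNESS = 0.5      # how abruptly labeling flips at the boundary

def p_ta(vot_ms: float) -> float:
    """Probability of labeling a stimulus /ta/ given its VOT."""
    return 1 / (1 + math.exp(-SHARPNESS * (vot_ms - BOUNDARY_MS)))

def label(vot_ms: float) -> str:
    return "/ta/" if p_ta(vot_ms) > 0.5 else "/da/"

# VOTs on the same side of the boundary get the same label, while
# VOTs straddling it get different labels; this matches the pattern
# in the discrimination experiment above.
for vot in (0, 17, 25, 50, 91):
    print(f"VOT {vot:>2} ms -> {label(vot)}  (p(/ta/) = {p_ta(vot):.2f})")
```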
Speech Perception is Multimodal
Auditory-visual speech perception: the McGurk effect. The visual stimulus shows a speaker saying “ga-ga,” while the auditory stimulus is a speaker saying “ba-ba.” An observer who watches and listens hears “da-da,” which lies between “ga” and “ba.” An observer with eyes closed hears “ba.”
Figure 13.10 The McGurk effect. The woman’s lips are moving as if she is saying /ga-ga/, but the actual sound being presented is /ba-ba/. The listener, however, reports hearing /da-da/. If the listener closes his eyes, so that he no longer sees the woman’s lips, he hears /ba-ba/. Thus, seeing the lips moving influences what the listener hears.
VIDEO: The McGurk Effect
Speech Perception is Multimodal - continued
The link between vision and speech has a physiological basis. Calvert et al. showed that the same brain areas are activated by lip reading and by speech perception. Von Kriegstein et al. showed that the fusiform face area (FFA) is activated when listeners hear familiar voices, which shows a link between perceiving faces and perceiving voices.
Meaning and Phoneme Perception
Experiment by Rubin et al.: short words (sin, bat, and leg) and short nonwords (jum, baf, and teg) were presented to listeners. The task was to press a button as quickly as possible upon hearing a target phoneme. On average, listeners were faster for words (580 ms) than for nonwords (631 ms).
Meaning and Phoneme Perception - continued
Experiment by Warren: listeners heard a sentence in which a phoneme was covered by a cough. The task was to state where in the sentence the cough occurred. Listeners could not correctly identify the position, and they also did not notice that a phoneme was missing. This is called the phonemic restoration effect.
Meaning and Word Perception
Experiment by Miller and Isard: the stimuli were three types of sentences: normal grammatical sentences, anomalous sentences that were grammatical, and ungrammatical strings of words. Listeners were asked to shadow (repeat aloud) the sentences as they heard them through headphones.
Meaning and Word Perception - continued
Results showed that listeners were 89% accurate with normal sentences, 79% accurate with anomalous sentences, and 56% accurate with ungrammatical word strings. The differences were even larger when background noise was present.
Perceiving Breaks between Words
The segmentation problem - there are no physical breaks in the continuous acoustic signal. Top-down processing, including knowledge a listener has about a language, affects perception of the incoming speech stimulus. Segmentation is affected by context, meaning, and our knowledge of word structure.
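The segmentation problem can be illustrated with a short Python sketch: the "signal" below is an unbroken character string, and the word boundaries become recoverable only through a lexicon the listener already knows. The lexicon and input are invented for illustration, not taken from the chapter.

```python
# Toy illustration of the segmentation problem: nothing in the signal
# itself marks word boundaries; a lexicon (top-down knowledge) does.
LEXICON = {"speech", "segmentation", "perception", "is", "hard"}

def segment(sig: str, lexicon: set[str]) -> list[str] | None:
    """Return one way to break the string into known words, if any."""
    if not sig:
        return []
    for cut in range(1, len(sig) + 1):
        word = sig[:cut]
        if word in lexicon:
            rest = segment(sig[cut:], lexicon)
            if rest is not None:
                return [word] + rest
    return None  # no segmentation found with this lexicon

print(segment("speechsegmentation", LEXICON))   # ['speech', 'segmentation']
print(segment("speechsegmentation", set()))     # None: no knowledge, no breaks
```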
Figure 13.11 Sound energy for the words “Speech Segmentation.” Notice that it is difficult to tell from this record where one word ends and the other begins. (Speech signal courtesy of Lisa Saunders)
Figure Speech perception is the result of top-down processing (based on knowledge and meaning) and bottom-up processing (based on the acoustic signal) working together.
Perceiving Breaks between Words - continued
Knowledge of word structure: transitional probabilities are the chances that one sound will follow another in a language. Statistical learning is the process of learning transitional probabilities and other characteristics of a language. Infants as young as eight months show statistical learning.
Statistical Learning
Experiment by Saffran et al.: in the learning phase, infants heard nonsense words in two-minute strings of continuous sound that contained transitional probabilities. The nonsense words were in random order within the string. If infants use transitional probabilities, they should recognize the words as units even though the string of words had no breaks.
Statistical Learning - continued
Examples of transitional probabilities: Within a word (bidaku), the syllable da always followed bi, a transitional probability of 1.0. Between words, ku from bidaku was not always followed by pa from padoti or tu from tupiro; the transitional probability of either of these combinations occurring was .33.
Statistical Learning - continued
Testing phase: infants were presented with two types of three-syllable stimuli from the strings. Whole words: stimuli (bidaku, tupiro, padoti) that had transitional probabilities of 1.0 between the syllables. Part words: stimuli created from the end of one word and the beginning of another (e.g., tibida, from the end of padoti and the beginning of bidaku). A short sketch of the transitional-probability computation follows.
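To make the numbers above concrete, here is a short Python sketch that builds a continuous stream from the three nonsense words on these slides and counts syllable-to-syllable transitions. The stream construction is a simplification of Saffran et al.'s actual stimulus design (for example, immediate repetitions of a word are not prevented here), so treat it as an illustration of the computation rather than a reconstruction of the experiment.

```python
# Compute transitional probabilities from a continuous stream built by
# concatenating the nonsense words in random order (a simplification).
import random
from collections import Counter

WORDS = ["bidaku", "padoti", "tupiro"]
random.seed(0)
stream = "".join(random.choice(WORDS) for _ in range(3000))

# Each two-letter chunk is one syllable: bi-da-ku, pa-do-ti, tu-pi-ro
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def tp(s1: str, s2: str) -> float:
    """P(next syllable is s2 | current syllable is s1)."""
    return pair_counts[(s1, s2)] / first_counts[s1]

print(f"TP(bi -> da) = {tp('bi', 'da'):.2f}")  # within a word: 1.00
print(f"TP(ku -> pa) = {tp('ku', 'pa'):.2f}")  # across words: about .33
```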
Statistical Learning - continued
During the testing phase, each stimulus was preceded by a flashing light near the speaker that would present the sound. Once the infant looked at the light, the sound played until the infant looked away. Infants listened longer to the part-words, which were new stimuli, than to the whole-words. Because infants lose interest in repeated or familiar stimuli and pay more attention to novel ones, this means the infants recognized the “words” they had heard before but did not recognize the new combinations. This supports the hypothesis that infants use transitional probabilities to learn to segment natural adult speech.
Speaker Characteristics
Indexical characteristics: characteristics of the speaker’s voice, such as age, gender, emotional state, and level of seriousness. Experiment by Palmeri et al.: listeners were asked to indicate when a word was new in a sequence of words. Results showed that they were much faster when the same speaker was used for all the words.
Speech Perception and the Brain
Broca’s aphasia: individuals have damage in Broca’s area in the frontal lobe. Their speech is labored and stilted and comes in short sentences, but they understand others. Wernicke’s aphasia: individuals have damage in Wernicke’s area in the temporal lobe. They speak fluently, but the content is disorganized and not meaningful. They also have difficulty understanding others, and word deafness may occur in extreme cases.
Figure Broca’s and Wernicke’s areas were identified in early research as being specialized for language production and comprehension.
Speech Perception and the Brain - continued
Brain imaging shows that some patients with brain damage cannot discriminate syllables but are still able to understand words. Brain scans have also shown that there is a “voice area” in the superior temporal sulcus (STS) that is activated more by voices than by other sounds, as well as a ventral stream for recognizing speech and a dorsal stream that links the acoustic signal to the movements for producing speech; together these are called the dual-stream model of speech perception.
Figure The dual-stream model of speech perception proposes a ventral pathway that is responsible for recognizing speech and a dorsal pathway that links the acoustic signal and motor movements. The blue areas are associated with the dorsal pathway; the yellow area with the ventral pathway. The red and green areas are also involved in the analysis of speech stimuli. (Adapted from Hickok and Poeppel, 2007)
Experience-Dependent Plasticity
Before age one, human infants can tell the difference between the sounds of all languages. The brain then becomes “tuned” to respond best to the speech sounds that are present in the environment. The ability to differentiate other sounds disappears when there is no reinforcement from the environment.
Motor Theory of Speech Perception
Liberman et al. proposed that the motor mechanisms responsible for producing sounds also activate the mechanisms for perceiving sound. Evidence from monkeys comes from the existence of audiovisual mirror neurons (see the earlier discussion of mirror neurons in Chapter 10). Experiment by Watkins et al.: participants had the region of motor cortex responsible for face movements stimulated by transcranial magnetic stimulation (TMS).
Motor Theory of Speech Perception - continued
Results showed small electrical responses in the mouth muscles, called motor evoked potentials (MEPs). This response became larger when the person listened to speech or watched someone else’s lip movements, but not when watching other movements. This effect may be due to mirror neurons in humans.
Figure The transcranial magnetic stimulation experiment that provides evidence for a link between speech perception and production in humans. See text for details. (Watkins et al., 2003).