AS14.1 – The Role of Neural Oscillations in Auditory Resolution and Speech Perception – Implications for CAPD, Phonological Awareness and Reading Disorders.

Presentation transcript:

AS14.1 – The Role of Neural Oscillations in Auditory Resolution and Speech Perception – Implications for CAPD, Phonological Awareness and Reading Disorders
Presented by Sharon Cameron, National Acoustic Laboratories
Date: 11 December 2014, Time: am
Location: Denis Byrne Seminar Room
With Fabrice Bardy, Harvey Dillon, Tim Beechey and Nicky Chong-White

Overall Aim
To investigate the relationship between speech perception and the ability of the central auditory nervous system to sample and process rapidly changing frequency and amplitude information contained in an incoming acoustic signal.

Hypotheses
Neural oscillations enable sampling of the incoming acoustic signal, supporting:
- identification of changes in amplitude, which is critical for parsing the speech stream into syllabic units; and
- adequate sampling of speech during periods of rapid spectral variation.

Hypotheses
In some children with CAPD, phonological awareness and reading deficits, neural oscillations:
- are not sufficiently powerful or synchronised at important frequency ranges, or
- are inappropriately balanced across the brain hemispheres.

Hypotheses
- This results in inefficient sampling of the acoustic signal and thus an unclear or "fuzzy" representation of the incoming sound.
- This may, in turn, lead to creation of poorly-defined stored phonemic representations, difficulties matching the identifying features of the acoustic signal to stored templates, or a deficit in parsing the speech stream into syllabic units.

Specific Aims
1. Development of two behavioural tests to assess participants' identification skills for (a) fast formant transitions and (b) amplitude modulation (syllable parsing).
2. Development of EEG (and potentially MEG) test stimuli and analytical protocols for assessment of spontaneous, evoked (time-locked) and induced neural oscillations.
3. Assessment of 150 typically and atypically developing children (CAPD; phonological awareness and reading deficits) on a range of assessments.

Study measures (from the slide's structural equation model diagram):
- Cognition: memory, IQ, attention (TAPS-3; WNV; TEA-Ch)
- Sentence perception: sentences in competing noise (LiSN-S)
- Auditory resolution: Formant Transition (FT) Detection; Amplitude Modulation (AM) Discrimination (syllable parsing)
- Neural oscillations: evoked & spontaneous (delta, theta, alpha, beta, gamma), EEG/MEG
- Reading: non-word versus irregular word (CC2)
- Phonology: phonemic & phonological awareness (CTOPP)
Oscillation bands (γ; δ, θ; δ, θ, γ; α, β) label the hypothesised links between constructs; relationships to be analysed with Structural Equation Modelling.

Objective
The development of clinical protocols and tools that will allow identification and differential diagnosis of deficits in speech perception, auditory processing, phonological awareness and reading, where the underlying cause is related to the resolution with which the auditory signal is analysed by the brain.

Behavioural Experiments

Behavioural Phoneme Identification Test – Stimuli
Synthesized (Praat) stimuli along a /ba/-/da/ continuum, presented in random order.
Part 1 (44 stimuli):
- 11 steps: [0, 10, 20, 30, …, 90, 100]% /da/
- 4 presentations of each
Part 2 (48 stimuli):
- 7 steps: [0, 100]% /da/ plus 5 steps around threshold (between 1-99% of the psychometric function)
- 4 presentations of [0, 100]% /da/
- 8 presentations of the others
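A minimal sketch in Python (illustrative only, not the project's actual test software) of how this two-part schedule could be assembled and randomised:

    import random

    def part1_schedule():
        """Part 1: 11 steps from 0% to 100% /da/, 4 presentations each (44 trials)."""
        steps = list(range(0, 101, 10))   # [0, 10, ..., 100]% /da/
        trials = steps * 4                # 4 presentations of each step
        random.shuffle(trials)
        return trials

    def part2_schedule(threshold_steps):
        """Part 2: both endpoints plus 5 steps around threshold (48 trials).

        threshold_steps: five values between 1 and 99% /da/, chosen around
        the threshold estimated from Part 1 (the values here are the caller's).
        """
        trials = [0, 100] * 4             # 4 presentations of each endpoint
        for s in threshold_steps:
            trials += [s] * 8             # 8 presentations of each other step
        random.shuffle(trials)
        return trials

    print(len(part1_schedule()))                      # 44
    print(len(part2_schedule([40, 45, 50, 55, 60])))  # 48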

/ba/-/da/ Categorisation Stimuli
The initial F2 frequency is interpolated from 100% /ba/ to 100% /da/ (intermediate steps such as 60% /ba/, 50%, 60% /da/). The F1, F3 and F4 transitions do not vary between stimuli.
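Since only the initial F2 varies along the continuum, each step maps to an F2 onset by linear interpolation. A sketch, with assumed endpoint frequencies (the actual Praat synthesis parameters are not given on the slide):

    def initial_f2(percent_da, f2_ba=1100.0, f2_da=1700.0):
        """Initial F2 (Hz) for a stimulus percent_da of the way from /ba/ to /da/.

        f2_ba and f2_da are illustrative endpoint onset frequencies only.
        """
        return f2_ba + (percent_da / 100.0) * (f2_da - f2_ba)

    print(initial_f2(60))   # 60% /da/ -> 1460.0 Hz with these assumed endpoints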

2AFC Phoneme Identification Test – GUI
Practice session: 5 × 100% /ba/, 5 × 100% /da/.
Test session: 92 stimuli. The participant presses the BA or DA button depending on which phoneme they heard.

Phoneme Identification Test – Scoring
Alternate stimuli assessed for test-retest reliability. A psychometric function is fitted to the data to find the threshold. Reaction time and number of repeats are measured for each stimulus.
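One common way to obtain such a threshold is to fit a logistic function to the proportion of /da/ responses at each step; a sketch with made-up response data, assuming SciPy (not necessarily the project's fitting procedure):

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k):
        """Psychometric function: proportion of /da/ responses vs. % /da/."""
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))

    steps = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], float)
    p_da = np.array([0.0, 0.0, 0.05, 0.10, 0.30, 0.55, 0.80, 0.90, 1.0, 1.0, 1.0])

    (x0, k), _ = curve_fit(logistic, steps, p_da, p0=[50.0, 0.1])
    print(f"category boundary (50% point): {x0:.1f}% /da/, slope {k:.3f}")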

Results: /ba/-/da/ Categorisation (n = 10 TD Adults)
Plotted: number of times participants pressed /ba/, divided by the total number of presentations at each step, against the change in second formant frequency from 100% /ba/ to 100% /da/.

Results: /ba/-/da/ Categorisation (n = 1 TD Adult)

Behavioural Amplitude Modulation Detection Test – Stimuli
Synthesized (Praat) steady-state vowel [a:]. Two- and three-syllable stimuli with varying amplitude modulation depths are presented in random order.
Part 1 (44 stimuli):
- 11 steps: [0, 10, 20, 30, …, 90, 100]% modulation depth (× 2 and 3 syllables)
- 2 presentations of each
Part 2 (48 stimuli):
- 7 steps: [0, 100]% plus 5 steps around threshold (between 1-99% of the psychometric function)
- 2 presentations of [0, 100]% (× 2/3 syllables)
- 4 presentations of the others (× 2/3 syllables)
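A sketch of how such stimuli can be generated, using a pure tone as a stand-in for the Praat-synthesized vowel (the envelope shape and parameters are assumptions, not the project's synthesis recipe):

    import numpy as np

    def am_stimulus(depth_percent, n_syllables, dur=1.0, fs=44100, f0=220.0):
        """Carrier with n_syllables raised-cosine envelope humps.

        depth_percent = 0 gives an unmodulated stimulus; 100 gives envelope
        dips that reach zero, splitting the vowel into syllable-like chunks.
        """
        t = np.arange(int(dur * fs)) / fs
        carrier = np.sin(2 * np.pi * f0 * t)
        m = depth_percent / 100.0
        env = 1.0 - m * 0.5 * (1.0 + np.cos(2 * np.pi * n_syllables * t / dur))
        return carrier * env

    three_syll_50 = am_stimulus(50, 3)   # three 'syllables' at 50% depth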

Amplitude modulation stimuli: synthesized [a:], three syllables. [Waveforms shown at 100%, 50% and 10% AM, and unmodulated.]

Amplitude modulation stimuli: synthesized [a:], two syllables. [Waveforms shown at 100%, 50% and 10% AM, and unmodulated.]

3AFC Amplitude Modulation Detection Test – GUI
Practice session: 3 × 100% AMD (2 & 3 syllables), 3 × 0% AMD (2 & 3 syllables).
Test session: 92 stimuli. The subject presses "1", "2" or "3" depending on how many syllables they heard; a small amount of AMD will sound like one syllable.

Behavioural Amplitude Modulation Detection Test – Scoring
Modulation identification score:
- modulation identified = score of 1
- modulation not identified = score of 0
- incorrect number of syllables identified (2 vs. 3) = score of 0.5
Alternate stimuli assessed for test-retest reliability. A psychometric function is fitted to the data (2- and 3-syllable responses combined) to find the threshold. Reaction time and number of repeats are measured for each stimulus.
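The per-trial scoring rule reads as follows in a minimal Python sketch (my reading of the scheme above):

    def am_trial_score(true_syllables, response):
        """Score one 3AFC trial. response is 1, 2 or 3 (syllables reported)."""
        if response == true_syllables:
            return 1.0     # modulation identified, correct syllable count
        if response == 1:
            return 0.0     # modulation not identified
        return 0.5         # modulation heard, but 2 vs. 3 syllables confused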

Results – 3AFC Amplitude Modulation Detection Test (n = 5 TD Adults)
Plotted: modulation identification score divided by the total number of presentations at each step, against amplitude modulation depth from 0% to 100%.

Results – 3AFC Amplitude Modulation Detection Test (n = 1 TD Adult)

What do we expect to see in our clinical vs. TD children?
Normal categorical perception:

Most of the acoustic space is categorised into discrete, non-confusable phonemes; only a narrow ambiguous region gives rise to speech errors.

Continuous perception
Over-sampling hypothesis: redundant acoustic details are perceived.

No discrete categories
Over-sampling hypothesis: redundant acoustic details are perceived.

Under-sampling hypothesis
Only canonical/endpoint tokens are categorised; otherwise, performance is at chance. (Add noise?)

Very little of the acoustic space is usable: categories are discrete only if the acoustic input is canonical, and categorisation is affected by any degradation of the signal.

Proposed EEG/(MEG?) Experiments

Brain oscillations reflecting computation in the brain (intrinsic/obligatory brain activity):
Delta (1-3 Hz): Δ or δ
Theta (4-8 Hz): θ
Alpha (10-14 Hz): α
Beta (18-22 Hz): β
Gamma (30-60 Hz): γ
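A minimal sketch (assuming SciPy; band edges exactly as defined on this slide) of extracting these bands from a single EEG channel:

    from scipy.signal import butter, filtfilt

    BANDS = {"delta": (1, 3), "theta": (4, 8), "alpha": (10, 14),
             "beta": (18, 22), "gamma": (30, 60)}

    def band_limit(x, band, fs):
        """Zero-phase band-pass filter a 1-D signal into one oscillatory band.

        fs (the sampling rate, Hz) must exceed 120 Hz for the gamma band
        to be representable.
        """
        lo, hi = BANDS[band]
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)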

Frequency Coupling
Delta-theta / beta => prediction (e.g. rhythm): for example, the phase relationship and/or amplitude ratio between oscillatory bands.

Frequency Coupling
Delta-theta / beta => prediction (e.g. rhythm)
Theta / gamma => sampling (e.g. acoustic encoding)
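Theta/gamma "sampling" coupling is often quantified as phase-amplitude coupling. One standard estimator (the mean-vector-length index; an illustration of the concept, not necessarily this project's analysis) is sketched below, reusing band_limit() from above:

    import numpy as np
    from scipy.signal import hilbert

    def pac_mean_vector_length(x, fs):
        """Theta-phase / gamma-amplitude coupling for one EEG channel.

        Near 0 when gamma amplitude is unrelated to theta phase; larger
        when gamma bursts lock to a particular theta phase.
        """
        theta_phase = np.angle(hilbert(band_limit(x, "theta", fs)))
        gamma_amp = np.abs(hilbert(band_limit(x, "gamma", fs)))
        return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))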

Brain oscillations reflecting computation in the brain
Alpha => attention

EEG paradigm 1 – Phoneme Identification
Active listening paradigm:
1. EEG cap recording, 64 electrodes (Neuroscan).
2. Stimuli presented over insert earphones in sentence form, e.g. "I saw BAWA/DAWA eating chips."
3. Question presented, e.g. "Who is eating chips?"
4. Participant uses a response button to answer "BAWA" or "DAWA".
5. Stimuli are 100%/70% [ba] and 100%/70% [da].
Passive listening paradigm: all 4 stimuli randomly presented.
2 conditions:
- Active listening (button press) – Stim 1: BAWA or DAWA (who is eating chips?); Stim 2: 100%/50%/0% (1 or 2 syllables)
- Passive listening
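For orientation, a hedged sketch of how 64-channel Neuroscan recordings from this paradigm might be epoched by stimulus type with MNE-Python; the file name and event codes are placeholders, since the actual trigger scheme is not given on the slide:

    import mne

    raw = mne.io.read_raw_cnt("phoneme_identification.cnt", preload=True)
    events, _ = mne.events_from_annotations(raw)

    # placeholder mapping from trigger codes to the four stimulus types
    event_id = {"ba100": 1, "ba70": 2, "da70": 3, "da100": 4}

    epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=1.0,
                        baseline=(None, 0), preload=True)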

EEG Auditory Stimuli – Phoneme Perception
Speech synthesized in Praat: 100% [da], 70% [da], 100% [ba], 70% [ba], embedded in carrier sentences ("I saw BAWA eating chips", "I saw DAWA eating chips").

EEG paradigm 2 – Amplitude Modulation Detection
Active listening paradigm:
1. EEG cap recording, 64 electrodes (Neuroscan).
2. Single amplitude-modulated vowel /a:/ presented over insert earphones.
3. Response button used to indicate whether 1 or 2 syllables were heard.
4. Stimuli: 100% amplitude modulation depth (2 syllables), 50% AMD, 0% AMD (1 syllable).
Passive listening paradigm: all stimuli randomly presented.
2 conditions:
- Active listening (button press) – 100%/50%/0% (1 or 2 syllables)
- Passive listening

EEG Auditory Stimuli – Amplitude Modulation Detection
Amplitude-modulated signal: synthesized /a:/ at different modulation depths (0%, 50%, 100%).

Data Analysis: example of time-frequency analysis (Gilley et al. 2014)
[Time-frequency plots, control group vs. LLP group, with theta, alpha, beta and gamma bands marked.]
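As an illustration of what such an analysis involves, a generic Morlet-wavelet approach via MNE-Python (not necessarily Gilley et al.'s exact method):

    import numpy as np
    from mne.time_frequency import tfr_array_morlet

    def time_frequency_power(epochs_data, sfreq):
        """epochs_data: (n_epochs, n_channels, n_times) array of epoched EEG,
        e.g. epochs.get_data() from the sketch above. Returns average power
        per channel, frequency and time point across delta through gamma."""
        freqs = np.arange(2.0, 61.0, 1.0)
        power = tfr_array_morlet(epochs_data, sfreq=sfreq, freqs=freqs,
                                 n_cycles=freqs / 2.0, output="avg_power")
        return freqs, power   # power shape: (n_channels, n_freqs, n_times)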