1
Human Cognition: Decoding Perceived, Attended, Imagined Acoustic Events and Human-Robot Interfaces
2
The Team
Adriano Claro Monteiro, Alain de Cheveigné, Anahita Mehta, Byron Galbraith, Dimitra Emmanouilidou, Edmund Lalor, Deniz Erdogmus, Jim O'Sullivan, Mehmet Ozdas, Lakshmi Krishnan, Malcolm Slaney, Mike Crosse, Nima Mesgarani, Jose L. "Pepe" Contreras-Vidal, Shihab Shamma, Thusitha Chandrapala
3
The Goal To determine a reliable measure of imagined audition using electroencephalography (EEG). To use this measure to communicate.
4
What types of imagined audition?
Speech: short (~3-4 s) sentences
- "The whole maritime population of Europe and America."
- "Twinkle, twinkle, little star."
- "London Bridge is falling down, falling down, falling down."
Music: short (~3-4 s) phrases
- Imperial March from Star Wars.
- Simple sequence of tones.
Steady-state auditory stimulation: 20 s trials
- Broadband signal, amplitude modulated at 4 or 6 Hz.
5
The Experiment
64-channel EEG system (Brain Vision LLC - thanks!), sampled at 500 samples/s.
Each "trial" consisted of the presentation of the actual auditory stimulus (the "perceived" condition), followed 2 s later by the subject imagining hearing that stimulus again (the "imagined" condition).
6
The Experiment
Careful control of experimental timing. Each trial ran: fixation (+) and countdown (4, 3, 2, 1) → perceived stimulus → 2 s gap → imagined stimulus (5 repetitions, 2 s apart) → break → next stimulus.
7
Data Analysis - Preprocessing
Filtering
Independent Component Analysis (ICA)
Time-shift Denoising Source Separation (DSS): looks for reproducibility over stimulus repetitions.
A preprocessing sketch follows below.
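A minimal preprocessing sketch, assuming BrainVision files and MNE-Python; the filename, filter band, component count, and excluded components are all illustrative, and the DSS step is only indicated, not implemented:

```python
# Hypothetical preprocessing sketch using MNE-Python (filenames and
# parameters are illustrative, not taken from the original pipeline).
import mne

# BrainVision recordings come as a .vhdr header plus .eeg/.vmrk files.
raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)

# Band-pass filter to the range where envelope-tracking responses live.
raw.filter(l_freq=1.0, h_freq=40.0)

# ICA to remove eye-blink and muscle components (component indices are
# chosen by visual inspection in practice; [0, 1] is a placeholder).
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]
ica.apply(raw)

# DSS (denoising source separation) would follow here, e.g. via the
# meegkit package, to emphasize components reproducible across trials.
```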
8
Data Analysis: Hypothesis-driven.
The hypothesis: EEG recorded while people listen to actual speech varies in a way that tracks the amplitude envelope of the presented speech. Therefore, EEG recorded while people IMAGINE speech should vary in a way that tracks the amplitude envelope of the IMAGINED speech.
9
Data Analysis: Hypothesis-driven.
Phase consistency over trials: EEG from the same sentence imagined over several trials should show consistent phase variations; EEG from different imagined sentences should not. A phase-locking sketch follows below.
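A minimal sketch of that phase-consistency test as a phase-locking value (PLV) across trials; the array layout, band edges, and filter order are assumptions, not taken from the original analysis:

```python
# PLV across trials for one channel: `trials` has shape
# (n_trials, n_samples), all repetitions of the same imagined sentence.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500  # samples/s, as in the recording

def band_phase(trials, lo, hi, fs):
    """Band-pass each trial and return its instantaneous phase."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.angle(hilbert(filtered, axis=-1))

def phase_locking_value(trials, lo, hi, fs):
    """PLV per time point: 1 = perfectly consistent phase, 0 = random."""
    phase = band_phase(trials, lo, hi, fs)
    return np.abs(np.exp(1j * phase).mean(axis=0))

rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 1500))  # placeholder: 20 trials, 3 s
plv_theta = phase_locking_value(trials, 4, 8, fs)  # near 0 for noise
```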
10
Data Analysis: Hypothesis-driven.
[Figure: inter-trial phase consistency. Actual speech shows consistency in the theta (4-8 Hz) band; imagined speech shows consistency in the alpha (8-14 Hz) band.]
11
Data Analysis: Hypothesis-driven.
12
Data Analysis: Hypothesis-driven.
[Figure: red line = perceived music; green line = imagined music.]
13
Data Analysis - Decoding
14
Data Analysis - Decoding
[Figure: original vs. reconstructed stimulus envelopes. "London Bridge": r = 0.30, p = 3e-5; "Twinkle Twinkle": r = 0.19, p = 0.01.]
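The slides do not spell out the decoder, but a common choice for this kind of envelope reconstruction is a linear backward model: regress the stimulus envelope onto time-lagged EEG, then correlate the reconstruction with the true envelope to get r and p values like those above. A sketch under those assumptions (lag count and ridge penalty are illustrative):

```python
# Backward-model envelope reconstruction: `eeg` is (n_samples, n_channels),
# `env` is the time-aligned amplitude envelope, shape (n_samples,).
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

def lagged(eeg, n_lags):
    """Stack time-lagged copies of every channel as features."""
    n, c = eeg.shape
    X = np.zeros((n, c * n_lags))
    for k in range(n_lags):
        X[k:, k * c:(k + 1) * c] = eeg[:n - k]
    return X

def reconstruct(eeg_train, env_train, eeg_test, env_test, n_lags=100):
    """Fit a ridge decoder; correlate its output with the true envelope."""
    model = Ridge(alpha=1e3)
    model.fit(lagged(eeg_train, n_lags), env_train)
    recon = model.predict(lagged(eeg_test, n_lags))
    return pearsonr(recon, env_test)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((5000, 64))  # placeholder EEG
env = rng.standard_normal(5000)        # placeholder envelope
r, p = reconstruct(eeg[:4000], env[:4000], eeg[4000:], env[4000:])
```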
15
Data Analysis - SSAEP (Steady-State Auditory Evoked Potentials)
16
Data Analysis - SSAEP
[Figure: response spectra for the perceived and imagined conditions at the 4 Hz and 6 Hz modulation rates.]
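Detecting an SSAEP reduces to looking for a spectral peak at the modulation rate. A minimal sketch, assuming a single-channel 20 s trial at 500 samples/s; the function name and decision rule are illustrative:

```python
# Pick the stimulus modulation rate (4 vs 6 Hz) from the trial spectrum.
import numpy as np

fs = 500

def power_at(trial, freq, fs):
    """Power at a target frequency from the FFT of the whole trial."""
    spectrum = np.abs(np.fft.rfft(trial)) ** 2
    freqs = np.fft.rfftfreq(len(trial), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

rng = np.random.default_rng(0)
trial = rng.standard_normal(20 * fs)  # placeholder 20 s trial
label = 4 if power_at(trial, 4.0, fs) > power_at(trial, 6.0, fs) else 6
```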
17
Data Analysis: Data Mining/Machine Learning Approaches
18
Data Analysis: Data Mining/Machine Learning Approaches
19
SVM Classifier
Input: EEG data (channels × time) plus class labels; output: predicted labels.
One trial of EEG data, one row per channel:
$\mathit{EEG} = \begin{bmatrix} e(t)_1 \\ \vdots \\ e(t)_{64} \end{bmatrix}$
Concatenate channels into a single row vector:
$\mathit{EEG} = \begin{bmatrix} e(t)_1 & \ldots & e(t)_{64} \end{bmatrix}$
Group N trials:
$X = \begin{bmatrix} \mathit{EEG}_1 \\ \vdots \\ \mathit{EEG}_N \end{bmatrix}$
Input covariance matrix: $C_X = X X^T$
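A runnable sketch of this pipeline with scikit-learn on placeholder data. With a linear kernel the SVM operates on the Gram matrix $X X^T$, which is exactly the slide's input covariance matrix $C_X$; the trial counts and dimensions below are illustrative:

```python
# Linear SVM on concatenated-channel trials, with cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 64, 250))  # placeholder: trials x channels x samples
labels = rng.integers(0, 2, 40)              # placeholder class labels

# Concatenate channels so each trial becomes one long feature vector,
# mirroring the slide's row-stacking of e(t)_1 ... e(t)_64.
X = trials.reshape(trials.shape[0], -1)

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, labels, cv=5)
print("mean decoding accuracy:", scores.mean())
```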
20
SVM Classifier Results
Decoding imagined speech and music: mean decoding accuracies (DA) of 87%, 90%, and 90%.
21
DCT Processing Chain
Raw EEG signal (500 Hz) → DSS output (look for repeatability) → DCT output (reduce dimensionality). A sketch of the DCT step follows below.
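A minimal sketch of the DCT step, assuming DSS-cleaned trials shaped channels × samples; the coefficient count is an illustrative choice that trades temporal detail for dimensionality:

```python
# Keep only the lowest DCT coefficients per channel, which carry most
# of the slow signal energy, as a compact feature vector.
import numpy as np
from scipy.fft import dct

def dct_features(trial, n_keep=32):
    """DCT each channel along time; keep the lowest n_keep coefficients."""
    coeffs = dct(trial, type=2, norm="ortho", axis=-1)
    return coeffs[..., :n_keep].ravel()

rng = np.random.default_rng(0)
trial = rng.standard_normal((64, 1000))  # placeholder 64-channel, 2 s trial
feats = dct_features(trial)              # 64 * 32 = 2048 features
```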
22
DCT Classification Performance
[Figure: classification accuracy (%) per condition.]
23
Data Analysis: Data Mining/Machine Learning Approaches
Linear Discriminant Analysis (LDA) on different frequency bands. Comparisons:
- Music vs Speech
- Speech 1 vs Speech 2
- Music 1 vs Music 2
- Speech vs Rest
- Music vs Rest
Results: accuracies of roughly 50-66%. A band-power LDA sketch follows below.
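A runnable sketch of that band-power LDA on placeholder data; the band list, filter order, and cross-validation setup are assumptions, not the original configuration:

```python
# LDA on per-channel band power in a few canonical EEG bands.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 500
bands = [(4, 8), (8, 14), (14, 30)]  # theta, alpha, beta

def band_power(trials, lo, hi, fs):
    """Mean power per channel after band-pass filtering."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return (filtered ** 2).mean(axis=-1)

rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 64, 1000))  # placeholder data
labels = rng.integers(0, 2, 40)

X = np.hstack([band_power(trials, lo, hi, fs) for lo, hi in bands])
scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5)
print("mean accuracy:", scores.mean())
```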
24
Summary
Both hypothesis-driven and machine-learning approaches indicate that it is possible to decode/classify imagined audition.
Many very encouraging results that align with our original hypothesis.
More data needed!! In a controlled environment!!
To be continued...