Speech Segregation Based on Oscillatory Correlation
DeLiang Wang
The Ohio State University
Outline of Presentation
• Introduction
• Auditory Scene Analysis (ASA) Problem
• Binding Problem
• Oscillatory Correlation Theory
• LEGION Network
• Multistage Model for Computational ASA (CASA)
• Recent Results
• Discussion and Summary
ASA Problem (Bregman’90)
• Listeners are able to parse the complex mixture of sounds arriving at the ears in order to retrieve a mental representation of each sound source
• ASA takes place in two conceptual stages:
  • Segmentation. Decompose the acoustic signal into ‘sensory elements’ (segments)
  • Grouping. Combine segments into groups, such that segments in the same group are likely to have arisen from the same environmental source
ASA Problem - continued
• The grouping process involves two mechanisms:
  • Primitive grouping. Innate data-driven mechanisms, consistent with those described by the Gestalt psychologists for visual perception (proximity, similarity, common fate, good continuation, etc.)
  • Schema-driven grouping. Application of learned knowledge about speech, music and other environmental sounds
Binding Problem
• Information about acoustic features (pitch, spectral shape, interaural differences, AM, FM) is extracted in distributed areas of the auditory system
• How are these features combined to form a whole?
• Hierarchies of feature-detecting cells exist, but do not constitute a solution to the binding problem; no evidence for ‘grandmother cells’
Oscillatory Correlation (von der Malsburg & Schneider’86; Wang’96)
• Neural oscillators used to represent auditory features
• Oscillators representing features of the same source are synchronized (phase-locked with zero phase lag), and are desynchronized from oscillators representing different sources
• Supported by experimental findings, e.g. oscillations in auditory cortex measured by EEG, MEG and local field potentials
Oscillatory Correlation Theory
[Figure] FD: Feature Detector
LEGION Architecture for Stream Segregation
• LEGION: Locally Excitatory Globally Inhibitory Oscillator Network (Terman & Wang’95)
Single Relaxation Oscillator
[Figure] Typical x trace (membrane potential), with stimulus and without stimulus
LEGION on a Chip
The chip area is 6.7 mm² (core 3 mm²) and it implements a 16×16 LEGION network (by Jordi Cosp, Polytechnic University of Catalonia, Spain)
Computational Auditory Scene Analysis
• The ASA problem and the binding problem are closely related; the oscillatory correlation framework can address both issues
• Previous work also suggests that:
  • Representation of the auditory scene is a key issue
  • Temporal continuity is important (although it is ignored in most frame-based sound processing algorithms)
  • Fundamental frequency (F0) is a strong cue for grouping
A Multi-stage Model for CASA
Auditory Periphery Model
• A bank of gammatone filters, with impulse response
  g(t) = t^(n-1) exp(-2πbt) cos(2πft + φ) H(t)
  • n: filter order (fourth-order is used)
  • b: bandwidth
  • f: centre frequency; φ: phase
  • H: Heaviside function
• Meddis hair cell model converts gammatone output to neural firing
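As an illustration, a minimal Python sketch of this impulse response (not part of the original model's code; the centre frequency, sampling rate, and the 1.019·ERB bandwidth constant are assumed, illustrative choices):

```python
import numpy as np

def gammatone_ir(fc, b, fs, duration=0.025, order=4):
    """Impulse response g(t) = t^(order-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t)
    for t >= 0 (the Heaviside factor is implicit in starting t at 0)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

# Illustrative channel: 1 kHz centre frequency, 16 kHz sampling rate,
# bandwidth set to 1.019 times the equivalent rectangular bandwidth (ERB).
fs = 16000
fc = 1000.0
erb = 24.7 + 0.108 * fc        # Glasberg & Moore ERB approximation
g = gammatone_ir(fc, 1.019 * erb, fs)
g /= np.abs(g).max()           # normalise the peak for display
```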
Fourth-order Gammatone Filters - Example
[Figure] Impulse responses of gammatone filters
Auditory Periphery - Example
• Hair cell response to the utterance “Why were you all weary?” mixed with a telephone ringing
• 128 filter channels with centre frequencies arranged on an ERB (equivalent rectangular bandwidth) scale
Mid-level Auditory Representations
• Mid-level representations form the basis for segment formation and subsequent grouping processes
• Correlogram extracts periodicity information from simulated auditory nerve firing patterns
• Summary correlogram can be used to identify F0
• Cross-correlation between adjacent correlogram channels identifies regions that are excited by the same frequency component
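A rough sketch of how these representations might be computed from the simulated firing rates (the array shapes, window length, and normalization are assumptions for illustration, not the exact implementation):

```python
import numpy as np

def correlogram(firing, fs, max_lag_ms=12.5):
    """Autocorrelation of each channel's simulated firing rate in one time frame.
    firing: (channels, samples) array; returns (channels, lags)."""
    max_lag = int(fs * max_lag_ms / 1000)
    n_ch, n_samp = firing.shape
    acg = np.zeros((n_ch, max_lag))
    for c in range(n_ch):
        for lag in range(max_lag):
            acg[c, lag] = np.dot(firing[c, :n_samp - lag], firing[c, lag:])
    return acg

def summary_correlogram(acg):
    """Sum over channels; the largest peak at a non-zero lag indicates 1/F0."""
    return acg.sum(axis=0)

def cross_channel_correlation(acg):
    """Zero-lag correlation between adjacent channels' autocorrelation functions;
    high values mark regions excited by the same frequency component."""
    z = acg - acg.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True) + 1e-12
    return np.sum(z[:-1] * z[1:], axis=1)
```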
Mid-level Representations - Example
[Figure] Correlogram and cross-correlation for the speech/telephone mixture
Oscillator Network: Segmentation Layer
• An oscillator consists of a reciprocally connected excitatory variable x_ij and inhibitory variable y_ij (Terman & Wang’95):
  dx_ij/dt = 3x_ij - x_ij³ + 2 - y_ij + I_ij + S_ij + ρ
  dy_ij/dt = ε [γ (1 + tanh(x_ij/β)) - y_ij]
  (I_ij: external input; S_ij: coupling from neighbors and the global inhibitor; ρ: noise amplitude)
• A stable limit cycle occurs for I_ij > 0
• Each oscillator is connected to its four nearest neighbors
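A minimal sketch of one such oscillator in isolation, integrated with Euler steps (the coupling term S_ij is omitted, ρ is treated as a small constant, and the parameter values ε, γ, β are typical choices from the LEGION literature rather than values taken from this model):

```python
import numpy as np

def simulate_oscillator(I, steps=50000, dt=0.02,
                        eps=0.02, gamma=6.0, beta=0.1, rho=0.02):
    """Euler integration of a single Terman-Wang relaxation oscillator:
        dx/dt = 3x - x^3 + 2 - y + I + rho
        dy/dt = eps * (gamma * (1 + tanh(x / beta)) - y)
    The coupling term S is omitted (isolated oscillator), and rho is used
    here as a small constant standing in for the noise term.
    A stable limit cycle (sustained oscillation) appears when I > 0."""
    x, y = -2.0, 0.0
    xs = np.empty(steps)
    for k in range(steps):
        dx = 3 * x - x ** 3 + 2 - y + I + rho
        dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)
        x += dt * dx
        y += dt * dy
        xs[k] = x                      # record the 'membrane potential' trace
    return xs

trace_with_stimulus = simulate_oscillator(I=0.2)      # oscillates
trace_without_stimulus = simulate_oscillator(I=-0.2)  # settles to a fixed point
```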
Segmentation Layer - continued
• Horizontal weights are unity; vertical weights are unity if the cross-channel correlation exceeds a threshold, otherwise 0
• Oscillators receive input if the energy in the corresponding channel exceeds a threshold
• All oscillators are connected to a global inhibitor, which ensures that different segments are desynchronized from one another
• A LEGION network
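A sketch of how this connectivity could be assembled on a (channel × frame) grid (the threshold values, array shapes, and function name are assumptions; the global inhibitor itself is not modelled here):

```python
import numpy as np

def segmentation_connectivity(energy, cross_corr,
                              energy_thresh=50.0, corr_thresh=0.95):
    """Connectivity for the segmentation layer on a (channels, frames) grid.
    energy:     (channels, frames) energy of each time-frequency unit
    cross_corr: (channels - 1, frames) correlation between adjacent channels
    Returns: which oscillators receive input, horizontal weights (along time),
    and vertical weights (across channels)."""
    stimulated = energy > energy_thresh                            # input mask
    w_horizontal = np.ones((energy.shape[0], energy.shape[1] - 1)) # unity along time
    w_vertical = (cross_corr > corr_thresh).astype(float)          # unity if correlated
    return stimulated, w_horizontal, w_vertical
```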
Segmentation Layer - Example
[Figure] Output of the segmentation layer to the speech/telephone mixture
Oscillator Network: Grouping Layer
• The second layer is a two-dimensional oscillator network without global inhibition, which embodies the grouping stage of ASA
• Oscillators in the second layer only receive input if the corresponding oscillator in the first layer is stimulated
• At each time frame, an F0 estimate from the summary correlogram is used to classify channels into two categories: those that are consistent with the F0, and those that are not
Grouping Layer - continued
• Enforce a rule that all channels of the same time frame within each segment must take the same F0 category as the majority of channels in that segment
[Figure] Result for the speech/telephone example
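A sketch of the channel classification and the majority rule (the 0.95 consistency threshold and the data layout are assumptions for illustration):

```python
import numpy as np

def classify_channels(acg, f0_lag, ratio_thresh=0.95):
    """Label a channel 1 (consistent with the frame's F0) if its correlogram
    value at the F0 lag is close to the channel's own maximum, else 0.
    acg: (channels, lags) correlogram for one time frame."""
    peak = acg.max(axis=1) + 1e-12
    return (acg[:, f0_lag] / peak > ratio_thresh).astype(int)

def apply_majority_rule(labels, segment_channels):
    """Within one frame, force all channels of a segment to the F0 category
    held by the majority of that segment's channels."""
    seg_labels = labels[segment_channels]
    majority = int(seg_labels.sum() * 2 >= len(seg_labels))
    out = labels.copy()
    out[segment_channels] = majority
    return out
```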
Grouping Layer - continued
• Grouping is limited to the time window of the longest segment
• There are horizontal connections between oscillators in the same segment
• Vertical connections are formed between pairs of channels within each time frame: mutual excitation if the channels belong to the same F0 category, otherwise mutual inhibition
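A sketch of how the within-frame vertical weights might be set from the F0 categories (the weight magnitudes and the all-pairs connectivity are assumptions):

```python
import numpy as np

def grouping_vertical_weights(labels, w_excite=1.0, w_inhibit=-1.0):
    """Weight between every pair of channels in one frame: excitatory if the
    two channels share the F0 category, inhibitory otherwise.
    labels: (channels,) 0/1 F0 category per channel."""
    same = labels[:, None] == labels[None, :]
    w = np.where(same, w_excite, w_inhibit)
    np.fill_diagonal(w, 0.0)   # no self-connection
    return w
```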
Grouping Layer - Example
• Two streams emerge from the grouping layer
• Foreground: left (original mixture)
• Background: right
Evaluation
• Evaluated on a corpus of 100 mixtures (Cooke’93): 10 voiced utterances × 10 noise intrusions
• Noise intrusions have a large variety
• The resynthesis pathway allows estimation of SNR after segregation; SNR improves after processing for each noise condition
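One plausible way to obtain such SNR figures, assuming access to the clean utterance, the intrusion, and the resynthesized foreground stream (illustrative only, not necessarily the exact measure used in the evaluation):

```python
import numpy as np

def snr_db(target, distortion):
    """Signal-to-noise ratio in dB between a target signal and a distortion."""
    return 10 * np.log10(np.sum(target ** 2) / (np.sum(distortion ** 2) + 1e-12))

def snr_improvement(clean, intrusion, resynthesized):
    """SNR before segregation (clean speech vs. intrusion) and after
    (clean speech vs. residual error in the resynthesized foreground)."""
    snr_before = snr_db(clean, intrusion)
    snr_after = snr_db(clean, resynthesized - clean)
    return snr_before, snr_after, snr_after - snr_before
```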
Results of Evaluation
[Figure] Changes in SNR; speech energy retained
Summary
• An oscillatory correlation framework has been proposed for ASA
• Neurobiologically plausible
• Engineering applications: robust automatic speech recognition in noisy environments, hearing prostheses, and speech communication
• Key issue is integration of various grouping cues