K. Boakye: Qualifying Exam Presentation
Speech Detection, Classification, and Processing for Improved Automatic Speech Recognition in Multiparty Meetings
Kofi A. Boakye
Advisor: Nelson Morgan
January 17th, 2007
At-a-glance
Goal: Reduce errors caused by crosstalk and overlapped speech to improve speech recognition in meetings
The task is automatic speech recognition (ASR) in meetings:
– Personal mics pick up other speakers (crosstalk)
– Distant mics pick up multiple speakers at the same time (overlapped speech)
Outline of talk
– Introduction
– Speech activity detection for nearfield microphones
– Overlap speech detection for farfield microphones
– Overlap speech processing for farfield microphones
– Preliminary experiments
The meeting domain
Multiparty meetings are a rich content source for spoken language technology:
– Rich transcription
– Indexing and summarization
– Machine translation
– High-level language and behavioral analysis using dialog act annotation
Good automatic speech recognition (ASR) is important.
Meeting ASR set-up
For a typical set-up, meeting ASR audio data is obtained from various sensors located in the room. Common types include:
Individual headset microphone
– Head-mounted mic positioned close to the speaker
– Best-quality signal for the speaker
Meeting ASR set-up
Lapel microphone
– Individual mic placed on a participant's clothing
– More susceptible to interfering speech
Meeting ASR set-up
Tabletop microphone
– Omni-directional pressure-zone mic
– Placed between participants on a table or other flat surface
– Number and placement vary
Meeting ASR set-up
Linear microphone array
– Collection of omni-directional mics with a fixed linear topology
– Composition can range from 4 to 64 mics
– Enables beamforming for high-SNR signals
Meeting ASR set-up
Circular microphone array
– Combines the central location of the tabletop mic and the fixed topology of the linear array
– Consists of 4 to 8 omni-directional mics
– Enables source localization and speaker tracking
ASR in multiparty meetings
Nearfield recognition is generally performed by decoding each audio channel separately.
[Diagram: one pipeline per channel, each running S/NS detection → feature extraction → probability estimation → decoding → words]
ASR in multiparty meetings
[Diagram: detail of a single recognition channel — speech → feature extraction → probability estimation → decoding, with pronunciation models and a grammar (language) model feeding the decoder → words]
ASR in multiparty meetings
Farfield recognition is done in one of two ways:
1) Signal combination
[Diagram: the farfield channels pass through signal combination into a single pipeline of S/NS detection → feature extraction → probability estimation → decoding → words]
ASR in multiparty meetings
2) Hypothesis combination
[Diagram: each channel is decoded separately (S/NS detection → feature extraction → probability estimation → decoding), and the per-channel hypotheses are merged by hypothesis combination → words]
Performance metrics
– Word error rate (WER): token-based ASR performance metric
– Diarization error rate (DER): time-based diarization performance metric
Crosstalk and overlapped speech
ASR in meetings presents specific challenges owing to the domain. Multiple individuals speaking at various times leads to two phenomena in particular:
– Crosstalk
Associated with close-talking microphones
This non-local speech produces primarily insertion errors
Morgan et al. '03: WER differed 75% relative between segmented and unsegmented waveforms, due largely to crosstalk
– Overlapped (co-channel) speech
Most pronounced (and severe) in the distant-microphone condition
Also produces errors for the recognizer
Shriberg et al. '01: 12% absolute WER difference between overlapped and non-overlapped speech segments in the nearfield case
Scope of project
Speech activity detection (SAD) for nearfield mics (addresses crosstalk)
– Investigate features for SAD using an HMM segmenter
– Metrics: word error rate (WER) and diarization error rate (DER)
– Baseline features: standard cepstral features for an ASR system
– Features will mainly be cross-channel in nature
Overlap detection for farfield mics (addresses overlapped speech)
– Investigate features for overlap detection using an HMM segmenter
– Metric: diarization error rate
– Baseline features: standard cepstral features
– Features will mainly be single-channel and pitch-related
Overlap speech processing for farfield mics
– Determine if speech separation methods can reduce WER
– Harmonic enhancement and suppression (HES)
– Adaptive decorrelation filtering (ADF)
Part I: Speech Activity Detection for Nearfield Microphones
Related work
The amount of work specific to multi-speaker SAD is rather small.
– Wrigley et al. '03 and '05: performed a systematic analysis of features for classifying multi-channel audio. Key result: of the 20 features examined, the best performer for each class was one derived from cross-channel correlation.
– Pfau et al. '01: thresholding cross-channel correlations as a post-processing step for HMM-based SAD yielded a 12% relative frame-error-rate reduction.
– Laskowski et al. '04: cross-channel correlation thresholding produced ASR WER improvements of 6% absolute over energy thresholding.
Candidate features
Cepstral features
– Consist of 12th-order Mel-frequency cepstral coefficients, log-energy, and their first- and second-order time derivatives
– Common to a number of speech-related fields
– Log-energy is a fundamental component of most SAD systems
– MFCCs could distinguish local speech from phenomena with similar energy levels (breaths, coughs, etc.)
Candidate features
Cross-channel correlation
– Clear first choice for a cross-channel feature
– Wrigley et al.: normalized cross-channel correlation was the most effective feature for crosstalk detection
– Normalization seeks to compensate for channel gain differences and is based on the frame-level energy of the target channel, the non-target channel, or the square root of the product of target and non-target energies (spherical normalization)

C_{ij}(t) = \max_{\tau} \sum_{k=0}^{P-1} x_i(t-k) \, x_j(t-k-\tau) \, w(k)
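The windowed, lag-searched correlation above can be sketched in a few lines of numpy (function and parameter names are illustrative, and normalization is omitted for brevity):

```python
import numpy as np

def cross_channel_corr(x_i, x_j, t, P=256, max_lag=16):
    """C_ij(t) = max over tau of sum_k x_i(t-k) x_j(t-k-tau) w(k),
    with w a Hamming window of length P. Returns (score, best lag)."""
    w = np.hamming(P)
    k = np.arange(P)
    frame_i = x_i[t - k] * w                      # windowed target frame
    scores = [np.dot(frame_i, x_j[t - k - tau])   # correlate at each lag
              for tau in range(max_lag + 1)]
    best_tau = int(np.argmax(scores))
    return scores[best_tau], best_tau
```

On a non-target channel that merely picks up the target speaker with a small acoustic delay, the maximizing lag recovers that delay, which is why the lag search matters for crosstalk.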
Candidate features
Log-energy differences
– Just as energy is a good feature for single-channel SAD, relative energy between channels should work well for our scenario
– Represents the ratio of short-time energy between channels:

D_{ij}(t) = E_i(t) - E_j(t)

– Much less utilized than cross-channel correlation, though it can be more robust
Normalized log-energy difference
– Compensates for channel gain differences by subtracting each channel's minimum log-energy:

E_{norm,i}(t) = E_i(t) - E_{min,i}
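A frame-based sketch of these definitions (the 1e-12 floor is an implementation detail to keep the log finite, not part of the slides):

```python
import numpy as np

def log_energy(frame):
    # Short-time log-energy of one frame
    return float(np.log(np.sum(np.square(frame)) + 1e-12))

def led(frame_i, frame_j):
    # D_ij(t) = E_i(t) - E_j(t)
    return log_energy(frame_i) - log_energy(frame_j)

def nled(frames_i, frames_j):
    # Normalized LED: subtract each channel's minimum log-energy
    # (E_norm,i(t) = E_i(t) - E_min,i) before differencing
    E_i = np.array([log_energy(f) for f in frames_i])
    E_j = np.array([log_energy(f) for f in frames_j])
    return (E_i - E_i.min()) - (E_j - E_j.min())
```

The minimum subtraction makes a channel whose preamp gain is uniformly high look like any other channel, which is the gain compensation the slide describes.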
Candidate features
Time delay of arrival (TDOA) estimates
– Performed well as features for farfield speaker diarization (Ellis and Liu '04; Pardo et al. '06)
– Seem particularly well suited to distinguishing local speech from crosstalk
– Proposed estimation method: generalized cross-correlation with phase transform (GCC-PHAT)

Standard cross-correlation: R_{ij}(\tau) = \int X_i(f) X_j^*(f) e^{j 2\pi f \tau} \, df
GCC-PHAT: R^{PHAT}_{ij}(\tau) = \int \frac{X_i(f) X_j^*(f)}{|X_i(f) X_j^*(f)|} e^{j 2\pi f \tau} \, df
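A compact GCC-PHAT sketch: the cross-spectrum is whitened so that only phase (i.e. delay) information remains, and the peak of its inverse transform gives the TDOA. The small epsilon guarding the division is an implementation detail:

```python
import numpy as np

def gcc_phat(sig, ref, fs=1.0):
    """Estimate the delay of `sig` relative to `ref` (in samples when
    fs=1.0) using the phase transform weighting."""
    n = len(sig) + len(ref)                     # zero-pad to avoid wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12              # PHAT: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

For a local speaker, the delay to a nearby mic is near zero; crosstalk arrives with a larger, speaker-position-dependent delay, which is what makes TDOA discriminative here.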
Feature generation and combination
One issue with cross-channel features: the number of channels is variable
– Varies between 3 and 12 for some corpora
Proposed solution: use order statistics (max and min)
Feature combination is considered as well:
– Simple concatenation
– Combination with dimensionality reduction: principal component analysis (PCA), linear discriminant analysis (LDA), or a multilayer perceptron (MLP)
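The order-statistic idea can be sketched directly: however many non-target channels a meeting has, the per-channel feature values collapse to a fixed-length vector (names here are illustrative):

```python
import numpy as np

def order_statistic_pool(per_channel_values):
    """Map a variable-length list of per-channel feature values
    (e.g. cross-channel correlations against each non-target channel)
    to a fixed-size [max, min] vector."""
    v = np.asarray(per_channel_values, dtype=float)
    return np.array([v.max(), v.min()])
```

A 3-channel meeting and a 12-channel meeting then yield feature vectors of the same dimensionality, so a single HMM segmenter can serve both.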
Work plan for part I
Compare the performance of HMM segmentation using the proposed features
– Metrics: WER and DER (DER typically correlates with WER and can be computed quickly)
– Data: NIST Rich Transcription (RT) Meeting Recognition evaluations; 10-12 min. excerpts of meeting recordings from different sites
– Baseline measure: standard cepstral features
– Feature performance measured in isolation and with baseline features
– Try to determine the best combination of features and the combination technique that obtains it
A significant amount of this work has been done.
Part II: Overlap Detection for Farfield Microphones
Related work
"Usable" speech for speaker recognition
– Lewis and Ramachandran '01: compared MFCCs, LPCCs, and a proposed pitch prediction feature (PPF) for speaker count labeling in both closed- and open-set scenarios
– Shao and Wang '03: used multi-pitch tracking to identify usable speech for a closed-set speaker recognition task
– Yantorno et al.: proposed the spectral autocorrelation peak-valley ratio (SAPVR), adjacent pitch period comparison (APPC), and kurtosis
Candidate features
Cepstral features
– As a representation of the speech spectral envelope, they should provide information on whether multiple speakers are active
– Zissman et al. '90: a Gaussian classifier with cepstral features reported 80% classification accuracy between target-only, jammer-only, and target-plus-jammer speech
Candidate features
Cross-channel correlation
– Recall Wrigley et al.: correlation was the best feature for nearfield audio classification
– Unclear if this extends to the farfield overlap case:
For nearfield, overlapped speech tends to have low cross-channel correlation
For farfield, the large asymmetry in speaker-to-microphone distances is not typically present, so low correlation may not occur
Candidate features
Pitch estimation features
– Explore how pitch detectors behave in the presence of overlapped speech
– Methods can be applied at the subband level, which may be appropriate here since harmonic energy from different speakers may be concentrated in different bands
– Issue regarding unvoiced regions: include a feature that indicates voicing (energy, zero-crossing rate, spectral tilt)
Candidate estimators: zero-crossing distance, auto-correlation function, average magnitude difference function
Candidate features
Spectral autocorrelation peak-valley ratio (SAPVR)

SAPVR = 20 \log_{10} \frac{R(p_1)}{R(q_1)}

where p_1 is the local maximum of the non-zero-lag spectral autocorrelation, and q_1 is the next local maximum not harmonically related to p_1, or the local minimum between p_1 and p_2.
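A rough numpy sketch of SAPVR. The peak/valley picking is simplified: q_1 is taken as the valley between lag 0 and the dominant peak p_1, which is one of the two definitions above:

```python
import numpy as np

def sapvr(frame):
    """Spectral autocorrelation peak-valley ratio, in dB. Voiced
    single-speaker frames have regularly spaced spectral harmonics,
    giving a strong non-zero-lag autocorrelation peak; overlap tends
    to flatten this structure."""
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    ac = np.correlate(spec, spec, mode="full")[len(spec) - 1:]
    ac = ac / ac[0]                              # normalize by lag-0 value
    # candidate peaks: local maxima away from lag 0
    peaks = [k for k in range(2, len(ac) - 1)
             if ac[k] > ac[k - 1] and ac[k] > ac[k + 1]]
    if not peaks:
        return 0.0
    p1 = max(peaks, key=lambda k: ac[k])         # dominant non-zero-lag peak
    q1 = 1 + int(np.argmin(ac[1:p1 + 1]))        # valley before the peak
    return float(20.0 * np.log10(ac[p1] / (ac[q1] + 1e-12)))
```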
Candidate features
Kurtosis
– For a zero-mean RV x, kurtosis is defined as:

\kappa_x = \frac{E\{x^4\}}{(E\{x^2\})^2} - 3

– Measures the "Gaussianity" of an RV
– Speech signals, which are modeled as Laplacian or Gamma distributed, tend to be super-Gaussian
– Summing such signals produces a signal with reduced kurtosis (LeBlanc and DeLeon '98; Krishnamachari et al. '00)
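The kurtosis-reduction effect is easy to check numerically with Laplacian stand-ins for speech (a unit Laplacian has excess kurtosis 3; the sum of two independent ones, 1.5):

```python
import numpy as np

def excess_kurtosis(x):
    # kappa_x = E{x^4} / (E{x^2})^2 - 3; zero for a Gaussian RV
    x = np.asarray(x, dtype=float)
    return float(np.mean(x**4) / np.mean(x**2) ** 2 - 3.0)

rng = np.random.default_rng(0)
s1 = rng.laplace(size=200_000)   # single "speaker": super-Gaussian
s2 = rng.laplace(size=200_000)
mix = s1 + s2                    # "overlap": closer to Gaussian
```

A frame-level kurtosis feature applies the same statistic over short windows; lower values hint that more than one speaker is active.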
Feature generation and combination
As with the nearfield condition, the number of channels varies with the meeting. Aside from cross-channel correlation, the features can be generated from a single channel.
Explore two methods:
1) Select a single "best" channel based on SNR estimates
2) Combine the audio signals using delay-and-sum beamforming to produce a single channel (may adversely affect pitch-derived features)
Examine the same combination approaches as before.
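A toy delay-and-sum sketch with integer-sample delays assumed already known (real beamformers estimate fractional delays, e.g. with GCC-PHAT, before summing):

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Align each channel by its known integer delay (in samples) and
    average: coherent speech adds up, uncorrelated noise averages out."""
    n = min(len(c) - d for c, d in zip(channels, delays))
    aligned = [np.asarray(c)[d:d + n] for c, d in zip(channels, delays)]
    return np.mean(aligned, axis=0)
```

The concern noted above is visible in this form: averaging channels whose residual misalignment differs per speaker can smear the harmonic structure that pitch-derived features rely on.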
Work plan for part II
Compare the performance of HMM segmentation using the proposed features
– Metric: DER
– Data: NIST Rich Transcription (RT) Meeting Recognition evaluations
– Baseline measure: standard cepstral features
– Feature performance measured in isolation and in conjunction with baseline features
– Try to determine the overall best combination of features and the combination technique that obtains it
Part III: Overlap Speech Processing for Farfield Microphones
Related work
Blind source separation (BSS)
Given X = [x_0 ... x_N]^T and S = [s_0 ... s_M]^T, M \le N, related by X = A S, we seek to find \hat{S} = W^T X.
If we assume the s_i's are independent and at most one is Gaussian distributed, the solving method becomes one of independent component analysis (ICA).
Real-world audio signals have convolutive mixing:
– Reformulate the problem in the Z-transform domain → similar solutions
– Most techniques are iterative, based on infomax criteria
– Lee and Bell '97: BSS yielded improved recognition results for digit recognition in a real room environment
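The instantaneous mixing model can be written out concretely. Here the demixing matrix is simply the true inverse of A, standing in for what an ICA algorithm would have to estimate blindly from X alone:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 5000))           # independent super-Gaussian sources
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])                # instantaneous mixing matrix
X = A @ S                                 # observed mixtures: X = A S

W = np.linalg.inv(A)                      # ICA estimates this from X only
S_hat = W @ X                             # recovered sources
```

With convolutive mixing, A becomes a matrix of filters, and the problem is reformulated in the Z-transform domain as noted above.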
Related work
Blind source separation (BSS)
Another set of approaches is based on minimizing cross-channel correlation: adaptive decorrelation filtering
– Weinstein et al. '93 & '96
– Yen et al. '96-'99: demonstrated improved recognition performance on simulated mixtures with coupling estimated from a real room
Related work
Single-channel separation
– Techniques based on computational auditory scene analysis (CASA) try to separate speakers by partitioning the audio spectrogram
– Partitioning relies on certain types of structure in the signal and uses cues such as pitch, continuity, and common onset and offset
Related work
Single-channel separation
– Bach and Jordan '05: used spectral clustering to create speech stream partitions
– Morgan et al. '97: used a simpler though related method exploiting harmonic structure; results on keyword spotting suggest the approach may be useful in an ASR context
Harmonic enhancement and suppression
Single-channel speech separation method that utilizes the harmonic structure of voiced speech:
– The speaker's harmonics are identified using pitch estimation, and a signal is generated by enhancing them
– Alternatively, time-frequency bins of the short-time Fourier transform in the neighborhood of the harmonics are selected and the others zeroed, followed by signal reconstruction
– For an additional speaker, the first speaker's harmonics are suppressed and/or the other speaker's harmonics enhanced, if the pitch can be determined
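A single-frame sketch of the bin-selection variant. The pitch f0 is assumed already known from a pitch tracker, and the 20 Hz bandwidth is an arbitrary illustrative choice:

```python
import numpy as np

def harmonic_filter(frame, f0, fs, bandwidth=20.0, enhance=True):
    """Keep (enhance=True) or zero (enhance=False) the FFT bins lying
    within `bandwidth` Hz of any multiple of the pitch f0, then
    reconstruct the time signal."""
    n = len(frame)
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    dist = np.minimum(freqs % f0, f0 - freqs % f0)  # distance to a harmonic
    mask = dist < bandwidth
    if not enhance:
        mask = ~mask
    return np.fft.irfft(spec * mask, n=n)
```

Running the same frame through with enhance=False suppresses the first speaker's harmonics, leaving the residual for the second speaker.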
Adaptive decorrelation filtering
Multi-channel speech separation method. Separates signals by adaptively determining the filters governing the coupling between channels.
Look at the two-source, two-channel case:

Y_1(f) = H_{11}(f) S_1(f) + H_{12}(f) S_2(f)
Y_2(f) = H_{21}(f) S_1(f) + H_{22}(f) S_2(f)
Adaptive decorrelation filtering
Equivalently:

Y_1(f) = X_1(f) + A(f) X_2(f)
Y_2(f) = X_2(f) + B(f) X_1(f)

where

X_i(f) = H_{ii}(f) S_i(f), i = 1, 2
A(f) = H_{12}(f) / H_{22}(f)
B(f) = H_{21}(f) / H_{11}(f)
Adaptive decorrelation filtering
Now process the signals with the separation system:

V_1(f) = Y_1(f) - \hat{A}(f) Y_2(f)
V_2(f) = Y_2(f) - \hat{B}(f) Y_1(f)

When \hat{A}(f) = A(f) and \hat{B}(f) = B(f),

V_i(f) = C(f) X_i(f), where C(f) = 1 - A(f) B(f)

If C(f) is invertible, the signals x_1(t) and x_2(t) can be perfectly restored; when C(f) is not invertible, linearly distorted versions of the signals are obtained.
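The algebra above can be checked numerically. For simplicity the coupling filters A(f) and B(f) are taken as flat (scalar) here, whereas real ADF adapts FIR filters until the outputs are decorrelated:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.standard_normal(4096)   # X1 = H11*S1: speaker 1 at mic 1
x2 = rng.standard_normal(4096)   # X2 = H22*S2: speaker 2 at mic 2

a, b = 0.6, 0.4                  # flat coupling "filters" A and B
y1 = x1 + a * x2                 # Y1 = X1 + A X2
y2 = x2 + b * x1                 # Y2 = X2 + B X1

# Separation with correct estimates A_hat = A, B_hat = B:
v1 = y1 - a * y2                 # V1 = Y1 - A_hat Y2
v2 = y2 - b * y1                 # V2 = Y2 - B_hat Y1
c = 1.0 - a * b                  # C = 1 - A B
```

Each output is the original signal scaled by C, i.e. V_i = C X_i; since a nonzero scalar C is trivially invertible, x1 and x2 are recovered exactly in this toy case.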
Work plan for part III
Employ speech separation algorithms on overlap segments to try to improve ASR performance
– Metric: WER, with a focus on WER in overlap regions
– Data: NIST RT evaluations (same as in parts I and II)
Subsequent analyses if improvements are obtained:
– Compare processing the entire segment versus just the overlap region
– Process overlap regions as determined by the overlap detector from part II
– Analyze patterns of improvement, or conversely, which error types persist
Preliminary Experiments
Preliminary Experiments
These experiments pertain to part I. They were performed using the Augmented Multiparty Interaction (AMI) development-set meetings for the NIST RT-05S evaluation:
– Scenario-based meetings, each involving 4 participants wearing headset or head-mounted lapel mics
Segmenter
– Derived from an HMM-based speech recognition system
– Two classes, "speech" and "nonspeech", each represented with a three-state phone model
– Training data: first 10 minutes of 35 AMI meetings
– Test data: 12-minute excerpts from four additional AMI meetings
Expt. 1: Single feature performance (diarization)
– LEDs and NLEDs outperform the baseline cepstral features
– NMXC features do more poorly (higher false-alarm rate)
– NLEDs give lower DER than LEDs, indicating the effectiveness of the normalization procedure
Expt. 1: Single feature performance (recognition)
– NMXC features outperform the baseline; LEDs and NLEDs do not
– NLEDs give lower WER than LEDs
– Cross-channel features reduce the insertion rate (between 39% and 46% relative)
– 4% difference between the best feature (NMXC) and the reference
Expt. 2: Initial feature combination
– Combination with the baseline yields similar performance across features (exception: base + NMXC + LEDs)
– Improved performance comes from reduced false alarms/insertions
– 3-way combinations degrade performance, which may be due to correlation between the features
– 2% difference between the best combination and the reference
Summary
Goal: Reduce errors caused by crosstalk and overlapped speech to improve speech recognition in meetings
Crosstalk
– Use an HMM-based segmenter to identify local speech regions
– Investigate features to effectively do this
[Diagram: improve S/NS detection in order to improve the downstream pipeline of feature extraction → probability estimation → decoding → words]
Summary
Goal: Reduce errors caused by crosstalk and overlapped speech to improve speech recognition in meetings
Overlapped speech
– Use an HMM-based segmenter to identify overlapped regions; investigate features to effectively do this
– Process the overlap regions to improve recognition performance; explore two methods, HES and ADF, to see if they can do this
[Diagram: add overlap detection and overlap processing stages ahead of feature extraction → probability estimation → decoding to improve the farfield pipeline]
Summary
Goal: Reduce errors caused by crosstalk and overlapped speech to improve speech recognition in meetings
Experiments: some begun, many to be done
Fin