1 Design of a robust multi-microphone noise reduction algorithm for hearing instruments
Simon Doclo (1), Ann Spriet (1,2), Marc Moonen (1), Jan Wouters (2)
(1) Dept. of Electrical Engineering (ESAT-SCD), KU Leuven, Belgium
(2) Laboratory for Exp. ORL, KU Leuven, Belgium
MTNS-2004

2 Overview
- Problem statement: hearing in background noise
- Adaptive beamforming: GSC
  - not robust against model errors
- Design of a robust noise reduction algorithm
  - robust fixed spatial pre-processor
  - robust adaptive stage
- Low-cost implementation of the adaptive stage
- Experimental results + demo
- Conclusions

3 Problem statement
- Hearing problems affect more than 10% of the population
- Digital hearing instruments allow for advanced signal processing, resulting in improved speech understanding
- Major problem: (directional) hearing in background noise
  - reduction of noise w.r.t. the useful speech signal
  - multiple microphones + DSP
  - current systems: simple fixed and adaptive beamforming
  - robustness is important due to the small inter-microphone distance in hearing aids and cochlear implants
- Goal: design of a robust multi-microphone noise reduction scheme

4 State-of-the-art noise reduction
- Single-microphone techniques:
  - spectral subtraction, Kalman filter, subspace-based
  - only temporal and spectral information ⇒ limited performance
- Multi-microphone techniques:
  - exploit spatial information
  - fixed beamforming: fixed directivity pattern
  - adaptive beamforming (e.g. GSC): adapts to different acoustic environments ⇒ improved performance, but sensitive to a-priori assumptions
  - multi-channel Wiener filtering (MWF): MMSE estimate of the speech component in the microphones ⇒ improved robustness
- Goal: a robust scheme encompassing both GSC and MWF

5 Adaptive beamforming: GSC
- Fixed spatial pre-processor:
  - fixed beamformer A(z) creates the speech reference
  - blocking matrix B(z) creates the noise references
- Adaptive noise canceller:
  - standard GSC minimises the output noise power (adaptation during noise-only periods)
[Block diagram: microphones → fixed beamformer A(z) → speech reference; blocking matrix B(z) → noise references → adaptive noise canceller]
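For illustration, a minimal time-domain GSC sketch in Python. It assumes the microphone signals are already time-aligned towards the target; the delay-and-sum beamformer, pairwise-difference blocking matrix, NLMS step size and function name are illustrative choices and not the design used in the presentation:

```python
import numpy as np

def gsc_sketch(mics, mu=0.1, L=32):
    """Minimal time-domain GSC sketch (illustrative, not the paper's design).

    mics : ndarray of shape (N, K) with N time-aligned microphone signals.
    Returns the GSC output signal of length K.
    """
    N, K = mics.shape

    # Fixed beamformer A(z): simple delay-and-sum average -> speech reference
    speech_ref = mics.mean(axis=0)

    # Blocking matrix B(z): pairwise differences cancel the aligned target
    # and yield N-1 noise references
    noise_refs = mics[1:] - mics[:-1]

    # Adaptive noise canceller: one NLMS filter of length L per noise reference
    w = np.zeros((N - 1, L))
    out = np.zeros(K)
    for k in range(L, K):
        frames = noise_refs[:, k - L:k][:, ::-1]   # most recent sample first
        y_hat = np.sum(w * frames)                 # estimated noise in speech reference
        out[k] = speech_ref[k] - y_hat             # GSC output sample
        # NLMS update (in a real system: only during noise-only periods)
        norm = np.sum(frames ** 2) + 1e-8
        w += (mu / norm) * out[k] * frames
    return out

# usage (hypothetical signals m0, m1, m2): out = gsc_sketch(np.vstack([m0, m1, m2]))
```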

6 Robustness against model errors
- Spatial pre-processor and adaptive stage rely on assumptions (e.g. no microphone mismatch, no reverberation, ...)
- In practice, these assumptions are often not satisfied:
  - distortion of the speech component in the speech reference
  - leakage of speech into the noise references (their speech component is no longer zero)
  ⇒ the speech component in the output signal gets distorted
- Design of a robust noise reduction algorithm: limit the distortion both in the spatial pre-processor and in the adaptive stage
  1. Design of a robust spatial pre-processor (fixed beamformer)
  2. Design of a robust adaptive stage
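A compact way to state the leakage problem, in notation assumed here rather than taken from the slides: with microphone signals $\mathbf{u}[k]=\mathbf{u}^s[k]+\mathbf{u}^n[k]$, the $i$-th noise reference produced by the blocking matrix is

$$y_i[k] = \mathbf{b}_i^H \mathbf{u}[k] = \underbrace{\mathbf{b}_i^H \mathbf{u}^s[k]}_{\text{speech leakage}} + \mathbf{b}_i^H \mathbf{u}^n[k].$$

An ideal blocking matrix makes the first term zero; microphone mismatch and reverberation leave it non-zero, so the adaptive noise canceller partly cancels speech and distorts the output.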

7 Robust spatial pre-processor
- Small deviations from the assumed microphone characteristics (gain, phase, position) ⇒ large deviations from the desired directivity pattern, especially for small-size microphone arrays
- In practice, microphone characteristics are never exactly known
- Incorporate specific (random) deviations in the design ⇒ requires a measurement or calibration procedure
- Alternatively, consider all feasible microphone characteristics and optimise
  - average performance, using the probability as weight
    - requires statistical knowledge about the probability density functions
    - cost function J: least-squares, eigenfilter, non-linear
  - worst-case performance ⇒ minimax optimisation problem
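As an illustration of the "average performance" option with a least-squares cost, a Monte Carlo sketch in Python; the simulations in the following slides actually use a non-linear, amplitude-only design procedure, and the frequency band, deviation statistics and design grid below are assumptions:

```python
import numpy as np

def robust_ls_beamformer(pos, L=20, fs=8000, n_draws=20,
                         gain_std_db=2.0, phase_std_deg=5.0, seed=0):
    """Broadband FIR beamformer design with a least-squares cost averaged
    over randomly drawn gain/phase deviations (Monte Carlo sketch).

    pos : N microphone positions along the endfire axis [m]
    Returns w of shape (N, L): one FIR filter per microphone.
    """
    rng = np.random.default_rng(seed)
    c = 340.0                                    # speed of sound [m/s]
    pos = np.asarray(pos, dtype=float)
    N = len(pos)
    angles = list(range(0, 61, 10)) + list(range(80, 181, 10))  # pass/stopband grid [deg]
    freqs = np.arange(300, 4001, 200)                           # design band [Hz] (assumed)

    rows, targets = [], []
    for _ in range(n_draws):
        # one random set of microphone gain/phase deviations per draw
        gains = 10 ** (rng.normal(0.0, gain_std_db, N) / 20)
        phases = np.deg2rad(rng.normal(0.0, phase_std_deg, N))
        for th_deg in angles:
            th = np.deg2rad(th_deg)
            for f in freqs:
                # steering vector incl. position delay and the drawn deviations
                a = gains * np.exp(1j * (2 * np.pi * f * pos * np.cos(th) / c + phases))
                e = np.exp(-2j * np.pi * f * np.arange(L) / fs)  # FIR response basis
                rows.append(np.kron(a, e))                       # length N*L
                targets.append(1.0 if th_deg <= 60 else 0.0)     # passband 1, stopband 0

    G, d = np.array(rows), np.array(targets)
    # real-valued least squares: stack real and imaginary parts
    A = np.vstack([G.real, G.imag])
    b = np.concatenate([d, np.zeros_like(d)])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w.reshape(N, L)
```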

8 Simulations
- N = 3, positions: [ ] m, L = 20, f_s = 8 kHz
- Passband = 0°-60°, [ ] Hz (endfire)
- Stopband = 80°-180°, [ ] Hz
- Robust design, average performance:
  - uniform pdf for gain ( ) and phase (-5° to 10°)
  - deviation = [ ] (gain) and [5° -2° 5°] (phase)
  - non-linear design procedure (only amplitude, no phase)

9 Simulations
[Four directivity-pattern plots (angle in degrees vs. frequency in Hz, magnitude in dB): non-robust design vs. robust design, each without deviations and with gain/phase deviations]

10 Design of robust adaptive stage
- Distorted speech in the output signal (speech component of the output)
- Robustness: limit the distortion by controlling the adaptive filter
  - quadratic inequality constraint (QIC-GSC): conservative approach, the constraint is not a function of the actual amount of leakage
  - take speech distortion into account in the optimisation criterion (SDW-MWF):
    - 1/μ trades off noise reduction and speech distortion
    - regularisation term ~ amount of speech leakage
- Goal: limit speech distortion while not affecting the noise reduction performance in case of no model errors, in contrast to the QIC (see the reconstruction below)
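In the notation commonly used for these schemes (the symbols below are assumed, not copied from the slides), the two options can be written as follows. The QIC-GSC keeps the GSC criterion and bounds the filter norm,

$$\min_{\mathbf{w}}\; E\{|y_0^n[k-\Delta]-\mathbf{w}^H\mathbf{y}^n[k]|^2\}\quad\text{s.t.}\quad \mathbf{w}^H\mathbf{w}\le\beta^2,$$

while the SDR-GSC/SDW-MWF adds a speech-distortion regularisation term,

$$\min_{\mathbf{w}}\; \underbrace{E\{|y_0^n[k-\Delta]-\mathbf{w}^H\mathbf{y}^n[k]|^2\}}_{\text{noise reduction}} \;+\; \frac{1}{\mu}\,\underbrace{E\{|\mathbf{w}^H\mathbf{y}^s[k]|^2\}}_{\text{speech distortion}},$$

with $\mathbf{y}^s$, $\mathbf{y}^n$ the speech and noise components of the (pre-processed) inputs and $y_0$ the speech reference. Setting $1/\mu = 0$ gives the standard GSC; the fixed bound $\beta$ does not depend on the actual speech leakage, which is what makes the QIC conservative.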

11 Wiener solution
- Optimisation criterion: SDW-MWF cost function (see the reconstruction below)
- Problem: the clean speech, and hence the speech correlation matrix, is unknown!
- Approximation: estimate it from speech-and-noise periods and noise-only periods
  ⇒ VAD (voice activity detection) mechanism required!
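Written out in standard MWF notation (a sketch; the slide's own symbols are not available), the closed-form solution of the criterion above and the correlation-matrix approximation that the VAD enables are

$$\mathbf{w} = \Big(\mathbf{R}_n + \tfrac{1}{\mu}\,\mathbf{R}_s\Big)^{-1}\mathbf{r}_n,\qquad
\mathbf{R}_n = E\{\mathbf{y}^n\mathbf{y}^{n,H}\},\;\;
\mathbf{r}_n = E\{\mathbf{y}^n\, y_0^{n*}[k-\Delta]\},\;\;
\mathbf{R}_s = E\{\mathbf{y}^s\mathbf{y}^{s,H}\}.$$

Since the clean speech is unobservable, $\mathbf{R}_s$ is approximated (assuming speech and noise are uncorrelated and the noise is sufficiently stationary) by

$$\mathbf{R}_s \;\approx\; E\{\mathbf{y}\mathbf{y}^{H}\}\big|_{\text{speech-and-noise periods}} \;-\; E\{\mathbf{y}\mathbf{y}^{H}\}\big|_{\text{noise-only periods}},$$

with the two averages selected by the VAD.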

12 Spatially-preprocessed SDW-MWF
[Block diagram: fixed beamformer A(z) → speech reference; blocking matrix B(z) → noise references; multi-channel Wiener filter (SDW-MWF) applied to these references]
- Generalised scheme, encompasses both GSC and SDW-MWF:
  - without the filter w_0 on the speech reference ⇒ speech-distortion-regularised GSC (SDR-GSC)
    - special case: 1/μ = 0 corresponds to the traditional GSC
  - with the filter w_0 ⇒ SDW-MWF on the pre-processed microphone signals
    - model errors do not affect its performance!
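With the (assumed) notation used above, the output of the SP-SDW-MWF can be written as

$$z[k] \;=\; y_0[k-\Delta] \;-\; \mathbf{w}^H\mathbf{y}[k],$$

where $y_0$ is the speech reference, $\mathbf{y}$ stacks the noise references (and, when $w_0$ is used, a delayed speech reference as well), and $\mathbf{w}$ follows from the SDW-MWF criterion of the previous slides. Taking $1/\mu = 0$ and dropping $w_0$ recovers the standard GSC.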

13 Low-cost implementation
- Stochastic gradient algorithm in the time domain:
  - the cost function results in an LMS-based updating formula (classical GSC update + regularisation term)
  - approximation of the regularisation term in the time domain using data buffers
  - allows a transition to the classical LMS-based GSC by tuning some parameters (1/μ, w_0)
- Complexity reduction in the frequency domain:
  - block-based implementation: fast convolution and correlation
  - approximation of the regularisation term in the frequency domain allows replacing the data buffers by correlation matrices
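A much-simplified, real-valued sketch of the stochastic-gradient idea: one LMS-style term for the residual noise plus a buffer-based approximation of the regularisation term. The class name, buffer handling, sampling of the buffers and step sizes are illustrative assumptions, not the algorithm as implemented in the paper:

```python
import numpy as np
from collections import deque

class SPSDWMWFSketch:
    """Simplified stochastic-gradient sketch of an SDW-MWF-style update."""

    def __init__(self, dim, mu_inv=0.5, rho=0.05, buf_len=2000):
        self.w = np.zeros(dim)
        self.mu_inv = mu_inv                      # 1/mu: distortion weighting
        self.rho = rho                            # LMS step size
        self.speech_buf = deque(maxlen=buf_len)   # input vectors, speech+noise periods
        self.noise_buf = deque(maxlen=buf_len)    # input vectors, noise-only periods

    def update(self, y, y0_del, vad_speech):
        """y: stacked input vector, y0_del: delayed speech-reference sample."""
        z = y0_del - self.w @ y                   # current output sample
        if vad_speech:
            self.speech_buf.append(y)             # only store data during speech
            return z
        self.noise_buf.append(y)
        # classical (N)LMS term: minimise residual noise power
        grad = -z * y / (y @ y + 1e-8)
        # regularisation term ~ (1/mu) * R_s * w, with R_s approximated by the
        # difference of outer products drawn from the two data buffers
        if self.speech_buf and self.noise_buf:
            ys = self.speech_buf[np.random.randint(len(self.speech_buf))]
            yn = self.noise_buf[np.random.randint(len(self.noise_buf))]
            grad += self.mu_inv * ((ys @ self.w) * ys - (yn @ self.w) * yn)
        self.w -= self.rho * grad
        return z
```

Setting mu_inv = 0 (and omitting the speech-reference tap) reduces the update to a classical LMS-based GSC, which is the "transition" mentioned above.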

14 Complexity + memory
- Parameters: N = 3 (number of mics), M = 2 (a), M = 3 (b), L = 32, f_s = 16 kHz, L_y = [ ]
- Computational complexity:
  Algorithm        | Complexity (MAC)           | MIPS
  QIC-GSC          | (3N-1) FFT + 16N ...       | [ ]
  SDW-MWF (buffer) | (3M+5) FFT + 30M ...       | [ ] (a), 4.27 (b)
  SDW-MWF (matrix) | (3M+2) FFT + 10M ... M     | [ ] (a), 4.31 (b)
- Memory requirement:
  Algorithm        | Memory                     | kWords
  QIC-GSC          | 4(N-1)L + 6L               | 0.45
  SDW-MWF (buffer) | 2ML_y + 6LM + 7L           | 40.61 (a), [ ] (b)
  SDW-MWF (matrix) | 4LM² + 6LM + 7L            | 1.12 (a), 1.95 (b)
- Complexity comparable to the FD implementation of the QIC-GSC
- Substantial memory reduction through the FD approximation

15 Experimental validation (1)
- Set-up:
  - 3-mic BTE on a dummy head (d = 1 cm, 1.5 cm)
  - speech source in front of the dummy head (0°)
  - 5 speech-like noise sources: 75°, 120°, 180°, 240°, 285°
  - microphone gain mismatch at the 2nd microphone
- Performance measures:
  - intelligibility-weighted signal-to-noise ratio
    - I_i = band importance of the i-th one-third octave band
    - SNR_i = signal-to-noise ratio in the i-th one-third octave band
  - intelligibility-weighted spectral distortion
    - SD_i = average spectral distortion in the i-th one-third octave band (power transfer function for the speech component)
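The weighted measures referred to here are presumably of the standard band-importance form (a reconstruction from the definitions above, not copied from the slide):

$$\mathrm{SNR}_{\text{intellig}} = \sum_i I_i\,\mathrm{SNR}_i, \qquad \mathrm{SD}_{\text{intellig}} = \sum_i I_i\,\mathrm{SD}_i,$$

with $I_i$ the band-importance weight of the $i$-th one-third octave band and $\mathrm{SNR}_i$, $\mathrm{SD}_i$ the per-band signal-to-noise ratio and average spectral distortion.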

16 Experimental validation (2)
- SDR-GSC:
  - GSC (1/μ = 0): degraded performance in case of significant leakage
  - 1/μ > 0 increases robustness (trading off speech distortion against noise reduction)
- SP-SDW-MWF:
  - no mismatch: the same noise reduction, somewhat larger distortion due to the post-filter w_0
  - performance is not degraded by the mismatch

17 Experimental validation (3)
- GSC with QIC ( ): the QIC increases the robustness of the GSC
- For large mismatch: less noise reduction than the SP-SDW-MWF
- The QIC is not a function of the actual amount of speech leakage ⇒ less noise reduction than the SDR-GSC for small mismatch
⇒ SP-SDW-MWF achieves better noise reduction than QIC-GSC, for a given maximum speech distortion level

18 Audio demonstration
Audio clips, without deviation and with a 4 dB gain deviation, for:
- noisy microphone signal
- speech reference
- noise reference
- output GSC
- output SDR-GSC
- output SP-SDW-MWF

19 Conclusions
- Design of a robust multi-microphone noise reduction algorithm:
  - robust fixed spatial pre-processor ⇒ need for statistical information about the microphones
  - robust adaptive stage ⇒ take speech distortion into account in the cost function
- SP-SDW-MWF (spatially pre-processed speech-distortion-weighted multi-channel Wiener filter) encompasses GSC and MWF as special cases
- Experimental results:
  - SP-SDW-MWF achieves better noise reduction than QIC-GSC, for a given maximum speech distortion level
  - the filter w_0 improves performance in the presence of model errors
- Implementation: stochastic gradient algorithms available at affordable complexity and memory