Sparsity-Aware Adaptive Algorithms Based on Alternating Optimization and Shrinkage
Rodrigo C. de Lamare*+ and Raimundo Sampaio-Neto*
+ Communications Research Group, Department of Electronics, University of York, U.K.
* CETUC, PUC-Rio, Brazil

Outline
Introduction
Problem statement and the oracle algorithm
Proposed alternating optimization with shrinkage scheme
– Adaptive algorithms
– Computational complexity
Statistical analysis
Simulation results
Conclusions

Introduction
Algorithms that can exploit sparsity have attracted considerable interest in recent years [1], [2]. The basic idea is to exploit prior knowledge about the sparsity present in the data to be processed in several applications.
Key problem: existing methods often exhibit performance degradation compared to the oracle algorithm.
We propose a novel sparsity-aware adaptive filtering scheme and algorithms based on an alternating optimization strategy with shrinkage.
[1] Y. Gu, J. Jin, and S. Mei, “l0 norm constraint LMS algorithm for sparse system identification,” IEEE Signal Processing Letters, vol. 16, pp. 774–777, 2009.
[2] Y. Chen, Y. Gu, and A. O. Hero, “Sparse LMS for system identification,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, Apr. 2009, pp. 3125–3128.

Problem statement
Sparse system identification: given an input vector x[i] and a reference signal d[i] = w_o^H x[i] + n[i], estimate the M × 1 parameter vector w_o, of which only K ≪ M coefficients are non-zero.
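As a concrete illustration of this setup, the sketch below generates data for the model above with the complex Gaussian input and noise used later in the simulations. The function name and defaults are ours, not the paper's.

```python
import numpy as np

def sparse_system(M=16, K=2, snr_db=40, n_samples=1000, seed=None):
    """Generate sparse system identification data:
    d[i] = w_o^H x[i] + n[i], with only K of the M coefficients non-zero."""
    rng = np.random.default_rng(seed)
    w_o = np.zeros(M, dtype=complex)
    support = rng.choice(M, size=K, replace=False)   # random non-zero positions
    w_o[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
    # i.i.d. complex Gaussian input vectors with unit variance (sigma_x^2 = 1)
    x = (rng.standard_normal((n_samples, M))
         + 1j * rng.standard_normal((n_samples, M))) / np.sqrt(2)
    sigma_n = 10.0 ** (-snr_db / 20)                 # SNR = sigma_x^2 / sigma_n^2
    n = sigma_n * (rng.standard_normal(n_samples)
                   + 1j * rng.standard_normal(n_samples)) / np.sqrt(2)
    d = x @ w_o.conj() + n                           # d[i] = w_o^H x[i] + n[i]
    return x, d, w_o, support
```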

Oracle algorithm
It can identify the positions of the non-zero coefficients and fully exploit the sparsity of the system under consideration.
Optimization problem:
    {P_o, w_o} = arg min over P, w of E[ |d[i] − w^H P x[i]|² ],
where P is an M × M diagonal matrix with ones at the actual K positions of the non-zero coefficients and zeros elsewhere.
It requires an exhaustive search over all C(M, K) possible placements of the K non-zero positions among the M coefficients, plus the computation of the optimal filter for each placement.
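A minimal brute-force sketch of the oracle, assuming the optimal filter for each candidate support is obtained by least squares; this is for illustration only and is not the authors' implementation.

```python
from itertools import combinations
import numpy as np

def oracle_filter(x, d, K):
    """Brute-force oracle: for every K-subset of tap positions, fit a
    least-squares filter on those taps and keep the best support."""
    M = x.shape[1]
    best_mse, best_support, best_coef = np.inf, None, None
    for support in combinations(range(M), K):
        cols = x[:, list(support)]                     # inputs restricted to this support
        a, *_ = np.linalg.lstsq(cols, d, rcond=None)   # filter taps on the support
        mse = np.mean(np.abs(d - cols @ a) ** 2)
        if mse < best_mse:
            best_mse, best_support, best_coef = mse, support, a
    return best_support, best_coef, best_mse
```

The search visits all C(M, K) supports, which is exactly why the oracle serves as a benchmark rather than a practical algorithm.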

Proposed alternating optimization with shrinkage scheme
Two adaptive filters that are optimized in an alternating fashion:
– The 1st adaptive filter p[i] performs the role of the oracle.
– The 2nd adaptive filter w[i] estimates the non-zero parameters.
Output of the proposed scheme:
    y[i] = w^H[i] P[i] x[i],   where P[i] = diag(p[i]).

Adaptive algorithms
Minimization of the cost function given by
    C(w[i], p[i]) = E[ |d[i] − w^H[i] P[i] x[i]|² ] + τ f(w[i]) + λ f(p[i]),
where τ f(w[i]) and λ f(p[i]) are the regularization terms and f(·) is a shrinkage function.
Sparsity-aware LMS (SA-ALT-LMS) algorithm with alternating optimization:
    w[i+1] = w[i] + μ e*[i] (p[i] ⊙ x[i]) − μ τ ∂f(w[i]),
    p[i+1] = p[i] + η e[i] (w[i] ⊙ x*[i]) − η λ ∂f(p[i]),
where e[i] = d[i] − w^H[i] P[i] x[i] is the error and μ and η are the step sizes.
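A minimal sketch of the resulting recursion, assuming the reconstructed updates above and the l1-norm shrinkage; the variable names, the 0-safe complex sign, and the exact placement of the shrinkage terms are our assumptions, not the authors' code.

```python
import numpy as np

def csign(v, eps=1e-12):
    """Complex sign v/|v| (0-safe): subgradient of the l1 norm."""
    return v / np.maximum(np.abs(v), eps)

def sa_alt_lms(x, d, mu=0.015, eta=0.012, tau=0.02, lam=0.02):
    """Alternating LMS updates of w[i] and p[i] with l1 shrinkage on both."""
    n_samples, M = x.shape
    w = np.zeros(M, dtype=complex)       # w[0] = 0 (slide initialization)
    p = np.ones(M, dtype=complex)        # p[0] = 1
    mse = np.empty(n_samples)
    for i in range(n_samples):
        xi = x[i]
        e = d[i] - np.vdot(w, p * xi)    # e[i] = d[i] - w^H[i] P[i] x[i]
        w = w + mu * np.conj(e) * (p * xi) - mu * tau * csign(w)
        p = p + eta * e * (w * np.conj(xi)) - eta * lam * csign(p)
        mse[i] = np.abs(e) ** 2
    return w, p, mse
```

With the sparse_system sketch above, x, d, w_o, _ = sparse_system() and w, p, mse = sa_alt_lms(x, d); under this model the effective estimate of w_o is the element-wise product w ⊙ p*.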

Computational complexity and shrinkage functions
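The slide's complexity table is not recoverable from the transcript. As an illustration, the sketch below writes out gradients of the three shrinkage penalties used in the simulations (l1-norm, log-sum penalty, and an exponential l0 approximation), in the forms commonly used in [1], [2], [5], [8]; the exact expressions in the paper may differ.

```python
import numpy as np

def grad_l1(v, eps=1e-12):
    """l1 norm: f(v) = sum |v_m|, gradient sign(v_m)."""
    return v / np.maximum(np.abs(v), eps)

def grad_log_sum(v, eps=10.0):
    """Log-sum penalty: f(v) = sum log(1 + |v_m|/eps),
    gradient sign(v_m) / (eps + |v_m|) (reweighted-l1 style [8])."""
    return grad_l1(v) / (eps + np.abs(v))

def grad_l0_approx(v, beta=10.0):
    """Exponential approximation of the l0 norm [1]:
    f(v) ~ sum (1 - exp(-beta |v_m|)), gradient beta sign(v_m) exp(-beta |v_m|)."""
    return beta * grad_l1(v) * np.exp(-beta * np.abs(v))
```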

Statistical analysis (1/3)
With w_o as the optimal filter and p_o as the oracle vector, the error vectors are
    e_w[i] = w[i] − w_o and e_p[i] = p[i] − p_o.
Error signal:
    e[i] = e_o[i] − e_w^H[i] P_o x[i] − w_o^H E_p[i] x[i] − e_w^H[i] E_p[i] x[i],
where e_o[i] = d[i] − w_o^H P_o x[i] is the error signal of the optimal filter and the oracle, and E_p[i] = diag(e_p[i]).
The MSE can then be written as MSE = E[ |e[i]|² ], which expands into terms involving the second-order statistics of e_w[i] and e_p[i].

Statistical analysis (2/3)
The expectations of the scalar quantities that are functions of triple vector products can be rewritten so that the MSE is expressed in terms of the Hadamard product ⊙ and the second-order statistics of e_w[i] and e_p[i].
In what follows we proceed with simplifications that are valid for uncorrelated input signals.

Statistical analysis (3/3)
After some simplifications, the MSE can be written in closed form in terms of w_{n,o} and p_{n,o}, the elements of w_o and p_o, respectively, and of the diagonal elements of the error covariance matrices of w[i] and p[i].

Simulation Results (1/3)
We compare our proposed SA-ALT-LMS algorithms with:
– the LMS,
– the SA-LMS using the following shrinkage functions: the l1-norm [2], the log-sum penalty [2], [5], [8], and the l0-norm [1], [6].
The input x[i] and the noise n[i] vectors are drawn from independent and identically distributed complex Gaussian random variables with zero mean and variances σx² and σn², respectively.
The signal-to-noise ratio (SNR) is given by SNR = σx²/σn².
The filters are initialized as p[0] = 1 and w[0] = 0.
[5] E. M. Eksioglu, “Sparsity regularized RLS adaptive filtering,” IET Signal Process., vol. 5, no. 5, pp. 480–487, Aug. 2011.
[6] E. M. Eksioglu and A. L. Tanc, “RLS algorithm with convex regularization,” IEEE Signal Processing Letters, vol. 18, no. 8, pp. 470–473, Aug. 2011.
[8] E. J. Candes, M. Wakin, and S. Boyd, “Enhancing sparsity by reweighted l1 minimization,” Journal of Fourier Analysis and Applications, 2008.

Simulation Results (2/3)
Time-invariant system with K = 2 non-zero out of N = 16 coefficients.
Correlated input obtained by x_c[i] = 0.8 x_c[i − 1] + x[i], then normalized.
After 1000 iterations, the system is suddenly changed to a system with N = 16 coefficients and K = 4 non-zero coefficients.
The positions of the non-zero coefficients are chosen randomly for each independent simulation trial.
The curves are averaged over 200 independent trials and the parameters are optimized.
Parameters: SNR = 40 dB, σx² = 1, μ = 0.015, η = 0.012, τ = 0.02, λ = 0.02, ϵ = 10, and β = 10.
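A sketch of this input model, assuming an AR(1) recursion followed by variance normalization; details such as the normalization convention and the delay-line arrangement are our assumptions.

```python
import numpy as np

def correlated_input(n_samples, M, rho=0.8, seed=None):
    """Correlated input x_c[i] = rho * x_c[i-1] + x[i], normalized to unit
    variance and arranged as M-tap delay-line vectors."""
    rng = np.random.default_rng(seed)
    white = (rng.standard_normal(n_samples + M)
             + 1j * rng.standard_normal(n_samples + M)) / np.sqrt(2)
    xc = np.zeros_like(white)
    for i in range(1, len(white)):
        xc[i] = rho * xc[i - 1] + white[i]
    xc /= np.sqrt(np.mean(np.abs(xc) ** 2))          # normalization step
    # row i holds the delay line [x_c[i+M-1], ..., x_c[i]]
    return np.array([xc[i:i + M][::-1] for i in range(n_samples)])
```

To mimic the sudden change, one could run sa_alt_lms on the first 1000 samples with a K = 2 system and then redraw w_o with K = 4 for the remaining samples.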

Simulation Results (3/3)
Analytical expressions vs. simulations, using the approximations adopted in the statistical analysis for uncorrelated inputs.
Time-invariant system with N = 32 randomly generated coefficients, of which only K = 4 are non-zero. The positions of the non-zero coefficients are random for each simulation trial.
The curves are averaged over 200 independent trials and the algorithms operate for 1000 iterations to ensure their convergence.
MSE performance against step size for μ = η.
Parameters: SNR = 30 dB, σx² = 1, τ = 0.02, λ = 0.02, ϵ = 10, and β = 10.

Conclusions
We have proposed a novel sparsity-aware adaptive filtering scheme and algorithms based on an alternating optimization strategy with shrinkage.
We have devised alternating optimization LMS algorithms, termed SA-ALT-LMS, for the proposed scheme.
We have also developed an MSE statistical analysis, which resulted in analytical formulas that can predict the performance of the SA-ALT-LMS algorithms.
Simulations for system identification have shown that the proposed scheme and SA-ALT-LMS algorithms outperform existing methods.

References
[1] Y. Gu, J. Jin, and S. Mei, “l0 norm constraint LMS algorithm for sparse system identification,” IEEE Signal Process. Lett., vol. 16, pp. 774–777, 2009.
[2] Y. Chen, Y. Gu, and A. O. Hero, “Sparse LMS for system identification,” in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, Apr. 2009, pp. 3125–3128.
[3] B. Babadi, N. Kalouptsidis, and V. Tarokh, “SPARLS: The sparse RLS algorithm,” IEEE Trans. Signal Process., vol. 58, no. 8, pp. 4013–4025, 2010.
[4] D. Angelosante, J. A. Bazerque, and G. B. Giannakis, “Online adaptive estimation of sparse signals: Where RLS meets the l1-norm,” IEEE Trans. Signal Process., vol. 58, no. 7, pp. 3436–3447, 2010.
[5] E. M. Eksioglu, “Sparsity regularized RLS adaptive filtering,” IET Signal Process., vol. 5, no. 5, pp. 480–487, Aug. 2011.
[6] E. M. Eksioglu and A. L. Tanc, “RLS algorithm with convex regularization,” IEEE Signal Process. Lett., vol. 18, no. 8, pp. 470–473, Aug. 2011.
[7] N. Kalouptsidis, G. Mileounis, B. Babadi, and V. Tarokh, “Adaptive algorithms for sparse system identification,” Signal Process., vol. 91, no. 8, pp. 1910–1919, Aug. 2011.
[8] E. J. Candes, M. Wakin, and S. Boyd, “Enhancing sparsity by reweighted l1 minimization,” J. Fourier Anal. Applicat., 2008.
[9] R. C. de Lamare and R. Sampaio-Neto, “Adaptive reduced-rank MMSE filtering with interpolated FIR filters and adaptive interpolators,” IEEE Signal Process. Lett., vol. 12, no. 3, Mar. 2005.
[10] R. C. de Lamare and R. Sampaio-Neto, “Adaptive reduced-rank processing based on joint and iterative interpolation, decimation, and filtering,” IEEE Trans. Signal Process., vol. 57, no. 7, pp. 2503–2514, Jul. 2009.
[11] S. Haykin, Adaptive Filter Theory, 4th ed. Upper Saddle River, NJ, USA: Prentice-Hall, 2002.