1 Applications in Signal Recovery. Miguel Argáez, Carlos A. Quintero. Computational Science Program, El Paso, Texas, USA. April 16, 2009
2 Abstract Recent theoretical developments have generated a great deal of interest in sparse signal representation. A full-rank matrix generates an underdetermined system of linear equations, and our purpose is to find the sparsest solution, i.e., the one with the fewest nonzero entries. Finding sparse representations ultimately requires solving for the sparsest solution of an underdetermined system of linear equations. Recent work has shown that, under some conditions, the minimum l1-norm solution to an underdetermined linear system is also the sparsest solution to that system.
3 Objective We develop an algorithm that uses a fixed-point method to solve the l1 minimization problem, and we solve the linear system associated with each step using a conjugate gradient method. Our main purpose in this work is to show that the algorithm works efficiently in recovering the reflection coefficients from seismic data (the seismic reflection problem) and in separating two speakers in a single-channel recording (the audio separation problem).
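For concreteness, here is a minimal Python sketch of one possible fixed-point scheme of this kind: an IRLS-style iteration for an l1-penalized least-squares formulation in which each inner linear system is solved with conjugate gradient. The function name, penalty parameter lam, and smoothing constant eps are illustrative assumptions; this is a sketch of the general idea, not the authors' exact algorithm.

```python
# Sketch (assumed formulation): min_x 0.5*||A x - y||_2^2 + lam*||x||_1,
# solved by a fixed-point (IRLS-style) iteration whose inner linear systems
# are handled by conjugate gradient. Illustrative only.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def l1_fixed_point(A, y, lam=0.1, iters=30, eps=1e-6):
    m, n = A.shape
    x = np.zeros(n)
    Aty = A.T @ y
    for _ in range(iters):
        # Smoothing |x_i| turns the l1 term into a weighted quadratic,
        # giving the linear system (A^T A + diag(w)) x = A^T y.
        w = lam / (np.abs(x) + eps)
        op = LinearOperator((n, n), matvec=lambda v, w=w: A.T @ (A @ v) + w * v)
        x, _ = cg(op, Aty, x0=x, maxiter=200)
    return x
```

As a quick check, with a random A and a sparse ground truth x_true, calling l1_fixed_point(A, A @ x_true, lam=0.05) typically recovers the support of x_true when m is large enough relative to the number of spikes; the value of lam is problem dependent.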
4 Seismic Reflection Seismic reflection is a method of exploration geophysics used to estimate the properties of the Earth's subsurface from reflected seismic waves. The method requires a controlled seismic source of energy (such as dynamite or vibrators). Using the time it takes for a reflection to arrive at a receiver, it is possible to estimate the depth of the feature that generated the reflection. The reflected signal is detected at the surface using an array of high-frequency geophones.
5 Seismic Reflection
6 How can we obtain the reflectivity function from the recorded signal? Problems: The seismic trace is the result of a convolution of the input pulse and the reflectivity function. The recorded signal has noise.
7 Sparse-spike deconvolution We can express the recorded signal as y = w * x + ε, where * denotes the convolution between the signals, x represents the reflectivity function we want to recover, the convolution kernel w is a "wavelet" that depends on the pressure wave sent into the underground, and ε is noise that has entered the recorded signal.
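As a small illustration of this model (not data from the talk), one can generate a synthetic trace by convolving a sparse reflectivity with an assumed Ricker wavelet and adding noise; the spike positions, wavelet frequency, and noise level below are arbitrary choices.

```python
# Synthetic illustration of y = w * x + eps with an assumed Ricker wavelet.
import numpy as np

n = 512
x = np.zeros(n)                                  # sparse reflectivity (a few spikes)
x[[60, 180, 300, 430]] = [1.0, -0.7, 0.5, -0.3]

t = np.linspace(-0.1, 0.1, 81)                   # wavelet time axis (s)
f = 25.0                                         # assumed dominant frequency (Hz)
w = (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

rng = np.random.default_rng(0)
y = np.convolve(x, w, mode="same") + 0.01 * rng.standard_normal(n)  # recorded trace
```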
8 Claerbout and Muir [3] proposed in 1973 to use l1 minimization to recover x from the recorded signal y. Santosa and Symes (Rice University) [4] implemented this idea in 1986 with a relaxed l1 minimization. The resulting sparse-spike deconvolution algorithm defines the solution as the minimizer of an l1-penalized least-squares misfit between y and w * x.
9 Sparse-spike deconvolution To invert the convolution equation (1), we can model the reflectivity x as a sum of Diracs, that is, x(z) = Σ_i x_i δ(z − z_i), where each Dirac is located at a depth z_i. Using (3), we can express y as a function of the depth z: y(z) = Σ_i x_i w(z − z_i) + ε(z).
10 Sparse-spike deconvolution We now introduce a dictionary Φ constructed by translating the wavelet to all locations; this is a matrix whose column vectors are φ_i(z) = w(z − z_i). We then solve the deconvolution through the equivalent problem of recovering a sparse x from y ≈ Φx, i.e., the relaxed l1 minimization above with Φ as the system matrix.
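A simple way to realize this dictionary numerically, under the uniform sampling assumed in the earlier synthetic sketch, is to place a shifted copy of the wavelet in every column; the helper below is illustrative, not code from the talk.

```python
# Build Phi whose i-th column is the wavelet w translated to sample i, so that
# Phi @ x approximates the convolution w * x (columns are truncated at the
# borders). Illustrative helper; assumes uniform depth sampling.
import numpy as np

def build_dictionary(w, n):
    half = len(w) // 2
    Phi = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i - half + len(w))
        Phi[lo:hi, i] = w[lo - (i - half):hi - (i - half)]
    return Phi
```

With Phi in hand, the deconvolution reduces to the l1 problem sketched earlier, e.g. x_rec = l1_fixed_point(Phi, y, lam=0.05) under those same assumptions.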
11 Numerical Experimentation Seismic Reflection. Sparco problem (903): m = n =
12 The Speech Separation Problem Separate a single-channel mixture of speech from known speakers.
13 Non-negative Sparse Coding We assume an additive mixing model, so the signal can be represented as y ≈ Ax, where A and x are non-negative and x is sparse. Dictionary A: a source-dependent (over-complete) basis, learned from data. Sparse code x: the time and amplitude for each dictionary element. Sparseness: only a few dictionary elements are active simultaneously.
14 Non-negative Sparse Coding 1. Learn a dictionary for each source. 2. Compute the sparse code x of the mixture. 3. Reconstruct each source separately.
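The following Python sketch shows the shape of such a pipeline under simple assumptions: y is one non-negative magnitude-spectrogram frame of the mixture, A1 and A2 are dictionaries already learned for each speaker, and the sparse code is obtained with standard multiplicative updates for an l1-penalized non-negative least-squares objective. The function names and parameters are assumptions, not the authors' implementation.

```python
# Sketch of single-channel separation with non-negative sparse coding.
# y: non-negative mixture frame; A1, A2: per-speaker dictionaries (assumed
# already learned, e.g. with sparse NMF on training data).
import numpy as np

def nn_sparse_code(A, y, lam=0.1, iters=200):
    # Multiplicative updates for min_x 0.5*||y - A x||^2 + lam*sum(x), x >= 0.
    x = np.full(A.shape[1], 1e-3)
    for _ in range(iters):
        x *= (A.T @ y) / (A.T @ (A @ x) + lam + 1e-12)
    return x

def separate(y, A1, A2, lam=0.1):
    A = np.hstack([A1, A2])            # 1. joint dictionary of both sources
    x = nn_sparse_code(A, y, lam)      # 2. sparse code of the mixture
    k = A1.shape[1]
    return A1 @ x[:k], A2 @ x[k:]      # 3. reconstruct each source separately
```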
15 Numerical Experimentation The Speech Separation Problem. Sparco problem (401): m = 29166, n =
16 References and Acknowledgements 1) D. R. Velis. Stochastic sparse-spike deconvolution. 2) V. Kumar (EOS-UBC) and F. J. Herrmann. Deconvolution with curvelet-domain sparsity. 3) J. F. Claerbout and F. Muir. Robust modeling with erratic data. Geophysics, 38. 4) F. Santosa and W. W. Symes. Linear inversion of band-limited reflection seismograms. SIAM J. Sci. Statist. Comput. 5) J. Eggert and E. Körner. Sparse coding and NMF. Proceedings of Neural Networks. The authors gratefully acknowledge financial support from: ARL Grant No. W911NF, the Computational Science Program, and NSF CyberShARE grant No. NSF HRD (partial support). We also acknowledge the office space provided by the Department of Mathematical Sciences.