Quantization and compressed sensing Dmitri Minkin

2 Contents
Sigma-Delta Quantization for Compressed Sensing
Quantization of Sparse Representations
Petros Boufounos, Richard G. Baraniuk

3 Sigma-Delta Quantization: Introduction
The goal: a hardware implementation of a compressed sensing A/D converter
Explore: a Sigma-Delta quantizer with a random dictionary

4 Traditional A/D
Difficult due to tight specifications for:
Anti-aliasing LPF (analog)
Precision quantizer (digital)

5 Traditional Sigma-Delta A/D
Pros: coarse quantizer
Cons: C/D at a high oversampling rate; LPF/downsampling in the digital domain
Trade-off: quantization accuracy vs. sampling rate

6 Standard CS A/D
Random projections (analog): long switched-capacitor filter
Precision quantizer (digital): hard to implement

7 Proposed Sigma-Delta CS A/D
Advantages:
Coarse quantizer (analog) => short filter
Random projections => LPF/downsampling in the digital domain
Same performance as a fine quantizer at the Nyquist rate

8 Proposed Sigma-Delta Quantizer Architecture (block diagram: Sigma-Delta quantizer followed by random projections)

9 Compressed Sensing Overview
Signal in the sampling basis b_n: x = Σ_n x_n b_n
Signal is K-sparse in the basis {s_k}: x = Σ_k a_k s_k with at most K nonzero coefficients a_k
Signal is K-compressible if it is well approximated by K coefficients

10 Compressed Sensing Overview
Sampling with measurement vectors {u_k}: y_k = Σ_n x_n u_{k,n}, where x_n and u_{k,n} are the coefficients in the b_n basis
Synthesis of y from the dictionary f_n: y = Σ_n x_n f_n, where f_n collects the n-th coefficients of the measurement vectors
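As a concrete illustration of slides 9-10, the following minimal Python/numpy sketch builds a K-sparse signal, draws Gaussian measurement vectors u_k, and forms the measurements y = Σ_n x_n f_n = U x. The sizes N, M, K and the Gaussian choice are illustrative assumptions, not taken from the slides.

import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                 # signal length, measurements, sparsity (illustrative)

# K-sparse signal: here the sparsity basis {s_k} is taken to be the canonical basis
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Rows of U are the measurement vectors u_k; the columns f_n = U[:, n] form the
# dictionary (frame), so y = sum_n x_n f_n = U @ x
U = rng.standard_normal((M, N)) / np.sqrt(M)
y = U @ x                            # M compressive measurements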

11 CS Overview: the RIP Property
y is sufficient to recover x if the dictionary f satisfies the RIP of order K with constant δ_K:
(1 - δ_K) ||x||² ≤ ||Σ_n x_n f_n||² ≤ (1 + δ_K) ||x||² for every K-sparse x
Remark: a robust, efficient solution requires RIP of order 2K or stricter
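The RIP bound can be probed numerically. The sketch below is a Monte-Carlo illustration under assumed sizes, not the paper's method: it draws random K-sparse vectors and records how far ||Fx||² / ||x||² strays from 1, which only lower-bounds the true δ_K.

import numpy as np

rng = np.random.default_rng(5)
N, M, K, trials = 256, 96, 8, 2000   # illustrative sizes
F = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(trials):
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.sum((F @ x) ** 2) / np.sum(x ** 2))

# The true delta_K is a supremum over all K-sparse x; sampling only gives a lower estimate
delta_lower = max(1 - min(ratios), max(ratios) - 1)
print(delta_lower)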

12 Back to Sigma-Delta: Noise
Iteratively obtain x_n
Non-linear quantization produces an error
Subsequent coefficients are updated: p coefficients at each time instance n (memory of length p, the quantizer order)
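The loop structure described here can be sketched as follows. This is only a structural skeleton under simplifying assumptions (a scalar rounding quantizer, fixed feedback coefficients); in the paper the coefficients c_{n,n+i} are chosen per sample from the random dictionary.

import numpy as np

def feedback_quantize(x, p, c, q_max=2):
    """Coarsely quantize x; feed the error of sample n onto the next p samples.
    c has shape (p,): c[i] scales the error added to sample n+1+i."""
    x = x.astype(float)
    q = np.zeros_like(x)
    for n in range(len(x)):
        q[n] = np.clip(np.round(x[n]), -q_max, q_max)    # coarse quantizer
        e = x[n] - q[n]                                   # quantization error e_n
        m = min(p, len(x) - n - 1)
        x[n + 1:n + 1 + m] += c[:m] * e                   # update the next p coefficients
    return q

rng = np.random.default_rng(1)
q = feedback_quantize(rng.standard_normal(64), p=2, c=np.array([1.0, 0.0]))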

13 Proposed Sigma-Delta Quantizer Architecture

14 Design Problem Definition
Total quantization error ε:
Select the feedback coefficients c_{n,n+i} to reduce the total error ε
Subject to: hardware and stability constraints

15 Error Models
White random error with variance σ_e²: minimize the total error power
Upper bound on the total error from eq. (5)
At each step, select the coefficients to minimize the incremental error e_n

16 Optimization Problem
Minimize the residual: in effect, find the projection of f_n onto Span{f_{n+1}, ..., f_{n+p}}
A close projection produces a small residual, assuming ||f_i|| = 1
Hardware and stability constraints often necessitate a different approach
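In the unconstrained case this projection is simply a least-squares fit. A small numpy sketch with illustrative variable names and unit-norm frame vectors, as assumed on the slide:

import numpy as np

def unconstrained_coeffs(f_n, F_next):
    """f_n: (M,) current frame vector; F_next: (M, p) columns f_{n+1}..f_{n+p}.
    Returns c minimizing ||f_n - F_next @ c|| and the residual norm."""
    c, *_ = np.linalg.lstsq(F_next, f_n, rcond=None)
    return c, np.linalg.norm(f_n - F_next @ c)

rng = np.random.default_rng(2)
M, p = 64, 4
F = rng.standard_normal((M, p + 1))
F /= np.linalg.norm(F, axis=0)        # ||f_i|| = 1
c, residual = unconstrained_coeffs(F[:, 0], F[:, 1:])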

17 Sigma-Delta Design
Digitally compute the random projections from x'_n
Change c_{n,n+i} in the analog feedback loop in accordance with the random dictionary, so as to minimize the error:

18 Quantizer Order and RIP
Substitute the vectors {f_{n+1}, ..., f_{n+p}} and the residual into Eq. (8)
RIP of order K with constant δ_K guarantees that a Sigma-Delta loop of order p ≤ K-1 is not effective
Thus RIP forces p ≥ K

19 Sigma-Delta Stability
Quantizer saturation level
Stability holds if |x_in| < the saturation level
Thus, from (4), x'_n does not saturate if:

20 Sigma-Delta Stability
Define s_max = max(s_n)
Increasing s_max forces an increase in q_max, that is, in the dynamic range of the quantizer
This is the worst case; it is often violated for design flexibility, accepting a small probability of overflow

21 Randomized Dictionaries
Assume no structure in the dictionary; then there is no simplification of the feedback coefficients c_{n-i,n}:
c_{n-i,n} changes at a very high rate
c_{n-i,n} has a continuous range, requiring precise tunable elements, multipliers with a highly linear response, and very short settling times

22 Practical Alternatives for a Random Dictionary
Restrict c_{n-i,n} to the set {-c, 0, c} or {-Lc, ..., -c, 0, c, ..., Lc}
If the dictionary is not produced at runtime, c_{n-i,n} can be calculated at design time

23 Reminder
Minimize the error:
That is:
Subject to the stability constraint:
And possibly the coefficient restriction: {-Lc, ..., -c, 0, c, ..., Lc}

24 Greedy Optimization
An iterative algorithm that uses the history of coefficients c_{n-i,m}, i = 1..p, i < m ≤ n
Output: the set of p coefficients for the next time n+1: c_{n-i+1,n+1}, i = 1..p
Run at every random generation of a dictionary vector f_{n+1}

25 Greedy Optimization
r_{l,n}: the residual error vector after using {f_{n+1}, ..., f_{n+p}} to compensate for the quantization error of x_n, corresponding to the frame vector f_l
Note: this is the residual to be minimized

26 Greedy Optimization Solution
Define the incremental error:
Use the additive noise model:
Unconstrained minimization:

27 Unconstrained Solution Difficulties
No stability guarantee
Arbitrary coefficients are hard to implement
It can still provide a lower bound on performance
Solving with the stability constraint at every step n is hard

28 Constrained Coefficients
Restrict c_{n-i,n} to {-c, 0, c}
This can be implemented with an inverter, an open circuit, or a unit gain, followed by a constant gain c
(circuit sketch: a MUX controlled by e_{n-1}, a constant gain c, and a summing node combining the contributions from c_{n-1,n} and c_{n-2,n})

29 Constrained Coefficient Values
Task: minimize the residual error at n+1 compared to n:
The stability constraint forces only part of the coefficients to be non-zero:

30 Constrained Coefficient Values
Choose the coefficients that contribute the largest improvement in the power of the residual errors
Extension: coefficients take values in {-Lc, ..., -c, 0, c, ..., Lc}
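A simplified greedy step for the restricted set is sketched below; it ignores the stability constraint of slide 29 and the exact residual bookkeeping r_{l,n} of slide 25, and simply picks, tap by tap, the value in {-c, 0, c} that most reduces a residual vector.

import numpy as np

def greedy_constrained_step(r, F_next, c):
    """r: (M,) residual error vector; F_next: (M, p) next frame vectors.
    For each tap pick the value in {-c, 0, c} giving the largest residual reduction."""
    chosen = np.zeros(F_next.shape[1])
    for i in range(F_next.shape[1]):
        f = F_next[:, i]
        candidates = (-c, 0.0, c)
        errors = [np.linalg.norm(r - a * f) for a in candidates]
        chosen[i] = candidates[int(np.argmin(errors))]
        r = r - chosen[i] * f
    return chosen, r

rng = np.random.default_rng(3)
M, p = 64, 4
F_next = rng.standard_normal((M, p))
F_next /= np.linalg.norm(F_next, axis=0)
coeffs, r_out = greedy_constrained_step(rng.standard_normal(M), F_next, c=0.5)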

31 Experimental Setup
Measure: MSE, E[s_n]
Parameters:
r - frame redundancy
M - frame support
p - feedback order
c - constrained coefficient value
Compare:
(un)constrained s_n - stability
(un)restricted coefficients - {-c, 0, c}

32 Experimental Performance (plot): unrestricted & unconstrained, constant P = 8M

33 Experimental Performance (plot): unrestricted & unconstrained, constant M = 64

34 Experimental Performance (plot): restricted & unconstrained vs. unrestricted

35 Experimental Performance (plot): restricted & constrained; s_proj - unrestricted & unconstrained

36 Conclusions
Hardware complexity is reduced by adapting the feedback coefficients to a dynamically, randomly generated dictionary
A high-order feedback loop is required due to the RIP, but the order is low compared to other implementations
The algorithms guarantee a stable feedback loop, and the quantizer does not exceed its dynamic range

37 Quantization of Sparse Representations
Petros Boufounos, Richard G. Baraniuk

38 Quantization of Sparse Representations: Introduction
The goal: examine the effect of quantization on random CS measurements
Conclusion: CS with scalar quantization does not use its allocated rate efficiently

39 Signal Model
S - signal space, dim(S) = N
Sampling: a sampling basis, the sampling space, the sampling operator, and a (non)linear reconstruction
y - the vector of sampling coefficients

40 Quantization
M sampling coefficients, each quantized to L quantization levels
L^M possible quantization points
B ≥ M log2(L) bits if no subsequent entropy coding is used
For simplicity: L = 2^(B/M)
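A sketch of the bit-budget bookkeeping and a uniform scalar quantizer, assuming an illustrative coefficient range of [-1, 1]:

import numpy as np

def uniform_scalar_quantize(y, L, lo=-1.0, hi=1.0):
    """Map each coefficient to the center of one of L uniform cells on [lo, hi]."""
    step = (hi - lo) / L
    idx = np.clip(np.floor((y - lo) / step), 0, L - 1)
    return lo + (idx + 0.5) * step

M, B = 64, 256
L = 2 ** (B // M)                         # L = 2^(B/M) = 16 levels per coefficient
y = np.random.default_rng(4).uniform(-1, 1, M)
y_hat = uniform_scalar_quantize(y, L)
max_err = np.max(np.abs(y - y_hat))       # at most step/2 = (hi - lo)/(2L)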

41 Quantization Error
The quantization levels define quantization cells
A vector y falling inside a cell is quantized to that cell's center
Define the error:

42 Example of Uniform Scalar Quantization, M=2,L=4

43 Recovery of a Sparse Signal
x is K-sparse; the RIP is satisfied with high probability if random measurements are used
If the RIP is satisfied, reconstruction via l1 minimization (minimize ||x||_1 subject to consistency with the measurements y)
If the RIP holds and the measurement error is at most ε, then the reconstruction error is at most a constant times ε
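The l1 reconstruction can be written as the linear program referred to later on slide 56. The sketch below casts min ||x||_1 s.t. Ux = y as an LP with the standard split x = u - v, u, v ≥ 0, using scipy.optimize.linprog; problem sizes are illustrative.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(U, y):
    """min ||x||_1 subject to U x = y, solved as an LP with x = u - v, u, v >= 0."""
    M, N = U.shape
    cost = np.ones(2 * N)                    # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([U, -U])                # U u - U v = y
    res = linprog(cost, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    z = res.x
    return z[:N] - z[N:]

rng = np.random.default_rng(0)
N, M, K = 128, 48, 5
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
U = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = basis_pursuit(U, U @ x)
print(np.linalg.norm(x_hat - x))             # small when an RIP-type condition holds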

44 Quantization of Subspaces
A K-dimensional subspace of W (dim(W) = M) intersects only a limited number of the quantization cells:

45 A 1-dimensional subspace H intersecting the L^M = 8^2 quantization cells (figure)

46 Quantization of a Sparse Signal
For K-sparse signals in S: the sampling produces at most C(N, K) subspaces W_i with dim(W_i) ≤ K
Together they intersect only a limited number of cells:

47 Quantization of a Sparse Signal: Assumptions
To minimize e_max, distribute these cells equally among the subspaces, so I_0 points are used to encode each subspace
The K components of the sparse signal are uniformly chosen from the N possible components
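The rate bookkeeping behind slides 46-48 can be made concrete with a small calculation. Parameter values are illustrative; C(N, K) counts the coordinate subspaces a K-sparse signal can occupy, so log2 C(N, K) is roughly the number of bits needed just to identify the support, while scalar quantization spends B = M log2(L) bits on the whole M-dimensional sampling space.

import math

N, K, M, L = 1024, 16, 96, 16                 # illustrative; M matches c*K*log2(N/K) with c = 1
B = M * math.log2(L)                          # bits actually sent: 384
support_bits = math.log2(math.comb(N, K))     # ~116 bits just to name the occupied subspace
print(B, support_bits)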

48 Rate Penalty for CS
We send B bits but use only log2(I_N) bits:
Quantization efficiency:

49 Rate Penalty for CS
Substituting the RIP requirement into (9) shows that the efficiency is small

50 Rate Penalty at a Fixed Bit Rate
B - the number of bits sent (constant)

51 Increasing the Efficiency to 1
Task: B = log2(I_N)
Theoretical solution: subsequent lossless entropy coding
Not currently available for CS measurements
Problem: the randomness of the sampling dictionary, which provides the universality property

52 Error Bounds
Signal assumptions: K-sparse and power limited:
Use a vector quantizer that partitions the space into P cells
Worst-case error:
Minimum MSE:
Note: it is assumed that the K sparse coefficients are uniformly distributed among the N possible positions

53 Lower Bound on the Quantization Error
Equally distribute the quantization points among all subspaces
At a constant bit rate B: a fixed number of quantization points per subspace
The remaining bits are used to encode the subspace
Substitute P into (3) and apply Stirling's formula

54 Vector vs. Scalar Quantizer Error Bounds
Lower error bound, vector quantizer:
Lower error bound, scalar quantizer:

55 Scalar Quantizer Efficiency Reduced
Sampling followed by scalar quantization with L = 2^(B/M) levels intersects only I_0(M, K, L) cells:
Applying the RIP requirement: M = cK log2(N/K)

56 CS Linear Program Reconstruction Error Bounds
This bound is worse than the uniform quantization interval at rate B

57 Comparing the Terms
2^(B/M) - encoding the M-dimensional space instead of the K-dimensional space
N/K - CS does not have to encode the locations of the sparse components
√M - the vector vs. scalar quantization advantage
log2(N/K) - inefficient recovery due to universality

58 Compressible Signals
Signal x from S (dimension N), approximately K-sparse: x lies inside an l_p ball:
Quantization error bound from the previous article [5]:
As p -> 0 the error -> 0, so a tighter bound is (14):

59 Error Bound at Small p
As p -> 0, the signal x converges to a 1-sparse signal
Property: the error bound is non-decreasing in p
But as B increases, bound (19) becomes tighter

60 Conclusions
CS of sparse signals followed by scalar quantization is inefficient in:
rate utilization
error performance
Reasons:
universality requires the quantization cells to cover the sampling space uniformly, while the signal occupies only a union of subspaces within it
reconstruction using linear programming
Possible solutions:
vector quantization
reconstruction consistent with the quantized measurements

61 Thank you. Questions?