
1 Compressive Learning and Inference
Richard Baraniuk, Chinmay Hegde, Marco Duarte, Mark Davenport (Rice University), Michael Wakin (University of Michigan)

2 Pressure is on Digital Sensors
The success of digital data acquisition is placing increasing pressure on signal/image processing hardware and software to support
–higher resolution / denser sampling (ADCs, cameras, imaging systems, microarrays, …)
x large numbers of sensors (image databases, camera arrays, distributed wireless sensor networks, …)
x increasing numbers of modalities (acoustic, RF, visual, IR, UV)
= a deluge of data: how to acquire, store, fuse, and process it efficiently?

3 Sensing by Sampling
Long-established paradigm for digital data acquisition:
–sample data at the Nyquist rate (2x bandwidth)
–compress data (signal-dependent, nonlinear)
–brick wall to resolution/performance
[Diagram: sample → compress (sparse/compressible, wavelet transform) → transmit/store → receive → decompress]

4 Compressive Sensing (CS)
Directly acquire "compressed" data: replace samples by more general "measurements"
[Diagram: compressive sensing → transmit/store → receive → reconstruct]

5 Compressive Sensing
When data is sparse/compressible, we can directly acquire a condensed representation with no/little information loss: a random projection will work [Candes-Romberg-Tao, Donoho, 2004]
[Figure: y = Φx, with Φ an M x N random matrix giving M measurements of a sparse signal x with K nonzero entries]
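
A minimal numpy sketch of this measurement model (the sizes are illustrative, not taken from the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, K = 1024, 128, 10        # signal length, measurements (M << N), sparsity

    # K-sparse signal: K nonzero entries at random locations
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

    # random Gaussian measurement matrix and compressive measurements y = Phi x
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    y = Phi @ x
    print(y.shape)                 # (128,) -- a condensed representation of x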

6 Why CS Works
A random projection Φ is not full rank, but it stably embeds signals with concise geometrical structure, with high probability, provided M is large enough:
–sparse signal models (x is K-sparse)
–compressible signal models

7 Why CS Works
A random projection Φ is not full rank, but it stably embeds signals with concise geometrical structure (sparse and compressible signal models), with high probability, provided M is large enough
Stable embedding: preserves structure
–distances between points, angles between vectors, …
[Figure: the K-sparse model as a union of K-dim planes]
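
For reference, the stable-embedding property alluded to here is usually written as a restricted isometry over pairs of sparse signals (a standard formulation, not quoted from the slides):

    (1 - \epsilon)\,\|x_1 - x_2\|_2^2
        \;\le\; \|\Phi x_1 - \Phi x_2\|_2^2
        \;\le\; (1 + \epsilon)\,\|x_1 - x_2\|_2^2
    \qquad \text{for all } K\text{-sparse } x_1, x_2,

which a random Φ satisfies with high probability once M = O(K log(N/K)).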

8 CS Signal Recovery
Recover the sparse/compressible signal x from CS measurements y via optimization: min ||x||_1 subject to Φx = y, a linear program
[Figure: the K-sparse model as a union of K-dim planes]
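
The linear program on this slide can be made concrete: basis pursuit min ||x||_1 s.t. Φx = y, rewritten as an LP in z = [x; u] and handed to a generic solver. This is a toy instance for illustration; practical CS recovery uses specialized solvers:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    N, M, K = 128, 40, 5               # signal length, measurements, sparsity

    # K-sparse ground truth and random Gaussian measurements y = Phi x
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    y = Phi @ x

    # Basis pursuit min ||x||_1 s.t. Phi x = y, as an LP:
    # minimize sum(u) subject to -u <= x <= u and Phi x = y
    c = np.concatenate([np.zeros(N), np.ones(N)])
    A_ub = np.block([[ np.eye(N), -np.eye(N)],     #  x - u <= 0
                     [-np.eye(N), -np.eye(N)]])    # -x - u <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((M, N))])      # Phi x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * (2 * N))
    x_hat = res.x[:N]
    print("recovery error:", np.linalg.norm(x_hat - x))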

9 Information Scalability
Many applications involve signal inference, not reconstruction: detection < classification < estimation < reconstruction (in order of increasing computational complexity; reconstruction requires solving a linear program)

10 Information Scalability
Many applications involve signal inference, not reconstruction: detection < classification < estimation < reconstruction
Good news: CS supports efficient learning, inference, and processing directly on compressive measurements
–random projections ~ sufficient statistics for signals with concise geometrical structure
–extend CS theory to signal models beyond sparse/compressible

11 Application: Compressive Detection/Classification via Matched Filtering

12 Matched Filter
Detection/classification with K unknown articulation parameters
–Ex: position and pose of a vehicle in an image
–Ex: time delay of a radar signal return
Matched filter: joint parameter estimation and detection/classification
–compute a sufficient statistic for each potential target and articulation
–compare the "best" statistics to detect/classify

13 Matched Filter Geometry
Detection/classification with K unknown articulation parameters
Images are points in R^N
Classify by finding the closest target template to the data for each class (additive white Gaussian noise)
–distance or inner product
[Figure: data point and target templates (points) from a generative model or training data]

14 Matched Filter Geometry
Detection/classification with K unknown articulation parameters
Images are points in R^N
Classify by finding the closest target template to the data
As the template's articulation parameter changes, the points map out a K-dim nonlinear manifold
Matched filter classification = closest manifold search
[Figure: articulation parameter space mapping to a manifold; data point]
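
To make "closest manifold search" concrete, here is a hypothetical 1-D toy version: templates for two invented target classes are sampled along a shift articulation, and the matched filter picks the class whose template set comes closest to the noisy data. All signals, classes, and the noise level are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    N, n_shifts = 256, 32

    def template(c, shift):
        """Class-c target template at a given articulation (here: a 1-D shift)."""
        t = np.zeros(N)
        width = 8 if c == 0 else 16          # the two classes differ in pulse width
        t[shift:shift + width] = 1.0
        return t / np.linalg.norm(t)

    # templates sampled along each class's articulation manifold
    grids = {c: np.array([template(c, s) for s in range(n_shifts)]) for c in (0, 1)}

    # noisy observation of a class-1 target at shift 12
    x = template(1, 12) + 0.1 * rng.standard_normal(N)

    # matched filter: nearest template over class x articulation (min distance)
    best = {c: np.min(np.linalg.norm(G - x, axis=1)) for c, G in grids.items()}
    print("estimated class:", min(best, key=best.get))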

15 CS for Manifolds
Theorem: random measurements preserve manifold structure [Wakin et al, FOCM '08]
Enables parameter estimation and MF detection/classification directly on compressive measurements
–K very small in many applications
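
The scaling behind this theorem is often quoted (up to constants that depend on the manifold's volume, curvature, and condition number; this precise form is not on the slide) as

    M \;=\; O\!\left( \frac{K \log N}{\epsilon^{2}} \right)

random measurements sufficing for Φ to preserve pairwise distances on a smooth K-dimensional submanifold of R^N to within a factor 1 ± ε.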

16 Example: Matched Filter
Detection/classification with K=3 unknown articulation parameters:
1.horizontal translation
2.vertical translation
3.rotation

17 Smashed Filter
Detection/classification with K=3 unknown articulation parameters (manifold structure)
Dimensionally reduced matched filter operating directly on compressive measurements

18 Smashed Filter
Random shift and rotation (K=3 dim. manifold); noise added to the measurements
Goal: identify the most likely position for each image class, then identify the most likely class using a nearest-neighbor test
[Plots: avg. shift estimate error and classification rate (%) vs. number of measurements M, for increasing noise levels]
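
A matching toy sketch of the smashed filter: the same kind of template manifolds and data as before, but everything is first projected by one random Φ, and the nearest projected template yields both the class and the articulation estimate (again, all parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, n_shifts = 256, 40, 32

    def template(c, shift):
        t = np.zeros(N)
        width = 8 if c == 0 else 16
        t[shift:shift + width] = 1.0
        return t / np.linalg.norm(t)

    Phi = rng.standard_normal((M, N)) / np.sqrt(M)

    # project the template manifolds and the noisy observation with the same Phi
    grids = {c: np.array([Phi @ template(c, s) for s in range(n_shifts)])
             for c in (0, 1)}
    y = Phi @ template(1, 12) + 0.05 * rng.standard_normal(M)

    # nearest projected template gives both class and articulation estimate
    dists = {(c, s): np.linalg.norm(G[s] - y)
             for c, G in grids.items() for s in range(n_shifts)}
    c_hat, s_hat = min(dists, key=dists.get)
    print(f"class {c_hat}, shift {s_hat}")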

19 Application: Compressive Data Fusion

20 Multisensor Inference
Example: a network of J cameras observing an articulating object
Each camera's images lie on a K-dim manifold in R^N
How to efficiently fuse imagery from the J cameras to maximize classification accuracy while minimizing network communication?

21 Multisensor Fusion
Fusion: stack corresponding image vectors taken at the same time
The fused images still lie on a K-dim manifold in R^{JN}: the Joint Articulation Manifold (JAM)

22 CS + JAM
Can take CS measurements of the stacked images and process or make inferences
[Diagram: CS + JAM with unfused sensing]

23 CS + JAM
Can compute the CS measurements in-network as we transmit to the collection/processing point
[Diagram: CS + JAM, in-network computation]
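
One way to read this slide (an interpretation, not spelled out in the transcript): for the stacked JAM vector x = [x_1; …; x_J], a global measurement y = Φx decomposes column-wise as y = Σ_j Φ_j x_j, so each camera can project locally and the network only ever forwards and adds length-M vectors:

    import numpy as np

    rng = np.random.default_rng(0)
    J, N, M = 3, 4096, 200         # cameras, pixels per camera (shrunk for the demo), measurements

    images = [rng.standard_normal(N) for _ in range(J)]             # stand-in camera images
    Phis = [rng.standard_normal((M, N)) / np.sqrt(M) for _ in range(J)]

    # in-network: each camera computes Phi_j @ x_j; nodes sum M-vectors en route
    y_in_network = sum(P @ x for P, x in zip(Phis, images))

    # centralized equivalent: one measurement of the stacked JAM vector
    y_central = np.hstack(Phis) @ np.concatenate(images)
    print(np.allclose(y_in_network, y_central))                     # True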

24 Simulation Results
J=3 CS cameras, each with N=320x240 resolution; M=200 random measurements per camera
Two classes:
1.truck w/ cargo
2.truck w/ no cargo
[Images: class 1, class 2]

25 Simulation Results
J=3 CS cameras, each with N=320x240 resolution; M=200 random measurements per camera
Two classes: truck w/ cargo, truck w/ no cargo
Smashed filtering variants compared:
–independent
–majority vote
–JAM fused

26 Application: Compressive Manifold Learning

27 Manifold Learning
Given training points in R^N, learn the mapping to the underlying K-dimensional articulation manifold
Algorithms: ISOMAP, LLE, HLLE, …
Ex: images of a rotating teapot; articulation space = circle

28 Compressive Manifold Learning
The ISOMAP algorithm is based on geodesic distances between points, and random measurements preserve these distances
Theorem: if M is sufficiently large, then the ISOMAP residual variance in the projected domain is bounded by an additive error factor [Hegde et al '08]
[Figure: translating disk manifold (K=2) learned from full data (N=4096) and from M = 100, 50, 25 random measurements]
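
A sketch with the flavor of this experiment, using scikit-learn's Isomap on randomly projected data; a synthetic swiss-roll stands in for the translating-disk image manifold, and the lifting step and all sizes are invented for illustration:

    import numpy as np
    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(0)
    X, t = make_swiss_roll(n_samples=800, random_state=0)   # K=2 manifold in R^3

    # lift into a high-dimensional ambient space, then randomly project
    N, M = 1024, 25
    lift = rng.standard_normal((N, 3))
    X_hi = X @ lift.T                                       # points in R^N
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    X_cs = X_hi @ Phi.T                                     # M random measurements

    # ISOMAP on the full data vs. on the compressive measurements
    emb_full = Isomap(n_neighbors=10, n_components=2).fit_transform(X_hi)
    emb_cs = Isomap(n_neighbors=10, n_components=2).fit_transform(X_cs)
    print(emb_full.shape, emb_cs.shape)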

29 Conclusions
Why CS works: stable embedding for signals with concise geometric structure
–sparse signals (K-planes), compressible signals (l_p balls)
–smooth manifolds
Information scalability
–detection < classification < estimation < reconstruction
–compressive measurements ~ sufficient statistics
–many fewer measurements may be required to detect/classify/estimate than to reconstruct
–leverages manifold structure rather than sparsity
Examples
–smashed filter
–JAM for data fusion
–manifold learning
dsp.rice.edu/cs


31 Partnership on open educational resources
–content development
–peer review
Contribute your course notes, tutorial article, textbook draft, out-of-print textbook, … (you must own the copyright)
MS Word and LaTeX importers
For more info: cnx.org

32 capetowndeclaration.org (Cape Town Open Education Declaration)


34 Why CS Works (#3)
A random projection is not full rank, but it stably embeds the following into a lower-dimensional space, with high probability, provided M is large enough:
–sparse/compressible signal models
–smooth manifolds
–point clouds of Q points (Johnson-Lindenstrauss)
Stable embedding: preserves structure
–distances between points, angles between vectors, …
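
A quick numeric check of the Johnson-Lindenstrauss behavior for a cloud of Q points (all dimensions chosen arbitrarily for the demo):

    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    N, Q, M = 10000, 50, 600        # ambient dim, number of points, reduced dim

    X = rng.standard_normal((Q, N))                  # point cloud in R^N
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random projection
    Y = X @ Phi.T                                    # projected cloud in R^M

    # pairwise distances are preserved up to a small multiplicative distortion
    ratios = pdist(Y) / pdist(X)
    print(f"distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")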


Download ppt "Richard Baraniuk Chinmay Hegde Marco Duarte Mark Davenport Rice University Michael Wakin University of Michigan Compressive Learning and Inference."

Similar presentations


Ads by Google