Lattices for Distributed Source Coding - Reconstruction of a Linear function of Jointly Gaussian Sources -D. Krithivasan and S. Sandeep Pradhan - University.

Slides:



Advertisements
Similar presentations
On an Improved Chaos Shift Keying Communication Scheme Timothy J. Wren & Tai C. Yang.
Advertisements

EigenFaces and EigenPatches Useful model of variation in a region –Region must be fixed shape (eg rectangle) Developed for face recognition Generalised.
Aggregating local image descriptors into compact codes
2806 Neural Computation Self-Organizing Maps Lecture Ari Visa.
Sampling and Pulse Code Modulation
General Classes of Lower Bounds on Outage Error Probability and MSE in Bayesian Parameter Estimation Tirza Routtenberg Dept. of ECE, Ben-Gurion University.
Paul Cuff THE SOURCE CODING SIDE OF SECRECY TexPoint fonts used in EMF. Read the TexPoint manual before you delete this box.: AA.
1 Outline  Introduction to JEPG2000  Why another image compression technique  Features  Discrete Wavelet Transform  Wavelet transform  Wavelet implementation.
Achilleas Anastasopoulos (joint work with Lihua Weng and Sandeep Pradhan) April A Framework for Heterogeneous Quality-of-Service Guarantees in.
Yi Wu (CMU) Joint work with Parikshit Gopalan (MSR SVC) Ryan O’Donnell (CMU) David Zuckerman (UT Austin) Pseudorandom Generators for Halfspaces TexPoint.
Fundamental limits in Information Theory Chapter 10 :
L15:Microarray analysis (Classification) The Biological Problem Two conditions that need to be differentiated, (Have different treatments). EX: ALL (Acute.
Efficient Statistical Pruning for Maximum Likelihood Decoding Radhika Gowaikar Babak Hassibi California Institute of Technology July 3, 2003.
Asymptotic Enumerators of Protograph LDPCC Ensembles Jeremy Thorpe Joint work with Bob McEliece, Sarah Fogal.
Multiple Description Coding and Distributed Source Coding: Unexplored Connections in Information Theory and Coding Theory S. Sandeep Pradhan Department.
1 Network Source Coding Lee Center Workshop 2006 Wei-Hsin Gu (EE, with Prof. Effros)
Distributed Video Coding Bernd Girod, Anne Margot Aagon and Shantanu Rane, Proceedings of IEEE, Jan, 2005 Presented by Peter.
Descriptive statistics Experiment  Data  Sample Statistics Experiment  Data  Sample Statistics Sample mean Sample mean Sample variance Sample variance.
Independent Component Analysis (ICA) and Factor Analysis (FA)
© 2005, it - instituto de telecomunicações. Todos os direitos reservados. Gerhard Maierbacher Scalable Coding Solutions for Wireless Sensor Networks IT.
Code and Decoder Design of LDPC Codes for Gbps Systems Jeremy Thorpe Presented to: Microsoft Research
BASiCS Group University of California at Berkeley Generalized Coset Codes for Symmetric/Asymmetric Distributed Source Coding S. Sandeep Pradhan Kannan.
ECE 776 Information Theory Capacity of Fading Channels with Channel Side Information Andrea J. Goldsmith and Pravin P. Varaiya, Professor Name: Dr. Osvaldo.
A Graph-based Framework for Transmission of Correlated Sources over Multiuser Channels Suhan Choi May 2006.
Compression with Side Information using Turbo Codes Anne Aaron and Bernd Girod Information Systems Laboratory Stanford University Data Compression Conference.
Distributed Video Coding Bernd Girod, Anne Margot Aaron, Shantanu Rane, and David Rebollo-Monedero IEEE Proceedings 2005.
Linear Codes for Distributed Source Coding: Reconstruction of a Function of the Sources -D. Krithivasan and S. Sandeep Pradhan -University of Michigan,
Lecture II-2: Probability Review
1 10. Joint Moments and Joint Characteristic Functions Following section 6, in this section we shall introduce various parameters to compactly represent.
Modern Navigation Thomas Herring
©2003/04 Alessandro Bogliolo Background Information theory Probability theory Algorithms.
Binary Variables (1) Coin flipping: heads=1, tails=0 Bernoulli Distribution.
Hierarchical Cooperation Achieves Linear Scaling in Ad Hoc Wireless Networks David Tse Wireless Foundations U.C. Berkeley AISP Workshop May 2, 2007 Joint.
DIGITAL COMMUNICATION Error - Correction A.J. Han Vinck.
Joint Physical Layer Coding and Network Coding for Bi-Directional Relaying Makesh Wilson, Krishna Narayanan, Henry Pfister and Alex Sprintson Department.
ECE 8443 – Pattern Recognition LECTURE 03: GAUSSIAN CLASSIFIERS Objectives: Normal Distributions Whitening Transformations Linear Discriminants Resources.
CMPT 365 Multimedia Systems
ECE 8443 – Pattern Recognition ECE 8423 – Adaptive Signal Processing Objectives: Deterministic vs. Random Maximum A Posteriori Maximum Likelihood Minimum.
Channel Capacity.
Adaptive Data Aggregation for Wireless Sensor Networks S. Jagannathan Rutledge-Emerson Distinguished Professor Department of Electrical and Computer Engineering.
Cooperative Communication in Sensor Networks: Relay Channels with Correlated Sources Brian Smith and Sriram Vishwanath University of Texas at Austin October.
A Sparse Non-Parametric Approach for Single Channel Separation of Known Sounds Paris Smaragdis, Madhusudana Shashanka, Bhiksha Raj NIPS 2009.
ECE 8443 – Pattern Recognition ECE 8423 – Adaptive Signal Processing Objectives: ML and Simple Regression Bias of the ML Estimate Variance of the ML Estimate.
MACHINE LEARNING 8. Clustering. Motivation Based on E ALPAYDIN 2004 Introduction to Machine Learning © The MIT Press (V1.1) 2  Classification problem:
PROBABILITY AND STATISTICS FOR ENGINEERING Hossein Sameti Department of Computer Engineering Sharif University of Technology Principles of Parameter Estimation.
CHAPTER 5 SIGNAL SPACE ANALYSIS
Introduction to Digital Signals
Name Iterative Source- and Channel Decoding Speaker: Inga Trusova Advisor: Joachim Hagenauer.
Limits On Wireless Communication In Fading Environment Using Multiple Antennas Presented By Fabian Rozario ECE Department Paper By G.J. Foschini and M.J.
1 Source Coding and Compression Dr.-Ing. Khaled Shawky Hassan Room: C3-222, ext: 1204, Lecture 10 Rate-Distortion.
Vector Quantization CAP5015 Fall 2005.
Joint Moments and Joint Characteristic Functions.
Jayanth Nayak, Ertem Tuncel, Member, IEEE, and Deniz Gündüz, Member, IEEE.
Jayanth Nayak, Ertem Tuncel, Member, IEEE, and Deniz Gündüz, Member, IEEE.
A Low-Complexity Universal Architecture for Distributed Rate-Constrained Nonparametric Statistical Learning in Sensor Networks Avon Loy Fernandes, Maxim.
Chapter 8 Lossy Compression Algorithms. Fundamentals of Multimedia, Chapter Introduction Lossless compression algorithms do not deliver compression.
1 On the Channel Capacity of Wireless Fading Channels C. D. Charalambous and S. Z. Denic School of Information Technology and Engineering, University of.
Channel Coding Theorem (The most famous in IT) Channel Capacity; Problem: finding the maximum number of distinguishable signals for n uses of a communication.
Samuel Cheng, Shuang Wang and Lijuan Cui University of Oklahoma
Sequential Off-line Learning with Knowledge Gradients Peter Frazier Warren Powell Savas Dayanik Department of Operations Research and Financial Engineering.
- A Maximum Likelihood Approach Vinod Kumar Ramachandran ID:
Institute for Experimental Mathematics Ellernstrasse Essen - Germany DATA COMMUNICATION introduction A.J. Han Vinck May 10, 2003.
Image Processing Architecture, © Oleh TretiakPage 1Lecture 5 ECEC 453 Image Processing Architecture Lecture 5, 1/22/2004 Rate-Distortion Theory,
Data Transformation: Normalization
The Johns Hopkins University
Outline Introduction Signal, random variable, random process and spectra Analog modulation Analog to digital conversion Digital transmission through baseband.
2018/9/16 Distributed Source Coding Using Syndromes (DISCUS): Design and Construction S.Sandeep Pradhan, Kannan Ramchandran IEEE Transactions on Information.
Lecture 2 – Monte Carlo method in finance
Foundation of Video Coding Part II: Scalar and Vector Quantization
Compute-and-Forward Can Buy Secrecy Cheap
Presentation transcript:

Lattices for Distributed Source Coding - Reconstruction of a Linear function of Jointly Gaussian Sources -D. Krithivasan and S. Sandeep Pradhan - University of Michigan, Ann Arbor TexPoint fonts used in EMF. Read the TexPoint manual before you delete this box.: AA A A A A A AAA A A

Presentation Overview Problem Formulation A Straightforward coding scheme using random coding A new coding scheme using lattice coding –Motivation for the coding scheme –An overview of lattices and lattice coding –A lattice based coding scheme Results and Extensions Conclusion

Problem Formulation Source (X 1,X 2 ) is a bivariate Gaussian. Encoders observe X 1,X 2 separately and quantize them at rates R 1,R 2 Decoder interested in a linear function Z, X 1 -cX 2 Lossy reconstruction to within mean square distortion D Objective: Achievable rates (R 1,R 2 ) at distortion D

Pictorial Representation Reconstruct, the best estimate of Z = F(X,Y)

Features of the proposed scheme Multi-terminal source coding problem. Random coding techniques typically used to tackle such problems. Our approach based on “structured” lattice codes. –Previously, structured codes used only to achieve known bounds with low complexity. –Our coding scheme relies critically on the structure of the code. –Structured codes give performance gains not attainable by random codes.

Berger-Tung based Coding Scheme Encoders: Quantize X 1 to W 1, X 2 to W 2. Transmit W 1 and W 2 Decoder: Reconstruct Can use Gaussian test channels for P(W 1 j X 1 ) and P(W 2 j X 2 ) to derive achievable rates and distortion Known to be optimal for c<0 with Gaussian test channels

Motivation for Our Coding Scheme Korner and Marton considered lossless coding of Z = X 1 © X 2 Coding scheme if the encoding is centralized? –Compute Z = X 1 © X 2. Compress it to f(Z) and transmit. –f( ¢ ) is any good source coding scheme. Suppose f( ¢ ) distributes over © ? –Compress X 1, X 2 as f(X 1 ) and f(X 2 ). –Decoder computes f(X 1 ) © f(X 2 ) = f(X 1 © X 2 ) = f(Z). No difference between centralized and distributed coding. Choosing f( ¢ ) as a linear code will work.

An Overview of Lattices An n dimensional lattice ¤ – collection of integer combinations of columns of a n £ n generator matrix G. Nearest neighbor quantizer Quantization error : x mod ¤, x – Q ¤ (x) Voronoi region Second moment ¾ 2 ( ¤ ) of lattice ¤ – expected value per dimension of a random vector uniformly distributed in Normalized second moment G( ¤ ) = ¾ 2 ( ¤ ) / V 2/n`

Introduction to Lattice Codes Lattice code – a subset of the lattice points are used as codewords. Have been used for both source and channel coding problems. Various notions of goodness: –Good source D-code if log (2 ¼ eG( ¤ )) · ² and ¾ 2 ( ¤ )=D –Good channel ¾ 2 Z ( ¤ ) if it achieves the Poltyrev exponent on the unconstrained AWGN channel of variance Why Lattice codes? –Lattices achieve the Gaussian rate distortion bound –Lattice encoding distributes over Z = X 1 – c X 2

Nested Lattice Codes ( ¤ 1, ¤ 2 ) is a nested lattice if ¤ 1 ½ ¤ 2 Nested lattice codes widely used in multiterminal communications. Need nested lattice codes such that ¤ 1 and ¤ 2 are good source and channel codes. Such nested lattices termed “good nested lattice codes”. Known that such nested lattices do exist in sufficiently high dimension.

The Coding Scheme Good nested lattice code ( ¤ 11, ¤ 12, ¤ 2 ), ¤ 2 ½ ¤ 11, ¤ 12 Dithers Note: second encoder scales the input before encoding.

The Coding Scheme contd. Lattice parameters Rate of a nested lattice ( ¤ 1, ¤ 2 ) is Encoder rates are Sum rate R 1 + R 2 = log achievable at distortion D.

The Coding Theorem The set of all rate-distortion tuples (R 1,R 2,D) that satisfy are achievable.

Proof Outline Key Idea: Distributive property of lattice mod operation Using this, one of the mod- ¤ 2 can be removed from the signal path. Simplified but equivalent coding scheme is

Proof Outline contd. e q 1, e q 2 - subtractive dither quantization noises independent of the sources. Decoder operation But ¤ 2 is a good ¾ 4 Z /( ¾ 2 Z -D) channel code. Implies ((Z+e q ) mod ¤ 2 ) = (Z+e q ) with high probability Decoder reconstruction Can be checked that

Comments about the Coding Scheme Larger rate region than the random coding scheme for some source statistics and some range of D. Coding scheme is lattice vector quantization followed by “correlated” lattice-structured binning. Previously lattice coding used only to achieve known performance bounds. Our theorem – first instance of lattice coding being central to improving the rate region.

Extensions to more than Two Sources Reconstructing –Some sources coded together using the “correlated” binning strategy of the lattice coding scheme. –Others coded independently using the Berger-Tung coding scheme. –Decoder has some side information – previously decoded sources. –Possible to present a unified rate region combining all such strategies.

Comparison of the Rate Regions Compare sum rates for ½ = 0.8 and c = 0.8 Lattice based scheme – lower sum rates for small distortions. Time sharing between the two schemes – Better rate region than either scheme alone.

Range of values for Lower Sum Rate Shows where ( ½,c) should lie for lattice sum rate to be lower than Berger- Tung sum rate for some distorion D. Contour marked R – lattice sum rate lower by R units Improvement only for c > 0.

Conclusion Considered lossy reconstruction of a function of the sources in a distributed setting. Presented a coding scheme of vector quantization followed by “correlated” binning using lattices. Improves upon the rate region of the natural random coding scheme. Currently working on extending the scheme to discrete sources and arbitrary functions.