Scalar Quantization – Mathematical Model

Multimedia Compression (דחיסת מולטימדיה), January 27, 2009 – Lecture 9A: Scalar Quantization – Mathematical Model

Definition of Quantization. Quantization: a process of representing a large – possibly infinite – set of values with a much smaller set. Scalar quantization: a mapping of an input value x onto a finite set of output values y, written Q: x → y. It is one of the simplest and most general ideas in lossy compression.

Definition of Quantization (Cont.) Many of the fundamental ideas of quantization and compression are most easily introduced in the simple context of scalar quantization. Any real number x can be rounded off to the nearest integer, say q(x) = round(x). This maps the real line R (a continuous space) onto the integers (a discrete space).
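A minimal Python sketch of this rounding quantizer (the input values are arbitrary illustration data, not from the slides):

import numpy as np

# Rounding as the simplest scalar quantizer: q(x) = round(x)
x = np.array([0.3, 1.7, -2.4, 5.7, -0.49])   # arbitrary real-valued inputs
q = np.round(x)                               # maps the real line onto the integers
e = x - q                                     # quantization error ("noise"), always within +/- 0.5

print(q)   # integer-valued outputs
print(e)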

An example of uniform quantization

Input vs. Output

Quantization

Example of a Quantized Waveform

Quantization Noise. Quantizing x produces a quantization error ('noise') e = x − Q(x), so that x = Q(x) + e.

Quantizer Definition. The design of the quantizer has a significant impact on the amount of compression obtained and the loss incurred in a lossy compression scheme. A quantizer consists of an encoder mapping and a decoder mapping. Encoder mapping – the encoder divides the range of the source into a number of intervals, and each interval is represented by a distinct codeword. Decoder mapping – for each received codeword, the decoder generates a reconstruction value.

Quantization operation – Let M be the number of reconstruction levels, let the decision boundaries be {b_i, i = 0, …, M}, and let the reconstruction levels be {y_i, i = 1, …, M}. Then Q(x) = y_i if and only if b_{i−1} < x ≤ b_i.
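A hypothetical encoder/decoder pair for such a quantizer, sketched in Python; the boundaries and reconstruction levels below are illustrative values, not taken from the slides:

import numpy as np

# Hypothetical 4-level quantizer: interior decision boundaries b_1..b_3
# (b_0 = -inf, b_4 = +inf implied) and one reconstruction level y_i per interval.
boundaries = np.array([-1.0, 0.0, 1.0])
levels = np.array([-1.5, -0.5, 0.5, 1.5])

def encode(x):
    # Encoder mapping: returns the interval index i (np.digitize uses b_{i-1} <= x < b_i)
    return np.digitize(x, boundaries)

def decode(i):
    # Decoder mapping: codeword index -> reconstruction level y_i
    return levels[i]

x = np.array([-2.3, -0.2, 0.7, 3.1])
print(decode(encode(x)))   # [-1.5 -0.5  0.5  1.5]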

Quantization Problem: MSQE (mean squared quantization error). Let the quantization operation be Q, and suppose the input is modeled by a random variable X with pdf f_X(x). The MSQE is
σ_q² = ∫ (x − Q(x))² f_X(x) dx = Σ_{i=1…M} ∫_{b_{i−1}}^{b_i} (x − y_i)² f_X(x) dx.
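As an illustrative numerical check (assuming a standard Gaussian source and the same hypothetical 4-level quantizer as above; none of these numbers come from the slides), the MSQE can be evaluated by direct integration:

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# MSQE = sum_i integral over (b_{i-1}, b_i] of (x - y_i)^2 f_X(x) dx,
# evaluated here for a 4-level quantizer and a standard Gaussian f_X.
boundaries = [-np.inf, -1.0, 0.0, 1.0, np.inf]   # b_0 ... b_4 (illustrative)
levels = [-1.5, -0.5, 0.5, 1.5]                  # y_1 ... y_4 (illustrative)

msqe = sum(
    quad(lambda x, y=y: (x - y) ** 2 * norm.pdf(x), lo, hi)[0]
    for (lo, hi), y in zip(zip(boundaries[:-1], boundaries[1:]), levels)
)
print(msqe)   # mean squared quantization error of this quantizer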

Quantization Problem: Rate of the quantizer – the average number of bits required to represent a single quantizer output. For fixed-length coding, the rate R is R = ⌈log₂ M⌉. For variable-length coding, the rate depends on the probability of occurrence of the outputs.

Quantization Problem: Quantizer design problem – fixed-length coding vs. variable-length coding. If l_i is the length of the codeword corresponding to the output y_i, and the probability of occurrence of y_i is P(y_i) = ∫_{b_{i−1}}^{b_i} f_X(x) dx, then the rate is given by R = Σ_{i=1…M} l_i P(y_i).
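A small sketch of the two rate formulas; the output probabilities and codeword lengths below are hypothetical:

import numpy as np

M = 4
P = np.array([0.50, 0.25, 0.15, 0.10])   # hypothetical P(y_i); must sum to 1
l = np.array([1, 2, 3, 3])               # hypothetical codeword lengths l_i (a valid prefix code)

R_fixed = int(np.ceil(np.log2(M)))       # fixed-length coding: R = ceil(log2 M) = 2 bits/output
R_variable = float(np.sum(P * l))        # variable-length coding: R = sum_i l_i P(y_i) = 1.75 bits/output
print(R_fixed, R_variable)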

Uniform Quantization

Quantization Levels

Quantizer: Midtread vs. Midrise

Quantizer: Uniform vs. Nonuniform

Uniform Quantizer. Midtread: zero is one of the output levels (M is odd). Midrise: zero is not one of the output levels (M is even).
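A sketch of the two flavors of uniform quantizer, using an illustrative step size Δ (not taken from the slides):

import numpy as np

delta = 0.5   # illustrative step size

def midtread(x, delta):
    # Zero is an output level (odd number of levels): Q(x) = delta * floor(x/delta + 1/2)
    return delta * np.floor(x / delta + 0.5)

def midrise(x, delta):
    # Zero is not an output level (even number of levels): Q(x) = delta * (floor(x/delta) + 1/2)
    return delta * (np.floor(x / delta) + 0.5)

x = np.array([-0.6, -0.1, 0.0, 0.1, 0.6])
print(midtread(x, delta))   # [-0.5  0.   0.   0.   0.5]  -- 0 appears as an output
print(midrise(x, delta))    # [-0.75 -0.25  0.25  0.25  0.75]  -- outputs never exactly 0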

Uniform Quantization of a Uniformly Distributed Source

Uniform Quantization of a Uniformly Distributed Source

Uniform Quantization of a Non-uniformly Distributed Source

Image Compression: original at 8 bits/pixel vs. quantized to 3 bits/pixel

Image Compression: quantized to 2 bits/pixel vs. 1 bit/pixel

Lloyd-Max Quantization. Problem: for a signal u with given pdf p_u(u), find a quantizer with N representative levels such that the mean squared quantization error is minimized. Solution: the Lloyd-Max quantizer (Lloyd, 1957; Max, 1960). The N−1 decision thresholds lie exactly halfway between the representative levels, and the N representative levels lie at the centroid of the pdf between two successive decision thresholds.
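A minimal sketch of the Lloyd-Max iteration, driven here by samples drawn from an assumed Gaussian source rather than by an analytic pdf (sample means approximate the pdf centroids):

import numpy as np

def lloyd_max(samples, N, iters=50):
    # Iterate the two Lloyd-Max conditions on empirical samples:
    # thresholds halfway between levels, levels = centroid (mean) of each cell.
    r = np.quantile(samples, (np.arange(N) + 0.5) / N)   # initial representative levels
    for _ in range(iters):
        d = (r[:-1] + r[1:]) / 2                          # N-1 decision thresholds
        cells = np.digitize(samples, d)                   # assign each sample to a cell
        for j in range(N):
            members = samples[cells == j]
            if members.size:                              # keep the old level if a cell is empty
                r[j] = members.mean()                     # centroid condition
    return r, (r[:-1] + r[1:]) / 2

rng = np.random.default_rng(0)
samples = rng.normal(size=100_000)        # assumed zero-mean, unit-variance Gaussian source
levels, thresholds = lloyd_max(samples, N=4)
print(levels)     # close to the textbook 4-level Gaussian values, roughly +/-0.45 and +/-1.51

With an analytic pdf, the per-cell sample means would be replaced by the centroid integrals given on the following slides.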

Lloyd-Max Quantizer vs. Best Uniform Quantizer

Optimal Quantization. The optimal reconstruction levels {r_j} in the minimum mean squared error (MMSE) sense are those that minimize the distortion
D = E[(f − Q(f))²] = Σ_{j=0…L−1} ∫_{d_j}^{d_{j+1}} (f − r_j)² p(f) df.

Optimal Quantization (Cont.) If the number of levels L is large, p(f) ≈ p(r_j) within each decision interval, so the optimal reconstruction levels approach the interval midpoints, r_j ≈ (d_j + d_{j+1})/2. If p(f) is uniformly distributed, the optimal quantizer is exactly the uniform quantizer.

Optimal Quantization (Cont.) In general, to minimize D with d_0 = −∞ and d_L = ∞, the Lloyd-Max quantizer must satisfy
r_j = ∫_{d_j}^{d_{j+1}} f p(f) df / ∫_{d_j}^{d_{j+1}} p(f) df (each reconstruction level is the centroid of p(f) over its decision interval), and
d_j = (r_{j−1} + r_j)/2 (each decision level is halfway between the two neighboring reconstruction levels).
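These two conditions follow from setting the partial derivatives of D to zero; a brief sketch of the standard derivation, written in LaTeX notation:

\frac{\partial D}{\partial r_j} = -2 \int_{d_j}^{d_{j+1}} (f - r_j)\, p(f)\, df = 0
  \;\Rightarrow\;
  r_j = \frac{\int_{d_j}^{d_{j+1}} f\, p(f)\, df}{\int_{d_j}^{d_{j+1}} p(f)\, df}

\frac{\partial D}{\partial d_j} = (d_j - r_{j-1})^2\, p(d_j) - (d_j - r_j)^2\, p(d_j) = 0
  \;\Rightarrow\;
  d_j = \frac{r_{j-1} + r_j}{2}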

Optimal Quantization (Cont.) The integration can be replaced by a summation if f is discrete-valued. In practice, various distributions (e.g., uniform, Gaussian, or Laplacian) are used to model the source p(f). If p(f) is unknown, a histogram can be used to obtain p(f) after normalization.
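A short sketch of the histogram route (the Laplacian source and the bin count are arbitrary choices for illustration):

import numpy as np

samples = np.random.default_rng(1).laplace(size=50_000)          # assumed Laplacian source
p_hat, edges = np.histogram(samples, bins=100, density=True)     # density=True normalizes to a pdf estimate
centers = (edges[:-1] + edges[1:]) / 2                           # evaluate the estimate p(f) at bin centers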

Uniform and Optimal Quantization: Uniform Quantization. The error e_q is uniformly distributed over (−Δ/2, Δ/2), with zero mean and variance σ_q² = Δ²/12. Let the range of f be A, so that Δ = A/L and σ_q² = A²/(12L²). If f is uniformly distributed over this range, its variance is σ_f² = A²/12. The signal-to-noise ratio of a uniform quantizer is therefore SNR = 10 log₁₀(σ_f²/σ_q²) = 10 log₁₀(L²) ≈ 6.02 B dB for L = 2^B levels, i.e., about 6 dB per bit.
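A quick simulation check of this roughly 6 dB-per-bit rule for a uniform source and a uniform quantizer (all parameters below are illustrative):

import numpy as np

rng = np.random.default_rng(2)
A = 2.0                                        # source range
f = rng.uniform(-A / 2, A / 2, 1_000_000)      # uniformly distributed source

for B in (2, 4, 6, 8):
    L = 2 ** B
    delta = A / L
    q = delta * (np.floor(f / delta) + 0.5)    # midrise uniform quantizer covering the range
    snr = 10 * np.log10(np.var(f) / np.mean((f - q) ** 2))
    print(B, round(snr, 2))                    # approximately 6.02 * B dB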