CSE 589 Applied Algorithms, Spring 1999: Image Compression, Vector Quantization, Nearest Neighbor Search


CSE 589 - Lecture 14 - Spring 1999

Lossy Image Compression Methods
Basic theory: the trade-off between bit rate and distortion.
Vector quantization (VQ). – An image can be coded as indices into a set of representative blocks, yielding good compression. Requires training.
Wavelet compression. – An image can be decomposed into a low-resolution version plus the detail needed to recover the original. Sending the most significant bits of the wavelet coefficients yields excellent compression.

JPEG Standard
JPEG - Joint Photographic Experts Group
– The current image compression standard. Uses the discrete cosine transform, scalar quantization, and Huffman coding.
– JPEG 2000 will move to wavelet compression.

Barbara: original vs. JPEG, VQ, and Wavelet-SPIHT at a 32:1 compression ratio (.25 bits/pixel from 8 bits).


Distortion
In lossy compression, the original x passes through the encoder to a compressed representation, and the decoder produces a decompressed approximation x'. The common measure of distortion is mean squared error (MSE). Assuming x has n real components (pixels):

  MSE = (1/n) sum_{i=1}^{n} (x_i - x'_i)^2

PSNR
Peak Signal-to-Noise Ratio (PSNR) is the standard way to measure fidelity:

  PSNR = 10 log10( m^2 / MSE )

where m is the maximum possible value of a pixel. For gray-scale images (8 bits per pixel), m = 255. PSNR is measured in decibels (dB); a difference of .5 to 1 dB is said to be perceptible.
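The two formulas above can be sketched directly; this is a minimal illustration on toy pixel lists, not tied to any particular image library:

```python
import math

def mse(original, decompressed):
    """Mean squared error between two equal-length pixel sequences."""
    n = len(original)
    return sum((x - y) ** 2 for x, y in zip(original, decompressed)) / n

def psnr(original, decompressed, m=255):
    """Peak signal-to-noise ratio in dB; m is the maximum pixel value."""
    e = mse(original, decompressed)
    return float("inf") if e == 0 else 10 * math.log10(m * m / e)

x = [52, 55, 61, 66]
y = [54, 55, 60, 70]
print(round(mse(x, y), 2))    # 5.25
print(round(psnr(x, y), 1))   # 40.9
```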

Rate-Fidelity Curve
Plots PSNR as a function of bit rate. Properties: increasing, with decreasing slope.

PSNR is not Everything
VQ-coded image, PSNR = 25.8 dB.

PSNR Reflects Fidelity (1)
VQ at .63 bpp (12.8 : 1 compression ratio).

PSNR Reflects Fidelity (2)
VQ at .31 bpp (25.6 : 1 compression ratio).

PSNR Reflects Fidelity (3)
VQ at .16 bpp (51.2 : 1 compression ratio).

Vector Quantization
Diagram: each block of the source image is matched against a codebook of n codewords (1, 2, ..., n) and replaced by the index i of the nearest codeword; the decoder looks each index up in the same codebook to produce the decoded image.

Vector Quantization Facts
The image is partitioned into a x b blocks.
The codebook has n representative a x b blocks called codewords, each with an index.
Compression is log2(n) / (a x b) bits per pixel (bpp).
Example: a = b = 4 and n = 1,024
– compression is 10/16 = .63 bpp
– compression ratio is 8 : .63 = 12.8 : 1
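The rate arithmetic above is easy to check in code; this sketch just evaluates the formula for the example parameters:

```python
import math

def vq_rate_bpp(a, b, n):
    """Bits per pixel when each a-by-b block is coded as one of n codeword indices."""
    return math.log2(n) / (a * b)

rate = vq_rate_bpp(4, 4, 1024)
print(rate)       # 0.625 bpp, i.e. the .63 bpp in the slide
print(8 / rate)   # 12.8, the 12.8 : 1 ratio for 8-bit pixels
```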

Examples
4 x 4 blocks: .63 bpp
4 x 8 blocks: .31 bpp
8 x 8 blocks: .16 bpp
Codebook size n = 1,024.

Encoding and Decoding
Encoding:
– Scan the a x b blocks of the image. For each block, find the nearest codeword in the codebook and output its index.
– This is a nearest neighbor search.
Decoding:
– For each index, output the codeword with that index into the destination image.
– This is a table lookup.
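The encode/decode loop can be sketched as follows. This is a toy illustration with 1 x 2 blocks stored as tuples; the codebook and block values are made up for the example:

```python
def nearest_index(block, codebook):
    """Index of the codeword with the smallest squared distance to the block."""
    best_i, best_d = 0, float("inf")
    for i, cw in enumerate(codebook):
        d = sum((x - y) ** 2 for x, y in zip(block, cw))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

def vq_encode(blocks, codebook):
    """Nearest neighbor search per block."""
    return [nearest_index(b, codebook) for b in blocks]

def vq_decode(indices, codebook):
    """Table lookup per index."""
    return [codebook[i] for i in indices]

codebook = [(0, 0), (10, 10), (20, 20)]   # toy 1 x 2 codewords
blocks = [(1, 2), (9, 12), (19, 18)]
idx = vq_encode(blocks, codebook)
print(idx)                                 # [0, 1, 2]
print(vq_decode(idx, codebook))            # [(0, 0), (10, 10), (20, 20)]
```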

The Codebook
Both encoder and decoder must have the same codebook.
The codebook must be useful for many images and must be stored somewhere.
The codebook must be designed properly to be effective. Design requires a representative training set.
These are the major drawbacks of VQ.

Codebook Design Problem
Input: a training set X of vectors of dimension d and a number n (d = a x b, and n is the number of codewords).
Output: n vectors c_1, c_2, ..., c_n that minimize the sum of squared distances from each member of the training set to its nearest codeword, that is, minimize

  sum_{x in X} ||x - c_{n(x)}||^2

where c_{n(x)} is the nearest codeword to x.

Algorithms for Codebook Design
The optimal codebook design problem appears to be NP-hard.
There is a very effective method, the generalized Lloyd algorithm (GLA), for finding a good local minimum. GLA is also known in the statistics community as the k-means algorithm.
GLA is slow.

GLA
Start with an initial codebook c_1, c_2, ..., c_n and training set X.
Iterate:
– Partition X into X_1, X_2, ..., X_n, where X_i contains the members of X that are closest to c_i.
– Let x_i be the centroid of X_i.
– If the change from c_1, c_2, ..., c_n to x_1, x_2, ..., x_n is sufficiently small, quit; otherwise continue the iteration with the new codebook c_1, c_2, ..., c_n = x_1, x_2, ..., x_n.
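The two-step iteration above can be sketched as plain Python; this is a minimal k-means-style implementation on toy 2-D tuples, with the stopping test taken as the total squared movement of the codewords (one reasonable choice, assumed here):

```python
def squared_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def centroid(vectors):
    k = len(vectors)
    return tuple(sum(c) / k for c in zip(*vectors))

def gla(training, codebook, tol=1e-9, max_iter=100):
    """Generalized Lloyd algorithm: alternate partition and centroid steps."""
    for _ in range(max_iter):
        # Partition step: assign each training vector to its nearest codeword.
        parts = [[] for _ in codebook]
        for x in training:
            i = min(range(len(codebook)), key=lambda j: squared_dist(x, codebook[j]))
            parts[i].append(x)
        # Centroid step: replace each codeword by the centroid of its cell
        # (empty cells keep their old codeword).
        new_cb = [centroid(p) if p else c for p, c in zip(parts, codebook)]
        moved = sum(squared_dist(c, nc) for c, nc in zip(codebook, new_cb))
        codebook = new_cb
        if moved <= tol:
            break
    return codebook

X = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(gla(X, [(0, 0), (10, 10)]))   # [(0.0, 0.5), (10.0, 10.5)]
```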

GLA Example (1)-(10)
Figure sequence: starting from initial codewords c_i and a set of training vectors, the training set is partitioned into the cells X_i around the codewords, the centroid x_i of each cell is computed, the codewords move to the centroids, and the partition/centroid steps repeat until the codewords stop moving.

Codebook
A trained codebook of 1 x 2 codewords. Note: the codewords spread along the diagonal.

GLA Advice
Time per iteration is dominated by the partitioning step, which is m nearest neighbor searches, where m is the training set size.
– Average time per iteration is O(m log n), assuming d is small.
Training set size:
– The training set should have at least 20 training vectors per codeword to get reasonable performance.
– Too small a training set results in "over-training".
The number of iterations can be large.

Nearest Neighbor Search
Preprocess a set V of n vectors in d dimensions into a search data structure.
Input: a query vector q.
Output: the vector v in V that is nearest to q, that is, the v in V that minimizes ||v - q||.

NNS in VQ
Used in codebook design.
– Used in GLA to partition the training set.
– Since codebook design is done seldom, the speed of NNS is not too big an issue there.
Used in VQ encoding.
– Codebook size is commonly 1,000 or more.
– Naive linear search would make encoding too slow.
– Can we do better than linear search?

Naive Linear Search
Keep track of the current best vector, best-v, and the best squared distance, best-squared-d.
– For an unsearched vector v, compute ||v - q||^2 to see if it is smaller than best-squared-d.
– If so, replace best-v.
– If d is moderately large, it is a good idea not to compute the squared distance completely. Bail out when, for some k < d,

  sum_{i=1}^{k} (v_i - q_i)^2 >= best-squared-d.
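The partial-distance trick can be sketched as follows; the codebook values are made up for illustration:

```python
def nearest_partial(q, vectors):
    """Linear search with the partial-distance bail-out described above."""
    best_v, best_sq = None, float("inf")
    for v in vectors:
        sq = 0.0
        for vi, qi in zip(v, q):
            sq += (vi - qi) ** 2
            if sq >= best_sq:        # partial sum already too large: bail out
                break
        else:
            best_v, best_sq = v, sq  # completed the sum: new best
    return best_v, best_sq

codebook = [(0, 0, 0), (5, 5, 5), (1, 1, 0)]
print(nearest_partial((1, 0, 0), codebook))   # ((0, 0, 0), 1.0)
```

For the query (1, 0, 0), the search over (5, 5, 5) stops after the first coordinate, since (5-1)^2 = 16 already exceeds the best squared distance of 1.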

Orchard’s Method
Invented by Orchard (1991).
Uses a "guess codeword": the codeword of an adjacent, already coded vector.
Orchard’s principle:
– If r is the distance between the guess codeword and the query vector q, then the nearest codeword to q must be within distance 2r of the guess codeword.
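A simplified sketch of how the principle prunes the search: precompute, for each codeword, the other codewords sorted by distance to it, then scan the guess codeword's list and stop as soon as a codeword lies farther than 2r from the guess. This is an illustration of the principle, not Orchard's full algorithm:

```python
import math

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def build_neighbor_lists(codebook):
    """For each codeword, the indices of the others sorted by distance to it."""
    return [sorted((j for j in range(len(codebook)) if j != i),
                   key=lambda j: dist(c, codebook[j]))
            for i, c in enumerate(codebook)]

def orchard_search(q, guess, codebook, lists):
    """Nearest codeword to q, scanning only codewords within 2r of the guess."""
    r = dist(q, codebook[guess])
    best, best_d = guess, r
    for j in lists[guess]:
        # The list is sorted by distance to the guess, and by the principle
        # the nearest codeword lies within 2r of it, so we can stop here.
        if dist(codebook[guess], codebook[j]) > 2 * r:
            break
        d = dist(q, codebook[j])
        if d < best_d:
            best, best_d = j, d
    return best

cb = [(0, 0), (1, 0), (10, 10), (3, 0)]
lists = build_neighbor_lists(cb)
print(orchard_search((0.9, 0.0), 0, cb, lists))   # 1
```

With guess index 0 and query (0.9, 0), r = 0.9, so only codewords within 1.8 of (0, 0) are examined; (3, 0) and (10, 10) are skipped.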

Orchard’s Principle
Diagram: the query q lies at distance r from the guess codeword; the nearest codeword must lie within the circle of radius 2r around the guess codeword.