CH 8. Image Compression 8.1 Fundamentals 8.2 Image compression models


CH 8. Image Compression 8.1 Fundamentals 8.2 Image compression models 8.3 Elements of information theory 8.4 Error-free compression 8.5 Lossy compression 8.6 Image compression standards * Reference to Prof. Bebis (UNR)

8.1 Fundamentals Data compression aims to reduce the amount of data while preserving as much information as possible. The goal of image compression is to reduce the amount of data required to represent an image.

Trade-off: information loss vs. compression ratio
Lossless: information preserving, low compression ratios.
Lossy: not information preserving, high compression ratios.

Compression Ratio
Compression ratio: C = n1 / n2, where n1 and n2 are the numbers of information-carrying units (e.g., bits) in the original and compressed representations.

Relative Data Redundancy
Relative data redundancy: R = 1 - 1/C.
Example: C = 10 (10:1 compression) gives R = 0.9, i.e., 90% of the data in the original representation is redundant.
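The two quantities above are straightforward to compute. A minimal sketch (the function names are our own, not from the slides):

```python
def compression_ratio(n1, n2):
    """C = n1 / n2: bits in the original over bits in the compressed form."""
    return n1 / n2

def relative_redundancy(c):
    """R = 1 - 1/C: fraction of the original data that is redundant."""
    return 1.0 - 1.0 / c

# 10:1 compression means 90% of the original data is redundant.
c = compression_ratio(1_000_000, 100_000)
r = relative_redundancy(c)
```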

Types of Data Redundancy
(1) Coding redundancy
(2) Interpixel redundancy
(3) Psychovisual redundancy
Data compression attempts to reduce one or more of these redundancy types.

Coding - Definitions
Code: a list of symbols (letters, numbers, bits, etc.)
Code word: a sequence of symbols used to represent some information (e.g., gray levels).
Code word length: number of symbols in a code word.

Coding - Definitions (cont’d)
For an N x M image with gray levels r_k:
l(r_k): number of bits used to represent r_k; P(r_k): probability of r_k.
Expected (average) code word length: L_avg = sum_k l(r_k) P(r_k) bits/pixel.

8.1.1 Coding Redundancy
Case 1: l(r_k) = constant length (fixed-length code).
Example: with a fixed n-bit code, L_avg = n bits/pixel regardless of the gray-level probabilities.

8.1.1 Coding Redundancy (cont’d)
Case 2: l(r_k) = variable length.
Assigning shorter code words to the more probable gray levels gives L_avg = 2.7 bits/pixel in the example, for a total of 2.7NM bits.
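The L_avg computation can be sketched as follows. The probabilities and code lengths below are illustrative values chosen to reproduce the 2.7 bits/pixel figure quoted above; the slide's own table did not survive extraction:

```python
def average_code_length(probs, lengths):
    """L_avg = sum_k l(r_k) * P(r_k), in bits per pixel."""
    return sum(l * p for l, p in zip(lengths, probs))

# Hypothetical gray-level probabilities and code lengths (illustrative only).
probs  = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
fixed  = [3] * 8                     # 3-bit fixed-length code for 8 levels
varlen = [2, 2, 2, 3, 4, 5, 6, 6]    # shorter codes for the likelier levels

avg_fixed = average_code_length(probs, fixed)   # 3.0 bits/pixel
avg_var   = average_code_length(probs, varlen)  # 2.7 bits/pixel
```

The variable-length code beats the fixed-length code exactly because code lengths track the probabilities.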

8.1.2 Interpixel Redundancy
Interpixel redundancy implies that pixel values are correlated (i.e., a pixel value can be reasonably predicted from its neighbors).
[Figure: histograms and auto-correlations of two images; for the auto-correlation, f(x) = g(x).]

8.1.2 Interpixel Redundancy (cont’d)
To reduce interpixel redundancy, some transformation must be applied to the data (e.g., thresholding, DFT, DWT).
Example: threshold the original image to a binary image (e.g., a row 11…0000…11…000…), then run-length code each row as (value, run length) pairs of (1+10) bits each.
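The threshold-then-run-length idea above can be sketched as follows (a minimal 1-D version; the function name is our own):

```python
def run_length_encode(bits):
    """Encode a binary sequence as (value, run_length) pairs."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        # Extend the run while the value stays the same.
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

row = [1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0]
pairs = run_length_encode(row)
# → [(1, 2), (0, 4), (1, 2), (0, 3)]
```

With (1+10) bits per pair, long runs of identical pixels compress very well.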

8.1.3 Psychovisual redundancy The human eye is more sensitive to the lower frequencies than to the higher frequencies in the visual spectrum. Idea: discard data that is perceptually insignificant!

8.1.3 Psychovisual Redundancy (cont’d)
Example: quantization from 256 gray levels to 16 gray levels gives C = 8/4 = 2:1.
Straight quantization produces visible false contours; adding a small pseudo-random number to each pixel prior to quantization (16 gray levels + random noise) trades them for less objectionable noise.
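The quantization step, with and without the pseudo-random noise, might be sketched like this (a per-pixel toy version under our own naming, not the slide's implementation):

```python
import random

def quantize(pixel, levels=16, maxval=256):
    """Uniformly quantize an 8-bit value to the given number of levels."""
    step = maxval // levels              # 16 gray values per bin
    return (pixel // step) * step

def quantize_dithered(pixel, levels=16, maxval=256, rng=random):
    """Add small pseudo-random noise before quantizing, to break up
    the false contours that plain quantization produces."""
    step = maxval // levels
    noisy = pixel + rng.uniform(-step / 2, step / 2)
    noisy = min(max(noisy, 0), maxval - 1)  # clamp to valid range
    return (int(noisy) // step) * step
```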

8.1.4 Fidelity Criteria
How close is the reconstructed image f^(x,y) to the original f(x,y)?
Criteria:
Subjective: based on human observers.
Objective: mathematically defined criteria.

Subjective Fidelity Criteria

Objective Fidelity Criteria
Root-mean-square (RMS) error:
e_rms = [ (1/MN) sum_x sum_y ( f^(x,y) - f(x,y) )^2 ]^(1/2)
Mean-square signal-to-noise ratio:
SNR_ms = sum_x sum_y f^(x,y)^2 / sum_x sum_y ( f^(x,y) - f(x,y) )^2
where f^(x,y) denotes the decompressed image.
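Both criteria translate directly into code. A sketch over flattened pixel lists (a real implementation would operate on 2-D arrays; names are our own):

```python
import math

def rms_error(f, g):
    """Root-mean-square error between original f and reconstruction g."""
    n = len(f)
    return math.sqrt(sum((gv - fv) ** 2 for fv, gv in zip(f, g)) / n)

def snr_ms(f, g):
    """Mean-square signal-to-noise ratio: sum(g^2) / sum((g - f)^2)."""
    num = sum(gv ** 2 for gv in g)
    den = sum((gv - fv) ** 2 for fv, gv in zip(f, g))
    return num / den

f = [100, 100]
g = [101, 99]
e = rms_error(f, g)   # 1.0
s = snr_ms(f, g)      # 10001.0
```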

8.2 Image Compression Models
The encoder maps the input image (e.g., via a transform), quantizes it, and assigns code words; the decoder applies the inverse steps. Note that quantization is irreversible in general.

8.3 Elements of Information Theory
A key question in image compression: how do we measure the information content of an image? What is the minimum amount of data sufficient to describe an image completely, without loss of information?

8.3.1 Measuring Information
We assume that information generation is a probabilistic process.
Idea: associate information with probability!
A random event E with probability P(E) contains I(E) = log(1/P(E)) = -log P(E) units of information.
Note: I(E) = 0 when P(E) = 1.

How much information does a pixel contain?
Suppose that gray-level values are generated by a random process; then observing r_k conveys I(r_k) = -log2 P(r_k) bits of information (assuming statistically independent random events).

How much information does a pixel contain? (cont’d)
Average information content of an image (entropy):
H = -sum_k P(r_k) log2 P(r_k), in units/pixel (e.g., bits/pixel for base-2 logarithms).
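The entropy formula above can be evaluated from an image's gray-level frequencies. A minimal sketch over a flat list of pixel values:

```python
import math
from collections import Counter

def entropy(pixels):
    """First-order entropy H = -sum_k P(r_k) log2 P(r_k), bits/pixel."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h_coin = entropy([0, 1, 0, 1])   # two equiprobable levels: 1.0 bit/pixel
h_flat = entropy([5] * 8)        # constant image: 0 bits/pixel
```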

8.3.4 Using Information Theory
Redundancy: R = L_avg - H, where L_avg is the average code word length.
Note: if L_avg = H, then R = 0 (no redundancy).

Entropy Estimation
It is not easy to estimate H reliably!
[Figure: sample image.]

Entropy Estimation (cont’d)
First-order estimate of H: use the normalized gray-level histogram as P(r_k).
With L_avg = 8 bits/pixel, R = L_avg - H.
The first-order estimate provides only a lower bound on the compression that can be achieved.

Entropy Estimation (cont’d)
Second-order estimate of H: use the relative frequencies of pixel blocks (adjacent pairs) rather than of individual pixels.
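A second-order estimate can be sketched by histogramming pixel pairs. This version uses non-overlapping blocks, an assumption on our part, and normalizes to bits per pixel:

```python
import math
from collections import Counter

def block_entropy(pixels, block=2):
    """Entropy of non-overlapping pixel blocks, normalized per pixel."""
    blocks = [tuple(pixels[i:i + block])
              for i in range(0, len(pixels) - block + 1, block)]
    counts = Counter(blocks)
    n = len(blocks)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / block   # bits per pixel

# A strictly alternating signal: 1 bit/pixel first order, but every
# adjacent pair is (0, 1), so the second-order estimate is 0.
h2 = block_entropy([0, 1] * 8)
```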

Differences in Entropy Estimates
Differences between the higher-order entropy estimates and the first-order estimate indicate the presence of interpixel redundancy; some transformation is needed to remove it.

Differences in Entropy Estimates (cont’d)
For example, consider pixel differences: replace each pixel by its difference from the preceding pixel (keeping the first pixel of each row).

Differences in Entropy Estimates (cont’d)
What is the entropy of the difference image? Its first-order estimate is lower than that of the original image (H = 1.81), so coding the differences is more efficient. An even better transformation is possible, since the second-order entropy estimate is lower still.
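The effect of differencing on the first-order entropy can be demonstrated on a toy 1-D signal (the ramp data and function names below are our own illustration, not the slides' image):

```python
import math
from collections import Counter

def entropy(values):
    """First-order entropy in bits per value."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def pixel_differences(row):
    """First differences; a smooth row maps to many near-zero values."""
    return [row[0]] + [b - a for a, b in zip(row, row[1:])]

ramp = list(range(16)) * 4               # slowly varying signal, 64 samples
h_orig = entropy(ramp)                   # 4.0 bits (16 equiprobable levels)
h_diff = entropy(pixel_differences(ramp))  # far lower: mostly +1 differences
```

Differencing concentrates the probability mass on a few values, which is exactly what drives the entropy down.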