Digital Image Processing Lecture 22: Image Compression


Digital Image Processing Lecture 22: Image Compression Prof. Charlene Tsai *Section 8.4 in Gonzalez

Starting with Information Theory Data compression: the process of reducing the amount of data required to represent a given quantity of information. Data ≠ information: data convey the information, and different amounts of data can be used to represent the same amount of information — e.g. two people telling the same story at different lengths (Gonzalez p. 411). This is data redundancy; our focus will be coding redundancy.

Coding Redundancy Again, we're back to the gray-level histogram for data (code) reduction. Let rk be a gray level with occurrence probability pr(rk). If l(rk) is the number of bits used to represent rk, the average number of bits per pixel is

Lavg = Σk l(rk) pr(rk), summed over all L gray levels k = 0, …, L−1.

Example on Variable-Length Coding The average length for code 1 is 3 bits/pixel, and for code 2 it is 2.7 bits/pixel. The compression ratio is C = 3/2.7 ≈ 1.11, and the relative data redundancy is R = 1 − 1/C ≈ 0.1, i.e. about 10% of the data in code 1 is redundant.
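The arithmetic above can be checked with a minimal Python sketch (the lecture contains no code; the function name `compression_stats` is my own):

```python
def compression_stats(l_avg_old, l_avg_new):
    """Compression ratio C = old/new average code length,
    and relative data redundancy R = 1 - 1/C."""
    c = l_avg_old / l_avg_new
    return c, 1 - 1 / c

# Code 1 averages 3 bits/pixel, code 2 averages 2.7 bits/pixel:
c, r = compression_stats(3.0, 2.7)
print(round(c, 3), round(r, 3))  # C is about 1.111, R about 0.1 (10% redundant)
```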

Information Theory Information theory provides the mathematical framework for data compression. The generation of information is modeled as a probabilistic process. A random event E that occurs with probability p(E) contains

I(E) = log(1/p(E)) = −log p(E)

units of information (its self-information).

Some Intuition I(E) is inversely related to p(E). If p(E) = 1, then I(E) = 0: no uncertainty is associated with the event, so no information is transferred by E. Take the letters "a" and "q" as an example: p("a") is high, so I("a") is low; p("q") is low, so I("q") is high. The base of the logarithm sets the unit used to measure the information; base 2 gives information in bits.
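Self-information is a one-liner; a quick Python sketch of the intuition (function name is mine):

```python
import math

def self_information(p):
    """I(E) = -log2 p(E): bits of information carried by an event
    of probability p. Base 2 gives the answer in bits."""
    return -math.log2(p)

print(self_information(0.25))  # a 1-in-4 event carries 2.0 bits
print(self_information(0.5))   # a fair coin flip carries 1.0 bit
```

Note that `self_information(1.0)` is 0: a certain event transfers no information, matching the slide.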

Entropy A measure of the amount of information. Formal definition: the entropy H of an image is the theoretical minimum number of bits/pixel required to encode the image without loss of information:

H = −Σi pi log2 pi

where i ranges over the gray levels of the image and pi is the probability of gray level i occurring in the image. No matter what coding scheme is used, it will never use fewer than H bits per pixel on average.
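The definition translates directly into Python; a minimal sketch (my own, estimating pi from the gray-level histogram of a pixel list):

```python
import math

def entropy(pixels):
    """Shannon entropy in bits/pixel, estimated from the gray-level
    histogram of a flat list of pixel values."""
    counts = {}
    for g in pixels:
        counts[g] = counts.get(g, 0) + 1
    n = len(pixels)
    # H = -sum over gray levels of p_i * log2(p_i)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy([0, 0, 1, 1]))  # two equally likely levels -> 1.0 bit/pixel
```

A constant image has entropy 0: with only one gray level there is nothing to encode.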

Variable-Length Coding Lossless compression. Instead of a fixed-length code, we use a variable-length code: shorter codewords for more probable gray values. Two methods: Huffman coding and arithmetic coding. We'll go through the first method.

Huffman Coding The most popular technique for removing coding redundancy. Steps: 1) Determine the probability of each gray value in the image. 2) Form a binary tree by adding probabilities two at a time, always taking the two lowest available values. 3) Assign 0 and 1 arbitrarily to the two branches at each node, starting from the root. 4) Read each code from the root down to its leaf.
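The four steps can be sketched with Python's `heapq`, which keeps the two lowest probabilities at hand (a sketch of the technique, not the lecture's own code; the probabilities are those of the example that follows):

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code for probs, a dict mapping symbol -> probability."""
    # Heap entries: (weight, tie-breaker, {symbol: code-so-far}).
    # The integer tie-breaker keeps heapq from ever comparing the dicts.
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # the two lowest available values
        p2, _, c2 = heapq.heappop(heap)
        # Prepend 0 to one subtree's codes and 1 to the other's
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

probs = {0: 0.19, 1: 0.25, 2: 0.21, 3: 0.16,
         4: 0.08, 5: 0.06, 6: 0.03, 7: 0.02}
codes = huffman_code(probs)
avg = sum(probs[s] * len(codes[s]) for s in probs)
print(avg)  # optimal average length for these probabilities, 2.7 bits/pixel
```

Ties can be broken differently and still yield a different but equally optimal tree; the average length is always the minimum.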

Example The average rate is 2.7 bits per pixel, much better than the original 3, and close to the theoretical minimum (the entropy, about 2.65 bits). How do we decode the string 11011101111100111110? Huffman codes are uniquely decodable: no codeword is a prefix of another, so the bits can be matched left to right.

Gray value (probability) → Huffman code
0 (0.19) → 00
1 (0.25) → 10
2 (0.21) → 01
3 (0.16) → 110
4 (0.08) → 1110
5 (0.06) → 11110
6 (0.03) → 111110
7 (0.02) → 111111
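Decoding exploits the prefix property: scan bits left to right and emit a symbol as soon as the accumulated bits match a codeword. A Python sketch using the code table from this slide:

```python
# The code table from the slide, keyed by codeword
codes = {"00": 0, "10": 1, "01": 2, "110": 3, "1110": 4,
         "11110": 5, "111110": 6, "111111": 7}

def huffman_decode(bits, table):
    """Decode a bit string with a prefix-free code table."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in table:          # prefix property: first match is the symbol
            out.append(table[buf])
            buf = ""
    return out

print(huffman_decode("11011101111100111110", codes))  # [3, 4, 6, 2, 5]
```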

LZW (Lempel-Ziv-Welch) Coding Lossless compression; the compression scheme used in GIF, TIFF and PDF. For 8-bit grayscale images, the first 256 dictionary words are assigned to the gray levels 0, 1, …, 255. As the encoder scans the image, gray-level sequences not yet in the dictionary are placed in the next available location. The encoded output consists of dictionary addresses.

Example Consider the 4×4, 8-bit image of a vertical edge (all four rows identical):

39 39 126 126
39 39 126 126
39 39 126 126
39 39 126 126

A 512-word dictionary starts with the content:

Dictionary location → Entry
0 … 255 → gray levels 0 … 255
256 … 511 → (initially empty)

To decode, read the encoded-output column of the coding table from top to bottom: the decoder rebuilds the same dictionary on the fly and translates each address back into its gray-level sequence.
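The scan-and-grow dictionary logic can be sketched in Python (a sketch under the slide's assumptions: sequences are stored as tuples of gray values, and the dictionary is capped at 512 words as in the example):

```python
def lzw_encode(pixels):
    """LZW-encode a sequence of 8-bit gray values with a 512-word dictionary."""
    dictionary = {(g,): g for g in range(256)}   # locations 0..255 preloaded
    nxt = 256                                    # next available location
    out, cur = [], ()
    for g in pixels:
        probe = cur + (g,)
        if probe in dictionary:
            cur = probe                  # keep extending the known sequence
        else:
            out.append(dictionary[cur])  # emit address of the longest match
            if nxt < 512:
                dictionary[probe] = nxt  # place new sequence in the dictionary
                nxt += 1
            cur = (g,)
    if cur:
        out.append(dictionary[cur])      # flush the final sequence
    return out

image = [39, 39, 126, 126] * 4           # the 4x4 vertical-edge image, row by row
print(lzw_encode(image))                 # [39, 39, 126, 126, 256, 258, 260, 259, 257, 126]
```

Sixteen pixels compress to ten 9-bit addresses; the repeating rows are captured by entries 256-260.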

Run-Length Encoding (1D) Lossless compression. Encodes strings of 0s and 1s by the number of repetitions in each run. A standard in fax transmission. There are many versions of RLE.

(cont'd) Consider the 6×6 binary image:

0 1 1 0 0 0
0 0 1 1 1 0
1 1 1 0 0 1
0 1 1 1 1 0
0 0 0 1 1 1
1 0 0 0 1 1

Method 1 (per-row run lengths, always starting with the count of 0s): (123)(231)(0321)(141)(33)(0132)
Method 2 (start position and length of each run of 1s): (22)(33)(1361)(24)(43)(1152)
For a grayscale image, break the image up into its bit planes first.
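Method 1 can be sketched as follows (my own Python sketch; each row is encoded as run lengths that always begin with the count of 0s, which is 0 when the row starts with a 1):

```python
def rle_row(row):
    """Run lengths for one binary row; the first count is always for 0s
    (so it is 0 when the row starts with a 1)."""
    runs, value, count = [], 0, 0
    for bit in row:
        if bit == value:
            count += 1
        else:
            runs.append(count)       # close the current run
            value, count = bit, 1    # start a run of the other symbol
    runs.append(count)
    return runs

print(rle_row([0, 1, 1, 0, 0, 0]))  # [1, 2, 3]
print(rle_row([1, 1, 1, 0, 0, 1]))  # [0, 3, 2, 1]
```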

Problem with grayscale RLE Long runs of very similar gray values give a very good compression rate only if neighboring values share the same bits. That is not the case for a 4-bit image consisting of randomly distributed 7s and 8s: 7 (0111) and 8 (1000) differ in every bit, so every bit plane breaks into short runs. One solution is to use Gray codes.

Example (Gonzalez p. 400) For a 4-bit image of randomly distributed 7s and 8s: Binary encoding: 8 is 1000 and 7 is 0111, so every binary bit plane (0th, 1st, 2nd and 3rd) flips at each 7↔8 transition — the planes are uncorrelated. Gray-code encoding: 8 is 1100 and 7 is 0100, so the 0th and 1st Gray-code bit planes are all 0s and the 2nd is all 1s — highly correlated — and only the 3rd Gray-code bit plane varies with the data.
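The binary-to-Gray-code conversion behind these values is g = n XOR (n >> 1); a quick Python check of the slide's 7/8 example:

```python
def binary_to_gray(n):
    """Gray code of n: consecutive integers differ in exactly one bit."""
    return n ^ (n >> 1)

print(format(binary_to_gray(8), "04b"))  # 8 -> 1100
print(format(binary_to_gray(7), "04b"))  # 7 -> 0100
```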

Summary Information theory: entropy as the theoretical minimum number of bits per pixel. Lossless compression schemes: Huffman coding, LZW, run-length encoding.