Compression Algorithms


Compression Algorithms CSCI 2720 Spring 2005 Eileen Kraemer

When we last met …
- we looked at string encoding
- and noted that if we use the same number of bits per character of our alphabet, then the number of bits required to encode a character is ceil(log2(sizeof(alphabet)))
- and we don't need to transmit or store the mapping from encodings to characters

What if … the string we encode doesn't use all the letters in the alphabet?
- then ceil(log2(sizeof(set_of_characters_used))) bits per character suffice
- but then we also need to store / transmit the mapping from encodings to characters
- … and the set of characters used is typically close to the size of the alphabet, so the savings are small
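
A small illustration of the fixed-length formula (the function name and example sizes are mine, not the slide's):

    import math

    def fixed_code_bits(num_symbols: int) -> int:
        # bits per character for a fixed-length code over num_symbols symbols
        return max(1, math.ceil(math.log2(num_symbols)))

    print(fixed_code_bits(26))  # a 26-letter alphabet needs 5 bits
    print(fixed_code_bits(10))  # only 10 distinct characters used: 4 bits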

And we also looked at: Huffman encoding
- assumes encoding on a per-character basis
- observation: assigning shorter codes to frequently used characters (which requires assigning longer codes to rarely used characters) can result in overall shorter encodings of strings
- problem: when decoding, we need to know how many bits to read off for each character
- solution: choose an encoding that ensures that no character's encoding is a prefix of any other character's encoding; an encoding tree has this property

A Huffman Encoding Tree
(figure: a tree of total weight 21; the root's 1-branch is the leaf E with frequency 9, and its 0-branch is an internal node of weight 12, which splits into weight 5 (leaves A: 3 and T: 2) and weight 7 (leaves R: 3 and N: 4))

(figure: the same tree with 0/1 labels on the branches, giving the code table)
A 000, T 001, R 010, N 011, E 1

Weighted path length
With the code table A 000, T 001, R 010, N 011, E 1:
weighted path length = Len(code(A))*f(A) + Len(code(T))*f(T) + Len(code(R))*f(R) + Len(code(N))*f(N) + Len(code(E))*f(E)
                     = (3*3) + (3*2) + (3*3) + (3*4) + (1*9)
                     = 9 + 6 + 9 + 12 + 9 = 45
Claim (proof in text): no other prefix encoding can result in a shorter weighted path length.
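
A quick check of the arithmetic (a throwaway snippet, not from the slides):

    codes = {"A": "000", "T": "001", "R": "010", "N": "011", "E": "1"}
    freqs = {"A": 3, "T": 2, "R": 3, "N": 4, "E": 9}
    print(sum(len(codes[c]) * freqs[c] for c in codes))  # 45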

Taking a step back … Why do we need compression?
- the rate of creation of image and video data is enormous
- image data from a digital camera: today 1k by 1.5k pixels is common = 1.5 mbytes; we need 2k by 3k to equal a 35mm slide = 6 mbytes
- video: even at the low resolution of 512 by 512 pixels, 3 bytes per pixel, and 30 frames/second (the rate is worked out on the next slide)

Compression basics: video data rate
- raw video at 512 by 512, 3 bytes per pixel, 30 frames/second = 23.6 mbytes/second
- 2 hours of video = 169 gigabytes
- mpeg-1 compresses 23.6 mbytes/second down to 187 kbytes per second, i.e. 169 gigabytes down to 1.3 gigabytes
- compression is essential for both storage and transmission of data
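
The data-rate arithmetic, spelled out (using the slide's 512 by 512, 3 bytes/pixel, 30 fps figures):

    raw_rate = 512 * 512 * 3 * 30     # 23,592,960 bytes/s, i.e. ~23.6 mbytes/second
    raw_2h   = raw_rate * 2 * 3600    # ~169.9 gigabytes for 2 hours
    mpeg1_2h = 187_000 * 2 * 3600     # ~1.35 gigabytes at 187 kbytes/second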

Compression basics
- compression is very widely used: JPEG and GIF for single images, MPEG-1/2/4 for video sequences, zip for computer data, MP3 for sound
- it is based on two fundamental principles: spatial coherence (similarity with a spatial neighbour) and temporal coherence (similarity with a temporal neighbour)

Basics of compression
- character = basic data unit in the input stream (may represent a byte, a bit, etc.)
- strings = sequences of characters
- encoding = compression; decoding = decompression
- codeword = data element used to represent an input character or character string
- codetable = list of codewords

Codewords
- the encoder/compressor takes characters/strings as input and uses the codetable to decide which codewords to produce
- the decoder/decompressor takes codewords as input and uses the same codetable to decide which characters/strings to produce

Codetable
- clearly both encoder and decoder must use the same codetable
- the encoded data is passed as a series of codewords; the codetable must also be passed, explicitly or implicitly
- that is, we either pass it across, agree on it beforehand (hard-wired), or recreate it from the codewords (clever!)

Basic definitions
- compression ratio = size of original data / size of compressed data; basically, the higher the compression ratio the better
- lossless compression: the output data is exactly the same as the input data; essential for encoding computer-processed data
- lossy compression: the output data is not the same as the input data; acceptable for data that is only viewed or heard

Lossless versus lossy
- the human visual system is less sensitive to high-frequency losses and to losses in colour, so lossy compression is acceptable for visual data
- the degree of loss is usually a parameter of the compression algorithm
- tradeoff: loss versus compression; higher compression => more loss, lower compression => less loss

Symmetric versus asymmetric
- symmetric: encoding time == decoding time; essential for real-time applications (e.g. video or audio on demand)
- asymmetric: encoding time >> decoding time; OK for write-once, read-many situations

Entropy encoding
- compression that does not take into account what is being compressed
- normally also lossless
- most common types of entropy encoding: run-length encoding, Huffman encoding, modified Huffman (fax, …), Lempel-Ziv

Source encoding
- takes into account the type of data (e.g. visual)
- normally lossy, but can also be lossless
- most common types in use: JPEG and GIF (single images), MPEG (sequences of images, i.e. video), MP3 (sound sequences)
- often uses entropy encoding as a subroutine

Run-length encoding
- one of the simplest and earliest types of compression
- takes advantage of repeating data (called runs); a run is represented by a count along with the original data, e.g. AAAABB => 4A2B
- do you run-length encode a single character? no; instead, use a special prefix character to mark the start of runs

Run-length encoding
- runs are represented as <prefix char><repeat count><run char>
- the prefix char itself becomes <prefix char>1<prefix char>
- we want a prefix char that is not too common in the data
- an example early use is the MacPaint file format
- run-length encoding is lossless and has fixed-length codewords
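
A minimal Python sketch of this scheme; the choice of prefix character, the one-byte count cap, and the run-length threshold are my assumptions, not part of the slide:

    PREFIX = "\x1b"  # assumed prefix char; pick one that is rare in the data

    def rle_encode(data: str) -> str:
        out, i = [], 0
        while i < len(data):
            j = i
            # find the end of the current run (count must fit in one char)
            while j < len(data) and data[j] == data[i] and j - i < 255:
                j += 1
            run = j - i
            if data[i] == PREFIX:
                # the prefix char itself becomes <prefix><count=1><prefix>
                out.append((PREFIX + chr(1) + PREFIX) * run)
            elif run >= 4:
                # a run becomes <prefix><repeat count><run char>; that costs
                # 3 chars, so it is only worthwhile for runs of 4 or more
                out.append(PREFIX + chr(run) + data[i])
            else:
                out.append(data[i] * run)  # short runs stay literal
            i = j
        return "".join(out)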

MacPaint File Format

Run-length encoding
- works best for images with a solid background; a good example of such an image is a cartoon
- does not work as well for natural images, and does not work well for English text
- however, it is almost always a part of a larger compression system

Huffman encoding
- assume we know the frequency of each character in the input stream
- then encode each character as a variable-length bit string, with length inversely proportional to the character's frequency
- variable-length codewords are used; an early example is Morse code
- Huffman produced an algorithm for assigning codewords optimally

Huffman encoding
- input = the probabilities of occurrence of each input character (frequencies of occurrence)
- output = a binary tree: each leaf node is an input character, each branch carries a zero or one bit
- the codeword for a leaf is the concatenation of the bits on the path from the root to that leaf; a codeword is a variable-length bit string
- gives a very good (optimal) compression ratio among per-character codes

Huffman encoding: basic algorithm
1. Mark all characters as free tree nodes.
2. While there is more than one free node:
   - take the two nodes with the lowest frequency of occurrence
   - create a new tree node with these nodes as children and with frequency equal to the sum of their frequencies
   - remove the two children from the free node list and add the new parent to the free node list
3. The last remaining free node is the root of the binary tree used for encoding/decoding.
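
A minimal Python sketch of this algorithm, using a heap as the free-node list (function and variable names are mine; ties may be broken differently than in the slides):

    import heapq

    def huffman_codes(freqs):
        """Build codewords from a {symbol: frequency} map."""
        # heap entries: (frequency, tiebreak, tree); a tree is either a
        # symbol (leaf) or a (left, right) pair (internal node)
        heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:                   # more than one free node
            f1, _, left = heapq.heappop(heap)  # two lowest-frequency nodes
            f2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, count, (left, right)))
            count += 1
        _, _, root = heap[0]                   # last free node = the root
        codes = {}
        def walk(tree, path):
            if isinstance(tree, tuple):
                walk(tree[0], path + "0")      # left branch = 0
                walk(tree[1], path + "1")      # right branch = 1
            else:
                codes[tree] = path or "0"      # lone-symbol edge case
        walk(root, "")
        return codes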

Huffman example
- a series of colours in an 8 by 8 screen
- the colours are red, green, cyan, blue, magenta, yellow, and black (r, g, c, b, m, y, k)
- the sequence is: rkkkkkkk gggmcbrr kkkrrkkk bbbmybbr kkrrrrgg gggggggr kkbcccrr grrrrgrr
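
Feeding this example through the sketch above (the exact codewords depend on tie-breaking, but the total encoded length does not):

    from collections import Counter

    screen = ("rkkkkkkk" "gggmcbrr" "kkkrrkkk" "bbbmybbr"
              "kkrrrrgg" "gggggggr" "kkbcccrr" "grrrrgrr")
    freqs = Counter(screen)           # r and k are the most common colours
    codes = huffman_codes(freqs)      # from the sketch above
    total_bits = sum(len(codes[ch]) for ch in screen)
    print(total_bits, "bits vs", len(screen) * 3, "bits fixed-length")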

Huffman example
(figures: four slides stepping through the tree construction for this sequence; the images did not transcribe)

Fixed versus variable length codewords
- run-length codewords are fixed length
- Huffman codewords are variable length, with length inversely proportional to frequency
- all variable-length compression schemes must have the prefix property: one code cannot be the prefix of another
- the binary tree structure guarantees that this is the case (a leaf node is a leaf node!)

Huffman encoding: advantages and disadvantages
Advantages:
- maximum compression ratio, assuming correct probabilities of occurrence
- easy to implement and fast
Disadvantages:
- needs two passes for both encoder and decoder: one to create the frequency distribution, one to encode/decode the data
- can avoid this by sending the tree (takes time) or by using fixed, unchanging frequencies

Modified Huffman encoding
- if we know the frequencies of occurrence, Huffman works very well
- consider the case of a fax: mostly long white runs with short bursts of black
- so do the following: run-length encode each string of bits on a line, then Huffman encode these run-length codewords, using a predefined frequency distribution
- a combination: run length first, then Huffman

Lempel-Ziv-Welch (LZW)
- the previous methods worked only on characters; LZW works by encoding strings
- some strings are replaced by a single codeword
- for now assume the codeword length is fixed (12 bits)
- for 8-bit characters, the first 256 (or fewer) entries in the table are reserved for the single characters
- the rest of the table (codes 256-4095) represents strings

LZW compression
- the trick is that the string-to-codeword mapping is created dynamically by the encoder, and recreated dynamically by the decoder
- the code table need not be passed between the two
- it is a lossless compression algorithm
- the degree of compression is hard to predict: it depends on the data, but gets better as the codeword table contains more strings

LZW encoder
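
In place of the slide's untranscribed diagram, a minimal Python sketch of the encoder loop it describes, assuming 8-bit input characters and a fixed 4096-entry table (names are mine):

    def lzw_encode(data: str) -> list[int]:
        table = {chr(i): i for i in range(256)}  # codes 0-255 = single chars
        next_code = 256                          # first free string code
        w, out = "", []
        for c in data:
            if w + c in table:
                w += c                           # extend the current match
            else:
                out.append(table[w])             # emit longest known string
                if next_code < 4096:             # table not yet full
                    table[w + c] = next_code     # remember the new string
                    next_code += 1
                w = c
        if w:
            out.append(table[w])                 # flush the final match
        return out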

Demonstrations: a nice animated version of Lempel-Ziv

LZW encoder example: compress the string BABAABAAA
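
Running the encoder sketch above on this string (ASCII B = 66, A = 65):

    print(lzw_encode("BABAABAAA"))
    # [66, 65, 256, 257, 65, 260]
    # new table entries created along the way:
    # 256="BA", 257="AB", 258="BAA", 259="ABA", 260="AA"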

LZW decoder
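
Again the slide's diagram is missing; here is a matching Python sketch of the decoder, which rebuilds the encoder's table one step behind. The code == next_code branch handles the one code that can arrive before the decoder has added it:

    def lzw_decode(codes: list[int]) -> str:
        table = {i: chr(i) for i in range(256)}
        next_code = 256
        w = table[codes[0]]                  # first code is a single char
        out = [w]
        for code in codes[1:]:
            if code in table:
                entry = table[code]
            elif code == next_code:
                entry = w + w[0]             # code the encoder just created
            else:
                raise ValueError("corrupt LZW stream")
            out.append(entry)
            if next_code < 4096:
                table[next_code] = w + entry[0]  # mirror the encoder's insert
                next_code += 1
            w = entry
        return "".join(out)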

Lempel-Ziv compression
- a lossless compression algorithm
- all encodings have the same length, but may represent more than one character
- uses a "dictionary" approach: keeps track of characters and character strings already encountered

LZW decoder example: decompress the string <66><65><256><257><65><260>
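
Decoding the slide's codeword stream with the sketch above; note that the final <260> is exactly the tricky case, since "AA" is not yet in the decoder's table when that code arrives:

    print(lzw_decode([66, 65, 256, 257, 65, 260]))
    # BABAABAAA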

LZW issues
- compression gets better as the code table grows … so what happens when all 4096 locations in the string table are used?
- there are a number of options, but the encoder and decoder must agree to do the same thing:
- do not add any more entries to the table (use it as is)
- clear the codeword table and start again
- clear the codeword table and start again with a larger table / longer codewords (the GIF format does this)

LZW advantages/disadvantages
Advantages:
- simple, fast, and good compression
- can do compression in one pass
- a dynamic codeword table is built for each file
- decompression recreates the codeword table, so it does not need to be passed
Disadvantages:
- not the optimum compression ratio
- actual compression is hard to predict

Entropy methods
- all the previous methods are lossless and entropy-based
- lossless methods are essential for computer data (zip, gzip, etc.)
- the combination of run-length encoding and Huffman is a standard tool
- they are often used as a subroutine by other, lossy methods (JPEG, MPEG)
