Data Structures and Algorithms Huffman compression: An Application of Binary Trees and Priority Queues.


CS 102 Encoding and Compression  Fax Machines  ASCII  Variations on ASCII  min number of bits needed  cost of savings  patterns  modifications

CS 102 Purpose of Huffman Coding  Proposed by Dr. David A. Huffman in 1952  “A Method for the Construction of Minimum-Redundancy Codes”  Applicable to many forms of data transmission  Our example: text files

CS 102 The Basic Algorithm  Huffman coding is a form of statistical coding  Not all characters occur with the same frequency!  Yet all characters are allocated the same amount of space  1 char = 1 byte, be it e or x

CS 102 The Basic Algorithm  Any savings in tailoring codes to frequency of character?  Code word lengths are no longer fixed like ASCII.  Code word lengths vary and will be shorter for the more frequently used characters.

CS 102 The (Real) Basic Algorithm
1. Scan the text to be compressed and tally the occurrences of all characters.
2. Sort or prioritize the characters based on their number of occurrences in the text.
3. Build the Huffman code tree based on the prioritized list.
4. Perform a traversal of the tree to determine all code words.
5. Scan the text again and create the new file using the Huffman codes.
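Step 1 of the algorithm above can be sketched in Java as follows. This is a minimal illustration, not code from the slides; the class and method names are placeholders.

```java
import java.util.HashMap;
import java.util.Map;

public class FrequencyCount {
    // Step 1: tally how often each character occurs in the text.
    static Map<Character, Integer> countFrequencies(String text) {
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : text.toCharArray()) {
            freq.merge(c, 1, Integer::sum); // add 1, starting from 0 if absent
        }
        return freq;
    }

    public static void main(String[] args) {
        Map<Character, Integer> freq = countFrequencies("Eerie eyes seen near lake.");
        System.out.println(freq.get('e')); // prints 8
        System.out.println(freq.get(' ')); // prints 4
    }
}
```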

CS 102 Building a Tree  Consider the following short text:  Eerie eyes seen near lake.  Count up the occurrences of all characters in the text

CS 102 Building a Tree Eerie eyes seen near lake.  What characters are present? E e r i space y s n a l k .

CS 102 Building a Tree Prioritize characters  Create binary tree nodes with character and frequency of each character  Place nodes in a priority queue  The lower the occurrence, the higher the priority in the queue

CS 102 Building a Tree Prioritize characters  Uses binary tree nodes public class HuffNode { public char myChar; public int myFrequency; public HuffNode myLeft, myRight; } priorityQueue myQueue;

CS 102 Building a Tree  The queue after inserting all nodes  Null Pointers are not shown E1E1 i1i1 y1y1 l1l1 k1k1.1.1 r2r2 s2s2 n2n2 a2a2 sp 4 e8e8

CS 102 Building a Tree  While priority queue contains two or more nodes  Create new node  Dequeue node and make it left subtree  Dequeue next node and make it right subtree  Frequency of new node equals sum of frequency of left and right children  Enqueue new node back into queue

CS 102 Building a Tree [Diagram slides: the priority queue is shown before and after each merge. The two lowest-frequency nodes are repeatedly dequeued and joined under a new parent, which is enqueued by its combined frequency: E(1)+i(1)→2, y(1)+l(1)→2, k(1)+.(1)→2, r(2)+s(2)→4, n(2)+a(2)→4, and then the smallest remaining subtrees are combined in the same way.]

CS 102 Building a Tree [Diagram: the queue after several merges.] What is happening to the characters with a low number of occurrences?

CS 102 Building a Tree [Diagram slides: merging continues, always joining the two smallest remaining subtrees and re-enqueueing the new parent, until only two nodes remain in the queue.]

CS 102 Building a Tree [Diagram: the last two subtrees are joined under a new root.] After enqueueing this node there is only one node left in the priority queue.

CS 102 Building a Tree  Dequeue the single node left in the queue. This tree contains the new code words for each character. The frequency of the root node should equal the number of characters in the text: "Eerie eyes seen near lake." has 26 characters. [Diagram: the final Huffman tree, root frequency 26.]

CS 102 Encoding the File Traverse Tree for Codes  Perform a traversal of the tree to obtain the new code words  Going left is a 0; going right is a 1  A code word is complete only when a leaf node is reached [Diagram: the Huffman tree with 0/1 labels on the edges.]
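The traversal can be sketched as follows. A simplified, hypothetical node type with only the fields needed here is assumed; leaves carry a character, internal nodes only children.

```java
import java.util.HashMap;
import java.util.Map;

public class CodeAssigner {
    // Simplified, hypothetical node: a leaf stores a character, an internal
    // node stores only its two children.
    static class HuffNode {
        Character myChar;
        HuffNode myLeft, myRight;

        HuffNode(char c) { myChar = c; }
        HuffNode(HuffNode left, HuffNode right) { myLeft = left; myRight = right; }

        boolean isLeaf() { return myLeft == null && myRight == null; }
    }

    // Append '0' when going left and '1' when going right; a code word is
    // complete only when a leaf is reached.
    static void assignCodes(HuffNode node, String path, Map<Character, String> codes) {
        if (node.isLeaf()) {
            codes.put(node.myChar, path);
            return;
        }
        assignCodes(node.myLeft, path + "0", codes);
        assignCodes(node.myRight, path + "1", codes);
    }

    public static void main(String[] args) {
        // Tiny hand-built tree: root -> (e | (space | r))
        HuffNode root = new HuffNode(new HuffNode('e'),
                new HuffNode(new HuffNode(' '), new HuffNode('r')));
        Map<Character, String> codes = new HashMap<>();
        assignCodes(root, "", codes);
        System.out.println(codes.get('e')); // prints 0
        System.out.println(codes.get('r')); // prints 11
    }
}
```

Because every code word ends at a leaf, no code word is a prefix of another.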

CS 102 Encoding the File Traverse Tree for Codes [Diagram: the Huffman tree.]

Char   Code
E      0000
i      0001
y      0010
l      0011
k      0100
.      0101
space  011
e      10
r      1100
s      1101
n      1110
a      1111

CS 102 Encoding the File  Rescan text and encode file using new code words Eerie eyes seen near lake. CharCode E0000 i0001 y0010 l0011 k space011 e10 r1100 s1101 n1110 a  Why is there no need for a separator character?.

CS 102 Encoding the File Results  Have we made things any better?  84 bits to encode the text (sum of code length × frequency over all characters)  ASCII would take 8 * 26 = 208 bits  A modified fixed-length code for the 12 distinct characters would need 4 bits per character: 4 * 26 = 104 bits in total, so the savings are not as great.

CS 102 Decoding the File  How does receiver know what the codes are?  Tree constructed for each text file.  Considers frequency for each file  Big hit on compression, especially for smaller files  Tree predetermined  based on statistical analysis of text files or file types  Data transmission is bit based versus byte based

CS 102 Decoding the File  Once receiver has tree it scans incoming bit stream  0  go left  1  go right E1E1 i1i1 sp 4 e8e8 2 y1y1 l1l1 2 k1k r2r2 s2s2 4 n2n2 a2a

CS 102 Summary  Huffman coding is a technique used to compress files for transmission  Uses statistical coding  more frequently used symbols have shorter code words  Works well for text and fax transmissions  An application that uses several data structures

Huffman Compression 1) Is the code correct?  Based on the way the tree is formed, it is clear that the code words are valid  The prefix property is assured, since each code word ends at a leaf  All original nodes corresponding to the characters end up as leaves 2) Does it give good compression?  For a block code of N different characters, ⌈log2 N⌉ bits are needed per character  Thus, for a file containing M ASCII characters, 8M bits are needed

Huffman Compression  Given Huffman codes {C0, C1, …, CN-1} for the N characters in the alphabet, each of length |Ci|  Given frequencies {F0, F1, …, FN-1} in the file, where the sum of all frequencies = M  The total bits required for the file is: sum over i from 0 to N-1 of |Ci| * Fi  The overall total depends on the differences in frequencies  The more extreme the differences, the better the compression  If all frequencies are the same, there is no compression  See example from board
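The total-bits sum is a direct loop. As a quick check (an illustrative sketch, not slide code), plugging in the code lengths and frequencies from the "Eerie eyes seen near lake." example gives 84 bits:

```java
public class TotalBits {
    // Total bits for the file = sum over all characters of |Ci| * Fi.
    static int totalBits(int[] codeLengths, int[] frequencies) {
        int total = 0;
        for (int i = 0; i < codeLengths.length; i++) {
            total += codeLengths[i] * frequencies[i];
        }
        return total;
    }

    public static void main(String[] args) {
        // Code lengths and frequencies for E e r i sp y s n a l k . from the example.
        int[] lengths = {4, 2, 4, 4, 3, 4, 4, 4, 4, 4, 4, 4};
        int[] freqs   = {1, 8, 2, 1, 4, 1, 2, 2, 2, 1, 1, 1};
        System.out.println(totalBits(lengths, freqs)); // prints 84
    }
}
```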

Huffman Shortcomings  What is Huffman missing?  Although optimal for single-character compression, Huffman does not take into account patterns / repeated sequences in a file  Ex: a file with 1000 As followed by 1000 Bs, etc., for every ASCII character will not compress at all with Huffman  Yet it seems like this file should be compressible  We can use run-length encoding in this case (see text)  However, run-length encoding is very specific and not generally effective for most files (since they do not typically have long runs of each character)