FAX and TIFF Modified Huffman coding is used in fax machines.


FAX and TIFF

Modified Huffman coding is used in fax machines; TIFF also uses this technique. It combines the variable-length codes of Huffman coding with run-length encoding of repetitive data.

FAXes

A fax line contains only white and black bits. Each line is coded as a series of alternating runs of white and black bits. Runs of 63 bits or fewer are coded with a single terminating code. Runs of 64 or greater require a makeup code prefixed to the terminating code; the makeup codes describe runs in multiples of 64, from 64 to 2560.
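The makeup-plus-terminating split described above can be sketched in Python. This is a minimal illustration; the function name `mh_run_split` is ours, not from the slides.

```python
def mh_run_split(run_length):
    """Split a run length into (makeup, terminating) parts for
    Modified Huffman coding.

    Runs of 63 or fewer need only a terminating code, so the makeup
    part is None. Longer runs take a makeup code for the largest
    multiple of 64 (capped at 2560) plus a terminating code for the
    remainder.
    """
    if run_length <= 63:
        return (None, run_length)
    makeup = min(run_length - run_length % 64, 2560)
    return (makeup, run_length - makeup)
```

For instance, a 1266-bit white run splits into the makeup code for 1216 plus the terminating code for 50.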

Terminating codewords

Run  White      Black          Run  White      Black
  0  00110101   0000110111      32  00011011   000001101010
  1  000111     010             33  00010010   000001101011
  2  0111       11              34  00010011   000011010010
  3  1000       10              35  00010100   000011010011
  4  1011       011             36  00010101   000011010100
  5  1100       0011            37  00001110   000011010101
  6  1110       0010            38  00010111   000011010110
  7  1111       00011           39  00101000   000011010111
  8  10011      000101          40  00101001   000001101100
  9  10100      000100          41  00101010   000001101101
 10  00111      0000100         42  00101011   000011011010
 11  01000      0000101         43  00101100   000011011011
 12  001000     0000111         44  00101101   000001010100
 13  000011     00000100        45  —          000001010101
 14  110100     00000111        46  00000101   000001010110
 15  110101     000011000       47  00001010   000001010111
 16  101010     0000010111      48  00001011   000001100100
 17  101011     0000011000      49  01010010   000001100101
 18  0100111    0000001000      50  01010011   000001010010
 19  0001100    00001100111     51  01010100   000001010011
 20  0001000    00001101000     52  01010101   000000100100
 21  0010111    00001101100     53  00100100   000000110111
 22  0000011    00000110111     54  00100101   000000111000
 23  —          00000101000     55  01011000   000000100111
 24  0101000    00000010111     56  01011001   000000101000
 25  0101011    00000011000     57  01011010   000001011000
 26  0010011    000011001010    58  01011011   000001011001
 27  0100100    000011001011    59  01001010   000000101011
 28  0011000    000011001100    60  01001011   000000101100
 29  00000010   000011001101    61  00110010   000001011010
 30  00000011   000001101000    62  001110011  000001100110
 31  00011010   000001101001    63  00110100   000001100111

(— marks a codeword missing from the source.)

Makeup codewords

 Run  White      Black
  64  11011      000000111
 128  10010      00011001000
 192  010111     000011001001
 256  0110111    000001011011
 320  00110110   000000110011
 384  00110111   000000110100
 448  01100100   000000110101
 512  01100101   0000001101100
 576  01101000   0000001101101
 640  01100111   0000001001010
 704  011001100  0000001001011
 768  011001101  0000001001100
 832  011010010  0000001001101
 896  101010011  0000001110010
 960  011010100  0000001110011
1024  011010101  0000001110100
1088  011010110  0000001110101
1152  011010111  0000001110110
1216  011011000  0000001110111
1280  011011001  0000001010010
1344  011011010  0000001010011
1408  011011011  0000001010100
1472  010011000  0000001010101
1536  010011001  0000001011010
1600  010011010  0000001011011
1664  011000     0000001100100
1728  010011011  0000001100101

Extended makeup codewords (same code for white and black):

 Run  Code
1792  00000001000
1856  00000001100
1920  00000001101
1984  000000010010
2048  000000010011
2112  000000010100
2176  000000010101
2240  000000010110
2304  000000010111
2368  000000011100
2432  000000011101
2496  000000011110
2560  000000011111

 EOL  000000000001

White and Black pixels

Studies have shown that most facsimiles are about 85 percent white, so white runs are given the shorter codewords. Faxes assume that every line begins with a run of white bits; if a line actually begins with black, a run of white bits of length 0 must begin the encoded line.
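Extracting the alternating runs from a scan line, with the mandatory leading white run, can be sketched as follows. This assumes pixels are given as 0 = white and 1 = black; the function name is illustrative.

```python
def line_to_runs(pixels):
    """Turn a scan line (0 = white, 1 = black) into alternating run
    lengths, always starting with a white run.

    If the line actually starts with black, a zero-length white run
    is emitted first, as the fax format requires.
    """
    runs = []
    color = 0           # convention: every line starts with white
    i = 0
    while i < len(pixels):
        j = i
        while j < len(pixels) and pixels[j] == color:
            j += 1
        runs.append(j - i)  # may be 0, but only for the leading white run
        color ^= 1          # alternate white/black
        i = j
    return runs
```

A line starting with black, such as `[1, 1, 0]`, yields `[0, 2, 1]`: a zero-length white run, two black bits, one white bit.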

Error Recovery

Each scan line ends with a special EOL (end-of-line) code consisting of eleven zeros and a 1 (000000000001). No codeword has more than seven leading zeros, and no codeword has more than three trailing zeros, so eleven consecutive zeros can occur only in an EOL code.

Error Recovery (cont.)

When the decoder sees eleven zeros, it recognizes an end of line and continues scanning for a 1; upon receiving the 1, it starts a new line. If a bit within a scanned line gets corrupted, at most the rest of that line is lost. If the EOL code itself gets corrupted, at most the next line is lost.
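The resynchronization rule above can be sketched as a scan over a bit string (the name `find_eol` is ours):

```python
def find_eol(bits, start=0):
    """Scan a bit string for an EOL: at least eleven consecutive '0's
    followed by a '1'. Returns the index just past the closing '1',
    or -1 if no EOL is found.

    Extra zeros before the '1' are absorbed, which is how a decoder
    resynchronizes on the next line after corrupted data.
    """
    zeros = 0
    for i in range(start, len(bits)):
        if bits[i] == '0':
            zeros += 1
        else:
            if zeros >= 11:
                return i + 1
            zeros = 0
    return -1
```

Note that a longer run of zeros, as after a corrupted stretch, still resolves to the same EOL once the 1 arrives.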

Example

Encoding a line that begins with a black pixel (so a zero-length white run comes first):

   0 white   00110101
   1 black   010
   4 white   1011
   2 black   11
   1 white   0111
   1 black   010
1266 white   011011000 + 01010011   (makeup 1216 + terminating 50)
        EOL  000000000001
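The example line above can be reproduced with a small encoder. The code tables below contain only the entries this example needs, copied from the terminating and makeup tables earlier in the deck; the names are illustrative.

```python
# Codewords from the slides, restricted to this example's runs.
WHITE_TERM = {0: '00110101', 1: '0111', 4: '1011', 50: '01010011'}
BLACK_TERM = {1: '010', 2: '11'}
WHITE_MAKEUP = {1216: '011011000'}
EOL = '000000000001'

def encode_line(runs):
    """Encode (length, color) runs, color being 'w' or 'b'.
    Only white runs here exceed 63 bits, so only they get makeup codes."""
    out = []
    for length, color in runs:
        if color == 'w' and length > 63:
            makeup = length - length % 64
            out.append(WHITE_MAKEUP[makeup] + WHITE_TERM[length - makeup])
        elif color == 'w':
            out.append(WHITE_TERM[length])
        else:
            out.append(BLACK_TERM[length])
    out.append(EOL)
    return ' '.join(out)
```

Feeding in the runs from the example slide yields the same codeword sequence, with the 1266-bit white run emitted as makeup 1216 followed by terminating 50.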