Lecture 11 The Noiseless Coding Theorem (Section 3.4)

Idea

The average codeword length can never be better (smaller) than the entropy of the source (Version 1 of the Noiseless Coding Theorem). It also never has to be worse than entropy + 1 (Version 2 of the Noiseless Coding Theorem). By encoding extensions of a source S, that is, blocks of symbols rather than individual symbols, we can bring the average codeword length per symbol as close to the entropy as desired (Version 3 of the Noiseless Coding Theorem). In other words, entropy is the best we can achieve when seeking efficiency of encoding.
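
For a concrete illustration (a worked example with an assumed two-symbol source, not taken from the text): let S emit symbols a and b with probabilities 0.7 and 0.3, so H(S) ≈ 0.881 bits. Any uniquely decipherable binary code for single symbols has average length at least 1 bit, but a Huffman code for the second extension S² (the pairs aa, ab, ba, bb with probabilities 0.49, 0.21, 0.21, 0.09) uses codeword lengths 1, 2, 3, 3, for an average of 1.81 bits per pair, i.e. about 0.905 bits per source symbol, already much closer to H(S).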

The Noiseless Coding Theorem

MinAveCodeLen(S) denotes the minimum average codeword length among all uniquely decipherable binary encoding schemes for the source S, and Sⁿ denotes the n-th extension of S. For any source S we have:

Version 1. H(S) ≤ MinAveCodeLen(S)

Version 2. H(S) ≤ MinAveCodeLen(S) ≤ H(S) + 1

Version 3. H(S) ≤ MinAveCodeLen(Sⁿ)/n ≤ H(S) + 1/n
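
The bounds can be checked numerically. The sketch below is a minimal illustration, not part of the lecture; the source distribution and the helper names are assumptions made for this example. It computes the Huffman average codeword length for the n-th extension Sⁿ of a small memoryless source and prints the per-symbol value together with the Version 3 bounds.

```python
# A minimal sketch (not from the lecture): numerically check the Noiseless
# Coding Theorem for an assumed memoryless source.
import heapq
import itertools
import math

def entropy(probs):
    """Binary entropy H(S) of a source with the given symbol probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_average_length(probs):
    """Average codeword length of an optimal (Huffman) binary code.

    Uses the fact that the average length equals the sum of the
    probabilities of all internal nodes of the Huffman tree.
    """
    heap = [(p, i) for i, p in enumerate(probs)]  # index breaks ties
    heapq.heapify(heap)
    next_id = len(probs)
    total = 0.0
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, _ = heapq.heappop(heap)
        total += p1 + p2                      # new internal node
        heapq.heappush(heap, (p1 + p2, next_id))
        next_id += 1
    return total

source = [0.7, 0.3]            # assumed example source S
H = entropy(source)

for n in (1, 2, 3, 4):
    # Probabilities of the n-th extension S^n (independent symbols).
    ext = [math.prod(block) for block in itertools.product(source, repeat=n)]
    per_symbol = huffman_average_length(ext) / n
    print(f"n={n}: H(S)={H:.4f} <= {per_symbol:.4f} <= H(S)+1/n={H + 1/n:.4f}")
```

For the assumed source (0.7, 0.3) the per-symbol average is exactly 1 bit at n = 1 and stays within 1/n of H(S) ≈ 0.881 for larger n, as Version 3 guarantees.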

Homework

Exercises 1, 2, and 3 of Section 3.4.