Lecture 7 Information Sources; Average Codeword Length (Section 2.1)

Definitions

By McMillan's Theorem, we must allow reasonably long codewords. But we also want efficiency. The way out: use short codewords for frequent symbols and longer codewords for infrequent ones. Central to this approach is the concept of an information source.

DEFINITION An information source is a pair S = (S, P), where S = {s1, …, sq} is a source alphabet and P is a probability law for it. Such a P can be written as the probability distribution P = {p1, …, pq}.

DEFINITION Let (S, P) be an information source, and let (C, f) be an encoding scheme for S = {s1, …, sq}. The average codeword length of (C, f) is

len(f(s1))P(s1) + … + len(f(sq))P(sq).
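The definition translates directly into a few lines of Python. The sketch below is illustrative, not from the lecture; the function name avg_codeword_length and the dictionary representation of f and P are assumptions.

def avg_codeword_length(code, prob):
    """Average codeword length: sum of len(f(s)) * P(s) over all source symbols s.

    code: dict mapping each source symbol to its codeword (a string)
    prob: dict mapping each source symbol to its probability
    """
    # Sanity check: P must be a probability distribution (it sums to 1).
    assert abs(sum(prob.values()) - 1.0) < 1e-9
    return sum(len(code[s]) * prob[s] for s in prob)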

Example

Source alphabet: {a, b, c, d}
Probability law: P(a) = 0.5, P(b) = 0.2, P(c) = 0.2, P(d) = 0.1
Encoding scheme f: f(a) = 0, f(b) = 11, f(c) = 100, f(d) = 101
Encoding scheme g: g(a) = 101, g(b) = 100, g(c) = 11, g(d) = 0

What is the average codeword length of (C, f)? What is the average codeword length of (C, g)? (A worked computation follows below.)
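Feeding the example into the sketch above confirms the arithmetic (the variable names are again illustrative):

# Data from the example slide.
prob = {"a": 0.5, "b": 0.2, "c": 0.2, "d": 0.1}
f = {"a": "0", "b": "11", "c": "100", "d": "101"}
g = {"a": "101", "b": "100", "c": "11", "d": "0"}

# f gives the shortest codeword to the most frequent symbol:
print(f"{avg_codeword_length(f, prob):.1f}")  # 1(0.5) + 2(0.2) + 3(0.2) + 3(0.1) = 1.8
# g reverses that assignment, so its average is worse:
print(f"{avg_codeword_length(g, prob):.1f}")  # 3(0.5) + 3(0.2) + 2(0.2) + 1(0.1) = 2.6

As the slide's strategy predicts, matching short codewords to frequent symbols (scheme f) beats the reversed assignment (scheme g): 1.8 versus 2.6 code symbols per source symbol on average.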

Homework

Exercises 1 and 2 of Section 2.1.