Theory of Information Lecture 13


Lecture 14: Decision Rules, Nearest Neighbor Decoding (Sections 4.2, 4.3)

What Is a Decision Rule?

Communications channel model: a message is encoded into a codeword, the codeword is sent through a noisy channel, and the received word x is then mapped by a decision rule back to a decoded message.

An (n,M)-code is a block code of length n and size M, i.e. every codeword has length n and there are M codewords altogether.

Definition. Let C be an (n,M)-code over a code alphabet A, and assume C does not contain the symbol ?. A decision rule for C is a function f: A^n → C ∪ {?}.

Intuition: f(x) = c means assuming that the received word x was meant to be c, i.e. that c was sent; in other words, decoding (interpreting) x as c. If no such codeword can be identified, the symbol ? is used to declare a decoding error, and in that case f(x) = ?.
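
As a quick illustration, here is a minimal Python sketch of a decision rule as a function from received words in A^n to C ∪ {?}. The code C, the symbol FAIL, and the (deliberately crude) rule of accepting only exact codeword matches are choices made for this example, not part of the lecture.

# A decision rule f: A^n -> C ∪ {?} for a toy binary code.
C = {"000", "111"}     # an (n, M)-code with n = 3 and M = 2
FAIL = "?"             # symbol used to declare a decoding error

def decision_rule(x):
    """Decode the received word x: accept exact codewords, otherwise report '?'."""
    return x if x in C else FAIL

print(decision_rule("111"))   # -> 111
print(decision_rule("110"))   # -> ?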

Two Sorts of Decision Rules

Goal: maximize the probability of correct decoding.

Code alphabet: {0,1}. Code: {0000, 1111}. Channel: P(0 received | 0 sent) = 90%, P(1 received | 0 sent) = 10%, P(0 received | 1 sent) = 11%, P(1 received | 1 sent) = 89%.

If 0111 is received, how would you interpret it? If 0011 is received, how would you interpret it? But if you knew that 1111 is sent 99% of the time, how would you interpret 0011?
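
These questions can be checked numerically. The sketch below computes the forward probability P(x received | c sent) for each codeword and, for the last question, weights it by priors P(0000) = 1% and P(1111) = 99%; the channel values are taken from this slide, everything else is illustrative.

# Transition probabilities of the channel above, keyed by (sent bit, received bit).
P = {("0", "0"): 0.90, ("0", "1"): 0.10,
     ("1", "0"): 0.11, ("1", "1"): 0.89}

def likelihood(received, sent):
    """P(received | sent) for a memoryless channel: product over bit positions."""
    prob = 1.0
    for s, r in zip(sent, received):
        prob *= P[(s, r)]
    return prob

for x in ("0111", "0011"):
    for c in ("0000", "1111"):
        print(f"P({x} | {c}) = {likelihood(x, c):.6f}")

# Last question: weight each likelihood by the assumed prior P(c).
for c, prior in (("0000", 0.01), ("1111", 0.99)):
    print(f"P({c}) * P(0011 | {c}) = {prior * likelihood('0011', c):.6f}")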

Which Rule Is Better?

We will compare two decision rules, the ideal observer rule and the maximum likelihood rule, and look at the advantages of each.

Ideal Observer Decision Rule

(Diagram: a received word x and three candidate codewords c1, c2, c3 with backward probabilities 40%, 35%, and 25%; the rule picks the codeword with the largest value, 40%. In general, x is compared against P(c1 sent | x received), ..., P(cM sent | x received).)

The ideal observer decision rule decodes a received word x as a codeword c with maximal backward probability P(c sent | x received).
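
A minimal sketch of ideal observer decoding (the code, priors, and channel below are illustrative assumptions). The backward probability comes from Bayes' rule: it is proportional to the prior P(c) times the forward probability P(x | c), and the common denominator P(x) does not affect which codeword wins.

# Ideal observer decoding: pick the codeword c maximizing P(c sent | x received).
# By Bayes' rule this is proportional to P(c) * P(x | c), so we maximize that product.

def ideal_observer(x, code, prior, likelihood):
    """Return the codeword c in `code` maximizing prior[c] * likelihood(x, c)."""
    return max(code, key=lambda c: prior[c] * likelihood(x, c))

# Illustrative setup: a binary symmetric channel with crossover probability p = 0.1
# and skewed priors (these numbers are assumptions chosen for the example).
p = 0.1

def bsc_likelihood(x, c):
    """P(x received | c sent) over a BSC: p per flipped bit, (1 - p) per kept bit."""
    flips = sum(a != b for a, b in zip(x, c))
    return (p ** flips) * ((1 - p) ** (len(x) - flips))

print(ideal_observer("100", ["000", "111"], {"000": 0.2, "111": 0.8}, bsc_likelihood))
# -> "000": 0.2 * 0.081 = 0.0162 beats 0.8 * 0.009 = 0.0072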

Maximum Likelihood Decision Rule

(Diagram: the same picture, but with forward probabilities P(x received | c1 sent), ..., P(x received | cM sent); here 50%, 70%, and 80%, and the rule picks the codeword with the largest value, 80%.)

The maximum likelihood decision rule decodes a received word x as a codeword c = f(x) with maximal forward probability P(x received | c sent).
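
A matching sketch of maximum likelihood decoding (the channel, its crossover probability, and the example code are assumptions): the same comparison as in the ideal observer sketch, but without the prior.

# Maximum likelihood decoding: pick the codeword c maximizing P(x received | c sent).
# Over a BSC with crossover probability p, P(x | c) = p^d * (1 - p)^(n - d),
# where d is the number of positions in which x and c differ.

p = 0.1                      # illustrative crossover probability

def bsc_likelihood(x, c):
    d = sum(a != b for a, b in zip(x, c))
    return (p ** d) * ((1 - p) ** (len(x) - d))

def maximum_likelihood(x, code):
    """Return the codeword in `code` with the largest P(x | c)."""
    return max(code, key=lambda c: bsc_likelihood(x, c))

print(maximum_likelihood("0111", ["0000", "1111"]))   # -> "1111"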

One Rule as a Special Case of the Other

(Diagram: the received word x and codewords c1, c2, c3 with forward probabilities 50%, 70%, and 80%.) Assume all codewords are equally likely to be sent. What would the backward probabilities be then?

Theorem 4.2.2. For the uniform input distribution, ideal observer decoding coincides with maximum likelihood decoding.
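
The reason behind Theorem 4.2.2 is a single application of Bayes' rule: P(c sent | x received) = P(x received | c sent) · P(c sent) / P(x received). For a fixed received word x, the denominator P(x received) is the same for every codeword; if the input distribution is uniform, then P(c sent) is also the same for every codeword, so maximizing the backward probability P(c sent | x received) over c is exactly the same as maximizing the forward probability P(x received | c sent). Hence the two rules decode x identically.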

Example 4.2.1

Suppose codewords of C = {000, 111} are sent over a binary symmetric channel with crossover probability p = 0.01. If the string 100 is received, how should it be decoded by the maximum likelihood decision rule? Compute P(100 received | 000 sent) and P(100 received | 111 sent). Would the same answer necessarily be obtained under ideal observer decoding?
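
The arithmetic behind the example: P(100 received | 000 sent) = p(1 - p)^2 = 0.01 · 0.99^2 = 0.009801, while P(100 received | 111 sent) = (1 - p)p^2 = 0.99 · 0.01^2 = 0.000099, so the maximum likelihood rule decodes 100 as 000. Under ideal observer decoding the comparison would also involve how often each codeword is sent, so the answer need not be the same.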

Sometimes the Two Rules Yield Different Results

Assume C = {00, 11}, that 11 is sent 70% of the time, and that 01 is received over a binary channel in which each bit is transmitted correctly with probability close to 1 (95% and 94%) and flipped with small probability (5% and 6%). How would 01 be decoded by the maximum likelihood rule? How would it be decoded by the ideal observer rule?
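
A numeric check, under the assumption that the channel is read as P(1 received | 0 sent) = 6% and P(0 received | 1 sent) = 5% (so P(0 | 0) = 94% and P(1 | 1) = 95%); this is the reading under which the two rules indeed give different answers:

# ML vs. ideal observer decoding of the received word 01 for C = {00, 11},
# assuming P(11 sent) = 0.7 and the channel reading described above.
P = {("0", "0"): 0.94, ("0", "1"): 0.06,   # (sent bit, received bit) -> probability
     ("1", "0"): 0.05, ("1", "1"): 0.95}
prior = {"00": 0.3, "11": 0.7}
x = "01"

def likelihood(x, c):
    prob = 1.0
    for s, r in zip(c, x):
        prob *= P[(s, r)]
    return prob

for c in ("00", "11"):
    print(c, likelihood(x, c), prior[c] * likelihood(x, c))
# Maximum likelihood compares the likelihoods (0.0564 vs 0.0475) and picks 00;
# the ideal observer compares the prior-weighted values (0.01692 vs 0.03325) and picks 11.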

Handling Ties

In the case of a tie, it is reasonable for a decision rule to declare an error. Suppose codewords of C = {0000, 1111} are sent over a binary symmetric channel with crossover probability p = 1/4. How should the following received strings be decoded by the maximum likelihood decision rule: 0000, 1011, 0011? Compute P(0011 received | 0000 sent) and P(0011 received | 1111 sent).
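
A small sketch of a maximum likelihood decoder that declares an error (?) on ties, run on the three strings above; the decoder and its helper names are illustrative, while the code, crossover probability, and test strings come from this slide. Exact fractions are used so that the tie shows up exactly.

from fractions import Fraction

# ML decoding over a BSC with p = 1/4 for C = {0000, 1111},
# returning '?' when two codewords are equally likely (a tie).
p = Fraction(1, 4)
C = ["0000", "1111"]

def likelihood(x, c):
    d = sum(a != b for a, b in zip(x, c))          # number of flipped bits
    return p**d * (1 - p)**(len(x) - d)

def ml_decode(x):
    best = max(likelihood(x, c) for c in C)
    winners = [c for c in C if likelihood(x, c) == best]
    return winners[0] if len(winners) == 1 else "?"

for x in ("0000", "1011", "0011"):
    print(x, "->", ml_decode(x))
# 0000 -> 0000, 1011 -> 1111, 0011 -> ? (9/256 vs 9/256, a tie)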

Hamming Distance and Nearest Neighbor Decoding

Definition. Let x and y be two strings of the same length over the same alphabet. The Hamming distance between x and y, denoted d(x,y), is defined to be the number of positions in which x and y differ. E.g. d(000,100) = 1 and d(111,100) = 2.

The decision rule that assigns to a received word the closest codeword (in Hamming distance) is called the nearest neighbor decision rule.

Theorem 4.3.2. For a binary symmetric channel (with crossover probability p < 1/2), the maximum likelihood decision rule is equivalent to the nearest neighbor decision rule.
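
A minimal nearest neighbor decoder, as a sketch; returning '?' on ties follows the convention of the previous slide and is a choice made here, not part of the definition.

def hamming(x, y):
    """Hamming distance: number of positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def nearest_neighbor_decode(x, code):
    """Return the codeword closest to x in Hamming distance, or '?' on a tie."""
    best = min(hamming(x, c) for c in code)
    winners = [c for c in code if hamming(x, c) == best]
    return winners[0] if len(winners) == 1 else "?"

# Over a BSC with p < 1/2 this agrees with maximum likelihood decoding (Theorem 4.3.2).
print(nearest_neighbor_decode("100", ["000", "111"]))      # -> 000
print(nearest_neighbor_decode("0011", ["0000", "1111"]))   # -> ? (tie)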

Exercise 7 of Section 4.3

Construct a binary channel for which maximum likelihood decoding is not the same as nearest neighbor decoding.

One such channel: P(0 received | 0 sent) = 10%, P(1 received | 0 sent) = 90%, P(0 received | 1 sent) = 50%, P(1 received | 1 sent) = 50%. Let C = {001, 011} and assume 000 is received. Compute P(000 received | 001 sent) and P(000 received | 011 sent), and compare with nearest neighbor decoding.
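
The numbers can be checked directly; the sketch below assumes the channel reading given above.

# Verify that ML and nearest neighbor decoding disagree for this channel.
P = {("0", "0"): 0.1, ("0", "1"): 0.9,    # (sent bit, received bit) -> probability
     ("1", "0"): 0.5, ("1", "1"): 0.5}
C = ["001", "011"]
x = "000"

def likelihood(x, c):
    prob = 1.0
    for s, r in zip(c, x):
        prob *= P[(s, r)]
    return prob

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

ml = max(C, key=lambda c: likelihood(x, c))   # P(000|001) = 0.005, P(000|011) = 0.025
nn = min(C, key=lambda c: hamming(x, c))      # d(000,001) = 1, d(000,011) = 2
print("maximum likelihood:", ml)              # -> 011
print("nearest neighbor:  ", nn)              # -> 001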

Homework

Exercises 2, 3, 4, and 5 of Section 4.2. Exercises 1, 2, and 3 of Section 4.3.