
Basic Concepts of Encoding Codes and Error Correction

Encoding
Encoding is a transformation procedure applied to the input signal before it enters the communication channel. This procedure adapts the input signal to the communication system and improves its efficiency.

Encoding
In other words, encoding is a procedure for associating words constructed from a finite alphabet of one language (e.g. a natural language) with words of another language (the encoding language) in a one-to-one manner. Decoding is the inverse operation: the restoration of the words of the initial language.

Codes
Let A be the alphabet and let q = |A| be its cardinality. Any finite sequence of letters from this alphabet forms a word over it. Let S be the set of all possible words over A. Some of these words may be meaningful and some may not, but in any case we will use only some of them to encode the information.

Codes
A subset V ⊆ S which is used for representation of the information in the communication system is commonly referred to as a code. If all words from V have the same length n, then the code V is called a uniform code. If words from V may have different lengths, then the code V is called a non-uniform code.

Digital communications
Let us consider the digital communication channel. Hence A = Z₂ = {0, 1}. We will consider uniform codes of length n. Thus, the words over Z₂ are n-dimensional binary vectors, and they form the set S = Z₂ⁿ of "encoding" words.

Distance between binary vectors
The distance ρ (the Hamming distance) between two n-dimensional binary vectors X and Y is the number of components in which they differ under component-wise comparison. To find the distance between two binary vectors, add them component-wise mod 2 and count the number of "1"s in the vector-sum.

Distance between binary vectors
For example, for X = (1 0 1 1 0) and Y = (0 1 1 0 0): X ⊕ Y = (1 1 0 1 0), so ρ(X, Y) = 3.
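The procedure above (add component-wise mod 2, then count the 1s) can be sketched in Python; the function name and the example vectors are illustrative, not from the slides:

```python
def hamming_distance(x, y):
    """rho(X, Y): add the vectors component-wise mod 2 (XOR),
    then count the number of 1s in the vector-sum."""
    assert len(x) == len(y), "the vectors must have the same length n"
    return sum(a ^ b for a, b in zip(x, y))

# X = (1 0 1 1 0), Y = (0 1 1 0 0): X xor Y = (1 1 0 1 0), so rho = 3
print(hamming_distance((1, 0, 1, 1, 0), (0, 1, 1, 0, 0)))  # prints 3
```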

Distance between binary vectors
The distance ρ satisfies all the metric axioms:
ρ(X, Y) ≥ 0, and ρ(X, Y) = 0 if and only if X = Y;
ρ(X, Y) = ρ(Y, X);
ρ(X, Z) ≤ ρ(X, Y) + ρ(Y, Z).
The Hamming norm of a binary vector is the number of "1"s in this vector; thus ρ(X, Y) equals the Hamming norm of X ⊕ Y.

Errors
Replacement of one letter in a word by another is commonly referred to as an error. Let the recipient (the receiver) of the information know the code. "Detection of the error" means detecting the fact that an error has occurred, without determining exactly where. "Correction of the error" means the complete restoration of the word which was originally sent but was then distorted.

Errors
Suppose the word X ∈ V was transmitted and some bits in X were inverted, so that the receiver receives Y ≠ X. If Y ∈ V, then the error cannot be detected or corrected without analysing the sense of the whole message. If Y ∉ V, then the error can be detected and, under certain conditions, corrected.

Maximum likelihood decoding
Let X be transmitted and Y be received, with Y ∉ V. To correct the error (or errors) and to decode the corresponding word, we have to find the codeword nearest to Y:
X* = arg min_{Z ∈ V} ρ(Z, Y).
This method is called maximum likelihood decoding.
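This rule is a straightforward minimisation over the code; as a sketch (names `ml_decode` and the two-codeword code are illustrative):

```python
def hamming_distance(x, y):
    return sum(a ^ b for a, b in zip(x, y))

def ml_decode(y, code):
    """Maximum likelihood decoding: return the codeword Z in V
    that minimises rho(Z, Y)."""
    return min(code, key=lambda z: hamming_distance(z, y))

V = [(0, 0, 0), (1, 1, 1)]
print(ml_decode((1, 0, 0), V))  # prints (0, 0, 0)
```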

Minimum encoding distance
d = min ρ(X, Y), taken over all X, Y ∈ V with X ≠ Y, is called the minimum encoding distance of the code V. In other words, the minimum encoding distance equals the minimum distance between the encoding vectors; if the distance between an encoding vector X ∈ V and another vector Y is less than d, then Y ∉ V.

Minimum encoding distance
For example, let n = 3. Then S = {000, 001, 010, 011, 100, 101, 110, 111}. Let V = {000, 111}. Then d = 3. Indeed, ρ((000), (111)) = 3, and any vector at distance less than 3 from one of these codewords does not belong to V.
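The definition of d can be checked directly by minimising over all pairs of distinct codewords (a sketch; the function name is illustrative):

```python
from itertools import combinations

def hamming_distance(x, y):
    return sum(a ^ b for a, b in zip(x, y))

def minimum_distance(code):
    """d: the minimum of rho(X, Y) over all pairs of distinct codewords."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

print(minimum_distance([(0, 0, 0), (1, 1, 1)]))  # prints 3
```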

Criterion of Error Detection
Theorem. The uniform code V detects at most t errors if, and only if, d = t + 1.
Proof. Let d = t + 1. Let X ∈ V be transmitted and Y ≠ X be received with ρ(X, Y) ≤ t. Then Y ∉ V, since any two codewords are at distance at least t + 1; hence any t or fewer errors can be detected. Conversely, let the code detect t errors. Then d ≥ t + 1: otherwise there would exist codewords X, Y ∈ V with ρ(X, Y) ≤ t, and inverting those ρ(X, Y) ≤ t bits of X would produce the codeword Y, which contradicts the ability to detect t errors. And since the code detects at most t errors (not t + 1), d = t + 1.

Example of Error Detection
For example, let n = 3, S = {000, 001, 010, 011, 100, 101, 110, 111} and V = {000, 111}. Then d = 3 and we can detect (not correct, just detect!) up to 2 errors. Indeed, if any 1 or 2 of the 3 bits in an encoding vector are inverted, we obtain a vector which does not belong to V. If all 3 bits are inverted, we obtain the other encoding vector and cannot detect the errors.
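This exhaustive claim is small enough to verify by enumeration; a sketch in Python (the helper `flip` is illustrative):

```python
from itertools import combinations

V = {(0, 0, 0), (1, 1, 1)}

def flip(word, positions):
    """Invert the bits of `word` at the given positions."""
    return tuple(b ^ 1 if i in positions else b for i, b in enumerate(word))

for x in V:
    # every 1- or 2-bit corruption falls outside V, so it is detected
    for t in (1, 2):
        assert all(flip(x, p) not in V for p in combinations(range(3), t))
    # a 3-bit corruption lands on the other codeword and goes undetected
    assert flip(x, (0, 1, 2)) in V
print("all 1- and 2-bit errors are detectable; 3-bit errors are not")
```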

Criterion of Error Correction
Theorem. The uniform code V can correct at most t errors if, and only if, d = 2t + 1.
Proof. Necessity. Suppose the code corrects at most t errors. We have to prove that d = 2t + 1. Suppose this is not true, i.e. there exist codewords X, Y ∈ V with ρ(X, Y) ≤ 2t. Choose Z that agrees with X except in min(t, ρ(X, Y)) of the positions where X and Y differ; then ρ(X, Z) ≤ t and ρ(Y, Z) ≤ t. If ρ(Y, Z) = ρ(X, Z) and X was transmitted while Z was received, we are unable to decode, because X and Y are equidistant from Z. This contradicts the condition that the code corrects up to t errors.

Criterion of Error Correction
If instead ρ(Y, Z) < ρ(X, Z) and X was transmitted while Z was received, then maximum likelihood decoding returns Y, which means the decoder decides that Y was transmitted. This again contradicts the condition that the code corrects up to t errors. Therefore it cannot be that ρ(X, Y) ≤ 2t for any two codewords; this means that d ≥ 2t + 1, and since the code corrects at most t errors (not t + 1), d = 2t + 1.

Criterion of Error Correction
Proof. Sufficiency. Let the minimum encoding distance be d = 2t + 1. We have to prove that the code can correct up to t errors. Let X ∈ V be transmitted and Y be received with ρ(X, Y) ≤ t. Suppose some other codeword Z ∈ V also satisfies ρ(Z, Y) ≤ t. On the other hand, according to the metric axioms, ρ(X, Z) ≤ ρ(X, Y) + ρ(Y, Z) ≤ 2t < d, which is impossible for two distinct codewords. Hence X is the unique codeword within distance t of Y. If exactly t errors occurred, then X will be decoded. If fewer than t errors occurred, then, a fortiori, X will be decoded.

Example of Error Correction
For example, let n = 3, S = {000, 001, 010, 011, 100, 101, 110, 111} and V = {000, 111}. Then d = 3 = 2·1 + 1, and we can correct 1 error. Indeed, if 1 of the 3 bits in an encoding vector is inverted, we obtain a vector which does not belong to V, and we can always determine the unique vector from V whose distance to the distorted vector is exactly 1.

Example of Error Correction
S = {000, 001, 010, 011, 100, 101, 110, 111}, V = {000, 111}, X1 = (000), X2 = (111).
Let X1 = (000) be transmitted and Y = (100) be received. Then ρ(X1, Y) = 1 < ρ(X2, Y) = 2, and we definitely decode X1.

Example of Error Correction
S = {000, 001, 010, 011, 100, 101, 110, 111}, V = {000, 111}, X1 = (000), X2 = (111).
If 2 of the 3 bits in an encoding vector are inverted, we also obtain a vector which does not belong to V, so we can detect that errors occurred, but we cannot correct them. Let X1 = (000) be transmitted and Y = (101) be received. Then ρ(X1, Y) = 2 > ρ(X2, Y) = 1, so maximum likelihood decoding returns X2: the decoding procedure cannot be ambiguous, and it chooses the nearest codeword, which here is not the one that was transmitted.
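Both behaviours can be reproduced with the nearest-codeword rule from the maximum likelihood slide (a sketch; names are illustrative):

```python
def hamming_distance(x, y):
    return sum(a ^ b for a, b in zip(x, y))

def ml_decode(y, code):
    return min(code, key=lambda z: hamming_distance(z, y))

V = [(0, 0, 0), (1, 1, 1)]

# one error on X1 = (000): the received word is still closer to X1
print(ml_decode((1, 0, 0), V))  # prints (0, 0, 0) -- corrected

# two errors on X1: the received word is now closer to X2 = (111),
# so maximum likelihood decoding returns the wrong codeword
print(ml_decode((1, 0, 1), V))  # prints (1, 1, 1) -- not corrected
```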