DCSP-10 Jianfeng Feng Department of Computer Science Warwick Univ., UK

Channel coding; Hamming distance

The task of source coding is to represent the source information with the minimum number of symbols. When a code is transmitted over a channel in the presence of noise, errors will occur. The task of channel coding is to represent the source information in a manner that minimises the error probability in decoding.

It is apparent that channel coding requires the use of redundancy. If all possible outputs of the channel corresponded uniquely to a source input, there would be no possibility of detecting errors in the transmission. To detect, and possibly correct, errors, the channel code sequence must be longer than the source sequence. The rate R of a channel code is the average ratio of the source sequence length to the channel code length; thus R < 1. For example, a code that maps every 4 source bits to 7 channel bits has rate R = 4/7.

A good channel code is designed so that, if a few errors occur in transmission, the output can still be decoded to the correct input. This is possible because, although corrupted, the output is sufficiently similar to the input to be recognisable.

The idea of similarity is made precise by the definition of the Hamming distance. Let x and y be two binary sequences of the same length. The Hamming distance between them is the number of positions in which the symbols disagree.

Two example distances: the Hamming distance between 0100 and 1001 is 3; the Hamming distance between 0110 and 1110 is 1.

The Hamming distance between 1011101 and 1001001 is 2. The Hamming distance between 2173896 and 2233796 is 3. The Hamming distance between "toned" and "roses" is 3.
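A minimal sketch of this definition in Python (the function name is our own):

```python
def hamming_distance(x, y):
    """Number of positions at which two equal-length sequences disagree."""
    if len(x) != len(y):
        raise ValueError("sequences must have equal length")
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance("0100", "1001"))    # 3
print(hamming_distance("0110", "1110"))    # 1
print(hamming_distance("toned", "roses"))  # 3
```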

Suppose the codeword x is transmitted over the channel and, due to errors, y is received. The decoder will assign to y the codeword x that minimises the Hamming distance between x and y.
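A sketch of this minimum-distance decoding rule, with a toy codebook of our own choosing (a five-fold repetition code):

```python
def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def decode(received, codebook):
    """Assign to `received` the codeword at minimum Hamming distance."""
    return min(codebook, key=lambda c: hamming(c, received))

codebook = ["00000", "11111"]     # five-fold repetition code
print(decode("01001", codebook))  # "00000": two bit errors undone
print(decode("11011", codebook))  # "11111": one bit error undone
```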

It can be shown that to detect n bit errors, a coding scheme requires codewords with a Hamming distance of at least n+1. It can also be shown that to correct n bit errors requires a coding scheme with a Hamming distance of at least 2n+1 between codewords. By designing a good code, we try to ensure that the Hamming distance between possible codewords x is larger than the Hamming distance arising from errors.
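As a concrete check of these bounds (our own toy example), the triple-repetition code {000, 111} has minimum distance 3, so it can detect up to 2 bit errors but correct only 1:

```python
codebook = ["000", "111"]
hamming = lambda x, y: sum(a != b for a, b in zip(x, y))
nearest = lambda r: min(codebook, key=lambda c: hamming(c, r))

# Detection: 1 or 2 errors never turn one codeword into another,
# so the corrupted words below are recognisably invalid.
print("001" in codebook, "011" in codebook)  # False False

# Correction: nearest-codeword decoding survives only 1 error.
print(nearest("001"))  # "000" -- single error corrected
print(nearest("011"))  # "111" -- two errors are mis-corrected
```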

Channel Capacity

One of the most famous of all results of information theory is Shannon's channel capacity theorem. For a given channel there exists a code that will permit error-free transmission across the channel at rate R, provided R < C, where C is the channel capacity.

C = B log2(1 + S/N) b/s, where B is the channel bandwidth and S/N is the signal-to-noise ratio.
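A quick numerical check of the formula, using assumed example values (a nominal 3 kHz telephone channel at 30 dB SNR, i.e. S/N = 1000):

```python
from math import log2

def shannon_capacity(bandwidth_hz, snr):
    """Shannon capacity C = B * log2(1 + S/N) in bits per second."""
    return bandwidth_hz * log2(1 + snr)

print(shannon_capacity(3000, 1000))  # ~29,902 b/s
```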

As we have already noted, the astonishing part of the theory is the existence of a channel capacity at all. Shannon's theorem is both tantalising and frustrating.

It offers error-free transmission, but it makes no statement as to what code is required. In fact, all we may deduce from the proof of the theorem is that the code must be a long one. No one has yet found a code that permits the use of a channel at its capacity. However, Shannon has thrown down the gauntlet, inasmuch as he has proved that the code exists.

We shall not give a description of how the capacity is calculated; however, an example is instructive. The binary symmetric channel is a channel with a binary input and output. Associated with each output is a probability p that the output is correct, and a probability 1-p that it is not.

For such a channel, the channel capacity turns out to be

C = 1 + p log2(p) + (1-p) log2(1-p).

Here, p is the bit error probability (the expression is symmetric in p and 1-p, so it is the same whether p denotes the probability of a correct or an incorrect bit). If p = 0, then C = 1. If p = 0.5, then C = 0: if a 1 or a 0 is received with equal probability irrespective of the signal sent, the channel is completely unreliable and no message can be sent across it.

So defined, the channel capacity is a dimensionless number. We normally quote the capacity as a rate, in bits per second. To do this we relate each output to a change in the signal, multiplying the per-symbol capacity by the signalling rate B. For the binary symmetric channel we have

C = B [1 + p log2(p) + (1-p) log2(1-p)].

We note that C ≤ B, i.e. the capacity can never exceed the bit rate.
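A minimal sketch of both forms of this capacity; the signalling rate B below is an assumed example value:

```python
from math import log2

def bsc_capacity(p):
    """Per-symbol capacity of the binary symmetric channel,
    with the 0 * log2(0) terms taken as 0."""
    plog = lambda q: q * log2(q) if q > 0.0 else 0.0
    return 1.0 + plog(p) + plog(1.0 - p)

print(bsc_capacity(0.0))  # 1.0 -- error-free channel
print(bsc_capacity(0.5))  # 0.0 -- completely unreliable channel
print(bsc_capacity(0.1))  # ~0.531 bits per symbol

B = 1_000_000                 # signalling rate in bits/s (assumed)
print(B * bsc_capacity(0.1))  # ~531,000 b/s; never exceeds B
```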