Chapter 5: Markov Processes, Run-Length Coding, and Gray Code

Markov Processes (§5.2)

Let $S = \{s_1, \dots, s_q\}$ be a set of symbols. A $j$th-order Markov process has probabilities $p(s_i \mid s_{i_1} \cdots s_{i_j})$ associated with it: the conditional probability of seeing $s_i$ after seeing $s_{i_1} \cdots s_{i_j}$. This is said to be a $j$-memory source, and there are $q^j$ states in the Markov process.

Weather example ($j = 1$). Think: $a$ means "fair", $b$ means "rain", $c$ means "snow". The transition matrix $M$ lists $p(s_i \mid s_j)$ with the next symbol $i$ as column and the current state $j$ as row, so each row (the outgoing edges of a state) sums to 1:

$$M = \begin{array}{c|ccc} & a & b & c \\ \hline a & 1/3 & 1/3 & 1/3 \\ b & 1/4 & 1/2 & 1/4 \\ c & 1/4 & 1/4 & 1/2 \end{array}$$

[Figure: transition graph on states $a$, $b$, $c$; the edge labels are the entries of $M$.]
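As an illustration (not from the slides), here is a minimal Python sketch that samples a symbol sequence from this 1-memory source; the function name, seed, and sequence length are my own choices, while the states and matrix come from the weather example above.

```python
import random

# Transition matrix from the weather example: rows are the current state,
# columns the next symbol; each row sums to 1.
STATES = ["a", "b", "c"]  # a = fair, b = rain, c = snow
M = {
    "a": [1/3, 1/3, 1/3],
    "b": [1/4, 1/2, 1/4],
    "c": [1/4, 1/4, 1/2],
}

def sample_sequence(start: str, n: int, seed: int = 0) -> str:
    """Generate n symbols from the first-order Markov source."""
    rng = random.Random(seed)
    out, state = [], start
    for _ in range(n):
        state = rng.choices(STATES, weights=M[state])[0]
        out.append(state)
    return "".join(out)

print(sample_sequence("a", 20))
```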

Ergodic Equilibria (§5.2)

Definition: A Markov process M is said to be ergodic if:
1. From any state we can eventually get to any other state.
2. The system reaches a limiting distribution.
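A hedged sketch (my own, not from the slides): for an ergodic chain such as the weather example, the limiting distribution can be found by repeatedly applying $\pi \leftarrow \pi M$ until it stops changing. The tolerance and iteration cap below are arbitrary choices.

```python
# Power iteration on the weather example's transition matrix (rows sum to 1).
M = [
    [1/3, 1/3, 1/3],
    [1/4, 1/2, 1/4],
    [1/4, 1/4, 1/2],
]

def limiting_distribution(M, tol=1e-12, max_iter=10_000):
    """Iterate pi <- pi M until convergence (assumes the chain is ergodic)."""
    n = len(M)
    pi = [1.0 / n] * n  # any starting distribution works for an ergodic chain
    for _ in range(max_iter):
        nxt = [sum(pi[j] * M[j][i] for j in range(n)) for i in range(n)]
        if max(abs(a - b) for a, b in zip(nxt, pi)) < tol:
            return nxt
        pi = nxt
    raise RuntimeError("did not converge")

print(limiting_distribution(M))  # [3/11, 4/11, 4/11], roughly [0.273, 0.364, 0.364]
```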

Predictive Coding (§5.7)

Assume a prediction algorithm for the source which, given all prior symbols $s_1 \cdots s_{n-1}$, predicts the next symbol $p_n$. What is transmitted is the error
$$e_n = p_n \oplus s_n$$
(XOR, for a binary source). By knowing just the error, the receiving predictor recovers the original symbols, since $s_n = p_n \oplus e_n$.

[Figure: source → predictor → $e_n$ → channel → $e_n$ → predictor → $s_n$ → destination. We must assume that both predictors are identical and start in the same state.]
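A minimal round-trip sketch (my own illustration; the trivial "predict the previous bit" rule is an assumption, not the slides' predictor; any deterministic rule shared by encoder and decoder works):

```python
def predict(history: list) -> int:
    # Toy predictor: guess the previous bit (0 for the first symbol).
    return history[-1] if history else 0

def encode(bits):
    """Transmit only the prediction errors e_n = p_n XOR s_n."""
    history, errors = [], []
    for s in bits:
        errors.append(predict(history) ^ s)
        history.append(s)
    return errors

def decode(errors):
    """Recover s_n = p_n XOR e_n using the identical predictor."""
    history = []
    for e in errors:
        history.append(predict(history) ^ e)
    return history

msg = [0, 0, 0, 1, 1, 1, 1, 0]
assert decode(encode(msg)) == msg   # lossless round trip
print(encode(msg))                  # [0, 0, 0, 1, 0, 0, 0, 1]
```

Note that a good predictor turns the source into an error stream dominated by long runs of 0's, which is exactly what the run-length coding below exploits.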

Accuracy (§5.8)

The probability of the predictor being correct is $p = 1 - q$, constant over time and independent of other prediction errors. The probability of a run of exactly $n$ 0's (the pattern $0^n 1$) is $p(n) = p^n \cdot q$. Summed over runs of every length $n = 0, 1, 2, \dots$:
$$\sum_{n=0}^{\infty} p^n q = \frac{q}{1-p} = 1.$$

Note: for an alternate method of calculating $f(p)$, see §5.8.
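A quick numeric sanity check (my own) that this geometric run-length distribution is normalized and has mean run length $p/q$; the truncation point is arbitrary:

```python
p = 0.9
q = 1 - p

total = sum(p**n * q for n in range(10_000))     # should be ~1
mean = sum(n * p**n * q for n in range(10_000))  # should be ~p/q = 9
print(total, mean, p / q)
```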

Coding of Run Lengths (§5.9)

Fix $k$ = block length. Send a $k$-digit binary number to represent a run of zeroes whose length is between 0 and $2^k - 2$ (small runs are sent in binary). For run lengths larger than $2^k - 2$, send $2^k - 1$ (i.e., $k$ ones) followed by another $k$-digit binary number, and so on (large runs are sent in unary, counted in blocks). Let $n$ = run length and write
$$n = i \cdot m + j, \qquad 0 \le j < m = 2^k - 1,$$
like reading a "matrix" with $m$ cells per row and infinitely many rows: the code is $i$ all-ones blocks followed by the $k$-bit binary representation of $j$.
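A sketch of this scheme in Python (my own illustration of the rule above). Note the final block is never all ones, since $j \le 2^k - 2$, which is what lets the decoder know where a codeword ends:

```python
def encode_run(n: int, k: int) -> str:
    """Encode a run of n zeroes: i blocks of k ones, then j in k-bit binary,
    where n = i*m + j and m = 2**k - 1."""
    m = 2**k - 1
    i, j = divmod(n, m)
    return "1" * (k * i) + format(j, f"0{k}b")

def decode_run(code: str, k: int) -> int:
    """Invert encode_run: count leading all-ones blocks, then read the last block."""
    m = 2**k - 1
    i = 0
    while code[:k] == "1" * k:
        i += 1
        code = code[k:]
    return i * m + int(code, 2)

for n in (0, 5, 6, 7, 20):
    c = encode_run(n, k=3)
    assert decode_run(c, k=3) == n
    print(n, "->", c)   # e.g. 7 -> 111000, 20 -> 111111110
```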

Expected Length of the Run-Length Code (§5.9)

Let $p(n) = p^n q$ be the probability of a run of exactly $n$ 0's (the pattern $0^n 1$). Every $n$ can be written uniquely as $n = i \cdot m + j$ with $i \ge 0$, $0 \le j < m = 2^k - 1$, and such a run is coded with $i + 1$ blocks of $k$ bits. The expected code length is therefore
$$\bar{L} = \sum_{i=0}^{\infty}\sum_{j=0}^{m-1} (i+1)\,k\,p^{im+j}q = k\,q\cdot\frac{1-p^m}{1-p}\cdot\frac{1}{(1-p^m)^2} = \frac{k}{1-p^m}.$$
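A quick numeric check of the closed form (my own sanity test; the truncation point and parameter values are arbitrary):

```python
# Compare the closed form k/(1 - p**m) with a truncated brute-force expectation.
k, p = 3, 0.9
q, m = 1 - p, 2**k - 1

closed = k / (1 - p**m)
brute = sum((n // m + 1) * k * p**n * q for n in range(10_000))
print(closed, brute)  # both about 5.75 bits per run for these parameters
```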

Gray Code

Consider an analog-to-digital "flash" converter consisting of a rotating wheel, read by imagining "brushes" contacting the wheel in each of the three concentric circles. [Figure: 3-bit Gray-coded wheel with eight sectors.] The Hamming distance between adjacent positions is 1, so the maximum error in the scheme is ±⅛ of a rotation: at a sector boundary only one bit is changing, and a misread yields one of the two neighboring positions. In ordinary binary, adjacent positions can differ in up to 3 bits (the maximum possible), so a boundary misread can produce a wildly wrong value.
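The standard conversion between ordinary binary and reflected Gray code, shown here as a Python sketch for context (the slides use the wheel picture rather than these formulas):

```python
def binary_to_gray(b: int) -> int:
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    b = 0
    while g:            # XOR together g, g>>1, g>>2, ...
        b ^= g
        g >>= 1
    return b

# Adjacent positions on the 3-bit wheel differ in exactly one bit:
for i in range(8):
    print(i, format(binary_to_gray(i), "03b"))
# 0 000, 1 001, 2 011, 3 010, 4 110, 5 111, 6 101, 7 100
```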

1-bit Gray code: 0, 1.
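Larger Gray codes are built from this 1-bit base by the standard reflect-and-prefix construction; a short sketch (my own illustration):

```python
def gray_code(n: int) -> list:
    """Reflect-and-prefix: prepend 0 to the (n-1)-bit code,
    then 1 to its reversal, starting from the 1-bit code ['0', '1']."""
    if n == 1:
        return ["0", "1"]
    prev = gray_code(n - 1)
    return ["0" + w for w in prev] + ["1" + w for w in reversed(prev)]

print(gray_code(2))  # ['00', '01', '11', '10']
print(gray_code(3))  # ['000', '001', '011', '010', '110', '111', '101', '100']
```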