Outline
Transmitters (Chapters 3 and 4, Source Coding and Modulation) (weeks 1 and 2)
Receivers (Chapter 5) (weeks 3 and 4)
Received Signal Synchronization (Chapter 6) (week 5)
Channel Capacity (Chapter 7) (week 6)
Error Correction Codes (Chapter 8) (weeks 7 and 8)
Equalization (Bandwidth-Constrained Channels) (Chapter 10) (week 9)
Adaptive Equalization (Chapter 11) (weeks 10 and 11)
Spread Spectrum (Chapter 13) (week 12)
Fading and Multipath (Chapter 14) (week 12)

Transmitters (weeks 1 and 2)
Information Measures
Vector Quantization
Delta Modulation
QAM

Digital Communication System
[Block diagram: Transmitter → Receiver, annotated with the goals: information per bit increases, noise immunity increases, bandwidth efficiency increases]

Transmitter Topics
Increasing information per bit
Increasing noise immunity
Increasing bandwidth efficiency

Increasing Information per Bit
Information in a source
–Mathematical Models of Sources
–Information Measures
Compressing information
–Huffman encoding (optimal compression?)
–Lempel-Ziv-Welch Algorithm (practical compression)
Quantization of analog data
–Scalar Quantization
–Vector Quantization
–Model-Based Coding
–Practical Quantization: μ-law encoding, Delta Modulation, Linear Predictive Coding (LPC)

Increasing Noise Immunity
Coding (Chapter 8, weeks 7 and 8)

Increasing Bandwidth Efficiency
Modulation of digital data into analog waveforms
–Impact of modulation on bandwidth efficiency

Increasing Information per Bit
Information in a source
–Mathematical Models of Sources
–Information Measures
Compressing information
–Huffman encoding (optimal compression?)
–Lempel-Ziv-Welch Algorithm (practical compression)
Quantization of analog data
–Scalar Quantization
–Vector Quantization
–Model-Based Coding
–Practical Quantization: μ-law encoding, Delta Modulation, Linear Predictive Coding (LPC)

Mathematical Models of Sources
Discrete Sources
–Discrete Memoryless Source (DMS): statistically independent letters from a finite alphabet
–Stationary Source: statistically dependent letters, but the joint probabilities of sequences of equal length remain constant
Analog Sources
–Band-limited, |f| < W: equivalent to a discrete source sampled at the Nyquist rate 2W, but with an infinite (continuous) alphabet

Discrete Sources

Discrete Memoryless Source (DMS)
–Statistically independent letters from a finite alphabet
e.g., a normal binary data stream: X is a series of independent random events with either X = 1 or X = 0, where P(X = 1) = constant = 1 - P(X = 0)
e.g., well-compressed data, digital noise
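As a rough illustration (not from the text), a binary DMS can be simulated by drawing each letter independently; the probability p below is an assumed value.

```python
import random

# Minimal sketch of a binary DMS: each letter is drawn independently of all others,
# with P(X = 1) = p held constant (p = 0.5 is an assumed value for illustration).
p = 0.5
letters = [1 if random.random() < p else 0 for _ in range(20)]
print(letters)
```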

Stationary Source
–Statistically dependent letters, but the joint probabilities of sequences of equal length remain constant
e.g., the probability that the sequence a_i, a_{i+1}, a_{i+2}, a_{i+3} = 1001 when a_j, a_{j+1}, a_{j+2}, a_{j+3} = 1010 is always the same
An approximation for uncoded text
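As a hedged sketch (not from the text), one simple source with memory whose statistics do not change over time is a symmetric two-state Markov chain; the transition probability p_stay is an assumed value.

```python
import random

# Sketch of a stationary source with memory: a symmetric two-state Markov chain.
# Letters are statistically dependent (runs of repeated values), but the joint
# probabilities of equal-length sequences do not depend on where the sequence starts.
p_stay = 0.9               # assumed probability of repeating the previous letter
x = random.choice([0, 1])  # start from the stationary distribution (uniform here)
sequence = []
for _ in range(30):
    sequence.append(x)
    x = x if random.random() < p_stay else 1 - x
print(sequence)
```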

Analog Sources
Band-limited, |f| < W
–Equivalent to a discrete source sampled at the Nyquist rate 2W, but with an infinite (continuous) alphabet
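A minimal numerical sketch of the sampling statement, with an assumed bandwidth W = 4 kHz and a test tone inside the band (neither value comes from the text):

```python
import numpy as np

W = 4000.0                          # assumed bandwidth in Hz
fs = 2 * W                          # Nyquist sampling rate: 2W samples per second
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of sample instants
x = np.sin(2 * np.pi * 3000.0 * t)  # a 3 kHz component, inside |f| < W
print(len(x), "real-valued samples; each sample comes from a continuous (infinite) alphabet")
```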

Information in a DMS Letter
If an event X denotes the arrival of a letter x_i with probability P(X = x_i) = P(x_i), the information contained in the event is defined as
I(X = x_i) = I(x_i) = -log2(P(x_i)) bits
[Plot: I(x_i) versus P(x_i)]

Examples
e.g., an event X generates a random letter of value 1 or 0 with equal probability, P(X = 0) = P(X = 1) = 0.5; then I(X) = -log2(0.5) = 1, i.e., 1 bit of information each time X occurs
e.g., if X is always 1, then P(X = 0) = 0 and P(X = 1) = 1, so I(X = 0) = -log2(0) = ∞ and I(X = 1) = -log2(1) = 0
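A small Python sketch of the information measure and the two examples above (the function name is mine, not the text's):

```python
import math

def self_information(p):
    """Information in bits carried by an event of probability p: -log2(p)."""
    return -math.log2(p)

print(self_information(0.5))   # 1.0 bit: equally likely binary letters
print(self_information(1.0))   # 0.0 bits: a certain event carries no information
# self_information(0.0) raises a math domain error, matching -log2(0) = infinity
```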

Discussion
I(X = 1) = -log2(1) = 0 means that no information is delivered by X, which is consistent with X = 1 all the time.
I(X = 0) = -log2(0) = ∞ means that if X = 0 occurred, a huge amount of information would arrive; however, since P(X = 0) = 0, this never happens.

Average Information
To deal with I(X = 0) = ∞ when P(X = 0) = 0, we need to consider how much information actually arrives with the event over time. The average information contributed by letter x_i, out of an alphabet of L letters, i = 1, 2, 3, …, L, is
I(x_i)P(x_i) = -P(x_i) log2(P(x_i))
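A sketch of this per-letter average, with the p = 0 case handled explicitly using the convention 0·log2(0) = 0 (the function name is mine):

```python
import math

def avg_letter_information(p):
    """Contribution of a letter with probability p to the average: -p * log2(p)."""
    return 0.0 if p == 0 else -p * math.log2(p)

for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"P(x_i) = {p:.1f} -> {avg_letter_information(p):.3f} bits")
```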

Average Information
Plotting -P(x_i) log2(P(x_i)) for two symbols (1, 0), we see that a single letter contributes at most slightly more than 0.5 bits to the average, and that letters with very low or very high probability contribute little.
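A quick numerical check of the "slightly more than 0.5 bits" claim (the grid resolution is arbitrary): the maximum of -p·log2(p) occurs near p = 1/e ≈ 0.368 and is about 0.53 bits.

```python
import math

# Locate the maximum of -p*log2(p) on (0, 1) by a simple grid search.
grid = (i / 100000 for i in range(1, 100000))
best_p = max(grid, key=lambda p: -p * math.log2(p))
print(best_p, -best_p * math.log2(best_p))  # about 0.368 (= 1/e) and 0.531 bits
```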

Average Information (Entropy)
Now let's consider the average information of the event X, made up of the random arrival of all the letters x_i in the alphabet. This is the sum over the alphabet of the average information arriving with each letter:
H(X) = -Σ P(x_i) log2(P(x_i)) bits per letter, summed over i = 1, 2, …, L
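A minimal entropy sketch following the definition above (the zero-probability guard encodes the 0·log2(0) = 0 convention):

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum(P(x_i) * log2(P(x_i))) in bits per letter."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit per letter
print(entropy([0.9, 0.1]))   # about 0.469 bits per letter
print(entropy([1.0, 0.0]))   # 0.0 bits: a deterministic source carries no information
```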

Average Information (Entropy)
Plotting this for L = 2, we see that on average at most 1 bit of information is delivered per event, and only when both symbols arrive with equal probability.
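A quick scan confirming the claim for L = 2 (kept self-contained rather than reusing the sketch above):

```python
import math

# Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p), scanned over p in (0, 1).
H = lambda p: -p * math.log2(p) - (1 - p) * math.log2(1 - p)
best_p = max((i / 1000 for i in range(1, 1000)), key=H)
print(best_p, H(best_p))  # maximum of 1.0 bit per event, reached at p = 0.5
```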

Average Information (Entropy)
What is the best possible entropy for a multi-symbol alphabet? For L equally probable symbols the entropy is log2(L) bits per symbol, so multi-bit binary symbols built from equally probable, independent random bits are the most efficient information carriers; e.g., 256 symbols formed from 8-bit bytes are fine from an information standpoint.
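A worked check of the byte example (the entropy function is redefined so the snippet stands alone, and the skewed distribution is an arbitrary illustration, not from the text):

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 256 equally probable symbols, e.g., 8-bit bytes of independent fair bits:
print(entropy([1 / 256] * 256))            # 8.0 bits per symbol, the maximum log2(256)
# A skewed byte distribution carries less information per symbol:
print(entropy([0.5] + [0.5 / 255] * 255))  # roughly 5 bits per symbol
```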