CHAPTER 3 SIGNAL SPACE ANALYSIS

INTRODUCTION – THE MODEL

We consider the following model of a generic transmission system:
- A message source transmits one symbol every T seconds.
- Symbols belong to an alphabet of size M: {m1, m2, …, mM}.
  - Binary: the symbols are 0 and 1 (M = 2).
  - Quaternary PCM: the symbols are 00, 01, 10, 11 (M = 4).

TRANSMITTER SIDE

Symbol generation at the message source is probabilistic, with a priori probabilities p1, p2, …, pM. If the symbols are equally likely, the probability that symbol mi will be emitted is

    pi = P(mi emitted) = 1/M,  for i = 1, 2, …, M

SIGNAL SPACE: OVERVIEW

What is a signal space? A vector representation of signals in an N-dimensional orthogonal space.
Why do we need a signal space? It is a means to convert signals to vectors and vice versa, and to calculate signal energies and Euclidean distances between signals.
Why are we interested in Euclidean distances between signals? For detection: the received signal is transformed into a received vector, and the signal with the minimum distance to the received vector is taken as the estimate of the transmitted signal.

The transmitter takes the symbol (data) mi (the digital message source output) and encodes it into a distinct signal si(t). The signal si(t) occupies the whole slot T allotted to symbol mi. si(t) is a real-valued energy signal, i.e., a signal with finite energy.

CHANNEL ASSUMPTIONS

The channel is linear, with a bandwidth wide enough to accommodate the signal si(t) with no (or negligible) distortion. The channel noise w(t) is a zero-mean white Gaussian noise process (AWGN). Since the noise is additive, the received signal may be expressed as:

    x(t) = si(t) + w(t),  0 ≤ t ≤ T

RECEIVER SIDE

The receiver observes the received signal x(t) for a duration of T seconds and makes an estimate of the transmitted signal si(t) (equivalently, of the symbol mi). The process is statistical: the presence of noise causes errors, so the receiver has to be designed to minimize the average probability of symbol error:

    Pe = Σi=1..M pi · P(m̂ ≠ mi | mi sent)

where pi is the probability that symbol mi was sent and P(m̂ ≠ mi | mi sent) is the conditional error probability given that the ith symbol was sent.

GEOMETRIC REPRESENTATION OF SIGNALS

Objective: to represent any set of M energy signals {si(t)} as linear combinations of N orthonormal basis functions, where N ≤ M. Given real-valued energy signals s1(t), s2(t), …, sM(t), each of duration T seconds, we write the expansion

    si(t) = Σj=1..N sij φj(t),  0 ≤ t ≤ T,  i = 1, 2, …, M

where sij is a coefficient of the expansion and φj(t) is a basis function.

The coefficients are given by

    sij = ∫0T si(t) φj(t) dt,  i = 1, …, M,  j = 1, …, N

and the real-valued basis functions φ1(t), …, φN(t) are orthonormal:

    ∫0T φi(t) φj(t) dt = 1 if i = j, and 0 if i ≠ j
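The projection integral above can be checked numerically. The following sketch uses a hypothetical orthonormal basis (scaled sine and cosine over one period) and a signal built with known coefficients, then recovers those coefficients by correlation:

```python
import numpy as np

# Hypothetical example: T = 1 s, two orthonormal basis functions on [0, T),
# sampled finely; the basis choice is an assumption for illustration only.
T, n = 1.0, 100_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)

# A signal s_i(t) built from the basis with known coefficients (3, -2).
s_i = 3 * phi1 - 2 * phi2

# Coefficients recovered by the projection integral s_ij = ∫ s_i(t) φ_j(t) dt.
s_i1 = np.sum(s_i * phi1) * dt
s_i2 = np.sum(s_i * phi2) * dt
print(round(s_i1, 3), round(s_i2, 3))  # 3.0 -2.0
```

Because the basis functions are orthonormal, each projection isolates exactly one coefficient of the expansion.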

The set of coefficients can be viewed as an N-dimensional vector, denoted by si, which bears a one-to-one relationship with the transmitted signal si(t).

a) Synthesizer for generating the signal si(t). b) Analyzer for generating the set of signal vectors si.

So each signal si(t) in the set is completely determined by the vector of its coefficients:

    si = [si1, si2, …, siN]T

Finally, the signal vector si concept can be extended to 2D, 3D, and in general to an N-dimensional Euclidean space. It provides the mathematical basis for the geometric representation of energy signals that is used in noise analysis, and allows the definition of:
- the length of a vector (its norm),
- angles between vectors,
- the squared length ||si||² = siT si (the inner product of si with itself, using matrix transposition).

Illustrating the geometric representation of signals for the case when N = 2 and M = 3 (a two-dimensional space, three signals).

Also: what is the relation between the vector representation of a signal and its energy? Start with the definition of the energy of a signal (5.10):

    Ei = ∫0T si²(t) dt

where si(t) is as in (5.5):

    si(t) = Σj=1..N sij φj(t)

After substituting (5.5) into the energy integral:

    Ei = ∫0T [Σj sij φj(t)] [Σk sik φk(t)] dt

and regrouping:

    Ei = Σj Σk sij sik ∫0T φj(t) φk(t) dt

Since the φj(t) form an orthonormal set, the integral is 1 for j = k and 0 otherwise, so finally:

    Ei = Σj=1..N sij² = ||si||²

The energy of a signal is equal to the squared length of its vector.
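A quick numerical check of this identity, under the same hypothetical two-function basis used earlier: the time-domain energy integral and the squared length of the coefficient vector should agree.

```python
import numpy as np

# Sketch: energy computed in the time domain equals the squared length of
# the coefficient vector. Basis and coefficients are hypothetical.
T, n = 1.0, 100_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)

s_vec = np.array([3.0, -2.0])             # s_i = (s_i1, s_i2)
s_t = s_vec[0] * phi1 + s_vec[1] * phi2   # s_i(t)

E_time = np.sum(s_t**2) * dt   # E_i = ∫ s_i^2(t) dt
E_vec = np.sum(s_vec**2)       # ||s_i||^2 = 9 + 4 = 13
print(round(E_time, 6), round(E_vec, 6))  # both 13.0
```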

FORMULAS FOR TWO SIGNALS

Assume we have a pair of signals si(t) and sj(t), each represented by its vector. Then their inner product over [0, T] is

    ∫0T si(t) sj(t) dt = siT sj

The inner product of the signals is equal to the inner product of their vector representations, and it is invariant to the selection of basis functions.

EUCLIDEAN DISTANCE

The Euclidean distance between two points represented by signal vectors si and sk is ||si − sk||, and its squared value is given by:

    ||si − sk||² = Σj=1..N (sij − skj)² = ∫0T (si(t) − sk(t))² dt

ANGLE BETWEEN TWO SIGNALS

The cosine of the angle Θik between two signal vectors si and sk is equal to the inner product of the two vectors, divided by the product of their norms:

    cos Θik = siT sk / (||si|| ||sk||)

So the two signal vectors are orthogonal if their inner product siT sk is zero (cos Θik = 0).
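Both geometric quantities are direct vector computations. A minimal sketch on a hypothetical pair of 2-D signal vectors:

```python
import numpy as np

# Euclidean distance and angle between two signal vectors
# (hypothetical 2-D example).
s_i = np.array([3.0, 1.0])
s_k = np.array([1.0, 2.0])

d_ik = np.linalg.norm(s_i - s_k)   # ||s_i - s_k||
cos_theta = s_i @ s_k / (np.linalg.norm(s_i) * np.linalg.norm(s_k))

print(round(d_ik**2, 6))       # (3-1)^2 + (1-2)^2 = 5.0
print(round(cos_theta, 4))     # 5 / sqrt(50) = 0.7071
```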

SCHWARZ INEQUALITY

For any pair of energy signals s1(t) and s2(t) (stated here without proof):

    ( ∫ s1(t) s2(t) dt )² ≤ ∫ s1²(t) dt · ∫ s2²(t) dt

with equality if and only if s2(t) = c·s1(t) for some constant c.

GRAM-SCHMIDT ORTHOGONALIZATION PROCEDURE

Assume a set of M energy signals denoted s1(t), s2(t), …, sM(t). Define the first basis function starting with s1(t) as (based on 5.12):

    φ1(t) = s1(t) / √E1

where E1 is the energy of s1(t). Then s1(t) is expressed using the basis function and an energy-related coefficient s11 as

    s1(t) = s11 φ1(t),  with s11 = √E1

Next, using s2(t), define the coefficient s21 as

    s21 = ∫0T s2(t) φ1(t) dt

If we introduce the intermediate function

    g2(t) = s2(t) − s21 φ1(t)

which is orthogonal to φ1(t), we can define the second basis function as

    φ2(t) = g2(t) / √( ∫0T g2²(t) dt )

which, after substituting for g2(t) in terms of s1(t) and s2(t), becomes (see 5.23)

    φ2(t) = (s2(t) − s21 φ1(t)) / √(E2 − s21²)

Note that φ1(t) and φ2(t) are orthonormal:

    ∫0T φ1(t) φ2(t) dt = 0,  ∫0T φ1²(t) dt = ∫0T φ2²(t) dt = 1

And so on for an N-dimensional space. In general, a basis function can be defined using the formula

    φi(t) = gi(t) / √( ∫0T gi²(t) dt ),  i = 1, 2, …, N

where the intermediate functions are

    gi(t) = si(t) − Σj=1..i−1 sij φj(t)

and the coefficients can be defined using

    sij = ∫0T si(t) φj(t) dt,  j = 1, 2, …, i − 1

Special case: for i = 1, gi(t) reduces to si(t).
General case: given the functions gi(t), the basis functions φi(t) = gi(t) / √( ∫0T gi²(t) dt ) form an orthonormal set.
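The recursion above translates directly into code. This is a minimal sketch of Gram-Schmidt on sampled waveforms; the two test signals (a rectangular pulse and a split-phase pulse) are hypothetical examples, not taken from the text:

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Orthonormal basis functions for a list of sampled signals,
    following g_i = s_i - sum_j s_ij * phi_j, phi_i = g_i / sqrt(energy)."""
    basis = []
    for s in signals:
        g = s.astype(float).copy()
        for phi in basis:
            s_ij = np.sum(s * phi) * dt   # projection coefficient
            g -= s_ij * phi
        energy = np.sum(g**2) * dt
        if energy > 1e-12:                # skip linearly dependent signals
            basis.append(g / np.sqrt(energy))
    return basis

T, n = 1.0, 10_000
dt = T / n
t = np.linspace(0.0, T, n, endpoint=False)
s1 = np.ones(n)                          # rectangular pulse, energy T = 1
s2 = np.where(t < T / 2, 1.0, -1.0)      # split-phase pulse, orthogonal to s1
basis = gram_schmidt([s1, s2], dt)

# Check orthonormality: ∫ phi_i phi_j dt = delta_ij
print(round(np.sum(basis[0]**2) * dt, 6),
      round(np.sum(basis[0] * basis[1]) * dt, 6))  # 1.0 0.0
```

Note that the number of basis functions returned can be smaller than the number of input signals when some signals are linearly dependent, which is exactly the N ≤ M property stated earlier.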

CONVERSION OF THE CONTINUOUS AWGN CHANNEL INTO A VECTOR CHANNEL

Suppose that si(t) is not an arbitrary signal, but specifically the signal at the receiver side, defined in accordance with an AWGN channel:

    x(t) = si(t) + w(t),  0 ≤ t ≤ T,  i = 1, 2, …, M

So the output of the jth correlator (Fig. 5.3b) can be defined as:

    xj = ∫0T x(t) φj(t) dt = sij + wj,  j = 1, 2, …, N

Here sij is the deterministic quantity contributed by the transmitted signal si(t), and wj is the sample value of the random variable Wj, due to the noise.

Now consider a random process X′(t) with sample function x′(t), related to the received signal x(t) as follows:

    x′(t) = x(t) − Σj=1..N xj φj(t)

Using (5.28), (5.29), (5.30) and the expansion (5.5), we get

    x′(t) = w(t) − Σj=1..N wj φj(t) = w′(t)

which means that the sample function x′(t) depends only on the channel noise!

The received signal can therefore be expressed as:

    x(t) = Σj=1..N xj φj(t) + x′(t)

NOTE: this is an expansion similar to the one in (5.5), but it is random, due to the additive noise.

STATISTICAL CHARACTERIZATION

The received signal (the output of the correlator of Fig. 5.3b) is a random signal, so to describe it we need statistical methods: mean and variance. The assumptions are:
- X(t) denotes the random process, a sample function of which is represented by the received signal x(t).
- Xj denotes the random variable whose sample value is represented by the correlator output xj, j = 1, 2, …, N.
We have assumed AWGN, so the noise is Gaussian; hence X(t) is a Gaussian process, and each Xj, being a Gaussian random variable, is described fully by its mean value and variance.

MEAN VALUE

Let Wj denote the random variable, represented by its sample value wj, produced by the jth correlator in response to the Gaussian noise component w(t). It has zero mean (by definition of the AWGN model), so the mean of Xj depends only on sij:

    E[Xj] = E[sij + Wj] = sij + E[Wj] = sij

VARIANCE

Starting from the definition, and substituting using (5.29) and (5.31):

    Var[Xj] = E[(Xj − sij)²] = E[Wj²]
            = E[ ∫0T ∫0T w(t) w(u) φj(t) φj(u) dt du ]
            = ∫0T ∫0T φj(t) φj(u) Rw(t, u) dt du

where Rw(t, u) = E[w(t) w(u)] is the autocorrelation function of the noise process.

Because the noise is stationary with a constant power spectral density, the autocorrelation can be expressed as

    Rw(t, u) = (N0/2) δ(t − u)

After substitution, the variance becomes

    Var[Xj] = (N0/2) ∫0T φj²(t) dt

and since φj(t) has unit energy, we finally have

    Var[Xj] = N0/2

The correlator outputs, denoted by Xj, all have variance equal to the power spectral density N0/2 of the noise process W(t).
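This result can be checked by simulation. The sketch below is an assumption-laden discrete approximation (not from the text): sampled white noise with per-sample variance N0/(2·dt) mimics a continuous process with flat PSD N0/2, and its projection onto a unit-energy basis function should have variance close to N0/2.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, N0 = 1.0, 1_000, 2.0
dt = T / n
t = np.linspace(0.0, T, n, endpoint=False)
phi = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)   # unit-energy basis function

# Discrete-time white noise: per-sample variance N0/(2*dt) approximates a
# continuous white process with PSD N0/2.
trials = 20_000
w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, n))

W_j = (w * phi).sum(axis=1) * dt                   # correlator outputs
print(round(W_j.var(), 2))  # close to N0/2 = 1.0
```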

PROPERTIES

The correlator outputs Xj are mutually uncorrelated:

    Cov[Xj, Xk] = 0,  j ≠ k

Since the Xj are Gaussian, being uncorrelated means they are also statistically independent, which, for a memoryless channel, allows the conditional density of the correlator outputs to be written as a product.

Define (construct) a vector X of N random variables X1, X2, …, XN, whose elements are independent Gaussian random variables with mean values sij (the deterministic part of the correlator output, determined by the transmitted signal) and variance N0/2 (the random part: the noise added by the channel). Since X1, X2, …, XN are statistically independent, we can express the conditional probability density of X, given si(t) (correspondingly, symbol mi), as a product of the conditional density functions fXj of its individual elements.

NOTE: this amounts to finding an expression for the probability of a received vector given that a specific symbol was sent, assuming a memoryless channel.

…that is:

    fX(x | mi) = Πj=1..N fXj(xj | mi),  i = 1, 2, …, M

where the vector x and the scalar xj are sample values of the random vector X and the random variable Xj.

The vector x is called the observation vector; the scalar xj is called an observable element. They are sample values of the random vector X and the random variable Xj, respectively.

Since each Xj is Gaussian with mean sij and variance N0/2, we can substitute into (5.44) to get (5.46):

    fX(x | mi) = (π N0)^(−N/2) exp( −(1/N0) Σj=1..N (xj − sij)² ),  i = 1, 2, …, M
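The product density above is easy to evaluate directly. A minimal sketch, with hypothetical signal vectors chosen for illustration: the observation closer to a signal point gets the larger likelihood.

```python
import numpy as np

def likelihood(x, s_i, N0):
    """f_X(x | m_i) for the AWGN vector channel: a product of N Gaussian
    factors, each with mean s_ij and variance N0/2."""
    x, s_i = np.asarray(x, float), np.asarray(s_i, float)
    N = x.size
    return (np.pi * N0) ** (-N / 2) * np.exp(-np.sum((x - s_i) ** 2) / N0)

N0 = 2.0
x = np.array([0.9, -1.1])  # observation near the signal point (1, -1)
print(likelihood(x, [1.0, -1.0], N0) > likelihood(x, [-1.0, 1.0], N0))  # True
```

Note that the likelihood depends on x only through the squared Euclidean distance to si, which is what makes minimum-distance detection optimal here.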

If we go back to the formulation of the received signal through an AWGN channel (5.34), only the projections of the noise onto the basis functions of the signal set {si(t)}i=1..M affect the statistics relevant to the detection problem; the observation vector that we have constructed fully captures this part.

Finally, the AWGN channel is equivalent to an N-dimensional vector channel, described by the observation vector

    x = [x1, x2, …, xN]T,  xj = sij + wj,  j = 1, 2, …, N

BIG PICTURE: DETECTION UNDER AWGN

ADDITIVE WHITE GAUSSIAN NOISE (AWGN)

Thermal noise is described by a zero-mean Gaussian random process n(t) that adds onto the signal ("additive"). Its power spectral density is flat, hence "white":

    Sn(f) = N0/2  [W/Hz]

Its autocorrelation function is a spike at zero lag, i.e. samples are uncorrelated at any non-zero lag:

    Rn(τ) = (N0/2) δ(τ)

and its amplitude has a Gaussian probability density function:

    p(n) = (1/√(2πσ²)) exp(−n²/(2σ²))

EFFECT OF NOISE IN SIGNAL SPACE

The noise cloud around each signal point falls off exponentially (Gaussian). The vector viewpoint can be used in signal space, with a random noise vector w added to the signal vector.

MAXIMUM LIKELIHOOD (ML) DETECTION: SCALAR CASE

Compare the "likelihoods" of the received value under each hypothesis. Assuming both symbols are equally likely, uA is chosen if

    p(x | uA) ≥ p(x | uB)

Taking the log-likelihood, the Gaussian densities reduce this to a simple distance criterion: choose the symbol whose signal point is closest to the received value.
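The distance criterion generalizes directly to the vector channel. A sketch of a minimum-distance ML detector over a hypothetical two-point constellation (antipodal signals on one axis):

```python
import numpy as np

def ml_detect(x, constellation):
    """Return the index of the signal vector closest to the observation x
    (equivalent to ML detection for equally likely symbols in AWGN)."""
    d2 = [np.sum((np.asarray(x) - np.asarray(s)) ** 2) for s in constellation]
    return int(np.argmin(d2))

constellation = [np.array([1.0, 0.0]),    # m_1
                 np.array([-1.0, 0.0])]   # m_2

print(ml_detect([0.7, 0.2], constellation))    # 0 -> decides m_1
print(ml_detect([-0.4, -0.1], constellation))  # 1 -> decides m_2
```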

CORRELATOR RECEIVER

The matched filter output at the sampling time can be realized as the correlator output: matched filtering, i.e. convolution with si*(T − t), sampled at t = T, simplifies to integration against si*(t), i.e. correlation, or an inner product. Recall that the correlation operation is the projection of the received signal onto the signal space. Key idea: reject the noise outside this space as irrelevant, thereby maximizing the signal-to-noise ratio.
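The matched-filter/correlator equivalence can be demonstrated numerically. This sketch uses a hypothetical real reference signal (one sine cycle) and noisy received waveform; the matched filter h(t) = s(T − t), sampled at t = T, should equal the correlator output:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
dt = 1.0 / n
s = np.sin(2 * np.pi * np.arange(n) / n)   # reference signal on [0, T)
x = s + rng.normal(0.0, 0.5, n)            # noisy received waveform

corr = np.sum(x * s) * dt                  # correlator: ∫ x(t) s(t) dt
h = s[::-1]                                # matched filter: h(t) = s(T - t)
mf = np.convolve(x, h)[n - 1] * dt         # filter output sampled at t = T

print(abs(corr - mf) < 1e-9)  # True: both receivers give the same statistic
```

Index n − 1 of the full convolution is exactly the sum Σ x[k]·s[k], which is why the two numbers coincide up to floating-point error.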

IRRELEVANCE THEOREM: NOISE OUTSIDE THE SIGNAL SPACE

The noise PSD is flat ("white"), so the total noise power across the whole spectrum is infinite. We care only about the noise projected onto the finite signal dimensions (e.g., within the bandwidth of interest); the noise component outside the signal space is irrelevant to detection.

ASIDE: THE CORRELATION EFFECT

Correlation is a measure of the similarity between two signals as a function of the time shift ("lag", τ) between them. Correlation is maximum when the two signals are similar in shape and in phase (i.e., 'unshifted' with respect to each other): their product is then everywhere positive, like constructive interference. The breadth of the correlation function (where it has significant value) shows for how long the signals remain similar.

A CORRELATION RECEIVER

[Block diagram: the received signal feeds two correlator branches (each an integrator); the outputs are sampled every Tb seconds, differenced (− / +), and passed to a threshold device (A/D).]

INTEGRATE AND DUMP CORRELATION RECEIVER

[Block diagram: signal z(t) plus white Gaussian noise n(t) → filter to limit noise power → high-gain amplifier → RC integrator whose capacitor is shorted ("dumped") every Tb seconds → threshold device (A/D).]

The bandwidth of the filter preceding the integrator is assumed to be wide enough to pass z(t) without distortion.
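The integrate-and-dump operation can be sketched in a short simulation. All parameters here (bipolar signaling with amplitude ±A, bit period Tb, noise level) are illustrative assumptions, not values from the text: integrate the noisy waveform over each bit period, dump, and threshold at zero.

```python
import numpy as np

rng = np.random.default_rng(2)
A, Tb, n = 1.0, 1.0, 200           # amplitude, bit period, samples per bit
dt = Tb / n
bits = rng.integers(0, 2, 50)

# Transmitted waveform: +A for a 1, -A for a 0, held for Tb seconds.
z = np.repeat(np.where(bits == 1, A, -A), n)
x = z + rng.normal(0.0, 2.0, z.size)          # add Gaussian noise

# Integrate over each Tb (then "dump" and restart), threshold at 0.
integrals = x.reshape(-1, n).sum(axis=1) * dt
decided = (integrals > 0).astype(int)
print((decided == bits).mean())   # fraction of correct decisions
```

Integration averages the noise down over each bit period, which is why the decisions are reliable even though individual samples are very noisy.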