CHAPTER 5: SIGNAL SPACE ANALYSIS
Digital Communication Systems 2012, R. Sokullu
Outline
5.1 Introduction
5.2 Geometric Representation of Signals
    – Gram-Schmidt Orthogonalization Procedure
5.3 Conversion of the AWGN Channel into a Vector Channel
5.4 Maximum Likelihood Decoding
5.5 Correlation Receiver
5.6 Probability of Error
Introduction – the Model
We consider the following model of a generic digital transmission system:
- A message source emits one symbol every T seconds.
- The symbols belong to an alphabet of M symbols: m1, m2, ..., mM.
  - Binary source: the symbols are 0 and 1 (M = 2).
  - Quaternary PCM source: the symbols are 00, 01, 10, 11 (M = 4).
Transmitter Side
Symbol generation is probabilistic, with a priori probabilities p1, p2, ..., pM. When the symbols are equally likely, the probability that symbol mi is emitted is
$$p_i = P(m_i) = \frac{1}{M}, \qquad i = 1, 2, \ldots, M$$
The transmitter takes the symbol mi (the digital message source output) and encodes it into a distinct signal si(t). The signal si(t) occupies the whole slot T allotted to symbol mi, and si(t) is a real-valued energy signal (a signal with finite energy).
Channel Assumptions:
- The channel is linear, with a bandwidth wide enough to accommodate the signal si(t) with no (or negligible) distortion.
- The channel noise w(t) is a sample function of a zero-mean white Gaussian noise process (AWGN). The noise is additive, so the received signal may be expressed as
$$x(t) = s_i(t) + w(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M$$
Receiver Side
- Observes the received signal x(t) for a duration of T seconds.
- Makes an estimate of the transmitted signal si(t) (equivalently, of the symbol mi).
- The process is statistical: the presence of noise causes errors.
So the receiver has to be designed to minimize the average probability of symbol error
$$P_e = \sum_{i=1}^{M} p_i \, P(\hat{m} \ne m_i \mid m_i \text{ sent})$$
where $P(\hat{m} \ne m_i \mid m_i)$ is the conditional error probability given that the ith symbol was sent.
5.2 Geometric Representation of Signals
Objective: to represent any set of M energy signals {si(t)} as linear combinations of N orthonormal basis functions, where N ≤ M.
Given the real-valued energy signals s1(t), s2(t), ..., sM(t), each of duration T seconds, write
$$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M \tag{5.5}$$
where si(t) is the energy signal, φj(t) an orthonormal basis function, and sij the corresponding coefficient.
Coefficients: each sij is obtained by projecting si(t) onto the corresponding basis function,
$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \qquad i = 1, \ldots, M, \quad j = 1, \ldots, N$$
Real-valued basis functions: the φj(t) are orthonormal,
$$\int_0^T \phi_i(t)\,\phi_j(t)\,dt = \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases}$$
The set of coefficients {sij}, j = 1, ..., N, can be viewed as an N-dimensional vector, denoted by
$$\mathbf{s}_i = \begin{bmatrix} s_{i1} \\ s_{i2} \\ \vdots \\ s_{iN} \end{bmatrix}, \qquad i = 1, 2, \ldots, M$$
which bears a one-to-one relationship with the transmitted signal si(t).
Figure 5.3: (a) Synthesizer for generating the signal si(t). (b) Analyzer for generating the set of signal vectors si.
Each signal si(t) in the set is completely determined by its vector of coefficients si.
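Not part of the original slides: a minimal numerical sketch of the analyzer/synthesizer pair of Figure 5.3, assuming waveforms sampled on a uniform grid. The basis functions and coefficient values below are illustrative only.

```python
import numpy as np

# Minimal sketch of the analyzer/synthesizer of Fig. 5.3, assuming the
# signals are sampled with step dt over the symbol interval [0, T].
T, fs = 1.0, 1000                      # symbol duration and sampling rate (illustrative)
dt = 1.0 / fs
t = np.arange(0, T, dt)

# Two orthonormal basis functions on [0, T] (unit energy by construction)
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)
basis = np.vstack([phi1, phi2])        # shape (N, number of samples)

# An example energy signal built from the basis
s = 3.0 * phi1 - 1.5 * phi2

# Analyzer: s_ij = integral of s(t) * phi_j(t) dt  (numerical correlation)
s_vec = basis @ s * dt
print(s_vec)                           # ~ [ 3.0, -1.5 ]

# Synthesizer: rebuild s(t) from its coefficient vector
s_rebuilt = s_vec @ basis
print(np.allclose(s, s_rebuilt, atol=1e-6))   # True: one-to-one relationship
```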
Finally, the signal vector concept extends naturally from 2D and 3D to an N-dimensional Euclidean space. This provides the mathematical basis for the geometric representation of energy signals used in noise analysis, and allows us to define:
- the length (norm) of a vector,
- the angle between two vectors,
- the squared length of si (the inner product of si with itself, siT si),
using standard vector and matrix operations such as transposition.
Figure 5.4: Illustrating the geometric representation of signals for the case when N = 2 and M = 3 (two-dimensional space, three signals).
Also, what is the relation between the vector representation of a signal and its energy? Start with the definition of the energy of a signal,
$$E_i = \int_0^T s_i^2(t)\,dt \tag{5.10}$$
where si(t) is expanded as in (5.5).
After substituting (5.5) into (5.10):
$$E_i = \int_0^T \left( \sum_{j=1}^{N} s_{ij}\,\phi_j(t) \right)\left( \sum_{k=1}^{N} s_{ik}\,\phi_k(t) \right) dt$$
After regrouping (interchanging summation and integration):
$$E_i = \sum_{j=1}^{N} \sum_{k=1}^{N} s_{ij}\, s_{ik} \int_0^T \phi_j(t)\,\phi_k(t)\,dt$$
The φj(t) are orthonormal, so finally we have:
$$E_i = \sum_{j=1}^{N} s_{ij}^2 = \|\mathbf{s}_i\|^2$$
The energy of a signal is equal to the squared length of its vector.
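Continuing the illustrative sketch above (same variables s, s_vec, dt), this claim can be checked numerically: the waveform energy (5.10) matches the squared length of the coefficient vector up to numerical integration error.

```python
# Energy of the waveform vs. squared length of its coefficient vector
E_waveform = np.sum(s**2) * dt        # integral of s^2(t) dt
E_vector   = np.sum(s_vec**2)         # ||s_i||^2
print(E_waveform, E_vector)           # both ~ 3.0^2 + 1.5^2 = 11.25
```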
Formulas for two signals
Assume we have a pair of signals si(t) and sk(t), each represented by its vector. Then their inner product over [0, T] is
$$\int_0^T s_i(t)\,s_k(t)\,dt = \mathbf{s}_i^T \mathbf{s}_k$$
The inner product of the signals is equal to the inner product of their vector representations, and it is invariant to the choice of basis functions.
Euclidean Distance
The Euclidean distance between two points represented by the signal vectors si and sk is ||si − sk||, and its squared value is given by
$$\|\mathbf{s}_i - \mathbf{s}_k\|^2 = \sum_{j=1}^{N} (s_{ij} - s_{kj})^2 = \int_0^T \big(s_i(t) - s_k(t)\big)^2\,dt$$
Angle Between Two Signals
The cosine of the angle θik between two signal vectors si and sk is equal to the inner product of the two vectors divided by the product of their norms:
$$\cos\theta_{ik} = \frac{\mathbf{s}_i^T \mathbf{s}_k}{\|\mathbf{s}_i\|\,\|\mathbf{s}_k\|}$$
So the two signal vectors are orthogonal if their inner product siT sk is zero (cos θik = 0).
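A short numerical illustration of the two formulas (the vector values are arbitrary, not from the text); this particular pair happens to be orthogonal, so the angle comes out as 90 degrees.

```python
import numpy as np

# Distance and angle between two example signal vectors (N = 2)
s_i = np.array([3.0, -1.5])
s_k = np.array([1.0,  2.0])

dist_sq   = np.sum((s_i - s_k) ** 2)                               # ||s_i - s_k||^2
cos_theta = s_i @ s_k / (np.linalg.norm(s_i) * np.linalg.norm(s_k))

print(dist_sq)                             # 4 + 12.25 = 16.25
print(np.degrees(np.arccos(cos_theta)))    # 90.0: inner product is zero
```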
Schwarz Inequality
For any pair of energy signals s1(t) and s2(t),
$$\left( \int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt \right)^{2} \le \int_{-\infty}^{\infty} s_1^2(t)\,dt \int_{-\infty}^{\infty} s_2^2(t)\,dt$$
with equality if and only if s2(t) = c s1(t) for some constant c. We accept this without proof.
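Read in vector form (an observation consistent with the angle formula above, not stated explicitly on the slide), the Schwarz inequality simply bounds the cosine term by one:
$$\left(\mathbf{s}_1^T \mathbf{s}_2\right)^2 \le \|\mathbf{s}_1\|^2\,\|\mathbf{s}_2\|^2 \quad\Longleftrightarrow\quad |\cos\theta_{12}| \le 1$$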
Gram-Schmidt Orthogonalization Procedure
Assume a set of M energy signals denoted by s1(t), s2(t), ..., sM(t). Define the first basis function from s1(t) as
$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}}$$
where E1 is the energy of s1(t) (based on 5.12). Then s1(t) is expressed using this basis function and an energy-related coefficient s11:
$$s_1(t) = \sqrt{E_1}\,\phi_1(t) = s_{11}\,\phi_1(t), \qquad s_{11} = \sqrt{E_1}$$
Next, using s2(t), define the coefficient s21 as
$$s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt$$
If we introduce the intermediate function g2(t), which is orthogonal to φ1(t),
$$g_2(t) = s_2(t) - s_{21}\,\phi_1(t)$$
we can define the second basis function φ2(t) as
$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}}$$
which, after substituting g2(t) in terms of s1(t) and s2(t), becomes
$$\phi_2(t) = \frac{s_2(t) - s_{21}\,\phi_1(t)}{\sqrt{E_2 - s_{21}^2}}$$
where E2 is the energy of s2(t). Note that φ1(t) and φ2(t) are orthonormal, that is
$$\int_0^T \phi_2^2(t)\,dt = 1 \qquad \text{and} \qquad \int_0^T \phi_1(t)\,\phi_2(t)\,dt = 0$$
(look at 5.23).
And so on for the N-dimensional space. In general, a basis function can be defined using the following formula:
$$\phi_i(t) = \frac{g_i(t)}{\sqrt{\int_0^T g_i^2(t)\,dt}}, \qquad i = 1, 2, \ldots, N$$
where the intermediate function gi(t) and the coefficients sij are defined by
$$g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t), \qquad s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \quad j = 1, \ldots, i-1$$
Special case: for i = 1, gi(t) reduces to si(t).
General case: given the functions gi(t), the set of basis functions
$$\phi_i(t) = \frac{g_i(t)}{\sqrt{\int_0^T g_i^2(t)\,dt}}, \qquad i = 1, 2, \ldots, N$$
forms an orthonormal set.
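A sketch of the procedure on sampled waveforms (not from the slides; the function name and array layout are illustrative assumptions). It follows the recursion above: project each signal onto the basis functions found so far, form the residual gi(t), and normalize it if it carries any energy.

```python
import numpy as np

def gram_schmidt(signals, dt, tol=1e-9):
    """Gram-Schmidt orthogonalization of sampled energy signals.

    signals: array of shape (M, n_samples), one signal per row, sampled with
             step dt over [0, T].  Returns (basis, coeffs), where basis has
             shape (N, n_samples) with N <= M orthonormal rows, and
             coeffs[i, j] = s_ij, so that signals ~= coeffs @ basis.
    """
    basis = []
    coeffs = np.zeros((len(signals), len(signals)))
    for i, s in enumerate(signals):
        g = s.astype(float).copy()
        for j, phi in enumerate(basis):
            s_ij = np.sum(s * phi) * dt        # s_ij = integral of s_i(t) phi_j(t) dt
            coeffs[i, j] = s_ij
            g -= s_ij * phi                    # g_i(t) = s_i(t) - sum_j s_ij phi_j(t)
        energy = np.sum(g**2) * dt             # integral of g_i^2(t) dt
        if energy > tol:                       # skip linearly dependent signals
            coeffs[i, len(basis)] = np.sqrt(energy)
            basis.append(g / np.sqrt(energy))  # phi_i(t) = g_i(t) / sqrt(energy)
    return np.array(basis), coeffs[:, :len(basis)]
```

For M linearly independent signals this returns N = M orthonormal functions; a signal that is a linear combination of earlier ones contributes no new basis function, so N < M. The rows of coeffs are the signal vectors si of Section 5.2.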
5.3 Conversion of the Continuous AWGN Channel into a Vector Channel
Suppose now that the input to the bank of correlators (the analyzer of Fig. 5.3b) is not the clean signal si(t) but the received signal x(t), defined in accordance with the AWGN channel:
$$x(t) = s_i(t) + w(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M$$
So the output of the jth correlator can be defined as
$$x_j = \int_0^T x(t)\,\phi_j(t)\,dt = s_{ij} + w_j, \qquad j = 1, 2, \ldots, N$$
In the correlator output xj = sij + wj, the term sij is a deterministic quantity contributed by the transmitted signal si(t), while wj is the sample value of a random variable Wj due to the channel noise.
Now consider a new random process X′(t) whose sample function x′(t) is related to the received signal x(t) as follows:
$$x'(t) = x(t) - \sum_{j=1}^{N} x_j\,\phi_j(t)$$
Using 5.28, 5.29 and 5.30 together with the expansion 5.5, we get
$$x'(t) = s_i(t) + w(t) - \sum_{j=1}^{N} (s_{ij} + w_j)\,\phi_j(t) = w(t) - \sum_{j=1}^{N} w_j\,\phi_j(t) = w'(t)$$
which means that the sample function x′(t) depends only on the channel noise!
The received signal can therefore be expressed as
$$x(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + w'(t), \qquad 0 \le t \le T$$
NOTE: this is an expansion similar to the one in 5.5, but it is random, due to the additive noise.
Statistical Characterization
The correlator outputs of Fig. 5.3b are random; to describe them we use statistical methods, namely mean and variance. The assumptions are:
- X(t) denotes the random process, a sample function of which is the received signal x(t).
- Xj denotes the random variable whose sample value is the correlator output xj, j = 1, 2, ..., N.
We have assumed an AWGN channel, so the noise is Gaussian; hence X(t) is a Gaussian process and, being a Gaussian random variable, each Xj is fully described by its mean value and variance.
Mean Value
Let Wj denote the random variable, with sample value wj, produced by the jth correlator in response to the Gaussian noise component w(t). It has zero mean (by definition of the AWGN model), so the mean of Xj depends only on sij:
$$m_{X_j} = E[X_j] = E[s_{ij} + W_j] = s_{ij} + E[W_j] = s_{ij}$$
Variance
Starting from the definition and substituting using 5.29 and 5.31:
$$\sigma_{X_j}^2 = \operatorname{var}[X_j] = E\big[(X_j - s_{ij})^2\big] = E[W_j^2]$$
$$= E\!\left[\int_0^T\!\!\int_0^T w(t)\,w(u)\,\phi_j(t)\,\phi_j(u)\,dt\,du\right] = \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,R_w(t,u)\,dt\,du$$
where Rw(t, u) = E[w(t)w(u)] is the autocorrelation function of the noise process.
Because the noise is stationary with a constant power spectral density N0/2, the autocorrelation function can be expressed as
$$R_w(t, u) = \frac{N_0}{2}\,\delta(t - u)$$
After substituting this into the expression for the variance we get
$$\sigma_{X_j}^2 = \frac{N_0}{2} \int_0^T \phi_j^2(t)\,dt$$
and since φj(t) has unit energy, we finally have
$$\sigma_{X_j}^2 = \frac{N_0}{2} \qquad \text{for all } j$$
All correlator outputs Xj have variance equal to the power spectral density N0/2 of the noise process W(t).
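A Monte Carlo sketch of this result (not from the slides): white noise is approximated in discrete time by independent Gaussian samples of variance N0/(2·dt), the correlator is a numerical integral, and the empirical mean and variance of its output are compared with sij and N0/2. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, fs, N0 = 1.0, 200, 2.0
dt = 1.0 / fs
t = np.arange(0, T, dt)

phi_j = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)   # unit-energy basis function
s_ij = 3.0
s_i = s_ij * phi_j                                   # transmitted waveform (one basis term)

trials = 10000
# discrete-time stand-in for white Gaussian noise of PSD N0/2
w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, t.size))
x = s_i + w                                          # received waveforms
x_j = x @ phi_j * dt                                 # correlator outputs, one per trial

print(x_j.mean())   # ~ s_ij = 3.0
print(x_j.var())    # ~ N0/2 = 1.0
```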
Properties (stated without proof):
- The correlator outputs Xj are mutually uncorrelated.
- Being Gaussian, the Xj are therefore statistically independent, and for a memoryless channel the joint conditional density of the observation factors into a product of the individual densities (see below).
Define (construct) a vector X of the N random variables X1, X2, ..., XN. Its elements are independent Gaussian random variables with mean values sij (the deterministic part of the correlator output, determined by the transmitted signal) and variance N0/2 (the random part, the noise added by the channel). Since the elements X1, X2, ..., XN of X are statistically independent, we can express the conditional probability density of X, given si(t) (correspondingly, symbol mi), as the product of the conditional density functions fXj of its individual elements.
NOTE: this amounts to finding an expression for the probability of a received observation given that a specific symbol was sent, assuming a memoryless channel.
…that is,
$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = \prod_{j=1}^{N} f_{X_j}(x_j \mid m_i), \qquad i = 1, 2, \ldots, M \tag{5.44}$$
where the vector x and the scalar xj are sample values of the random vector X and the random variable Xj.
Terminology:
- The vector x is called the observation vector.
- The scalar xj is called an observable element.
- The vector x and the scalar xj are sample values of the random vector X and the random variable Xj.
Since each Xj is Gaussian with mean sij and variance N0/2,
$$f_{X_j}(x_j \mid m_i) = \frac{1}{\sqrt{\pi N_0}} \exp\!\left[ -\frac{(x_j - s_{ij})^2}{N_0} \right], \qquad j = 1, 2, \ldots, N$$
we can substitute into 5.44 to get 5.46:
$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = (\pi N_0)^{-N/2} \exp\!\left[ -\frac{1}{N_0} \sum_{j=1}^{N} (x_j - s_{ij})^2 \right], \qquad i = 1, 2, \ldots, M \tag{5.46}$$
If we go back to the formulation of the received signal through an AWGN channel (5.34),
$$x(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + w'(t)$$
we see that only the projections of the noise onto the basis functions of the signal set {si(t)}, i = 1, ..., M, affect the statistics relevant to the detection problem; the observation vector we have constructed fully defines this part.
Finally, the AWGN channel is equivalent to an N-dimensional vector channel, described by the observation vector
$$\mathbf{x} = \mathbf{s}_i + \mathbf{w}, \qquad i = 1, 2, \ldots, M$$
where si is the transmitted signal vector and w is the noise vector whose elements are independent Gaussian random variables of zero mean and variance N0/2.
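A minimal sketch of what the vector channel buys us (the constellation and observation values are hypothetical, not from the slides): evaluating the conditional likelihood of eq. 5.46 for each candidate signal vector shows that, since the exponent is −‖x − si‖²/N0, the most likely symbol is simply the one whose vector lies closest to the observation. Maximizing this likelihood is the subject of the next section.

```python
import numpy as np

def log_likelihood(x, s_i, N0):
    """Log of f_X(x | m_i) from eq. 5.46 for an N-dimensional observation."""
    N = len(x)
    return -0.5 * N * np.log(np.pi * N0) - np.sum((x - s_i) ** 2) / N0

N0 = 2.0
signal_vectors = np.array([[ 1.0,  1.0],     # hypothetical 2-D constellation
                           [ 1.0, -1.0],
                           [-1.0,  1.0],
                           [-1.0, -1.0]])
x = np.array([0.8, -1.3])                    # an example observation vector

ll = np.array([log_likelihood(x, s, N0) for s in signal_vectors])
print(ll.argmax())                           # index 1: the nearest vector, (1, -1)
```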
5.4 Maximum Likelihood Decoding
(to be continued…)