EE 551/451, Fall 2006: Communication Systems
Zhu Han, Department of Electrical and Computer Engineering
Class 15, Oct. 10th, 2006
Outline
- Homework
- Exam format
- Second-half schedule: Chapter 7, Chapter 16, Chapter 8, Chapter 9, standards
- Estimation and detection (this class): Chapter 14, not required
  - Estimation theory, methods, and examples
  - Detection theory, methods, and examples
- Information theory (next Tuesday): Chapter 15, not required
Estimation Theory
Consider a linear process
  y = Hθ + n
where y is the observed data, θ is the transmitted information, and n is additive noise.
If θ is known but H is unknown, estimation is the problem of finding the statistically optimal H, given y and knowledge of the noise properties.
If H is known, detection is the problem of finding the most likely transmitted information θ, given y and knowledge of the noise properties.
In a practical system, these two steps are conducted iteratively: the channel is estimated to track its changes, and the data are then transmitted and detected.
Different Approaches for Estimation
- Minimum-variance unbiased estimators
- Subspace estimators
- Least squares: has no statistical basis
- Maximum likelihood: uses knowledge of the noise PDF
- Maximum a posteriori: uses prior information about θ
Least Squares Estimator
Least squares:
  θ_LS = argmin_θ ||y − Hθ||^2
This is the natural estimator: we want the solution to match the observation. It does not use any information about the noise.
There is a simple closed-form solution (a.k.a. the pseudo-inverse):
  θ_LS = (H^T H)^{-1} H^T y
What if we know something about the noise? Say we know Pr(n)…
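A minimal numerical sketch of the pseudo-inverse solution in Python (the system matrix, true parameters, and noise level below are illustrative, not from the slides):

import numpy as np

rng = np.random.default_rng(0)
n_obs, n_params = 100, 3
H = rng.standard_normal((n_obs, n_params))             # known system matrix
theta_true = np.array([1.0, -2.0, 0.5])                # parameters to recover
y = H @ theta_true + 0.1 * rng.standard_normal(n_obs)  # y = H theta + n

# Pseudo-inverse solution theta_LS = (H^T H)^{-1} H^T y
theta_ls = np.linalg.solve(H.T @ H, H.T @ y)
print(theta_ls)  # close to theta_true

In practice np.linalg.lstsq(H, y) is preferred over forming H^T H explicitly, for numerical stability.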
Maximum Likelihood Estimator
Simple idea: we want to maximize Pr(y|θ).
Write Pr(n) = e^{-L(n)} with n = y − Hθ, so that Pr(n) = Pr(y|θ) = e^{-L(y,θ)}; L is called the likelihood function.
If n is white Gaussian, Pr(n) = e^{-||n||^2 / 2σ^2} and L(y,θ) = ||y − Hθ||^2 / 2σ^2, so
  θ_ML = argmax_θ Pr(y|θ) = argmin_θ L(y,θ) = argmin_θ ||y − Hθ||^2 / 2σ^2
This is the same as least squares!
Maximum Likelihood Estimator
But if the noise is jointly Gaussian with covariance matrix C = E[n n^T], then
  Pr(n) = e^{-(1/2) n^T C^{-1} n}
  L(y,θ) = (1/2) (y − Hθ)^T C^{-1} (y − Hθ)
  θ_ML = argmin_θ (1/2) (y − Hθ)^T C^{-1} (y − Hθ)
This also has a closed-form solution:
  θ_ML = (H^T C^{-1} H)^{-1} H^T C^{-1} y
If n is not Gaussian at all, ML estimators become complicated and non-linear. Fortunately, channel noise is Gaussian in most systems.
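A sketch of this colored-noise (generalized least squares) solution with illustrative values; the AR(1)-style covariance below is an assumption made for the example:

import numpy as np

rng = np.random.default_rng(1)
n_obs, n_params = 50, 2
H = rng.standard_normal((n_obs, n_params))
theta_true = np.array([2.0, -1.0])

# Correlated Gaussian noise with covariance C[i, j] = 0.9^|i - j|
idx = np.arange(n_obs)
C = 0.9 ** np.abs(idx[:, None] - idx[None, :])
y = H @ theta_true + rng.multivariate_normal(np.zeros(n_obs), C)

# theta_ML = (H^T C^{-1} H)^{-1} H^T C^{-1} y, via solves instead of inverses
Ci_H = np.linalg.solve(C, H)
theta_ml = np.linalg.solve(H.T @ Ci_H, Ci_H.T @ y)
print(theta_ml)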
Estimation Example: Denoising
Suppose we have a noisy signal y and wish to obtain the noiseless signal x, where y = x + n.
Can we use estimation theory to find x? Try H = I and θ = x in the linear model. Both the LS and ML estimators simply give x = y! We need a more powerful model.
Suppose x can be approximated by a polynomial, i.e., a mixture of the first p powers of r:
  x = Σ_{i=0}^{p} a_i r^i
Example: Denoising
With x = Σ_{i=0}^{p} a_i r^i, the model y = Ha + n becomes

  [y_1]   [1  r_1  ...  r_1^p] [a_0]   [n_1]
  [y_2] = [1  r_2  ...  r_2^p] [a_1] + [n_2]
  [ ⋮ ]   [⋮    ⋮         ⋮  ] [ ⋮ ]   [ ⋮ ]
  [y_n]   [1  r_n  ...  r_n^p] [a_p]   [n_n]

and the least-squares estimate is θ_LS = (H^T H)^{-1} H^T y.
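A sketch of this polynomial denoising fit (the signal, degree, and noise level below are made up for illustration):

import numpy as np

rng = np.random.default_rng(2)
r = np.linspace(0.0, 1.0, 200)                 # sample locations
x = 1.0 + 2.0 * r - 3.0 * r**2                 # noiseless signal (degree 2 here)
y = x + 0.05 * rng.standard_normal(r.size)     # noisy observation

p = 2
H = np.vander(r, p + 1, increasing=True)       # columns are 1, r, ..., r^p
a_hat, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares coefficients
x_denoised = H @ a_hat                         # smooth estimate of x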
Maximum a Posteriori (MAP) Estimate
This is an example of using prior information about the signal. Priors are generally expressed in the form of a PDF Pr(x).
Once the likelihood L(x) and the prior are known, we have complete statistical knowledge. LS/ML are suboptimal in the presence of a prior; MAP (a.k.a. Bayesian) estimates are optimal.
Bayes' theorem:
  Pr(x|y) = Pr(y|x) Pr(x) / Pr(y)
  (posterior = likelihood × prior / evidence)
Maximum a Posteriori (Bayesian) Estimate
Consider the class of linear systems y = Hx + n.
Bayesian methods maximize the posterior probability:
  Pr(x|y) ∝ Pr(y|x) · Pr(x)
where Pr(y|x) (the likelihood function) = exp(-||y − Hx||^2) and Pr(x) (the prior PDF) = exp(-G(x)).
Non-Bayesian: maximize the likelihood only,
  x_est = argmin_x ||y − Hx||^2
Bayesian:
  x_est = argmin_x ||y − Hx||^2 + G(x)
where G(x) is obtained from the prior distribution of x.
If G(x) = ||Γx||^2 for some matrix Γ, this is Tikhonov regularization.
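A sketch of the Bayesian estimate in the Tikhonov case: with G(x) = λ||Lx||^2, the minimizer of ||y − Hx||^2 + G(x) is (H^T H + λ L^T L)^{-1} H^T y. The first-difference smoothness prior L and the weight λ below are illustrative choices, not from the slides:

import numpy as np

rng = np.random.default_rng(3)
n = 100
H = np.eye(n)                                     # denoising model: y = x + n
x_true = 0.1 * np.cumsum(rng.standard_normal(n))  # a slowly varying signal
y = x_true + 0.3 * rng.standard_normal(n)

L = np.diff(np.eye(n), axis=0)                    # first-difference operator
lam = 5.0                                         # prior weight (lambda)
x_map = np.linalg.solve(H.T @ H + lam * L.T @ L, H.T @ y)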
Expectation-Maximization (EM)
The expectation-maximization (EM) algorithm alternates between an expectation (E) step, which computes the expected likelihood by including the latent variables as if they were observed, and a maximization (M) step, which computes maximum-likelihood estimates of the parameters by maximizing the expected likelihood found in the E step. The parameters found in the M step are then used to begin another E step, and the process repeats.
- E-step: estimate the unobserved event (e.g., which Gaussian generated each sample), conditioned on the observations, using the parameter values from the last maximization step.
- M-step: maximize the expected log-likelihood of the joint event.
A small sketch for a Gaussian mixture is given below.
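A minimal EM sketch for a two-component 1-D Gaussian mixture, where the latent variable is which Gaussian generated each sample (data and starting values are illustrative):

import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])

w = np.array([0.5, 0.5])                     # mixture weights
mu = np.array([-1.0, 1.0])                   # initial means
var = np.array([1.0, 1.0])                   # initial variances
for _ in range(50):
    # E-step: posterior probability that each sample came from each Gaussian
    pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = w * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances from responsibilities
    Nk = resp.sum(axis=0)
    w = Nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
print(w, mu, var)                            # converges near the true mixture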
Minimum-Variance Unbiased Estimator
- Biased and unbiased estimators: a minimum-variance unbiased estimator is an unbiased estimator of the parameters whose variance is minimized for all values of the parameters.
- The Cramer-Rao lower bound (CRLB) sets a lower bound on the variance of any unbiased estimator.
- A biased estimator can outperform an unbiased one in terms of variance.
- Subspace methods: MUSIC, ESPRIT. Widely used in radar, e.g., helicopter and weapon detection from features.
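A standard worked example of the CRLB (not from the slides): for n i.i.d. samples y_i ~ N(θ, σ^2) with σ^2 known, the Fisher information is I(θ) = n/σ^2, so any unbiased estimator satisfies var(θ̂) ≥ σ^2/n. The sample mean is unbiased with variance exactly σ^2/n, so it attains the bound and is the minimum-variance unbiased estimator of θ.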
What is Detection?
Deciding whether, and when, an event occurs; a.k.a. decision theory or hypothesis testing.
Presence/absence of a signal, for example:
- Radar
- The received signal is 0 or 1
- A stock goes up or not
- A criminal is convicted or set free
Detection measures whether a statistically significant change has occurred or not.
Detection: "Spot the Money" (figure)
Hypothesis Testing with the Matched Filter
Let the received signal be y(t) and the signal model be h(t). The hypothesis test is
  H0: y(t) = n(t)            (no signal)
  H1: y(t) = h(t) + n(t)     (signal present)
The optimal decision is given by the likelihood ratio test (Neyman-Pearson theorem):
  select H1 if L(y) = Pr(y|H1) / Pr(y|H0) > γ; otherwise select H0.
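A sketch of this likelihood ratio test with white Gaussian noise, where the log-likelihood ratio reduces to a correlation with the known signal (the signal shape, noise level, and threshold below are illustrative):

import numpy as np

rng = np.random.default_rng(5)
T, sigma = 64, 1.0
t = np.arange(T)
h = np.sin(2 * np.pi * t / T)                # known signal model h(t)
y = h + sigma * rng.standard_normal(T)       # received block (drawn under H1)

# log L(y) = (y . h) / sigma^2 - ||h||^2 / (2 sigma^2) for white Gaussian noise
log_L = y @ h / sigma**2 - (h @ h) / (2 * sigma**2)
gamma_log = 0.0                              # log-threshold, set by target P_FA
decision = "H1" if log_L > gamma_log else "H0"
print(decision, log_L)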
Signal detection paradigm (figure: distributions of the detector output for signal trials vs. noise trials)
Signal detection (figure)
Receiver operating characteristic (ROC) curve (figure)
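An empirical ROC curve can be traced by sweeping the decision threshold over detector outputs from noise-only and signal-plus-noise trials; a sketch with illustrative Gaussian statistics:

import numpy as np

rng = np.random.default_rng(6)
noise_out = rng.normal(0.0, 1.0, 5000)       # detector output under H0
signal_out = rng.normal(1.5, 1.0, 5000)      # detector output under H1

thresholds = np.linspace(-4.0, 6.0, 200)
p_fa = [(noise_out > g).mean() for g in thresholds]    # false-alarm probability
p_d = [(signal_out > g).mean() for g in thresholds]    # detection probability
# Plotting p_d against p_fa traces out the ROC curve.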
Matched Filters
The matched filter is the linear filter that maximizes the signal-to-noise ratio (SNR) at the sampling time in the presence of additive stochastic noise.
Given a transmitted pulse shape g(t) of duration T, the matched filter is
  h_opt(t) = k g*(T − t)     for any constant k
[Block diagram: the pulse g(t) plus noise w(t) forms x(t); x(t) passes through the matched filter h(t) to produce y(t), which is sampled at t = T to yield y(T).]
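A discrete-time sketch of the matched filter: the filter is a time-reversed, conjugated copy of the pulse, and sampling its output at t = T gives the correlation of the input with g (pulse shape and noise level below are illustrative):

import numpy as np

rng = np.random.default_rng(7)
T = 100
g = np.ones(T) / np.sqrt(T)                  # unit-energy rectangular pulse
x = g + 0.5 * rng.standard_normal(T)         # received pulse plus noise

h = np.conj(g[::-1])                         # h(t) = g*(T - t), with k = 1
y = np.convolve(x, h)                        # filter output
y_T = y[T - 1]                               # sample at t = T: the max-SNR point
print(y_T)                                   # equals the correlation of x with g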
Questions?