Slide 1: Wiener Filtering for Image Restoration & Basics on Image Compression
ENEE631 Spring '09, Lecture 8 (2/18/2009)
Instructor: Min Wu, Electrical and Computer Engineering Department, University of Maryland, College Park
bb.eng.umd.edu (select ENEE631 S'09)
Slide 2: Overview
Last time: image restoration
- Power spectral density for 2-D stationary random fields
- A few commonly seen linear distortions in imaging systems
- Deconvolution: inverse filtering, pseudo-inverse filtering
Today:
- Wiener filtering: balancing inverse filtering against noise removal
- Basic compression techniques
System model: u(n1, n2) -> [ H ] -> w(n1, n2) -> (+ noise η(n1, n2)) -> v(n1, n2) -> [ G ] -> u'(n1, n2)
Slide 3: Handling Noise in Deconvolution
Inverse filtering is sensitive to noise: it does not explicitly model and handle noise.
Goal: balance undoing the degradation H against noise suppression.
- Minimize the MSE between the original and restored images: e = E{ [ u(n1,n2) - u'(n1,n2) ]^2 }, where u'(n1,n2) is a function of the observations { v(m1,m2) }.
- The best estimate is the conditional mean E[ u(n1,n2) | all v(m1,m2) ] (see ENEE621), but this is usually difficult to solve for general restoration: it requires the conditional probability distribution, and the estimator is nonlinear in general.
- Instead, find the best linear estimate: Wiener filtering.
  - Treat the (desired) image and the noise as random fields.
  - Produce a linear estimate from the observed image that minimizes the MSE.
Slide 4: EE621 Review: MMSE Estimation
Bayesian parameter estimation – Minimum Mean Squared Error (MMSE) estimation.
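The equations on this slide were not captured in the transcript. For reference only, the standard MMSE result, consistent with the "conditional mean" statement on the previous slide, is:

```latex
\hat{\theta}_{\mathrm{MMSE}}
  = \arg\min_{\hat{\theta}(\cdot)} \; E\!\big[(\theta - \hat{\theta}(y))^2\big]
  = E[\,\theta \mid y\,],
```

i.e. the estimator that minimizes the mean squared error is the conditional mean of the unknown given the observation.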
Slide 5: EE630 Review: Wiener Filtering
We want to know the actual values (of the particular realization of interest) of a random process {d[n]}, but we cannot observe it directly. What we can observe is a process {x[n]} that is statistically related to {d[n]}; the goal is to estimate d[n] from x[n].
Slide 6: EE630 Review: Principle of Orthogonality
"Orthogonal" in a statistical sense: the optimal error signal and each observation sample used in the filtering (and any combination of them) are statistically uncorrelated. Plugging the error e[n] into the orthogonality principle leads to the normal equations.
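Stated concretely for the FIR case (a standard form of the EE630 result; the coefficient notation w_k is illustrative, not taken from the slide):

```latex
\hat{d}[n] = \sum_{k=0}^{p-1} w_k\, x[n-k], \qquad e[n] = d[n]-\hat{d}[n],
\qquad
E\{e[n]\,x^{*}[n-m]\} = 0,\;\; m = 0,\dots,p-1
\;\Longrightarrow\;
\sum_{k=0}^{p-1} w_k\, r_x[m-k] = r_{dx}[m],
```

where r_x is the autocorrelation of x and r_dx[m] = E{ d[n] x*[n-m] } is the cross-correlation; these are the normal (Wiener-Hopf) equations.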
Slide 7: Wiener Filtering
Find the best linear estimate that minimizes the MSE. Assumptions:
- Spatially invariant restoration filter: u'(n1,n2) = g(n1,n2) ⊛ v(n1,n2)
- Wide-sense stationarity for the original signal and the noise
- Noise is zero-mean and uncorrelated with the original signal
Solution via the principle of orthogonality:
- E{ [ u(n1,n2) - u'(n1,n2) ] v*(m1,m2) } = 0
- => E[ u(n1,n2) v*(m1,m2) ] = E[ u'(n1,n2) v*(m1,m2) ]
- => R_uv(k,l) = R_u'v(k,l), i.e. the restored image should have second-order statistics similar to the original.
Find expressions for the two cross-correlation functions, extending the 1-D results: for y(n1,n2) = x(n1,n2) + w(n1,n2), R_uy(k,l) = R_ux(k,l) + R_uw(k,l); if x and w are uncorrelated, R_yy(k,l) = R_xx(k,l) + R_ww(k,l). With v = h ⊛ u + η (noise η uncorrelated with u, and w = h ⊛ u):
- R_u'v(k,l) = g(k,l) ⊛ R_vv(k,l) = g(k,l) ⊛ [ R_ww(k,l) + R_ηη(k,l) ]
- R_uv(k,l) = R_uw(k,l) + R_uη(k,l) = h*(-k,-l) ⊛ R_uu(k,l) + 0
Notes: u(m,n) => [ H ] => (+ noise) => v(m,n); v = conv(h, u) + noise; u' = conv(g, v).
Slide 8: Wiener Filter in Frequency-Domain Representation
From the orthogonality condition R_uv(k,l) = R_u'v(k,l), with
- R_u'v(k,l) = g(k,l) ⊛ [ R_ww(k,l) + R_ηη(k,l) ]
- R_uv(k,l) = h*(-k,-l) ⊛ R_uu(k,l),
take the DFT to obtain the relation in terms of power spectral densities:
G(ω1,ω2) [ |H(ω1,ω2)|^2 S_uu(ω1,ω2) + S_ηη(ω1,ω2) ] = H*(ω1,ω2) S_uu(ω1,ω2)
=> G(ω1,ω2) = H* S_uu / ( |H|^2 S_uu + S_ηη )
Note: we obtain S_uv rather than S_vu in the solution, which is why H* (not H) appears in the numerator.
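A minimal numerical sketch of this frequency-domain Wiener filter, assuming a known blur PSF h and a constant noise-to-signal power ratio K in place of the true S_ηη/S_uu (the simplification mentioned on the next slide). Function and variable names here are illustrative, not from the course materials.

```python
import numpy as np

def wiener_deconvolve(v, h, K=0.01):
    """Restore a blurred, noisy image v given the blur PSF h.

    Implements G = H* / (|H|^2 + K), i.e. the Wiener filter with the
    ratio S_eta/S_uu approximated by the constant K.
    """
    H = np.fft.fft2(h, s=v.shape)            # PSF zero-padded to image size
    V = np.fft.fft2(v)
    G = np.conj(H) / (np.abs(H) ** 2 + K)    # Wiener filter frequency response
    return np.real(np.fft.ifft2(G * V))      # restored image estimate

# Toy usage: blur with a 5x5 box PSF (circular convolution), add noise, restore
rng = np.random.default_rng(0)
u = rng.random((64, 64))
h = np.ones((5, 5)) / 25.0
v = np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(h, s=u.shape)))
v += 0.01 * rng.standard_normal(u.shape)
u_hat = wiener_deconvolve(v, h, K=0.01)
```

Choosing K trades off deblurring against noise amplification: K -> 0 approaches the (pseudo-)inverse filter, larger K behaves more like a smoother.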
Slide 9: Wiener Filtering: Special Cases
The Wiener filter balances two jobs when deblurring a noisy image: a high-pass component for deblurring (undoing the H distortion) and a low-pass component for suppressing noise.
- Noiseless case, S_ηη = 0: the Wiener filter reduces to the pseudo-inverse filter as S_ηη -> 0.
- No-blur case, H = 1 (Wiener smoothing filter): G = S_uu / ( S_uu + S_ηη ), a zero-phase filter that attenuates noise according to the SNR at each frequency.
Notes: In practice, the SNR is often approximated by a constant to work around the need to estimate the image p.s.d. The phase response of the Wiener filter is the same as that of the inverse filter 1/H(ω), i.e. it does not compensate for phase distortion caused by the noise.
Slide 10: Comparisons
The Wiener filter has a band-pass-like effect, balancing between low-pass noise smoothing and the high-pass/sharpening inverse filter. (Frequency-response comparison figure from Jain, Fig. 8.11.)
Slide 11: Example: Wiener Filtering vs. Inverse Filtering
(Figure from the slides at the Gonzalez/Woods DIP book website, Chapter 5.)
Slide 12: Example (2): Wiener Filtering vs. Inverse Filtering
Top row: noise variance = 650; 2nd row: noise variance ~ 65; 3rd row: noise variance ~ 10^-4. (Figure from the slides at the Gonzalez/Woods DIP book website, Chapter 5.)
Slide 13: To Explore Further on the Wiener Filter
Recall the assumptions:
- The p.s.d. of the image and noise random fields are known.
- The frequency response of the distortion filter is known.
Are these reasonable assumptions? What do they imply for the implementation of the Wiener filter?
Slide 14: Wiener Filter: Issues to Be Addressed
Wiener filter size
- Theoretically, the p.s.d.-based formulation can have an infinite impulse response ~ requires large-size DFTs.
- Alternative: impose a filter-size constraint and find the best FIR filter that minimizes the MSE.
Need to estimate the power spectral density of the original signal?
- Avoid an explicit estimate by using an (adaptive) constant for the SNR.
- Estimate the p.s.d. of the blurred image v and compensate for the variance contributed by the noise.
- Estimate it from a representative image set (similar to the images to be restored).
- Or use a statistical model for the original image and estimate its parameters.
Constrained least-squares filter ~ see Gonzalez Sec. 5.9 (see the reference formula after this slide)
- Optimize smoothness of the restored image (least squares on the rough transitions).
- Constrain the difference between the blurred image and the blurred version of the reconstructed image: min | conv(HPF, u') |, subject to | v - conv(H, u') | < epsilon.
- Estimates the restoration filter without estimating the p.s.d.
Unknown distortion H ~ blind deconvolution.
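For reference on the constrained least-squares formulation above, a standard textbook result (e.g. Gonzalez Sec. 5.9, stated here from memory rather than from the slide): with P(ω1,ω2) the frequency response of a high-pass operator (typically a Laplacian) and γ a multiplier tuned so the residual constraint is met, the restored image is

```latex
\hat{U}(\omega_1,\omega_2)
  = \frac{H^{*}(\omega_1,\omega_2)}
         {|H(\omega_1,\omega_2)|^{2} + \gamma\,|P(\omega_1,\omega_2)|^{2}}
    \, V(\omega_1,\omega_2).
```

Setting γ = 0 recovers the inverse filter; γ > 0 penalizes high-frequency roughness instead of requiring the signal and noise p.s.d.'s.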
Slide 15: EE630 Review: Periodogram Spectral Estimator
E.g., the periodogram of white noise (see Hayes, p. 395, Example 8.2.1).
Slide 16: EE630 Review: Averaged Periodogram
One solution to the variance problem of the periodogram: average K periodograms computed from K sets of data records.
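A minimal 1-D sketch of the averaged (Bartlett-style) periodogram, assuming non-overlapping segments; the function names and segment handling are illustrative.

```python
import numpy as np

def periodogram(x):
    """Periodogram estimate: (1/N) |DFT(x)|^2."""
    N = len(x)
    return (np.abs(np.fft.fft(x)) ** 2) / N

def averaged_periodogram(x, K):
    """Split x into K non-overlapping segments and average their
    periodograms, reducing the variance of the estimate by ~1/K."""
    L = len(x) // K
    segments = [x[k * L:(k + 1) * L] for k in range(K)]
    return np.mean([periodogram(s) for s in segments], axis=0)

# Example: spectrum estimate of unit-variance white noise
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
S_hat = averaged_periodogram(x, K=16)   # should hover around 1 at all frequencies
```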
Slide 17: Basic Ideas of Blind Deconvolution
Three ways to estimate H: observation, experimentation, mathematical modeling.
Estimate H via the spectrum's zero patterns
- For the two major classes of blur (motion blur and out-of-focus blur), H has nulls whose locations are related to the type and the parameters of the blur.
Maximum-likelihood blur estimation
- Each set of image-model and blur parameters gives a "typical" blurred output; probability enters the picture because of the noise.
- Given the observed blurred image, find the set of parameters most likely to have produced it.
- Iterate ~ Expectation-Maximization (EM) approach: given the estimated parameters, restore the image via Wiener filtering; examine the restored image and refine the parameter estimates. This converges to local optima.
To explore more: Bovik's Handbook Sec. 3.5 (subsections 3 & 4); "Blind Image Deconvolution" by Kundur et al., IEEE Signal Processing Magazine, vol. 13(3), 1996.
Note: see also Haykin's adaptive filtering book (4th edition), Chapter 16, on blind deconvolution in general, with applications to communications.
Slide 18: Basic Techniques for Data Compression
Slide 19: Why Do We Need Compression?
Savings in storage and transmission
- Multimedia data (especially image and video) have large data volumes; it is difficult to send real-time uncompressed video over current networks.
Accommodate relatively slow storage devices, which cannot play back uncompressed multimedia data in real time
- 1x CD-ROM transfer rate ~ 150 kB/s
- 320 × 240, 24-bit color video at 24 fps has a data rate of about 5.5 MB/s => about 36 seconds are needed to transfer 1 second of uncompressed video from CD.
Slide 20: Example: Storing an Encyclopedia (from Ken Lam's DCT talk, 2001, HK Polytech)
- 500,000 pages of text (2 kB/page) ~ 1 GB => 2:1 compression
- 3,000 color pictures (640 × 480 × 24 bits each) ~ 3 GB => 15:1
- 500 maps (640 × 480 × 16 bits = 0.6 MB/map) ~ 0.3 GB => 10:1
- 60 minutes of stereo sound (176 kB/s) ~ 0.6 GB => 6:1
- 30 animations, on average 2 minutes long (640 × 320 × 16 bits × 16 frames/s = 6.5 MB/s) ~ 23.4 GB => 50:1
- 50 digitized movies, on average 1 minute long (640 × 480 × 24 bits × 30 frames/s = 27.6 MB/s) ~ 82.8 GB => 50:1
Total: about 111.1 GB of storage without compression; reduced to about 2.96 GB with compression.
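A quick script to reproduce the arithmetic above (sizes in GB, using 1 GB = 10^9 bytes as an approximation; rounding conventions in the original table may differ slightly):

```python
# Each entry: (count, bytes per item, compression ratio)
GB = 1e9
items = {
    "text":       (500_000, 2e3,                       2),
    "pictures":   (3_000,   640*480*24/8,              15),
    "maps":       (500,     640*480*16/8,              10),
    "audio":      (1,       176e3 * 3600,              6),
    "animations": (30,      640*320*16/8 * 16 * 120,   50),
    "movies":     (50,      640*480*24/8 * 30 * 60,    50),
}
raw = sum(n * size for n, size, _ in items.values()) / GB
compressed = sum(n * size / c for n, size, c in items.values()) / GB
print(f"uncompressed ~ {raw:.1f} GB, compressed ~ {compressed:.2f} GB")
# roughly 111 GB uncompressed vs. about 3 GB compressed
```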
Slide 21: Outline
Lossless encoding tools
- Entropy coding: Huffman, Lempel-Ziv, and others (arithmetic coding)
- Run-length coding
Lossy tools for reducing redundancy
- Quantization and truncation
- Predictive coding
- Transform coding: zonal coding (truncation), rate-distortion and bit allocation
- Tree structures
- Vector quantization
Slide 22: PCM Coding
How do we encode a digital image into bits? Sample and apply uniform quantization: "Pulse Code Modulation" (PCM).
- 8 bits per pixel ~ good for grayscale images/video; 10-12 bpp ~ needed for medical images.
Reduce the number of bits per pixel (at reasonable quality) via quantization
- Quantization reduces the number of possible levels to encode.
- Visual quantization: dithering, companding, etc.
- Halftoning uses 1 bpp, but usually with upsampling ~ saves less than 2:1.
Encoder-decoder pair: "codec".
Pipeline: input image I(x,y) -> image-capturing device -> sampler -> quantizer -> encoder -> transmit.
Notes: In "PCM", after sampling and quantization each signal sample is coded with a suitable codeword before being fed to a digital modulator for transmission, so on its own PCM does not perform any modulation. Additional material (on the board): source coding vs. channel coding; the overall transmission chain; the advantage of separating source and channel coding.
Slide 23: Discussion on Improving PCM
Quantized PCM values may not be equally likely. Can we do better than encoding each value with the same number of bits?
Example: P("0") = 0.5, P("1") = 0.25, P("2") = 0.125, P("3") = 0.125
- Using the same number of bits for every value: 2 bits are needed to represent the four possibilities if they are treated equally.
- Using fewer bits for the likely value "0" ~ variable-length codes (VLC): "0" => [0], "1" => [10], "2" => [110], "3" => [111].
  Average length Σ_i p_i l_i = 0.5·1 + 0.25·2 + 0.125·3 + 0.125·3 = 1.75 bits ~ saves 0.25 bpp!
Bring probability into the picture: use the probability distribution to reduce the average number of bits per quantized sample.
Slide 24: Entropy Coding
Idea: use fewer bits for commonly seen values. At least how many bits are needed? The limit of compression => "entropy".
- Entropy measures the uncertainty, or average amount of information, of a source.
- Definition: H = Σ_i p_i log2(1/p_i) bits; e.g., the entropy of the previous example is 1.75 bits.
- A source cannot be represented perfectly with fewer than H bits per sample on average, but it can be represented perfectly with an average of H + ε bits per sample (Shannon's lossless coding theorem).
- "Compressibility" depends on the statistical nature of the information source.
- It is important to design the codebook so that the coded stream can be decoded efficiently and without ambiguity.
See the information theory course (ENEE721) for more theoretical details.
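A small check of the numbers on the last two slides, using the probabilities and codewords from the PCM example (nothing else is assumed):

```python
import math

p = {"0": 0.5, "1": 0.25, "2": 0.125, "3": 0.125}
code = {"0": "0", "1": "10", "2": "110", "3": "111"}

entropy = sum(pi * math.log2(1 / pi) for pi in p.values())   # H of the source
avg_len = sum(p[s] * len(code[s]) for s in p)                # average VLC length
print(entropy, avg_len)   # both 1.75 bits: this VLC achieves the entropy
```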
Slide 25: Example of Entropy Coding: Huffman Coding
Variable-length code: assign about log2(1/p_i) bits to the i-th value (must be an integer number of bits per symbol).
- Step 1: Arrange the p_i in decreasing order and treat them as tree leaves.
- Step 2: Merge the two nodes with the smallest probabilities into a new node and sum their probabilities; arbitrarily assign 1 and 0 to the two merging branches.
- Step 3: Repeat until only one node is left. Read out each codeword by tracing from the root to the leaf.
Note: The tree construction and reading codewords from root to leaf ensure the "prefix-free" condition, i.e. no codeword is a prefix of another codeword.
Slide 26: Huffman Coding (cont'd)
Example with eight symbols (intermediate tree-node sums in the original figure: 0.125, 0.125, 0.25, 0.29, 0.46, 0.54, 1.0):

Symbol  Probability  Fixed-length (PCM)  Huffman (traced from root)
S0      0.25         000                 00
S1      0.21         001                 10
S2      0.15         010                 010
S3      0.14         011                 011
S4      0.0625       100                 1100
S5      0.0625       101                 1101
S6      0.0625       110                 1110
S7      0.0625       111                 1111

Note: Tracing the 0/1 paths from the root back to each leaf yields the final codewords; this tracing ensures the code is prefix-free.
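A compact Huffman-construction sketch using Python's heapq, run on the eight-symbol distribution above (the four 0.0625 leaf probabilities are the reconstructed values from the table; tie-breaking may produce codewords that differ from the table while having the same lengths):

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a Huffman code; returns {symbol: bitstring}."""
    tiebreak = count()                      # avoids comparing dicts on probability ties
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)     # two least-probable nodes
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

probs = {"S0": 0.25, "S1": 0.21, "S2": 0.15, "S3": 0.14,
         "S4": 0.0625, "S5": 0.0625, "S6": 0.0625, "S7": 0.0625}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)
print(code, avg_len)   # average length ~2.79 bits/symbol vs. 3 bits for fixed-length PCM
```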
Slide 27: Huffman Coding: Pros & Cons
Pros
- Simple to implement (table lookup).
- For a given alphabet, Huffman coding gives the best coding efficiency among symbol-by-symbol codes (no other such code gives a lower expected code length).
Cons
- Need to obtain the source statistics.
- Each codeword length must be an integer => a gap between the average code length and the entropy.
Improvements (ref: Cover & Thomas)
- Code a group of symbols as a whole: allows a fractional number of bits per symbol.
- Arithmetic coding: fractional number of bits per symbol.
- Lempel-Ziv coding / LZW algorithm: "universal", no need to pre-estimate the source statistics; fixed-length codewords for variable-length source symbol strings.
Slide 28: Run-Length Coding
How do we efficiently encode, e.g., a row of a binary document image (long runs of "0" with sparse "1"s)?
Run-length coding (RLC)
- Code the length of each run of "0" between successive "1"s (run-length of "0" ~ number of "0"s between "1"s); works well when long runs of "0" and sparse "1"s occur frequently.
- E.g., a row whose zero-runs are (7) (0) (3) (1) (6) (0) (0) ... is coded as that sequence of run-lengths.
- Assign a fixed-length codeword to each run-length in a range (e.g., 0~7), or use a variable-length code such as Huffman to improve further.
- RLC also applies to general data sequences with many consecutive "0"s (or long runs of other values).
Note: compare using 7 bits to represent 7 zeros literally versus 3 bits to represent any run-length up to 2^3 - 1.
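A minimal run-length encoder for a binary row, matching the convention above (lengths of the zero-runs between successive ones); the example bit string is made up for illustration and simply reproduces the run-lengths quoted on the slide:

```python
def rlc_encode(bits):
    """Return the lengths of the zero-runs preceding each '1'
    (and the trailing zero-run, if any)."""
    runs, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
        else:                 # a '1' terminates the current zero-run
            runs.append(run)
            run = 0
    if run:                   # trailing zeros with no terminating '1'
        runs.append(run)
    return runs

row = [0]*7 + [1] + [1] + [0]*3 + [1] + [0] + [1] + [0]*6 + [1] + [1] + [1]
print(rlc_encode(row))        # [7, 0, 3, 1, 6, 0, 0]
```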
Slide 29: RLC Example
(Figure from the slides at the Gonzalez/Woods DIP book website, Chapter 8.)
Slide 30: Analyzing the Coding Efficiency of Run-Length Coding
Simplified assumption: "0" occurs independently with probability p (close to 1).
Probability of a run of L zeros (possible runs L = 0, 1, ..., M):
- P(L = l) = p^l (1 - p) for 0 <= l <= M - 1 (geometric distribution)
- P(L = M) = p^M (M or more consecutive "0"s)
Average number of binary source symbols consumed per run:
- S_avg = Σ_{l=0}^{M-1} (l + 1) p^l (1 - p) + M p^M = (1 - p^M) / (1 - p)
Compression ratio: C = S_avg / log2(M + 1) = (1 - p^M) / [ (1 - p) log2(M + 1) ]
Example: p = 0.9, M = 15, 4 bits per run-length symbol
- S_avg ≈ 7.94; average run-length coding rate B_avg = 4 bits / 7.94 ≈ 0.504 bpp
- Compression ratio C = 1 / B_avg ≈ 2
- Source entropy H ≈ 0.47 bpp => coding efficiency = H / B_avg ≈ 93%
Notes: If there are more than M zeros, we first encode a run-length of M, then encode the remainder of the run (again taking values 0, 1, ...). Run-length coding can be viewed as an example of coding variable-length source strings with fixed-length codewords.
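The example numbers can be reproduced directly; the entropy here is that of the i.i.d. binary source with P("0") = 0.9:

```python
import math

p, M, bits_per_symbol = 0.9, 15, 4
S_avg = (1 - p**M) / (1 - p)                 # ~7.94 source bits covered per run
B_avg = bits_per_symbol / S_avg              # ~0.504 coded bits per source bit
C = 1 / B_avg                                # compression ratio ~2
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))   # ~0.47 bits/source bit
print(S_avg, B_avg, C, H / B_avg)            # coding efficiency ~0.93
```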
Slide 31: Summary of Today's Lecture
- Wiener filtering for image restoration (more on advanced restoration and applications later in the course, time permitting).
- Basic compression techniques: PCM coding, entropy coding, run-length coding.
Next time: continue with image compression => quantization, etc.
Take-home exercise: derive the optimal quantizers (1) that minimize the maximum error; (2) that minimize the MSE.
Readings
- Gonzalez 3/e: 8.1; for further reading: Woods' book 7.1, 7.2, (7.7); 3.1, 3.2, 3.5.0
- Jain's book; Bovik's Handbook Sec. 3.5 (subsections 3 & 4)
- "Blind Image Deconvolution" by Kundur et al., IEEE Signal Processing Magazine, vol. 13(3), 1996
Additional references: Jain's book 11.9; Wang's book 8.1, 8.4, 8.5; Woods' book 7.1, 7.2, 8.1, 8.3, 8.4; Gonzalez 2/e 8.3.1, 8.3.4, 8.4
Slide 32: Revisit: Quantization Concept
L-level quantization over [t_min, t_max]: minimize the error of this lossy process.
- Which L reconstruction values should be used?
- Which range of continuous values should map to each of the L values?
Questions: What quantizer minimizes the maximum error? What conditions on the decision levels {t_k} and reconstruction levels {r_k} minimize the MSE?
Note (from Lim's 2-D signal processing book): the quantization error is correlated with the original signal; adding dither can make them more uncorrelated.
(Figure: p.d.f. p_u(x) over [t_1, t_{L+1}] with reconstruction levels r_1, ..., r_L.)
Slide 33: Quantization: A Close Look
Slide 34: Review of Quantization Concept
L-level quantization over [t_min, t_max]: minimize the error of this lossy process.
- Which L reconstruction values should be used? Which range of continuous values should map to each of the L values?
Uniform partition
- Maximum quantization error = (t_max - t_min) / (2L) = A / (2L) over a dynamic range of A.
- Is this the best solution? Consider minimizing the maximum absolute error (min-max) vs. the MSE: what if values in some interval [a, b] are more likely than values in other intervals?
Note (from Lim's 2-D signal processing book): the quantization error is correlated with the original signal; adding dither can make them more uncorrelated.
Slide 35: Bringing in the Probability Distribution
Allocate more reconstruction values to the more probable ranges of the p.d.f. p_u(x) over [t_1, t_{L+1}].
Minimize the error in a probability sense: MMSE (minimum mean square error)
- Assigns a high penalty to large errors and to values that occur frequently.
- The squared error is mathematically convenient (differentiable, etc.).
An optimization problem: which decision levels {t_k} and reconstruction levels {r_k} to use? Necessary conditions follow from setting the partial derivatives to zero.
Slide 36: Derivation of the MMSE Quantizer
MSE of an L-level quantizer; necessary conditions for the optimal (MMSE) quantizer (see the reference equations below).
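The equations on this slide are not in the transcript; the standard Lloyd-Max development, consistent with the conditions restated on the next slide, is:

```latex
\mathrm{MSE} = \sum_{k=1}^{L} \int_{t_k}^{t_{k+1}} (x - r_k)^2\, p_u(x)\, dx,
\qquad
\frac{\partial\,\mathrm{MSE}}{\partial t_k} = 0
  \;\Rightarrow\; t_k = \frac{r_{k-1} + r_k}{2},\;\; k = 2,\dots,L,
\qquad
\frac{\partial\,\mathrm{MSE}}{\partial r_k} = 0
  \;\Rightarrow\; r_k = \frac{\int_{t_k}^{t_{k+1}} x\, p_u(x)\, dx}
                              {\int_{t_k}^{t_{k+1}} p_u(x)\, dx},\;\; k = 1,\dots,L.
```

Each decision level is the midpoint of the two neighboring reconstruction levels, and each reconstruction level is the centroid of the p.d.f. over its cell.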
Slide 37: MMSE Quantizer (Lloyd-Max)
The reconstruction and decision levels must satisfy the necessary conditions above: each t_k is the midpoint of the neighboring reconstruction levels, and each r_k is the centroid of the p.d.f. over the k-th interval.
Solve iteratively
- Choose initial values {t_k}^(0), compute {r_k}^(0).
- Compute new values {t_k}^(1) and {r_k}^(1), and so on.
For a large number of quantization levels
- Approximate the p.d.f. as constant within [t_k, t_{k+1}), i.e. p(t) ≈ p(t_k') with t_k' = (t_k + t_{k+1}) / 2.
Reference: S. P. Lloyd, "Least Squares Quantization in PCM," IEEE Trans. Information Theory, vol. IT-28, March 1982.
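A minimal sketch of the Lloyd-Max iteration for a source described by samples (an empirical distribution), alternating the centroid and midpoint conditions; the function name, initialization, and fixed iteration count are illustrative choices:

```python
import numpy as np

def lloyd_max(samples, L, iters=100):
    """Iteratively fit L reconstruction levels r and L+1 decision levels t
    to minimize the MSE on the given samples."""
    lo, hi = samples.min(), samples.max()
    r = np.linspace(lo, hi, L)                               # initial reconstruction levels
    for _ in range(iters):
        t = np.concatenate(([lo], (r[:-1] + r[1:]) / 2, [hi]))  # midpoint condition
        for k in range(L):                                   # centroid condition per cell
            cell = samples[(samples >= t[k]) & (samples < t[k + 1])]
            if cell.size:
                r[k] = cell.mean()
    return t, r

# Example: 4-level MMSE quantizer for (approximately) Gaussian data
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
t, r = lloyd_max(x, L=4)
x_hat = r[np.clip(np.searchsorted(t, x) - 1, 0, 3)]          # quantize the samples
print(r, np.mean((x - x_hat) ** 2))  # levels near ±0.45, ±1.51; MSE ~0.12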
Slide 38: (figure only)
Slide 39: Quantizer for a Piecewise Constant p.d.f.
(Jain's Fig. 4.17.)
Slide 40: MMSE Quantizer for a Uniform Distribution
For a uniform p.d.f. (height 1/A over a range [t_1, t_{L+1}] of width A), the uniform quantizer is optimal in the MMSE sense:
- MSE = q^2 / 12, with step size q = A / L.
SNR of the uniform quantizer
- Variance of a uniformly distributed r.v. = A^2 / 12
- SNR = 10 log10(A^2 / q^2) = 20 log10 L (dB)
- If L = 2^B, then SNR = (20 log10 2) · B ≈ 6B dB: "1 bit is worth 6 dB."
Rate-distortion tradeoff.
Notes: For the MMSE quantizer, given the number of quantization intervals (i.e., loosely speaking, the rate), we design the quantizer that minimizes the MSE and obtain the MMSE distortion. The rate-distortion function answers both this question (rate => distortion) and the reverse one: if a certain amount of distortion is allowed, what is the minimum number of bits required (distortion => rate)?
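For reference, the two numbers quoted above follow from a one-line integral and a logarithm (standard derivation, not reproduced verbatim from the slide): with the quantization error uniform on [-q/2, q/2],

```latex
\mathrm{MSE} = \int_{-q/2}^{q/2} e^2 \,\frac{1}{q}\, de = \frac{q^2}{12},
\qquad
\mathrm{SNR} = 10\log_{10}\frac{A^2/12}{q^2/12} = 20\log_{10} L
  \;\overset{L=2^B}{=}\; (20\log_{10}2)\,B \approx 6.02\,B \text{ dB}.
```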
Slide 41: Quantization – A "Lossy Step" in Source Coding
The quantizer achieves compression in a lossy way; the Lloyd-Max quantizer minimizes the MSE distortion for a given rate.
At least how many bits are needed for a given amount of error? => (information-theoretic) rate-distortion theory.
Rate-distortion function of a r.v.
- The minimum average rate R_D bits/sample required to represent the r.v. while allowing a fixed distortion D:
  R(D) = min I(X; X*), minimized over p(X*|X) for a given source p(X).
- For a Gaussian r.v. with MSE distortion, R_D = (1/2) log2(σ^2 / D): one more bit cuts the distortion to 1/4, i.e. gains 6 dB.
See information theory courses/books for a detailed proof of the R-D theorem.
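The Gaussian case can be written out fully as a standard reference (including the trivial regime where the allowed distortion exceeds the source variance):

```latex
R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\; E[(X-\hat{X})^2] \le D} I(X;\hat{X})
      \;=\; \max\!\left(\tfrac{1}{2}\log_2\frac{\sigma^2}{D},\; 0\right),
\qquad\text{equivalently}\qquad
D(R) = \sigma^2\, 2^{-2R},
```

so each additional bit per sample divides the achievable MSE by 4 (a 6.02 dB reduction).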
Slide 42: Shannon's Channel Capacity
(See information theory courses and books for more details: ENEE721, Cover & Thomas.)
Entropy ~ measures the amount of uncertainty: H = Σ_i p_i log2(1/p_i) bits (see also differential entropy for continuous r.v.'s).
- A source cannot be represented perfectly with fewer than H bits per sample on average, but it can be with an average of H + ε bits per sample.
Mutual information: I(X; Y) = E[ log p(X,Y) / ( p(X) p(Y) ) ]
- Measures the amount of information one r.v. contains about another r.v.
Communicating through an imperfect channel
- What is the maximum amount of information that can be conveyed accurately per channel use? ~ how many different messages can be distinguished with n channel uses? ("Capacity" and "accuracy" are meant in the asymptotic sense.)
- C = max I(X; Y), maximized over p(X) for a given channel p(Y|X).
- For a discrete channel, I(X; Y) = H(Y) - H(Y|X).
- For the binary symmetric channel: C = 1 - h(p), where h(p) is the binary entropy function.
- For the AWGN channel: C = (1/2) log2( 1 + σ_x^2 / σ_n^2 ).
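A few lines to evaluate the two capacity formulas above; the crossover probability and SNR values are arbitrary examples:

```python
import math

def h(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    return 1 - h(p)                       # bits per channel use

def awgn_capacity(snr):
    return 0.5 * math.log2(1 + snr)       # bits per (real) channel use

print(bsc_capacity(0.1))    # ~0.531 bits/use for crossover probability 0.1
print(awgn_capacity(10.0))  # ~1.73 bits/use at an SNR of 10 (10 dB)
```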