Slide 1: Mathematical Preliminaries

Slide 2: Matrix Theory — Vectors and Matrices
The nth element of a vector u is written u(n); the element in the mth row and nth column of a matrix A is written a(m,n). Unless stated otherwise, a vector is a column vector.

Slide 3: Lexicographic Ordering (Stacking Operation)
Row-ordered form of a matrix X: the rows of X are stacked into a single vector, x = [x(0,0), ..., x(0,M-1), x(1,0), ..., x(N-1,M-1)]^T.
Column-ordered form of a matrix: the columns of X are stacked instead.

Slide 5: Transposition and Conjugation Rules; Toeplitz and Circulant Matrices
Transposition and conjugation rules: (A^T)^T = A, (AB)^T = B^T A^T, (AB)^H = B^H A^H, (A^-1)^H = (A^H)^-1.
Toeplitz matrices are constant along each diagonal: a(m,n) = a(m-n).
Circulant matrices are Toeplitz matrices in which each row is a circular shift of the row above: a(m,n) = a((m-n) mod N).

Slide 6: Linear Convolution Using a Toeplitz Matrix
The linear convolution y(n) = Σ_k h(n-k) x(k) of a length-N input x(n) with a length-M filter h(n) can be written as a matrix-vector product y = H x, where H is an (N+M-1) x N Toeplitz matrix built from shifted copies of h.

Slide 7: (Toeplitz matrix) For example, with M = 3 the matrix H has first column [h(0), h(1), h(2), 0, ..., 0]^T, first row [h(0), 0, ..., 0], and constant diagonals.
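A minimal sketch of this construction, assuming NumPy/SciPy (the filter and input values are illustrative, not taken from the slides):

```python
import numpy as np
from scipy.linalg import toeplitz

h = np.array([1.0, 2.0, 3.0])          # filter, length M = 3
x = np.array([4.0, 5.0, 6.0, 7.0])     # input,  length N = 4

M, N = len(h), len(x)
# First column of H: h padded to length N + M - 1; first row: h(0) then zeros.
col = np.concatenate([h, np.zeros(N - 1)])
row = np.concatenate([[h[0]], np.zeros(N - 1)])
H = toeplitz(col, row)                 # (N+M-1) x N Toeplitz matrix

y = H @ x
assert np.allclose(y, np.convolve(h, x))  # matches direct linear convolution
```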

Slide 8: Circular Convolution Using a Circulant Matrix
The N-point circular convolution y(n) = h(n) ⊛_N x(n) = Σ_{k=0}^{N-1} h((n-k) mod N) x(k) can be written as y = C x, where C is the N x N circulant matrix whose first column is [h(0), ..., h(N-1)]^T.

Slide 9: (circulant matrix) Circular convolution + zero padding → linear convolution. If both sequences are zero-padded so that the period N satisfies N ≥ N1 + N2 - 1, circular convolution with that period gives the same result as linear convolution.
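A hedged sketch of both points, assuming NumPy/SciPy (values again illustrative):

```python
import numpy as np
from scipy.linalg import circulant

h = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0, 7.0])

# Zero-pad both sequences to the period N >= N1 + N2 - 1.
N = len(h) + len(x) - 1
hp = np.concatenate([h, np.zeros(N - len(h))])
xp = np.concatenate([x, np.zeros(N - len(x))])

C = circulant(hp)        # N x N circulant matrix, first column hp
y_circ = C @ xp          # N-point circular convolution

assert np.allclose(y_circ, np.convolve(h, x))  # equals linear convolution
```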

Slide 10: Examples
(ex) Linear convolution as a Toeplitz matrix operation.
(ex) Circular convolution as a circulant matrix operation.

Slide 11: Orthogonal and Unitary Matrices; Positive Definiteness
Orthogonal: A^T A = A A^T = I. Unitary: A^H A = A A^H = I.
Positive definiteness and quadratic forms: A is called positive definite if A is a Hermitian matrix and the quadratic form x^H A x > 0 for all x ≠ 0; A is called positive semidefinite (nonnegative) if A is a Hermitian matrix and x^H A x ≥ 0 for all x.
Theorem: if A is a symmetric positive definite matrix, then all its eigenvalues are positive and the determinant of A satisfies 0 < det A ≤ Π_n a(n,n).
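A small numerical check of the theorem, assuming NumPy (the matrix entries are an assumption chosen to be symmetric positive definite):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])         # illustrative symmetric PD matrix

eigvals = np.linalg.eigvalsh(A)         # eigenvalues of a Hermitian/symmetric matrix
assert np.all(eigvals > 0)              # positive definite: all eigenvalues positive

# Determinant bound from the theorem: 0 < det(A) <= product of diagonal entries
assert 0 < np.linalg.det(A) <= np.prod(np.diag(A))
```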

Slide 12: Diagonal Forms
For any Hermitian matrix A there exists a unitary matrix U such that U^H A U = Λ, where Λ is the diagonal matrix containing the eigenvalues of A.
Eigenvalue and eigenvector: A φ = λ φ, where λ is an eigenvalue and φ is the corresponding eigenvector.
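A sketch of the diagonalization, assuming NumPy (the Hermitian matrix is generated randomly for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                      # Hermitian matrix

lam, U = np.linalg.eigh(A)              # eigenvalues and unitary eigenvector matrix
assert np.allclose(U.conj().T @ U, np.eye(4))           # U is unitary
assert np.allclose(U.conj().T @ A @ U, np.diag(lam))    # U^H A U = Lambda
```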

Slide 13: Block Matrices
(ex) A small 2-D array y(m,n) and its column-stacked vector (stacking operation).
Block matrices: matrices whose elements are themselves matrices.

Slide 14: (block matrices, continued)
Let x_n and y_n be the column vectors of X and Y. Then the stacked vectors satisfy y = A x, where A = {A(m,n)} is a block matrix: its (m,n)th element A(m,n) is itself a matrix.

Slide 15: Kronecker Products
Definition: for an M1 x M2 matrix A and an N1 x N2 matrix B, the Kronecker product A ⊗ B is the M1 N1 x M2 N2 block matrix {a(m,n) B}.
Properties (Table 2.7) include (A ⊗ B)^T = A^T ⊗ B^T and (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
(ex) A numerical check of one property is sketched below.
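A minimal check of the product property, assuming NumPy (the matrix shapes and values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((2, 3)), rng.standard_normal((4, 2))
C, D = rng.standard_normal((3, 2)), rng.standard_normal((2, 5))

# (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), one of the Table 2.7 properties
left = np.kron(A, B) @ np.kron(C, D)
right = np.kron(A @ C, B @ D)
assert np.allclose(left, right)
```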

Slide 16: Separable Transformation
Consider a transformation on an N x M image X. If the transformation is separable, it can be written in matrix form as Y = A X B^T, and equivalently in vector form as y = (A ⊗ B) x, where x and y are the row-ordered (stacked) forms of X and Y.
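A sketch verifying the matrix/vector equivalence, assuming NumPy; note that NumPy's default ravel() is exactly the row-ordered stacking used here (sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 3, 4
X = rng.standard_normal((N, M))          # N x M "image"
A = rng.standard_normal((N, N))
B = rng.standard_normal((M, M))

Y = A @ X @ B.T                          # matrix form of the separable transform

x = X.ravel()                            # row-ordered form of X
y = np.kron(A, B) @ x                    # vector form: y = (A ⊗ B) x
assert np.allclose(y, Y.ravel())
```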

Slide 18: Random Signals
Definitions. A random signal is a sequence of random variables u(n).
Mean: μ(n) = E[u(n)]. Variance: σ²(n) = E[|u(n) - μ(n)|²].
Covariance: r(m,n) = E[(u(m) - μ(m))(u(n) - μ(n))*].
Cross covariance: r_uv(m,n) = E[(u(m) - μ_u(m))(v(n) - μ_v(n))*].
Autocorrelation: a(m,n) = E[u(m) u(n)*]. Cross correlation: a_uv(m,n) = E[u(m) v(n)*].

Slide 19: Gaussian Random Processes
Representation for an N x 1 vector u: the mean vector μ = E[u] (N x 1) and the covariance matrix R = E[(u - μ)(u - μ)^H] (N x N).
Gaussian (or normal) distribution: p(u) = ((2π)^N |R|)^(-1/2) exp{ -(1/2)(u - μ)^T R^(-1) (u - μ) }.
A random process is a Gaussian random process if the joint probability density of any finite sub-sequence is a Gaussian distribution.
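A sketch of this representation, assuming NumPy (the mean vector and covariance matrix are illustrative assumptions): draw many samples and check that the empirical moments match.

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([1.0, -1.0, 0.5])                     # mean vector (illustrative)
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.6],
              [0.3, 0.6, 1.0]])                     # covariance matrix (illustrative)

u = rng.multivariate_normal(mu, R, size=200_000)    # samples of the N x 1 vector
mu_hat = u.mean(axis=0)
R_hat = np.cov(u, rowvar=False)

print(np.max(np.abs(mu_hat - mu)), np.max(np.abs(R_hat - R)))  # both small
```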

Slide 20: Stationary Processes
Strict-sense stationary: the joint density of any partial sequence is the same as that of the shifted sequence.
Wide-sense stationary: the mean is constant, μ(n) = μ, and the covariance depends only on the lag, r(m,n) = r(m-n), so the covariance matrix is Toeplitz.
For a Gaussian process, wide-sense stationarity is equivalent to strict-sense stationarity.

Slide 21: Markov Processes
Orthogonal: E[x y*] = 0. Independent: p(x,y) = p(x) p(y). Uncorrelated: E[x y*] = E[x] E[y]*, i.e., the covariance is zero.
p-th order Markov: the conditional density of u(n) given the entire past depends only on the p most recent samples.
(ex) The covariance matrix of a first-order stationary Markov sequence u(n) with correlation ρ is r(m,n) = σ² ρ^|m-n|, which is Toeplitz.
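A minimal sketch of this covariance matrix, assuming NumPy/SciPy (ρ, σ², and N are illustrative parameter choices):

```python
import numpy as np
from scipy.linalg import toeplitz

rho, sigma2, N = 0.95, 1.0, 6
R = sigma2 * toeplitz(rho ** np.arange(N))   # r(m,n) = sigma^2 * rho^|m-n|, Toeplitz
print(R)
```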

Slide 22: Karhunen-Loeve (KL) Transform
The KL transform of u is y = Φ^H u, where Φ is the N x N unitary matrix whose columns are the eigenvectors of the covariance matrix R, i.e., R Φ = Φ Λ. Φ^H is called the KL transform matrix; its rows are the conjugate eigenvectors of R.
Property: the elements of y(k) are orthogonal (uncorrelated), E[y(k) y(l)*] = λ_k δ(k - l).
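A sketch of the decorrelation property, assuming NumPy/SciPy and reusing the first-order Markov covariance from the previous slide as an illustration:

```python
import numpy as np
from scipy.linalg import toeplitz

rho, N = 0.95, 8
R = toeplitz(rho ** np.arange(N))        # covariance of a first-order Markov sequence

lam, Phi = np.linalg.eigh(R)             # R Phi = Phi Lambda, Phi unitary
y_cov = Phi.conj().T @ R @ Phi           # covariance of y = Phi^H u

# KL property: the transform coefficients are uncorrelated
assert np.allclose(y_cov, np.diag(lam))
```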

Slide 23: Discrete Random Fields
Definitions. A discrete random field is a 2-D sequence u(m,n) in which each sample is a random variable.
Mean: μ(m,n) = E[u(m,n)]. Covariance: r(m,n; m',n') = E[(u(m,n) - μ(m,n))(u(m',n') - μ(m',n'))*].
White noise field: r(m,n; m',n') = σ²(m,n) δ(m - m', n - n').
Symmetry: r(m,n; m',n') = r*(m',n'; m,n).

Slide 24: Separable and Isotropic Image Covariance Functions
Separable (nonstationary case): r(m,n; m',n') = r1(m,m') r2(n,n').
Separable stationary covariance function: r(m,n) = σ² ρ1^|m| ρ2^|n|.
Nonseparable exponential function (stationary case): r(m,n) = σ² exp{ -√(α1 m² + α2 n²) }; when α1 = α2 it is isotropic (circularly symmetric).
The mean and autocorrelation can be estimated by sample averages over the image.

Slide 25: Spectral Density Function (SDF)
Definition: the SDF is the Fourier transform of the autocorrelation function.
1-D case: S(ω) = Σ_n r(n) e^{-jωn}.
2-D case: S(ω1, ω2) = Σ_m Σ_n r(m,n) e^{-j(ω1 m + ω2 n)}.
Average power: σ² = (1/2π) ∫ S(ω) dω.
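A numerical sketch, assuming NumPy: sample the first-order Markov autocorrelation r(n) = ρ^|n| (from slide 21) and compare its FFT against the standard closed-form SDF (1-ρ²)/(1 - 2ρ cos ω + ρ²).

```python
import numpy as np

rho, sigma2, N = 0.9, 1.0, 1024
n = np.arange(-N // 2, N // 2)
r = sigma2 * rho ** np.abs(n)            # autocorrelation, centered at n = 0

# SDF: Fourier transform of the autocorrelation, sampled via the FFT
S = np.real(np.fft.fft(np.fft.ifftshift(r)))
w = 2 * np.pi * np.fft.fftfreq(N)

S_closed = sigma2 * (1 - rho**2) / (1 - 2 * rho * np.cos(w) + rho**2)
assert np.allclose(S, S_closed)          # matches the closed form (tail is negligible)
```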

Slide 26: (ex) The SDF of a stationary white noise field is constant, S(ω1, ω2) = σ², since its autocorrelation is an impulse.

Slide 27: Estimation Theory — Mean Square Estimates
Estimate the random variable x from the observations y by a suitable function g(y), such that the mean square error E[|x - g(y)|²] is minimized. Writing this error as an integral over y, the integrand is non-negative, so it is sufficient to minimize E[|x - g(y)|² | y] for every y.

Slide 28: The MMSE Estimator
The minimum mean square estimate (MMSE) is the conditional mean, x̂ = E[x | y]; it is also an unbiased estimator.
Theorem: Let y = [y(1), ..., y(N)]^T and x be jointly Gaussian with zero mean. The MMSE estimate is linear, x̂ = Σ_i a_i y(i), where the a_i are chosen such that E[(x - x̂) y(k)] = 0 for all k = 1, 2, ..., N.
(Pf) The random variables x - x̂, y(1), ..., y(N) are jointly Gaussian. But the first one is uncorrelated with all the rest, so it is independent of them. Thus, the error x - x̂ is independent of the random vector y.

Slide 29: (proof continued)
Since the estimation error x - x̂ is independent of y and has zero mean, E[x | y] = x̂. The orthogonality conditions yield the linear equations Σ_i a_i E[y(i) y(n)] = E[x y(n)], n = 1, 2, ..., N.

Slide 30: Orthogonality Principle
The estimation error is minimized if E[(x - x̂) y(n)] = 0, n = 1, 2, ..., N (the orthogonality principle).
If x and {y(n)} are independent, then x̂ = E[x].
If x and {y(n)} are zero-mean Gaussian random variables, then x̂ is a linear combination of {y(n)}, determined by solving N linear equations.

Slide 31: (orthogonality principle, continued)
The minimum mean square estimation error vector is orthogonal to every random variable functionally related to the observations, i.e., E[(x - x̂) g(y)] = 0 for any g(y). Since x̂ is a function of y, we can substitute g(y) = x̂. In matrix notation the linear equations read R a = r, where R = E[y y^T] and r = E[x y].
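A minimal sketch of solving these normal equations, assuming NumPy; the data model (x as a noisy linear function of y) and all coefficient values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data: x is a noisy linear function of y (illustrative).
N, T = 3, 100_000
y = rng.standard_normal((N, T))
x = 0.5 * y[0] - 0.2 * y[1] + 0.1 * y[2] + 0.05 * rng.standard_normal(T)

# Normal equations: R a = r, with R = E[y y^T] and r = E[x y].
R = (y @ y.T) / T
r = (y @ x) / T
a = np.linalg.solve(R, r)

x_hat = a @ y                            # linear MMSE estimate
print(a)                                 # ≈ [0.5, -0.2, 0.1]
print(np.mean((x - x_hat) * y, axis=1))  # ≈ 0: the error is orthogonal to y
```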

Slide 32: Remarks
Minimum MSE: σ_e² = E[|x - x̂|²] = E[|x|²] - Σ_n a_n E[x y(n)].
If x, y(n) are nonzero-mean random variables, the same results apply after subtracting the means.
If x, y(n) are non-Gaussian, the results still give the best linear mean square estimate.

Slide 33: Information Theory
Information: I_k = -log2 p_k. Entropy: H = -Σ_k p_k log2 p_k.
For a binary source with P(1) = p and P(0) = 1 - p: H = -p log2 p - (1 - p) log2 (1 - p), which is maximized (H = 1 bit) at p = 1/2.

Slide 34: Information Theory (continued)
Let x be a discrete r.v. with S_x = {1, 2, ..., K}, and let A_k denote the event {x = k}, with p_k = Pr[x = k]. The uncertainty of A_k is low if p_k is close to one, and it is high if p_k is small.
Uncertainty of an event: I(A_k) = -log p_k; in particular I(A_k) = 0 if Pr(x = k) = 1.
Entropy: H(x) = -Σ_k p_k log p_k. Unit: bit when the logarithm is base 2.

Slide 35: Entropy as a Measure of Information
Consider the event A_k, describing the emission of symbol s_k by the source with probability p_k.
1) If p_k = 1 and p_i = 0 for all i ≠ k: no surprise, hence no information when s_k is emitted by the source.
2) If p_k is low: more surprise, hence more information when s_k is emitted by the source.
I(s_k) = -log2 p_k is the amount of information gained after observing the event s_k; H = Σ_k p_k I(s_k) is the average information per source symbol.

Slide 36: (ex) Guessing Games and Entropy
16 balls: 4 balls marked "1", 4 balls "2", 2 balls "3", 2 balls "4", and 1 ball each marked "5", "6", "7", "8".
Question: find out the number of a drawn ball through a series of yes/no questions.
1) Ask in sequence: x=1? x=2? ... x=7?, stopping at the first "yes" (a "no" to all seven implies x=8).
The average number of questions asked: 1·(1/4) + 2·(1/4) + 3·(1/8) + 4·(1/8) + 5·(1/16) + 6·(1/16) + 7·(1/16) + 7·(1/16) = 51/16 ≈ 3.19.

Slide 37: (example continued)
2) Ask a binary-tree sequence of questions instead: x≤2? then x=1? on the "yes" branch; x≤4? then x=3? next; then x≤6? and x=5?; finally x=7? to separate x=7 from x=8.
The average number of questions is now 2·(1/4)·2 + 3·(1/8)·2 + 4·(1/16)·4 = 2.75, equal to the entropy of x.
⇒ The problem of designing the series of questions to identify x is exactly the same as the problem of encoding the output of an information source.

Slide 38: Fixed-Length vs. Variable-Length Codes
  x=1   000   yes/yes        ⇒ 11
  x=2   001   yes/no         ⇒ 10
  x=3   010   no/yes/yes     ⇒ 011
  x=4   011   no/yes/no      ⇒ 010
  x=5   100   no/no/yes/yes  ⇒ 0011
  x=6   101   no/no/yes/no   ⇒ 0010
  x=7   110   no/no/no/yes   ⇒ 0001
  x=8   111   no/no/no/no    ⇒ 0000
The fixed-length code requires 3 bits/symbol. The variable-length code assigns short codewords to frequent source symbols and long codewords to rare ones, matched to the p_k (the idea behind the Huffman code); its average length equals the entropy of x, the minimum average number of bits required to identify the outcome of x.
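A minimal sketch checking this numerically, assuming NumPy (the probabilities and code lengths are taken from the ball example above):

```python
import numpy as np

# Ball-drawing example: probabilities of x = 1..8 and variable-code lengths.
p = np.array([4, 4, 2, 2, 1, 1, 1, 1]) / 16.0
code_lengths = np.array([2, 2, 3, 3, 4, 4, 4, 4])

H = -np.sum(p * np.log2(p))       # entropy of x
L = np.sum(p * code_lengths)      # average codeword length

print(H, L)   # both 2.75 bits/symbol: the variable-length code achieves the entropy
```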

Slide 39: Noiseless Coding Theorem (Shannon, 1948)
min R = H(x) + ε bits/symbol, where R is the transmission rate and ε is a positive quantity that can be made arbitrarily close to zero by a sophisticated coding procedure utilizing an appropriate amount of encoding delay.

Slide 40: Rate Distortion Function
Distortion: D = E[(x - y)²], where x is a Gaussian r.v. of variance σ² and y is the reproduced value.
Rate distortion function of x: R(D) = max(0, (1/2) log2(σ²/D)).
Rate distortion function for a Gaussian source: for Gaussian r.v.'s x(k) with variances σ_k² and reproduced values y(k), and a fixed average distortion D,
R = Σ_k max(0, (1/2) log2(σ_k²/θ)), where θ is determined by solving D = Σ_k min(θ, σ_k²).
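A sketch of both formulas, assuming NumPy; the bisection-based search for θ (often called reverse water-filling) is my naming, and the variances and distortion targets are illustrative:

```python
import numpy as np

def rate_distortion_gaussian(sigma2: float, D: float) -> float:
    """R(D) = max(0, 0.5*log2(sigma^2 / D)) bits for a scalar Gaussian source."""
    return max(0.0, 0.5 * np.log2(sigma2 / D))

def reverse_waterfill(sigma2s, D_target, iters=60):
    """Bisect for theta with sum_k min(theta, sigma_k^2) = D_target,
    then R = sum_k max(0, 0.5*log2(sigma_k^2 / theta)).
    Assumes 0 < D_target <= sum(sigma2s)."""
    lo, hi = 0.0, max(sigma2s)
    theta = 0.5 * (lo + hi)
    for _ in range(iters):
        theta = 0.5 * (lo + hi)
        if sum(min(theta, s) for s in sigma2s) < D_target:
            lo = theta
        else:
            hi = theta
    return sum(max(0.0, 0.5 * np.log2(s / theta)) for s in sigma2s)

print(rate_distortion_gaussian(1.0, 0.25))       # 1.0 bit: halving the std dev costs 1 bit
print(reverse_waterfill([4.0, 1.0, 0.25], 1.0))  # total rate for total distortion 1.0
```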

