
1 Lecture 2 Math & Probability Background ch. 1-2 of Machine Vision by Wesley E. Snyder & Hairong Qi Spring 2014 BioE 2630 (Pitt) : 16-725 (CMU RI) 18-791 (CMU ECE) : 42-735 (CMU BME) Dr. John Galeotti & Prahlad G Menon

2 General notes about the book
The book is an overview of many concepts
Top-quality design requires:
– Reading the cited literature
– Reading more literature
– Experimentation & validation

3 Two themes in the textbook
Consistency
– A conceptual tool implemented in many/most algorithms
– Often must fuse information from local measurements to make global conclusions about the image
– Is the fused input information conflicting?
Optimization
– The mathematical mechanism for solving matrix equations
– The "workhorse" of machine vision

4 Image Processing Topics
Enhancement
Coding
– Compression
Restoration
– "Fix" an image
– Requires a model of image degradation
Reconstruction

5 Machine Vision Topics
AKA:
– Computer vision
– Image analysis
– Image understanding
Pattern recognition:
1. Measurement of features: features characterize the image, or some part of it
2. Pattern classification: requires knowledge about the possible classes
[Figure: pipeline from Original Image to Feature Extraction to Classification & Further Analysis; this is our focus]

6 Feature measurement
[Figure: feature-measurement pipeline over the Original Image: Restoration (Ch. 6-7), Noise removal, Segmentation (Ch. 8), Consistency Analysis (Ch. 9), Shape Analysis (Ch. 10-11), Matching and Features (Ch. 12-16)]

7 Linear Operations
What is a linear operation? How can it be represented as a matrix operator?
Rules for basic matrix multiplication; matrix inverse & unitary matrices.
What is linear independence? What is a basis? What is an orthonormal basis set, and why is it useful?
What is function minimization, and why is it useful?
Examples of useful linear operations / operators.
What is an inverse linear transform? Example.

8 Linear algebra
Unit vector: |x| = 1
Orthogonal vectors: x^T y = 0
Orthogonal square matrices: A^T = A^(-1)
Orthonormal: orthogonal unit vectors
Inner product of continuous functions: ⟨f, g⟩ = ∫ f(x) g(x) dx
– Orthogonality & orthonormality apply here too
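As a quick numerical illustration of these definitions (a minimal sketch, not from the book; the vectors and the rotation matrix below are made-up examples):

import numpy as np

x = np.array([1.0, 0.0])          # unit vector: |x| = 1
y = np.array([0.0, 1.0])
print(np.dot(x, y))               # 0.0 -> x and y are orthogonal

theta = np.pi / 6                 # a rotation matrix is an orthogonal square matrix
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(A.T, np.linalg.inv(A)))   # True: A^T = A^(-1)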

9 Linear transforms
For a matrix A:
Quadratic form: x^T A x
Positive definite: applies to A if x^T A x > 0 for every non-zero vector x
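For a concrete check (a sketch; the symmetric matrix A below is an arbitrary example, not from the text), a symmetric matrix is positive definite exactly when all of its eigenvalues are positive, which is easy to test numerically:

import numpy as np

A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])             # symmetric example matrix
x = np.array([0.3, -1.2])
print(x @ A @ x)                         # the quadratic form x^T A x, positive for this x

eigvals = np.linalg.eigvalsh(A)          # eigenvalues of the symmetric matrix
print(np.all(eigvals > 0))               # True -> A is positive definite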

10 Linear independence
No one vector is a linear combination of the others
– x_j ≠ Σ_i a_i x_i for any choice of a_i across all i ≠ j
Any linearly independent set of d vectors {x_i, i = 1…d} is a basis set that spans the space ℝ^d
– Any other vector in ℝ^d may be written as a linear combination of {x_i}
Often convenient to use orthonormal basis sets
Projection: if y = Σ_i a_i x_i, then a_i = y^T x_i (for an orthonormal basis)
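A small numerical sketch of the projection formula (the orthonormal basis below is just a rotated 2-D example of my own, not from the book):

import numpy as np

theta = 0.7                                # build an orthonormal basis of R^2 from a rotation
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x1, x2 = Q[:, 0], Q[:, 1]

y = np.array([2.0, -3.0])
a1, a2 = y @ x1, y @ x2                    # projection coefficients a_i = y^T x_i
print(np.allclose(a1 * x1 + a2 * x2, y))   # True: y = sum_i a_i x_i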

11 Vector norms

12 Examples of linear operators
Rotation, translation, scaling – spatial transforms
Convolution
Wolfram – definition of a linear operator / transformation

13 The Convolution Operator – Example
Simple example of 2D spatial convolution of an input image (matrix S) with an impulse response (kernel) h: Y[n] = (S * h)[n]. Notice that the kernel matrix is flipped in both the horizontal and vertical directions before being multiplied against the overlapping input data.
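A minimal sketch of that flip-and-multiply recipe (the toy image and 2x2 kernel are invented; scipy's convolve2d is used only to cross-check the hand-rolled loop):

import numpy as np
from scipy.signal import convolve2d

S = np.arange(25, dtype=float).reshape(5, 5)   # toy input image
h = np.array([[1.0, 0.0],
              [0.0, -1.0]])                    # toy 2x2 kernel

h_flipped = h[::-1, ::-1]                      # flip horizontally and vertically
out = np.zeros((S.shape[0] - 1, S.shape[1] - 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        # multiply the flipped kernel against the overlapping patch and sum
        out[i, j] = np.sum(S[i:i+2, j:j+2] * h_flipped)

print(np.allclose(out, convolve2d(S, h, mode='valid')))   # True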

14 Eigenvalues & Eigenvectors
Let A be an n × n matrix. The number λ is an eigenvalue of A if there exists a non-zero vector v such that Av = λv. In this case, v is called an eigenvector of A corresponding to λ. (The eigenvalues are the roots of the characteristic equation det(A − λI) = 0.)
What does this mean physically? A maps v onto itself with only a change in length.
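A quick numerical check of Av = λv (the example matrix is chosen arbitrarily, a sketch only):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

v = eigenvectors[:, 0]                 # eigenvector for the first eigenvalue
lam = eigenvalues[0]
print(np.allclose(A @ v, lam * v))     # True: A maps v onto itself, scaled by lam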

15 Derivatives
Of a scalar function f of a vector x: the vector of partials [∂f/∂x_1, …, ∂f/∂x_n]^T
– Called the gradient
– Really important!
Of a vector function f of x: the matrix with entries ∂f_i/∂x_j
– Called the Jacobian
Hessian = matrix of 2nd derivatives of a scalar function, with entries ∂²f/(∂x_i ∂x_j)
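As an illustrative sketch (the quadratic f below is a toy example of my own), an analytic gradient can be checked against a central finite-difference approximation:

import numpy as np

def f(x):
    return x[0]**2 + 3.0 * x[0] * x[1] + 2.0 * x[1]**2    # scalar function of a vector

def grad_f(x):
    return np.array([2.0 * x[0] + 3.0 * x[1],             # analytic gradient
                     3.0 * x[0] + 4.0 * x[1]])

x0 = np.array([1.0, -2.0])
eps = 1e-6
numeric = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                    for e in np.eye(2)])                   # finite-difference gradient
print(np.allclose(numeric, grad_f(x0)))                    # True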

16 Linear INVERSE problems – de-blurring an image
Given measured data g, and information about the system H and the noise model, recover an estimate ~f that closely approximates f.
When H is square, solve g = Hf via f = H^(-1) g directly (exactly), e.g., using Fourier-transform methods.
E.g., Richardson–Lucy deconvolution is an iterative procedure for recovering a latent image that has been blurred by a known point spread function.
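A hedged sketch of the Fourier route for the square case, assuming H is periodic convolution with a known kernel (the 1-D signal and 5-tap blur are invented and there is no noise, so naive inverse filtering works; with noise one would instead use Richardson–Lucy, Wiener filtering, or another regularized method):

import numpy as np

rng = np.random.default_rng(0)
f = rng.random(64)                          # unknown "true" signal
h = np.zeros(64); h[:5] = 1.0 / 5.0         # known blur kernel (5-tap moving average)

# measured data g = H f, implemented as circular convolution via the FFT
g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))

H = np.fft.fft(h)
f_hat = np.real(np.fft.ifft(np.fft.fft(g) / H))   # naive inverse filter: divide in the Fourier domain
print(np.allclose(f_hat, f))                       # True in this noise-free toy case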

17 Least Squares Fitting – solving as a linear system (assignment)
Solve Ax ≈ b in the least-squares sense: minimize ||Ax − b||², which leads to the normal equations A^T A x = A^T b.
What kind of basis is ideal for A here?
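A small sketch of solving such a problem as a linear system (the noisy line data are synthetic; np.linalg.lstsq minimizes ||Ax − b||² directly):

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
b = 2.0 * t + 1.0 + 0.05 * rng.standard_normal(t.size)    # noisy samples of a line

A = np.column_stack([t, np.ones_like(t)])                  # design matrix: columns are the basis functions
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)                                                    # approximately [2.0, 1.0]: recovered slope and intercept

Note that if the columns of A were orthonormal, A^T A would reduce to the identity and the coefficients could be read off directly, which is one reason the choice of basis matters here.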

18 Function minimization
Find the vector x which produces a minimum of some function f(x)
– x is a parameter vector
– f(x) is a scalar function of x, the "objective function"
The minimum value of f is denoted min_x f(x)
The minimizing value of x is denoted argmin_x f(x)

19 Other numerical minimization techniques
Gradient descent
– The derivative (gradient) points away from the minimum
– Take small steps, each one in the "down-hill" (negative-gradient) direction
Local vs. global minima
Combinatorial optimization:
– Use simulated annealing
Image optimization:
– Use mean field annealing
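A minimal gradient-descent sketch (toy quadratic objective and fixed step size of my own choosing; it illustrates "small steps in the down-hill direction", not a production optimizer):

import numpy as np

def f(x):                                  # objective function
    return (x[0] - 3.0)**2 + 2.0 * (x[1] + 1.0)**2

def grad(x):                               # its gradient
    return np.array([2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)])

x = np.zeros(2)
step = 0.1                                 # small fixed step size
for _ in range(200):
    x = x - step * grad(x)                 # move down-hill
print(x)                                   # converges to the minimizer [3, -1]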

20 Fourier Orthonormal Basis
Many imaging problems are easier to understand if we choose the building blocks Φ_k carefully.
Orthogonality: ⟨Φ_k, Φ_l⟩ = Σ_n Φ*_k[n] Φ_l[n] = 0 if k ≠ l.
Then the solution of Ac = s is simply c = (1/N) A* s: the coefficients c[k] of the expansion are the Discrete Fourier Transform of the data s[n].

21 Orthonormal Expansion
Relate the problem of expanding these approximations to solving Ax = b. Now x are the expansion coefficients, b is an image, and A = [Φ_1, Φ_2, …], where each Φ_i is a column vector of an orthonormal basis.
When the columns of A are orthonormal: (A*)(A) = I, so x = A* b.
b[p] = Σ_k x_k A_{p,k}, or s[p] = Σ_k c_k Φ_k[p], where the c_k may be termed Fourier coefficients if Φ_k[p] refers to the Fourier basis:
Φ_k[n] = e^(−j2πkn/N) = cos(2πkn/N) − j sin(2πkn/N), with ⟨Φ_k, Φ_k⟩ = N for the Fourier basis.
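To make slides 20-21 concrete (a sketch only; the data vector is arbitrary), one can build the Fourier basis matrix explicitly, check ⟨Φ_k, Φ_k⟩ = N, and verify the expansion:

import numpy as np

N = 8
n = np.arange(N)
A = np.exp(-2j * np.pi * np.outer(n, n) / N)        # columns Phi_k[n] = e^(-j 2 pi k n / N)

print(np.allclose(A.conj().T @ A, N * np.eye(N)))   # True: <Phi_k, Phi_l> = N * delta_kl

s = np.cos(2 * np.pi * 2 * n / N) + 0.5 * n         # arbitrary data s[n]
c = (A.conj().T @ s) / N                             # coefficients c = (1/N) A* s
print(np.allclose(A @ c, s))                         # True: s is reconstructed from the expansion
print(np.allclose(c, np.fft.ifft(s)))                # with this sign convention, c matches np.fft.ifft (np.fft.fft uses the opposite sign)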

22 Many useful properties of the Fourier basis…
Interesting fact: in MRI, "k-space" data is acquired in frequency space (Fourier space) and then converted back to image space.

23 Probability
Probability of an event a occurring:
– Pr(a)
Independence
– Pr(a) does not depend on the outcome of event b, and vice-versa
Joint probability
– Pr(a,b) = prob. of both a and b occurring
Conditional probability
– Pr(a|b) = prob. of a if we already know the outcome of event b
– Read "probability of a given b"

24 Probability for continuously-valued functions
Probability distribution function: P(x) = Pr(z < x)
Probability density function: p(x) = dP(x)/dx
Expected value: E[x] = ∫ x p(x) dx
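A tiny numerical sketch of the expected-value definition (using a Gaussian density with mean 1 as the example; the integral is approximated by a Riemann sum on a grid):

import numpy as np

x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]
p = np.exp(-0.5 * (x - 1.0)**2) / np.sqrt(2 * np.pi)   # Gaussian density, mean 1, std 1

print(np.sum(p) * dx)        # ~1.0: a density integrates to 1
print(np.sum(x * p) * dx)    # ~1.0: E[x] = integral of x p(x) dx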

25 Markov models
For temporal processes:
– The probability of something happening is dependent on a thing that just recently happened.
For spatial processes:
– The probability of something being in a certain state is dependent on the state of something nearby.
– Example: the value of a pixel is dependent on the values of its neighboring pixels.

26 Markov chain
Simplest Markov process: automatic random transition between finite states with determinable:
– Initial probabilities
– Transition probabilities
Example: symbols transmitted one at a time
– What is the probability that the next symbol will be w?
For a Markov chain:
– "The probability conditioned on all of history is identical to the probability conditioned on the last symbol received."
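A minimal Markov-chain sketch (the two-symbol alphabet, initial probabilities, and transition matrix are all invented for illustration):

import numpy as np

symbols = ["a", "w"]
initial = np.array([0.6, 0.4])              # initial probabilities
P = np.array([[0.9, 0.1],                   # transition probabilities P[i, j] = Pr(next = j | current = i)
              [0.3, 0.7]])

rng = np.random.default_rng(0)
state = rng.choice(2, p=initial)
chain = [state]
for _ in range(10):
    state = rng.choice(2, p=P[state])       # the next symbol depends only on the last symbol
    chain.append(state)
print([symbols[s] for s in chain])

# Probability that the next symbol is "w", given the current symbol is "a":
print(P[0, 1])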

27 Hidden Markov models (HMMs)

28 HMM switching
Governed by a finite state machine (FSM)
[Figure: the FSM switches between the output of a 1st process and the output of a 2nd process; the HMM generates the observation sequence]

29 The HMM Task
Given only the output f(t), determine:
1. The most likely state sequence of the switching FSM
– Use the Viterbi algorithm
– Computational complexity = (# state changes) × (# state values)²
– Much better than brute force, which is (# state values)^(# state changes)
2. The parameters of each hidden Markov model
– Use the iterative process in the book
– Better, use someone else's debugged code that they've shared
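A compact Viterbi sketch in log space (the two-state HMM parameters and the observation sequence are made up; it is only meant to show the dynamic-programming idea behind the complexity claim above):

import numpy as np

log_pi = np.log([0.5, 0.5])                  # initial state probabilities
log_A  = np.log([[0.8, 0.2],                 # state transition probabilities
                 [0.3, 0.7]])
log_B  = np.log([[0.9, 0.1],                 # emission probabilities B[state, observation]
                 [0.2, 0.8]])
obs = [0, 0, 1, 1, 0]                        # observation sequence

n_states, T = 2, len(obs)
delta = np.zeros((T, n_states))              # best log-probability of any path ending in each state
psi = np.zeros((T, n_states), dtype=int)     # back-pointers

delta[0] = log_pi + log_B[:, obs[0]]
for t in range(1, T):
    for j in range(n_states):
        scores = delta[t - 1] + log_A[:, j]
        psi[t, j] = np.argmax(scores)
        delta[t, j] = scores[psi[t, j]] + log_B[j, obs[t]]

path = [int(np.argmax(delta[-1]))]           # backtrack the most likely state sequence
for t in range(T - 1, 0, -1):
    path.append(int(psi[t, path[-1]]))
path.reverse()
print(path)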

30 Hidden Markov models (HMMs)
[Figure: a 1st Markov process and a 2nd Markov process whose outputs combine to produce the observed signal f(t)]

31 Markov models for Image Segmentation
http://www.stat.ucla.edu/~sczhu/papers/DDMCMC_reprint.pdf
Code: http://pages.ucsd.edu/~ztu/Download_ddmcmc.htm

