Presentation transcript:

The content of these slides by John Galeotti, © 2012 - 2015 Carnegie Mellon University (CMU), was made possible in part by NIH NLM contract# HHSN276201000580P, and is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/3.0/ or send a letter to Creative Commons, 171 2nd Street, Suite 300, San Francisco, California, 94105, USA. Permissions beyond the scope of this license may be available either from CMU or by email. The most recent version of these slides may be accessed online.

Lecture 2: Math & Probability Background, ch. 1-2 of Machine Vision by Wesley E. Snyder & Hairong Qi. Spring 2015. BioE 2630 (Pitt) : (CMU RI) : (CMU ECE) : (CMU BME). Dr. John Galeotti

General notes about the book
- The book is an overview of many concepts
- Top-quality design requires:
  - Reading the cited literature
  - Reading more literature
  - Experimentation & validation

Two themes
- Consistency
  - A conceptual tool implemented in many/most algorithms
  - Often must fuse information from many local measurements and prior knowledge to make global conclusions about the image
- Optimization
  - Mathematical mechanism
  - The "workhorse" of machine vision

Image Processing Topics
- Enhancement
- Coding
- Compression
- Restoration
  - "Fix" an image
  - Requires a model of image degradation
- Reconstruction

Machine Vision Topics
- AKA:
  - Computer vision
  - Image analysis
  - Image understanding
- Pattern recognition:
  1. Measurement of features
     - Features characterize the image, or some part of it
  2. Pattern classification
     - Requires knowledge about the possible classes
[Diagram: Original Image → Feature Extraction → Classification & Further Analysis, labeled "Our Focus"]

Feature measurement
[Flowchart relating the stages of feature measurement: Original Image, Noise removal, Segmentation, Restoration, Shape Analysis, Consistency Analysis, Matching, Features; stages are keyed to book chapters (Ch. 6-7, Ch. 8, Ch. 9, and others)]

Probability
- Probability of an event a occurring: Pr(a)
- Independence
  - Pr(a) does not depend on the outcome of event b, and vice versa
- Joint probability
  - Pr(a,b) = prob. of both a and b occurring
- Conditional probability
  - Pr(a|b) = prob. of a if we already know the outcome of event b
  - Read "probability of a given b"
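A minimal Python sketch (not part of the original slides) illustrating these definitions with made-up counts for two binary events; the events and numbers are purely hypothetical.

```python
import numpy as np

# Hypothetical co-occurrence counts for two binary events a and b.
# Rows index a in {0, 1}, columns index b in {0, 1}.
counts = np.array([[30., 10.],
                   [20., 40.]])

joint = counts / counts.sum()          # Pr(a, b)
pr_a = joint.sum(axis=1)               # marginal Pr(a)
pr_b = joint.sum(axis=0)               # marginal Pr(b)
cond_a_given_b = joint / pr_b          # Pr(a | b) = Pr(a, b) / Pr(b)

print("Pr(a=1, b=1) =", joint[1, 1])
print("Pr(a=1 | b=1) =", cond_a_given_b[1, 1])
# Independence check: Pr(a, b) equals Pr(a) * Pr(b) iff a and b are independent.
print("Independent?", np.allclose(joint, np.outer(pr_a, pr_b)))
```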

Probability for continuously-valued functions
- Probability distribution function: P(x) = Pr(z < x)
- Probability density function: p(x) = dP(x)/dx
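As a rough illustration (not from the slides), both functions can be estimated from a sample; the standard-normal sample below is just an assumed example.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)               # hypothetical sample

x = np.linspace(-3, 3, 61)
P = np.array([np.mean(z < xi) for xi in x])    # P(x) = Pr(z < x), empirical distribution
p = np.gradient(P, x)                          # p(x) ≈ dP/dx, empirical density

print(P[30], p[30])   # near x = 0: P ≈ 0.5, p ≈ 1/sqrt(2*pi) ≈ 0.399
```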

Linear algebra
- Unit vector: |x| = 1
- Orthogonal vectors: x^T y = 0
- Orthonormal: orthogonal unit vectors
- Inner product of continuous functions: ⟨f,g⟩ = ∫ f(x) g(x) dx
  - Orthogonality & orthonormality apply here too
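A small NumPy sketch (added for illustration, not from the slides) checking these definitions; the specific vectors and functions are arbitrary examples.

```python
import numpy as np

x = np.array([3.0, 4.0])
u = x / np.linalg.norm(x)              # unit vector: |u| = 1
v = np.array([-u[1], u[0]])            # rotated 90 degrees -> orthogonal unit vector

print(np.linalg.norm(u))               # 1.0
print(u @ v)                           # u^T v = 0 (orthogonal)

# Inner product of continuous functions, approximated on a grid:
t = np.linspace(0.0, 2 * np.pi, 10_001)
f, g = np.sin(t), np.cos(t)
inner = np.sum(f * g) * (t[1] - t[0])  # ∫ f(t) g(t) dt ≈ 0 (orthogonal on [0, 2π])
print(inner)
```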

Linear independence
- No one vector is a linear combination of the others
  - x_j ≠ Σ_i a_i x_i for any choice of coefficients a_i, over all i ≠ j
- Any linearly independent set of d vectors {x_i, i = 1…d} is a basis set that spans the space ℝ^d
  - Any other vector in ℝ^d may be written as a linear combination of {x_i}
- Often convenient to use orthonormal basis sets
- Projection: if y = Σ_i a_i x_i then a_i = y^T x_i
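A brief sketch of projection onto an orthonormal basis, assuming the simple 2-D basis below (chosen only for illustration).

```python
import numpy as np

# An orthonormal basis for R^2 (a hypothetical example).
x1 = np.array([1.0, 1.0]) / np.sqrt(2)
x2 = np.array([1.0, -1.0]) / np.sqrt(2)

y = np.array([2.0, 5.0])

# Projection coefficients for an orthonormal basis: a_i = y^T x_i.
a1, a2 = y @ x1, y @ x2
y_reconstructed = a1 * x1 + a2 * x2          # y = sum_i a_i x_i
print(np.allclose(y, y_reconstructed))        # True

# Linear independence check: a set is independent iff its matrix has full column rank.
X = np.column_stack([x1, x2])
print(np.linalg.matrix_rank(X) == X.shape[1])  # True
```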

Linear transforms
- = a matrix, denoted e.g. A
- Quadratic form: x^T A x (a scalar)
- Positive definite: applies to A if x^T A x > 0 for every x ≠ 0
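A short, illustrative-only sketch evaluating a quadratic form and testing positive definiteness via the eigenvalues of a symmetric example matrix.

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])             # a symmetric example matrix

x = np.array([1.0, 3.0])
q = x @ A @ x                           # quadratic form x^T A x
print(q)

# A symmetric matrix is positive definite iff all of its eigenvalues are > 0
# (equivalently, x^T A x > 0 for every nonzero x).
eigvals = np.linalg.eigvalsh(A)
print("positive definite:", np.all(eigvals > 0))
```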

More derivatives
- Of a scalar function of x: the gradient, ∇f = [∂f/∂x_1, …, ∂f/∂x_n]^T
  - Really important!
- Of a vector function of x: the Jacobian, J_ij = ∂f_i/∂x_j
- Hessian = matrix of 2nd derivatives of a scalar function, H_ij = ∂²f/(∂x_i ∂x_j)
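One way to make these concrete (a sketch, not from the slides) is to approximate the gradient and Hessian numerically with central differences; the test function f below is arbitrary.

```python
import numpy as np

def f(x):                                   # scalar function of a vector
    return x[0]**2 + 3 * x[0] * x[1]

def gradient(f, x, h=1e-6):                 # central differences, one per component
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-4):                  # matrix of 2nd derivatives
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        H[i] = (gradient(f, x + e, h) - gradient(f, x - e, h)) / (2 * h)
    return H

x0 = np.array([1.0, 2.0])
print(gradient(f, x0))   # analytic: [2*x0 + 3*x1, 3*x0] = [8, 3]
print(hessian(f, x0))    # analytic: [[2, 3], [3, 0]]
```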

Misc. linear algebra
- Derivative operators
- Eigenvalues & eigenvectors
  - Intuitively, the "most important vectors" of a linear transform (e.g., the matrix A)
  - Characteristic equation: A x = λ x
  - A maps x onto itself with only a change in length
  - λ is an eigenvalue; x is its corresponding eigenvector
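A quick NumPy check (illustrative, not from the slides) that the eigenvectors of an example matrix A satisfy A x = λ x and that det(A − λI) = 0 at each eigenvalue.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # an arbitrary symmetric example

eigvals, eigvecs = np.linalg.eig(A)      # solves A x = lambda x

for lam, x in zip(eigvals, eigvecs.T):   # eigenvectors are the columns
    # A maps each eigenvector onto itself, changed only in length:
    print(np.allclose(A @ x, lam * x))   # True

# The characteristic equation det(A - lambda*I) = 0 holds at each eigenvalue:
print([np.linalg.det(A - lam * np.eye(2)) for lam in eigvals])   # ~[0, 0]
```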

Function minimization
- Find the vector x which produces a minimum of some function f(x)
  - x is a parameter vector
  - f(x) is a scalar function of x
  - The "objective function"
- The minimum value of f is denoted: f(x*) = min_x f(x)
- The minimizing value of x is denoted: x* = argmin_x f(x)

Numerical minimization
- Gradient descent
  - The derivative points away from the minimum
  - Take small steps, each one in the "down-hill" direction
- Local vs. global minima
- Combinatorial optimization:
  - Use simulated annealing
- Image optimization:
  - Use mean field annealing
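A minimal gradient-descent sketch on an assumed quadratic objective; it only illustrates the "small steps down-hill" idea from the slide, and the step size and iteration count are arbitrary choices.

```python
import numpy as np

def f(x):                                   # objective function: a simple quadratic bowl
    return (x[0] - 1.0)**2 + 4.0 * (x[1] + 2.0)**2

def grad_f(x):                              # its gradient (points up-hill)
    return np.array([2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0)])

x = np.array([5.0, 5.0])                    # initial guess
step = 0.1                                  # small fixed step size
for _ in range(200):
    x = x - step * grad_f(x)                # step in the "down-hill" direction

print(x, f(x))                              # close to the minimizer [1, -2]
```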

Markov models
- For temporal processes:
  - The probability of something happening is dependent on a thing that just recently happened.
- For spatial processes:
  - The probability of something being in a certain state is dependent on the state of something nearby.
  - Example: The value of a pixel is dependent on the values of its neighboring pixels.

Markov chain
- Simplest Markov model
- Example: symbols transmitted one at a time
  - What is the probability that the next symbol will be w?
- For a "simple" (i.e. first-order) Markov chain:
  - Pr(w_n | w_{n-1}, w_{n-2}, …, w_1) = Pr(w_n | w_{n-1})
  - "The probability conditioned on all of history is identical to the probability conditioned on the last symbol received."
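A small simulation sketch (not from the slides) of a first-order Markov chain over three symbols with a made-up transition matrix; each new symbol depends only on the previous one.

```python
import numpy as np

# Hypothetical first-order Markov chain over 3 symbols {0, 1, 2}.
# T[i, j] = Pr(next symbol = j | current symbol = i); each row sums to 1.
T = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

rng = np.random.default_rng(0)
symbols = [0]                              # arbitrary starting symbol
for _ in range(20):
    current = symbols[-1]
    # The next symbol depends only on the last symbol received:
    symbols.append(int(rng.choice(3, p=T[current])))

print(symbols)
```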

Hidden Markov models (HMMs)
[Diagram: a 1st Markov process and a 2nd Markov process, together producing the observed output f(t)]

HMM switching
- Governed by a finite state machine (FSM)
[Diagram: the FSM selecting between the output of the 1st process and the output of the 2nd process]

The HMM Task
- Given only the output f(t), determine:
  1. The most likely state sequence of the switching FSM
     - Use the Viterbi algorithm
     - Computational complexity = (# state changes) * (# state values)^2
     - Much better than brute force, which = (# state values)^(# state changes)
  2. The parameters of each hidden Markov model
     - Use the iterative process in the book
     - Better, use someone else's debugged code that they've shared
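A compact Viterbi sketch for a discrete-output HMM, included only to illustrate the dynamic-programming idea and its (# state changes) * (# state values)^2 cost; the model parameters below are toy values, and real applications should rely on a well-tested library implementation, as the slide suggests.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for a discrete-output HMM.
    pi[i]: initial state probabilities, A[i, j]: transition probabilities,
    B[i, k]: probability of emitting symbol k from state i."""
    n_states = len(pi)
    T = len(obs)
    logp = np.full((T, n_states), -np.inf)   # best log-prob of any path ending in each state
    back = np.zeros((T, n_states), dtype=int)
    logp[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = logp[t - 1] + np.log(A[:, j])   # extend every path by one transition
            back[t, j] = np.argmax(scores)
            logp[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
    path = [int(np.argmax(logp[-1]))]                # trace back from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy 2-state, 2-symbol model (all numbers hypothetical).
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])
print(viterbi([0, 0, 1, 1, 1], pi, A, B))
```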