Statistics and Shape Analysis By Marc Sobel

Shape similarity Humans recognize shapes via both local and global features: (i) matching local features between shapes, such as curvature or distance to the centroid, which can be modeled statistically by building statistics and parameters that reflect the matching; and (ii) matching the relationship between global features of the shapes (are they both apples or not?).

Incorporating both local and global features in shape matching How can we incorporate both local and global features in shape matching? An obvious paradigm is to model global features as governed by priors, and local features given global features as a likelihood.
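In symbols (our own restatement, not a formula from the slides): if G denotes the global configuration, i.e. the set of correspondences, and X, Y the observed local features of the two shapes, then the paradigm above is the usual prior-times-likelihood factorization of the posterior:

% Hedged sketch of the decomposition described above; the symbols G, X, Y
% are illustrative choices, not the presentation's notation.
\[
  p(G \mid X, Y) \;\propto\;
  \underbrace{p(X, Y \mid G)}_{\text{local features: likelihood}}
  \;\times\;
  \underbrace{\pi(G)}_{\text{global features: prior}} .
\]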

Definitions and Notation Let u1,…,un be the vertices of one shape and v1,…,vm the vertices of another shape. We'd like to build correspondences between the vertices which properly reflect the relationship between the shapes. We use the notation (ui,vj) for a correspondence of this type, and the term 'particle' for a set of such correspondences. Let Xi,l be the l'th local feature measure for vertex i of the first shape and Yj,l the l'th local feature measure for vertex j of the second shape. For now assume these feature measures are observed. We'd like to build a particle which reflects the local and global features of interest.
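In symbols (index notation chosen here for illustration; the slide's own notation is not reproduced in the transcript), a particle is a collection of vertex pairs, each vertex carrying its vector of local feature measures:

% Illustrative notation: a particle is a set of K correspondences between
% vertices of the two shapes, with L observed local features per vertex.
\[
  C \;=\; \{(u_{i_1}, v_{j_1}), \ldots, (u_{i_K}, v_{j_K})\},
  \qquad
  X_{i} = (X_{i,1}, \ldots, X_{i,L}), \quad
  Y_{j} = (Y_{j,1}, \ldots, Y_{j,L}).
\]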

Contiguity: An important global feature. If shapes result from one another via rotation and scaling, then the order of the shape-1 correspondence points should match the order of the shape-2 correspondence points: i.e., if (i1,j1) is one correspondence and (i2,j2) is another, then either i1<i2 and j1<j2, or i1>i2 and j1>j2. We can incorporate this into a prior.
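A minimal sketch of this check, in code of our own (the list-of-index-pairs representation and the function name are illustrative, not from the presentation):

def is_contiguous(correspondences):
    # Return True if all index pairs are consistently ordered: for any two
    # correspondences (i1, j1) and (i2, j2), i1 < i2 implies j1 < j2.
    ordered = sorted(correspondences)                  # sort by shape-1 index i
    for (i1, j1), (i2, j2) in zip(ordered, ordered[1:]):
        if not (i1 < i2 and j1 < j2):                  # order must agree in both shapes
            return False
    return True

# Example: the first set preserves vertex order in both shapes, the second does not.
print(is_contiguous([(1, 2), (3, 5), (4, 7)]))   # True
print(is_contiguous([(1, 5), (3, 2), (4, 7)]))   # False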

Notation: We have that:

Simple Likelihood Based on the observed features we form weight statistics: let W denote the weight matrix associated with the features. Therefore, given that a correspondence 'C' belongs to the 'true' set of correspondences, we write the simple likelihood in the form:
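The slide's formula itself is not reproduced in the transcript. One plausible form, consistent with weights built from feature discrepancies but offered purely as an assumption, is a Gaussian-type weight on the feature differences:

% Illustrative assumption only: a weight for the correspondence C = (u_i, v_j)
% built from the discrepancy of the observed local features.
\[
  W_{ij} \;\propto\; \exp\!\Big( -\tfrac{1}{2\sigma^2} \sum_{l} (X_{i,l} - Y_{j,l})^2 \Big),
  \qquad
  \mathcal{L}\big(C = (u_i, v_j)\big) \;\propto\; W_{ij}.
\]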

Complicated Likelihoods At stage t, putting ω as the parameter, we define the likelihood:

Simple and Complicated Priors We model a prior over all sets of correspondences that are strongly contiguous: a) a simple prior (we use ω for the weight variable); b) a complicated prior, which I] gives more weight to diagonals than to other correspondences and II] can be defined sequentially, based on the fact that
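The fact referred to is not shown in the transcript; the standard chain-rule decomposition that allows a prior over a set of correspondences to be specified sequentially (stated here as a general identity rather than the slide's exact formula) is:

% General chain-rule identity: a prior over correspondences c_1, ..., c_T
% can be built up one correspondence at a time.
\[
  \pi(c_1, \ldots, c_T) \;=\; \pi(c_1)\,\prod_{t=2}^{T} \pi\big(c_t \mid c_1, \ldots, c_{t-1}\big).
\]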

Complicated Prior We put the prior in the following form, with 'DIAG[i,j]' referring to the positively oriented diagonal to which (i,j) belongs:
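The formula is missing from the transcript. Purely as an illustration of how a diagonal-favoring prior could look (an assumption, not the presentation's definition), one might weight a correspondence by the length of the positively oriented diagonal it lies on:

% Illustrative assumption: weight (i, j) by the length of the positively
% oriented diagonal DIAG[i, j] of the correspondence grid containing it,
% so long runs of contiguous matches are favored.
\[
  \pi(i, j) \;\propto\; \exp\!\big(\beta \,\big|\mathrm{DIAG}[i, j]\big|\big), \qquad \beta > 0.
\]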

Simulating the Posterior Distribution: Simple Prior We would like to simulate the posterior distribution of contiguous correspondences. We do this by calculating the weights:
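A rough sketch of how such a simulation could be organized, in code of our own: an assumed Gaussian feature likelihood supplies the weights, and the contiguity restriction plays the role of the simple prior. None of the function names or numerical choices come from the presentation.

import numpy as np

rng = np.random.default_rng(0)

def feature_weight(X, Y, i, j, sigma=1.0):
    # Gaussian-type weight from the local feature discrepancy
    # (an illustrative assumption, not the presentation's formula).
    d2 = np.sum((X[i] - Y[j]) ** 2)
    return np.exp(-0.5 * d2 / sigma ** 2)

def sir_step(particles, weights, X, Y):
    # One stage of sequential importance resampling.  Each particle is a list
    # of correspondences (i, j).  A new correspondence is proposed uniformly
    # among those that keep the particle contiguous (the simple prior), the
    # particle is reweighted by the feature likelihood, and the population is
    # resampled.
    n, m = len(X), len(Y)
    grown, new_w = [], []
    for p, w in zip(particles, weights):
        i0, j0 = p[-1] if p else (-1, -1)
        # contiguity restriction: indices must increase in both shapes
        candidates = [(i, j) for i in range(i0 + 1, n) for j in range(j0 + 1, m)]
        if not candidates:
            grown.append(p)
            new_w.append(w)
            continue
        i, j = candidates[rng.integers(len(candidates))]    # uniform proposal
        grown.append(p + [(i, j)])
        new_w.append(w * feature_weight(X, Y, i, j))         # likelihood reweighting
    new_w = np.asarray(new_w)
    new_w = new_w / new_w.sum()
    idx = rng.choice(len(grown), size=len(grown), p=new_w)   # multinomial resampling
    return [list(grown[k]) for k in idx], np.full(len(grown), 1.0 / len(grown))

# Toy usage: 2-D local features (say curvature and distance to centroid) for
# two shapes with 6 and 7 vertices, 50 particles, grown over 4 stages.
X, Y = rng.normal(size=(6, 2)), rng.normal(size=(7, 2))
particles = [[] for _ in range(50)]
weights = np.full(50, 1.0 / 50)
for _ in range(4):
    particles, weights = sir_step(particles, weights, X, Y)
print(particles[0])   # one sampled set of contiguous correspondences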

Simulating the Posterior Distribution: Complicated Prior Here we simulate:

A Simpler Model Define the posterior probabilities, for a parameter λ described below:
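The defining formula is not reproduced in the transcript. A natural reading, offered only as an assumption, is a tempered (softmax) posterior over correspondences driven by the weights W and the parameter λ:

% Illustrative assumption: a tempered posterior over correspondences,
% with lambda controlling how sharply it concentrates on large weights.
\[
  p_{\lambda}(i, j) \;=\; \frac{\exp\{\lambda\, W_{ij}\}}{\sum_{i', j'} \exp\{\lambda\, W_{i'j'}\}}.
\]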

Weights for the simpler model The weights for the simpler model are particularly easy to compute: letting λ tend to infinity at a suitable rate, we obtain convergence to the MAP estimator of the simple particle filter.
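Under the tempered form sketched above (again an assumption about the missing formula), the limit is the usual one: as λ grows, the posterior concentrates on the correspondences of maximal weight, i.e. on the MAP estimate:

% With the tempered posterior above, increasing lambda places all mass on the
% correspondences of maximal weight (the MAP estimate).
\[
  p_{\lambda}(i, j) \;\xrightarrow[\lambda \to \infty]{}\;
  \begin{cases}
    1/|\mathcal{M}| & \text{if } (i, j) \in \mathcal{M} = \arg\max_{i', j'} W_{i'j'},\\[2pt]
    0 & \text{otherwise.}
  \end{cases}
\]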

Shape Similarity: A more complicated model employing curvature and distance parameters We have:
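The model's equations are not in the transcript. Given the later reference to priors on the μ's and ν's, one illustrative reading (an assumption only) is that the curvature and distance-to-centroid features of matched vertices are Gaussian around correspondence-level parameters:

% Illustrative assumption: for a correspondence (u_i, v_j), curvature and
% distance-to-centroid features are Gaussian around shared parameters mu, nu.
\[
  X_{i,\mathrm{curv}},\; Y_{j,\mathrm{curv}} \;\sim\; \mathcal{N}(\mu_{ij}, \sigma^2_{\mathrm{curv}}),
  \qquad
  X_{i,\mathrm{dist}},\; Y_{j,\mathrm{dist}} \;\sim\; \mathcal{N}(\nu_{ij}, \sigma^2_{\mathrm{dist}}).
\]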

Simple Likelihood Based on the observed features we form weight parameters: let W denote the weight matrix associated with the features. Therefore, given that a correspondence 'C' belongs to the 'true' set of correspondences, we write the likelihood in the form:

Particle Likelihood We write the likelihood in the form:

Particle Prior We assume standard priors for the μ's and ν's. We also assume a prior for the set of contiguous correspondences. The particle is updated as follows: define

Particle Prior At stage t we have particles; their weights are given by:
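The weight formula itself is missing from the transcript. For reference, the generic sequential-importance-sampling update that stage-t particle weights obey (a standard identity, not the slide's specific expression) multiplies the previous weight by the prior and likelihood of the newly added correspondence and divides by its proposal probability:

% Generic sequential importance weight update for the k-th particle at stage t:
% new weight = old weight x (prior x likelihood of the added correspondence)
%              / (proposal probability of that correspondence).
\[
  w^{(k)}_{t} \;\propto\; w^{(k)}_{t-1}\,
  \frac{\pi\big(c^{(k)}_{t} \mid c^{(k)}_{1:t-1}\big)\,
        \mathcal{L}\big(c^{(k)}_{t}\big)}
       {q\big(c^{(k)}_{t} \mid c^{(k)}_{1:t-1}\big)}.
\]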