Computer vision: models, learning and inference


Computer vision: models, learning and inference, Chapter 5: The normal distribution. Please send errata to s.prince@cs.ucl.ac.uk

Univariate normal distribution The univariate normal distribution describes a single continuous variable. It takes two parameters, the mean μ and the variance σ² > 0. For short we write Pr(x) = Norm_x[μ, σ²], where

Pr(x) = (1 / √(2πσ²)) exp(−(x − μ)² / (2σ²))

Computer vision: models, learning and inference. ©2011 Simon J.D. Prince
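As a quick numerical check of the density above, here is a minimal NumPy sketch (the grid range and parameter values are illustrative assumptions): the pdf should integrate to one over the real line.

```python
import numpy as np

def norm_pdf(x, mu, sigma2):
    """Univariate normal density Norm_x[mu, sigma^2]."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# Riemann-sum check that the density integrates to one
# (illustrative parameters mu=1, sigma^2=2; grid covers the mass to ~6 sigma)
xs = np.linspace(-10.0, 10.0, 100001)
dx = xs[1] - xs[0]
area = norm_pdf(xs, mu=1.0, sigma2=2.0).sum() * dx
```

The peak value at the mean is 1/√(2πσ²), so for the standard normal it is about 0.399.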

Multivariate normal distribution The multivariate normal distribution describes multiple continuous variables. It takes two parameters: a vector μ containing the mean position, and a symmetric "positive definite" covariance matrix Σ. For short we write Pr(x) = Norm_x[μ, Σ], where for a D-dimensional variable

Pr(x) = (1 / ((2π)^(D/2) |Σ|^(1/2))) exp(−(x − μ)ᵀ Σ⁻¹ (x − μ) / 2)

Positive definite: zᵀΣz is positive for any real z ≠ 0.
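The multivariate density and the positive-definiteness condition can be sketched in NumPy as follows (the 2×2 covariance is an illustrative assumption; positive definiteness is checked via eigenvalues, which is equivalent to the zᵀΣz > 0 condition for symmetric Σ):

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Multivariate normal density Norm_x[mu, Sigma] at a D-dim point x."""
    D = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)    # (x-mu)^T Sigma^{-1} (x-mu)
    norm_const = np.sqrt((2 * np.pi) ** D * np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

def is_positive_definite(Sigma):
    """For symmetric Sigma: z^T Sigma z > 0 for all z != 0
    iff all eigenvalues are positive."""
    return bool(np.all(np.linalg.eigvalsh(Sigma) > 0))

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])        # illustrative symmetric PD covariance
mu = np.zeros(2)
p_at_mean = mvn_pdf(mu, mu, Sigma)    # density at the mean: 1 / (2*pi*sqrt|Sigma|)
```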

Types of covariance Covariance matrices come in three forms: spherical (Σ = σ²I), diagonal (Σ = diag(σ₁², …, σ_D²)), and full (any symmetric positive-definite matrix).

Diagonal covariance = independence When the covariance matrix is diagonal, the variables are independent: the joint density factorizes into a product of univariate normals.
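The factorization claim can be verified numerically: with a diagonal covariance, the 2-D joint density equals the product of the two univariate marginals at every point (the diagonal entries and test point below are illustrative assumptions).

```python
import numpy as np

def norm_pdf(x, mu, sigma2):
    """Univariate normal density Norm_x[mu, sigma^2]."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

s1, s2 = 2.0, 0.5                 # illustrative diagonal entries of Sigma
x1, x2 = 0.3, -1.2                # arbitrary test point
Sigma = np.diag([s1, s2])
x = np.array([x1, x2])

# Joint density Norm_x[0, Sigma] evaluated directly
quad = x @ np.linalg.solve(Sigma, x)
joint = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** 2 * np.linalg.det(Sigma))

# Product of the univariate marginals
product = norm_pdf(x1, 0.0, s1) * norm_pdf(x2, 0.0, s2)
```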

Decomposition of covariance Consider a rotated (green) frame of reference in which the covariance is diagonal, and let the rotation matrix R relate it to the original (pink) frame. Substituting the rotated coordinates into the density shows that a full covariance decomposes into a rotation matrix and a diagonal matrix:

Σ_full = Rᵀ Σ_diag R

Conclusion: a full-covariance normal is a rotated version of a diagonal-covariance normal.
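In practice this decomposition is an eigendecomposition: `np.linalg.eigh` returns the diagonal entries (eigenvalues) and the orthogonal matrix of eigenvectors, from which the full covariance is recovered exactly (the example matrix is an illustrative assumption).

```python
import numpy as np

# Eigendecomposition of a full covariance: Sigma = R @ D @ R.T,
# where the columns of R are orthonormal and D is diagonal.
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])     # illustrative full covariance
eigvals, R = np.linalg.eigh(Sigma)
D = np.diag(eigvals)
reconstructed = R @ D @ R.T        # should recover Sigma exactly
```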

Transformation of variables If Pr(x) = Norm_x[μ, Σ] and we transform the variable as y = Ax + b, the result is also a normal distribution:

Pr(y) = Norm_y[Aμ + b, AΣAᵀ]

This can be used to generate data from an arbitrary Gaussian by transforming samples from a standard normal (zero mean, identity covariance).
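A standard way to realize this in code is to take A as a Cholesky factor of the desired covariance, so that AAᵀ = Σ (a sketch; the target mean, covariance, and sample count are illustrative assumptions):

```python
import numpy as np

# Sample from Norm[mu, Sigma] by transforming standard-normal draws:
# if z ~ Norm[0, I] and x = A z + mu with A A^T = Sigma, then x ~ Norm[mu, Sigma].
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])                 # illustrative target covariance
A = np.linalg.cholesky(Sigma)                  # one valid choice of A
z = rng.standard_normal((2, 100_000))          # standard-normal samples
x = A @ z + mu[:, None]

sample_mean = x.mean(axis=1)                   # should approach mu
sample_cov = np.cov(x)                         # should approach Sigma
```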

Marginal distributions Marginal distributions of a multivariate normal are also normal. If we partition

x = [x₁; x₂],  μ = [μ₁; μ₂],  Σ = [[Σ₁₁, Σ₁₂]; [Σ₂₁, Σ₂₂]]

and Pr(x) = Norm_x[μ, Σ], then Pr(x₁) = Norm_x₁[μ₁, Σ₁₁].
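Computationally, marginalization is just selecting the corresponding block of the mean and covariance; no integration is needed. A small sketch (the 3-D example values are illustrative assumptions):

```python
import numpy as np

# Marginalizing a multivariate normal keeps only the relevant block:
# x1 ~ Norm[mu1, Sigma11].
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.4],
                  [0.1, 0.4, 1.5]])   # illustrative joint covariance
idx = [0, 1]                          # keep the first two components
mu1 = mu[idx]
Sigma11 = Sigma[np.ix_(idx, idx)]     # 2x2 block for the kept components
```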

Conditional distributions Conditional distributions of a multivariate normal are also normal. With the same partition, if Pr(x) = Norm_x[μ, Σ], then

Pr(x₁ | x₂) = Norm_x₁[μ₁ + Σ₁₂Σ₂₂⁻¹(x₂ − μ₂), Σ₁₁ − Σ₁₂Σ₂₂⁻¹Σ₂₁]
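For the bivariate case the blocks are scalars, which makes the formula easy to sketch directly (all parameter values and the observed x₂ below are illustrative assumptions):

```python
# Conditioning a bivariate normal: x1 | x2 is normal with
#   mean = mu1 + Sigma12 * Sigma22^{-1} * (x2 - mu2)
#   var  = Sigma11 - Sigma12 * Sigma22^{-1} * Sigma21
mu1, mu2 = 1.0, -1.0
S11, S12, S22 = 2.0, 0.8, 1.0      # illustrative scalar covariance blocks
x2 = 0.5                           # illustrative observed value

cond_mean = mu1 + (S12 / S22) * (x2 - mu2)
cond_var = S11 - S12 * S12 / S22   # conditioning never increases the variance
```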

Conditional distributions (continued) Conditioning corresponds to renormalizing each scan line of the joint density. For the spherical / diagonal case, x₁ and x₂ are independent, so all of the conditional distributions are the same.

Product of two normals (self-conjugacy w.r.t. mean) The product of any two normal distributions in the same variable is proportional to a third normal distribution:

Norm_x[a, A] · Norm_x[b, B] = κ · Norm_x[(A⁻¹ + B⁻¹)⁻¹(A⁻¹a + B⁻¹b), (A⁻¹ + B⁻¹)⁻¹]

Remarkably, the constant κ also has the form of a normal: κ = Norm_a[b, A + B].
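The identity can be checked numerically in the univariate case: evaluate both sides at an arbitrary point and confirm they agree (the means, variances, and evaluation point are illustrative assumptions):

```python
import numpy as np

def norm_pdf(x, mu, sigma2):
    """Univariate normal density Norm_x[mu, sigma^2]."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# Product rule: Norm_x[a, A] * Norm_x[b, B] = kappa * Norm_x[m, S] with
#   S = (1/A + 1/B)^{-1},  m = S * (a/A + b/B),  kappa = Norm_a[b, A + B]
a, A = 0.0, 1.0                    # illustrative first normal
b, B = 2.0, 0.5                    # illustrative second normal
S = 1.0 / (1.0 / A + 1.0 / B)
m = S * (a / A + b / B)
kappa = norm_pdf(a, b, A + B)      # the constant is itself a normal density

x = 0.7                            # arbitrary evaluation point
lhs = norm_pdf(x, a, A) * norm_pdf(x, b, B)
rhs = kappa * norm_pdf(x, m, S)
```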

Change of variables If the mean of a normal in x is proportional to y, it can be re-expressed as a normal in y whose mean is proportional to x:

Norm_x[Ay, Σ] = κ · Norm_y[A′x, Σ′]

where A′ = (AᵀΣ⁻¹A)⁻¹AᵀΣ⁻¹, Σ′ = (AᵀΣ⁻¹A)⁻¹, and κ is a constant with respect to y.
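In one dimension the rule reduces to: viewed as a function of y, Norm_x[ay, s] is proportional to Norm_y[x/a, s/a²]. This is easy to check numerically, since the ratio of the two densities must be the same constant (here 1/a) at every y (the values of a, s, x are illustrative assumptions):

```python
import numpy as np

def norm_pdf(x, mu, sigma2):
    """Univariate normal density Norm_x[mu, sigma^2]."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# 1-D check: as a function of y, Norm_x[a*y, s] is proportional to Norm_y[x/a, s/a^2]
a, s, x = 2.0, 0.5, 1.3            # illustrative scale, variance, fixed x
ys = np.array([-1.0, 0.2, 0.9])    # several test values of y
f = norm_pdf(x, a * ys, s)         # normal in x, mean proportional to y
g = norm_pdf(ys, x / a, s / a**2)  # re-expressed as a normal in y
ratio = f / g                      # constant in y (equal to 1/a here)
```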

Conclusion The normal distribution is used ubiquitously in computer vision. Important properties: the marginal distribution of a normal is normal; the conditional distribution of a normal is normal; the product of normals is proportional to a normal; and a normal remains normal under a linear change of variables.