240-572: Appendix A: Mathematical Foundations
Montri Karnjanadecha (ac.th/~montri)
240-650 Principles of Pattern Recognition

Slide 1: Appendix A: Mathematical Foundations. Montri Karnjanadecha (ac.th/~montri). Principles of Pattern Recognition.

Slide 2: Appendix A: Mathematical Foundations

Slide 3: Linear Algebra
- Notation and Preliminaries
- Inner Product
- Outer Product
- Derivatives of Matrices
- Determinant and Trace
- Matrix Inversion
- Eigenvalues and Eigenvectors

Slide 4: Notation and Preliminaries
A d-dimensional column vector x and its transpose x^t can be written as
$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{pmatrix}, \qquad \mathbf{x}^t = (x_1, x_2, \ldots, x_d).$$

Slide 5: Inner Product
The inner product of two vectors having the same dimensionality will be denoted x^t y and yields a scalar:
$$\mathbf{x}^t \mathbf{y} = \sum_{i=1}^{d} x_i y_i.$$

Slide 6: Euclidean Norm (Length of a Vector)
$$\|\mathbf{x}\| = \sqrt{\mathbf{x}^t \mathbf{x}}$$
We call a vector normalized if ||x|| = 1. The angle between two vectors x and y satisfies
$$\cos\theta = \frac{\mathbf{x}^t \mathbf{y}}{\|\mathbf{x}\|\,\|\mathbf{y}\|}.$$
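The following NumPy sketch illustrates the inner product, norm, and angle formulas from the last two slides; the vectors x and y are arbitrary examples, not values from the original slides.

```python
import numpy as np

x = np.array([3.0, 4.0])  # arbitrary example vectors
y = np.array([4.0, 3.0])

inner = x @ y               # x^t y = sum_i x_i y_i
norm_x = np.sqrt(x @ x)     # Euclidean norm ||x|| via the inner product
norm_y = np.linalg.norm(y)  # the same norm via the library routine
cos_theta = inner / (norm_x * norm_y)
theta = np.degrees(np.arccos(cos_theta))  # angle between x and y

print(inner, norm_x, theta)  # 24.0 5.0 16.26...
```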

Slide 7: Cauchy-Schwarz Inequality
$$|\mathbf{x}^t \mathbf{y}| \le \|\mathbf{x}\|\,\|\mathbf{y}\|$$
If x^t y = 0, the vectors are orthogonal. If |x^t y| = ||x|| ||y||, the vectors are colinear.

Slide 8: Linear Independence
A set of vectors {x_1, x_2, x_3, …, x_n} is linearly independent if no vector in the set can be written as a linear combination of any of the others. A set of d linearly independent (L.I.) vectors spans a d-dimensional vector space, i.e., any vector in that space can be written as a linear combination of the spanning vectors.
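Linear (in)dependence can be tested numerically: the rank of the matrix whose columns are the vectors equals the dimension of the space they span. A minimal sketch with three made-up vectors:

```python
import numpy as np

# x3 is a linear combination of x1 and x2, so the set {x1, x2, x3}
# is not linearly independent and spans only a 2-D subspace of R^3.
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])
x3 = 2 * x1 + 3 * x2

rank = np.linalg.matrix_rank(np.column_stack([x1, x2, x3]))
print(rank)  # 2, not 3: the vectors are linearly dependent
```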

Slide 9: Outer Product
The outer product of two vectors yields a matrix:
$$\mathbf{M} = \mathbf{x}\mathbf{y}^t, \qquad M_{ij} = x_i y_j.$$
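A one-line NumPy check of the outer product, again with arbitrary example vectors:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0])

M = np.outer(x, y)  # M[i, j] = x[i] * y[j]; here a 3x2 matrix
print(M)
# [[ 4.  5.]
#  [ 8. 10.]
#  [12. 15.]]
```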

Slide 10: Determinant and Trace
The determinant of a matrix is a scalar that reveals properties of the matrix. If the columns are considered as vectors, and these vectors are not L.I., then the determinant vanishes. The trace is the sum of the matrix's diagonal elements:
$$\mathrm{tr}(\mathbf{A}) = \sum_{i=1}^{d} a_{ii}.$$
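A small NumPy sketch of these two facts, using made-up matrices: the determinant vanishes exactly when the columns are linearly dependent, and the trace is the sum of the diagonal.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # columns are linearly independent
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second column = 2 * first column

print(np.linalg.det(A))  # 3.0 (nonzero: columns are L.I.)
print(np.linalg.det(B))  # 0.0 up to floating-point error (columns dependent)
print(np.trace(A))       # 4.0 = 2 + 2, the sum of the diagonal elements
```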

Slide 11: Eigenvectors and Eigenvalues
A very important class of linear equations is of the form
$$\mathbf{A}\mathbf{x} = \lambda \mathbf{x}.$$
The solution vector x = e_i and corresponding scalar λ_i are called an eigenvector and its associated eigenvalue, respectively. Eigenvalues can be obtained by solving the characteristic equation:
$$|\mathbf{A} - \lambda \mathbf{I}| = 0.$$

Slide 12: Example
Let us find the eigenvalues and associated eigenvectors of a matrix A. Characteristic equation: $|\mathbf{A} - \lambda \mathbf{I}| = 0$.

Slide 13: Example (cont'd)
Solving the characteristic equation gives the eigenvalues. Each eigenvector can then be found by substituting its eigenvalue into $\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$ and solving for x_1 in terms of x_2 (or vice versa).

Slide 14: Example (cont'd)
Substituting each eigenvalue in turn yields its associated eigenvector.

Slide 15: Trace and Determinant
The trace equals the sum of the eigenvalues, and the determinant equals their product:
$$\mathrm{tr}(\mathbf{A}) = \sum_{i=1}^{d} \lambda_i, \qquad |\mathbf{A}| = \prod_{i=1}^{d} \lambda_i.$$
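Since the numbers in the worked example above did not survive transcription, here is a sketch with an assumed stand-in matrix [[2, 1], [1, 2]] (chosen so the eigenpairs are easy to verify by hand); it checks the eigenvalue equation and the trace/determinant identities.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # assumed example matrix, not the one from the slides

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are the eigenvectors e_i
print(eigvals)                       # eigenvalues 3 and 1 (order may vary)

# Check A e_i = lambda_i e_i for each eigenpair.
for lam, e in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ e, lam * e)

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
assert np.isclose(np.trace(A), eigvals.sum())        # 4 = 3 + 1
assert np.isclose(np.linalg.det(A), eigvals.prod())  # 3 = 3 * 1
```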

Slide 16: Probability Theory
Let x be a discrete random variable that can assume any of a finite number m of different values in the set X = {v_1, v_2, …, v_m}. We denote by p_i the probability that x assumes the value v_i:
$$p_i = \Pr[x = v_i], \qquad i = 1, \ldots, m.$$
The p_i must satisfy two conditions:
$$p_i \ge 0 \quad\text{and}\quad \sum_{i=1}^{m} p_i = 1.$$

Slide 17: Probability Mass Function
Sometimes it is more convenient to express the set of probabilities {p_1, p_2, …, p_m} in terms of the probability mass function P(x), which must satisfy the following conditions for discrete x:
$$P(x) \ge 0 \quad\text{and}\quad \sum_{x \in X} P(x) = 1.$$

Slide 18: Expected Value
The expected value, mean, or average of the random variable x is defined by
$$\mathcal{E}[x] = \mu = \sum_{x \in X} x\,P(x).$$
If f(x) is any function of x, the expected value of f is defined by
$$\mathcal{E}[f(x)] = \sum_{x \in X} f(x)\,P(x).$$

Slide 19: Second Moment and Variance
Second moment:
$$\mathcal{E}[x^2] = \sum_{x \in X} x^2\,P(x).$$
Variance:
$$\mathrm{Var}[x] = \sigma^2 = \mathcal{E}[(x - \mu)^2] = \mathcal{E}[x^2] - \mu^2,$$
where σ is the standard deviation of x.

Slide 20: Variance and Standard Deviation
The variance can be viewed as the moment of inertia of the probability mass function; it is never negative. The standard deviation tells us how far values of x are likely to depart from the mean.
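The definitions in slides 16-20 can be computed directly from a pmf. A minimal sketch with a hypothetical four-valued random variable:

```python
import numpy as np

# Hypothetical discrete random variable: values v_i with probabilities p_i.
v = np.array([1.0, 2.0, 3.0, 4.0])
p = np.array([0.1, 0.2, 0.3, 0.4])
assert np.isclose(p.sum(), 1.0) and (p >= 0).all()  # the two pmf conditions

mean = (v * p).sum()              # E[x] = sum_x x P(x)
second_moment = (v**2 * p).sum()  # E[x^2]
var = second_moment - mean**2     # Var[x] = E[x^2] - mu^2
std = np.sqrt(var)                # standard deviation sigma

print(mean, second_moment, var, std)  # 3.0 10.0 1.0 1.0
```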

Slide 21: Pairs of Discrete Random Variables
Joint probability: $p_{ij} = \Pr[x = v_i, y = w_j]$.
Joint probability mass function: $P(x, y)$, with $P(v_i, w_j) = p_{ij}$.
Marginal distributions:
$$P_x(x) = \sum_{y \in Y} P(x, y), \qquad P_y(y) = \sum_{x \in X} P(x, y).$$

Slide 22: Statistical Independence
Variables x and y are said to be statistically independent if and only if
$$P(x, y) = P_x(x)\,P_y(y).$$
Knowing the value of x then gives no knowledge about the possible values of y.
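A sketch of the marginals and the independence test on a hypothetical 2x2 joint pmf (the numbers are chosen so that x and y happen to be independent):

```python
import numpy as np

# Hypothetical joint pmf: P[i, j] = Pr[x = v_i, y = w_j].
P = np.array([[0.08, 0.12],
              [0.32, 0.48]])

Px = P.sum(axis=1)  # marginal of x: sum over y
Py = P.sum(axis=0)  # marginal of y: sum over x

# x and y are independent iff P(x, y) = Px(x) * Py(y) everywhere.
print(np.allclose(P, np.outer(Px, Py)))  # True for this particular pmf
```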

Slide 23: Expected Values of Functions of Two Variables
The expected value of a function f(x, y) of two random variables x and y is defined by
$$\mathcal{E}[f(x, y)] = \sum_{x \in X} \sum_{y \in Y} f(x, y)\,P(x, y).$$

Slide 24: Means and Variances
$$\mu_x = \mathcal{E}[x], \qquad \mu_y = \mathcal{E}[y], \qquad \sigma_x^2 = \mathcal{E}[(x - \mu_x)^2], \qquad \sigma_y^2 = \mathcal{E}[(y - \mu_y)^2].$$

Slide 25: Covariance
$$\sigma_{xy} = \mathcal{E}[(x - \mu_x)(y - \mu_y)].$$
Using vector notation, the notions of mean and covariance become
$$\boldsymbol{\mu} = \mathcal{E}[\mathbf{x}], \qquad \boldsymbol{\Sigma} = \mathcal{E}[(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^t].$$

Slide 26: Uncorrelated
The covariance is one measure of the degree of statistical dependence between x and y. If x and y are statistically independent, then $\sigma_{xy} = 0$, and the variables x and y are said to be uncorrelated. (The converse does not hold: uncorrelated variables need not be independent.)
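Covariance is the expected value of the particular function f(x, y) = (x − μ_x)(y − μ_y), so it can be computed with the double sum from slide 23. A sketch on a hypothetical joint pmf whose variables are dependent:

```python
import numpy as np

# Hypothetical joint pmf over values vx (rows) and vy (columns).
vx = np.array([0.0, 1.0])
vy = np.array([0.0, 1.0])
P = np.array([[0.3, 0.2],
              [0.1, 0.4]])

mu_x = (vx * P.sum(axis=1)).sum()  # E[x] from the marginal of x
mu_y = (vy * P.sum(axis=0)).sum()  # E[y] from the marginal of y

# sigma_xy = E[(x - mu_x)(y - mu_y)] = sum_ij (vx_i - mu_x)(vy_j - mu_y) P_ij
sigma_xy = (np.outer(vx - mu_x, vy - mu_y) * P).sum()

print(mu_x, mu_y, sigma_xy)  # 0.5 0.6 0.1 -> nonzero, so x and y are correlated
```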

Slide 27: Conditional Probability
The conditional probability of x given y:
$$\Pr[x = v_i \mid y = w_j] = \frac{\Pr[x = v_i, y = w_j]}{\Pr[y = w_j]}.$$
In terms of mass functions:
$$P(x \mid y) = \frac{P(x, y)}{P_y(y)}.$$

Slide 28: The Law of Total Probability
If an event A can occur in m different ways A_1, A_2, …, A_m, and if these m subevents are mutually exclusive, then the probability of A occurring is the sum of the probabilities of the subevents A_i:
$$\Pr[A] = \sum_{i=1}^{m} \Pr[A_i].$$

Slide 29: Bayes Rule
$$P(x \mid y) = \frac{P(y \mid x)\,P(x)}{P(y)}, \qquad P(y) = \sum_{x \in X} P(y \mid x)\,P(x).$$
Likelihood: P(y|x). Prior probability: P(x). Posterior distribution: P(x|y). Here x is the cause and y the effect.
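A minimal sketch of Bayes rule on a made-up two-class problem (a rare disease and an imperfect test); the evidence term P(y) is computed with the law of total probability from slide 28. All numbers are illustrative assumptions.

```python
import numpy as np

prior = np.array([0.01, 0.99])       # P(x): P(disease), P(healthy) -- assumed
likelihood = np.array([0.95, 0.05])  # P(y = positive | x) for each cause -- assumed

evidence = (likelihood * prior).sum()      # P(y) by the law of total probability
posterior = likelihood * prior / evidence  # P(x | y = positive), Bayes rule

print(posterior)  # [0.161 0.839]: even after a positive test, P(disease) is ~0.16
```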

Slide 30: Normal Distributions
The univariate normal (Gaussian) density with mean μ and variance σ² is
$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right).$$
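A direct transcription of the density above into NumPy, assuming the standard univariate form:

```python
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Univariate normal density with mean mu and standard deviation sigma."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

print(normal_pdf(0.0))  # 0.3989..., the peak of the standard normal density
```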