240-650 Principles of Pattern Recognition
240-572: Appendix A: Mathematical Foundations
Montri Karnjanadecha
montri@coe.psu.ac.th
http://fivedots.coe.psu.ac.th/~montri
Linear Algebra
- Notation and Preliminaries
- Inner Product
- Outer Product
- Derivatives of Matrices
- Determinant and Trace
- Matrix Inversion
- Eigenvalues and Eigenvectors
Notation and Preliminaries
A d-dimensional column vector x and its transpose x^t can be written as
x = (x_1, x_2, ..., x_d)^t and x^t = (x_1, x_2, ..., x_d)
Inner Product
The inner product of two vectors of the same dimensionality, denoted x^t y, yields a scalar:
x^t y = sum_{i=1}^{d} x_i y_i
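A minimal NumPy sketch of the inner product; the vectors are illustrative, not from the slides.

```python
import numpy as np

# Two hypothetical 3-dimensional vectors.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Inner product x^t y: sum of elementwise products, yielding a scalar.
ip = x @ y
assert ip == 1*4 + 2*5 + 3*6   # 32
```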
Euclidean Norm (Length of a Vector)
||x|| = sqrt(x^t x)
We call a vector normalized if ||x|| = 1. The angle between two vectors x and y satisfies
cos(theta) = x^t y / (||x|| ||y||)
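The norm, normalization, and angle formula can be checked numerically; the vectors below are illustrative.

```python
import numpy as np

x = np.array([3.0, 4.0])
norm = np.linalg.norm(x)        # sqrt(3^2 + 4^2) = 5
unit = x / norm                 # normalized vector: ||unit|| == 1

y = np.array([4.0, 3.0])
# cos(theta) = x^t y / (||x|| ||y||)
cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
```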
Cauchy-Schwarz Inequality
|x^t y| <= ||x|| ||y||
If x^t y = 0, the vectors are orthogonal. If |x^t y| = ||x|| ||y||, the vectors are collinear.
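A small sketch illustrating the inequality and both special cases, with hypothetical vectors.

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 2.0])
z = np.array([3.0, 0.0])

# Cauchy-Schwarz holds for any pair: |x^t y| <= ||x|| ||y||.
assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y)

# x and y are orthogonal: the inner product is zero.
assert x @ y == 0.0

# x and z are collinear: equality holds in Cauchy-Schwarz.
assert np.isclose(abs(x @ z), np.linalg.norm(x) * np.linalg.norm(z))
```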
Linear Independence
A set of vectors {x_1, x_2, x_3, ..., x_n} is linearly independent if no vector in the set can be written as a linear combination of the others. A set of d linearly independent vectors spans a d-dimensional vector space, i.e. any vector in that space can be written as a linear combination of the spanning vectors.
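One way to test linear independence numerically is via the matrix rank: stacking the vectors as columns, full column rank means no column is a combination of the others. The matrices below are illustrative.

```python
import numpy as np

# Columns of A: two linearly independent vectors in R^2 -> full rank.
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])
assert np.linalg.matrix_rank(A) == 2

# In B the second column is twice the first -> dependent, rank drops.
B = np.array([[1.0, 2.0],
              [1.0, 2.0]])
assert np.linalg.matrix_rank(B) == 1
```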
Outer Product
The outer product of two vectors yields a matrix:
x y^t = [x_i y_j], the matrix whose (i, j) entry is x_i y_j
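A quick sketch of the outer product with illustrative vectors of different lengths, showing that the result is a matrix rather than a scalar.

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0, 5.0])

M = np.outer(x, y)          # 2x3 matrix with M[i, j] = x[i] * y[j]
assert M.shape == (2, 3)
assert M[1, 2] == 2.0 * 5.0
```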
Determinant and Trace
The determinant of a matrix is a scalar that reveals properties of the matrix. If the columns are considered as vectors and these vectors are not linearly independent, the determinant vanishes. The trace is the sum of the matrix's diagonal elements:
tr(A) = sum_i a_ii
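Both quantities, and the vanishing determinant for dependent columns, can be checked directly; the matrices are illustrative.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
assert np.isclose(np.linalg.det(A), 3.0)   # 2*2 - 1*1
assert np.trace(A) == 4.0                  # 2 + 2

# Dependent columns (second is twice the first) -> determinant vanishes.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.isclose(np.linalg.det(B), 0.0)
```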
Eigenvectors and Eigenvalues
A very important class of linear equations is of the form
A x = lambda x
The solution vectors x = e_i and corresponding scalars lambda_i are called the eigenvectors and associated eigenvalues, respectively. Eigenvalues can be obtained by solving the characteristic equation:
det(A - lambda I) = 0
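A minimal sketch verifying the defining relation A e_i = lambda_i e_i for a hypothetical symmetric matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of vecs are the eigenvectors; vals holds the eigenvalues.
vals, vecs = np.linalg.eig(A)

# Each pair satisfies A e = lambda e.
for lam, e in zip(vals, vecs.T):
    assert np.allclose(A @ e, lam * e)
```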
Example
Given a matrix A (shown on the slide), find its eigenvalues and associated eigenvectors.
Characteristic equation: det(A - lambda I) = 0, a polynomial in lambda.
Example (cont'd)
Solving the characteristic equation yields the eigenvalues. Each eigenvector can be found by substituting its eigenvalue into A x = lambda x and solving for x_1 in terms of x_2 (or vice versa).
Example (cont'd)
The eigenvectors associated with the two eigenvalues follow from the substitution above.
Trace and Determinant
The trace equals the sum of the eigenvalues, and the determinant equals their product:
tr(A) = sum_i lambda_i, det(A) = prod_i lambda_i
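Both identities can be verified numerically on an illustrative matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals = np.linalg.eigvals(A)        # eigenvalues of A: 3 and 1

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
assert np.isclose(np.trace(A), vals.sum())         # 4 == 3 + 1
assert np.isclose(np.linalg.det(A), vals.prod())   # 3 == 3 * 1
```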
Probability Theory
Let x be a discrete random variable that can assume any of a finite number m of different values in the set X = {v_1, v_2, ..., v_m}. We denote by p_i the probability that x assumes the value v_i:
p_i = Pr[x = v_i], i = 1, ..., m
The p_i must satisfy two conditions:
p_i >= 0 and sum_{i=1}^{m} p_i = 1
Probability Mass Function
Sometimes it is more convenient to express the set of probabilities {p_1, p_2, ..., p_m} in terms of the probability mass function P(x), which for discrete x must satisfy:
P(x) >= 0 and sum_{x in X} P(x) = 1
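The two pmf conditions are easy to check in code; the distribution below is hypothetical.

```python
# Hypothetical pmf P(x) over X = {1, 2, 3}.
P = {1: 0.5, 2: 0.3, 3: 0.2}

assert all(p >= 0 for p in P.values())       # nonnegativity
assert abs(sum(P.values()) - 1.0) < 1e-12    # probabilities sum to 1
```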
Expected Value
The expected value, mean, or average of the random variable x is defined by
E[x] = mu = sum_{x in X} x P(x)
If f(x) is any function of x, the expected value of f is defined by
E[f(x)] = sum_{x in X} f(x) P(x)
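A short sketch computing E[x] and E[f(x)] for an illustrative distribution (values and probabilities are not from the slides).

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])    # values of x
p = np.array([0.5, 0.3, 0.2])    # P(x = v_i)
assert np.isclose(p.sum(), 1.0)

mean = (v * p).sum()             # E[x] = sum x P(x)
assert np.isclose(mean, 1.7)

# E[f(x)] for f(x) = x^2
ef = (v**2 * p).sum()
assert np.isclose(ef, 3.5)
```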
Second Moment and Variance
Second moment: E[x^2] = sum_{x in X} x^2 P(x)
Variance: Var[x] = sigma^2 = E[(x - mu)^2] = E[x^2] - mu^2
where sigma is the standard deviation of x.
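The identity Var[x] = E[x^2] - mu^2 can be verified on the same kind of illustrative distribution.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])    # values of x
p = np.array([0.5, 0.3, 0.2])    # P(x = v_i)

mean = (v * p).sum()                 # mu = 1.7
second_moment = (v**2 * p).sum()     # E[x^2] = 3.5
var = ((v - mean)**2 * p).sum()      # E[(x - mu)^2], the definition

# Identity: Var[x] = E[x^2] - mu^2
assert np.isclose(var, second_moment - mean**2)
```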
Variance and Standard Deviation
The variance can be viewed as the moment of inertia of the probability mass function, and it is never negative. The standard deviation tells us how far values of x are likely to depart from the mean.
Pairs of Discrete Random Variables
Joint probability: p_ij = Pr[x = v_i, y = w_j]
Joint probability mass function: P(x, y)
Marginal distributions: P_x(x) = sum_y P(x, y) and P_y(y) = sum_x P(x, y)
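A joint mass function can be stored as a table and marginalized by summing over rows or columns; the table below is hypothetical.

```python
import numpy as np

# Hypothetical joint pmf P(x, y): rows index x, columns index y.
P = np.array([[0.10, 0.20],
              [0.30, 0.40]])
assert np.isclose(P.sum(), 1.0)

Px = P.sum(axis=1)   # marginal of x: sum over y
Py = P.sum(axis=0)   # marginal of y: sum over x
assert np.allclose(Px, [0.30, 0.70])
assert np.allclose(Py, [0.40, 0.60])
```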
Statistical Independence
Variables x and y are said to be statistically independent if and only if
P(x, y) = P_x(x) P_y(y)
Knowing the value of x then gives no knowledge about the possible values of y.
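Under independence the joint table is exactly the outer product of the marginals; the marginals below are illustrative.

```python
import numpy as np

Px = np.array([0.3, 0.7])   # hypothetical marginal of x
Py = np.array([0.4, 0.6])   # hypothetical marginal of y

# Independent joint: P(x, y) = Px(x) Py(y) for every cell.
P = np.outer(Px, Py)
assert np.allclose(P, Px[:, None] * Py[None, :])
assert np.isclose(P.sum(), 1.0)
```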
Expected Values of Functions of Two Variables
The expected value of a function f(x, y) of two random variables x and y is defined by
E[f(x, y)] = sum_x sum_y f(x, y) P(x, y)
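A small sketch of the double sum for an illustrative joint table and f(x, y) = x * y.

```python
import numpy as np

# Hypothetical joint pmf over x in {0, 1} and y in {0, 1}.
P = np.array([[0.10, 0.20],
              [0.30, 0.40]])
vx = np.array([0.0, 1.0])
vy = np.array([0.0, 1.0])

# E[f(x, y)] = sum_x sum_y f(x, y) P(x, y) with f(x, y) = x * y.
E = sum(P[i, j] * vx[i] * vy[j]
        for i in range(2) for j in range(2))
assert abs(E - 0.40) < 1e-12   # only the (1, 1) cell contributes
```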
Means and Variances
mu_x = E[x], mu_y = E[y]
sigma_x^2 = E[(x - mu_x)^2], sigma_y^2 = E[(y - mu_y)^2]
Covariance
sigma_xy = E[(x - mu_x)(y - mu_y)]
Using vector notation, the notions of mean and covariance become
mu = E[x], Sigma = E[(x - mu)(x - mu)^t]
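The vector form Sigma = E[(x - mu)(x - mu)^t] can be estimated from samples; the data below are illustrative, and the result is checked against NumPy's covariance routine (with `bias=True` for the population form).

```python
import numpy as np

# Hypothetical samples of a 2-D random vector (one row per observation).
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0],
              [4.0, 8.0]])

mu = X.mean(axis=0)              # sample mean vector
D = X - mu                       # centered data
Sigma = (D.T @ D) / len(X)       # E[(x - mu)(x - mu)^t], population form

assert np.allclose(Sigma, np.cov(X.T, bias=True))
```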
Uncorrelated Variables
The covariance is one measure of the degree of statistical dependence between x and y. If x and y are statistically independent, then sigma_xy = 0 and E[xy] = E[x] E[y]; the variables x and y are then said to be uncorrelated.
Conditional Probability
The conditional probability of x given y is
Pr[x | y] = Pr[x, y] / Pr[y]
In terms of mass functions: P(x | y) = P(x, y) / P(y)
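Conditioning on y amounts to renormalizing each column of the joint table by the marginal P(y); the table below is hypothetical.

```python
import numpy as np

# Hypothetical joint pmf P(x, y): rows index x, columns index y.
P = np.array([[0.10, 0.20],
              [0.30, 0.40]])
Py = P.sum(axis=0)          # marginal P(y)

# P(x | y) = P(x, y) / P(y): divide each column by its marginal.
P_x_given_y = P / Py
assert np.allclose(P_x_given_y.sum(axis=0), 1.0)  # each column sums to 1
assert np.isclose(P_x_given_y[0, 0], 0.10 / 0.40)
```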
The Law of Total Probability
If an event A can occur in m different ways A_1, A_2, ..., A_m, and if these m subevents are mutually exclusive, then the probability of A occurring is the sum of the probabilities of the subevents A_i:
Pr[A] = sum_{i=1}^{m} Pr[A_i]
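A tiny numerical sketch; the subevent probabilities are illustrative.

```python
# Mutually exclusive subevents A_1..A_3 with hypothetical probabilities.
pr_sub = [0.10, 0.25, 0.15]    # Pr[A_i]

# Pr[A] is simply the sum of the subevent probabilities.
pr_A = sum(pr_sub)
assert abs(pr_A - 0.50) < 1e-12
```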
Bayes Rule
P(x | y) = P(y | x) P(x) / P(y)
Likelihood: P(y | x). Prior probability: P(x). Posterior distribution: P(x | y). Here x is the cause and y the effect.
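A minimal two-state sketch of Bayes rule; the prior and likelihood values are hypothetical, and the evidence P(y) comes from the law of total probability.

```python
# Hypothetical two-cause example: x is the cause, y the observed effect.
prior = {'x1': 0.6, 'x2': 0.4}          # P(x)
likelihood = {'x1': 0.9, 'x2': 0.2}     # P(y | x) for a fixed observed y

# Evidence P(y) via the law of total probability.
evidence = sum(prior[x] * likelihood[x] for x in prior)   # 0.54 + 0.08

# Bayes rule: P(x | y) = P(y | x) P(x) / P(y)
posterior = {x: likelihood[x] * prior[x] / evidence for x in prior}

assert abs(sum(posterior.values()) - 1.0) < 1e-12
assert abs(posterior['x1'] - 0.54 / 0.62) < 1e-12
```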
Normal Distributions
The univariate normal (Gaussian) density with mean mu and variance sigma^2 is
p(x) = (1 / sqrt(2 pi sigma^2)) exp(-(x - mu)^2 / (2 sigma^2))
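A small sketch of the univariate normal density; `normal_pdf` is a helper written here for illustration, not from the slides.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Univariate normal density with mean mu and std deviation sigma."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# Density of a standard normal at its mean: 1 / sqrt(2 pi).
assert abs(normal_pdf(0.0) - 1 / math.sqrt(2 * math.pi)) < 1e-12
```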