ECE 417 Lecture 4: Multivariate Gaussians

1 ECE 417 Lecture 4: Multivariate Gaussians
Mark Hasegawa-Johnson 9/7/2017

2 Content
Vector of i.i.d. Gaussians
Vector of Gaussians that are independent, but not identically distributed
Some facts about linear algebra
The Mahalanobis form of the multivariate Gaussian
The Mahalanobis form for Gaussians that are not independent
More facts about linear algebra
More facts about ellipses

3 Vector of I.I.D. Gaussian Variables
Suppose we have a frame containing N samples from a Gaussian white noise process, $x_1, \ldots, x_N$. Let's stack them up to make a vector:
$$\vec{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_N \end{bmatrix}$$
This whole frame is random. In fact, we could say that $\vec{x}$ is a sample value of a Gaussian random vector called $\vec{X}$, whose elements are $X_1, \ldots, X_N$:
$$\vec{X} = \begin{bmatrix} X_1 \\ \vdots \\ X_N \end{bmatrix}$$

4 Vector of I.I.D. Gaussian Variables
Suppose that the N samples are i.i.d.: each one has the same mean, $\mu$, and the same variance, $\sigma^2$. Then the pdf of this random vector is
$$f_{\vec{X}}(\vec{x}) = \mathcal{N}(\vec{x}; \vec{\mu}, \sigma^2 I) = \prod_{n=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{1}{2}\left(\frac{x_n - \mu}{\sigma}\right)^2}$$
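As a quick numerical check, here is a minimal NumPy/SciPy sketch (with arbitrary illustrative values for N, $\mu$, and $\sigma$) showing that the product of N univariate Gaussian pdfs equals the N-dimensional Gaussian pdf with covariance $\sigma^2 I$:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Illustrative values; any mean/variance would do.
N, mu, sigma = 5, 2.0, 3.0
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, size=N)          # one "frame" of N i.i.d. samples

# Product of the N univariate pdfs...
p_product = np.prod(norm.pdf(x, loc=mu, scale=sigma))

# ...equals the N-dimensional Gaussian pdf with covariance sigma^2 * I.
p_vector = multivariate_normal.pdf(x, mean=mu * np.ones(N),
                                   cov=sigma**2 * np.eye(N))

print(p_product, p_vector)                 # the two numbers agree
```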

5 Vector of I.I.D. Gaussian Variables
For example, here is an example from Wikipedia with a mean of 50 and a standard deviation of about 12. (Image attribution: Piotrg)

6 Independent Gaussians that aren't identically distributed
Suppose that the samples $X_1, \ldots, X_D$ are independent Gaussians that aren't identically distributed, i.e., $X_d$ has mean $\mu_d$ and variance $\sigma_d^2$. The pdf of $X_d$ is
$$f_{X_d}(x_d) = \mathcal{N}(x_d; \mu_d, \sigma_d^2) = \frac{1}{\sqrt{2\pi\sigma_d^2}} \, e^{-\frac{1}{2}\left(\frac{x_d - \mu_d}{\sigma_d}\right)^2}$$
The pdf of the whole random vector is
$$f_{\vec{X}}(\vec{x}) = \mathcal{N}(\vec{x}; \vec{\mu}, \Sigma) = \prod_{d=1}^{D} \frac{1}{\sqrt{2\pi\sigma_d^2}} \, e^{-\frac{1}{2}\left(\frac{x_d - \mu_d}{\sigma_d}\right)^2}$$

7 Independent Gaussians that aren't identically distributed
Another useful form is:
$$\prod_{d=1}^{D} \frac{1}{\sqrt{2\pi\sigma_d^2}} \, e^{-\frac{1}{2}\left(\frac{x_d - \mu_d}{\sigma_d}\right)^2} = \frac{1}{(2\pi)^{D/2} \prod_{d=1}^{D} \sigma_d} \, e^{-\frac{1}{2} \sum_{d=1}^{D} \left(\frac{x_d - \mu_d}{\sigma_d}\right)^2}$$
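Here is a minimal sketch verifying this identity numerically for arbitrary illustrative values of $\vec{\mu}$, the $\sigma_d$, and $\vec{x}$ (the identity holds for any choice):

```python
import numpy as np

# Illustrative values; the identity holds for any choice.
mu    = np.array([1.0, -1.0, 0.5])
sigma = np.array([1.0,  2.0, 0.7])
x     = np.array([0.3, -2.0, 1.1])
D     = len(x)

# Left-hand side: product of D univariate Gaussian pdfs.
lhs = np.prod(np.exp(-0.5 * ((x - mu) / sigma)**2) /
              np.sqrt(2 * np.pi * sigma**2))

# Right-hand side: single normalizer times one summed exponent.
rhs = (np.exp(-0.5 * np.sum(((x - mu) / sigma)**2)) /
       ((2 * np.pi)**(D / 2) * np.prod(sigma)))

print(np.isclose(lhs, rhs))   # True
```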

8 Example
Suppose that $\mu_1 = 1$, $\mu_2 = -1$, $\sigma_1^2 = 1$, $\sigma_2^2 = 4$. Then
$$f_{\vec{X}}(\vec{x}) = \prod_{d=1}^{2} \frac{1}{\sqrt{2\pi\sigma_d^2}} \, e^{-\frac{1}{2}\left(\frac{x_d - \mu_d}{\sigma_d}\right)^2} = \frac{1}{4\pi} \, e^{-\frac{1}{2}\left((x_1 - 1)^2 + \left(\frac{x_2 + 1}{2}\right)^2\right)}$$
The pdf has its maximum value, $f_{\vec{X}}(\vec{x}) = \frac{1}{4\pi}$, at $\vec{x} = \vec{\mu} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$. It drops to $\frac{1}{4\pi\sqrt{e}}$ at $\vec{x} = \begin{bmatrix} \mu_1 \pm \sigma_1 \\ \mu_2 \end{bmatrix}$ and at $\vec{x} = \begin{bmatrix} \mu_1 \\ \mu_2 \pm \sigma_2 \end{bmatrix}$. It drops to $\frac{1}{4\pi e^2}$ at $\vec{x} = \begin{bmatrix} \mu_1 \pm 2\sigma_1 \\ \mu_2 \end{bmatrix}$ and at $\vec{x} = \begin{bmatrix} \mu_1 \\ \mu_2 \pm 2\sigma_2 \end{bmatrix}$.
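A minimal sketch checking these values with SciPy (the probe points are just the example's $\vec{\mu}$ shifted by one or two standard deviations along a single axis):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu  = np.array([1.0, -1.0])
cov = np.diag([1.0, 4.0])          # sigma_1^2 = 1, sigma_2^2 = 4
g   = multivariate_normal(mean=mu, cov=cov)

print(g.pdf(mu), 1 / (4 * np.pi))                             # peak: 1/(4*pi)
print(g.pdf([2.0, -1.0]), 1 / (4 * np.pi * np.sqrt(np.e)))    # mu_1 + sigma_1
print(g.pdf([1.0,  1.0]), 1 / (4 * np.pi * np.sqrt(np.e)))    # mu_2 + sigma_2
print(g.pdf([3.0, -1.0]), 1 / (4 * np.pi * np.e**2))          # mu_1 + 2*sigma_1
```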

9 Example

10 Example OK, things are going to get even more complicated, so let's remember what that means. It means that there are two Gaussian random variables, $X_1$ and $X_2$. $X_1$ is Gaussian with an average value of 1 and a variance of 1. $X_2$ is Gaussian with an average value of -1 and a variance of 4. Got it? OK. Let's keep going.

11 Facts about linear algebra #1: determinant of a diagonal matrix
Suppose that $\Sigma$ is a diagonal matrix, with variances on the diagonal:
$$\Sigma = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_D^2 \end{bmatrix}$$
Then the determinant is
$$|\Sigma| = \prod_{d=1}^{D} \sigma_d^2$$
So we can write the Gaussian pdf as
$$\frac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \, e^{-\frac{1}{2} \sum_{d=1}^{D} \left(\frac{x_d - \mu_d}{\sigma_d}\right)^2} = \frac{1}{|2\pi\Sigma|^{1/2}} \, e^{-\frac{1}{2} \sum_{d=1}^{D} \left(\frac{x_d - \mu_d}{\sigma_d}\right)^2}$$
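A minimal NumPy sketch (with arbitrary illustrative variances) confirming both facts: the determinant of a diagonal matrix is the product of its diagonal entries, and $|2\pi\Sigma|^{1/2} = (2\pi)^{D/2}|\Sigma|^{1/2}$, so the two normalizers above agree:

```python
import numpy as np

var = np.array([1.0, 4.0, 0.25])     # illustrative variances on the diagonal
Sigma = np.diag(var)
D = len(var)

# Determinant of a diagonal matrix = product of the diagonal entries.
print(np.linalg.det(Sigma), np.prod(var))

# |2*pi*Sigma| = (2*pi)^D * |Sigma|, so the two normalizers match.
print(np.sqrt(np.linalg.det(2 * np.pi * Sigma)),
      (2 * np.pi)**(D / 2) * np.sqrt(np.prod(var)))
```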

12 Facts about linear algebra #2: inner product
Suppose that π‘₯ = π‘₯ 1 ∢ π‘₯ 𝐷 and πœ‡ = πœ‡ 1 ∢ πœ‡ 𝐷 Then π‘₯ βˆ’ πœ‡ 𝑇 π‘₯ βˆ’ πœ‡ = π‘₯ 1 βˆ’ πœ‡ 1 2 +…+ π‘₯ 𝐷 βˆ’ πœ‡ 𝐷 2

13 Facts about linear algebra #3: inverse of a diagonal matrix
Suppose that $\Sigma$ is a diagonal matrix, with variances on the diagonal:
$$\Sigma = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_D^2 \end{bmatrix}$$
Then its inverse, $\Sigma^{-1}$, is
$$\Sigma^{-1} = \begin{bmatrix} \frac{1}{\sigma_1^2} & 0 & \cdots & 0 \\ 0 & \frac{1}{\sigma_2^2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\sigma_D^2} \end{bmatrix}$$

14 Facts about linear algebra #4: squared Mahalanobis distance with a diagonal covariance matrix
Suppose that all of the things on the previous slides are true. Then the squared Mahalanobis distance is
$$d_\Sigma^2(\vec{x}, \vec{\mu}) = (\vec{x} - \vec{\mu})^T \Sigma^{-1} (\vec{x} - \vec{\mu}) = \begin{bmatrix} x_1 - \mu_1, \ldots, x_D - \mu_D \end{bmatrix} \begin{bmatrix} \frac{1}{\sigma_1^2} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \frac{1}{\sigma_D^2} \end{bmatrix} \begin{bmatrix} x_1 - \mu_1 \\ \vdots \\ x_D - \mu_D \end{bmatrix} = \frac{(x_1 - \mu_1)^2}{\sigma_1^2} + \ldots + \frac{(x_D - \mu_D)^2}{\sigma_D^2}$$
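A minimal sketch (arbitrary illustrative values) showing that the matrix form of the squared Mahalanobis distance reduces to the weighted sum of squares when $\Sigma$ is diagonal:

```python
import numpy as np

mu    = np.array([1.0, -1.0, 2.0])
var   = np.array([1.0,  4.0, 0.5])   # diagonal of Sigma (illustrative)
x     = np.array([0.0,  1.0, 2.5])
Sigma = np.diag(var)

# Matrix form of the squared Mahalanobis distance...
d2_matrix = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)

# ...reduces to a weighted sum of squares when Sigma is diagonal.
d2_sum = np.sum((x - mu)**2 / var)

print(d2_matrix, d2_sum)   # same value
```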

15 Mahalanobis form of the multivariate Gaussian, independent dimensions
So we can write the multivariate Gaussian as
$$f_{\vec{X}}(\vec{x}) = \mathcal{N}(\vec{x}; \vec{\mu}, \Sigma) = \frac{1}{|2\pi\Sigma|^{1/2}} \, e^{-\frac{1}{2} (\vec{x} - \vec{\mu})^T \Sigma^{-1} (\vec{x} - \vec{\mu})}$$
or, equivalently,
$$f_{\vec{X}}(\vec{x}) = \mathcal{N}(\vec{x}; \vec{\mu}, \Sigma) = \frac{1}{|2\pi\Sigma|^{1/2}} \, e^{-\frac{1}{2} d_\Sigma^2(\vec{x}, \vec{\mu})}$$
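Here is a minimal sketch computing this Mahalanobis form by hand and comparing it against SciPy's built-in multivariate normal pdf (the mean, covariance, and probe point are illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu    = np.array([1.0, -1.0])
Sigma = np.diag([1.0, 4.0])
x     = np.array([0.5, 0.0])

d2 = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)   # squared Mahalanobis distance
pdf_manual = np.exp(-0.5 * d2) / np.sqrt(np.linalg.det(2 * np.pi * Sigma))

print(pdf_manual, multivariate_normal.pdf(x, mean=mu, cov=Sigma))   # match
```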

16 Facts about ellipses
The formula
$$1 = (\vec{x} - \vec{\mu})^T \Sigma^{-1} (\vec{x} - \vec{\mu})$$
or equivalently
$$1 = \frac{(x_1 - \mu_1)^2}{\sigma_1^2} + \ldots + \frac{(x_D - \mu_D)^2}{\sigma_D^2}$$
is the formula for an ellipsoid (an ellipse in two dimensions, a football-shaped object in three dimensions, etc.). The ellipsoid is centered at the point $\vec{\mu}$, and it has a volume proportional to $|\Sigma|^{1/2}$. (In 2D the area of the ellipse is $\pi |\Sigma|^{1/2}$; in 3D the volume is $\frac{4}{3}\pi |\Sigma|^{1/2}$; etc.)
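A minimal Monte Carlo sketch checking the 2D area fact: for an illustrative diagonal $\Sigma$, the fraction of uniform samples falling inside the unit-Mahalanobis ellipse gives an area close to $\pi |\Sigma|^{1/2}$:

```python
import numpy as np

mu    = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.0], [0.0, 4.0]])   # illustrative covariance
Sinv  = np.linalg.inv(Sigma)

# Monte Carlo: sample a box that contains the unit-Mahalanobis ellipse
# (the ellipse extends only +/- sigma_d along each axis, here at most 2).
rng  = np.random.default_rng(0)
half = 3.0                                   # box half-width
pts  = mu + rng.uniform(-half, half, size=(1_000_000, 2))
inside = np.einsum('ij,jk,ik->i', pts - mu, Sinv, pts - mu) <= 1.0

area_mc     = inside.mean() * (2 * half)**2
area_theory = np.pi * np.sqrt(np.linalg.det(Sigma))
print(area_mc, area_theory)                  # both close to 2*pi
```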

17 Facts about ellipses
The equation
$$c = (\vec{x} - \vec{\mu})^T \Sigma^{-1} (\vec{x} - \vec{\mu})$$
is equivalent to
$$f_{\vec{X}}(\vec{x}) = \frac{1}{|2\pi\Sigma|^{1/2}} \, e^{-\frac{1}{2} c}$$
Therefore the contours of a Gaussian pdf (the curves of constant $f_{\vec{X}}(\vec{x})$) are ellipses. If $\Sigma$ is diagonal, the main axes of each ellipse are parallel to the $x_1$, $x_2$, etc. axes. If $\Sigma$ is NOT diagonal, the main axes of the ellipse are tilted.
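A minimal plotting sketch of this fact, using the running example's mean with an illustrative diagonal and non-diagonal covariance side by side (axis-aligned vs. tilted contour ellipses):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

mu = np.array([1.0, -1.0])
x1, x2 = np.meshgrid(np.linspace(-3, 5, 200), np.linspace(-7, 5, 200))
grid = np.dstack((x1, x2))                       # grid of (x1, x2) points

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, cov, title in [(axes[0], [[1, 0], [0, 4]], 'diagonal: axis-aligned'),
                       (axes[1], [[1, 1], [1, 4]], 'non-diagonal: tilted')]:
    ax.contour(x1, x2, multivariate_normal(mu, cov).pdf(grid))
    ax.set_title(title)
    ax.set_xlabel('$x_1$')
    ax.set_ylabel('$x_2$')
plt.show()
```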

18 Mahalanobis form of the multivariate Gaussian, dependent dimensions
If the dimensions are dependent, and jointly Gaussian, then we can still write the multivariate Gaussian as
$$f_{\vec{X}}(\vec{x}) = \mathcal{N}(\vec{x}; \vec{\mu}, \Sigma) = \frac{1}{|2\pi\Sigma|^{1/2}} \, e^{-\frac{1}{2} (\vec{x} - \vec{\mu})^T \Sigma^{-1} (\vec{x} - \vec{\mu})}$$

19 Example
Suppose that $x_1$ and $x_2$ are linearly correlated Gaussians with means 1 and -1, respectively, with variances 1 and 4, and with covariance 1:
$$\vec{\mu} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$
Remember the definitions of variance and covariance:
$$\sigma_1^2 = E\left[(x_1 - \mu_1)^2\right] = 1$$
$$\sigma_2^2 = E\left[(x_2 - \mu_2)^2\right] = 4$$
$$\sigma_{12} = \sigma_{21} = E\left[(x_1 - \mu_1)(x_2 - \mu_2)\right] = 1$$
So the covariance matrix is
$$\Sigma = \begin{bmatrix} 1 & 1 \\ 1 & 4 \end{bmatrix}$$
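A minimal sketch that builds this $\Sigma$ and sanity-checks it by drawing many correlated samples and re-estimating the mean and covariance (the sample count is arbitrary):

```python
import numpy as np

mu    = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 1.0],    # sigma_1^2 = 1, sigma_12 = 1
                  [1.0, 4.0]])   # sigma_21 = 1,  sigma_2^2 = 4

# Draw many correlated Gaussian samples and re-estimate mean and covariance.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
print(samples.mean(axis=0))      # approximately [ 1, -1]
print(np.cov(samples.T))         # approximately [[1, 1], [1, 4]]
```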

20 Determinant and inverse of a 2x2 matrix
You should know the determinant and inverse of a 2x2 matrix. If
$$\Sigma = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$
then
$$|\Sigma| = ad - bc$$
and
$$\Sigma^{-1} = \frac{1}{|\Sigma|} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
You should be able to verify the inverse for yourself by multiplying $\Sigma\Sigma^{-1}$ and discovering that the result is the identity matrix.
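A minimal sketch of exactly that verification, using the running example's covariance matrix:

```python
import numpy as np

a, b, c, d = 1.0, 1.0, 1.0, 4.0              # the Sigma from the running example
Sigma = np.array([[a, b], [c, d]])

det = a * d - b * c                           # |Sigma| = ad - bc = 3
Sigma_inv = np.array([[d, -b], [-c, a]]) / det

print(np.isclose(det, np.linalg.det(Sigma)))        # True
print(np.allclose(Sigma @ Sigma_inv, np.eye(2)))    # True: Sigma * Sigma^{-1} = I
```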

21 Example
For this example, $|\Sigma| = 4 - 1 = 3$ and $\Sigma^{-1} = \frac{1}{3} \begin{bmatrix} 4 & -1 \\ -1 & 1 \end{bmatrix}$. Therefore the contour lines of this Gaussian are ellipses centered at $\vec{\mu} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$. The contour lines are the ellipses that satisfy this equation, where each different value of $c$ gives a different ellipse:
$$c = \frac{4}{3}(x_1 - 1)^2 + \frac{1}{3}(x_2 + 1)^2 - \frac{2}{3}(x_1 - 1)(x_2 + 1)$$
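A minimal sketch checking that this expanded quadratic equals $(\vec{x} - \vec{\mu})^T \Sigma^{-1} (\vec{x} - \vec{\mu})$ at a few arbitrary test points:

```python
import numpy as np

mu    = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 1.0], [1.0, 4.0]])
Sinv  = np.linalg.inv(Sigma)              # (1/3) * [[4, -1], [-1, 1]]

rng = np.random.default_rng(0)
for x in rng.normal(size=(3, 2)):         # a few arbitrary test points
    quad = (x - mu) @ Sinv @ (x - mu)
    expanded = (4/3) * (x[0] - 1)**2 + (1/3) * (x[1] + 1)**2 \
               - (2/3) * (x[0] - 1) * (x[1] + 1)
    print(np.isclose(quad, expanded))     # True for every point
```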

22 Example

