
1 Tutorial 9: Further Topics on Random Variables 2
Weiwen LIU March 27, 2017

2 Covariance and Correlation
Covariance and correlation describe the degree to which two random variables (or sets of random variables) tend to deviate from their expected values in similar ways: $\mathrm{cov}(X,Y) = \mathrm{E}[(X-\mathrm{E}[X])(Y-\mathrm{E}[Y])] = \mathrm{E}[XY] - \mathrm{E}[X]\,\mathrm{E}[Y]$, and $\rho(X,Y) = \mathrm{cov}(X,Y)/\sqrt{\mathrm{var}(X)\,\mathrm{var}(Y)}$. Independent random variables are uncorrelated, but NOT vice versa: uncorrelated random variables may still be dependent.
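As a quick illustration of the last point, here is a minimal numpy sketch (the choice of $X$ standard normal and $Y = X^2$ is just one convenient example): $Y$ is a deterministic function of $X$, so the two are clearly dependent, yet $\mathrm{cov}(X,Y) = \mathrm{E}[X^3] - \mathrm{E}[X]\,\mathrm{E}[X^2] = 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x ** 2                       # Y is a deterministic function of X: dependent

# cov(X, Y) = E[X^3] - E[X] E[X^2] = 0 for a standard normal X
print(np.cov(x, y)[0, 1])        # approximately 0
print(np.corrcoef(x, y)[0, 1])   # approximately 0
```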

3 Example 1 Let X and Y be continuous random variables with joint pdf
𝑓 𝑋,π‘Œ π‘₯,𝑦 = 3π‘₯, 0≀𝑦≀π‘₯≀1, 0, π‘œπ‘‘β„Žπ‘’π‘Ÿπ‘€π‘–π‘ π‘’. Compute cov 𝑋,π‘Œ ; Are 𝑋 and π‘Œ independent?

4 Example 1 The marginal pdfs and expectations of X and Y are
$f_X(x) = \int_0^x 3x \, dy = 3x^2$, $0 \le x \le 1$,
$\mathrm{E}[X] = \int_0^1 x \cdot 3x^2 \, dx = \left[\tfrac{3x^4}{4}\right]_0^1 = \tfrac{3}{4}$,
$f_Y(y) = \int_y^1 3x \, dx = \left[\tfrac{3x^2}{2}\right]_y^1 = \tfrac{3}{2}(1 - y^2)$, $0 \le y \le 1$,
$\mathrm{E}[Y] = \int_0^1 y \cdot \tfrac{3}{2}(1 - y^2) \, dy = \left[\tfrac{3y^2}{4} - \tfrac{3y^4}{8}\right]_0^1 = \tfrac{3}{8}$,
$\mathrm{E}[XY] = \int_0^1 \int_0^x xy \cdot 3x \, dy \, dx = \int_0^1 \tfrac{3x^4}{2} \, dx = \tfrac{3}{10}$.

5 Example 1 Then the covariance is
$\mathrm{cov}(X,Y) = \mathrm{E}[XY] - \mathrm{E}[X]\,\mathrm{E}[Y] = \tfrac{3}{10} - \tfrac{3}{4} \times \tfrac{3}{8} = \tfrac{3}{160}$.
$X$ and $Y$ are not independent, since it is not true that $f_{X,Y}(x,y) = f_X(x) f_Y(y)$; for instance, $f_{X,Y}(x,y) = 0$ for $y > x$, while both marginals are positive there.
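As a sanity check, the covariance $\tfrac{3}{160} = 0.01875$ can be verified by Monte Carlo (a sketch; the sampling scheme uses the marginal $F_X(x) = x^3$ computed above and the fact that $Y$ given $X = x$ is uniform on $[0, x]$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.random(n) ** (1 / 3)   # inverse-CDF sampling from F_X(x) = x^3
y = rng.random(n) * x          # given X = x, Y is uniform on [0, x]

print(np.cov(x, y)[0, 1])      # approximately 3/160 = 0.01875
```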

6 Variance of Summations
Let $X_1, X_2, \cdots, X_n$ be random variables with finite variance. Then we have
$\mathrm{var}\left(\sum_i X_i\right) = \sum_i \mathrm{var}(X_i) + \sum_{i \ne j} \mathrm{cov}(X_i, X_j)$.

7 Example 2 If the variance of the verbal SAT score is 64, the variance of the quantitative SAT score is 81, and the correlation between these two tests is 0.50, what is the variance of the total SAT score (verbal + quantitative)?

8 Example 2 Denote by $X_1$, $X_2$ the verbal and quantitative scores respectively. Then
$\mathrm{var}(X_1 + X_2) = \mathrm{var}(X_1) + \mathrm{var}(X_2) + 2\,\mathrm{cov}(X_1, X_2) = 64 + 81 + 2 \times 0.5 \times \sqrt{64} \times \sqrt{81} = 217$.
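A quick simulation supports the arithmetic (a sketch that assumes jointly normal scores purely for convenience; the variance formula itself holds for any joint distribution with these moments):

```python
import numpy as np

rng = np.random.default_rng(0)
# cov(X1, X2) = 0.5 * sqrt(64) * sqrt(81) = 36
cov = np.array([[64.0, 36.0],
                [36.0, 81.0]])
scores = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)

print(scores.sum(axis=1).var())  # approximately 64 + 81 + 72 = 217
```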

9 Conditional Expectation Revisited
The conditional expectation $\mathrm{E}[X|Y]$ of a random variable $X$ given another random variable $Y$ is itself a random variable: it is a function of $Y$, and its distribution is determined by the distribution of $Y$.

10 Expectation of CE The conditional expectation $\mathrm{E}[X|Y]$ is a random variable. It has expectation (over $Y$)
$\mathrm{E}[\mathrm{E}[X|Y]] = \sum_y \mathrm{E}[X|Y=y]\, p_Y(y)$,
or, for continuous $Y$,
$\mathrm{E}[\mathrm{E}[X|Y]] = \int_{\mathbb{R}} \mathrm{E}[X|Y=y]\, f_Y(y)\, dy$.

11 Expectation of CE - Properties
If $X$ has finite expectation, the total expectation theorem (the law of iterated expectations) straightforwardly gives $\mathrm{E}[\mathrm{E}[X|Y]] = \mathrm{E}[X]$. For any function $g$, we have $\mathrm{E}[X g(Y) \mid Y] = g(Y)\,\mathrm{E}[X|Y]$, since given $Y$, the factor $g(Y)$ is a known constant.
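For discrete $Y$, the first property can be spelled out in one chain of equalities (a standard derivation, written here in full):

```latex
\mathrm{E}\big[\mathrm{E}[X \mid Y]\big]
  = \sum_y \mathrm{E}[X \mid Y=y]\, p_Y(y)
  = \sum_y \sum_x x\, p_{X \mid Y}(x \mid y)\, p_Y(y)
  = \sum_x x \sum_y p_{X,Y}(x,y)
  = \sum_x x\, p_X(x)
  = \mathrm{E}[X].
```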

12 Example 3 A class has $n$ students with scores $x_1, \cdots, x_n$. Let $m = \tfrac{1}{n} \sum_i x_i$. Divide the students into $k$ disjoint subsets $A_1, \cdots, A_k$. Let $n_s = |A_s|$, and let $m_s = \tfrac{1}{n_s} \sum_{i \in A_s} x_i$ be the average score of the $s$-th section. We have $\sum_s \tfrac{n_s}{n} m_s = m$.

13 Example 3 The result $\sum_s \tfrac{n_s}{n} m_s = m$ can be viewed from a CE perspective. Let $X$ be the score of a randomly chosen student and $Y$ the section of that student. Then
$\sum_s \tfrac{n_s}{n} m_s = \sum_s \mathrm{E}[X|Y=s]\, \mathrm{P}(Y=s) = \mathrm{E}[\mathrm{E}[X|Y]] = \mathrm{E}[X] = m$.
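A small numpy sketch makes this concrete (the scores and section assignments below are hypothetical, generated at random):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
scores = rng.integers(0, 101, size=120).astype(float)  # hypothetical scores x_1..x_n
sections = rng.integers(0, k, size=120)                # each student's section

n = len(scores)
weighted = sum((np.sum(sections == s) / n) * scores[sections == s].mean()
               for s in range(k))
print(weighted, scores.mean())  # sum_s (n_s / n) * m_s equals m
```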

14 Example 4 Let $X$ be the sales of the next year, and let $Y$ be the sales of the first quarter of the next year. Suppose we have a forecast system giving the joint distribution of $X$ and $Y$. We view $Z = \mathrm{E}[X|Y] - \mathrm{E}[X]$ as the forecast revision once $Y$ is observed. Then
$\mathrm{E}[Z] = \mathrm{E}[\mathrm{E}[X|Y]] - \mathrm{E}[X] = 0$.
Intuitively, a positive $Z$ means the original forecast $\mathrm{E}[X]$ underestimated the updated forecast $\mathrm{E}[X|Y]$; on average, the revision is zero.

15 CE as Estimator If we can observe $Y$, which provides information about $X$, it is natural to estimate $X$ using $Y$, as $\hat{X} = \mathrm{E}[X|Y]$. The estimation error $\tilde{X} = \hat{X} - X$ is a random variable satisfying
$\mathrm{E}[\tilde{X}|Y] = \mathrm{E}[\hat{X}|Y] - \mathrm{E}[X|Y] = 0$,
and hence $\mathrm{E}[\tilde{X}] = 0$.

16 CE as Estimator An important property is that the estimator $\hat{X}$ is uncorrelated with the estimation error $\tilde{X}$. In fact,
$\mathrm{E}[\hat{X}\tilde{X}] = \mathrm{E}[\mathrm{E}[\hat{X}\tilde{X}|Y]] = \mathrm{E}[\hat{X}\,\mathrm{E}[\tilde{X}|Y]] = 0$.
Hence $\mathrm{cov}(\hat{X}, \tilde{X}) = \mathrm{E}[\hat{X}\tilde{X}] - \mathrm{E}[\hat{X}]\,\mathrm{E}[\tilde{X}] = 0$. As a result, since $X = \hat{X} - \tilde{X}$,
$\mathrm{var}(X) = \mathrm{var}(\hat{X}) + \mathrm{var}(\tilde{X})$.
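The orthogonality and the variance decomposition can both be checked numerically (a sketch with a discrete $Y$, so that the estimator $\mathrm{E}[X|Y]$ can be approximated by within-group sample means; the joint distribution of $X$ and $Y$ here is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
y = rng.integers(0, 3, size=n)       # a discrete Y taking values 0, 1, 2
x = y + rng.standard_normal(n)       # X depends on Y plus independent noise

# Sample version of X_hat = E[X | Y]: the group mean for each value of Y
group_means = np.array([x[y == k].mean() for k in range(3)])
x_hat = group_means[y]
x_err = x_hat - x                    # estimation error X_tilde

print(np.cov(x_hat, x_err)[0, 1])            # approximately 0
print(x.var(), x_hat.var() + x_err.var())    # the two sides agree
```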

17 Conditional Variance The conditional variance is defined as
var 𝑋 π‘Œ =E 𝑋 2 |π‘Œ =E (π‘‹βˆ’E[𝑋|π‘Œ]) 2 |π‘Œ . As usual, var 𝑋|π‘Œ=𝑦 =E 𝑋 2 |π‘Œ=𝑦 . Also, var 𝑋 =E 𝑋 2 =E E 𝑋 2 |π‘Œ =E[var(𝑋|π‘Œ)].

18 Law of Total Variance
$\mathrm{var}(X) = \mathrm{E}[\mathrm{var}(X|Y)] + \mathrm{var}(\mathrm{E}[X|Y])$.
This follows from the previous slides: $\mathrm{var}(X) = \mathrm{var}(\hat{X}) + \mathrm{var}(\tilde{X})$, with $\hat{X} = \mathrm{E}[X|Y]$ and $\mathrm{var}(\tilde{X}) = \mathrm{E}[\mathrm{var}(X|Y)]$. It is especially useful for calculating the variance of a random variable; see the following examples.

19 Example 5 Consider tossing a coin $n$ times, where the probability of heads is a uniform random variable $Y$ on $[0,1]$. The number of heads $X$ satisfies $\mathrm{E}[X|Y] = nY$ and
$\mathrm{var}(\mathrm{E}[X|Y]) = \mathrm{var}(nY) = n^2\,\mathrm{var}(Y) = \tfrac{n^2}{12}$.

20 Example 5 Meanwhile, we have $\mathrm{var}(X|Y) = nY(1-Y)$, and
$\mathrm{E}[\mathrm{var}(X|Y)] = n\,\mathrm{E}[Y - Y^2] = n\left(\tfrac{1}{2} - \tfrac{1}{3}\right) = \tfrac{n}{6}$.
Then, using the law of total variance,
$\mathrm{var}(X) = \mathrm{E}[\mathrm{var}(X|Y)] + \mathrm{var}(\mathrm{E}[X|Y]) = \tfrac{n}{6} + \tfrac{n^2}{12}$.
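A Monte Carlo check of Example 5 (a sketch with the arbitrary choice $n = 12$, for which $\tfrac{n}{6} + \tfrac{n^2}{12} = 2 + 12 = 14$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 12, 1_000_000
y = rng.random(trials)        # uniform bias Y on [0, 1]
x = rng.binomial(n, y)        # number of heads in n tosses given Y

print(x.var())                # empirical var(X)
print(n / 6 + n**2 / 12)      # n/6 + n^2/12 = 14
```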

21 Transforms The transform of a random variable provides an alternative representation of its probability law (PMF or PDF). It is not particularly intuitive, but it is often convenient for certain types of mathematical manipulations. The transform (also referred to as the associated moment generating function) of a random variable $X$ is defined as
$M_X(s) = \mathrm{E}[e^{sX}]$,
which is a function of a scalar parameter $s$.
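As a concrete illustration (the standard normal is chosen here only because its transform has the well-known closed form $M_X(s) = e^{s^2/2}$), a Monte Carlo estimate of $\mathrm{E}[e^{sX}]$ matches that formula:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)

for s in (-0.5, 0.5, 1.0):
    estimate = np.mean(np.exp(s * x))   # Monte Carlo estimate of E[e^{sX}]
    print(estimate, np.exp(s**2 / 2))   # closed form M_X(s) = e^{s^2/2}
```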

22 Inversion Property Suppose that $M_X(s)$ is finite for all $s$ in an interval of the form $[-a, a]$, where $a$ is a positive number. Then $M_X$ uniquely determines the CDF of the random variable $X$. In particular, if $M_X(s) = M_Y(s) < \infty$ for all $s \in [-a, a]$, where $a$ is a positive number, then the random variables $X$ and $Y$ have the same CDF.

23 Sums of Independent Variables
Generally, if $X_1, \cdots, X_n$ is a collection of independent random variables and $Z = X_1 + \cdots + X_n$, then
$M_Z(s) = M_{X_1}(s) \cdots M_{X_n}(s)$.
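A numerical check of the product rule (a sketch with two independent exponential random variables, an arbitrary choice; $s$ must be small enough that both transforms are finite):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.exponential(scale=1.0, size=n)   # rate 1: M_X1(s) finite for s < 1
x2 = rng.exponential(scale=2.0, size=n)   # rate 0.5: M_X2(s) finite for s < 0.5
z = x1 + x2                               # X1 and X2 are independent

s = 0.2
print(np.mean(np.exp(s * z)))                              # M_Z(s)
print(np.mean(np.exp(s * x1)) * np.mean(np.exp(s * x2)))   # M_X1(s) * M_X2(s)
```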

24 Sums of A Random Number of Independent Random Variables
Let π‘Œ= 𝑋 1 +β‹―+ 𝑋 𝑁 , where 𝑁 is a random variable that takes nonnegative integer values. Then, E π‘Œ =E E π‘Œ 𝑁 =E 𝑁 E 𝑋 , varE π‘Œ =𝐄 𝑁 var 𝑋 + E 𝑋 2 var 𝑁 , 𝑀 π‘Œ 𝑠 = 𝑛=0 ∞ 𝑀 𝑋 𝑠 𝑛 𝑝 𝑁 𝑛 .

25 Example 6 Let $N$ be geometrically distributed with parameter $p$, and let each random variable $X_i$ be geometrically distributed with parameter $q$. We assume that all of these random variables are independent. Let $Y = X_1 + \cdots + X_N$. We have
$M_N(s) = \dfrac{p e^s}{1 - (1-p)e^s}$, $M_X(s) = \dfrac{q e^s}{1 - (1-q)e^s}$.
What is the distribution of $Y$?

26 Example 6 To determine $M_Y(s)$, we start with the formula for $M_N(s)$ and replace each occurrence of $e^s$ with $M_X(s)$. This yields
$M_Y(s) = \dfrac{p M_X(s)}{1 - (1-p)M_X(s)}$
and, after some algebra,
$M_Y(s) = \dfrac{pq e^s}{1 - (1-pq)e^s}$.
We conclude that $Y$ is geometrically distributed, with parameter $pq$.
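A Monte Carlo check of this conclusion (a sketch with the arbitrary choices $p = 0.5$ and $q = 0.4$, so $Y$ should be geometric with parameter $pq = 0.2$ and mean $1/(pq) = 5$):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, trials = 0.5, 0.4, 1_000_000
n = rng.geometric(p, size=trials)              # N ~ geometric(p), values in {1, 2, ...}
terms = rng.geometric(q, size=n.sum())         # all X_i ~ geometric(q), pooled
owner = np.repeat(np.arange(trials), n)        # which trial each X_i belongs to
y = np.bincount(owner, weights=terms, minlength=trials)   # Y = X_1 + ... + X_N

print(y.mean(), 1 / (p * q))                   # E[Y] = 1/(pq) = 5
g = rng.geometric(p * q, size=trials)          # direct geometric(pq) sample
print(np.mean(y <= 3), np.mean(g <= 3))        # CDF values approximately agree
```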

