Section 5.5 Important Theorems in the Text:

Theorem 5.5-1. Let $X_1, X_2, \ldots, X_n$ be independent random variables with respective $N(\mu_1, \sigma_1^2)$, $N(\mu_2, \sigma_2^2)$, $\ldots$, $N(\mu_n, \sigma_n^2)$ distributions, and let $c_1, c_2, \ldots, c_n$ be constants. Then the random variable

$$Y = c_1 X_1 + c_2 X_2 + \cdots + c_n X_n = \sum_{i=1}^{n} c_i X_i$$

has a __________ distribution. (Proof of this theorem is addressed in Class Exercise #1, which also fills in the blank.)

1. (a) Let $X_1, X_2, \ldots, X_n$ be independent random variables with respective $N(\mu_1, \sigma_1^2)$, $N(\mu_2, \sigma_2^2)$, $\ldots$, $N(\mu_n, \sigma_n^2)$ distributions, and let $c_1, c_2, \ldots, c_n$ be constants. Define the random variable

$$Y = c_1 X_1 + c_2 X_2 + \cdots + c_n X_n = \sum_{i=1}^{n} c_i X_i .$$

Find the m.g.f. for $Y$. Is it possible to tell from the m.g.f. what the distribution of $Y$ is?

From Theorem 5.4-1, we have that

$$M_Y(t) = M_{X_1}(c_1 t)\, M_{X_2}(c_2 t) \cdots M_{X_n}(c_n t) = e^{\mu_1 c_1 t + \sigma_1^2 c_1^2 t^2/2}\; e^{\mu_2 c_2 t + \sigma_2^2 c_2^2 t^2/2} \cdots e^{\mu_n c_n t + \sigma_n^2 c_n^2 t^2/2} = e^{(c_1\mu_1 + c_2\mu_2 + \cdots + c_n\mu_n)\,t + (c_1^2\sigma_1^2 + c_2^2\sigma_2^2 + \cdots + c_n^2\sigma_n^2)\,t^2/2} .$$

From this m.g.f., we recognize that $Y$ must have a $N(c_1\mu_1 + c_2\mu_2 + \cdots + c_n\mu_n,\; c_1^2\sigma_1^2 + c_2^2\sigma_2^2 + \cdots + c_n^2\sigma_n^2)$ distribution.
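This result is easy to sanity-check numerically. Below is a minimal Monte Carlo sketch (not part of the original slides), assuming NumPy is available; the parameter values are hypothetical, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters for n = 3.
mu = np.array([1.0, -2.0, 0.5])      # mu_1, mu_2, mu_3
sigma = np.array([1.0, 2.0, 0.5])    # sigma_1, sigma_2, sigma_3
c = np.array([2.0, -1.0, 3.0])       # c_1, c_2, c_3

# Simulate Y = c_1 X_1 + c_2 X_2 + c_3 X_3 many times.
X = rng.normal(mu, sigma, size=(100_000, 3))
Y = X @ c

# Parameters predicted by the m.g.f. computation above.
mean_theory = c @ mu                 # sum of c_i mu_i = 5.5
var_theory = (c**2) @ (sigma**2)     # sum of c_i^2 sigma_i^2 = 10.25

print(Y.mean(), mean_theory)         # both close to 5.5
print(Y.var(), var_theory)           # both close to 10.25
```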

Theorem 5.5-1. Let $X_1, X_2, \ldots, X_n$ be independent random variables with respective $N(\mu_1, \sigma_1^2)$, $N(\mu_2, \sigma_2^2)$, $\ldots$, $N(\mu_n, \sigma_n^2)$ distributions, and let $c_1, c_2, \ldots, c_n$ be constants. Then the random variable

$$Y = c_1 X_1 + c_2 X_2 + \cdots + c_n X_n = \sum_{i=1}^{n} c_i X_i$$

has a

$$N\!\left(\sum_{i=1}^{n} c_i \mu_i,\; \sum_{i=1}^{n} c_i^2 \sigma_i^2\right)$$

distribution. (Proof of this theorem is addressed in Class Exercise #1.)

Corollary 5.5-1. If $X_1, X_2, \ldots, X_n$ are a random sample from a $N(\mu, \sigma^2)$ distribution, then the sample mean

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$$

has a $N(\mu, \sigma^2/n)$ distribution.
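The corollary can also be checked by simulation. A minimal sketch, assuming NumPy and SciPy; $\mu = 10$, $\sigma = 3$, and $n = 25$ are illustrative values, not from the slides.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

mu, sigma, n = 10.0, 3.0, 25
xbar = rng.normal(mu, sigma, size=(50_000, n)).mean(axis=1)

# Corollary 5.5-1: Xbar ~ N(mu, sigma^2 / n), i.e. sd = sigma / sqrt(n).
ks = stats.kstest(xbar, "norm", args=(mu, sigma / np.sqrt(n)))
print(ks.pvalue)   # a large p-value is consistent with N(10, 9/25)
```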

1.-continued
(b) Suppose the random variables $W$, $X$, and $Y$ are independent with respective $N(-7, 4)$, $N(10, 16)$, and $N(5, 36)$ distributions. Complete each of the following statements:

$2W + 5X - 8Y$ has a $N(-4, 2720)$ distribution.
$W/3$ has a $N(-7/3, 4/9)$ distribution.
$-W/3$ has a $N(7/3, 4/9)$ distribution.
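The arithmetic behind the first statement follows Theorem 5.5-1: the mean is $2(-7) + 5(10) - 8(5) = -4$, and each coefficient enters the variance squared, so the $-8$ contributes a factor of $64$. A small sketch of the bookkeeping (plain Python, illustration only):

```python
# Exercise 1(b) via Theorem 5.5-1: E(sum c_i X_i) = sum c_i mu_i, and
# Var(sum c_i X_i) = sum c_i^2 sigma_i^2 for independent X_i.
means = {"W": -7, "X": 10, "Y": 5}
variances = {"W": 4, "X": 16, "Y": 36}

m = 2 * means["W"] + 5 * means["X"] - 8 * means["Y"]
v = 2**2 * variances["W"] + 5**2 * variances["X"] + (-8) ** 2 * variances["Y"]
print(m, v)   # -4 2720
```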

Theorem 5.5-2. Let $X_1, X_2, \ldots, X_n$ be a random sample from a $N(\mu, \sigma^2)$ distribution, with sample mean

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$$

and sample variance

$$S^2 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n - 1}.$$

Then,
(1) the sample mean $\bar{X}$ and sample variance $S^2$ are independent random variables, and
(2) the random variable

$$\frac{(n-1)S^2}{\sigma^2} = \frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{\sigma^2}$$

has a $\chi^2(n-1)$ distribution. (Proof of this theorem is addressed in Class Exercises #2 & #3.)

Theorem 5.5-3. Let $Z$ and $U$ be independent random variables with a $N(0, 1)$ distribution and a $\chi^2(r)$ distribution, respectively. Then $T = \dfrac{Z}{\sqrt{U/r}}$ has a Student's t distribution with $r$ degrees of freedom. (Proof of this theorem is addressed in Class Exercise #7.)
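Both parts of Theorem 5.5-2 can be illustrated by simulation. A minimal sketch, assuming NumPy and SciPy; the values of $\mu$, $\sigma$, and $n$ are hypothetical. (Zero correlation does not prove independence, but it is a quick necessary check.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

mu, sigma, n = 5.0, 2.0, 10
X = rng.normal(mu, sigma, size=(100_000, n))
xbar = X.mean(axis=1)
s2 = X.var(axis=1, ddof=1)            # sample variance, n-1 denominator

# Part (2): (n-1) S^2 / sigma^2 should follow chi-square with n-1 df.
q = (n - 1) * s2 / sigma**2
print(stats.kstest(q, "chi2", args=(n - 1,)).pvalue)   # large: consistent

# Part (1): Xbar and S^2 should be uncorrelated.
print(np.corrcoef(xbar, s2)[0, 1])    # close to 0
```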

2. The random variables $X$ and $Y$ are independent, and each has a $N(\mu, \sigma^2)$ distribution. We define the random variables $V = X + Y$ and $W = X - Y$.

(a) Find the joint p.d.f. of $X$ and $Y$. Since $X$ and $Y$ are independent, their joint p.d.f. is

$$f(x, y) = \frac{1}{2\pi\sigma^2}\,\exp\!\left(-\,\frac{(x-\mu)^2 + (y-\mu)^2}{2\sigma^2}\right) \quad \text{if } -\infty < x < \infty,\ -\infty < y < \infty.$$

(b) Use the change-of-variables method to find the joint p.d.f. of $V$ and $W$. First, we find the space of $V$ and $W$: from $-\infty < x < \infty$ and $-\infty < y < \infty$, we get $-\infty < v < \infty$ and $-\infty < w < \infty$.

2.-continued Then, we find the inverse transformation as follows:

$$v = x + y, \quad w = x - y \qquad\Longrightarrow\qquad x = \frac{v + w}{2}, \quad y = \frac{v - w}{2}.$$

Next, we find the Jacobian determinant as follows:

$$J = \det\begin{pmatrix} \dfrac{\partial x}{\partial v} & \dfrac{\partial x}{\partial w} \\[2ex] \dfrac{\partial y}{\partial v} & \dfrac{\partial y}{\partial w} \end{pmatrix} = \det\begin{pmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix} = -\,\frac{1}{2}.$$

The joint p.d.f. of $V$ and $W$ is

$$g(v, w) = |J|\;\frac{1}{2\pi\sigma^2}\,\exp\!\left(-\,\frac{([v+w]/2 - \mu)^2 + ([v-w]/2 - \mu)^2}{2\sigma^2}\right) = \frac{1}{4\pi\sigma^2}\,\exp\!\left(-\,\frac{([v+w] - 2\mu)^2 + ([v-w] - 2\mu)^2}{8\sigma^2}\right)$$

if $-\infty < v < \infty$, $-\infty < w < \infty$.

2.-continued (c) Show that the random variables $V = X + Y$ and $W = X - Y$ are independent with each having a normal distribution, and find the mean and variance for each normal distribution.

First, we algebraically rewrite the joint p.d.f. of $V$ and $W$ as follows:

$$g(v, w) = \frac{1}{4\pi\sigma^2}\,\exp\!\left(-\,\frac{([v-2\mu]+w)^2 + ([v-2\mu]-w)^2}{8\sigma^2}\right) = \frac{1}{4\pi\sigma^2}\,\exp\!\left(-\,\frac{2[v-2\mu]^2 + 2w^2}{8\sigma^2}\right) = \frac{1}{4\pi\sigma^2}\,\exp\!\left(-\,\frac{(v-2\mu)^2 + w^2}{4\sigma^2}\right)$$

$$= \left[\frac{1}{(2\pi)^{1/2}\,2^{1/2}\,\sigma}\,\exp\!\left(-\,\frac{(v-2\mu)^2}{4\sigma^2}\right)\right] \left[\frac{1}{(2\pi)^{1/2}\,2^{1/2}\,\sigma}\,\exp\!\left(-\,\frac{w^2}{4\sigma^2}\right)\right].$$

The first factor is the p.d.f. for a $N(2\mu, 2\sigma^2)$ distribution, and the second factor is the p.d.f. for a $N(0, 2\sigma^2)$ distribution. Consequently, $V$ and $W$ must be independent random variables with the respective normal distributions stated above, since the joint p.d.f. is the product of the normal p.d.f. for $V$ and the normal p.d.f. for $W$.
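A quick simulation check of part (c), assuming NumPy; $\mu = 1$ and $\sigma = 2$ are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)

mu, sigma = 1.0, 2.0
X = rng.normal(mu, sigma, 100_000)
Y = rng.normal(mu, sigma, 100_000)
V, W = X + Y, X - Y

print(V.mean(), V.var())          # close to 2*mu = 2 and 2*sigma^2 = 8
print(W.mean(), W.var())          # close to 0 and 8
print(np.corrcoef(V, W)[0, 1])    # close to 0, as independence requires
```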

3. (a) If $X_1, X_2, \ldots, X_n$ are a random sample from a $N(\mu, \sigma^2)$ distribution, then $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$ is the sample mean, and $S^2 = \sum_{i=1}^{n}(X_i - \bar{X})^2 / (n-1)$ is the sample variance. Suppose $n = 2$, that is, the random sample is $X_1, X_2$. Let $V = X_1 + X_2$ and $W = X_1 - X_2$, and show that $\bar{X} = V/2$ and $S^2 = W^2/2$.

$$\bar{X} = \frac{X_1 + X_2}{2} = \frac{V}{2}$$

$$S^2 = \frac{(X_1 - \bar{X})^2 + (X_2 - \bar{X})^2}{2 - 1} = \left(X_1 - \frac{X_1+X_2}{2}\right)^2 + \left(X_2 - \frac{X_1+X_2}{2}\right)^2$$

$$= \left(\frac{X_1 - X_2}{2}\right)^2 + \left(\frac{X_2 - X_1}{2}\right)^2 = \frac{(X_1 - X_2)^2}{2} = \frac{W^2}{2}.$$

(b) Complete the following statements:

From Class Exercise #2, we find that $V = X_1 + X_2$ and $W = X_1 - X_2$ are independent random variables with respective $N(2\mu, 2\sigma^2)$ and $N(0, 2\sigma^2)$ distributions.

From Theorem 3.6-1, we find that $\dfrac{W}{\sigma\sqrt{2}}$ has a $N(0, 1)$ distribution.

From Theorem 3.6-2, we find that $\dfrac{W^2}{2\sigma^2}$ has a $\chi^2(1)$ distribution.
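A one-sample numerical check of the identities $\bar{X} = V/2$ and $S^2 = W^2/2$ (NumPy sketch; the $N(5, 4)$ sample is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
x1, x2 = rng.normal(5.0, 2.0, size=2)   # one random sample of size n = 2

v, w = x1 + x2, x1 - x2
xbar = np.mean([x1, x2])
s2 = np.var([x1, x2], ddof=1)           # sample variance, n-1 denominator

print(np.isclose(xbar, v / 2))          # True
print(np.isclose(s2, w**2 / 2))         # True
```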

3.-continued (c) Explain how the results from part (b) prove Theorem 5.5-2 when $n = 2$.

(1) Since $V$ and $W$ are independent, it seems clear that $\bar{X} = V/2$ and $S^2 = W^2/2$ are independent. (It is actually somewhat complicated to write a rigorous proof that if $V$ and $W$ are independent random variables, then $u_1(V)$ and $u_2(W)$ are independent random variables, even though this result seems obvious and can be extended to any number of random variables.)

(2) $\dfrac{(n-1)S^2}{\sigma^2} = \dfrac{S^2}{\sigma^2} = \dfrac{W^2}{2\sigma^2}$ has a $\chi^2(1) = \chi^2(n-1)$ distribution.

(A complete proof of Theorem 5.5-2 for general $n$ requires matrix algebra.)

4. The random variables $X$ and $Y$ are independent with respective $N(10, 16)$ and $N(5, 36)$ distributions. Use an appropriate previous result to find the distribution for each of the following random variables:

(a) $X + Y$ has a $N(15, 52)$ distribution (from Theorem 5.5-1).
(b) $X - Y$ has a $N(5, 52)$ distribution (from Theorem 5.5-1).
(c) $X - 3Y$ has a $N(-5, 340)$ distribution (from Theorem 5.5-1).
(d) $(X - 10)/4$ has a $N(0, 1)$ distribution (from Theorem 3.6-1).
(e) $(X - 10)^2/16$ has a $\chi^2(1)$ distribution (from Theorem 3.6-1 and Theorem 3.6-2).
(f) $(X - 10)^2/16 + (Y - 5)^2/36$ has a $\chi^2(2)$ distribution (from Corollary 5.4-3).
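Part (f) is easy to confirm by simulation, since each standardized square is an independent $\chi^2(1)$ variable. A sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

X = rng.normal(10, 4, 100_000)   # N(10, 16): standard deviation 4
Y = rng.normal(5, 6, 100_000)    # N(5, 36):  standard deviation 6
Q = (X - 10)**2 / 16 + (Y - 5)**2 / 36

# Exercise 4(f) says Q ~ chi-square with 2 degrees of freedom.
print(stats.kstest(Q, "chi2", args=(2,)).pvalue)   # large: consistent
```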

5. The random sample $X_1, X_2, \ldots, X_6$ is taken from a $N(10, 81)$ distribution, and the following random variables are defined:

$$Q = \frac{\displaystyle\sum_{i=1}^{6}(X_i - 10)^2}{81} \qquad\qquad W = \frac{\displaystyle\sum_{i=1}^{6}(X_i - \bar{X})^2}{81}$$

(a) Find constants $a$ and $b$ such that $P(a < Q < b) = $ ____. $Q$ has a $\chi^2(6)$ distribution, so $a$ and $b$ are read from the $\chi^2(6)$ quantiles for the chosen tail areas: $P($____ $< Q <$ ____$) = $ ____.

(b) Find $P(W > 11.07)$. $W$ has a $\chi^2(5)$ distribution, so $P(W > 11.07) = 0.05$.
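Once a target probability for part (a) is chosen, the constants come from $\chi^2(6)$ quantiles. A SciPy sketch; the 0.95 central probability with equal tails is an assumed example, since the slide's actual value is left blank above:

```python
from scipy import stats

# Part (b): P(W > 11.07) with W ~ chi2(5).
print(1 - stats.chi2.cdf(11.07, df=5))   # ~0.05

# Part (a), assuming a 0.95 central probability with equal 0.025 tails:
a, b = stats.chi2.ppf([0.025, 0.975], df=6)
print(a, b)                              # ~1.24 and ~14.45
```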

6. The random variables $X$ and $Y$ are independent, and each has a $N(0, 1)$ distribution. The random variables $R$ and $\Theta$ are defined to be the polar coordinates of the point $(X, Y)$.

(a) Find the joint p.d.f. of $X$ and $Y$. Since $X$ and $Y$ are independent, their joint p.d.f. is

$$f(x, y) = \frac{1}{2\pi}\, e^{-(x^2 + y^2)/2} \quad \text{if } -\infty < x < \infty,\ -\infty < y < \infty.$$

(b) Use the change-of-variables method to find the joint p.d.f. of $R$ and $\Theta$. First, we find the space of $R$ and $\Theta$: from $-\infty < x < \infty$ and $-\infty < y < \infty$, we get $0 \le r < \infty$ and $0 \le \theta < 2\pi$.

6.-continued Then, we find the inverse transformation as follows:

$$r = (x^2 + y^2)^{1/2}, \quad \theta = \text{the angle between the vector } (x, y) \text{ and the positive } x \text{ axis} \qquad\Longrightarrow\qquad x = r\cos\theta, \quad y = r\sin\theta.$$

Next, we find the Jacobian determinant as follows:

$$J = \det\begin{pmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\[2ex] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{pmatrix} = \det\begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix} = r.$$

The joint p.d.f. of $R$ and $\Theta$ is

$$g(r, \theta) = \frac{r}{2\pi}\, e^{-r^2/2} \quad \text{if } 0 < r < \infty,\ 0 < \theta < 2\pi.$$

(c) Find the marginal p.d.f. for $R$ and the marginal p.d.f. for $\Theta$.

$R$ has p.d.f.

$$g_1(r) = \int_0^{2\pi} \frac{r}{2\pi}\, e^{-r^2/2}\; d\theta = \left[\frac{r\,\theta}{2\pi}\, e^{-r^2/2}\right]_{\theta=0}^{\theta=2\pi} = r\, e^{-r^2/2} \quad \text{if } 0 < r < \infty.$$

$\Theta$ has p.d.f.

$$g_2(\theta) = \int_0^{\infty} \frac{r}{2\pi}\, e^{-r^2/2}\; dr = \left[-\,\frac{e^{-r^2/2}}{2\pi}\right]_{r=0}^{r=\infty} = \frac{1}{2\pi} \quad \text{if } 0 < \theta < 2\pi.$$

We recognize that $\Theta$ has a $U(0, 2\pi)$ distribution.

$$g(r, \theta) = \frac{r}{2\pi}\, e^{-r^2/2} \ \text{ if } 0 < r < \infty,\ 0 < \theta < 2\pi; \qquad g_1(r) = r\, e^{-r^2/2} \ \text{ if } 0 < r < \infty; \qquad g_2(\theta) = \frac{1}{2\pi} \ \text{ if } 0 < \theta < 2\pi.$$

We note that $g(r, \theta) = g_1(r)\, g_2(\theta)$, which implies that the random variables $R$ and $\Theta$ are independent.
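This factorization can be seen in simulation: the angle of a standard bivariate normal point is uniform on $[0, 2\pi)$ and uncorrelated with the radius. A sketch assuming NumPy and SciPy. (Run in the other direction, generating $(X, Y)$ from a uniform angle and a suitable radius, this same factorization is the idea behind the well-known Box-Muller sampling method.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)

r = np.hypot(x, y)                           # (x^2 + y^2)^(1/2)
theta = np.mod(np.arctan2(y, x), 2 * np.pi)  # angle mapped into [0, 2*pi)

print(stats.kstest(theta, "uniform", args=(0, 2 * np.pi)).pvalue)  # large
print(np.corrcoef(r, theta)[0, 1])           # close to 0
```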

7. The random variables $Z$ and $U$ are independent and have respectively a $N(0, 1)$ distribution and a $\chi^2(r)$ distribution.

(a) Find the joint p.d.f. of $(Z, U)$. Since $Z$ and $U$ are independent, their joint p.d.f. is

$$f(z, u) = \frac{1}{(2\pi)^{1/2}}\, e^{-z^2/2} \cdot \frac{1}{\Gamma(r/2)\, 2^{r/2}}\, u^{r/2 - 1}\, e^{-u/2} \quad \text{if } -\infty < z < \infty,\ 0 < u < \infty.$$

(b) Define the random variable $T = \dfrac{Z}{\sqrt{U/r}}$. Use the distribution function method to find the p.d.f. of $T$ by completing the steps outlined, and making use of the two facts from calculus reviewed in Class Exercise #7 in Section 5.2. The space of $T$ is $\{t \mid -\infty < t < \infty\}$.

The distribution function of $T = \dfrac{Z}{\sqrt{U/r}}$ is

$$G(t) = P(T \le t) = P\!\left(\frac{Z}{\sqrt{U/r}} \le t\right) = P\!\left(Z \le t\sqrt{U/r}\right) = \int_0^{\infty} \int_{-\infty}^{t\sqrt{u/r}} \frac{1}{\sqrt{\pi}\,\Gamma(r/2)\, 2^{(r+1)/2}}\, u^{r/2-1}\, e^{-u/2}\, e^{-z^2/2}\; dz\, du.$$

7.-continued The p.d.f. of $T$ is $g(t) = G'(t)$, obtained by differentiating under the integral sign: by the fundamental theorem of calculus and the chain rule, differentiating the inner integral with respect to $t$ replaces it with its integrand evaluated at $z = t\sqrt{u/r}$, multiplied by $\sqrt{u/r}$. This gives

$$g(t) = G'(t) = \int_0^{\infty} \frac{1}{\sqrt{\pi}\,\Gamma(r/2)\, 2^{(r+1)/2}}\, u^{r/2-1}\, e^{-u/2}\, e^{-(u/2)(t^2/r)}\, \sqrt{u/r}\; du = \int_0^{\infty} \frac{1}{\sqrt{\pi r}\,\Gamma(r/2)\, 2^{(r+1)/2}}\, u^{(r+1)/2 - 1}\, e^{-(u/2)(1 + t^2/r)}\; du.$$

(To simplify this p.d.f., we could either (1) make an appropriate change of variables in the integral, as is done in the proof of Theorem 5.5-3 in the textbook, or (2) do some algebra to make the formula under the integral a p.d.f. which we know must integrate to one (1), as we shall do here.)

$$g(t) = \frac{\Gamma[(r+1)/2]}{\sqrt{\pi r}\;\Gamma(r/2)\,(1 + t^2/r)^{(r+1)/2}} \int_0^{\infty} \frac{u^{(r+1)/2 - 1}\, \exp\!\left(-\,\dfrac{u}{2/(1 + t^2/r)}\right)}{\Gamma[(r+1)/2]\,\bigl[2/(1 + t^2/r)\bigr]^{(r+1)/2}}\; du.$$

The integrand is the p.d.f. for a gamma$\bigl[(r+1)/2,\ 2/(1 + t^2/r)\bigr]$ distribution, so the integral must equal one (1). This must be the p.d.f. we seek:

$$g(t) = \frac{\Gamma[(r+1)/2]}{\sqrt{\pi r}\;\Gamma(r/2)\,(1 + t^2/r)^{(r+1)/2}} \quad \text{for } -\infty < t < \infty.$$

This is the p.d.f. for a random variable having a Student's t distribution with $r$ degrees of freedom. This distribution is important in some future applications of the theory of statistics.
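The derivation can be checked by simulating $T = Z/\sqrt{U/r}$ directly and comparing with SciPy's Student's t distribution; $r = 5$ is an illustrative choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
r = 5

Z = rng.normal(size=200_000)
U = rng.chisquare(r, size=200_000)
T = Z / np.sqrt(U / r)

# T should follow Student's t with r degrees of freedom,
# exactly the p.d.f. g(t) derived above.
print(stats.kstest(T, "t", args=(r,)).pvalue)   # large: consistent with t(5)
```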

8. Suppose the random variable $T$ has a t distribution with $r$ degrees of freedom.

(a) If $r = 15$, then $P(T < 2.131) = 0.975$.
(b) If $r = 15$, then $P(T < -2.131) = 0.025$.
(c) If $r = 15$, then $P(-1.753 < T < 2.602) = 0.99 - 0.05 = 0.94$.
(d) If $r = 8$, then find a constant $c$ such that $P(|T| < c) = 0.99$.

$$P(|T| < c) = 0.99 \;\Rightarrow\; P(-c < T < c) = 0.99 \;\Rightarrow\; P(T < c) - P(T < -c) = 0.99 \;\Rightarrow\; P(T < c) - \bigl(1 - P(T < c)\bigr) = 0.99 \;\Rightarrow\; P(T < c) = 0.995 \;\Rightarrow\; c = t_{0.005}(8) = 3.355.$$

(e) $t_{0.995}(15) = -2.947$
(f) $t_{0.10}(8) = 1.397$
(g) $t_{0.90}(20) = -1.325$
(h) $t_{0.995}(57) \approx -2.576$ (using the normal approximation, since the table has no row for 57 degrees of freedom)
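These table look-ups can be reproduced with SciPy's t distribution (a sketch; the last line also shows why (h) is only an approximation):

```python
from scipy import stats

print(stats.t.cdf(2.131, df=15))    # ~0.975, part (a)
print(stats.t.cdf(2.602, df=15) - stats.t.cdf(-1.753, df=15))  # ~0.94, part (c)
print(stats.t.ppf(0.995, df=8))     # ~3.355, the constant c in part (d)
print(stats.t.ppf(0.005, df=57))    # ~-2.665; the table value -2.576 uses z
```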

According to Corollary 5.5-1: if $X_1, X_2, \ldots, X_n$ are a random sample from a $N(\mu, \sigma^2)$ distribution, then the sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$ has a $N(\mu, \sigma^2/n)$ distribution.

This motivates the following question, which turns out to be extremely important in the development of statistical analysis: if $X_1, X_2, \ldots, X_n$ are a random sample from a non-normal distribution, then what type of distribution does the sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$ have?

In order to thoroughly answer this question, we shall cover Sections 10.5 and 10.6 before continuing in Chapter 5 with Section 5.6.
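As a numerical preview of that question (a sketch, not from the slides), the code below draws sample means from an exponential distribution, which is decidedly non-normal, and shows their skewness shrinking toward zero as $n$ grows; Section 5.6 makes this behavior precise.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

for n in (2, 10, 50):
    # 100,000 sample means, each computed from n exponential(1) observations.
    xbar = rng.exponential(scale=1.0, size=(100_000, n)).mean(axis=1)
    # The mean stays near 1, the sd near 1/sqrt(n), and the skewness
    # (which is 2 for the exponential itself) shrinks like 2/sqrt(n).
    print(n, xbar.mean(), xbar.std(), stats.skew(xbar))
```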