Copyright © Cengage Learning. All rights reserved. 5 Joint Probability Distributions and Random Samples

Copyright © Cengage Learning. All rights reserved. 5.5 The Distribution of a Linear Combination

3 The sample mean $\bar{X}$ and sample total $T_o$ are special cases of a type of random variable that arises very frequently in statistical applications. Definition Given a collection of $n$ random variables $X_1, \ldots, X_n$ and $n$ numerical constants $a_1, \ldots, a_n$, the rv

$$Y = a_1 X_1 + a_2 X_2 + \cdots + a_n X_n = \sum_{i=1}^{n} a_i X_i \qquad (5.7)$$

is called a linear combination of the $X_i$'s.

4 The Distribution of a Linear Combination For example, $4X_1 - 5X_2 + 8X_3$ is a linear combination of $X_1$, $X_2$, and $X_3$ with $a_1 = 4$, $a_2 = -5$, and $a_3 = 8$. Taking $a_1 = a_2 = \cdots = a_n = 1$ gives $Y = X_1 + \cdots + X_n = T_o$, and $a_1 = a_2 = \cdots = a_n = 1/n$ yields $Y = \bar{X}$. Notice that we are not requiring the $X_i$'s to be independent or identically distributed. All the $X_i$'s could have different distributions and therefore different mean values and variances. We first consider the expected value and variance of a linear combination.

5 The Distribution of a Linear Combination Proposition Let $X_1, X_2, \ldots, X_n$ have mean values $\mu_1, \ldots, \mu_n$, respectively, and variances $\sigma_1^2, \ldots, \sigma_n^2$, respectively. 1. Whether or not the $X_i$'s are independent,

$$E(a_1 X_1 + a_2 X_2 + \cdots + a_n X_n) = a_1 E(X_1) + a_2 E(X_2) + \cdots + a_n E(X_n) = a_1 \mu_1 + \cdots + a_n \mu_n \qquad (5.8)$$

2. If $X_1, \ldots, X_n$ are independent,

$$V(a_1 X_1 + a_2 X_2 + \cdots + a_n X_n) = a_1^2 \sigma_1^2 + \cdots + a_n^2 \sigma_n^2 \qquad (5.9)$$

6 The Distribution of a Linear Combination and

$$\sigma_{a_1 X_1 + \cdots + a_n X_n} = \sqrt{a_1^2 \sigma_1^2 + \cdots + a_n^2 \sigma_n^2} \qquad (5.10)$$

3. For any $X_1, \ldots, X_n$,

$$V(a_1 X_1 + \cdots + a_n X_n) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \, \mathrm{Cov}(X_i, X_j) \qquad (5.11)$$
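To make the proposition concrete, here is a minimal Monte Carlo sketch in Python/NumPy that checks (5.8) and the covariance form (5.11); the coefficients and the covariance matrix are invented purely for illustration.

```python
# Monte Carlo check of (5.8) and (5.11); all numbers below are
# arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([4.0, -5.0, 8.0])          # coefficients a_1, a_2, a_3
mu = np.array([1.0, 2.0, 3.0])          # mean values mu_1, mu_2, mu_3
cov = np.array([[2.0, 0.5, 0.0],        # Cov(X_i, X_j); nonzero off-diagonal
                [0.5, 1.0, 0.3],        # entries make the X_i's dependent
                [0.0, 0.3, 4.0]])

ey_theory = a @ mu          # (5.8): sum of a_i * mu_i
vy_theory = a @ cov @ a     # (5.11): double sum of a_i a_j Cov(X_i, X_j)

x = rng.multivariate_normal(mu, cov, size=1_000_000)
y = x @ a                   # Y = a_1 X_1 + a_2 X_2 + a_3 X_3, row by row
print(ey_theory, y.mean())  # both ~ 18
print(vy_theory, y.var())   # both ~ 269
```

Even though these $X_i$'s are dependent, the simulated mean and variance of $Y$ track the formulas, as Statements 1 and 3 promise.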

7 The Distribution of a Linear Combination Proofs are sketched out at the end of the section. A paraphrase of (5.8) is that the expected value of a linear combination is the same linear combination of the expected values: for example, $E(2X_1 + 5X_2) = 2\mu_1 + 5\mu_2$. The result (5.9) in Statement 2 is a special case of (5.11) in Statement 3; when the $X_i$'s are independent, $\mathrm{Cov}(X_i, X_j) = 0$ for $i \ne j$ and $\mathrm{Cov}(X_i, X_i) = V(X_i)$ (this simplification actually occurs when the $X_i$'s are uncorrelated, a weaker condition than independence). Specializing to the case of a random sample ($X_i$'s iid) with $a_i = 1/n$ for every $i$ gives $E(\bar{X}) = \mu$ and $V(\bar{X}) = \sigma^2/n$. A similar comment applies to the rules for $T_o$.
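The iid special case is easy to verify numerically; a quick sketch follows, where the exponential distribution is an arbitrary choice made so the $X_i$'s are clearly not normal.

```python
# Sample mean as the linear combination with a_i = 1/n:
# E(Xbar) = mu and V(Xbar) = sigma^2 / n.
import numpy as np

rng = np.random.default_rng(1)
n, mu = 25, 2.0                                # sample size, true mean
samples = rng.exponential(mu, size=(200_000, n))
xbar = samples.mean(axis=1)                    # one Xbar per simulated sample

print(xbar.mean())  # ~ mu = 2.0
print(xbar.var())   # ~ sigma^2 / n = 2.0**2 / 25 = 0.16 (exponential: sigma = mu)
```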

8 Example 29 A gas station sells three grades of gasoline: regular, extra, and super. These are priced at $3.00, $3.20, and $3.40 per gallon, respectively. Let $X_1$, $X_2$, and $X_3$ denote the amounts of these grades purchased (gallons) on a particular day. Suppose the $X_i$'s are independent with $\mu_1 = 1000$, $\mu_2 = 500$, $\mu_3 = 300$, $\sigma_1 = 100$, $\sigma_2 = 80$, and $\sigma_3 = 50$.

9 Example 29 The revenue from sales is $Y = 3.0X_1 + 3.2X_2 + 3.4X_3$, and

$$E(Y) = 3.0\mu_1 + 3.2\mu_2 + 3.4\mu_3 = 3.0(1000) + 3.2(500) + 3.4(300) = \$5620$$

Because the $X_i$'s are independent, (5.9) gives

$$V(Y) = (3.0)^2\sigma_1^2 + (3.2)^2\sigma_2^2 + (3.4)^2\sigma_3^2 = 9(10{,}000) + 10.24(6400) + 11.56(2500) = 184{,}436$$

so $\sigma_Y = \sqrt{184{,}436} = 429.46$. cont'd
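The whole example is a few lines of arithmetic in code; here is a sketch using the numbers given above.

```python
# Example 29: mean and variance of revenue Y = 3.0 X1 + 3.2 X2 + 3.4 X3
import numpy as np

a = np.array([3.00, 3.20, 3.40])         # prices per gallon
mu = np.array([1000.0, 500.0, 300.0])    # mean gallons purchased
sigma = np.array([100.0, 80.0, 50.0])    # standard deviations

ey = a @ mu                 # (5.8): 5620.0
vy = (a**2) @ (sigma**2)    # (5.9): independence, so no covariance terms
print(ey, vy, np.sqrt(vy))  # 5620.0 184436.0 429.46...
```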

10 The Difference Between Two Random Variables

11 The Difference Between Two Random Variables An important special case of a linear combination results from taking $n = 2$, $a_1 = 1$, and $a_2 = -1$: $Y = a_1 X_1 + a_2 X_2 = X_1 - X_2$. We then have the following corollary to the proposition. Corollary $E(X_1 - X_2) = E(X_1) - E(X_2)$ for any two rv's $X_1$ and $X_2$. $V(X_1 - X_2) = V(X_1) + V(X_2)$ if $X_1$ and $X_2$ are independent rv's.

12 The Difference Between Two Random Variables The expected value of a difference is the difference of the two expected values, but the variance of a difference between two independent variables is the sum, not the difference, of the two variances. There is just as much variability in $X_1 - X_2$ as in $X_1 + X_2$ [writing $X_1 - X_2 = X_1 + (-1)X_2$, the term $(-1)X_2$ has the same amount of variability as $X_2$ itself].

13 Example 30 A certain automobile manufacturer equips a particular model with either a six-cylinder engine or a four-cylinder engine. Let $X_1$ and $X_2$ be fuel efficiencies for independently and randomly selected six-cylinder and four-cylinder cars, respectively. With $\mu_1 = 22$, $\mu_2 = 26$, $\sigma_1 = 1.2$, and $\sigma_2 = 1.5$,

$$E(X_1 - X_2) = \mu_1 - \mu_2 = 22 - 26 = -4$$

14 Example 30 If we relabel so that $X_1$ refers to the four-cylinder car, then $E(X_1 - X_2) = 4$, but the variance of the difference is still

$$V(X_1 - X_2) = \sigma_1^2 + \sigma_2^2 = (1.2)^2 + (1.5)^2 = 3.69$$

cont'd
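A short simulation of Example 30 shows the relabeling flipping the sign of the mean but leaving the variance alone (the normality assumed here is purely for the sketch; the example only specifies means and standard deviations).

```python
# Example 30: the variance of a difference is the SUM of the variances.
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000
x1 = rng.normal(22.0, 1.2, size=N)   # six-cylinder fuel efficiency
x2 = rng.normal(26.0, 1.5, size=N)   # four-cylinder fuel efficiency

d = x1 - x2
print(d.mean())  # ~ -4  (becomes +4 if the labels are swapped)
print(d.var())   # ~ 1.2**2 + 1.5**2 = 3.69 either way
```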

15 The Case of Normal Random Variables

16 The Case of Normal Random Variables When the $X_i$'s form a random sample from a normal distribution, $\bar{X}$ and $T_o$ are both normally distributed. Here is a more general result concerning linear combinations. Proposition If $X_1, X_2, \ldots, X_n$ are independent, normally distributed rv's (with possibly different means and/or variances), then any linear combination of the $X_i$'s also has a normal distribution. In particular, the difference $X_1 - X_2$ between two independent, normally distributed variables is itself normally distributed.
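One way to check this proposition by simulation is a goodness-of-fit test against the predicted normal distribution; in the sketch below the means, standard deviations, and coefficients are invented for illustration.

```python
# Y = 4 X1 - 5 X2 + 8 X3 for independent normal X_i should itself be
# normal, with mean a @ mu and variance (a**2) @ sigma**2 by (5.8)-(5.9).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = np.array([4.0, -5.0, 8.0])
mu = np.array([1.0, 2.0, 3.0])
sigma = np.array([0.5, 1.0, 2.0])

x = rng.normal(mu, sigma, size=(100_000, 3))   # independent normals
y = x @ a

ey = a @ mu
sy = np.sqrt((a**2) @ sigma**2)
# Kolmogorov-Smirnov test against N(ey, sy); a large p-value is
# consistent with Y being exactly normal, as the proposition states.
print(stats.kstest(y, stats.norm(ey, sy).cdf))
```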

17 Example 31 The total revenue from the sale of the three grades of gasoline on a particular day was $Y = 3.0X_1 + 3.2X_2 + 3.4X_3$, and we calculated $\mu_Y = 5620$ and (assuming independence) $\sigma_Y = 429.46$. If the $X_i$'s are normally distributed, the probability that revenue exceeds 4500 is

$$P(Y > 4500) = P\left(Z > \frac{4500 - 5620}{429.46}\right) = P(Z > -2.61) = 1 - \Phi(-2.61) = .9955$$
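The same probability is one call in code; a sketch using scipy.stats:

```python
# Example 31: P(Y > 4500) for Y ~ N(5620, 429.46)
from scipy.stats import norm

mu_y, sigma_y = 5620.0, 429.46
print(norm.sf(4500, loc=mu_y, scale=sigma_y))  # survival function, ~ 0.9955
```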

18 The Case of Normal Random Variables The CLT can also be generalized so that it applies to certain linear combinations. Roughly speaking, if $n$ is large and no individual term is likely to contribute too much to the overall value, then $Y = a_1X_1 + \cdots + a_nX_n$ has approximately a normal distribution.
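A rough illustration of this generalized CLT follows; both the uniform terms and the weights are arbitrary choices, the point being only that no single term dominates.

```python
# Many comparably weighted, decidedly non-normal terms: Y is near-normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 100
a = rng.uniform(0.5, 1.5, size=n)            # no coefficient dominates
x = rng.uniform(0.0, 1.0, size=(50_000, n))  # non-normal individual terms
y = x @ a                                    # one linear combination per row

# For an exactly normal Y, skewness and excess kurtosis are both 0.
print(stats.skew(y), stats.kurtosis(y))      # both near 0
```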