Joint Probability Distributions and Random Samples


Chapter 5. Joint Probability Distributions and Random Samples. Copyright © Cengage Learning. All rights reserved.

5.5 The Distribution of a Linear Combination

The Distribution of a Linear Combination The sample mean X̄ and sample total To are special cases of a type of random variable that arises very frequently in statistical applications. Definition Given a collection of n random variables X1, ..., Xn and n numerical constants a1, ..., an, the random variable Y = a1X1 + ... + anXn is called a linear combination of the Xi's.

The Distribution of a Linear Combination For example, consider someone who owns 100 shares of stock A, 200 shares of stock B, and 500 shares of stock C. Denote the share prices of these three stocks at some particular time by X1, X2, and X3, respectively. Then the value of this individual's stock holdings is the linear combination Y = 100X1 + 200X2 + 500X3. Taking a1 = a2 = ... = an = 1 gives Y = X1 + ... + Xn = To, and a1 = a2 = ... = an = 1/n yields Y = (X1 + ... + Xn)/n = X̄.
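To make the definition concrete, here is a minimal sketch in Python (the share prices below are hypothetical numbers, not from the example):

```python
import numpy as np

# Coefficients of the linear combination: shares held of stocks A, B, C
a = np.array([100, 200, 500])

# Hypothetical share prices X1, X2, X3 (made-up values)
x = np.array([25.40, 18.10, 7.35])

# Portfolio value Y = 100*X1 + 200*X2 + 500*X3 is a dot product
y = a @ x
print(y)  # 100*25.40 + 200*18.10 + 500*7.35 = 9835.0
```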

The Distribution of a Linear Combination Notice that we are not requiring the Xi’s to be independent or identically distributed. All the Xi’s could have different distributions and therefore different mean values and variances. We first consider the expected value and variance of a linear combination.

The Distribution of a Linear Combination Proposition Let Y = a1X1 + ... + anXn.
1. Whether or not the Xi's are independent, E(a1X1 + ... + anXn) = a1E(X1) + ... + anE(Xn) (5.8)
2. If X1, ..., Xn are independent, V(a1X1 + ... + anXn) = a1²V(X1) + ... + an²V(Xn) (5.9) and σY = √(a1²σ1² + ... + an²σn²) (5.10)
3. For any X1, ..., Xn, V(a1X1 + ... + anXn) = Σi Σj ai aj Cov(Xi, Xj) (5.11)

The Distribution of a Linear Combination Proofs are sketched out at the end of the section. A paraphrase of (5.8) is that the expected value of a linear combination is the same linear combination of the expected values; for example, E(2X1 + 5X2) = 2μ1 + 5μ2. The result (5.9) in Statement 2 is a special case of (5.11) in Statement 3; when the Xi's are independent, Cov(Xi, Xj) = 0 for i ≠ j and Cov(Xi, Xj) = V(Xi) for i = j (this simplification actually occurs when the Xi's are uncorrelated, a weaker condition than independence). Specializing to the case of a random sample (Xi's iid) with ai = 1/n for every i gives E(X̄) = μ and V(X̄) = σ²/n. A similar comment applies to the rules for To.
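As a sanity check on (5.8) and (5.9), the following simulation sketch (arbitrary, made-up coefficients and parameters) compares the theoretical mean and variance of a linear combination with empirical values:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([2.0, -1.0, 0.5])      # arbitrary coefficients
mu = np.array([10.0, 4.0, -3.0])    # means of X1, X2, X3
sigma = np.array([1.0, 2.0, 0.5])   # standard deviations

# Theoretical values from (5.8) and (5.9)
ey_theory = a @ mu              # sum of a_i * mu_i
vy_theory = (a**2) @ sigma**2   # sum of a_i^2 * sigma_i^2 (independence)

# Simulate independent (here normal) X_i's and form Y = sum a_i X_i
n_sims = 1_000_000
x = rng.normal(mu, sigma, size=(n_sims, 3))
y = x @ a

print(ey_theory, y.mean())  # both close to 14.5
print(vy_theory, y.var())   # both close to 8.0625
```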

Example 5.30 A gas station sells three grades of gasoline: regular, extra, and super. These are priced at $3.00, $3.20, and $3.40 per gallon, respectively. Let X1, X2, and X3 denote the amounts of these grades purchased (gallons) on a particular day. Suppose the Xi's are independent with μ1 = 1000, μ2 = 500, μ3 = 300, σ1 = 100, σ2 = 80, and σ3 = 50.

Example 5.30 cont'd The revenue from sales is Y = 3.0X1 + 3.2X2 + 3.4X3, and E(Y) = 3.0μ1 + 3.2μ2 + 3.4μ3 = 3.0(1000) + 3.2(500) + 3.4(300) = $5620. Because the Xi's are independent, V(Y) = (3.0)²σ1² + (3.2)²σ2² + (3.4)²σ3² = 9(10,000) + 10.24(6400) + 11.56(2500) = 184,436, so σY = √184,436 = $429.46.
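The same arithmetic in a few lines of Python (just restating the example's numbers):

```python
import numpy as np

prices = np.array([3.0, 3.2, 3.4])     # $/gallon: regular, extra, super
mu = np.array([1000.0, 500.0, 300.0])  # mean gallons sold per day
sigma = np.array([100.0, 80.0, 50.0])  # sd of gallons sold per day

ey = prices @ mu              # E(Y) = 5620.0
vy = (prices**2) @ sigma**2   # V(Y) = 184436.0 (needs independence)
sy = np.sqrt(vy)              # sigma_Y ≈ 429.46
print(ey, vy, sy)
```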

The Difference Between Two Random Variables

The Difference Between Two Random Variables An important special case of a linear combination results from taking n = 2, a1 = 1, and a2 = –1: Y = a1X1 + a2X2 = X1 – X2 We then have the following corollary to the proposition. Corollary E(X1 – X2) = E(X1) – E(X2) and, if X1 and X2 are independent, V(X1 – X2) = V(X1) + V(X2).

The Difference Between Two Random Variables The expected value of a difference is the difference of the two expected values, but the variance of a difference between two independent variables is the sum, not the difference, of the two variances. There is just as much variability in X1 – X2 as in X1 + X2 [writing X1 – X2 = X1 + (–1)X2, (–1)X2 has the same amount of variability as X2 itself].
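A short simulation sketch (made-up means and standard deviations) makes the "sum, not difference" point concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x1 = rng.normal(50.0, 3.0, n)  # V(X1) = 9, made-up parameters
x2 = rng.normal(20.0, 4.0, n)  # V(X2) = 16

# For independent X1, X2 the variance of the difference is the SUM
print((x1 - x2).var())  # close to 9 + 16 = 25
print((x1 + x2).var())  # also close to 25: same variability either way
```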

Example 5.31 A certain automobile manufacturer equips a particular model with either a six-cylinder engine or a four-cylinder engine. Let X1 and X2 be fuel efficiencies for independently and randomly selected six-cylinder and four-cylinder cars, respectively. With μ1 = 22, μ2 = 26, σ1 = 1.2, and σ2 = 1.5, E(X1 – X2) = μ1 – μ2 = 22 – 26 = –4 and V(X1 – X2) = σ1² + σ2² = (1.2)² + (1.5)² = 3.69.

Example 5.31 cont’d If we relabel so that X1 refers to the four-cylinder car, then E(X1 – X2) = 4, but the variance of the difference is still 3.69.

The Case of Normal Random Variables

The Case of Normal Random Variables When the Xi's form a random sample from a normal distribution, X̄ and To are both normally distributed. Here is a more general result concerning linear combinations. Proposition If X1, X2, ..., Xn are independent, normally distributed rv's, then any linear combination of the Xi's also has a normal distribution. In particular, the difference X1 – X2 between two independent, normally distributed variables is itself normally distributed.
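A quick empirical check of the proposition (a sketch with arbitrary parameters): quantiles of a simulated linear combination of independent normals match those of the normal distribution with mean Σ aiμi and standard deviation √(Σ ai²σi²).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = np.array([1.0, -2.0, 0.5])     # arbitrary coefficients
mu = np.array([0.0, 1.0, 4.0])     # arbitrary means
sigma = np.array([1.0, 0.5, 2.0])  # arbitrary standard deviations

# Simulate Y = a1*X1 + a2*X2 + a3*X3 for independent normal X_i's
y = rng.normal(mu, sigma, size=(500_000, 3)) @ a

# Compare simulated quantiles with the theoretical normal quantiles
m, s = a @ mu, np.sqrt((a**2) @ sigma**2)
for q in (0.05, 0.5, 0.95):
    print(np.quantile(y, q), stats.norm.ppf(q, loc=m, scale=s))
```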

Example 5.32 The total revenue from the sale of the three grades of gasoline on a particular day was Y = 3.0X1 + 3.2X2 + 3.4X3, and we calculated μY = 5620 and (assuming independence) σY = 429.46. If the Xi's are normally distributed, the probability that revenue exceeds 4500 is P(Y > 4500) = P(Z > (4500 – 5620)/429.46) = P(Z > –2.61) = 1 – Φ(–2.61) = .9955
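The same probability can be computed directly; here is a short sketch using scipy (the mean and standard deviation come from Example 5.30):

```python
from scipy.stats import norm

mu_y, sigma_y = 5620.0, 429.46
p = norm.sf(4500.0, loc=mu_y, scale=sigma_y)  # P(Y > 4500), upper tail
print(round(p, 4))  # ≈ 0.9955
```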

The Case of Normal Random Variables The CLT can also be generalized so it applies to certain linear combinations. Roughly speaking, if n is large and no individual term is likely to contribute too much to the overall value, then Y has approximately a normal distribution.
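To see this generalized CLT at work, the sketch below (all ingredients made up) standardizes a linear combination of many independent exponential, hence non-normal, terms and checks its closeness to the standard normal distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200                       # many terms in the linear combination
a = rng.uniform(0.5, 1.5, n)  # coefficients of comparable size

# Non-normal ingredients: exponential X_i's with mean 1 and variance 1
x = rng.exponential(1.0, size=(50_000, n))
y = x @ a

# Theoretical mean and sd of Y from (5.8) and (5.9)
m, s = a.sum(), np.sqrt((a**2).sum())

# Standardized Y should be approximately standard normal
z = (y - m) / s
print(stats.kstest(z, "norm").statistic)  # small: close to N(0, 1)
```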