Jiaping Wang Department of Mathematical Science 04/22/2013, Monday


Chapter 8. Some Approximations to Probability Distributions: Limit Theorems
Sections 8.2 -- 8.3: Convergence in Probability and in Distribution

Outline
Part 1. Convergence in Probability
Part 2. Convergence in Distribution

Part 1. Convergence in Probability

Introduction
Suppose that a coin has probability p, with 0 ≤ p ≤ 1, of coming up heads on a single flip, and suppose that we flip the coin n times. What can we say about the fraction of heads observed in the n flips? For example, with p = 0.5, simulating different numbers of trials gives:

n          100      200      300      400
%          0.4700   0.5200   0.4833   0.5050
|% - 0.5|  0.0300   0.0200   0.0167   0.0050

From this we can see that as n → ∞, the observed ratio gets closer to 0.5, and thus the difference |% - 0.5| gets closer to zero.
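The table above can be reproduced with a short simulation. This is an illustrative sketch, not part of the original slides; the function name and the fixed seed are my own choices, and the exact fractions will differ from the slide's table since those came from a different simulation run.

```python
import random

def head_fraction(n, p=0.5, seed=0):
    """Flip a p-coin n times and return the observed fraction of heads."""
    rng = random.Random(seed)
    heads = sum(1 for _ in range(n) if rng.random() < p)
    return heads / n

# As n grows, the observed fraction of heads settles near p = 0.5,
# so |fraction - 0.5| shrinks toward zero.
for n in (100, 1000, 10000, 100000):
    frac = head_fraction(n)
    print(n, round(frac, 4), round(abs(frac - 0.5), 4))
```

Running this shows the same qualitative behavior as the table: the deviation from 0.5 shrinks as n grows.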

Definition 8.1
In mathematical notation, let X denote the number of heads observed in the n tosses. Then E(X) = np and V(X) = np(1-p). One way to measure the closeness of X/n to p is to ascertain the probability that the distance |X/n - p| will be less than a pre-assigned small value ε, so that P(|X/n - p| < ε) → 1 as n → ∞.
Definition 8.1: The sequence of random variables X1, X2, ..., Xn is said to converge in probability to the constant c if, for every positive number ε,
lim_{n→∞} P(|Xn - c| < ε) = 1.

Theorem 8.1
Weak Law of Large Numbers: Let X1, X2, ..., Xn be independent and identically distributed random variables, with E(Xi) = μ and V(Xi) = σ² < ∞ for each i = 1, ..., n. Let X̄n = (1/n) Σ_{i=1}^n Xi. Then, for any positive real number ε,
lim_{n→∞} P(|X̄n - μ| ≥ ε) = 0,  or equivalently  lim_{n→∞} P(|X̄n - μ| < ε) = 1.
Thus X̄n converges in probability toward μ. The proof follows from Tchebysheff's theorem with X replaced by X̄n and σ² by σ²/n, then letting k = ε√n/σ.

Theorem 8.2
Suppose that Xn converges in probability toward μ1 and Yn converges in probability toward μ2. Then the following statements are also true.
1. Xn + Yn converges in probability toward μ1 + μ2.
2. XnYn converges in probability toward μ1μ2.
3. Xn/Yn converges in probability toward μ1/μ2, provided μ2 ≠ 0.
4. √Xn converges in probability toward √μ1, provided P(Xn ≥ 0) = 1.

Example 8.1
Let X be a binomial random variable with probability of success p and number of trials n. Show that X/n converges in probability toward p.
Answer: We have seen that we can write X = ΣYi, with Yi = 1 if the i-th trial results in success and Yi = 0 otherwise. Then X/n = (1/n) ΣYi. Also E(Yi) = p and V(Yi) = p(1-p). The conditions of Theorem 8.1 are therefore fulfilled with μ = p and σ² = p(1-p) < ∞, and we conclude that, for any positive ε, lim_{n→∞} P(|X/n - p| ≥ ε) = 0.
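For the binomial case, the tail probability P(|X/n - p| ≥ ε) can be computed exactly rather than simulated, and compared with the Tchebysheff bound p(1-p)/(nε²) used in the proof of Theorem 8.1. This is an illustrative sketch of mine, not from the slides; the function names and parameter choices are assumptions.

```python
from math import comb

def binom_pmf(n, p, k):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def tail_prob(n, p, eps):
    """Exact P(|X/n - p| >= eps) for X ~ Binomial(n, p)."""
    return sum(binom_pmf(n, p, k) for k in range(n + 1)
               if abs(k / n - p) >= eps)

# The exact tail probability shrinks with n and stays below the
# Tchebysheff bound p(1-p)/(n*eps^2).
p, eps = 0.5, 0.1
for n in (25, 100, 400):
    exact = tail_prob(n, p, eps)
    bound = p * (1 - p) / (n * eps**2)
    print(n, round(exact, 5), round(bound, 5))
```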

Example 8.2
Suppose that X1, X2, ..., Xn are independent and identically distributed random variables with E(Xi) = μ1, E(Xi²) = μ2, E(Xi³) = μ3, E(Xi⁴) = μ4, all assumed finite. Let S² denote the sample variance given by S² = (1/n) Σ(Xi - X̄)². Show that S² converges in probability to V(Xi).
Answer: Notice that S² = (1/n) Σ_{i=1}^n Xi² - X̄², where X̄ = (1/n) Σ_{i=1}^n Xi. The quantity (1/n) Σ_{i=1}^n Xi² is the average of n independent and identically distributed variables of the form Xi², with E(Xi²) = μ2 and V(Xi²) = μ4 - μ2², which is finite. Thus Theorem 8.1 tells us that (1/n) Σ_{i=1}^n Xi² converges in probability to μ2. Since X̄ converges in probability to μ1, Theorem 8.2 then gives that S² = (1/n) Σ_{i=1}^n Xi² - X̄² converges in probability to μ2 - μ1² = V(Xi). This example shows that for large samples, the sample variance has a high probability of being close to the population variance.
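The conclusion of Example 8.2 is easy to observe numerically: for Uniform(0, 1) samples, V(Xi) = 1/12 ≈ 0.08333, and S² should home in on that value as n grows. This simulation is my own sketch; the distribution, seed, and sample sizes are assumptions chosen for illustration.

```python
import random

def sample_variance(xs):
    """S^2 = (1/n) * sum (x_i - xbar)^2, the divide-by-n sample variance."""
    n = len(xs)
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / n

rng = random.Random(2)
# X_i ~ Uniform(0, 1), so V(X_i) = 1/12 ~ 0.08333.
for n in (50, 5000, 500000):
    s2 = sample_variance([rng.random() for _ in range(n)])
    print(n, round(s2, 5))
```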

Part 2. Convergence in Distribution

Definition 8.2
In the last section we studied only the convergence of certain random variables toward constants. In this section we study the probability distributions of certain types of random variables as n tends toward infinity.
Definition 8.2: Let Xn be a random variable with distribution function Fn(x), and let X be a random variable with distribution function F(x). If
lim_{n→∞} Fn(x) = F(x)
at every point x at which F(x) is continuous, then Xn is said to converge in distribution toward X, and F(x) is called the limiting distribution function of Xn.

Example 8.3
Let X1, X2, ..., Xn be independent uniform random variables over the interval (θ, 0) for a negative constant θ. In addition, let Yn = min(X1, X2, ..., Xn). Find the limiting distribution of Yn.
Answer: The distribution function for each uniform random variable Xi is
F(x) = P(Xi ≤ x) = 0 for x < θ;  (x - θ)/(-θ) for θ ≤ x ≤ 0;  1 for x > 0.
Then
G(y) = P(Yn ≤ y) = 1 - P(Yn > y) = 1 - P(min(X1, X2, ..., Xn) > y)
     = 1 - P(X1 > y) P(X2 > y) ... P(Xn > y) = 1 - [1 - F(y)]^n
     = 0 for y < θ;  1 - (y/θ)^n for θ ≤ y ≤ 0;  1 for y > 0.
Since 0 < y/θ < 1 for θ < y < 0, we have (y/θ)^n → 0, so
lim_{n→∞} G(y) = 0 for y < θ;  1 for y ≥ θ,
which is the distribution function of the constant θ. Thus Yn converges in distribution to θ.
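The degenerate limit in Example 8.3 is visible in simulation: the minimum of n uniforms on (θ, 0) piles up at θ as n grows. This sketch is my own addition; the choice θ = -2 and the seed are assumptions for illustration.

```python
import random

def min_of_uniforms(n, theta=-2.0, seed=3):
    """Y_n = min of n independent draws from Uniform(theta, 0), theta < 0."""
    rng = random.Random(seed)
    return min(rng.uniform(theta, 0.0) for _ in range(n))

# Y_n approaches theta = -2 as n grows: the limiting distribution
# is degenerate at theta.
for n in (10, 100, 10000):
    print(n, round(min_of_uniforms(n), 4))
```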

Theorem 8.3
Let Xn and X be random variables with moment-generating functions Mn(t) and M(t), respectively. If
lim_{n→∞} Mn(t) = M(t)
for all real t, then Xn converges in distribution toward X.

Example 8.4
Let Xn be a binomial random variable with n trials and probability p of success on each trial. If n tends toward infinity and p tends toward zero with np = λ remaining fixed, show that Xn converges in distribution toward a Poisson random variable.
Answer: The moment-generating function of the binomial random variable Xn is
Mn(t) = (q + pe^t)^n = [1 + p(e^t - 1)]^n    (since q = 1 - p)
      = [1 + (λ/n)(e^t - 1)]^n    (since np = λ).
Recall that lim_{n→∞} (1 + k/n)^n = e^k. Letting k = λ(e^t - 1), we have
lim_{n→∞} Mn(t) = exp[λ(e^t - 1)],
which is the moment-generating function of the Poisson random variable with mean λ. As an example, when n = 10 and p = 0.1, the true probability that X is less than 2 from the binomial distribution is 0.73609, and the approximate value from the Poisson distribution is 0.73575; they are very close. So we can approximate binomial probabilities by the Poisson distribution when n is large and p is small.
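The slide's numerical check (n = 10, p = 0.1, so λ = np = 1, and P(X < 2) = P(X ≤ 1)) can be reproduced directly from the two CDFs. The code below is my own illustration; the function names are assumptions.

```python
from math import comb, exp, factorial

def binom_cdf(n, p, k):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def poisson_cdf(lam, k):
    """P(X <= k) for X ~ Poisson(lam)."""
    return exp(-lam) * sum(lam**i / factorial(i) for i in range(k + 1))

# Binomial(10, 0.1): P(X <= 1) ~ 0.73610
print(round(binom_cdf(10, 0.1, 1), 5))
# Poisson(1): P(X <= 1) = 2/e ~ 0.73576
print(round(poisson_cdf(1.0, 1), 5))
```

The two probabilities agree to about three decimal places, as the slide reports.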

Example 8.5
In monitoring for pollution, an experimenter collects a small volume of water and counts the number of bacteria in the sample. Unlike earlier problems, we have only one observation; for purposes of approximating the probability distribution of counts, we can think of the volume as the quantity that is getting large. Let X denote the bacteria count per cubic centimeter of water, and assume that X has a Poisson probability distribution with mean λ. We want to approximate probabilities for X when λ is large, which we do by showing that Y = (X - λ)/√λ converges in distribution toward a standard normal random variable as λ tends toward infinity. Specifically, if the allowable pollution in a water supply is a count of 110 bacteria per cubic centimeter, approximate the probability that X will be at most 110, assuming that λ = 100.

Solution
Answer: The mgf of the Poisson random variable X is MX(t) = exp[λ(e^t - 1)], so the mgf of Y = (X - λ)/√λ is
MY(t) = exp(-t√λ) exp[λ(e^{t/√λ} - 1)].
The term e^{t/√λ} - 1 can be expanded as
e^{t/√λ} - 1 = t/√λ + t²/(2λ) + t³/(6λ√λ) + ...
Thus
MY(t) = exp[-t√λ + λ(t/√λ + t²/(2λ) + t³/(6λ√λ) + ...)] = exp[t²/2 + t³/(6√λ) + ...].
As λ → ∞, MY(t) → exp(t²/2), which is the mgf of the standard normal distribution. So we can approximate Poisson probabilities by the standard normal distribution when λ is large enough (for example, λ ≥ 25). Here,
P(X ≤ 110) = P((X - λ)/√λ ≤ (110 - 100)/10) = P(Y ≤ 1) ≈ 0.8413.
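The final calculation can be checked against the exact Poisson probability. This sketch is my own addition, not from the slides; note that the plain normal approximation Φ(1) ≈ 0.8413 will differ somewhat from the exact Poisson sum, since no continuity correction is applied.

```python
from math import erf, exp, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def poisson_cdf_exact(lam, k):
    """Exact P(X <= k) for X ~ Poisson(lam), building each pmf term iteratively."""
    term = exp(-lam)          # P(X = 0)
    total = term
    for i in range(1, k + 1):
        term *= lam / i       # P(X = i) from P(X = i - 1)
        total += term
    return total

lam = 100.0
# Normal approximation from the slide: P(X <= 110) ~ P(Y <= 1) = Phi(1) ~ 0.8413.
print(round(norm_cdf((110 - lam) / sqrt(lam)), 4))
# Exact Poisson probability for comparison.
print(round(poisson_cdf_exact(lam, 110), 4))
```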