Section 10.6 Recall from calculus: lim= lim= lim= x  y  1 1 + — x x 1 1 + — x kx k 1 + — y y eekek (Let y = kx in the previous limit.) ekek If derivatives.


1  Section 10.6. Recall from calculus:

lim_{x→∞} (1 + 1/x)^x = e,   lim_{x→∞} (1 + 1/x)^{kx} = e^k,   and (letting y = kx in the previous limit) lim_{y→∞} (1 + k/y)^y = e^k.

If the derivatives of f(x) up to order k are all continuous on an interval about 0 (zero), then for all x on this interval,

f(x) = f(0) + (x − 0) f^[1](0) + (x − 0)^2 f^[2](0)/2! + (x − 0)^3 f^[3](0)/3! + … + (x − 0)^k f^[k](h)/k!   for some 0 < h < x.
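The first limits above can be checked numerically. A minimal sketch (the value k = 2 is an arbitrary illustrative choice, not from the slides):

```python
import math

# Numerically illustrate lim_{y -> infinity} (1 + k/y)^y = e^k
def compound(k, y):
    return (1 + k / y) ** y

k = 2.0
approx = compound(k, 10**7)   # large but finite y
print(approx, math.exp(k))    # the two values agree to several decimals
```

For y = 10^7 the relative error is on the order of k^2/(2y), so the agreement with e^k is already very close.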

2  1.(a) Let X_1, X_2, …, X_n be a random sample from a Bernoulli distribution with success probability p. The following random variables are defined:

V = Σ_{i=1}^{n} X_i ,   X̄ = (Σ_{i=1}^{n} X_i)/n = V/n ,   W = (Σ_{i=1}^{n} X_i − np) / √(np(1 − p)) .

Find the m.g.f. for each of V and X̄. From Corollary 5.4-1, we have that

(1) the m.g.f. of the random variable V = Σ_{i=1}^{n} X_i is M_V(t) = Π_{i=1}^{n} (1 − p + pe^t) = (1 − p + pe^t)^n. (We recognize that V has a b(n, p) distribution.)

(2) the m.g.f. of the random variable X̄ = V/n is M_X̄(t) = M_V(t/n) = (1 − p + pe^{t/n})^n.
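The closed form M_V(t) = (1 − p + pe^t)^n can be verified against the direct definition E[e^{tV}] computed from the b(n, p) probability mass function. A small check (the values n = 10, p = 0.3, t = 0.5 are illustrative assumptions):

```python
import math

# E[e^{tV}] computed directly from the binomial pmf
def binom_mgf_direct(n, p, t):
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * math.exp(t * k)
               for k in range(n + 1))

# The closed form from Corollary 5.4-1: (1 - p + p e^t)^n
def binom_mgf_formula(n, p, t):
    return (1 - p + p * math.exp(t)) ** n

n, p, t = 10, 0.3, 0.5
print(binom_mgf_direct(n, p, t), binom_mgf_formula(n, p, t))
```

The two numbers agree to floating-point precision, since the product of n identical Bernoulli m.g.f.s is exactly the binomial m.g.f.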

3  (b) Find the limiting distribution for V as n tends to infinity, with np equal to a given constant λ, forcing p to go to 0 (zero). Since np = λ is fixed,

lim_{n→∞} M_V(t) = lim_{n→∞} (1 − p + pe^t)^n
                 = lim_{n→∞} (1 − np/n + (np)e^t/n)^n
                 = lim_{n→∞} (1 − λ/n + λe^t/n)^n
                 = lim_{n→∞} (1 + λ(e^t − 1)/n)^n
                 = e^{λ(e^t − 1)}.

The limiting distribution of V is a Poisson(λ) distribution. Consequently, for small (or large!) values of p, a binomial distribution can be approximated by a Poisson distribution with mean λ = np. This should not be surprising, since the Poisson distribution was derived as the limit of a sequence of binomial distributions where p tended to zero.
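The convergence of the binomial m.g.f. to the Poisson m.g.f. with np = λ fixed can be watched directly. A sketch (λ = 0.4 and t = 1.0 are illustrative choices):

```python
import math

# Binomial m.g.f. (1 - p + p e^t)^n evaluated with p = lam/n held so np = lam
def mgf_binomial_fixed_mean(lam, t, n):
    p = lam / n
    return (1 - p + p * math.exp(t)) ** n

lam, t = 0.4, 1.0
poisson_mgf = math.exp(lam * (math.exp(t) - 1))  # Poisson(lam) m.g.f. at t
for n in (10, 100, 10**6):
    print(n, mgf_binomial_fixed_mean(lam, t, n))
print(poisson_mgf)
```

As n grows the binomial values settle onto exp(λ(e^t − 1)), the Poisson(λ) m.g.f., with error shrinking like 1/n.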

4  1.-continued
(c) Find the limiting distribution for V as n tends to infinity, with p a fixed constant.

lim_{n→∞} M_V(t) = lim_{n→∞} (1 − p + pe^t)^n = ∞   for t > 0, since 1 − p + pe^t > 1.

This is not a moment generating function, so we cannot find a limiting distribution for V.

5  (d) Find the limiting distribution for X̄ as n tends to infinity, with p a fixed constant.

lim_{n→∞} M_X̄(t) = lim_{n→∞} (1 − p + pe^{t/n})^n
  = lim_{n→∞} (1 − p + p[1 + t/n + (t/n)^2/2! + (t/n)^3/3! + …])^n
  = lim_{n→∞} (1 + p[t/n + (t/n)^2/2! + (t/n)^3/3! + …])^n
  = lim_{n→∞} (1 + [pt + pt^2/(2!n) + pt^3/(3!n^2) + …]/n)^n .

It is intuitively obvious that all terms in the numerator except the first go to 0 as n → ∞, and (from advanced calculus) the terms going to 0 can be ignored. Hence

lim_{n→∞} M_X̄(t) = lim_{n→∞} (1 + pt/n)^n = e^{pt}.

This is the moment generating function corresponding to a distribution where the value p has probability 1 (one).
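The limit M_X̄(t) → e^{pt} can also be seen numerically. A sketch (p = 0.3 and t = 1.0 are illustrative values, not from the slides):

```python
import math

# (1 - p + p e^{t/n})^n, the m.g.f. of the Bernoulli sample mean
def mgf_xbar(p, t, n):
    return (1 - p + p * math.exp(t / n)) ** n

p, t = 0.3, 1.0
target = math.exp(p * t)  # m.g.f. of the point mass at p
for n in (10, 1000, 10**6):
    print(n, mgf_xbar(p, t, n))
print(target)
```

The values approach e^{pt}, the m.g.f. of the distribution that puts probability one on the point p.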

6  Suppose X_1, X_2, …, X_n is a random sample from any distribution with finite mean μ and finite variance σ^2. Let M(t) be the common moment generating function of the X_i, that is, for each i = 1, 2, …, n, we have M(t) = E(e^{tX_i}).

From Corollary 5.4-1(b), the moment generating function of the random variable X̄ = (Σ_{i=1}^{n} X_i)/n is

M_X̄(t) = Π_{i=1}^{n} M(t/n) = [M(t/n)]^n .

With M(t) and M′(t) both continuous on an interval about 0 (zero), we have that for all t on this interval,

M(t) = M(0) + t M′(h) = 1 + t M′(h)   for some 0 < h < t.

7  Consequently, we have that for all t on this interval,

M_X̄(t) = [M(t/n)]^n = [1 + (t/n) M′(h)]^n = [1 + μt/n + (t/n)(M′(h) − M′(0))]^n   for some 0 < h < t/n,

since M′(0) = μ. To investigate the limiting distribution of X̄ as n → ∞, we consider

lim_{n→∞} M_X̄(t) = lim_{n→∞} (1 + [μt + t(M′(h) − M′(0))]/n)^n .

It is intuitively obvious that the second term in the numerator goes to 0 as n → ∞, and (from advanced calculus) this term can be ignored. Hence

lim_{n→∞} M_X̄(t) = lim_{n→∞} (1 + μt/n)^n = e^{μt}.

This is the moment generating function corresponding to a distribution where the value μ has probability 1 (one).
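The degenerate limit at μ says the sample mean concentrates at the population mean (the weak law of large numbers). A quick simulation sketch; the exponential distribution with mean μ = 2 and the seed are illustrative assumptions:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible
mu = 2.0        # population mean of the exponential distribution

# As n grows, the sample mean should settle near mu
for n in (10, 1000, 100000):
    sample = [random.expovariate(1 / mu) for _ in range(n)]
    print(n, statistics.fmean(sample))
```

The standard deviation of the sample mean is σ/√n, so the printed means tighten around 2 as n increases.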

8  For i = 1, 2, …, n, suppose Y_i = (X_i − μ)/σ, and let

W = Σ_{i=1}^{n} Y_i / √n = (Σ_{i=1}^{n} X_i − nμ) / (σ√n) = (X̄ − μ) / (σ/√n).

Let m(t) be the common m.g.f. for each Y_i. Then for each i = 1, 2, …, n, we have E(Y_i) = m′(0) = 0, and Var(Y_i) = E(Y_i^2) = m″(0) = 1.

From Theorem 5.4-1, the moment generating function of the random variable W is

M_W(t) = Π_{i=1}^{n} m(t/√n) = [m(t/√n)]^n .

With m(t) and m″(t) both continuous on an interval about 0 (zero), we have that for all t on this interval,

m(t) = m(0) + t m′(0) + (1/2) t^2 m″(h) = 1 + (1/2) t^2 m″(h)   for some 0 < h < t.

9  Consequently, we have that for all t on this interval,

M_W(t) = [m(t/√n)]^n = [1 + (t^2/(2n)) m″(h)]^n = [1 + (t^2/(2n))(1) + (t^2/(2n))(m″(h) − m″(0))]^n   for some 0 < h < t/√n.

To investigate the limiting distribution of W as n → ∞, we consider

lim_{n→∞} M_W(t) = lim_{n→∞} (1 + [t^2/2 + (t^2/2)(m″(h) − m″(0))]/n)^n .

It is intuitively obvious that the second term in the numerator goes to 0 as n → ∞, and (from advanced calculus) this term can be ignored. Hence

lim_{n→∞} M_W(t) = lim_{n→∞} (1 + (t^2/2)/n)^n = e^{t^2/2}.

This is the moment generating function corresponding to a standard normal (N(0,1)) distribution.
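The final limit above is just the calculus fact from Section 10.6 with k = t^2/2. A numeric sketch (t = 1.5 is an illustrative value):

```python
import math

# (1 + (t^2/2)/n)^n should approach e^{t^2/2}, the N(0,1) m.g.f. at t
t = 1.5
target = math.exp(t * t / 2)
for n in (10, 1000, 10**6):
    print(n, (1 + t * t / (2 * n)) ** n)
print(target)
```

By n = 10^6 the sequence matches e^{t^2/2} to roughly six decimal places, mirroring how M_W(t) approaches the standard normal m.g.f.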

10  This proves the following important theorem in the text: the Central Limit Theorem (Theorem 5.6-1).

11  1.-continued
(e) Find the limiting distribution for W as n tends to infinity, with p a fixed constant. For each i, μ = E(X_i) = p, and σ^2 = Var(X_i) = p(1 − p). From the Central Limit Theorem, we have that the limiting distribution for

W = (Σ_{i=1}^{n} X_i − np) / √(np(1 − p)) = (Σ_{i=1}^{n} X_i − nμ) / (σ√n) = (X̄ − p) / √(p(1 − p)/n)

is a N(0,1) (standard normal) distribution.

12  2.(a) Suppose Y has a b(400, p) distribution, and we want to approximate P(Y ≥ 3). If p = 0.001, explain why a Poisson distribution can be expected to give a good approximation of P(Y ≥ 3), and use the Poisson approximation to find this probability.

In Class Exercise #1(b), we found that the limiting distribution of a sequence of b(n, p) distributions as n tends to infinity is Poisson when np remains fixed, which forces p to tend to 0 (zero). This suggests that the Poisson approximation to a binomial distribution is better when p is close to zero (or one). With λ = np = (400)(0.001) = 0.4,

P(Y ≥ 3) = 1 − P(Y ≤ 2) ≈ 1 − 0.992 = 0.008.

(b) What other distribution may potentially be used to approximate a binomial probability when p is not sufficiently close to zero (or one)? The Central Limit Theorem tells us that with a sufficiently large sample size n, the normal distribution can be used.
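The Poisson approximation in part (a) can be reproduced and compared against the exact binomial probability:

```python
import math

# Y ~ b(400, 0.001); approximate P(Y >= 3) with Poisson(lam), lam = np = 0.4
lam = 400 * 0.001

# Poisson approximation: 1 - P(Y <= 2)
poisson_approx = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                         for k in range(3))

# Exact binomial tail for comparison
exact = 1 - sum(math.comb(400, k) * 0.001**k * 0.999**(400 - k)
                for k in range(3))

print(round(poisson_approx, 3), round(exact, 3))
```

Both values round to 0.008, confirming that with p = 0.001 the Poisson approximation is excellent.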

