Sums of Random Variables and Long-Term Averages. Sums of R.V.'s: S_n = X_1 + X_2 + ... + X_n.

Thus Var(X + Y) = Var(X) + Var(Y) only if Cov(X, Y) = 0. If the X_k are i.i.d. (independent, identically distributed) with mean μ and variance σ², then

E(S_n) = nμ, Var(S_n) = nσ².
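A quick numerical check of these formulas, sketched with Python's standard library (the Uniform(0, 1) distribution and the sample sizes are arbitrary choices, not from the slides):

```python
import random
import statistics

random.seed(0)
n = 30          # number of i.i.d. terms in each sum S_n
trials = 20000  # number of independent sums to simulate

# X_k ~ Uniform(0, 1): mu = 1/2, sigma^2 = 1/12
mu, var = 0.5, 1.0 / 12.0

sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

est_mean = statistics.fmean(sums)   # should be close to n * mu = 15
est_var = statistics.variance(sums) # should be close to n * var = 2.5

print(est_mean, n * mu)
print(est_var, n * var)
```

The empirical mean and variance of the simulated sums match nμ and nσ² up to Monte Carlo noise.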

PDF of sums of independent R.V.'s: because of independence, the density of the sum is the convolution of the individual densities, f_{S_n}(x) = (f_{X_1} * f_{X_2} * ... * f_{X_n})(x); equivalently, the characteristic function of S_n is the product of the individual characteristic functions.
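The same convolution rule holds for pmf's in the discrete case. A minimal sketch, using the hypothetical example of two fair dice (exact arithmetic via `fractions`):

```python
from fractions import Fraction

# pmf of one fair die: P(X = k) = 1/6 for k = 1..6
die = {k: Fraction(1, 6) for k in range(1, 7)}

def convolve(pmf_a, pmf_b):
    """pmf of the sum of two independent discrete R.V.'s."""
    out = {}
    for a, pa in pmf_a.items():
        for b, pb in pmf_b.items():
            out[a + b] = out.get(a + b, 0) + pa * pb
    return out

two_dice = convolve(die, die)
print(two_dice[7])             # 1/6, the most likely total
print(two_dice[2])             # 1/36 ("snake eyes")
print(sum(two_dice.values()))  # 1, a valid pmf
```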

Sample Mean: If the X_k are i.i.d. measurements of a R.V. X with mean μ, the sample mean M_n = S_n / n = (1/n) Σ_{k=1}^{n} X_k is itself a random variable. Since E(M_n) = (1/n) Σ E(X_k) = μ, M_n is an unbiased estimator of μ.
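Unbiasedness means E(M_n) = μ for every n, even a tiny one; averaging many independent realizations of M_n makes this visible (a sketch, with Uniform(0, 1) samples chosen for illustration):

```python
import random
import statistics

random.seed(1)
mu = 0.5     # true mean of Uniform(0, 1)
n = 5        # deliberately small sample size: unbiasedness holds for any n
reps = 50000

# Each replication gives one realization of the sample mean M_n.
sample_means = [statistics.fmean(random.random() for _ in range(n))
                for _ in range(reps)]

# Averaging many independent M_n's estimates E(M_n), which equals mu.
est = statistics.fmean(sample_means)
print(est)   # close to 0.5 even though each M_n uses only 5 samples
```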

Clearly Var(M_n) = σ²/n → 0, so by Chebyshev's inequality P(|M_n − μ| ≥ ε) ≤ σ²/(nε²) → 0 as n → ∞. In fact, this holds even when σ² does not exist.

Weak Law of Large Numbers: If the X_k are i.i.d. samples of X with E(X) = μ, μ finite, then for all ε > 0, lim_{n→∞} P(|M_n − μ| < ε) = 1 (the limit of a probability equals 1).

Strong Law of Large Numbers: If the X_k are i.i.d. samples of X with E(X) = μ and Var(X) = σ², σ² finite, then P(lim_{n→∞} M_n = μ) = 1 (the probability of a limit equals 1).

The Weak Law states that, for a large enough n, M_n is close to μ with high probability. The Strong Law states that, with probability 1, a sequence of M_n's calculated using n samples converges to μ as n → ∞. Note that the Weak Law does not say anything about any particular sequence of M_n's converging as a function of n; it only gives the probability of being close to μ for any fixed n, and this probability approaches 1 as n → ∞.
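The law of large numbers can be watched numerically by tracking the running sample mean along one long sequence (a sketch; Bernoulli(0.3) samples are an arbitrary choice):

```python
import random

random.seed(2)
p = 0.3  # E(X_k) = p for Bernoulli(p)

# One long sequence of i.i.d. Bernoulli(p) samples.
xs = [1 if random.random() < p else 0 for _ in range(100000)]

# Running mean M_n = (X_1 + ... + X_n) / n along the sequence.
running_sum = 0
for i, x in enumerate(xs, start=1):
    running_sum += x
    if i in (10, 1000, 100000):
        print(i, running_sum / i)  # drifts toward p = 0.3

m_n = running_sum / len(xs)  # final M_n
```

The early values of M_n fluctuate widely; the later ones settle near p, as the strong law predicts for (almost) every sample path.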

Central Limit Theorem: Let X_1, X_2, ... be a sequence of i.i.d. R.V.'s with mean μ and variance σ², where μ and σ² exist. Then Z_n = (S_n − nμ)/(σ√n) converges in distribution to a standard normal:

lim_{n→∞} P(Z_n ≤ z) = (1/√(2π)) ∫_{−∞}^{z} e^{−t²/2} dt.

I.e., the sum of "enough" i.i.d. R.V.'s, properly normalized, is approximately normally distributed.
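A simulation sketch: normalize sums of a visibly skewed distribution and check that the result behaves like N(0, 1) (the Exponential(1) choice, with μ = σ = 1, is just for illustration):

```python
import math
import random
import statistics

random.seed(3)
n = 50          # terms per sum
trials = 20000  # number of normalized sums

# X_k ~ Exponential(1): mu = 1, sigma = 1, strongly right-skewed.
mu, sigma = 1.0, 1.0

zs = []
for _ in range(trials):
    s_n = sum(random.expovariate(1.0) for _ in range(n))
    zs.append((s_n - n * mu) / (sigma * math.sqrt(n)))

# If the CLT holds, Z_n is approximately N(0, 1).
z_mean = statistics.fmean(zs)
z_sd = statistics.pstdev(zs)
frac_95 = sum(abs(z) < 1.96 for z in zs) / trials

print(z_mean)   # ~0
print(z_sd)     # ~1
print(frac_95)  # ~0.95, the standard-normal central coverage
```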

More general version: If the X_k are independent (not necessarily identically distributed), then under suitable conditions (S_n − μ_S)/σ_S still converges in distribution to N(0, 1), where μ_S and σ_S² are the mean and variance of S_n. If the X_k are i.i.d. with mean μ and variance σ², this reduces to μ_S = nμ and σ_S² = nσ².

This holds if, for at least some δ > 2, the δ-th moments exist for all X_k. Condition (1) is true if σ_k > ε > 0 for all k; condition (2) is true if all the densities are 0 outside the interval [−c, c]; and it holds in many other situations.

In the discrete case, S_n takes integer values k. E.g., if the X_k are 0/1 outcomes of Bernoulli trials, S_n has a binomial distribution, with μ_S = np and σ_S² = npq (q = 1 − p). By the CLT,

P(S_n = k) ≈ (1/√(2πnpq)) e^{−(k − np)²/(2npq)},

which is called the de Moivre–Laplace theorem.
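Comparing the exact binomial pmf to the normal approximation above (a sketch; n = 100, p = 0.5 are arbitrary illustrative values):

```python
import math

def binom_pmf(n, p, k):
    """Exact binomial probability P(S_n = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def de_moivre_laplace(n, p, k):
    """Normal (de Moivre-Laplace) approximation to P(S_n = k)."""
    q = 1 - p
    var = n * p * q  # sigma_S^2 = npq
    return math.exp(-(k - n * p) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p = 100, 0.5
for k in (40, 50, 60):
    print(k, binom_pmf(n, p, k), de_moivre_laplace(n, p, k))
```

Already at n = 100 the two agree to about three decimal places near the mean np.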

Convergence of R.V. Sequences: How does a sequence of R.V.'s X_1, X_2, ... converge to a R.V. X, i.e. X_n → X as n → ∞?

A sequence of R.V.'s is a function that assigns a countably infinite sequence of real values X_k(ζ) to each outcome ζ from a sample space S; we write the sequence as {X_n(ζ)}, or {X_n}.

E.g., S = (0, 1) (i.e. ζ ∈ (0, 1)) and X_n(ζ) = ζ(1 − 1/n), so X_n(ζ) → ζ.
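The slide's example can be evaluated directly: for any fixed outcome ζ, the gap |X_n(ζ) − ζ| = ζ/n shrinks to 0 deterministically.

```python
# The slide's example: S = (0, 1), X_n(zeta) = zeta * (1 - 1/n).
def x_n(zeta, n):
    return zeta * (1 - 1 / n)

zeta = 0.7                  # one fixed outcome in (0, 1)
for n in (1, 10, 1000, 10**6):
    print(n, x_n(zeta, n))  # approaches zeta = 0.7

# For every zeta the gap |X_n(zeta) - zeta| = zeta / n -> 0,
# so this sequence converges for all outcomes (sure convergence).
```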

Convergence: X n  X if, for any  > 0,  integer N such that | X n – X | N Cauchy Criterion: X n  X iff, for any  > 0,  integer N such that | X n – X m | N XnXn nN X }2 

Types of Convergence

Sure convergence: {X_n(ζ)} → X(ζ) surely if X_n(ζ) → X(ζ) as n → ∞ for every ζ ∈ S, i.e. the sample sequence for each ζ converges, though possibly to different values for different ζ's.

Almost-sure convergence: {X_n(ζ)} → X(ζ) almost surely if P(ζ : X_n(ζ) → X(ζ) as n → ∞) = 1. So there might be some outcomes for which X_n(ζ) does not converge, but they have probability 0. E.g., the strong law of large numbers.

Mean-square convergence: {X_n(ζ)} → X(ζ) in the MS sense if E[(X_n(ζ) − X(ζ))²] → 0 as n → ∞. (In Cauchy-criterion terms: E[(X_n(ζ) − X_m(ζ))²] → 0 as m, n → ∞.)

Convergence in probability: {X_n(ζ)} → X(ζ) in probability if, for any ε > 0, P(|X_n(ζ) − X(ζ)| > ε) → 0 as n → ∞. I.e., it is the probability of being within ε of X(ζ) that converges, not the X_n(ζ) themselves. (Weak law of large numbers: X_n = M_n, X = μ.)

Convergence in distribution: A sequence {X_n} with cdf's {F_n(x)} converges in distribution to X with cdf F(x) if F_n(x) → F(x) as n → ∞ for every x at which F(x) is continuous. E.g., the central limit theorem.

[Figure: relationships between convergence modes — sure ⇒ almost-sure ⇒ in probability ⇒ in distribution, and mean-square ⇒ in probability.]

Example:

But Y_n has a well-defined distribution, so Y_n converges in distribution as n → ∞.