Probability theory 2011: Convergence concepts in probability theory. Definitions and relations between convergence concepts; sufficient conditions for almost sure convergence.

Probability theory 2011
Convergence concepts in probability theory
- Definitions and relations between convergence concepts
- Sufficient conditions for almost sure convergence
- Convergence via transforms
- The law of large numbers and the central limit theorem

Coin-tossing: relative frequency of heads
Convergence of each trajectory? Convergence in probability?
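One way to get a feel for a single trajectory is to simulate it. The sketch below (Python, not part of the slides; the seed and number of tosses are arbitrary choices) tosses a fair coin and tracks the running relative frequency of heads, which by the strong law of large numbers should settle near 1/2.

```python
import random

def relative_frequency_of_heads(n_tosses, seed=0):
    """Simulate n fair coin tosses and return the running relative
    frequency of heads after each toss (one sample trajectory)."""
    rng = random.Random(seed)
    heads = 0
    trajectory = []
    for n in range(1, n_tosses + 1):
        heads += rng.random() < 0.5   # True counts as 1
        trajectory.append(heads / n)
    return trajectory

traj = relative_frequency_of_heads(100_000)
print(traj[99], traj[9_999], traj[99_999])
```

Different seeds give different trajectories, but each one stabilizes near 1/2 for large n.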

Convergence to a constant
The sequence {X_n} of random variables converges almost surely to the constant c if and only if P({ω : X_n(ω) → c as n → ∞}) = 1.
The sequence {X_n} of random variables converges in probability to the constant c if and only if, for all ε > 0, P({ω : |X_n(ω) − c| > ε}) → 0 as n → ∞.

An (artificial) example
Let X_1, X_2, … be a sequence of independent binary random variables such that P(X_n = 1) = 1/n and P(X_n = 0) = 1 − 1/n.
Does X_n converge to 0 in probability? Does X_n converge to 0 almost surely? Common exception set?
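A numerical sketch of this example (Python, not from the slides; the seed and cutoffs are arbitrary). Convergence in probability is immediate, since P(|X_n − 0| > ε) = 1/n → 0 for any 0 < ε < 1; but Σ 1/n diverges, so by independence and the second Borel-Cantelli lemma (covered later in these slides) X_n = 1 happens infinitely often with probability one, ruling out almost sure convergence.

```python
import random

rng = random.Random(1)
N = 100_000
# Independent X_n with P(X_n = 1) = 1/n, P(X_n = 0) = 1 - 1/n.
xs = [1 if rng.random() < 1.0 / n else 0 for n in range(1, N + 1)]

# Ones keep appearing arbitrarily late; the expected number of ones in
# the second half equals the harmonic tail sum below (about log 2).
late_ones = sum(xs[50_000:])
harmonic_tail = sum(1.0 / n for n in range(50_001, N + 1))
print(late_ones, round(harmonic_tail, 4))
```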

The law of large numbers for random variables with finite variance
Let {X_n} be a sequence of independent and identically distributed random variables with mean μ and variance σ², and set S_n = X_1 + … + X_n. Then S_n/n → μ in probability as n → ∞.
Proof: Assume that μ = 0. Then, by Chebyshev's inequality, P(|S_n/n| > ε) ≤ Var(S_n/n)/ε² = σ²/(nε²) → 0 as n → ∞.
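The Chebyshev step in the proof can be checked numerically. This sketch (Python, not from the slides; U(0,1) summands and the values of n, ε, the seed and the trial count are arbitrary choices) compares the empirical probability P(|S_n/n − μ| > ε) with the bound σ²/(nε²).

```python
import random

rng = random.Random(5)
n, eps = 200, 0.05
sigma2 = 1 / 12            # variance of U(0, 1)
trials = 20_000
exceed = 0
for _ in range(trials):
    mean_n = sum(rng.random() for _ in range(n)) / n
    exceed += abs(mean_n - 0.5) > eps   # mu = 1/2 for U(0, 1)
empirical = exceed / trials
chebyshev_bound = sigma2 / (n * eps * eps)
print(round(empirical, 4), round(chebyshev_bound, 4))
```

The empirical frequency is typically far below the Chebyshev bound, which is crude but sufficient for the proof.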

Convergence to a random variable: definitions
The sequence {X_n} of random variables converges almost surely to the random variable X if and only if P({ω : X_n(ω) → X(ω) as n → ∞}) = 1. Notation: X_n → X a.s.
The sequence {X_n} of random variables converges in probability to the random variable X if and only if, for all ε > 0, P({ω : |X_n(ω) − X(ω)| > ε}) → 0 as n → ∞. Notation: X_n → X in probability.

Convergence to a random variable: an example
Assume that the concentration of NO in air is continuously recorded, and let X_t be the concentration at time t. Consider the random variables:
Does Y_n converge to Y in probability? Does Y_n converge to Y almost surely?

Convergence in distribution: an example
Let X_n ∼ Bin(n, c/n). Then the distribution of X_n converges to a Po(c) distribution as n → ∞. (Figure: the binomial case p = 0.1.)
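This limit can be checked numerically. The sketch below (Python, not from the slides; c = 2 and the truncation point K are arbitrary choices) computes the total variation distance between Bin(n, c/n) and Po(c) for increasing n.

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, c):
    return exp(-c) * c ** k / factorial(k)

c = 2.0
K = 60  # both pmfs are numerically negligible beyond k = 60 when c = 2
tvs = {}
for n in (10, 100, 1000):
    # total variation distance: 0.5 * sum_k |P(X_n = k) - P(Po(c) = k)|
    tvs[n] = 0.5 * sum(abs(binom_pmf(k, n, c / n) - poisson_pmf(k, c))
                       for k in range(K + 1))
print(tvs)
```

The distances shrink as n grows, consistent with convergence in distribution.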

Convergence in distribution and in norm
The sequence X_n converges in distribution to the random variable X as n → ∞ iff F_{X_n}(x) → F_X(x) for all x where F_X(x) is continuous. Notation: X_n → X in distribution.
The sequence X_n converges in quadratic mean to the random variable X as n → ∞ iff E[(X_n − X)²] → 0. Notation: X_n → X in quadratic mean.

Relations between the convergence concepts
Almost sure convergence ⇒ convergence in probability ⇒ convergence in distribution
Convergence in r-mean ⇒ convergence in probability

Convergence in probability implies convergence in distribution
Note that, for all ε > 0,
F_{X_n}(x) = P(X_n ≤ x) ≤ P(X ≤ x + ε) + P(|X_n − X| > ε)
F_X(x − ε) = P(X ≤ x − ε) ≤ P(X_n ≤ x) + P(|X_n − X| > ε)
Letting n → ∞ and then ε → 0 at continuity points x of F_X gives F_{X_n}(x) → F_X(x).

Convergence almost surely versus convergence in r-mean
Consider a branching process in which the offspring distribution has mean 1. Does it converge to zero almost surely? Does it converge to zero in quadratic mean?
Let X_1, X_2, … be a sequence of independent random variables such that P(X_n = n²) = 1/n² and P(X_n = 0) = 1 − 1/n². Does X_n converge to 0 in probability? Does X_n converge to 0 almost surely? Does X_n converge to 0 in quadratic mean?
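For the second sequence the relevant quantities can be computed directly: E[X_n²] = n⁴ · (1/n²) = n² diverges, so there is no convergence in quadratic mean, while Σ P(X_n ≠ 0) = Σ 1/n² < ∞ gives almost sure convergence by the first Borel-Cantelli lemma. A minimal numerical sketch (Python, not from the slides):

```python
# Second moments E[X_n^2] = n^4 * P(X_n = n^2) = n^2 diverge, so X_n does
# not converge to 0 in quadratic mean, even though it does almost surely:
# the probabilities P(X_n != 0) = 1/n^2 are summable (first Borel-Cantelli).
second_moments = [n ** 4 * (1.0 / n ** 2) for n in (1, 10, 100, 1000)]
prob_sum = sum(1.0 / n ** 2 for n in range(1, 100_001))  # partial sum; limit is pi^2/6
print(second_moments, round(prob_sum, 5))
```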

Relations between different types of convergence to a constant
Almost sure convergence ⇒ convergence in probability ⇔ convergence in distribution
Convergence in r-mean ⇒ convergence in probability
(For a constant limit, convergence in distribution and convergence in probability are equivalent.)

Convergence via generating functions
Let X, X_1, X_2, … be nonnegative, integer-valued random variables, and suppose that g_{X_n}(t) → g_X(t) for all t ∈ (0, 1). Then X_n → X in distribution.
Is the limit function of a sequence of generating functions a generating function?
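As an illustration, the generating function of Bin(n, c/n) is (1 − c/n + ct/n)^n, which converges to e^{c(t−1)}, the generating function of Po(c). A quick numerical check (Python, not from the slides; c = 2 and t = 0.7 are arbitrary choices):

```python
from math import exp

def binomial_pgf(t, n, p):
    """Probability generating function of Bin(n, p): (1 - p + p t)^n."""
    return (1 - p + p * t) ** n

c, t = 2.0, 0.7
poisson_pgf = exp(c * (t - 1))   # pgf of Po(c) at t
for n in (10, 100, 10_000):
    print(n, round(binomial_pgf(t, n, c / n), 6), round(poisson_pgf, 6))
```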

Convergence via moment generating functions
Let X, X_1, X_2, … be random variables, and suppose that ψ_{X_n}(t) → ψ_X(t) for all t in some interval around 0. Then X_n → X in distribution.
Is the limit function of a sequence of moment generating functions a moment generating function?

Convergence via characteristic functions
Let X, X_1, X_2, … be random variables, and suppose that φ_{X_n}(t) → φ_X(t) for all t. Then X_n → X in distribution.
Is the limit function of a sequence of characteristic functions a characteristic function?

Convergence to a constant via characteristic functions
Let X_1, X_2, … be a sequence of random variables, and suppose that φ_{X_n}(t) → e^{ict} for all t. Then X_n → c in probability.

The law of large numbers (for variables with finite expectation)
Let {X_n} be a sequence of independent and identically distributed random variables with expectation μ, and set S_n = X_1 + … + X_n. Then S_n/n → μ in probability as n → ∞.

The strong law of large numbers (for variables with finite expectation)
Let {X_n} be a sequence of independent and identically distributed random variables with expectation μ, and set S_n = X_1 + … + X_n. Then S_n/n → μ almost surely as n → ∞.

The central limit theorem
Let {X_n} be a sequence of independent and identically distributed random variables with mean μ and variance σ², and set S_n = X_1 + … + X_n. Then (S_n − nμ)/(σ√n) → N(0, 1) in distribution as n → ∞.
Proof: If μ = 0, the characteristic function of S_n/(σ√n) is (φ_X(t/(σ√n)))^n = (1 − t²/(2n) + o(1/n))^n → e^{−t²/2}, the characteristic function of N(0, 1).
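A simulation sketch of the theorem (Python, not from the slides; U(0,1) summands, n = 30, the seed and the sample count are arbitrary choices): standardized sums should have mean near 0 and standard deviation near 1.

```python
import random
import statistics

def standardized_uniform_sum(n, rng):
    """(S_n - n*mu) / (sigma * sqrt(n)) for X_i ~ U(0, 1),
    where mu = 1/2 and sigma^2 = 1/12."""
    s = sum(rng.random() for _ in range(n))
    return (s - n * 0.5) / ((1 / 12) ** 0.5 * n ** 0.5)

rng = random.Random(42)
samples = [standardized_uniform_sum(30, rng) for _ in range(20_000)]
print(round(statistics.mean(samples), 3), round(statistics.pstdev(samples), 3))
```

A histogram of `samples` would look approximately like the N(0, 1) density.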

Rate of convergence in the central limit theorem
Example: X ∼ U(0, 1).

Sums of exponentially distributed random variables
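The sum of n i.i.d. Exp(λ) variables has a Gamma(n, λ) distribution, with mean n/λ and variance n/λ². A simulation sketch (Python, not from the slides; n = 5, λ = 2 and the seed are arbitrary choices) checks the first two moments:

```python
import random
import statistics

rng = random.Random(7)
n, lam = 5, 2.0
# The sum of n i.i.d. Exp(lam) variables is Gamma(n, lam)-distributed,
# with mean n/lam = 2.5 and variance n/lam^2 = 1.25 for these parameters.
sums = [sum(rng.expovariate(lam) for _ in range(n)) for _ in range(50_000)]
print(round(statistics.mean(sums), 3), round(statistics.variance(sums), 3))
```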

Convergence of empirical distribution functions
For each fixed x, the empirical distribution function satisfies F_n(x) → F(x) almost surely as n → ∞.
Proof: Write F_n(x) as a sum of indicator functions, F_n(x) = (1/n) Σ_{i=1}^n 1{X_i ≤ x}, and apply the strong law of large numbers.
Bootstrap techniques: the original distribution is replaced with the empirical distribution.
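A sketch of this convergence (Python, not from the slides; the Exp(1) choice, sample size and seed are arbitrary): the empirical distribution function of a large sample is compared with the true F(x) = 1 − e^{−x}.

```python
import random
from math import exp

rng = random.Random(3)
data = [rng.expovariate(1.0) for _ in range(100_000)]   # X_i ~ Exp(1)

def ecdf(sample, x):
    """Empirical distribution function F_n(x): fraction of X_i <= x."""
    return sum(v <= x for v in sample) / len(sample)

# True distribution function of Exp(1) is F(x) = 1 - exp(-x).
for x in (0.5, 1.0, 2.0):
    print(x, round(ecdf(data, x), 4), round(1 - exp(-x), 4))
```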

Resampling techniques: the bootstrap method
Observed data → sampling with replacement → resampled data
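A minimal bootstrap sketch (Python, not from the slides; the normal data, sample size, resample count and seeds are arbitrary choices): the standard error of the sample mean is estimated by resampling the observed data with replacement, and compared with the classical estimate s/√n.

```python
import random
import statistics

rng = random.Random(11)
observed = [rng.gauss(10.0, 2.0) for _ in range(40)]   # the observed data

def bootstrap_se_of_mean(data, n_resamples, rng):
    """Estimate the standard error of the sample mean by resampling the
    observed data with replacement (the bootstrap)."""
    means = []
    for _ in range(n_resamples):
        resample = rng.choices(data, k=len(data))
        means.append(statistics.mean(resample))
    return statistics.pstdev(means)

se = bootstrap_se_of_mean(observed, 2000, rng)
classical = statistics.stdev(observed) / len(observed) ** 0.5
print(round(se, 3), round(classical, 3))
```

The two estimates should agree closely here; the point of the bootstrap is that it also applies to statistics with no simple standard-error formula.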

Characteristics of infinite sequences of events
Let {A_n, n = 1, 2, …} be a sequence of events, and define
lim sup A_n = ∩_n ∪_{m ≥ n} A_m = {A_n occurs for infinitely many n}
lim inf A_n = ∪_n ∩_{m ≥ n} A_m = {A_n occurs for all sufficiently large n}
Example: Consider a queueing system and let A_n = {the queueing system is empty at time n}.

The probability that an event occurs infinitely often: Borel-Cantelli's first lemma
Let {A_n, n = 1, 2, …} be an arbitrary sequence of events. Then
Σ_n P(A_n) < ∞ ⇒ P(A_n occurs for infinitely many n) = 0.
Example: Consider a queueing system and let A_n = {the queueing system is empty at time n}. Is the converse true?

The probability that an event occurs infinitely often: Borel-Cantelli's second lemma
Let {A_n, n = 1, 2, …} be a sequence of independent events. Then
Σ_n P(A_n) = ∞ ⇒ P(A_n occurs for infinitely many n) = 1.

Necessary and sufficient conditions for almost sure convergence of independent random variables
Let X_1, X_2, … be a sequence of independent random variables. Then X_n → 0 almost surely if and only if Σ_n P(|X_n| > ε) < ∞ for every ε > 0.

Exercises, Chapter VI: 6.1, 6.6, 6.9, 6.10, 6.17, 6.21, 6.25, 6.49