Chapter 5 Properties of a Random Sample
wenyan
Contents
Basic Concepts of Random Samples
Sums of Random Variables from a Random Sample
Sampling from the Normal Distribution
Order Statistics
Convergence Concepts
Generating a Random Sample
5.1 Basic Concepts of Random Samples
What is "iid"? iid = independent and identically distributed random variables with pdf or pmf f(x).
Is the distinction between an infinite population and a finite population important?
- sampling with replacement
- sampling without replacement
- simple random sampling
Example 5.1.3 (Finite population model)
With replacement, the draws are iid, so
$P(X_1 > 200, \dots, X_{10} > 200) = P(X_1 > 200) \cdots P(X_{10} > 200).$
Without replacement, the draws are not independent; letting Y denote the number of sampled values exceeding 200,
$P(X_1 > 200, \dots, X_{10} > 200) = P(Y = 10),$
which is computed from the hypergeometric distribution.
We can see that the two results above are very close.
Conclusion: when the population size is much larger than the sample size, sampling from a finite population is approximately independent, so the definition of a random sample is still tenable.
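The closeness of the two probabilities can be checked numerically. This is a minimal sketch of the comparison in Example 5.1.3; the population size N and the count K of values exceeding 200 are illustrative assumptions, not the textbook's numbers.

```python
# Compare P(all 10 draws exceed 200) with and without replacement.
# N, K are hypothetical: a population of N values, K of which exceed 200.
N, K, n = 1000, 500, 10

# With replacement: draws are iid, so the probabilities multiply.
p_with = (K / N) ** n

# Without replacement: product of conditional probabilities
# (the hypergeometric calculation P(Y = 10)).
p_without = 1.0
for i in range(n):
    p_without *= (K - i) / (N - i)

print(p_with, p_without)
```

For N much larger than n the two numbers differ by only a few percent, which is the point of the conclusion above.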
5.2 Sums of Random Variables from a Random Sample
Some basic concepts
Some basic definitions
Some basic tools
Basic concept: Definition 5.2.1
Let X1, ..., Xn be a random sample of size n from a population and let T(x1, ..., xn) be a real-valued or vector-valued function whose domain includes the sample space of (X1, ..., Xn). Then the random variable Y = T(X1, ..., Xn) is called a statistic. The probability distribution of a statistic Y is called the sampling distribution of Y.
The only restriction: a statistic cannot be a function of a parameter.
Basic definitions: Definition 5.2.2
The sample mean is the arithmetic average of the values in a random sample. It is usually denoted by
$\bar{X} = \frac{X_1 + \dots + X_n}{n} = \frac{1}{n}\sum_{i=1}^{n} X_i.$
Definition 5.2.3: The sample variance is the statistic defined by
$S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2.$
The sample standard deviation is $S = \sqrt{S^2}$.
Basic tools: Theorem 5.2.4
Let $x_1, \dots, x_n$ be any numbers and $\bar{x} = (x_1 + \dots + x_n)/n$. Then
a. $\min_a \sum_{i=1}^{n}(x_i - a)^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2$
b. $(n-1)s^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2 = \sum_{i=1}^{n} x_i^2 - n\bar{x}^2$
Proof: a. To prove part (a), just add and subtract $\bar{x}$ to get
$\sum_{i=1}^{n}(x_i - a)^2 = \sum_{i=1}^{n}(x_i - \bar{x} + \bar{x} - a)^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2 + n(\bar{x} - a)^2,$
since the cross term vanishes ($\sum_{i=1}^{n}(x_i - \bar{x}) = 0$). The right side is minimized at $a = \bar{x}$.
b. To prove part (b), just take a = 0 in the above equation:
$\sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2 + n\bar{x}^2,$
so $\sum_{i=1}^{n}(x_i - \bar{x})^2 = \sum_{i=1}^{n} x_i^2 - n\bar{x}^2$.
One significance of this identity is that it gives a simpler way to calculate the sample variance.
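The identity in part (b) is easy to verify numerically. A small sketch on a made-up data set (the values are illustrative):

```python
# Check Theorem 5.2.4(b): sum (x_i - xbar)^2 = sum x_i^2 - n * xbar^2.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # hypothetical data
n = len(xs)
xbar = sum(xs) / n

ss_direct = sum((x - xbar) ** 2 for x in xs)          # definitional form
ss_shortcut = sum(x * x for x in xs) - n * xbar ** 2  # shortcut form

s2 = ss_direct / (n - 1)  # sample variance S^2
print(ss_direct, ss_shortcut, s2)
```

The shortcut form needs only the running sums of $x_i$ and $x_i^2$, which is why it is convenient for hand or one-pass computation.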
Three useful facts for studying the distributional properties of statistics.
Theorem 5.2.6: Let X1, ..., Xn be a random sample from a population with mean $\mu$ and variance $\sigma^2 < \infty$. Then
a. $E\bar{X} = \mu$
b. $\operatorname{Var}\bar{X} = \sigma^2/n$
c. $ES^2 = \sigma^2$
Proof: a. To prove (a), let $g(X_i) = X_i/n$, so $Eg(X_i) = \mu/n$. Then, by Lemma 5.2.5,
$E\bar{X} = E\left(\sum_{i=1}^{n} \frac{X_i}{n}\right) = n \cdot \frac{\mu}{n} = \mu.$
b. To prove part (b), similarly to the above, $\operatorname{Var} g(X_i) = \sigma^2/n^2$, and by Lemma 5.2.5,
$\operatorname{Var}\bar{X} = \operatorname{Var}\left(\sum_{i=1}^{n} \frac{X_i}{n}\right) = n \cdot \frac{\sigma^2}{n^2} = \frac{\sigma^2}{n}.$
Here we recall Lemma 5.2.5: Let X1, ..., Xn be a random sample from a population and let g(x) be a function such that Eg(X1) and Var g(X1) exist. Then
$E\left(\sum_{i=1}^{n} g(X_i)\right) = n\,E\,g(X_1)$  (5.2.1)
Proof of equation (5.2.2) of Lemma 5.2.5:
$\operatorname{Var}\left(\sum_{i=1}^{n} g(X_i)\right) = E\left[\sum_{i=1}^{n} g(X_i) - n\,E\,g(X_1)\right]^2 = E\left[\sum_{i=1}^{n}\bigl(g(X_i) - E\,g(X_1)\bigr)\right]^2.$
Expanding the square gives n variance terms and n(n − 1) cross terms. Considering that the $X_i$ are independent, each cross term $E\bigl[(g(X_i) - E\,g(X_1))(g(X_j) - E\,g(X_1))\bigr] = 0$ for $i \ne j$, and we obtain (5.2.2):
$\operatorname{Var}\left(\sum_{i=1}^{n} g(X_i)\right) = n\operatorname{Var} g(X_1).$  (5.2.2)
Come back to Theorem 5.2.6. c. To prove part (c), we can use Theorem 5.2.4(b):
$ES^2 = E\left[\frac{1}{n-1}\left(\sum_{i=1}^{n} X_i^2 - n\bar{X}^2\right)\right] = \frac{1}{n-1}\left(n\,EX_1^2 - n\,E\bar{X}^2\right) = \frac{1}{n-1}\left(n(\sigma^2 + \mu^2) - n\left(\frac{\sigma^2}{n} + \mu^2\right)\right) = \sigma^2.$
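All three parts of Theorem 5.2.6 can be illustrated by simulation. A minimal Monte Carlo sketch; the population choice (Exponential with mean $\mu = 2$ and variance $\sigma^2 = 4$) is an illustrative assumption, since the theorem holds for any population with finite variance:

```python
import random

random.seed(0)

# Monte Carlo check of Theorem 5.2.6 with a hypothetical population:
# Exponential(rate = 1/2), so mu = 2 and sigma^2 = 4.
mu, sigma2, n, reps = 2.0, 4.0, 5, 100_000

means, variances = [], []
for _ in range(reps):
    xs = [random.expovariate(0.5) for _ in range(n)]
    xbar = sum(xs) / n
    means.append(xbar)
    variances.append(sum((x - xbar) ** 2 for x in xs) / (n - 1))

e_xbar = sum(means) / reps                               # ~ mu            (part a)
var_xbar = sum((m - e_xbar) ** 2 for m in means) / reps  # ~ sigma^2 / n   (part b)
e_s2 = sum(variances) / reps                             # ~ sigma^2       (part c)
print(e_xbar, var_xbar, e_s2)
```

Note that part (c) is exactly why the sample variance divides by n − 1 rather than n: the n − 1 divisor makes $S^2$ unbiased for $\sigma^2$.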
5.3 Sampling from the Normal Distribution
5.3.1 Properties of the Sample Mean and Variance
5.3.2 The Derived Distributions: Student's t and Snedecor's F
5.3.1 Properties of the Sample Mean and Variance
Theorem: Let X1, ..., Xn be a random sample from a $n(\mu, \sigma^2)$ distribution. Then
(a) $\bar{X}$ and $S^2$ are independent random variables,
(b) $\bar{X}$ has a $n(\mu, \sigma^2/n)$ distribution,
(c) $(n-1)S^2/\sigma^2$ has a chi-squared distribution with n − 1 degrees of freedom.
Proof: (b) is obvious, so we focus on (a) and (c). We assume without loss of generality that $\mu = 0$ and $\sigma^2 = 1$.
We first prove (a). Write
$S^2 = \frac{1}{n-1}\left[(X_1 - \bar{X})^2 + \sum_{i=2}^{n}(X_i - \bar{X})^2\right].$
Since $\sum_{i=1}^{n}(X_i - \bar{X}) = 0$, we have $X_1 - \bar{X} = -\sum_{i=2}^{n}(X_i - \bar{X})$. Thus $S^2$ can be written as a function only of $(X_2 - \bar{X}, \dots, X_n - \bar{X})$. If we can show that these random variables are jointly independent of $\bar{X}$, then we are done. So this is what we do now.
Consider the transformation
$y_1 = \bar{x}, \quad y_i = x_i - \bar{x} \ (i = 2, \dots, n).$
The above transformation is a linear transformation with a Jacobian equal to 1/n (so the inverse transformation, $x_1 = y_1 - \sum_{i=2}^{n} y_i$, $x_i = y_i + y_1$, has Jacobian n). We have
$f_{Y_1,\dots,Y_n}(y_1,\dots,y_n) = n\,(2\pi)^{-n/2}\exp\left(-\tfrac{1}{2}\left[\left(y_1 - \sum_{i=2}^{n} y_i\right)^2 + \sum_{i=2}^{n}(y_i + y_1)^2\right]\right),$
and the exponent simplifies to $-\frac{n y_1^2}{2} - \frac{1}{2}\left[\sum_{i=2}^{n} y_i^2 + \left(\sum_{i=2}^{n} y_i\right)^2\right]$, a sum of a function of $y_1$ alone and a function of $y_2, \dots, y_n$ alone.
Through Theorem 4.6.11 and Theorem 4.6.12 (p. 184), the joint pdf factors, so $Y_1 = \bar{X}$ is independent of $(Y_2, \dots, Y_n)$, and hence of $S^2$; (a) is proved.
In order to prove part (c), we introduce Lemma 5.3.2. We use the notation $\chi^2_p$ to denote a chi-squared random variable with p degrees of freedom.
a. If Z is a n(0,1) random variable, then $Z^2 \sim \chi^2_1$; that is, the square of a standard normal random variable is a chi-squared random variable. (p. 52, 2.1.7)
b. If X1, ..., Xn are independent and $X_i \sim \chi^2_{p_i}$, then $X_1 + \dots + X_n \sim \chi^2_{p_1 + \dots + p_n}$; that is, independent chi-squared variables add to a chi-squared variable, and the degrees of freedom also add. (p. 183, 4.6.8)
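Both parts of Lemma 5.3.2 can be seen in simulation, using the facts that $\chi^2_p$ has mean p and variance 2p. A minimal sketch (p = 4 is an arbitrary illustrative choice):

```python
import random

random.seed(1)

# Part a: Z^2 for Z ~ n(0,1) should behave like chi^2_1 (mean 1).
# Part b: a sum of p independent squared standard normals should behave
# like chi^2_p (mean p, variance 2p).
reps, p = 100_000, 4

z2 = [random.gauss(0.0, 1.0) ** 2 for _ in range(reps)]
chi_p = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(p))
         for _ in range(reps)]

mean_z2 = sum(z2) / reps                               # ~ 1  (mean of chi^2_1)
mean_p = sum(chi_p) / reps                             # ~ p  (mean of chi^2_p)
var_p = sum((c - mean_p) ** 2 for c in chi_p) / reps   # ~ 2p (variance of chi^2_p)
print(mean_z2, mean_p, var_p)
```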
5.3.2 The Derived Distributions: Student's t and Snedecor's F
In particular, in most practical cases the variance $\sigma^2$ is unknown. Thus, to get any idea of the variability of $\bar{X}$ (as an estimate of $\mu$), it is necessary to estimate this variance. Consider the quantity
$\frac{\bar{X} - \mu}{S/\sqrt{n}},$
which we can use to make inferences about $\mu$ when $\sigma$ is unknown. We have
$\frac{\bar{X} - \mu}{S/\sqrt{n}} = \frac{(\bar{X} - \mu)/(\sigma/\sqrt{n})}{\sqrt{S^2/\sigma^2}},$
so we can easily derive the distribution of the quantity: the numerator is n(0,1), and the denominator is $\sqrt{\chi^2_{n-1}/(n-1)}$, independent of the numerator.
Definition 5.3.4: Let X1, ..., Xn be a random sample from a $n(\mu, \sigma^2)$ distribution. The quantity $(\bar{X} - \mu)/(S/\sqrt{n})$ has Student's t distribution with n − 1 degrees of freedom. Equivalently, a random variable T has Student's t distribution with p degrees of freedom, and we write $T \sim t_p$, if it has pdf
$f_T(t) = \frac{\Gamma\left(\frac{p+1}{2}\right)}{\Gamma\left(\frac{p}{2}\right)} \frac{1}{\sqrt{p\pi}} \frac{1}{(1 + t^2/p)^{(p+1)/2}}, \quad -\infty < t < \infty.$
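The heavier tails of the t distribution, compared with the standard normal, show up directly when simulating the statistic above. A sketch with n = 5 (so T ~ t with 4 degrees of freedom); the choice of n and the cutoff 2.0 are illustrative:

```python
import math
import random

random.seed(2)

# Simulate T = (Xbar - mu) / (S / sqrt(n)) for normal samples and count
# how often |T| > 2; for t_4 this tail probability is about 0.116, well
# above the ~0.0455 that P(|Z| > 2) would give for a standard normal.
mu, sigma, n, reps = 0.0, 1.0, 5, 100_000

tail = 0
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
    t = (xbar - mu) / (s / math.sqrt(n))
    if abs(t) > 2.0:
        tail += 1

tail_t = tail / reps       # empirical P(|T| > 2)
tail_normal = 0.0455       # P(|Z| > 2) for n(0,1), for comparison
print(tail_t)
```

Dividing by the random S rather than the fixed $\sigma$ is exactly what inflates the tails.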
Definition 5.3.6: Let X1, ..., Xn be a random sample from a $n(\mu_X, \sigma_X^2)$ population, and let Y1, ..., Ym be a random sample from an independent $n(\mu_Y, \sigma_Y^2)$ population. The random variable
$F = \frac{S_X^2/\sigma_X^2}{S_Y^2/\sigma_Y^2}$
has Snedecor's F distribution with n − 1 and m − 1 degrees of freedom. Equivalently, the random variable F has the F distribution with p and q degrees of freedom if it has pdf
$f_F(x) = \frac{\Gamma\left(\frac{p+q}{2}\right)}{\Gamma\left(\frac{p}{2}\right)\Gamma\left(\frac{q}{2}\right)} \left(\frac{p}{q}\right)^{p/2} \frac{x^{p/2 - 1}}{\left(1 + \frac{p}{q}x\right)^{(p+q)/2}}, \quad x > 0.$
Kelker has shown that as long as the parent populations have a certain type of symmetry (spherical symmetry), the variance ratio will still have an F distribution.
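The variance-ratio construction can be simulated directly. A minimal sketch with two normal samples of size 11 sharing a common variance (so the $\sigma^2$ factors cancel and F ~ F(10, 10), whose mean is q/(q − 2) = 10/8 = 1.25); the sample sizes are illustrative:

```python
import random

random.seed(3)

# Simulate F = S_X^2 / S_Y^2 for two independent normal samples with the
# same variance; F should follow an F(n-1, m-1) distribution.
n, m, reps = 11, 11, 50_000

def sample_var(xs):
    """Sample variance with the n-1 divisor."""
    xbar = sum(xs) / len(xs)
    return sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1)

ratios = []
for _ in range(reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    ys = [random.gauss(0.0, 1.0) for _ in range(m)]
    ratios.append(sample_var(xs) / sample_var(ys))

mean_f = sum(ratios) / reps  # F(10,10) has mean 10/8 = 1.25
print(mean_f)
```

The mean exceeding 1 even under equal variances reflects the skewness of the F distribution, which matters when interpreting variance-ratio tests.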
5.4 Order Statistics
Definition 5.4.1: The order statistics of a random sample X1, ..., Xn are the sample values placed in ascending order. They are denoted by X(1), ..., X(n).
Note the difference between the sample median and the sample mean.
Definition: The notation {b}, when appearing in a subscript, is defined to be the number b rounded to the nearest integer in the usual way. More precisely, if i is an integer and i − 0.5 ≤ b < i + 0.5, then {b} = i.
Theorem: Let X1, ..., Xn be a random sample from a discrete distribution with pmf $f_X(x_i) = p_i$, where x1 < x2 < ... are the possible values of X in ascending order. Define
$P_0 = 0,\quad P_1 = p_1,\quad P_2 = p_1 + p_2,\quad \dots,\quad P_i = p_1 + p_2 + \dots + p_i,\quad \dots$
Let X(1), ..., X(n) denote the order statistics from the sample. Then
$P(X_{(j)} \le x_i) = \sum_{k=j}^{n} \binom{n}{k} P_i^k (1 - P_i)^{n-k}$  (5.4.2)
and
$P(X_{(j)} = x_i) = \sum_{k=j}^{n} \binom{n}{k} \left[P_i^k (1 - P_i)^{n-k} - P_{i-1}^k (1 - P_{i-1})^{n-k}\right].$  (5.4.3)
Theorem: Let X(1), ..., X(n) denote the order statistics of a random sample, X1, ..., Xn, from a continuous population with cdf $F_X(x)$ and pdf $f_X(x)$. Then the pdf of X(j) is
$f_{X_{(j)}}(x) = \frac{n!}{(j-1)!\,(n-j)!}\, f_X(x)\, [F_X(x)]^{j-1} [1 - F_X(x)]^{n-j}.$
Theorem: Let X(1), ..., X(n) denote the order statistics of a random sample, X1, ..., Xn, from a continuous population with cdf $F_X(x)$ and pdf $f_X(x)$. Then the joint pdf of X(i) and X(j), 1 ≤ i < j ≤ n, is
$f_{X_{(i)},X_{(j)}}(u,v) = \frac{n!}{(i-1)!\,(j-1-i)!\,(n-j)!}\, f_X(u) f_X(v)\, [F_X(u)]^{i-1} [F_X(v) - F_X(u)]^{j-1-i} [1 - F_X(v)]^{n-j}, \quad u < v.$
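The marginal pdf of an order statistic is easy to check empirically in the Uniform(0,1) case, where $X_{(j)}$ is Beta(j, n − j + 1) with mean j/(n + 1). A minimal sketch for the sample maximum (j = n) with an illustrative n = 5:

```python
import random

random.seed(4)

# For a Uniform(0,1) sample of size n, the maximum X_(n) has
# mean n/(n+1) and cdf F(x)^n = x^n on (0,1).
n, reps = 5, 100_000

maxima = [max(random.random() for _ in range(n)) for _ in range(reps)]

mean_max = sum(maxima) / reps  # should be near n/(n+1) = 5/6

# Spot-check the cdf of the maximum at x = 0.5: P(X_(5) < 0.5) = 0.5^5.
p_below_half = sum(1 for v in maxima if v < 0.5) / reps
print(mean_max, p_below_half)
```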
5.5 Convergence Concepts
5.5.1 Convergence in Probability
Definition: A sequence of random variables, X1, X2, ..., converges in probability to a random variable X if, for every $\varepsilon > 0$,
$\lim_{n\to\infty} P(|X_n - X| \ge \varepsilon) = 0.$
Theorem 5.5.2 (Weak Law of Large Numbers)
Let X1, X2, ... be iid random variables with $EX_i = \mu$ and $\operatorname{Var} X_i = \sigma^2 < \infty$. Define $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$. Then, for every $\varepsilon > 0$,
$\lim_{n\to\infty} P(|\bar{X}_n - \mu| < \varepsilon) = 1;$
that is, $\bar{X}_n$ converges in probability to $\mu$.
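The Weak Law can be illustrated by estimating $P(|\bar{X}_n - \mu| > \varepsilon)$ for small and large n. A sketch with a Uniform(0,1) population ($\mu = 0.5$); the sample sizes and $\varepsilon$ are illustrative choices:

```python
import random

random.seed(5)

# Estimate P(|Xbar_n - mu| > eps) for Uniform(0,1) data (mu = 0.5);
# the WLLN says this probability goes to 0 as n grows.
mu, eps, reps = 0.5, 0.05, 2_000

def p_deviation(n):
    """Monte Carlo estimate of P(|Xbar_n - mu| > eps)."""
    count = 0
    for _ in range(reps):
        xbar = sum(random.random() for _ in range(n)) / n
        if abs(xbar - mu) > eps:
            count += 1
    return count / reps

p_small_n = p_deviation(10)    # deviations are still common at n = 10
p_large_n = p_deviation(1000)  # and essentially vanish by n = 1000
print(p_small_n, p_large_n)
```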
5.5.2 Almost Sure Convergence
Definition: A sequence of random variables, X1, X2, ..., converges almost surely to a random variable X if, for every $\varepsilon > 0$,
$P\left(\lim_{n\to\infty} |X_n - X| < \varepsilon\right) = 1.$
See Example 5.5.8, which shows the difference between convergence in probability and almost sure convergence.
Theorem 5.5.9 (Strong Law of Large Numbers): Let X1, X2, ... be iid random variables with $EX_i = \mu$ and $\operatorname{Var} X_i = \sigma^2 < \infty$, and define $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$. Then, for every $\varepsilon > 0$,
$P\left(\lim_{n\to\infty} |\bar{X}_n - \mu| < \varepsilon\right) = 1;$
that is, $\bar{X}_n$ converges almost surely to $\mu$.