Week 1 – Parameter, Statistic and Random Samples
A parameter is a number that describes the population. It is a fixed number, but in practice we do not know its value.


Slide 1: Parameter, Statistic and Random Samples
A parameter is a number that describes the population. It is a fixed number, but in practice we do not know its value. A statistic is a function of the sample data, i.e., a quantity whose value can be calculated from the sample data. It is a random variable with a distribution function. Statistics are used to make inferences about unknown population parameters. The random variables X1, X2, …, Xn are said to form a (simple) random sample of size n if the Xi's are independent random variables and each Xi has the same probability distribution. We say the Xi's are i.i.d. (independent and identically distributed).

Slide 2: Example – Sample Mean and Variance
Suppose X1, X2, …, Xn is a random sample of size n from a population with mean μ and variance σ². The sample mean is defined as

\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i

The sample variance is defined as

S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2
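These two definitions (displayed as formulas on the slide) are easy to check numerically. A minimal Python sketch, using a small hypothetical data set, confirming that the standard-library `statistics` module uses the same n−1 divisor for the sample variance:

```python
import statistics

data = [4.2, 3.9, 5.1, 4.7, 4.4]  # a small hypothetical sample

n = len(data)
xbar = sum(data) / n                               # sample mean
s2 = sum((x - xbar) ** 2 for x in data) / (n - 1)  # sample variance, n-1 divisor

# The statistics module implements exactly these definitions
assert abs(xbar - statistics.mean(data)) < 1e-12
assert abs(s2 - statistics.variance(data)) < 1e-12
print(xbar, s2)
```

Dividing by n − 1 rather than n is what makes S² an unbiased estimator of σ², a point the later slides rely on.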

Slide 3: Goals of Statistics
- Estimate unknown parameters μ and σ².
- Measure the errors of these estimates.
- Test whether the sample gives evidence that parameters are (or are not) equal to a certain value.

Slide 4: Sampling Distribution of a Statistic
The sampling distribution of a statistic is the distribution of values taken by the statistic in all possible samples of the same size from the same population. The distribution function of a statistic is NOT the same as the distribution of the original population that generated the original sample. The form of the theoretical sampling distribution of a statistic will depend upon the distribution of the observable random variables in the sample.
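The idea of a sampling distribution can be illustrated by simulation (an illustrative addition, not part of the slides): repeatedly draw samples of the same size from one population and look at the spread of the resulting sample means, which is much smaller than the spread of the population itself.

```python
import random
import statistics

random.seed(0)
mu, sigma, n, reps = 10.0, 2.0, 25, 5000

# Draw many samples of size n from the same N(mu, sigma^2) population and
# record each sample mean; that collection approximates the sampling
# distribution of the sample mean.
means = [statistics.mean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(reps)]

print(round(statistics.mean(means), 2))   # ≈ mu = 10
print(round(statistics.stdev(means), 2))  # ≈ sigma / sqrt(n) = 0.4
```

The simulated standard deviation of the means comes out near σ/√n = 2/5 = 0.4, previewing the claim on Slide 6.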

Slide 5: Sampling from a Normal Population
Often we assume the random sample X1, X2, …, Xn is from a normal population with unknown mean μ and variance σ². Suppose we are interested in estimating μ and testing whether it is equal to a certain value. For this we need to know the probability distribution of the estimator of μ.

Slide 6: Claim
Suppose X1, X2, …, Xn are i.i.d. normal random variables with unknown mean μ and variance σ². Then

\bar{X} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right)

Proof: a standard argument uses moment generating functions: M_{\bar{X}}(t) = \prod_{i=1}^{n} M_{X_i}(t/n) = e^{\mu t + \sigma^2 t^2 / (2n)}, which is the MGF of a N(μ, σ²/n) random variable.

Slide 7: Recall – The Chi-Square Distribution
If Z ~ N(0,1), then X = Z² has a chi-square distribution with parameter 1, i.e., X ~ χ²(1). This can be proved using the change-of-variable theorem for univariate random variables. The moment generating function of X is

M_X(t) = (1 - 2t)^{-1/2}, \quad t < \tfrac{1}{2}

If X_i ~ χ²(n_i), i = 1, …, k, all independent, then

\sum_{i=1}^{k} X_i \sim \chi^2\!\left(\sum_{i=1}^{k} n_i\right)

Proof: multiply the moment generating functions of the independent summands.
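A quick simulation check of the facts above (an illustrative addition): a sum of k squared independent standard normals should have the chi-square(k) mean k and variance 2k.

```python
import random
import statistics

random.seed(1)
k, reps = 3, 20000

# Sum of k squared independent standard normals ~ chi-square(k)
draws = [sum(random.gauss(0, 1) ** 2 for _ in range(k)) for _ in range(reps)]

print(round(statistics.mean(draws), 1))      # ≈ k = 3
print(round(statistics.variance(draws), 1))  # ≈ 2k = 6
```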

Slide 8: Claim
Suppose X1, X2, …, Xn are i.i.d. normal random variables with mean μ and variance σ². Then

Z_i = \frac{X_i - \mu}{\sigma}, \quad i = 1, 2, \ldots, n

are independent standard normal random variables, and

\sum_{i=1}^{n} Z_i^2 = \sum_{i=1}^{n} \frac{(X_i - \mu)^2}{\sigma^2} \sim \chi^2(n)

Proof: each Z_i is a linear transformation of a normal random variable, so Z_i ~ N(0,1); independence of the X_i's carries over, and the chi-square result follows from Slide 7.

Slide 9: t Distribution
Suppose Z ~ N(0,1) independent of X ~ χ²(n). Then

T = \frac{Z}{\sqrt{X/n}} \sim t(n)
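The defining construction T = Z/√(X/n) can be simulated directly (an illustrative sketch; the chi-square draw is built from squared normals as on Slide 7). A t(df) random variable has mean 0 and variance df/(df − 2) for df > 2, which the simulation should reproduce.

```python
import random
import statistics

random.seed(2)
df, reps = 10, 20000

def chi2(k):
    """One draw from chi-square(k), as a sum of k squared N(0,1) draws."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(k))

# T = Z / sqrt(X/df), with Z ~ N(0,1) independent of X ~ chi-square(df)
draws = [random.gauss(0, 1) / (chi2(df) / df) ** 0.5 for _ in range(reps)]

print(round(statistics.mean(draws), 2))      # ≈ 0
print(round(statistics.variance(draws), 2))  # ≈ df/(df-2) = 1.25
```

The variance exceeding 1 reflects the heavier tails of the t distribution relative to the standard normal.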

Slide 10: Claim
Suppose X1, X2, …, Xn are i.i.d. normal random variables with mean μ and variance σ². Then X̄ and S² are independent,

\frac{(n-1)S^2}{\sigma^2} \sim \chi^2(n-1)

and therefore

\frac{\bar{X} - \mu}{S/\sqrt{n}} \sim t(n-1)
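The chi-square claim for (n − 1)S²/σ² can be checked by simulation (an illustrative addition, assuming the standard form of the result stated above): the simulated mean and variance should match those of a χ²(n − 1) distribution.

```python
import random
import statistics

random.seed(3)
mu, sigma, n, reps = 5.0, 2.0, 8, 10000

# For each replicate: draw a normal sample, compute S^2, form (n-1)S^2/sigma^2
vals = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    vals.append((n - 1) * statistics.variance(sample) / sigma ** 2)

print(round(statistics.mean(vals), 1))      # ≈ n - 1 = 7 (chi-square mean)
print(round(statistics.variance(vals), 1))  # ≈ 2(n - 1) = 14 (chi-square variance)
```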

Slide 11: F Distribution
Suppose X ~ χ²(n) independent of Y ~ χ²(m). Then

F = \frac{X/n}{Y/m} \sim F(n, m)

Slide 12: Properties of the F Distribution
The F-distribution is a right-skewed distribution. If F ~ F(n, m), then 1/F ~ F(m, n), so lower-tail percentiles can be obtained from upper-tail ones: F_{1-α}(n, m) = 1 / F_{α}(m, n). Table 7 on page 796 can be used to find percentiles of the F-distribution. Example…
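Both the ratio construction from Slide 11 and the reciprocal property can be verified by simulation (an illustrative sketch), using the fact that E[F(n, m)] = m/(m − 2) for m > 2.

```python
import random
import statistics

random.seed(4)
n, m, reps = 5, 12, 20000

def chi2(k):
    """One draw from chi-square(k), as a sum of k squared N(0,1) draws."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(k))

# F(n, m) as the ratio of two independent chi-squares, each divided by its df
f_draws = [(chi2(n) / n) / (chi2(m) / m) for _ in range(reps)]
recip = [1 / f for f in f_draws]

print(round(statistics.mean(f_draws), 2))  # ≈ m/(m-2) = 1.2
print(round(statistics.mean(recip), 2))    # ≈ n/(n-2) ≈ 1.67, matching F(m, n)
```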

Slide 13: The Central Limit Theorem
Let X1, X2, … be a sequence of i.i.d. random variables with E(Xi) = μ < ∞ and Var(Xi) = σ² < ∞. Let S_n = X1 + ⋯ + Xn. Then, for −∞ < x < ∞,

\lim_{n \to \infty} P\!\left(\frac{S_n - n\mu}{\sigma\sqrt{n}} \le x\right) = \Phi(x)

where Z is a standard normal random variable and Φ(z) is the CDF of the standard normal distribution. This is equivalent to saying that (S_n − nμ)/(σ√n) converges in distribution to Z ~ N(0,1). Also,

\frac{\sqrt{n}\,(\bar{X} - \mu)}{\sigma} \xrightarrow{d} Z \sim N(0,1)

i.e., the standardized sample mean converges in distribution to Z ~ N(0,1).
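A simulation sketch of the CLT (an illustrative addition): even for a skewed population such as Exponential(1), the standardized sum looks approximately standard normal for moderate n.

```python
import random
import statistics

random.seed(5)
n, reps = 50, 20000
mu = sigma = 1.0  # Exponential(1) has mean 1 and standard deviation 1

# Standardize the sum of n exponential draws; by the CLT it is approx N(0, 1)
zs = []
for _ in range(reps):
    s = sum(random.expovariate(1.0) for _ in range(n))
    zs.append((s - n * mu) / (sigma * n ** 0.5))

print(round(statistics.mean(zs), 2))   # ≈ 0
print(round(statistics.stdev(zs), 2))  # ≈ 1
within_one = sum(abs(z) <= 1 for z in zs) / reps
print(round(within_one, 2))            # ≈ 0.68, as for a standard normal
```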

Slide 14: Example
Suppose X1, X2, … are i.i.d. random variables, each with the Poisson(3) distribution, so E(Xi) = Var(Xi) = 3. The CLT says that

\frac{\sqrt{n}\,(\bar{X} - 3)}{\sqrt{3}} \xrightarrow{d} N(0,1) \quad \text{as } n \to \infty

Slide 15: Examples
A very common application of the CLT is the normal approximation to the binomial distribution. Suppose X1, X2, … are i.i.d. random variables, each with the Bernoulli(p) distribution, so E(Xi) = p and Var(Xi) = p(1 − p). The CLT says that

\frac{\sqrt{n}\,(\bar{X} - p)}{\sqrt{p(1-p)}} \xrightarrow{d} N(0,1) \quad \text{as } n \to \infty

Let Yn = X1 + … + Xn; then Yn has a Binomial(n, p) distribution. So for large n, Yn is approximately N(np, np(1 − p)).
- Suppose we flip a biased coin 1000 times, and the probability of heads on any one toss is 0.6. Find the probability of getting at least 550 heads.
- Suppose we toss a coin 100 times and observe 60 heads. Is the coin fair?
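The two coin problems can be worked with the normal approximation using only the standard library (an illustrative solution; a continuity correction of 0.5 is applied in each case):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Problem 1: 1000 flips with p = 0.6; probability of at least 550 heads
n, p = 1000, 0.6
mean, sd = n * p, sqrt(n * p * (1 - p))
p_at_least_550 = 1 - phi((549.5 - mean) / sd)  # 0.5 continuity correction
print(round(p_at_least_550, 3))  # ≈ 0.999: at least 550 heads is very likely

# Problem 2: 100 tosses, 60 heads observed; how surprising if the coin is fair?
n, p = 100, 0.5
mean, sd = n * p, sqrt(n * p * (1 - p))
p_at_least_60 = 1 - phi((59.5 - mean) / sd)
print(round(p_at_least_60, 3))  # ≈ 0.029: unlikely under a fair coin
```

In the second problem, the approximate probability of seeing 60 or more heads from a fair coin is about 0.03, which at the usual 5% level is evidence against the coin being fair.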